Date: Tue, 7 Jun 2022 09:50:43 -0700
In-Reply-To: <20220607165105.639716-1-kaleshsingh@google.com>
Message-Id: <20220607165105.639716-2-kaleshsingh@google.com>
Subject: [PATCH v3 1/5] KVM: arm64: Factor out common stack unwinding logic
From: Kalesh Singh <kaleshsingh@google.com>
To: mark.rutland@arm.com, broonie@kernel.org, maz@kernel.org
Cc: will@kernel.org, qperret@google.com, tabba@google.com, surenb@google.com,
 tjmercier@google.com, kernel-team@android.com, Kalesh Singh, James Morse,
 Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Masami Hiramatsu,
 Alexei Starovoitov, "Madhavan T. Venkataraman", Peter Zijlstra, Andrew Jones,
 Marco Elver, Kefeng Wang, Zenghui Yu, Keir Fraser, Ard Biesheuvel,
 Oliver Upton, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org

Factor out the stack unwinding logic common to both the host kernel and
the nVHE hypervisor into __unwind_next().
This allows for reuse in the nVHE hypervisor stack unwinding (later in
this series).

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
---

Changes in v3:
  - Add Mark's Reviewed-by tag

 arch/arm64/kernel/stacktrace.c | 36 +++++++++++++++++++++++-----------
 1 file changed, 25 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 0467cb79f080..ee60c279511c 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -81,23 +81,19 @@ NOKPROBE_SYMBOL(unwind_init);
  * records (e.g. a cycle), determined based on the location and fp value of A
  * and the location (but not the fp value) of B.
  */
-static int notrace unwind_next(struct task_struct *tsk,
-			       struct unwind_state *state)
+static int notrace __unwind_next(struct task_struct *tsk,
+				 struct unwind_state *state,
+				 struct stack_info *info)
 {
 	unsigned long fp = state->fp;
-	struct stack_info info;
-
-	/* Final frame; nothing to unwind */
-	if (fp == (unsigned long)task_pt_regs(tsk)->stackframe)
-		return -ENOENT;
 
 	if (fp & 0x7)
 		return -EINVAL;
 
-	if (!on_accessible_stack(tsk, fp, 16, &info))
+	if (!on_accessible_stack(tsk, fp, 16, info))
 		return -EINVAL;
 
-	if (test_bit(info.type, state->stacks_done))
+	if (test_bit(info->type, state->stacks_done))
 		return -EINVAL;
 
 	/*
@@ -113,7 +109,7 @@ static int notrace unwind_next(struct task_struct *tsk,
 	 * stack to another, it's never valid to unwind back to that first
 	 * stack.
 	 */
-	if (info.type == state->prev_type) {
+	if (info->type == state->prev_type) {
 		if (fp <= state->prev_fp)
 			return -EINVAL;
 	} else {
@@ -127,7 +123,25 @@ static int notrace unwind_next(struct task_struct *tsk,
 	state->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp));
 	state->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp + 8));
 	state->prev_fp = fp;
-	state->prev_type = info.type;
+	state->prev_type = info->type;
+
+	return 0;
+}
+NOKPROBE_SYMBOL(__unwind_next);
+
+static int notrace unwind_next(struct task_struct *tsk,
+			       struct unwind_state *state)
+{
+	struct stack_info info;
+	int err;
+
+	/* Final frame; nothing to unwind */
+	if (state->fp == (unsigned long)task_pt_regs(tsk)->stackframe)
+		return -ENOENT;
+
+	err = __unwind_next(tsk, state, &info);
+	if (err)
+		return err;
 
 	state->pc = ptrauth_strip_insn_pac(state->pc);
 
-- 
2.36.1.255.ge46751e96f-goog
Date: Tue, 7 Jun 2022 09:50:44 -0700
In-Reply-To: <20220607165105.639716-1-kaleshsingh@google.com>
Message-Id: <20220607165105.639716-3-kaleshsingh@google.com>
Subject: [PATCH v3 2/5] KVM: arm64: Compile stacktrace.nvhe.o
From: Kalesh Singh <kaleshsingh@google.com>
To: mark.rutland@arm.com, broonie@kernel.org, maz@kernel.org
Cc: will@kernel.org, qperret@google.com, tabba@google.com, surenb@google.com,
 tjmercier@google.com, kernel-team@android.com, Kalesh Singh, James Morse,
 Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Masami Hiramatsu,
 Alexei Starovoitov, "Madhavan T. Venkataraman", Andrew Jones, Kefeng Wang,
 Zenghui Yu, Keir Fraser, Ard Biesheuvel, Oliver Upton,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 linux-kernel@vger.kernel.org

Recompile stack unwinding code for use with the nVHE hypervisor. This is
a preparatory patch that will allow reusing most of the kernel unwinding
logic in the nVHE hypervisor.

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
---

Changes in v3:
  - Add Mark's Reviewed-by tag
Changes in v2:
  - Split out refactoring of common unwinding logic into a separate patch,
    per Mark Brown

 arch/arm64/include/asm/stacktrace.h | 18 +++++++++-----
 arch/arm64/kernel/stacktrace.c      | 37 ++++++++++++++++-------------
 arch/arm64/kvm/hyp/nvhe/Makefile    |  3 ++-
 3 files changed, 35 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index aec9315bf156..f5af9a94c5a6 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -16,12 +16,14 @@
 #include
 
 enum stack_type {
-	STACK_TYPE_UNKNOWN,
+#ifndef __KVM_NVHE_HYPERVISOR__
 	STACK_TYPE_TASK,
 	STACK_TYPE_IRQ,
 	STACK_TYPE_OVERFLOW,
 	STACK_TYPE_SDEI_NORMAL,
 	STACK_TYPE_SDEI_CRITICAL,
+#endif /* !__KVM_NVHE_HYPERVISOR__ */
+	STACK_TYPE_UNKNOWN,
 	__NR_STACK_TYPES
 };
 
@@ -31,11 +33,6 @@ struct stack_info {
 	enum stack_type type;
 };
 
-extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
-			   const char *loglvl);
-
-DECLARE_PER_CPU(unsigned long *, irq_stack_ptr);
-
 static inline bool on_stack(unsigned long sp, unsigned long size,
			    unsigned long low, unsigned long high,
			    enum stack_type type, struct stack_info *info)
@@ -54,6 +51,12 @@ static inline bool on_stack(unsigned long sp, unsigned long size,
 	return true;
 }
 
+#ifndef __KVM_NVHE_HYPERVISOR__
+extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
+			   const char *loglvl);
+
+DECLARE_PER_CPU(unsigned long *, irq_stack_ptr);
+
 static inline bool on_irq_stack(unsigned long sp, unsigned long size,
				struct stack_info *info)
 {
@@ -88,6 +91,7 @@ static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
 static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
			struct stack_info *info) { return false; }
 #endif
+#endif /* !__KVM_NVHE_HYPERVISOR__ */
 
 
 /*
@@ -101,6 +105,7 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
 	if (info)
 		info->type = STACK_TYPE_UNKNOWN;
 
+#ifndef __KVM_NVHE_HYPERVISOR__
 	if (on_task_stack(tsk, sp, size, info))
 		return true;
 	if (tsk != current || preemptible())
@@ -111,6 +116,7 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
 		return true;
 	if (on_sdei_stack(sp, size, info))
 		return true;
+#endif /* !__KVM_NVHE_HYPERVISOR__ */
 
 	return false;
 }
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index ee60c279511c..a84e38d41d38 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -129,6 +129,26 @@ static int notrace __unwind_next(struct task_struct *tsk,
 }
 NOKPROBE_SYMBOL(__unwind_next);
 
+static int notrace unwind_next(struct task_struct *tsk,
+			       struct unwind_state *state);
+
+static void notrace unwind(struct task_struct *tsk,
+			   struct unwind_state *state,
+			   stack_trace_consume_fn consume_entry, void *cookie)
+{
+	while (1) {
+		int ret;
+
+		if (!consume_entry(cookie, state->pc))
+			break;
+		ret = unwind_next(tsk, state);
+		if (ret < 0)
+			break;
+	}
+}
+NOKPROBE_SYMBOL(unwind);
+
+#ifndef __KVM_NVHE_HYPERVISOR__
 static int notrace unwind_next(struct task_struct *tsk,
			       struct unwind_state *state)
 {
@@ -171,22 +191,6 @@ static int notrace unwind_next(struct task_struct *tsk,
 }
 NOKPROBE_SYMBOL(unwind_next);
 
-static void notrace unwind(struct task_struct *tsk,
-			   struct unwind_state *state,
-			   stack_trace_consume_fn consume_entry, void *cookie)
-{
-	while (1) {
-		int ret;
-
-		if (!consume_entry(cookie, state->pc))
-			break;
-		ret = unwind_next(tsk, state);
-		if (ret < 0)
-			break;
-	}
-}
-NOKPROBE_SYMBOL(unwind);
-
 static bool dump_backtrace_entry(void *arg, unsigned long where)
 {
 	char *loglvl = arg;
@@ -238,3 +242,4 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
 
 	unwind(task, &state, consume_entry, cookie);
 }
+#endif /* !__KVM_NVHE_HYPERVISOR__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index f9fe4dc21b1f..c0ff0d6fc403 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -14,7 +14,8 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs))
 
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
	 hyp-main.o hyp-smp.o psci-relay.o early_alloc.o page_alloc.o \
-	 cache.o setup.o mm.o mem_protect.o sys_regs.o pkvm.o
+	 cache.o setup.o mm.o mem_protect.o sys_regs.o pkvm.o \
+	 ../../../kernel/stacktrace.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
	 ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
 obj-$(CONFIG_DEBUG_LIST) += list_debug.o
-- 
2.36.1.255.ge46751e96f-goog
Date: Tue, 7 Jun 2022 09:50:45 -0700
In-Reply-To: <20220607165105.639716-1-kaleshsingh@google.com>
Message-Id: <20220607165105.639716-4-kaleshsingh@google.com>
Subject: [PATCH v3 3/5] KVM: arm64: Add hypervisor overflow stack
From: Kalesh Singh <kaleshsingh@google.com>
To: mark.rutland@arm.com, broonie@kernel.org, maz@kernel.org
Cc: will@kernel.org, qperret@google.com, tabba@google.com, surenb@google.com,
 tjmercier@google.com, kernel-team@android.com, Kalesh Singh, James Morse,
 Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Masami Hiramatsu,
 Alexei Starovoitov, "Madhavan T. Venkataraman", Peter Zijlstra, Andrew Jones,
 Zenghui Yu, Kefeng Wang, Keir Fraser, Ard Biesheuvel, Oliver Upton,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 linux-kernel@vger.kernel.org

Allocate and switch to a 16-byte aligned secondary stack on overflow.
This provides stack space to better handle overflows, and is used in a
subsequent patch to dump the hypervisor stacktrace.
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---

 arch/arm64/kernel/stacktrace.c | 3 +++
 arch/arm64/kvm/hyp/nvhe/host.S | 9 ++-------
 2 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index a84e38d41d38..f346b4c66f1c 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -242,4 +242,7 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
 
 	unwind(task, &state, consume_entry, cookie);
 }
+#else /* __KVM_NVHE_HYPERVISOR__ */
+DEFINE_PER_CPU(unsigned long [PAGE_SIZE/sizeof(long)], overflow_stack)
+	__aligned(16);
 #endif /* !__KVM_NVHE_HYPERVISOR__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index ea6a397b64a6..4e3032a244e1 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -177,13 +177,8 @@ SYM_FUNC_END(__host_hvc)
	b	hyp_panic
 
 .L__hyp_sp_overflow\@:
-	/*
-	 * Reset SP to the top of the stack, to allow handling the hyp_panic.
-	 * This corrupts the stack but is ok, since we won't be attempting
-	 * any unwinding here.
-	 */
-	ldr_this_cpu x0, kvm_init_params + NVHE_INIT_STACK_HYP_VA, x1
-	mov	sp, x0
+	/* Switch to the overflow stack */
+	adr_this_cpu sp, overflow_stack + PAGE_SIZE, x0
 
	b	hyp_panic_bad_stack
	ASM_BUG()
-- 
2.36.1.255.ge46751e96f-goog
Date: Tue, 7 Jun 2022 09:50:46 -0700
In-Reply-To: <20220607165105.639716-1-kaleshsingh@google.com>
Message-Id: <20220607165105.639716-5-kaleshsingh@google.com>
Subject: [PATCH v3 4/5] KVM: arm64: Allocate shared stacktrace pages
From: Kalesh Singh <kaleshsingh@google.com>
To: mark.rutland@arm.com, broonie@kernel.org, maz@kernel.org
Cc: will@kernel.org, qperret@google.com, tabba@google.com, surenb@google.com,
 tjmercier@google.com, kernel-team@android.com, Kalesh Singh, James Morse,
 Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Masami Hiramatsu,
 Alexei Starovoitov, "Madhavan T. Venkataraman", Peter Zijlstra, Andrew Jones,
 Zenghui Yu, Keir Fraser, Kefeng Wang, Ard Biesheuvel, Oliver Upton,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 linux-kernel@vger.kernel.org

The nVHE hypervisor can use this shared area to dump its stacktrace
addresses on hyp_panic(). Symbolization and printing the stacktrace can
then be handled by the host in EL1 (done in a later patch in this series).

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---

 arch/arm64/include/asm/kvm_asm.h |  1 +
 arch/arm64/kvm/arm.c             | 34 ++++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/setup.c  | 11 +++++++++++
 3 files changed, 46 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 2e277f2ed671..ad31ac68264f 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -174,6 +174,7 @@ struct kvm_nvhe_init_params {
	unsigned long hcr_el2;
	unsigned long vttbr;
	unsigned long vtcr;
+	unsigned long stacktrace_hyp_va;
 };
 
 /* Translate a kernel address @ptr into its equivalent linear mapping */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 400bb0fe2745..c0a936a7623d 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -50,6 +50,7 @@ DEFINE_STATIC_KEY_FALSE(kvm_protected_mode_initialized);
 DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector);
 
 static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
+DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stacktrace_page);
 unsigned long kvm_arm_hyp_percpu_base[NR_CPUS];
 DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
 
@@ -1554,6 +1555,7 @@ static void cpu_prepare_hyp_mode(int cpu)
	tcr |= (idmap_t0sz & GENMASK(TCR_TxSZ_WIDTH - 1, 0)) << TCR_T0SZ_OFFSET;
	params->tcr_el2 = tcr;
 
+	params->stacktrace_hyp_va = kern_hyp_va(per_cpu(kvm_arm_hyp_stacktrace_page, cpu));
	params->pgd_pa = kvm_mmu_get_httbr();
	if (is_protected_kvm_enabled())
		params->hcr_el2 = HCR_HOST_NVHE_PROTECTED_FLAGS;
@@ -1845,6 +1847,7 @@ static void teardown_hyp_mode(void)
	free_hyp_pgds();
	for_each_possible_cpu(cpu) {
		free_page(per_cpu(kvm_arm_hyp_stack_page, cpu));
+		free_page(per_cpu(kvm_arm_hyp_stacktrace_page, cpu));
		free_pages(kvm_arm_hyp_percpu_base[cpu], nvhe_percpu_order());
	}
 }
@@ -1936,6 +1939,23 @@ static int init_hyp_mode(void)
		per_cpu(kvm_arm_hyp_stack_page, cpu) = stack_page;
	}
 
+	/*
+	 * Allocate stacktrace pages for Hypervisor-mode.
+	 * This is used by the hypervisor to share its stacktrace
+	 * with the host on a hyp_panic().
+	 */
+	for_each_possible_cpu(cpu) {
+		unsigned long stacktrace_page;
+
+		stacktrace_page = __get_free_page(GFP_KERNEL);
+		if (!stacktrace_page) {
+			err = -ENOMEM;
+			goto out_err;
+		}
+
+		per_cpu(kvm_arm_hyp_stacktrace_page, cpu) = stacktrace_page;
+	}
+
	/*
	 * Allocate and initialize pages for Hypervisor-mode percpu regions.
	 */
@@ -2043,6 +2063,20 @@ static int init_hyp_mode(void)
		params->stack_hyp_va = hyp_addr + (2 * PAGE_SIZE);
	}
 
+	/*
+	 * Map the hyp stacktrace pages.
+	 */
+	for_each_possible_cpu(cpu) {
+		char *stacktrace_page = (char *)per_cpu(kvm_arm_hyp_stacktrace_page, cpu);
+
+		err = create_hyp_mappings(stacktrace_page, stacktrace_page + PAGE_SIZE,
+					  PAGE_HYP);
+		if (err) {
+			kvm_err("Cannot map hyp stacktrace page\n");
+			goto out_err;
+		}
+	}
+
	for_each_possible_cpu(cpu) {
		char *percpu_begin = (char *)kvm_arm_hyp_percpu_base[cpu];
		char *percpu_end = percpu_begin + nvhe_percpu_size();
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index e8d4ea2fcfa0..9b81bf2d40d7 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -135,6 +135,17 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
 
		/* Update stack_hyp_va to end of the stack's private VA range */
		params->stack_hyp_va = hyp_addr + (2 * PAGE_SIZE);
+
+		/*
+		 * Map the stacktrace pages as shared and transfer ownership to
+		 * the hypervisor.
+		 */
+		prot = pkvm_mkstate(PAGE_HYP, PKVM_PAGE_SHARED_OWNED);
+		start = (void *)params->stacktrace_hyp_va;
+		end = start + PAGE_SIZE;
+		ret = pkvm_create_mappings(start, end, prot);
+		if (ret)
+			return ret;
	}
 
	/*
-- 
2.36.1.255.ge46751e96f-goog
Date: Tue, 7 Jun 2022 09:50:47 -0700
In-Reply-To: <20220607165105.639716-1-kaleshsingh@google.com>
Message-Id: <20220607165105.639716-6-kaleshsingh@google.com>
Subject: [PATCH v3 5/5] KVM: arm64: Unwind and dump nVHE hypervisor stacktrace
From: Kalesh Singh <kaleshsingh@google.com>
To: mark.rutland@arm.com, broonie@kernel.org, maz@kernel.org
Cc: will@kernel.org, qperret@google.com, tabba@google.com, surenb@google.com,
 tjmercier@google.com, kernel-team@android.com, Kalesh Singh, James Morse,
 Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Masami Hiramatsu,
 Alexei Starovoitov, "Madhavan T. Venkataraman", Peter Zijlstra, Andrew Jones,
 Keir Fraser, Zenghui Yu, Kefeng Wang, Ard Biesheuvel, Oliver Upton,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 linux-kernel@vger.kernel.org

On hyp_panic(), the hypervisor dumps the addresses for its stacktrace
entries to a page shared with the host. The host then symbolizes and
prints the hyp stacktrace before panicking itself.

Example stacktrace:

[  122.051187] kvm [380]: Invalid host exception to nVHE hyp!
[  122.052467] kvm [380]: nVHE HYP call trace:
[  122.052814] kvm [380]: [] __kvm_nvhe___pkvm_vcpu_init_traps+0x1f0/0x1f0
[  122.053865] kvm [380]: [] __kvm_nvhe_hyp_panic+0x130/0x1c0
[  122.054367] kvm [380]: [] __kvm_nvhe___kvm_vcpu_run+0x10/0x10
[  122.054878] kvm [380]: [] __kvm_nvhe_handle___kvm_vcpu_run+0x30/0x50
[  122.055412] kvm [380]: [] __kvm_nvhe_handle_trap+0xbc/0x160
[  122.055911] kvm [380]: [] __kvm_nvhe___host_exit+0x64/0x64
[  122.056417] kvm [380]: ---- end of nVHE HYP call trace ----

Signed-off-by: Kalesh Singh
Reviewed-by: Mark Brown
Reported-by: kernel test robot
---

Changes in v2:
  - Add Mark's Reviewed-by tag

 arch/arm64/include/asm/stacktrace.h | 42 ++++++++++++++--
 arch/arm64/kernel/stacktrace.c      | 75 +++++++++++++++++++++++++++++
 arch/arm64/kvm/handle_exit.c        |  4 ++
 arch/arm64/kvm/hyp/nvhe/switch.c    |  4 ++
 4 files changed, 121 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index f5af9a94c5a6..3063912107b0 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -5,6 +5,7 @@
 #ifndef __ASM_STACKTRACE_H
 #define __ASM_STACKTRACE_H
 
+#include
 #include
 #include
 #include
@@ -19,10 +20,12 @@ enum stack_type {
 #ifndef __KVM_NVHE_HYPERVISOR__
 	STACK_TYPE_TASK,
 	STACK_TYPE_IRQ,
-	STACK_TYPE_OVERFLOW,
 	STACK_TYPE_SDEI_NORMAL,
 	STACK_TYPE_SDEI_CRITICAL,
+#else /* __KVM_NVHE_HYPERVISOR__ */
+	STACK_TYPE_HYP,
 #endif /* !__KVM_NVHE_HYPERVISOR__ */
+	STACK_TYPE_OVERFLOW,
 	STACK_TYPE_UNKNOWN,
 	__NR_STACK_TYPES
 };
@@ -55,6 +58,9 @@ static inline bool on_stack(unsigned long sp, unsigned long size,
 extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
 			   const char *loglvl);
 
+extern void hyp_dump_backtrace(unsigned long hyp_offset);
+
+DECLARE_PER_CPU(unsigned long, kvm_arm_hyp_stacktrace_page);
 DECLARE_PER_CPU(unsigned long *, irq_stack_ptr);
 
@@ -91,8 +97,32 @@ static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
 static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
 			struct stack_info *info) { return false; }
 #endif
-#endif /* !__KVM_NVHE_HYPERVISOR__ */
+#else /* __KVM_NVHE_HYPERVISOR__ */
+
+extern void hyp_save_backtrace(void);
+
+DECLARE_PER_CPU(unsigned long [PAGE_SIZE/sizeof(long)], overflow_stack);
+DECLARE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
+
+static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
+				     struct stack_info *info)
+{
+	unsigned long low = (unsigned long)this_cpu_ptr(overflow_stack);
+	unsigned long high = low + PAGE_SIZE;
+
+	return on_stack(sp, size, low, high, STACK_TYPE_OVERFLOW, info);
+}
+
+static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
+				struct stack_info *info)
+{
+	struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
+	unsigned long high = params->stack_hyp_va;
+	unsigned long low = high - PAGE_SIZE;
 
+	return on_stack(sp, size, low, high, STACK_TYPE_HYP, info);
+}
+#endif /* !__KVM_NVHE_HYPERVISOR__ */
 
 /*
  * We can only safely access per-cpu stacks from current in a non-preemptible
@@ -105,6 +135,9 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
 	if (info)
 		info->type = STACK_TYPE_UNKNOWN;
 
+	if (on_overflow_stack(sp, size, info))
+		return true;
+
 #ifndef __KVM_NVHE_HYPERVISOR__
 	if (on_task_stack(tsk, sp, size, info))
 		return true;
@@ -112,10 +145,11 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
 		return false;
 	if (on_irq_stack(sp, size, info))
 		return true;
-	if (on_overflow_stack(sp, size, info))
-		return true;
 	if (on_sdei_stack(sp, size, info))
 		return true;
+#else /* __KVM_NVHE_HYPERVISOR__ */
+	if (on_hyp_stack(sp, size, info))
+		return true;
 #endif /* !__KVM_NVHE_HYPERVISOR__ */
 
 	return false;
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index f346b4c66f1c..c81dea9760ac 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -104,6 +104,7 @@ static int notrace __unwind_next(struct task_struct *tsk,
  *
  * TASK -> IRQ -> OVERFLOW -> SDEI_NORMAL
  * TASK -> SDEI_NORMAL -> SDEI_CRITICAL -> OVERFLOW
+ * HYP -> OVERFLOW
  *
  * ... but the nesting itself is strict. Once we transition from one
  * stack to another, it's never valid to unwind back to that first
@@ -242,7 +243,81 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
 
 	unwind(task, &state, consume_entry, cookie);
 }
+
+/**
+ * Symbolizes and dumps the hypervisor backtrace from the shared
+ * stacktrace page.
+ */
+noinline notrace void hyp_dump_backtrace(unsigned long hyp_offset)
+{
+	unsigned long *stacktrace_pos =
+		(unsigned long *)*this_cpu_ptr(&kvm_arm_hyp_stacktrace_page);
+	unsigned long va_mask = GENMASK_ULL(vabits_actual - 1, 0);
+	unsigned long pc = *stacktrace_pos++;
+
+	kvm_err("nVHE HYP call trace:\n");
+
+	while (pc) {
+		pc &= va_mask;		/* Mask tags */
+		pc += hyp_offset;	/* Convert to kern addr */
+		kvm_err("[<%016lx>] %pB\n", pc, (void *)pc);
+		pc = *stacktrace_pos++;
+	}
+
+	kvm_err("---- end of nVHE HYP call trace ----\n");
+}
 #else /* __KVM_NVHE_HYPERVISOR__ */
 DEFINE_PER_CPU(unsigned long [PAGE_SIZE/sizeof(long)], overflow_stack)
 	__aligned(16);
+
+static int notrace unwind_next(struct task_struct *tsk,
+			       struct unwind_state *state)
+{
+	struct stack_info info;
+
+	return __unwind_next(tsk, state, &info);
+}
+
+/**
+ * Saves a hypervisor stacktrace entry (address) to the shared stacktrace page.
+ */
+static bool hyp_save_backtrace_entry(void *arg, unsigned long where)
+{
+	struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
+	unsigned long **stacktrace_pos = (unsigned long **)arg;
+	unsigned long stacktrace_start, stacktrace_end;
+
+	stacktrace_start = (unsigned long)params->stacktrace_hyp_va;
+	stacktrace_end = stacktrace_start + PAGE_SIZE - (2 * sizeof(long));
+
+	if ((unsigned long) *stacktrace_pos > stacktrace_end)
+		return false;
+
+	/* Save the entry to the current pos in stacktrace page */
+	**stacktrace_pos = where;
+
+	/* A zero entry delimits the end of the stacktrace. */
+	*(*stacktrace_pos + 1) = 0UL;
+
+	/* Increment the current pos */
+	++*stacktrace_pos;
+
+	return true;
+}
+
+/**
+ * Saves hypervisor stacktrace to the shared stacktrace page.
+ */
+noinline notrace void hyp_save_backtrace(void)
+{
+	struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
+	void *stacktrace_start = (void *)params->stacktrace_hyp_va;
+	struct unwind_state state;
+
+	unwind_init(&state, (unsigned long)__builtin_frame_address(0),
+		    _THIS_IP_);
+
+	unwind(NULL, &state, hyp_save_backtrace_entry, &stacktrace_start);
+}
+
 #endif /* !__KVM_NVHE_HYPERVISOR__ */
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index f66c0142b335..96c5dc5529a1 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #include
@@ -353,6 +354,9 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
 			(void *)panic_addr);
 	}
 
+	/* Dump the hypervisor stacktrace */
+	hyp_dump_backtrace(hyp_offset);
+
 	/*
 	 * Hyp has panicked and we're going to handle that by panicking the
 	 * kernel. The kernel offset will be revealed in the panic so we're
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 6db801db8f27..add157f8e3f3 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -375,6 +376,9 @@ asmlinkage void __noreturn hyp_panic(void)
 		__sysreg_restore_state_nvhe(host_ctxt);
 	}
 
+	/* Save the hypervisor stacktrace */
+	hyp_save_backtrace();
+
 	__hyp_do_panic(host_ctxt, spsr, elr, par);
 	unreachable();
 }
-- 
2.36.1.255.ge46751e96f-goog
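As a reviewer aid, the shared-page protocol described above (the hypervisor appends PCs and keeps a zero terminator after the last entry; the host walks entries until the zero, masking tag bits and relocating by hyp_offset) can be sketched in plain userspace C. This is an illustrative model, not kernel code: PAGE_WORDS and VA_BITS are assumed stand-ins for PAGE_SIZE/sizeof(long) and the runtime vabits_actual.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_WORDS 16	/* tiny stand-in for PAGE_SIZE / sizeof(long) */
#define VA_BITS    48	/* assumed stand-in for vabits_actual */
#define VA_MASK    ((UINT64_C(1) << VA_BITS) - 1)

/*
 * Writer side (models hyp_save_backtrace_entry): append one PC at the
 * current position, always keeping a zero delimiter after it, and refuse
 * entries that would not leave room for entry + terminator.
 */
static int save_entry(uint64_t *page, size_t *pos, uint64_t where)
{
	if (*pos > PAGE_WORDS - 2)
		return 0;
	page[*pos] = where;		/* store the entry */
	page[*pos + 1] = 0;		/* zero delimits the trace */
	++*pos;
	return 1;
}

/*
 * Reader side (models the loop in hyp_dump_backtrace): walk entries until
 * the zero delimiter, masking tag bits and relocating each PC.
 */
static size_t decode_trace(const uint64_t *page, uint64_t hyp_offset,
			   uint64_t *out, size_t max)
{
	size_t n = 0;

	while (n < max && page[n] != 0) {
		out[n] = (page[n] & VA_MASK) + hyp_offset;
		n++;
	}
	return n;
}
```

The key design point mirrored here is that the writer maintains the terminator on every append, so the reader never needs a separate length field and a truncated trace is still well-formed.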