From: Atish Patra <atishp@rivosinc.com>
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Alexandre Ghiti, Andrew Jones, Andrew Morton, Anup Patel,
    Atish Patra, Björn Töpel, Suzuki K Poulose, Will Deacon, Marc Zyngier,
    Sean Christopherson, linux-coco@lists.linux.dev, Dylan Reid,
    abrestic@rivosinc.com, Samuel Ortiz, Christoph Hellwig, Conor Dooley,
    Greg Kroah-Hartman, Guo Ren, Heiko Stuebner, Jiri Slaby,
    kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org,
    linux-riscv@lists.infradead.org,
    Mayuresh Chitale, Palmer Dabbelt, Paolo Bonzini, Paul Walmsley,
    Rajnesh Kanwal, Uladzislau Rezki
Subject: [RFC 21/48] RISC-V: KVM: Handle SBI call forward from the TSM
Date: Wed, 19 Apr 2023 15:16:49 -0700
Message-Id: <20230419221716.3603068-22-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>

The TSM may forward some SBI calls to the host, since the host is the best
place to handle them. Any call related to hart state management, the console,
or the CoVE guest interface (COVG) falls under this category. Add a
CoVE-specific ecall handler to take the appropriate action upon receiving
these SBI calls.

Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
 arch/riscv/include/asm/kvm_cove.h |  5 +++
 arch/riscv/kvm/cove.c             | 54 +++++++++++++++++++++++++++++++
 arch/riscv/kvm/vcpu_exit.c        |  6 +++-
 arch/riscv/kvm/vcpu_sbi.c         |  2 ++
 4 files changed, 66 insertions(+), 1 deletion(-)
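The guest-visible side of this forwarding path is an ordinary SBI ecall. As a
rough illustration (not part of this patch; the helper name below is made up),
a TVM issuing the legacy console putchar call that the TSM forwards to the
host, where the new kvm_riscv_cove_vcpu_sbi_ecall() handler services it, could
look roughly like this:

/*
 * Illustrative sketch only: the SBI calling convention puts the extension
 * ID in a7, the function ID in a6 and the arguments in a0..a5. The legacy
 * v0.1 CONSOLE_PUTCHAR extension (EID 0x01) returns its error code in a0.
 */
static inline long tvm_sbi_console_putchar(int ch)
{
	register unsigned long a0 asm("a0") = (unsigned long)ch;
	register unsigned long a6 asm("a6") = 0;	/* FID (unused for legacy) */
	register unsigned long a7 asm("a7") = 0x01;	/* SBI_EXT_0_1_CONSOLE_PUTCHAR */

	asm volatile("ecall"
		     : "+r" (a0)
		     : "r" (a6), "r" (a7)
		     : "memory");

	return a0;
}

The handler added below accepts exactly this class of calls (legacy console,
DBCN, HSM and SRST) and rejects everything else with SBI_ERR_NOT_SUPPORTED.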
diff --git a/arch/riscv/include/asm/kvm_cove.h b/arch/riscv/include/asm/kvm_cove.h
index fc8633d..b63682f 100644
--- a/arch/riscv/include/asm/kvm_cove.h
+++ b/arch/riscv/include/asm/kvm_cove.h
@@ -126,6 +126,7 @@ int kvm_riscv_cove_vcpu_init(struct kvm_vcpu *vcpu);
 void kvm_riscv_cove_vcpu_load(struct kvm_vcpu *vcpu);
 void kvm_riscv_cove_vcpu_put(struct kvm_vcpu *vcpu);
 void kvm_riscv_cove_vcpu_switchto(struct kvm_vcpu *vcpu, struct kvm_cpu_trap *trap);
+int kvm_riscv_cove_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run);
 
 int kvm_riscv_cove_vm_measure_pages(struct kvm *kvm, struct kvm_riscv_cove_measure_region *mr);
 int kvm_riscv_cove_vm_add_memreg(struct kvm *kvm, unsigned long gpa, unsigned long size);
@@ -148,6 +149,10 @@ static inline int kvm_riscv_cove_vcpu_init(struct kvm_vcpu *vcpu) {return -1; }
 static inline void kvm_riscv_cove_vcpu_load(struct kvm_vcpu *vcpu) {}
 static inline void kvm_riscv_cove_vcpu_put(struct kvm_vcpu *vcpu) {}
 static inline void kvm_riscv_cove_vcpu_switchto(struct kvm_vcpu *vcpu, struct kvm_cpu_trap *trap) {}
+static inline int kvm_riscv_cove_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+	return -1;
+}
 static inline int kvm_riscv_cove_vm_add_memreg(struct kvm *kvm, unsigned long gpa, unsigned long size) {return -1; }
 static inline int kvm_riscv_cove_vm_measure_pages(struct kvm *kvm,
diff --git a/arch/riscv/kvm/cove.c b/arch/riscv/kvm/cove.c
index 44095f6..87fa04b 100644
--- a/arch/riscv/kvm/cove.c
+++ b/arch/riscv/kvm/cove.c
@@ -147,6 +147,60 @@ void kvm_riscv_cove_vcpu_put(struct kvm_vcpu *vcpu)
 	/* TODO */
 }
 
+int kvm_riscv_cove_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+	void *nshmem;
+	const struct kvm_vcpu_sbi_extension *sbi_ext;
+	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
+	struct kvm_cpu_trap utrap = { 0 };
+	struct kvm_vcpu_sbi_return sbi_ret = {
+		.out_val = 0,
+		.err_val = 0,
+		.utrap = &utrap,
+	};
+	bool ext_is_01 = false;
+	int ret = 1;
+
+	nshmem = nacl_shmem();
+	cp->a0 = nacl_shmem_gpr_read_cove(nshmem, KVM_ARCH_GUEST_A0);
+	cp->a1 = nacl_shmem_gpr_read_cove(nshmem, KVM_ARCH_GUEST_A1);
+	cp->a6 = nacl_shmem_gpr_read_cove(nshmem, KVM_ARCH_GUEST_A6);
+	cp->a7 = nacl_shmem_gpr_read_cove(nshmem, KVM_ARCH_GUEST_A7);
+
+	/* TSM will only forward legacy console to the host */
+#ifdef CONFIG_RISCV_SBI_V01
+	if (cp->a7 == SBI_EXT_0_1_CONSOLE_PUTCHAR)
+		ext_is_01 = true;
+#endif
+
+	sbi_ext = kvm_vcpu_sbi_find_ext(vcpu, cp->a7);
+	if ((sbi_ext && sbi_ext->handler) && ((cp->a7 == SBI_EXT_DBCN) ||
+	    (cp->a7 == SBI_EXT_HSM) || (cp->a7 == SBI_EXT_SRST) || ext_is_01)) {
+		ret = sbi_ext->handler(vcpu, run, &sbi_ret);
+	} else {
+		kvm_err("%s: SBI EXT %lx not supported for TVM\n", __func__, cp->a7);
+		/* Return error for unsupported SBI calls */
+		sbi_ret.err_val = SBI_ERR_NOT_SUPPORTED;
+		goto ecall_done;
+	}
+
+	if (ret < 0)
+		goto ecall_done;
+
+	ret = (sbi_ret.uexit) ? 0 : 1;
+
+ecall_done:
+	/*
+	 * No need to update the sepc as TSM will take care of SEPC increment
+	 * for ECALLS that won't be forwarded to the user space (e.g. console)
+	 */
+	nacl_shmem_gpr_write_cove(nshmem, KVM_ARCH_GUEST_A0, sbi_ret.err_val);
+	if (!ext_is_01)
+		nacl_shmem_gpr_write_cove(nshmem, KVM_ARCH_GUEST_A1, sbi_ret.out_val);
+
+	return ret;
+}
+
 int kvm_riscv_cove_gstage_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva)
 {
 	struct kvm_riscv_cove_page *tpage;
diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c
index d00b9ee5..8944e29 100644
--- a/arch/riscv/kvm/vcpu_exit.c
+++ b/arch/riscv/kvm/vcpu_exit.c
@@ -207,11 +207,15 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	case EXC_INST_GUEST_PAGE_FAULT:
 	case EXC_LOAD_GUEST_PAGE_FAULT:
 	case EXC_STORE_GUEST_PAGE_FAULT:
+		//TODO: If the host runs in HS mode, this won't work as we don't
+		//read hstatus from the shared memory yet
 		if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV)
 			ret = gstage_page_fault(vcpu, run, trap);
 		break;
 	case EXC_SUPERVISOR_SYSCALL:
-		if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV)
+		if (is_cove_vcpu(vcpu))
+			ret = kvm_riscv_cove_vcpu_sbi_ecall(vcpu, run);
+		else if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV)
 			ret = kvm_riscv_vcpu_sbi_ecall(vcpu, run);
 		break;
 	default:
diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c
index 047ba10..d2f43bc 100644
--- a/arch/riscv/kvm/vcpu_sbi.c
+++ b/arch/riscv/kvm/vcpu_sbi.c
@@ -10,6 +10,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 
 #ifndef CONFIG_RISCV_SBI_V01
-- 
2.25.1