From: Anup Patel
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Andrew Jones, Heiko Stuebner,
    Samuel Ortiz, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Anup Patel
Subject: [PATCH 1/7] RISC-V: KVM: Factor-out ONE_REG related code to its own source file
Date: Wed, 12 Jul 2023 21:40:41 +0530
Message-Id: <20230712161047.1764756-2-apatel@ventanamicro.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230712161047.1764756-1-apatel@ventanamicro.com>
References: <20230712161047.1764756-1-apatel@ventanamicro.com>

The VCPU ONE_REG interface has grown over time and it will continue to
grow with new ISA extensions and other features. Let us move all ONE_REG
related code to its own source file so that vcpu.c focuses only on
high-level VCPU functions.

Signed-off-by: Anup Patel
Reviewed-by: Andrew Jones
---
 arch/riscv/include/asm/kvm_host.h |   6 +
 arch/riscv/kvm/Makefile           |   1 +
 arch/riscv/kvm/vcpu.c             | 529 +----------------------------
 arch/riscv/kvm/vcpu_onereg.c      | 547 ++++++++++++++++++++++++++++++
 4 files changed, 555 insertions(+), 528 deletions(-)
 create mode 100644 arch/riscv/kvm/vcpu_onereg.c

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 2d8ee53b66c7..55bc7bdbff48 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -337,6 +337,12 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 
 void __kvm_riscv_switch_to(struct kvm_vcpu_arch *vcpu_arch);
 
+void kvm_riscv_vcpu_setup_isa(struct kvm_vcpu *vcpu);
+int kvm_riscv_vcpu_get_reg(struct kvm_vcpu *vcpu,
+			   const struct kvm_one_reg *reg);
+int kvm_riscv_vcpu_set_reg(struct kvm_vcpu *vcpu,
+			   const struct kvm_one_reg *reg);
+
 int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq);
 int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq);
 void kvm_riscv_vcpu_flush_interrupts(struct kvm_vcpu *vcpu);
diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index fee0671e2dc1..4c2067fc59fc 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -19,6 +19,7 @@ kvm-y += vcpu_exit.o
 kvm-y += vcpu_fp.o
 kvm-y += vcpu_vector.o
 kvm-y += vcpu_insn.o
+kvm-y += vcpu_onereg.o
 kvm-y += vcpu_switch.o
 kvm-y += vcpu_sbi.o
 kvm-$(CONFIG_RISCV_SBI_V01) += vcpu_sbi_v01.o
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index d12ef99901fc..452d6548e951 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -13,16 +13,12 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
 #include
 #include
 #include
-#include
-#include
-#include
 #include
 
 const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
@@ -46,79 +42,6 @@ const struct kvm_stats_header kvm_vcpu_stats_header = {
 		sizeof(kvm_vcpu_stats_desc),
 };
 
-#define KVM_RISCV_BASE_ISA_MASK		GENMASK(25, 0)
-
-#define KVM_ISA_EXT_ARR(ext)		[KVM_RISCV_ISA_EXT_##ext] = RISCV_ISA_EXT_##ext
-
-/* Mapping between KVM ISA Extension ID & Host ISA extension ID */
-static const unsigned long kvm_isa_ext_arr[] = {
-	[KVM_RISCV_ISA_EXT_A] = RISCV_ISA_EXT_a,
-	[KVM_RISCV_ISA_EXT_C] = RISCV_ISA_EXT_c,
-	[KVM_RISCV_ISA_EXT_D] = RISCV_ISA_EXT_d,
-	[KVM_RISCV_ISA_EXT_F] = RISCV_ISA_EXT_f,
-	[KVM_RISCV_ISA_EXT_H] = RISCV_ISA_EXT_h,
-	[KVM_RISCV_ISA_EXT_I] = RISCV_ISA_EXT_i,
-	[KVM_RISCV_ISA_EXT_M] = RISCV_ISA_EXT_m,
-	[KVM_RISCV_ISA_EXT_V] = RISCV_ISA_EXT_v,
-
-	KVM_ISA_EXT_ARR(SSAIA),
-	KVM_ISA_EXT_ARR(SSTC),
-	KVM_ISA_EXT_ARR(SVINVAL),
-	KVM_ISA_EXT_ARR(SVNAPOT),
-	KVM_ISA_EXT_ARR(SVPBMT),
-	KVM_ISA_EXT_ARR(ZBB),
-	KVM_ISA_EXT_ARR(ZIHINTPAUSE),
-	KVM_ISA_EXT_ARR(ZICBOM),
-	KVM_ISA_EXT_ARR(ZICBOZ),
-};
-
-static unsigned long kvm_riscv_vcpu_base2isa_ext(unsigned long base_ext)
-{
-	unsigned long i;
-
-	for (i = 0; i < KVM_RISCV_ISA_EXT_MAX; i++) {
-		if (kvm_isa_ext_arr[i]
=3D=3D base_ext) - return i; - } - - return KVM_RISCV_ISA_EXT_MAX; -} - -static bool kvm_riscv_vcpu_isa_enable_allowed(unsigned long ext) -{ - switch (ext) { - case KVM_RISCV_ISA_EXT_H: - return false; - case KVM_RISCV_ISA_EXT_V: - return riscv_v_vstate_ctrl_user_allowed(); - default: - break; - } - - return true; -} - -static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext) -{ - switch (ext) { - case KVM_RISCV_ISA_EXT_A: - case KVM_RISCV_ISA_EXT_C: - case KVM_RISCV_ISA_EXT_I: - case KVM_RISCV_ISA_EXT_M: - case KVM_RISCV_ISA_EXT_SSAIA: - case KVM_RISCV_ISA_EXT_SSTC: - case KVM_RISCV_ISA_EXT_SVINVAL: - case KVM_RISCV_ISA_EXT_SVNAPOT: - case KVM_RISCV_ISA_EXT_ZIHINTPAUSE: - case KVM_RISCV_ISA_EXT_ZBB: - return false; - default: - break; - } - - return true; -} - static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu) { struct kvm_vcpu_csr *csr =3D &vcpu->arch.guest_csr; @@ -176,7 +99,6 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) int rc; struct kvm_cpu_context *cntx; struct kvm_vcpu_csr *reset_csr =3D &vcpu->arch.guest_reset_csr; - unsigned long host_isa, i; =20 /* Mark this VCPU never ran */ vcpu->arch.ran_atleast_once =3D false; @@ -184,12 +106,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) bitmap_zero(vcpu->arch.isa, RISCV_ISA_EXT_MAX); =20 /* Setup ISA features available to VCPU */ - for (i =3D 0; i < ARRAY_SIZE(kvm_isa_ext_arr); i++) { - host_isa =3D kvm_isa_ext_arr[i]; - if (__riscv_isa_extension_available(NULL, host_isa) && - kvm_riscv_vcpu_isa_enable_allowed(i)) - set_bit(host_isa, vcpu->arch.isa); - } + kvm_riscv_vcpu_setup_isa(vcpu); =20 /* Setup vendor, arch, and implementation details */ vcpu->arch.mvendorid =3D sbi_get_mvendorid(); @@ -294,450 +211,6 @@ vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu,= struct vm_fault *vmf) return VM_FAULT_SIGBUS; } =20 -static int kvm_riscv_vcpu_get_reg_config(struct kvm_vcpu *vcpu, - const struct kvm_one_reg *reg) -{ - unsigned long __user *uaddr =3D - (unsigned long __user *)(unsigned long)reg->addr; - unsigned long reg_num =3D reg->id & ~(KVM_REG_ARCH_MASK | - KVM_REG_SIZE_MASK | - KVM_REG_RISCV_CONFIG); - unsigned long reg_val; - - if (KVM_REG_SIZE(reg->id) !=3D sizeof(unsigned long)) - return -EINVAL; - - switch (reg_num) { - case KVM_REG_RISCV_CONFIG_REG(isa): - reg_val =3D vcpu->arch.isa[0] & KVM_RISCV_BASE_ISA_MASK; - break; - case KVM_REG_RISCV_CONFIG_REG(zicbom_block_size): - if (!riscv_isa_extension_available(vcpu->arch.isa, ZICBOM)) - return -EINVAL; - reg_val =3D riscv_cbom_block_size; - break; - case KVM_REG_RISCV_CONFIG_REG(zicboz_block_size): - if (!riscv_isa_extension_available(vcpu->arch.isa, ZICBOZ)) - return -EINVAL; - reg_val =3D riscv_cboz_block_size; - break; - case KVM_REG_RISCV_CONFIG_REG(mvendorid): - reg_val =3D vcpu->arch.mvendorid; - break; - case KVM_REG_RISCV_CONFIG_REG(marchid): - reg_val =3D vcpu->arch.marchid; - break; - case KVM_REG_RISCV_CONFIG_REG(mimpid): - reg_val =3D vcpu->arch.mimpid; - break; - default: - return -EINVAL; - } - - if (copy_to_user(uaddr, ®_val, KVM_REG_SIZE(reg->id))) - return -EFAULT; - - return 0; -} - -static int kvm_riscv_vcpu_set_reg_config(struct kvm_vcpu *vcpu, - const struct kvm_one_reg *reg) -{ - unsigned long __user *uaddr =3D - (unsigned long __user *)(unsigned long)reg->addr; - unsigned long reg_num =3D reg->id & ~(KVM_REG_ARCH_MASK | - KVM_REG_SIZE_MASK | - KVM_REG_RISCV_CONFIG); - unsigned long i, isa_ext, reg_val; - - if (KVM_REG_SIZE(reg->id) !=3D sizeof(unsigned long)) - return -EINVAL; - - if (copy_from_user(®_val, uaddr, 
KVM_REG_SIZE(reg->id))) - return -EFAULT; - - switch (reg_num) { - case KVM_REG_RISCV_CONFIG_REG(isa): - /* - * This ONE REG interface is only defined for - * single letter extensions. - */ - if (fls(reg_val) >=3D RISCV_ISA_EXT_BASE) - return -EINVAL; - - if (!vcpu->arch.ran_atleast_once) { - /* Ignore the enable/disable request for certain extensions */ - for (i =3D 0; i < RISCV_ISA_EXT_BASE; i++) { - isa_ext =3D kvm_riscv_vcpu_base2isa_ext(i); - if (isa_ext >=3D KVM_RISCV_ISA_EXT_MAX) { - reg_val &=3D ~BIT(i); - continue; - } - if (!kvm_riscv_vcpu_isa_enable_allowed(isa_ext)) - if (reg_val & BIT(i)) - reg_val &=3D ~BIT(i); - if (!kvm_riscv_vcpu_isa_disable_allowed(isa_ext)) - if (!(reg_val & BIT(i))) - reg_val |=3D BIT(i); - } - reg_val &=3D riscv_isa_extension_base(NULL); - /* Do not modify anything beyond single letter extensions */ - reg_val =3D (vcpu->arch.isa[0] & ~KVM_RISCV_BASE_ISA_MASK) | - (reg_val & KVM_RISCV_BASE_ISA_MASK); - vcpu->arch.isa[0] =3D reg_val; - kvm_riscv_vcpu_fp_reset(vcpu); - } else { - return -EOPNOTSUPP; - } - break; - case KVM_REG_RISCV_CONFIG_REG(zicbom_block_size): - return -EOPNOTSUPP; - case KVM_REG_RISCV_CONFIG_REG(zicboz_block_size): - return -EOPNOTSUPP; - case KVM_REG_RISCV_CONFIG_REG(mvendorid): - if (!vcpu->arch.ran_atleast_once) - vcpu->arch.mvendorid =3D reg_val; - else - return -EBUSY; - break; - case KVM_REG_RISCV_CONFIG_REG(marchid): - if (!vcpu->arch.ran_atleast_once) - vcpu->arch.marchid =3D reg_val; - else - return -EBUSY; - break; - case KVM_REG_RISCV_CONFIG_REG(mimpid): - if (!vcpu->arch.ran_atleast_once) - vcpu->arch.mimpid =3D reg_val; - else - return -EBUSY; - break; - default: - return -EINVAL; - } - - return 0; -} - -static int kvm_riscv_vcpu_get_reg_core(struct kvm_vcpu *vcpu, - const struct kvm_one_reg *reg) -{ - struct kvm_cpu_context *cntx =3D &vcpu->arch.guest_context; - unsigned long __user *uaddr =3D - (unsigned long __user *)(unsigned long)reg->addr; - unsigned long reg_num =3D reg->id & ~(KVM_REG_ARCH_MASK | - KVM_REG_SIZE_MASK | - KVM_REG_RISCV_CORE); - unsigned long reg_val; - - if (KVM_REG_SIZE(reg->id) !=3D sizeof(unsigned long)) - return -EINVAL; - if (reg_num >=3D sizeof(struct kvm_riscv_core) / sizeof(unsigned long)) - return -EINVAL; - - if (reg_num =3D=3D KVM_REG_RISCV_CORE_REG(regs.pc)) - reg_val =3D cntx->sepc; - else if (KVM_REG_RISCV_CORE_REG(regs.pc) < reg_num && - reg_num <=3D KVM_REG_RISCV_CORE_REG(regs.t6)) - reg_val =3D ((unsigned long *)cntx)[reg_num]; - else if (reg_num =3D=3D KVM_REG_RISCV_CORE_REG(mode)) - reg_val =3D (cntx->sstatus & SR_SPP) ? 
- KVM_RISCV_MODE_S : KVM_RISCV_MODE_U; - else - return -EINVAL; - - if (copy_to_user(uaddr, ®_val, KVM_REG_SIZE(reg->id))) - return -EFAULT; - - return 0; -} - -static int kvm_riscv_vcpu_set_reg_core(struct kvm_vcpu *vcpu, - const struct kvm_one_reg *reg) -{ - struct kvm_cpu_context *cntx =3D &vcpu->arch.guest_context; - unsigned long __user *uaddr =3D - (unsigned long __user *)(unsigned long)reg->addr; - unsigned long reg_num =3D reg->id & ~(KVM_REG_ARCH_MASK | - KVM_REG_SIZE_MASK | - KVM_REG_RISCV_CORE); - unsigned long reg_val; - - if (KVM_REG_SIZE(reg->id) !=3D sizeof(unsigned long)) - return -EINVAL; - if (reg_num >=3D sizeof(struct kvm_riscv_core) / sizeof(unsigned long)) - return -EINVAL; - - if (copy_from_user(®_val, uaddr, KVM_REG_SIZE(reg->id))) - return -EFAULT; - - if (reg_num =3D=3D KVM_REG_RISCV_CORE_REG(regs.pc)) - cntx->sepc =3D reg_val; - else if (KVM_REG_RISCV_CORE_REG(regs.pc) < reg_num && - reg_num <=3D KVM_REG_RISCV_CORE_REG(regs.t6)) - ((unsigned long *)cntx)[reg_num] =3D reg_val; - else if (reg_num =3D=3D KVM_REG_RISCV_CORE_REG(mode)) { - if (reg_val =3D=3D KVM_RISCV_MODE_S) - cntx->sstatus |=3D SR_SPP; - else - cntx->sstatus &=3D ~SR_SPP; - } else - return -EINVAL; - - return 0; -} - -static int kvm_riscv_vcpu_general_get_csr(struct kvm_vcpu *vcpu, - unsigned long reg_num, - unsigned long *out_val) -{ - struct kvm_vcpu_csr *csr =3D &vcpu->arch.guest_csr; - - if (reg_num >=3D sizeof(struct kvm_riscv_csr) / sizeof(unsigned long)) - return -EINVAL; - - if (reg_num =3D=3D KVM_REG_RISCV_CSR_REG(sip)) { - kvm_riscv_vcpu_flush_interrupts(vcpu); - *out_val =3D (csr->hvip >> VSIP_TO_HVIP_SHIFT) & VSIP_VALID_MASK; - *out_val |=3D csr->hvip & ~IRQ_LOCAL_MASK; - } else - *out_val =3D ((unsigned long *)csr)[reg_num]; - - return 0; -} - -static inline int kvm_riscv_vcpu_general_set_csr(struct kvm_vcpu *vcpu, - unsigned long reg_num, - unsigned long reg_val) -{ - struct kvm_vcpu_csr *csr =3D &vcpu->arch.guest_csr; - - if (reg_num >=3D sizeof(struct kvm_riscv_csr) / sizeof(unsigned long)) - return -EINVAL; - - if (reg_num =3D=3D KVM_REG_RISCV_CSR_REG(sip)) { - reg_val &=3D VSIP_VALID_MASK; - reg_val <<=3D VSIP_TO_HVIP_SHIFT; - } - - ((unsigned long *)csr)[reg_num] =3D reg_val; - - if (reg_num =3D=3D KVM_REG_RISCV_CSR_REG(sip)) - WRITE_ONCE(vcpu->arch.irqs_pending_mask[0], 0); - - return 0; -} - -static int kvm_riscv_vcpu_get_reg_csr(struct kvm_vcpu *vcpu, - const struct kvm_one_reg *reg) -{ - int rc; - unsigned long __user *uaddr =3D - (unsigned long __user *)(unsigned long)reg->addr; - unsigned long reg_num =3D reg->id & ~(KVM_REG_ARCH_MASK | - KVM_REG_SIZE_MASK | - KVM_REG_RISCV_CSR); - unsigned long reg_val, reg_subtype; - - if (KVM_REG_SIZE(reg->id) !=3D sizeof(unsigned long)) - return -EINVAL; - - reg_subtype =3D reg_num & KVM_REG_RISCV_SUBTYPE_MASK; - reg_num &=3D ~KVM_REG_RISCV_SUBTYPE_MASK; - switch (reg_subtype) { - case KVM_REG_RISCV_CSR_GENERAL: - rc =3D kvm_riscv_vcpu_general_get_csr(vcpu, reg_num, ®_val); - break; - case KVM_REG_RISCV_CSR_AIA: - rc =3D kvm_riscv_vcpu_aia_get_csr(vcpu, reg_num, ®_val); - break; - default: - rc =3D -EINVAL; - break; - } - if (rc) - return rc; - - if (copy_to_user(uaddr, ®_val, KVM_REG_SIZE(reg->id))) - return -EFAULT; - - return 0; -} - -static int kvm_riscv_vcpu_set_reg_csr(struct kvm_vcpu *vcpu, - const struct kvm_one_reg *reg) -{ - int rc; - unsigned long __user *uaddr =3D - (unsigned long __user *)(unsigned long)reg->addr; - unsigned long reg_num =3D reg->id & ~(KVM_REG_ARCH_MASK | - KVM_REG_SIZE_MASK | - KVM_REG_RISCV_CSR); - 
unsigned long reg_val, reg_subtype; - - if (KVM_REG_SIZE(reg->id) !=3D sizeof(unsigned long)) - return -EINVAL; - - if (copy_from_user(®_val, uaddr, KVM_REG_SIZE(reg->id))) - return -EFAULT; - - reg_subtype =3D reg_num & KVM_REG_RISCV_SUBTYPE_MASK; - reg_num &=3D ~KVM_REG_RISCV_SUBTYPE_MASK; - switch (reg_subtype) { - case KVM_REG_RISCV_CSR_GENERAL: - rc =3D kvm_riscv_vcpu_general_set_csr(vcpu, reg_num, reg_val); - break; - case KVM_REG_RISCV_CSR_AIA: - rc =3D kvm_riscv_vcpu_aia_set_csr(vcpu, reg_num, reg_val); - break; - default: - rc =3D -EINVAL; - break; - } - if (rc) - return rc; - - return 0; -} - -static int kvm_riscv_vcpu_get_reg_isa_ext(struct kvm_vcpu *vcpu, - const struct kvm_one_reg *reg) -{ - unsigned long __user *uaddr =3D - (unsigned long __user *)(unsigned long)reg->addr; - unsigned long reg_num =3D reg->id & ~(KVM_REG_ARCH_MASK | - KVM_REG_SIZE_MASK | - KVM_REG_RISCV_ISA_EXT); - unsigned long reg_val =3D 0; - unsigned long host_isa_ext; - - if (KVM_REG_SIZE(reg->id) !=3D sizeof(unsigned long)) - return -EINVAL; - - if (reg_num >=3D KVM_RISCV_ISA_EXT_MAX || - reg_num >=3D ARRAY_SIZE(kvm_isa_ext_arr)) - return -EINVAL; - - host_isa_ext =3D kvm_isa_ext_arr[reg_num]; - if (__riscv_isa_extension_available(vcpu->arch.isa, host_isa_ext)) - reg_val =3D 1; /* Mark the given extension as available */ - - if (copy_to_user(uaddr, ®_val, KVM_REG_SIZE(reg->id))) - return -EFAULT; - - return 0; -} - -static int kvm_riscv_vcpu_set_reg_isa_ext(struct kvm_vcpu *vcpu, - const struct kvm_one_reg *reg) -{ - unsigned long __user *uaddr =3D - (unsigned long __user *)(unsigned long)reg->addr; - unsigned long reg_num =3D reg->id & ~(KVM_REG_ARCH_MASK | - KVM_REG_SIZE_MASK | - KVM_REG_RISCV_ISA_EXT); - unsigned long reg_val; - unsigned long host_isa_ext; - - if (KVM_REG_SIZE(reg->id) !=3D sizeof(unsigned long)) - return -EINVAL; - - if (reg_num >=3D KVM_RISCV_ISA_EXT_MAX || - reg_num >=3D ARRAY_SIZE(kvm_isa_ext_arr)) - return -EINVAL; - - if (copy_from_user(®_val, uaddr, KVM_REG_SIZE(reg->id))) - return -EFAULT; - - host_isa_ext =3D kvm_isa_ext_arr[reg_num]; - if (!__riscv_isa_extension_available(NULL, host_isa_ext)) - return -EOPNOTSUPP; - - if (!vcpu->arch.ran_atleast_once) { - /* - * All multi-letter extension and a few single letter - * extension can be disabled - */ - if (reg_val =3D=3D 1 && - kvm_riscv_vcpu_isa_enable_allowed(reg_num)) - set_bit(host_isa_ext, vcpu->arch.isa); - else if (!reg_val && - kvm_riscv_vcpu_isa_disable_allowed(reg_num)) - clear_bit(host_isa_ext, vcpu->arch.isa); - else - return -EINVAL; - kvm_riscv_vcpu_fp_reset(vcpu); - } else { - return -EOPNOTSUPP; - } - - return 0; -} - -static int kvm_riscv_vcpu_set_reg(struct kvm_vcpu *vcpu, - const struct kvm_one_reg *reg) -{ - switch (reg->id & KVM_REG_RISCV_TYPE_MASK) { - case KVM_REG_RISCV_CONFIG: - return kvm_riscv_vcpu_set_reg_config(vcpu, reg); - case KVM_REG_RISCV_CORE: - return kvm_riscv_vcpu_set_reg_core(vcpu, reg); - case KVM_REG_RISCV_CSR: - return kvm_riscv_vcpu_set_reg_csr(vcpu, reg); - case KVM_REG_RISCV_TIMER: - return kvm_riscv_vcpu_set_reg_timer(vcpu, reg); - case KVM_REG_RISCV_FP_F: - return kvm_riscv_vcpu_set_reg_fp(vcpu, reg, - KVM_REG_RISCV_FP_F); - case KVM_REG_RISCV_FP_D: - return kvm_riscv_vcpu_set_reg_fp(vcpu, reg, - KVM_REG_RISCV_FP_D); - case KVM_REG_RISCV_ISA_EXT: - return kvm_riscv_vcpu_set_reg_isa_ext(vcpu, reg); - case KVM_REG_RISCV_SBI_EXT: - return kvm_riscv_vcpu_set_reg_sbi_ext(vcpu, reg); - case KVM_REG_RISCV_VECTOR: - return kvm_riscv_vcpu_set_reg_vector(vcpu, reg, - KVM_REG_RISCV_VECTOR); - 
default: - break; - } - - return -EINVAL; -} - -static int kvm_riscv_vcpu_get_reg(struct kvm_vcpu *vcpu, - const struct kvm_one_reg *reg) -{ - switch (reg->id & KVM_REG_RISCV_TYPE_MASK) { - case KVM_REG_RISCV_CONFIG: - return kvm_riscv_vcpu_get_reg_config(vcpu, reg); - case KVM_REG_RISCV_CORE: - return kvm_riscv_vcpu_get_reg_core(vcpu, reg); - case KVM_REG_RISCV_CSR: - return kvm_riscv_vcpu_get_reg_csr(vcpu, reg); - case KVM_REG_RISCV_TIMER: - return kvm_riscv_vcpu_get_reg_timer(vcpu, reg); - case KVM_REG_RISCV_FP_F: - return kvm_riscv_vcpu_get_reg_fp(vcpu, reg, - KVM_REG_RISCV_FP_F); - case KVM_REG_RISCV_FP_D: - return kvm_riscv_vcpu_get_reg_fp(vcpu, reg, - KVM_REG_RISCV_FP_D); - case KVM_REG_RISCV_ISA_EXT: - return kvm_riscv_vcpu_get_reg_isa_ext(vcpu, reg); - case KVM_REG_RISCV_SBI_EXT: - return kvm_riscv_vcpu_get_reg_sbi_ext(vcpu, reg); - case KVM_REG_RISCV_VECTOR: - return kvm_riscv_vcpu_get_reg_vector(vcpu, reg, - KVM_REG_RISCV_VECTOR); - default: - break; - } - - return -EINVAL; -} - long kvm_arch_vcpu_async_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) { diff --git a/arch/riscv/kvm/vcpu_onereg.c b/arch/riscv/kvm/vcpu_onereg.c new file mode 100644 index 000000000000..836ffe79311a --- /dev/null +++ b/arch/riscv/kvm/vcpu_onereg.c @@ -0,0 +1,547 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. + * Copyright (C) 2023 Ventana Micro Systems Inc. + * + * Authors: + * Anup Patel + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#define KVM_RISCV_BASE_ISA_MASK GENMASK(25, 0) + +#define KVM_ISA_EXT_ARR(ext) \ +[KVM_RISCV_ISA_EXT_##ext] =3D RISCV_ISA_EXT_##ext + +/* Mapping between KVM ISA Extension ID & Host ISA extension ID */ +static const unsigned long kvm_isa_ext_arr[] =3D { + [KVM_RISCV_ISA_EXT_A] =3D RISCV_ISA_EXT_a, + [KVM_RISCV_ISA_EXT_C] =3D RISCV_ISA_EXT_c, + [KVM_RISCV_ISA_EXT_D] =3D RISCV_ISA_EXT_d, + [KVM_RISCV_ISA_EXT_F] =3D RISCV_ISA_EXT_f, + [KVM_RISCV_ISA_EXT_H] =3D RISCV_ISA_EXT_h, + [KVM_RISCV_ISA_EXT_I] =3D RISCV_ISA_EXT_i, + [KVM_RISCV_ISA_EXT_M] =3D RISCV_ISA_EXT_m, + [KVM_RISCV_ISA_EXT_V] =3D RISCV_ISA_EXT_v, + + KVM_ISA_EXT_ARR(SSAIA), + KVM_ISA_EXT_ARR(SSTC), + KVM_ISA_EXT_ARR(SVINVAL), + KVM_ISA_EXT_ARR(SVNAPOT), + KVM_ISA_EXT_ARR(SVPBMT), + KVM_ISA_EXT_ARR(ZBB), + KVM_ISA_EXT_ARR(ZIHINTPAUSE), + KVM_ISA_EXT_ARR(ZICBOM), + KVM_ISA_EXT_ARR(ZICBOZ), +}; + +static unsigned long kvm_riscv_vcpu_base2isa_ext(unsigned long base_ext) +{ + unsigned long i; + + for (i =3D 0; i < KVM_RISCV_ISA_EXT_MAX; i++) { + if (kvm_isa_ext_arr[i] =3D=3D base_ext) + return i; + } + + return KVM_RISCV_ISA_EXT_MAX; +} + +static bool kvm_riscv_vcpu_isa_enable_allowed(unsigned long ext) +{ + switch (ext) { + case KVM_RISCV_ISA_EXT_H: + return false; + case KVM_RISCV_ISA_EXT_V: + return riscv_v_vstate_ctrl_user_allowed(); + default: + break; + } + + return true; +} + +static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext) +{ + switch (ext) { + case KVM_RISCV_ISA_EXT_A: + case KVM_RISCV_ISA_EXT_C: + case KVM_RISCV_ISA_EXT_I: + case KVM_RISCV_ISA_EXT_M: + case KVM_RISCV_ISA_EXT_SSAIA: + case KVM_RISCV_ISA_EXT_SSTC: + case KVM_RISCV_ISA_EXT_SVINVAL: + case KVM_RISCV_ISA_EXT_SVNAPOT: + case KVM_RISCV_ISA_EXT_ZIHINTPAUSE: + case KVM_RISCV_ISA_EXT_ZBB: + return false; + default: + break; + } + + return true; +} + +void kvm_riscv_vcpu_setup_isa(struct kvm_vcpu *vcpu) +{ + unsigned long host_isa, i; + + for (i =3D 0; i < ARRAY_SIZE(kvm_isa_ext_arr); i++) { + host_isa =3D 
kvm_isa_ext_arr[i]; + if (__riscv_isa_extension_available(NULL, host_isa) && + kvm_riscv_vcpu_isa_enable_allowed(i)) + set_bit(host_isa, vcpu->arch.isa); + } +} + +static int kvm_riscv_vcpu_get_reg_config(struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + unsigned long __user *uaddr =3D + (unsigned long __user *)(unsigned long)reg->addr; + unsigned long reg_num =3D reg->id & ~(KVM_REG_ARCH_MASK | + KVM_REG_SIZE_MASK | + KVM_REG_RISCV_CONFIG); + unsigned long reg_val; + + if (KVM_REG_SIZE(reg->id) !=3D sizeof(unsigned long)) + return -EINVAL; + + switch (reg_num) { + case KVM_REG_RISCV_CONFIG_REG(isa): + reg_val =3D vcpu->arch.isa[0] & KVM_RISCV_BASE_ISA_MASK; + break; + case KVM_REG_RISCV_CONFIG_REG(zicbom_block_size): + if (!riscv_isa_extension_available(vcpu->arch.isa, ZICBOM)) + return -EINVAL; + reg_val =3D riscv_cbom_block_size; + break; + case KVM_REG_RISCV_CONFIG_REG(zicboz_block_size): + if (!riscv_isa_extension_available(vcpu->arch.isa, ZICBOZ)) + return -EINVAL; + reg_val =3D riscv_cboz_block_size; + break; + case KVM_REG_RISCV_CONFIG_REG(mvendorid): + reg_val =3D vcpu->arch.mvendorid; + break; + case KVM_REG_RISCV_CONFIG_REG(marchid): + reg_val =3D vcpu->arch.marchid; + break; + case KVM_REG_RISCV_CONFIG_REG(mimpid): + reg_val =3D vcpu->arch.mimpid; + break; + default: + return -EINVAL; + } + + if (copy_to_user(uaddr, ®_val, KVM_REG_SIZE(reg->id))) + return -EFAULT; + + return 0; +} + +static int kvm_riscv_vcpu_set_reg_config(struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + unsigned long __user *uaddr =3D + (unsigned long __user *)(unsigned long)reg->addr; + unsigned long reg_num =3D reg->id & ~(KVM_REG_ARCH_MASK | + KVM_REG_SIZE_MASK | + KVM_REG_RISCV_CONFIG); + unsigned long i, isa_ext, reg_val; + + if (KVM_REG_SIZE(reg->id) !=3D sizeof(unsigned long)) + return -EINVAL; + + if (copy_from_user(®_val, uaddr, KVM_REG_SIZE(reg->id))) + return -EFAULT; + + switch (reg_num) { + case KVM_REG_RISCV_CONFIG_REG(isa): + /* + * This ONE REG interface is only defined for + * single letter extensions. 
+ */ + if (fls(reg_val) >=3D RISCV_ISA_EXT_BASE) + return -EINVAL; + + if (!vcpu->arch.ran_atleast_once) { + /* Ignore the enable/disable request for certain extensions */ + for (i =3D 0; i < RISCV_ISA_EXT_BASE; i++) { + isa_ext =3D kvm_riscv_vcpu_base2isa_ext(i); + if (isa_ext >=3D KVM_RISCV_ISA_EXT_MAX) { + reg_val &=3D ~BIT(i); + continue; + } + if (!kvm_riscv_vcpu_isa_enable_allowed(isa_ext)) + if (reg_val & BIT(i)) + reg_val &=3D ~BIT(i); + if (!kvm_riscv_vcpu_isa_disable_allowed(isa_ext)) + if (!(reg_val & BIT(i))) + reg_val |=3D BIT(i); + } + reg_val &=3D riscv_isa_extension_base(NULL); + /* Do not modify anything beyond single letter extensions */ + reg_val =3D (vcpu->arch.isa[0] & ~KVM_RISCV_BASE_ISA_MASK) | + (reg_val & KVM_RISCV_BASE_ISA_MASK); + vcpu->arch.isa[0] =3D reg_val; + kvm_riscv_vcpu_fp_reset(vcpu); + } else { + return -EOPNOTSUPP; + } + break; + case KVM_REG_RISCV_CONFIG_REG(zicbom_block_size): + return -EOPNOTSUPP; + case KVM_REG_RISCV_CONFIG_REG(zicboz_block_size): + return -EOPNOTSUPP; + case KVM_REG_RISCV_CONFIG_REG(mvendorid): + if (!vcpu->arch.ran_atleast_once) + vcpu->arch.mvendorid =3D reg_val; + else + return -EBUSY; + break; + case KVM_REG_RISCV_CONFIG_REG(marchid): + if (!vcpu->arch.ran_atleast_once) + vcpu->arch.marchid =3D reg_val; + else + return -EBUSY; + break; + case KVM_REG_RISCV_CONFIG_REG(mimpid): + if (!vcpu->arch.ran_atleast_once) + vcpu->arch.mimpid =3D reg_val; + else + return -EBUSY; + break; + default: + return -EINVAL; + } + + return 0; +} + +static int kvm_riscv_vcpu_get_reg_core(struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + struct kvm_cpu_context *cntx =3D &vcpu->arch.guest_context; + unsigned long __user *uaddr =3D + (unsigned long __user *)(unsigned long)reg->addr; + unsigned long reg_num =3D reg->id & ~(KVM_REG_ARCH_MASK | + KVM_REG_SIZE_MASK | + KVM_REG_RISCV_CORE); + unsigned long reg_val; + + if (KVM_REG_SIZE(reg->id) !=3D sizeof(unsigned long)) + return -EINVAL; + if (reg_num >=3D sizeof(struct kvm_riscv_core) / sizeof(unsigned long)) + return -EINVAL; + + if (reg_num =3D=3D KVM_REG_RISCV_CORE_REG(regs.pc)) + reg_val =3D cntx->sepc; + else if (KVM_REG_RISCV_CORE_REG(regs.pc) < reg_num && + reg_num <=3D KVM_REG_RISCV_CORE_REG(regs.t6)) + reg_val =3D ((unsigned long *)cntx)[reg_num]; + else if (reg_num =3D=3D KVM_REG_RISCV_CORE_REG(mode)) + reg_val =3D (cntx->sstatus & SR_SPP) ? 
+ KVM_RISCV_MODE_S : KVM_RISCV_MODE_U; + else + return -EINVAL; + + if (copy_to_user(uaddr, ®_val, KVM_REG_SIZE(reg->id))) + return -EFAULT; + + return 0; +} + +static int kvm_riscv_vcpu_set_reg_core(struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + struct kvm_cpu_context *cntx =3D &vcpu->arch.guest_context; + unsigned long __user *uaddr =3D + (unsigned long __user *)(unsigned long)reg->addr; + unsigned long reg_num =3D reg->id & ~(KVM_REG_ARCH_MASK | + KVM_REG_SIZE_MASK | + KVM_REG_RISCV_CORE); + unsigned long reg_val; + + if (KVM_REG_SIZE(reg->id) !=3D sizeof(unsigned long)) + return -EINVAL; + if (reg_num >=3D sizeof(struct kvm_riscv_core) / sizeof(unsigned long)) + return -EINVAL; + + if (copy_from_user(®_val, uaddr, KVM_REG_SIZE(reg->id))) + return -EFAULT; + + if (reg_num =3D=3D KVM_REG_RISCV_CORE_REG(regs.pc)) + cntx->sepc =3D reg_val; + else if (KVM_REG_RISCV_CORE_REG(regs.pc) < reg_num && + reg_num <=3D KVM_REG_RISCV_CORE_REG(regs.t6)) + ((unsigned long *)cntx)[reg_num] =3D reg_val; + else if (reg_num =3D=3D KVM_REG_RISCV_CORE_REG(mode)) { + if (reg_val =3D=3D KVM_RISCV_MODE_S) + cntx->sstatus |=3D SR_SPP; + else + cntx->sstatus &=3D ~SR_SPP; + } else + return -EINVAL; + + return 0; +} + +static int kvm_riscv_vcpu_general_get_csr(struct kvm_vcpu *vcpu, + unsigned long reg_num, + unsigned long *out_val) +{ + struct kvm_vcpu_csr *csr =3D &vcpu->arch.guest_csr; + + if (reg_num >=3D sizeof(struct kvm_riscv_csr) / sizeof(unsigned long)) + return -EINVAL; + + if (reg_num =3D=3D KVM_REG_RISCV_CSR_REG(sip)) { + kvm_riscv_vcpu_flush_interrupts(vcpu); + *out_val =3D (csr->hvip >> VSIP_TO_HVIP_SHIFT) & VSIP_VALID_MASK; + *out_val |=3D csr->hvip & ~IRQ_LOCAL_MASK; + } else + *out_val =3D ((unsigned long *)csr)[reg_num]; + + return 0; +} + +static int kvm_riscv_vcpu_general_set_csr(struct kvm_vcpu *vcpu, + unsigned long reg_num, + unsigned long reg_val) +{ + struct kvm_vcpu_csr *csr =3D &vcpu->arch.guest_csr; + + if (reg_num >=3D sizeof(struct kvm_riscv_csr) / sizeof(unsigned long)) + return -EINVAL; + + if (reg_num =3D=3D KVM_REG_RISCV_CSR_REG(sip)) { + reg_val &=3D VSIP_VALID_MASK; + reg_val <<=3D VSIP_TO_HVIP_SHIFT; + } + + ((unsigned long *)csr)[reg_num] =3D reg_val; + + if (reg_num =3D=3D KVM_REG_RISCV_CSR_REG(sip)) + WRITE_ONCE(vcpu->arch.irqs_pending_mask[0], 0); + + return 0; +} + +static int kvm_riscv_vcpu_get_reg_csr(struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + int rc; + unsigned long __user *uaddr =3D + (unsigned long __user *)(unsigned long)reg->addr; + unsigned long reg_num =3D reg->id & ~(KVM_REG_ARCH_MASK | + KVM_REG_SIZE_MASK | + KVM_REG_RISCV_CSR); + unsigned long reg_val, reg_subtype; + + if (KVM_REG_SIZE(reg->id) !=3D sizeof(unsigned long)) + return -EINVAL; + + reg_subtype =3D reg_num & KVM_REG_RISCV_SUBTYPE_MASK; + reg_num &=3D ~KVM_REG_RISCV_SUBTYPE_MASK; + switch (reg_subtype) { + case KVM_REG_RISCV_CSR_GENERAL: + rc =3D kvm_riscv_vcpu_general_get_csr(vcpu, reg_num, ®_val); + break; + case KVM_REG_RISCV_CSR_AIA: + rc =3D kvm_riscv_vcpu_aia_get_csr(vcpu, reg_num, ®_val); + break; + default: + rc =3D -EINVAL; + break; + } + if (rc) + return rc; + + if (copy_to_user(uaddr, ®_val, KVM_REG_SIZE(reg->id))) + return -EFAULT; + + return 0; +} + +static int kvm_riscv_vcpu_set_reg_csr(struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + int rc; + unsigned long __user *uaddr =3D + (unsigned long __user *)(unsigned long)reg->addr; + unsigned long reg_num =3D reg->id & ~(KVM_REG_ARCH_MASK | + KVM_REG_SIZE_MASK | + KVM_REG_RISCV_CSR); + unsigned long 
reg_val, reg_subtype; + + if (KVM_REG_SIZE(reg->id) !=3D sizeof(unsigned long)) + return -EINVAL; + + if (copy_from_user(®_val, uaddr, KVM_REG_SIZE(reg->id))) + return -EFAULT; + + reg_subtype =3D reg_num & KVM_REG_RISCV_SUBTYPE_MASK; + reg_num &=3D ~KVM_REG_RISCV_SUBTYPE_MASK; + switch (reg_subtype) { + case KVM_REG_RISCV_CSR_GENERAL: + rc =3D kvm_riscv_vcpu_general_set_csr(vcpu, reg_num, reg_val); + break; + case KVM_REG_RISCV_CSR_AIA: + rc =3D kvm_riscv_vcpu_aia_set_csr(vcpu, reg_num, reg_val); + break; + default: + rc =3D -EINVAL; + break; + } + if (rc) + return rc; + + return 0; +} + +static int kvm_riscv_vcpu_get_reg_isa_ext(struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + unsigned long __user *uaddr =3D + (unsigned long __user *)(unsigned long)reg->addr; + unsigned long reg_num =3D reg->id & ~(KVM_REG_ARCH_MASK | + KVM_REG_SIZE_MASK | + KVM_REG_RISCV_ISA_EXT); + unsigned long reg_val =3D 0; + unsigned long host_isa_ext; + + if (KVM_REG_SIZE(reg->id) !=3D sizeof(unsigned long)) + return -EINVAL; + + if (reg_num >=3D KVM_RISCV_ISA_EXT_MAX || + reg_num >=3D ARRAY_SIZE(kvm_isa_ext_arr)) + return -EINVAL; + + host_isa_ext =3D kvm_isa_ext_arr[reg_num]; + if (__riscv_isa_extension_available(vcpu->arch.isa, host_isa_ext)) + reg_val =3D 1; /* Mark the given extension as available */ + + if (copy_to_user(uaddr, ®_val, KVM_REG_SIZE(reg->id))) + return -EFAULT; + + return 0; +} + +static int kvm_riscv_vcpu_set_reg_isa_ext(struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + unsigned long __user *uaddr =3D + (unsigned long __user *)(unsigned long)reg->addr; + unsigned long reg_num =3D reg->id & ~(KVM_REG_ARCH_MASK | + KVM_REG_SIZE_MASK | + KVM_REG_RISCV_ISA_EXT); + unsigned long reg_val; + unsigned long host_isa_ext; + + if (KVM_REG_SIZE(reg->id) !=3D sizeof(unsigned long)) + return -EINVAL; + + if (reg_num >=3D KVM_RISCV_ISA_EXT_MAX || + reg_num >=3D ARRAY_SIZE(kvm_isa_ext_arr)) + return -EINVAL; + + if (copy_from_user(®_val, uaddr, KVM_REG_SIZE(reg->id))) + return -EFAULT; + + host_isa_ext =3D kvm_isa_ext_arr[reg_num]; + if (!__riscv_isa_extension_available(NULL, host_isa_ext)) + return -EOPNOTSUPP; + + if (!vcpu->arch.ran_atleast_once) { + /* + * All multi-letter extension and a few single letter + * extension can be disabled + */ + if (reg_val =3D=3D 1 && + kvm_riscv_vcpu_isa_enable_allowed(reg_num)) + set_bit(host_isa_ext, vcpu->arch.isa); + else if (!reg_val && + kvm_riscv_vcpu_isa_disable_allowed(reg_num)) + clear_bit(host_isa_ext, vcpu->arch.isa); + else + return -EINVAL; + kvm_riscv_vcpu_fp_reset(vcpu); + } else { + return -EOPNOTSUPP; + } + + return 0; +} + +int kvm_riscv_vcpu_set_reg(struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + switch (reg->id & KVM_REG_RISCV_TYPE_MASK) { + case KVM_REG_RISCV_CONFIG: + return kvm_riscv_vcpu_set_reg_config(vcpu, reg); + case KVM_REG_RISCV_CORE: + return kvm_riscv_vcpu_set_reg_core(vcpu, reg); + case KVM_REG_RISCV_CSR: + return kvm_riscv_vcpu_set_reg_csr(vcpu, reg); + case KVM_REG_RISCV_TIMER: + return kvm_riscv_vcpu_set_reg_timer(vcpu, reg); + case KVM_REG_RISCV_FP_F: + return kvm_riscv_vcpu_set_reg_fp(vcpu, reg, + KVM_REG_RISCV_FP_F); + case KVM_REG_RISCV_FP_D: + return kvm_riscv_vcpu_set_reg_fp(vcpu, reg, + KVM_REG_RISCV_FP_D); + case KVM_REG_RISCV_ISA_EXT: + return kvm_riscv_vcpu_set_reg_isa_ext(vcpu, reg); + case KVM_REG_RISCV_SBI_EXT: + return kvm_riscv_vcpu_set_reg_sbi_ext(vcpu, reg); + case KVM_REG_RISCV_VECTOR: + return kvm_riscv_vcpu_set_reg_vector(vcpu, reg, + KVM_REG_RISCV_VECTOR); + default: + break; + } 
+
+	return -EINVAL;
+}
+
+int kvm_riscv_vcpu_get_reg(struct kvm_vcpu *vcpu,
+			   const struct kvm_one_reg *reg)
+{
+	switch (reg->id & KVM_REG_RISCV_TYPE_MASK) {
+	case KVM_REG_RISCV_CONFIG:
+		return kvm_riscv_vcpu_get_reg_config(vcpu, reg);
+	case KVM_REG_RISCV_CORE:
+		return kvm_riscv_vcpu_get_reg_core(vcpu, reg);
+	case KVM_REG_RISCV_CSR:
+		return kvm_riscv_vcpu_get_reg_csr(vcpu, reg);
+	case KVM_REG_RISCV_TIMER:
+		return kvm_riscv_vcpu_get_reg_timer(vcpu, reg);
+	case KVM_REG_RISCV_FP_F:
+		return kvm_riscv_vcpu_get_reg_fp(vcpu, reg,
+						 KVM_REG_RISCV_FP_F);
+	case KVM_REG_RISCV_FP_D:
+		return kvm_riscv_vcpu_get_reg_fp(vcpu, reg,
+						 KVM_REG_RISCV_FP_D);
+	case KVM_REG_RISCV_ISA_EXT:
+		return kvm_riscv_vcpu_get_reg_isa_ext(vcpu, reg);
+	case KVM_REG_RISCV_SBI_EXT:
+		return kvm_riscv_vcpu_get_reg_sbi_ext(vcpu, reg);
+	case KVM_REG_RISCV_VECTOR:
+		return kvm_riscv_vcpu_get_reg_vector(vcpu, reg,
+						     KVM_REG_RISCV_VECTOR);
+	default:
+		break;
+	}
+
+	return -EINVAL;
+}
-- 
2.34.1
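
For readers unfamiliar with the interface being factored out here: userspace
VMMs reach these handlers through the generic KVM_GET_ONE_REG / KVM_SET_ONE_REG
vcpu ioctls, passing a struct kvm_one_reg whose id encodes the register class
(KVM_REG_RISCV_CONFIG, KVM_REG_RISCV_ISA_EXT, ...) and whose addr points at the
value buffer. The sketch below is illustrative only and not part of this patch;
the helper names are made up, and it assumes an RV64 host, an already-created
VCPU fd, and uapi headers from a kernel that carries this series.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>	/* struct kvm_one_reg, KVM_REG_RISCV_* (via asm/kvm.h) */

/*
 * Read the guest's base ISA bitmap; in-kernel this is routed through
 * kvm_riscv_vcpu_get_reg() to kvm_riscv_vcpu_get_reg_config().
 */
static int riscv_get_isa_config(int vcpu_fd, uint64_t *isa)
{
	struct kvm_one_reg reg = {
		.id   = KVM_REG_RISCV | KVM_REG_SIZE_U64 |
			KVM_REG_RISCV_CONFIG | KVM_REG_RISCV_CONFIG_REG(isa),
		.addr = (uint64_t)(unsigned long)isa,
	};

	return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
}

/*
 * Enable (1) or disable (0) a single ISA extension before the VCPU first
 * runs; handled by kvm_riscv_vcpu_set_reg_isa_ext().
 */
static int riscv_set_isa_ext(int vcpu_fd, uint64_t ext_id, uint64_t enable)
{
	struct kvm_one_reg reg = {
		.id   = KVM_REG_RISCV | KVM_REG_SIZE_U64 |
			KVM_REG_RISCV_ISA_EXT | ext_id,	/* e.g. KVM_RISCV_ISA_EXT_SSTC */
		.addr = (uint64_t)(unsigned long)&enable,
	};

	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}

As the patch's set_reg_isa_ext handler shows, such writes succeed only while
vcpu->arch.ran_atleast_once is false; after the first KVM_RUN they return
-EOPNOTSUPP.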