From: Anup Patel
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alistair Francis, kvm@vger.kernel.org,
    kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/7] RISC-V: KVM: Use G-stage name for hypervisor page table
Date: Wed, 20 Apr 2022 16:54:44 +0530
Message-Id: <20220420112450.155624-2-apatel@ventanamicro.com>
In-Reply-To: <20220420112450.155624-1-apatel@ventanamicro.com>
References: <20220420112450.155624-1-apatel@ventanamicro.com>

The two-stage address translation defined by the RISC-V privileged
specification consists of: VS-stage (guest virtual address to guest
physical address) programmed by the Guest OS, and G-stage (guest
physical address to host physical address) programmed by the
hypervisor.
To align with above terminology, we replace "stage2" with "gstage" and "Stage2" with "G-stage" name everywhere in KVM RISC-V sources. Signed-off-by: Anup Patel Reviewed-by: Atish Patra --- arch/riscv/include/asm/kvm_host.h | 30 ++-- arch/riscv/kvm/main.c | 8 +- arch/riscv/kvm/mmu.c | 222 +++++++++++++++--------------- arch/riscv/kvm/vcpu.c | 10 +- arch/riscv/kvm/vcpu_exit.c | 6 +- arch/riscv/kvm/vm.c | 8 +- arch/riscv/kvm/vmid.c | 18 +-- 7 files changed, 151 insertions(+), 151 deletions(-) diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm= _host.h index 78da839657e5..3e2cbbd7d1c9 100644 --- a/arch/riscv/include/asm/kvm_host.h +++ b/arch/riscv/include/asm/kvm_host.h @@ -54,10 +54,10 @@ struct kvm_vmid { }; =20 struct kvm_arch { - /* stage2 vmid */ + /* G-stage vmid */ struct kvm_vmid vmid; =20 - /* stage2 page table */ + /* G-stage page table */ pgd_t *pgd; phys_addr_t pgd_phys; =20 @@ -210,21 +210,21 @@ void __kvm_riscv_hfence_gvma_vmid(unsigned long vmid); void __kvm_riscv_hfence_gvma_gpa(unsigned long gpa_divby_4); void __kvm_riscv_hfence_gvma_all(void); =20 -int kvm_riscv_stage2_map(struct kvm_vcpu *vcpu, +int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, struct kvm_memory_slot *memslot, gpa_t gpa, unsigned long hva, bool is_write); -int kvm_riscv_stage2_alloc_pgd(struct kvm *kvm); -void kvm_riscv_stage2_free_pgd(struct kvm *kvm); -void kvm_riscv_stage2_update_hgatp(struct kvm_vcpu *vcpu); -void kvm_riscv_stage2_mode_detect(void); -unsigned long kvm_riscv_stage2_mode(void); -int kvm_riscv_stage2_gpa_bits(void); - -void kvm_riscv_stage2_vmid_detect(void); -unsigned long kvm_riscv_stage2_vmid_bits(void); -int kvm_riscv_stage2_vmid_init(struct kvm *kvm); -bool kvm_riscv_stage2_vmid_ver_changed(struct kvm_vmid *vmid); -void kvm_riscv_stage2_vmid_update(struct kvm_vcpu *vcpu); +int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm); +void kvm_riscv_gstage_free_pgd(struct kvm *kvm); +void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu); +void 
kvm_riscv_gstage_mode_detect(void); +unsigned long kvm_riscv_gstage_mode(void); +int kvm_riscv_gstage_gpa_bits(void); + +void kvm_riscv_gstage_vmid_detect(void); +unsigned long kvm_riscv_gstage_vmid_bits(void); +int kvm_riscv_gstage_vmid_init(struct kvm *kvm); +bool kvm_riscv_gstage_vmid_ver_changed(struct kvm_vmid *vmid); +void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu); =20 void __kvm_riscv_unpriv_trap(void); =20 diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c index 2e5ca43c8c49..c374dad82eee 100644 --- a/arch/riscv/kvm/main.c +++ b/arch/riscv/kvm/main.c @@ -89,13 +89,13 @@ int kvm_arch_init(void *opaque) return -ENODEV; } =20 - kvm_riscv_stage2_mode_detect(); + kvm_riscv_gstage_mode_detect(); =20 - kvm_riscv_stage2_vmid_detect(); + kvm_riscv_gstage_vmid_detect(); =20 kvm_info("hypervisor extension available\n"); =20 - switch (kvm_riscv_stage2_mode()) { + switch (kvm_riscv_gstage_mode()) { case HGATP_MODE_SV32X4: str =3D "Sv32x4"; break; @@ -110,7 +110,7 @@ int kvm_arch_init(void *opaque) } kvm_info("using %s G-stage page table format\n", str); =20 - kvm_info("VMID %ld bits available\n", kvm_riscv_stage2_vmid_bits()); + kvm_info("VMID %ld bits available\n", kvm_riscv_gstage_vmid_bits()); =20 return 0; } diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c index f80a34fbf102..dc0520792e31 100644 --- a/arch/riscv/kvm/mmu.c +++ b/arch/riscv/kvm/mmu.c @@ -21,50 +21,50 @@ #include =20 #ifdef CONFIG_64BIT -static unsigned long stage2_mode =3D (HGATP_MODE_SV39X4 << HGATP_MODE_SHIF= T); -static unsigned long stage2_pgd_levels =3D 3; -#define stage2_index_bits 9 +static unsigned long gstage_mode =3D (HGATP_MODE_SV39X4 << HGATP_MODE_SHIF= T); +static unsigned long gstage_pgd_levels =3D 3; +#define gstage_index_bits 9 #else -static unsigned long stage2_mode =3D (HGATP_MODE_SV32X4 << HGATP_MODE_SHIF= T); -static unsigned long stage2_pgd_levels =3D 2; -#define stage2_index_bits 10 +static unsigned long gstage_mode =3D (HGATP_MODE_SV32X4 << 
HGATP_MODE_SHIF= T); +static unsigned long gstage_pgd_levels =3D 2; +#define gstage_index_bits 10 #endif =20 -#define stage2_pgd_xbits 2 -#define stage2_pgd_size (1UL << (HGATP_PAGE_SHIFT + stage2_pgd_xbits)) -#define stage2_gpa_bits (HGATP_PAGE_SHIFT + \ - (stage2_pgd_levels * stage2_index_bits) + \ - stage2_pgd_xbits) -#define stage2_gpa_size ((gpa_t)(1ULL << stage2_gpa_bits)) +#define gstage_pgd_xbits 2 +#define gstage_pgd_size (1UL << (HGATP_PAGE_SHIFT + gstage_pgd_xbits)) +#define gstage_gpa_bits (HGATP_PAGE_SHIFT + \ + (gstage_pgd_levels * gstage_index_bits) + \ + gstage_pgd_xbits) +#define gstage_gpa_size ((gpa_t)(1ULL << gstage_gpa_bits)) =20 -#define stage2_pte_leaf(__ptep) \ +#define gstage_pte_leaf(__ptep) \ (pte_val(*(__ptep)) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)) =20 -static inline unsigned long stage2_pte_index(gpa_t addr, u32 level) +static inline unsigned long gstage_pte_index(gpa_t addr, u32 level) { unsigned long mask; - unsigned long shift =3D HGATP_PAGE_SHIFT + (stage2_index_bits * level); + unsigned long shift =3D HGATP_PAGE_SHIFT + (gstage_index_bits * level); =20 - if (level =3D=3D (stage2_pgd_levels - 1)) - mask =3D (PTRS_PER_PTE * (1UL << stage2_pgd_xbits)) - 1; + if (level =3D=3D (gstage_pgd_levels - 1)) + mask =3D (PTRS_PER_PTE * (1UL << gstage_pgd_xbits)) - 1; else mask =3D PTRS_PER_PTE - 1; =20 return (addr >> shift) & mask; } =20 -static inline unsigned long stage2_pte_page_vaddr(pte_t pte) +static inline unsigned long gstage_pte_page_vaddr(pte_t pte) { return (unsigned long)pfn_to_virt(pte_val(pte) >> _PAGE_PFN_SHIFT); } =20 -static int stage2_page_size_to_level(unsigned long page_size, u32 *out_lev= el) +static int gstage_page_size_to_level(unsigned long page_size, u32 *out_lev= el) { u32 i; unsigned long psz =3D 1UL << 12; =20 - for (i =3D 0; i < stage2_pgd_levels; i++) { - if (page_size =3D=3D (psz << (i * stage2_index_bits))) { + for (i =3D 0; i < gstage_pgd_levels; i++) { + if (page_size =3D=3D (psz << (i * 
gstage_index_bits))) { *out_level =3D i; return 0; } @@ -73,27 +73,27 @@ static int stage2_page_size_to_level(unsigned long page= _size, u32 *out_level) return -EINVAL; } =20 -static int stage2_level_to_page_size(u32 level, unsigned long *out_pgsize) +static int gstage_level_to_page_size(u32 level, unsigned long *out_pgsize) { - if (stage2_pgd_levels < level) + if (gstage_pgd_levels < level) return -EINVAL; =20 - *out_pgsize =3D 1UL << (12 + (level * stage2_index_bits)); + *out_pgsize =3D 1UL << (12 + (level * gstage_index_bits)); =20 return 0; } =20 -static bool stage2_get_leaf_entry(struct kvm *kvm, gpa_t addr, +static bool gstage_get_leaf_entry(struct kvm *kvm, gpa_t addr, pte_t **ptepp, u32 *ptep_level) { pte_t *ptep; - u32 current_level =3D stage2_pgd_levels - 1; + u32 current_level =3D gstage_pgd_levels - 1; =20 *ptep_level =3D current_level; ptep =3D (pte_t *)kvm->arch.pgd; - ptep =3D &ptep[stage2_pte_index(addr, current_level)]; + ptep =3D &ptep[gstage_pte_index(addr, current_level)]; while (ptep && pte_val(*ptep)) { - if (stage2_pte_leaf(ptep)) { + if (gstage_pte_leaf(ptep)) { *ptep_level =3D current_level; *ptepp =3D ptep; return true; @@ -102,8 +102,8 @@ static bool stage2_get_leaf_entry(struct kvm *kvm, gpa_= t addr, if (current_level) { current_level--; *ptep_level =3D current_level; - ptep =3D (pte_t *)stage2_pte_page_vaddr(*ptep); - ptep =3D &ptep[stage2_pte_index(addr, current_level)]; + ptep =3D (pte_t *)gstage_pte_page_vaddr(*ptep); + ptep =3D &ptep[gstage_pte_index(addr, current_level)]; } else { ptep =3D NULL; } @@ -112,12 +112,12 @@ static bool stage2_get_leaf_entry(struct kvm *kvm, gp= a_t addr, return false; } =20 -static void stage2_remote_tlb_flush(struct kvm *kvm, u32 level, gpa_t addr) +static void gstage_remote_tlb_flush(struct kvm *kvm, u32 level, gpa_t addr) { unsigned long size =3D PAGE_SIZE; struct kvm_vmid *vmid =3D &kvm->arch.vmid; =20 - if (stage2_level_to_page_size(level, &size)) + if (gstage_level_to_page_size(level, &size)) 
return; addr &=3D ~(size - 1); =20 @@ -131,19 +131,19 @@ static void stage2_remote_tlb_flush(struct kvm *kvm, = u32 level, gpa_t addr) preempt_enable(); } =20 -static int stage2_set_pte(struct kvm *kvm, u32 level, +static int gstage_set_pte(struct kvm *kvm, u32 level, struct kvm_mmu_memory_cache *pcache, gpa_t addr, const pte_t *new_pte) { - u32 current_level =3D stage2_pgd_levels - 1; + u32 current_level =3D gstage_pgd_levels - 1; pte_t *next_ptep =3D (pte_t *)kvm->arch.pgd; - pte_t *ptep =3D &next_ptep[stage2_pte_index(addr, current_level)]; + pte_t *ptep =3D &next_ptep[gstage_pte_index(addr, current_level)]; =20 if (current_level < level) return -EINVAL; =20 while (current_level !=3D level) { - if (stage2_pte_leaf(ptep)) + if (gstage_pte_leaf(ptep)) return -EEXIST; =20 if (!pte_val(*ptep)) { @@ -155,23 +155,23 @@ static int stage2_set_pte(struct kvm *kvm, u32 level, *ptep =3D pfn_pte(PFN_DOWN(__pa(next_ptep)), __pgprot(_PAGE_TABLE)); } else { - if (stage2_pte_leaf(ptep)) + if (gstage_pte_leaf(ptep)) return -EEXIST; - next_ptep =3D (pte_t *)stage2_pte_page_vaddr(*ptep); + next_ptep =3D (pte_t *)gstage_pte_page_vaddr(*ptep); } =20 current_level--; - ptep =3D &next_ptep[stage2_pte_index(addr, current_level)]; + ptep =3D &next_ptep[gstage_pte_index(addr, current_level)]; } =20 *ptep =3D *new_pte; - if (stage2_pte_leaf(ptep)) - stage2_remote_tlb_flush(kvm, current_level, addr); + if (gstage_pte_leaf(ptep)) + gstage_remote_tlb_flush(kvm, current_level, addr); =20 return 0; } =20 -static int stage2_map_page(struct kvm *kvm, +static int gstage_map_page(struct kvm *kvm, struct kvm_mmu_memory_cache *pcache, gpa_t gpa, phys_addr_t hpa, unsigned long page_size, @@ -182,7 +182,7 @@ static int stage2_map_page(struct kvm *kvm, pte_t new_pte; pgprot_t prot; =20 - ret =3D stage2_page_size_to_level(page_size, &level); + ret =3D gstage_page_size_to_level(page_size, &level); if (ret) return ret; =20 @@ -193,9 +193,9 @@ static int stage2_map_page(struct kvm *kvm, * PTE so that 
software can update these bits. * * We support both options mentioned above. To achieve this, we - * always set 'A' and 'D' PTE bits at time of creating stage2 + * always set 'A' and 'D' PTE bits at time of creating G-stage * mapping. To support KVM dirty page logging with both options - * mentioned above, we will write-protect stage2 PTEs to track + * mentioned above, we will write-protect G-stage PTEs to track * dirty pages. */ =20 @@ -213,24 +213,24 @@ static int stage2_map_page(struct kvm *kvm, new_pte =3D pfn_pte(PFN_DOWN(hpa), prot); new_pte =3D pte_mkdirty(new_pte); =20 - return stage2_set_pte(kvm, level, pcache, gpa, &new_pte); + return gstage_set_pte(kvm, level, pcache, gpa, &new_pte); } =20 -enum stage2_op { - STAGE2_OP_NOP =3D 0, /* Nothing */ - STAGE2_OP_CLEAR, /* Clear/Unmap */ - STAGE2_OP_WP, /* Write-protect */ +enum gstage_op { + GSTAGE_OP_NOP =3D 0, /* Nothing */ + GSTAGE_OP_CLEAR, /* Clear/Unmap */ + GSTAGE_OP_WP, /* Write-protect */ }; =20 -static void stage2_op_pte(struct kvm *kvm, gpa_t addr, - pte_t *ptep, u32 ptep_level, enum stage2_op op) +static void gstage_op_pte(struct kvm *kvm, gpa_t addr, + pte_t *ptep, u32 ptep_level, enum gstage_op op) { int i, ret; pte_t *next_ptep; u32 next_ptep_level; unsigned long next_page_size, page_size; =20 - ret =3D stage2_level_to_page_size(ptep_level, &page_size); + ret =3D gstage_level_to_page_size(ptep_level, &page_size); if (ret) return; =20 @@ -239,31 +239,31 @@ static void stage2_op_pte(struct kvm *kvm, gpa_t addr, if (!pte_val(*ptep)) return; =20 - if (ptep_level && !stage2_pte_leaf(ptep)) { - next_ptep =3D (pte_t *)stage2_pte_page_vaddr(*ptep); + if (ptep_level && !gstage_pte_leaf(ptep)) { + next_ptep =3D (pte_t *)gstage_pte_page_vaddr(*ptep); next_ptep_level =3D ptep_level - 1; - ret =3D stage2_level_to_page_size(next_ptep_level, + ret =3D gstage_level_to_page_size(next_ptep_level, &next_page_size); if (ret) return; =20 - if (op =3D=3D STAGE2_OP_CLEAR) + if (op =3D=3D GSTAGE_OP_CLEAR) set_pte(ptep, 
__pte(0)); for (i =3D 0; i < PTRS_PER_PTE; i++) - stage2_op_pte(kvm, addr + i * next_page_size, + gstage_op_pte(kvm, addr + i * next_page_size, &next_ptep[i], next_ptep_level, op); - if (op =3D=3D STAGE2_OP_CLEAR) + if (op =3D=3D GSTAGE_OP_CLEAR) put_page(virt_to_page(next_ptep)); } else { - if (op =3D=3D STAGE2_OP_CLEAR) + if (op =3D=3D GSTAGE_OP_CLEAR) set_pte(ptep, __pte(0)); - else if (op =3D=3D STAGE2_OP_WP) + else if (op =3D=3D GSTAGE_OP_WP) set_pte(ptep, __pte(pte_val(*ptep) & ~_PAGE_WRITE)); - stage2_remote_tlb_flush(kvm, ptep_level, addr); + gstage_remote_tlb_flush(kvm, ptep_level, addr); } } =20 -static void stage2_unmap_range(struct kvm *kvm, gpa_t start, +static void gstage_unmap_range(struct kvm *kvm, gpa_t start, gpa_t size, bool may_block) { int ret; @@ -274,9 +274,9 @@ static void stage2_unmap_range(struct kvm *kvm, gpa_t s= tart, gpa_t addr =3D start, end =3D start + size; =20 while (addr < end) { - found_leaf =3D stage2_get_leaf_entry(kvm, addr, + found_leaf =3D gstage_get_leaf_entry(kvm, addr, &ptep, &ptep_level); - ret =3D stage2_level_to_page_size(ptep_level, &page_size); + ret =3D gstage_level_to_page_size(ptep_level, &page_size); if (ret) break; =20 @@ -284,8 +284,8 @@ static void stage2_unmap_range(struct kvm *kvm, gpa_t s= tart, goto next; =20 if (!(addr & (page_size - 1)) && ((end - addr) >=3D page_size)) - stage2_op_pte(kvm, addr, ptep, - ptep_level, STAGE2_OP_CLEAR); + gstage_op_pte(kvm, addr, ptep, + ptep_level, GSTAGE_OP_CLEAR); =20 next: addr +=3D page_size; @@ -299,7 +299,7 @@ static void stage2_unmap_range(struct kvm *kvm, gpa_t s= tart, } } =20 -static void stage2_wp_range(struct kvm *kvm, gpa_t start, gpa_t end) +static void gstage_wp_range(struct kvm *kvm, gpa_t start, gpa_t end) { int ret; pte_t *ptep; @@ -309,9 +309,9 @@ static void stage2_wp_range(struct kvm *kvm, gpa_t star= t, gpa_t end) unsigned long page_size; =20 while (addr < end) { - found_leaf =3D stage2_get_leaf_entry(kvm, addr, + found_leaf =3D 
gstage_get_leaf_entry(kvm, addr, &ptep, &ptep_level); - ret =3D stage2_level_to_page_size(ptep_level, &page_size); + ret =3D gstage_level_to_page_size(ptep_level, &page_size); if (ret) break; =20 @@ -319,15 +319,15 @@ static void stage2_wp_range(struct kvm *kvm, gpa_t st= art, gpa_t end) goto next; =20 if (!(addr & (page_size - 1)) && ((end - addr) >=3D page_size)) - stage2_op_pte(kvm, addr, ptep, - ptep_level, STAGE2_OP_WP); + gstage_op_pte(kvm, addr, ptep, + ptep_level, GSTAGE_OP_WP); =20 next: addr +=3D page_size; } } =20 -static void stage2_wp_memory_region(struct kvm *kvm, int slot) +static void gstage_wp_memory_region(struct kvm *kvm, int slot) { struct kvm_memslots *slots =3D kvm_memslots(kvm); struct kvm_memory_slot *memslot =3D id_to_memslot(slots, slot); @@ -335,12 +335,12 @@ static void stage2_wp_memory_region(struct kvm *kvm, = int slot) phys_addr_t end =3D (memslot->base_gfn + memslot->npages) << PAGE_SHIFT; =20 spin_lock(&kvm->mmu_lock); - stage2_wp_range(kvm, start, end); + gstage_wp_range(kvm, start, end); spin_unlock(&kvm->mmu_lock); kvm_flush_remote_tlbs(kvm); } =20 -static int stage2_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa, +static int gstage_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa, unsigned long size, bool writable) { pte_t pte; @@ -361,12 +361,12 @@ static int stage2_ioremap(struct kvm *kvm, gpa_t gpa,= phys_addr_t hpa, if (!writable) pte =3D pte_wrprotect(pte); =20 - ret =3D kvm_mmu_topup_memory_cache(&pcache, stage2_pgd_levels); + ret =3D kvm_mmu_topup_memory_cache(&pcache, gstage_pgd_levels); if (ret) goto out; =20 spin_lock(&kvm->mmu_lock); - ret =3D stage2_set_pte(kvm, 0, &pcache, addr, &pte); + ret =3D gstage_set_pte(kvm, 0, &pcache, addr, &pte); spin_unlock(&kvm->mmu_lock); if (ret) goto out; @@ -388,7 +388,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm= *kvm, phys_addr_t start =3D (base_gfn + __ffs(mask)) << PAGE_SHIFT; phys_addr_t end =3D (base_gfn + __fls(mask) + 1) << PAGE_SHIFT; =20 - 
stage2_wp_range(kvm, start, end); + gstage_wp_range(kvm, start, end); } =20 void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *mems= lot) @@ -411,7 +411,7 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) =20 void kvm_arch_flush_shadow_all(struct kvm *kvm) { - kvm_riscv_stage2_free_pgd(kvm); + kvm_riscv_gstage_free_pgd(kvm); } =20 void kvm_arch_flush_shadow_memslot(struct kvm *kvm, @@ -421,7 +421,7 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm, phys_addr_t size =3D slot->npages << PAGE_SHIFT; =20 spin_lock(&kvm->mmu_lock); - stage2_unmap_range(kvm, gpa, size, false); + gstage_unmap_range(kvm, gpa, size, false); spin_unlock(&kvm->mmu_lock); } =20 @@ -436,7 +436,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm, * the memory slot is write protected. */ if (change !=3D KVM_MR_DELETE && new->flags & KVM_MEM_LOG_DIRTY_PAGES) - stage2_wp_memory_region(kvm, new->id); + gstage_wp_memory_region(kvm, new->id); } =20 int kvm_arch_prepare_memory_region(struct kvm *kvm, @@ -458,7 +458,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, * space addressable by the KVM guest GPA space. 
*/ if ((new->base_gfn + new->npages) >=3D - (stage2_gpa_size >> PAGE_SHIFT)) + (gstage_gpa_size >> PAGE_SHIFT)) return -EFAULT; =20 hva =3D new->userspace_addr; @@ -514,7 +514,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, goto out; } =20 - ret =3D stage2_ioremap(kvm, gpa, pa, + ret =3D gstage_ioremap(kvm, gpa, pa, vm_end - vm_start, writable); if (ret) break; @@ -527,7 +527,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, =20 spin_lock(&kvm->mmu_lock); if (ret) - stage2_unmap_range(kvm, base_gpa, size, false); + gstage_unmap_range(kvm, base_gpa, size, false); spin_unlock(&kvm->mmu_lock); =20 out: @@ -540,7 +540,7 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gf= n_range *range) if (!kvm->arch.pgd) return false; =20 - stage2_unmap_range(kvm, range->start << PAGE_SHIFT, + gstage_unmap_range(kvm, range->start << PAGE_SHIFT, (range->end - range->start) << PAGE_SHIFT, range->may_block); return false; @@ -556,10 +556,10 @@ bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn= _range *range) =20 WARN_ON(range->end - range->start !=3D 1); =20 - ret =3D stage2_map_page(kvm, NULL, range->start << PAGE_SHIFT, + ret =3D gstage_map_page(kvm, NULL, range->start << PAGE_SHIFT, __pfn_to_phys(pfn), PAGE_SIZE, true, true); if (ret) { - kvm_debug("Failed to map stage2 page (error %d)\n", ret); + kvm_debug("Failed to map G-stage page (error %d)\n", ret); return true; } =20 @@ -577,7 +577,7 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range = *range) =20 WARN_ON(size !=3D PAGE_SIZE && size !=3D PMD_SIZE && size !=3D PGDIR_SIZE= ); =20 - if (!stage2_get_leaf_entry(kvm, range->start << PAGE_SHIFT, + if (!gstage_get_leaf_entry(kvm, range->start << PAGE_SHIFT, &ptep, &ptep_level)) return false; =20 @@ -595,14 +595,14 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn= _range *range) =20 WARN_ON(size !=3D PAGE_SIZE && size !=3D PMD_SIZE && size !=3D PGDIR_SIZE= ); =20 - if (!stage2_get_leaf_entry(kvm, range->start << PAGE_SHIFT, + if 
(!gstage_get_leaf_entry(kvm, range->start << PAGE_SHIFT, &ptep, &ptep_level)) return false; =20 return pte_young(*ptep); } =20 -int kvm_riscv_stage2_map(struct kvm_vcpu *vcpu, +int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, struct kvm_memory_slot *memslot, gpa_t gpa, unsigned long hva, bool is_write) { @@ -648,9 +648,9 @@ int kvm_riscv_stage2_map(struct kvm_vcpu *vcpu, } =20 /* We need minimum second+third level pages */ - ret =3D kvm_mmu_topup_memory_cache(pcache, stage2_pgd_levels); + ret =3D kvm_mmu_topup_memory_cache(pcache, gstage_pgd_levels); if (ret) { - kvm_err("Failed to topup stage2 cache\n"); + kvm_err("Failed to topup G-stage cache\n"); return ret; } =20 @@ -680,15 +680,15 @@ int kvm_riscv_stage2_map(struct kvm_vcpu *vcpu, if (writeable) { kvm_set_pfn_dirty(hfn); mark_page_dirty(kvm, gfn); - ret =3D stage2_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT, + ret =3D gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT, vma_pagesize, false, true); } else { - ret =3D stage2_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT, + ret =3D gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT, vma_pagesize, true, true); } =20 if (ret) - kvm_err("Failed to map in stage2\n"); + kvm_err("Failed to map in G-stage\n"); =20 out_unlock: spin_unlock(&kvm->mmu_lock); @@ -697,7 +697,7 @@ int kvm_riscv_stage2_map(struct kvm_vcpu *vcpu, return ret; } =20 -int kvm_riscv_stage2_alloc_pgd(struct kvm *kvm) +int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm) { struct page *pgd_page; =20 @@ -707,7 +707,7 @@ int kvm_riscv_stage2_alloc_pgd(struct kvm *kvm) } =20 pgd_page =3D alloc_pages(GFP_KERNEL | __GFP_ZERO, - get_order(stage2_pgd_size)); + get_order(gstage_pgd_size)); if (!pgd_page) return -ENOMEM; kvm->arch.pgd =3D page_to_virt(pgd_page); @@ -716,13 +716,13 @@ int kvm_riscv_stage2_alloc_pgd(struct kvm *kvm) return 0; } =20 -void kvm_riscv_stage2_free_pgd(struct kvm *kvm) +void kvm_riscv_gstage_free_pgd(struct kvm *kvm) { void *pgd =3D NULL; =20 spin_lock(&kvm->mmu_lock); if (kvm->arch.pgd) 
{ - stage2_unmap_range(kvm, 0UL, stage2_gpa_size, false); + gstage_unmap_range(kvm, 0UL, gstage_gpa_size, false); pgd =3D READ_ONCE(kvm->arch.pgd); kvm->arch.pgd =3D NULL; kvm->arch.pgd_phys =3D 0; @@ -730,12 +730,12 @@ void kvm_riscv_stage2_free_pgd(struct kvm *kvm) spin_unlock(&kvm->mmu_lock); =20 if (pgd) - free_pages((unsigned long)pgd, get_order(stage2_pgd_size)); + free_pages((unsigned long)pgd, get_order(gstage_pgd_size)); } =20 -void kvm_riscv_stage2_update_hgatp(struct kvm_vcpu *vcpu) +void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu) { - unsigned long hgatp =3D stage2_mode; + unsigned long hgatp =3D gstage_mode; struct kvm_arch *k =3D &vcpu->kvm->arch; =20 hgatp |=3D (READ_ONCE(k->vmid.vmid) << HGATP_VMID_SHIFT) & @@ -744,18 +744,18 @@ void kvm_riscv_stage2_update_hgatp(struct kvm_vcpu *v= cpu) =20 csr_write(CSR_HGATP, hgatp); =20 - if (!kvm_riscv_stage2_vmid_bits()) + if (!kvm_riscv_gstage_vmid_bits()) __kvm_riscv_hfence_gvma_all(); } =20 -void kvm_riscv_stage2_mode_detect(void) +void kvm_riscv_gstage_mode_detect(void) { #ifdef CONFIG_64BIT - /* Try Sv48x4 stage2 mode */ + /* Try Sv48x4 G-stage mode */ csr_write(CSR_HGATP, HGATP_MODE_SV48X4 << HGATP_MODE_SHIFT); if ((csr_read(CSR_HGATP) >> HGATP_MODE_SHIFT) =3D=3D HGATP_MODE_SV48X4) { - stage2_mode =3D (HGATP_MODE_SV48X4 << HGATP_MODE_SHIFT); - stage2_pgd_levels =3D 4; + gstage_mode =3D (HGATP_MODE_SV48X4 << HGATP_MODE_SHIFT); + gstage_pgd_levels =3D 4; } csr_write(CSR_HGATP, 0); =20 @@ -763,12 +763,12 @@ void kvm_riscv_stage2_mode_detect(void) #endif } =20 -unsigned long kvm_riscv_stage2_mode(void) +unsigned long kvm_riscv_gstage_mode(void) { - return stage2_mode >> HGATP_MODE_SHIFT; + return gstage_mode >> HGATP_MODE_SHIFT; } =20 -int kvm_riscv_stage2_gpa_bits(void) +int kvm_riscv_gstage_gpa_bits(void) { - return stage2_gpa_bits; + return gstage_gpa_bits; } diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index aad430668bb4..e87af6480dfd 100644 --- a/arch/riscv/kvm/vcpu.c +++ 
b/arch/riscv/kvm/vcpu.c @@ -137,7 +137,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) /* Cleanup VCPU timer */ kvm_riscv_vcpu_timer_deinit(vcpu); =20 - /* Free unused pages pre-allocated for Stage2 page table mappings */ + /* Free unused pages pre-allocated for G-stage page table mappings */ kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache); } =20 @@ -635,7 +635,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) csr_write(CSR_HVIP, csr->hvip); csr_write(CSR_VSATP, csr->vsatp); =20 - kvm_riscv_stage2_update_hgatp(vcpu); + kvm_riscv_gstage_update_hgatp(vcpu); =20 kvm_riscv_vcpu_timer_restore(vcpu); =20 @@ -690,7 +690,7 @@ static void kvm_riscv_check_vcpu_requests(struct kvm_vc= pu *vcpu) kvm_riscv_reset_vcpu(vcpu); =20 if (kvm_check_request(KVM_REQ_UPDATE_HGATP, vcpu)) - kvm_riscv_stage2_update_hgatp(vcpu); + kvm_riscv_gstage_update_hgatp(vcpu); =20 if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu)) __kvm_riscv_hfence_gvma_all(); @@ -762,7 +762,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) /* Check conditions before entering the guest */ cond_resched(); =20 - kvm_riscv_stage2_vmid_update(vcpu); + kvm_riscv_gstage_vmid_update(vcpu); =20 kvm_riscv_check_vcpu_requests(vcpu); =20 @@ -800,7 +800,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) kvm_riscv_update_hvip(vcpu); =20 if (ret <=3D 0 || - kvm_riscv_stage2_vmid_ver_changed(&vcpu->kvm->arch.vmid) || + kvm_riscv_gstage_vmid_ver_changed(&vcpu->kvm->arch.vmid) || kvm_request_pending(vcpu)) { vcpu->mode =3D OUTSIDE_GUEST_MODE; local_irq_enable(); diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c index aa8af129e4bb..79772c32d881 100644 --- a/arch/riscv/kvm/vcpu_exit.c +++ b/arch/riscv/kvm/vcpu_exit.c @@ -412,7 +412,7 @@ static int emulate_store(struct kvm_vcpu *vcpu, struct = kvm_run *run, return 0; } =20 -static int stage2_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run, +static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run, struct 
kvm_cpu_trap *trap) { struct kvm_memory_slot *memslot; @@ -440,7 +440,7 @@ static int stage2_page_fault(struct kvm_vcpu *vcpu, str= uct kvm_run *run, }; } =20 - ret =3D kvm_riscv_stage2_map(vcpu, memslot, fault_addr, hva, + ret =3D kvm_riscv_gstage_map(vcpu, memslot, fault_addr, hva, (trap->scause =3D=3D EXC_STORE_GUEST_PAGE_FAULT) ? true : false); if (ret < 0) return ret; @@ -686,7 +686,7 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct k= vm_run *run, case EXC_LOAD_GUEST_PAGE_FAULT: case EXC_STORE_GUEST_PAGE_FAULT: if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV) - ret =3D stage2_page_fault(vcpu, run, trap); + ret =3D gstage_page_fault(vcpu, run, trap); break; case EXC_SUPERVISOR_SYSCALL: if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV) diff --git a/arch/riscv/kvm/vm.c b/arch/riscv/kvm/vm.c index c768f75279ef..945a2bf5e3f6 100644 --- a/arch/riscv/kvm/vm.c +++ b/arch/riscv/kvm/vm.c @@ -31,13 +31,13 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long typ= e) { int r; =20 - r =3D kvm_riscv_stage2_alloc_pgd(kvm); + r =3D kvm_riscv_gstage_alloc_pgd(kvm); if (r) return r; =20 - r =3D kvm_riscv_stage2_vmid_init(kvm); + r =3D kvm_riscv_gstage_vmid_init(kvm); if (r) { - kvm_riscv_stage2_free_pgd(kvm); + kvm_riscv_gstage_free_pgd(kvm); return r; } =20 @@ -75,7 +75,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ex= t) r =3D KVM_USER_MEM_SLOTS; break; case KVM_CAP_VM_GPA_BITS: - r =3D kvm_riscv_stage2_gpa_bits(); + r =3D kvm_riscv_gstage_gpa_bits(); break; default: r =3D 0; diff --git a/arch/riscv/kvm/vmid.c b/arch/riscv/kvm/vmid.c index 2fa4f7b1813d..01fdc342ad76 100644 --- a/arch/riscv/kvm/vmid.c +++ b/arch/riscv/kvm/vmid.c @@ -20,7 +20,7 @@ static unsigned long vmid_next; static unsigned long vmid_bits; static DEFINE_SPINLOCK(vmid_lock); =20 -void kvm_riscv_stage2_vmid_detect(void) +void kvm_riscv_gstage_vmid_detect(void) { unsigned long old; =20 @@ -40,12 +40,12 @@ void kvm_riscv_stage2_vmid_detect(void) vmid_bits =3D 0; } =20 -unsigned long 
kvm_riscv_stage2_vmid_bits(void) +unsigned long kvm_riscv_gstage_vmid_bits(void) { return vmid_bits; } =20 -int kvm_riscv_stage2_vmid_init(struct kvm *kvm) +int kvm_riscv_gstage_vmid_init(struct kvm *kvm) { /* Mark the initial VMID and VMID version invalid */ kvm->arch.vmid.vmid_version =3D 0; @@ -54,7 +54,7 @@ int kvm_riscv_stage2_vmid_init(struct kvm *kvm) return 0; } =20 -bool kvm_riscv_stage2_vmid_ver_changed(struct kvm_vmid *vmid) +bool kvm_riscv_gstage_vmid_ver_changed(struct kvm_vmid *vmid) { if (!vmid_bits) return false; @@ -63,13 +63,13 @@ bool kvm_riscv_stage2_vmid_ver_changed(struct kvm_vmid = *vmid) READ_ONCE(vmid_version)); } =20 -void kvm_riscv_stage2_vmid_update(struct kvm_vcpu *vcpu) +void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu) { unsigned long i; struct kvm_vcpu *v; struct kvm_vmid *vmid =3D &vcpu->kvm->arch.vmid; =20 - if (!kvm_riscv_stage2_vmid_ver_changed(vmid)) + if (!kvm_riscv_gstage_vmid_ver_changed(vmid)) return; =20 spin_lock(&vmid_lock); @@ -78,7 +78,7 @@ void kvm_riscv_stage2_vmid_update(struct kvm_vcpu *vcpu) * We need to re-check the vmid_version here to ensure that if * another vcpu already allocated a valid vmid for this vm. */ - if (!kvm_riscv_stage2_vmid_ver_changed(vmid)) { + if (!kvm_riscv_gstage_vmid_ver_changed(vmid)) { spin_unlock(&vmid_lock); return; } @@ -96,7 +96,7 @@ void kvm_riscv_stage2_vmid_update(struct kvm_vcpu *vcpu) * instances is invalid and we have force VMID re-assignement * for all Guest instances. The Guest instances that were not * running will automatically pick-up new VMIDs because will - * call kvm_riscv_stage2_vmid_update() whenever they enter + * call kvm_riscv_gstage_vmid_update() whenever they enter * in-kernel run loop. For Guest instances that are already * running, we force VM exits on all host CPUs using IPI and * flush all Guest TLBs. 
@@ -112,7 +112,7 @@ void kvm_riscv_stage2_vmid_update(struct kvm_vcpu *vcpu)

	spin_unlock(&vmid_lock);

-	/* Request stage2 page table update for all VCPUs */
+	/* Request G-stage page table update for all VCPUs */
	kvm_for_each_vcpu(i, v, vcpu->kvm)
		kvm_make_request(KVM_REQ_UPDATE_HGATP, v);
}
-- 
2.25.1

From: Anup Patel
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alistair Francis, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH v2 2/7] RISC-V: KVM: Add Sv57x4 mode support for G-stage
Date: Wed, 20 Apr 2022 16:54:45 +0530
Message-Id: <20220420112450.155624-3-apatel@ventanamicro.com>
In-Reply-To: <20220420112450.155624-1-apatel@ventanamicro.com>
References: <20220420112450.155624-1-apatel@ventanamicro.com>

Latest QEMU supports G-stage Sv57x4 mode so this patch extends KVM RISC-V G-stage
handling to detect and use Sv57x4 mode when available.

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/include/asm/csr.h |  1 +
 arch/riscv/kvm/main.c        |  3 +++
 arch/riscv/kvm/mmu.c         | 11 ++++++++++-
 3 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
index e935f27b10fd..cc40521e438b 100644
--- a/arch/riscv/include/asm/csr.h
+++ b/arch/riscv/include/asm/csr.h
@@ -117,6 +117,7 @@
 #define HGATP_MODE_SV32X4	_AC(1, UL)
 #define HGATP_MODE_SV39X4	_AC(8, UL)
 #define HGATP_MODE_SV48X4	_AC(9, UL)
+#define HGATP_MODE_SV57X4	_AC(10, UL)

 #define HGATP32_MODE_SHIFT	31
 #define HGATP32_VMID_SHIFT	22
diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
index c374dad82eee..1549205fe5fe 100644
--- a/arch/riscv/kvm/main.c
+++ b/arch/riscv/kvm/main.c
@@ -105,6 +105,9 @@ int kvm_arch_init(void *opaque)
	case HGATP_MODE_SV48X4:
		str = "Sv48x4";
		break;
+	case HGATP_MODE_SV57X4:
+		str = "Sv57x4";
+		break;
	default:
		return -ENODEV;
	}
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index dc0520792e31..8823eb32dcde 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -751,14 +751,23 @@ void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu)
 void kvm_riscv_gstage_mode_detect(void)
 {
 #ifdef CONFIG_64BIT
+	/* Try Sv57x4 G-stage mode */
+	csr_write(CSR_HGATP, HGATP_MODE_SV57X4 << HGATP_MODE_SHIFT);
+	if ((csr_read(CSR_HGATP) >> HGATP_MODE_SHIFT) == HGATP_MODE_SV57X4) {
+		gstage_mode = (HGATP_MODE_SV57X4 << HGATP_MODE_SHIFT);
+		gstage_pgd_levels = 5;
+		goto skip_sv48x4_test;
+	}
+
	/* Try Sv48x4 G-stage mode */
	csr_write(CSR_HGATP, HGATP_MODE_SV48X4 << HGATP_MODE_SHIFT);
	if ((csr_read(CSR_HGATP) >> HGATP_MODE_SHIFT) == HGATP_MODE_SV48X4) {
		gstage_mode = (HGATP_MODE_SV48X4 << HGATP_MODE_SHIFT);
		gstage_pgd_levels = 4;
	}
-	csr_write(CSR_HGATP, 0);
+skip_sv48x4_test:

+	csr_write(CSR_HGATP, 0);
	__kvm_riscv_hfence_gvma_all();
 #endif
 }
-- 
2.25.1

From: Anup Patel
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alistair Francis, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH v2 3/7] RISC-V: KVM: Treat SBI HFENCE calls as NOPs
Date: Wed, 20 Apr 2022 16:54:46 +0530
Message-Id: <20220420112450.155624-4-apatel@ventanamicro.com>
In-Reply-To: <20220420112450.155624-1-apatel@ventanamicro.com>
References: <20220420112450.155624-1-apatel@ventanamicro.com>

We should treat SBI HFENCE calls as NOPs until nested virtualization
is supported by KVM RISC-V. This will help us test booting a hypervisor
under KVM RISC-V.
Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/kvm/vcpu_sbi_replace.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/kvm/vcpu_sbi_replace.c b/arch/riscv/kvm/vcpu_sbi_replace.c
index 0f217365c287..3c1dcd38358e 100644
--- a/arch/riscv/kvm/vcpu_sbi_replace.c
+++ b/arch/riscv/kvm/vcpu_sbi_replace.c
@@ -117,7 +117,11 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
	case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA_VMID:
	case SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA:
	case SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA_ASID:
-		/* TODO: implement for nested hypervisor case */
+		/*
+		 * Until nested virtualization is implemented, the
+		 * SBI HFENCE calls should be treated as NOPs
+		 */
+		break;
	default:
		ret = -EOPNOTSUPP;
	}
-- 
2.25.1

From: Anup Patel
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alistair Francis, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH v2 4/7] RISC-V: KVM: Introduce range based local HFENCE functions
Date: Wed, 20 Apr 2022 16:54:47 +0530
Message-Id: <20220420112450.155624-5-apatel@ventanamicro.com>
In-Reply-To: <20220420112450.155624-1-apatel@ventanamicro.com>
References: <20220420112450.155624-1-apatel@ventanamicro.com>

The various __kvm_riscv_hfence_xyz() functions implemented in kvm/tlb.S
are equivalent to the corresponding HFENCE.GVMA instructions, and we have
no range-based local HFENCE functions. This patch provides a complete set
of local HFENCE functions that support range-based TLB invalidation and
include HFENCE.VVMA-based variants.

This is also a preparatory patch for the upcoming Svinval support in
KVM RISC-V.

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/include/asm/kvm_host.h |  25 +++-
 arch/riscv/kvm/mmu.c              |   4 +-
 arch/riscv/kvm/tlb.S              |  74 -----------
 arch/riscv/kvm/tlb.c              | 213 ++++++++++++++++++++++++++++++
 arch/riscv/kvm/vcpu.c             |   2 +-
 arch/riscv/kvm/vmid.c             |   2 +-
 6 files changed, 237 insertions(+), 83 deletions(-)
 delete mode 100644 arch/riscv/kvm/tlb.S
 create mode 100644 arch/riscv/kvm/tlb.c

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 3e2cbbd7d1c9..806f74dc0bfc 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -204,11 +204,26 @@ static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}

 #define KVM_ARCH_WANT_MMU_NOTIFIER

-void __kvm_riscv_hfence_gvma_vmid_gpa(unsigned long gpa_divby_4,
-				      unsigned long vmid);
-void __kvm_riscv_hfence_gvma_vmid(unsigned long vmid);
-void __kvm_riscv_hfence_gvma_gpa(unsigned long gpa_divby_4);
-void __kvm_riscv_hfence_gvma_all(void);
+#define KVM_RISCV_GSTAGE_TLB_MIN_ORDER		12
+
+void kvm_riscv_local_hfence_gvma_vmid_gpa(unsigned long vmid,
+					  gpa_t gpa, gpa_t gpsz,
+					  unsigned long order);
+void
kvm_riscv_local_hfence_gvma_vmid_all(unsigned long vmid);
+void kvm_riscv_local_hfence_gvma_gpa(gpa_t gpa, gpa_t gpsz,
+				     unsigned long order);
+void kvm_riscv_local_hfence_gvma_all(void);
+void kvm_riscv_local_hfence_vvma_asid_gva(unsigned long vmid,
+					  unsigned long asid,
+					  unsigned long gva,
+					  unsigned long gvsz,
+					  unsigned long order);
+void kvm_riscv_local_hfence_vvma_asid_all(unsigned long vmid,
+					  unsigned long asid);
+void kvm_riscv_local_hfence_vvma_gva(unsigned long vmid,
+				     unsigned long gva, unsigned long gvsz,
+				     unsigned long order);
+void kvm_riscv_local_hfence_vvma_all(unsigned long vmid);

 int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
			  struct kvm_memory_slot *memslot,
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 8823eb32dcde..1e07603c905b 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -745,7 +745,7 @@ void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu)
	csr_write(CSR_HGATP, hgatp);

	if (!kvm_riscv_gstage_vmid_bits())
-		__kvm_riscv_hfence_gvma_all();
+		kvm_riscv_local_hfence_gvma_all();
 }

 void kvm_riscv_gstage_mode_detect(void)
@@ -768,7 +768,7 @@ void kvm_riscv_gstage_mode_detect(void)
 skip_sv48x4_test:

	csr_write(CSR_HGATP, 0);
-	__kvm_riscv_hfence_gvma_all();
+	kvm_riscv_local_hfence_gvma_all();
 #endif
 }

diff --git a/arch/riscv/kvm/tlb.S b/arch/riscv/kvm/tlb.S
deleted file mode 100644
index 899f75d60bad..000000000000
--- a/arch/riscv/kvm/tlb.S
+++ /dev/null
@@ -1,74 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Copyright (C) 2019 Western Digital Corporation or its affiliates.
- *
- * Authors:
- *	Anup Patel
- */
-
-#include
-#include
-
-	.text
-	.altmacro
-	.option norelax
-
-	/*
-	 * Instruction encoding of hfence.gvma is:
-	 * HFENCE.GVMA rs1, rs2
-	 * HFENCE.GVMA zero, rs2
-	 * HFENCE.GVMA rs1
-	 * HFENCE.GVMA
-	 *
-	 * rs1!=zero and rs2!=zero ==> HFENCE.GVMA rs1, rs2
-	 * rs1==zero and rs2!=zero ==> HFENCE.GVMA zero, rs2
-	 * rs1!=zero and rs2==zero ==> HFENCE.GVMA rs1
-	 * rs1==zero and rs2==zero ==> HFENCE.GVMA
-	 *
-	 * Instruction encoding of HFENCE.GVMA is:
-	 * 0110001 rs2(5) rs1(5) 000 00000 1110011
-	 */
-
-ENTRY(__kvm_riscv_hfence_gvma_vmid_gpa)
-	/*
-	 * rs1 = a0 (GPA >> 2)
-	 * rs2 = a1 (VMID)
-	 * HFENCE.GVMA a0, a1
-	 * 0110001 01011 01010 000 00000 1110011
-	 */
-	.word 0x62b50073
-	ret
-ENDPROC(__kvm_riscv_hfence_gvma_vmid_gpa)
-
-ENTRY(__kvm_riscv_hfence_gvma_vmid)
-	/*
-	 * rs1 = zero
-	 * rs2 = a0 (VMID)
-	 * HFENCE.GVMA zero, a0
-	 * 0110001 01010 00000 000 00000 1110011
-	 */
-	.word 0x62a00073
-	ret
-ENDPROC(__kvm_riscv_hfence_gvma_vmid)
-
-ENTRY(__kvm_riscv_hfence_gvma_gpa)
-	/*
-	 * rs1 = a0 (GPA >> 2)
-	 * rs2 = zero
-	 * HFENCE.GVMA a0
-	 * 0110001 00000 01010 000 00000 1110011
-	 */
-	.word 0x62050073
-	ret
-ENDPROC(__kvm_riscv_hfence_gvma_gpa)
-
-ENTRY(__kvm_riscv_hfence_gvma_all)
-	/*
-	 * rs1 = zero
-	 * rs2 = zero
-	 * HFENCE.GVMA
-	 * 0110001 00000 00000 000 00000 1110011
-	 */
-	.word 0x62000073
-	ret
-ENDPROC(__kvm_riscv_hfence_gvma_all)
diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
new file mode 100644
index 000000000000..e2d4fd610745
--- /dev/null
+++ b/arch/riscv/kvm/tlb.c
@@ -0,0 +1,213 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2022 Ventana Micro Systems Inc.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+/*
+ * Instruction encoding of hfence.gvma is:
+ * HFENCE.GVMA rs1, rs2
+ * HFENCE.GVMA zero, rs2
+ * HFENCE.GVMA rs1
+ * HFENCE.GVMA
+ *
+ * rs1!=zero and rs2!=zero ==> HFENCE.GVMA rs1, rs2
+ * rs1==zero and rs2!=zero ==> HFENCE.GVMA zero, rs2
+ * rs1!=zero and rs2==zero ==> HFENCE.GVMA rs1
+ * rs1==zero and rs2==zero ==> HFENCE.GVMA
+ *
+ * Instruction encoding of HFENCE.GVMA is:
+ * 0110001 rs2(5) rs1(5) 000 00000 1110011
+ */
+
+void kvm_riscv_local_hfence_gvma_vmid_gpa(unsigned long vmid,
+					  gpa_t gpa, gpa_t gpsz,
+					  unsigned long order)
+{
+	gpa_t pos;
+
+	if (PTRS_PER_PTE < (gpsz >> order)) {
+		kvm_riscv_local_hfence_gvma_vmid_all(vmid);
+		return;
+	}
+
+	for (pos = gpa; pos < (gpa + gpsz); pos += BIT(order)) {
+		/*
+		 * rs1 = a0 (GPA >> 2)
+		 * rs2 = a1 (VMID)
+		 * HFENCE.GVMA a0, a1
+		 * 0110001 01011 01010 000 00000 1110011
+		 */
+		asm volatile ("srli a0, %0, 2\n"
+			      "add a1, %1, zero\n"
+			      ".word 0x62b50073\n"
+			      :: "r" (pos), "r" (vmid)
+			      : "a0", "a1", "memory");
+	}
+}
+
+void kvm_riscv_local_hfence_gvma_vmid_all(unsigned long vmid)
+{
+	/*
+	 * rs1 = zero
+	 * rs2 = a0 (VMID)
+	 * HFENCE.GVMA zero, a0
+	 * 0110001 01010 00000 000 00000 1110011
+	 */
+	asm volatile ("add a0, %0, zero\n"
+		      ".word 0x62a00073\n"
+		      :: "r" (vmid) : "a0", "memory");
+}
+
+void kvm_riscv_local_hfence_gvma_gpa(gpa_t gpa, gpa_t gpsz,
+				     unsigned long order)
+{
+	gpa_t pos;
+
+	if (PTRS_PER_PTE < (gpsz >> order)) {
+		kvm_riscv_local_hfence_gvma_all();
+		return;
+	}
+
+	for (pos = gpa; pos < (gpa + gpsz); pos += BIT(order)) {
+		/*
+		 * rs1 = a0 (GPA >> 2)
+		 * rs2 = zero
+		 * HFENCE.GVMA a0
+		 * 0110001 00000 01010 000 00000 1110011
+		 */
+		asm volatile ("srli a0, %0, 2\n"
+			      ".word 0x62050073\n"
+			      :: "r" (pos) : "a0", "memory");
+	}
+}
+
+void kvm_riscv_local_hfence_gvma_all(void)
+{
+	/*
+	 * rs1 = zero
+	 * rs2 = zero
+	 * HFENCE.GVMA
+	 * 0110001 00000 00000 000 00000 1110011
+	 */
+	asm volatile (".word 0x62000073" ::: "memory");
+}
+
+/*
+ * Instruction encoding of hfence.gvma is:
+ * HFENCE.VVMA rs1, rs2
+ * HFENCE.VVMA zero, rs2
+ * HFENCE.VVMA rs1
+ * HFENCE.VVMA
+ *
+ * rs1!=zero and rs2!=zero ==> HFENCE.VVMA rs1, rs2
+ * rs1==zero and rs2!=zero ==> HFENCE.VVMA zero, rs2
+ * rs1!=zero and rs2==zero ==> HFENCE.VVMA rs1
+ * rs1==zero and rs2==zero ==> HFENCE.VVMA
+ *
+ * Instruction encoding of HFENCE.VVMA is:
+ * 0010001 rs2(5) rs1(5) 000 00000 1110011
+ */
+
+void kvm_riscv_local_hfence_vvma_asid_gva(unsigned long vmid,
+					  unsigned long asid,
+					  unsigned long gva,
+					  unsigned long gvsz,
+					  unsigned long order)
+{
+	unsigned long pos, hgatp;
+
+	if (PTRS_PER_PTE < (gvsz >> order)) {
+		kvm_riscv_local_hfence_vvma_asid_all(vmid, asid);
+		return;
+	}
+
+	hgatp = csr_swap(CSR_HGATP, vmid << HGATP_VMID_SHIFT);
+
+	for (pos = gva; pos < (gva + gvsz); pos += BIT(order)) {
+		/*
+		 * rs1 = a0 (GVA)
+		 * rs2 = a1 (ASID)
+		 * HFENCE.VVMA a0, a1
+		 * 0010001 01011 01010 000 00000 1110011
+		 */
+		asm volatile ("add a0, %0, zero\n"
+			      "add a1, %1, zero\n"
+			      ".word 0x22b50073\n"
+			      :: "r" (pos), "r" (asid)
+			      : "a0", "a1", "memory");
+	}
+
+	csr_write(CSR_HGATP, hgatp);
+}
+
+void kvm_riscv_local_hfence_vvma_asid_all(unsigned long vmid,
+					  unsigned long asid)
+{
+	unsigned long hgatp;
+
+	hgatp = csr_swap(CSR_HGATP, vmid << HGATP_VMID_SHIFT);
+
+	/*
+	 * rs1 = zero
+	 * rs2 = a0 (ASID)
+	 * HFENCE.VVMA zero, a0
+	 * 0010001 01010 00000 000 00000 1110011
+	 */
+	asm volatile ("add a0, %0, zero\n"
+		      ".word 0x22a00073\n"
+		      :: "r" (asid) : "a0", "memory");
+
+	csr_write(CSR_HGATP, hgatp);
+}
+
+void kvm_riscv_local_hfence_vvma_gva(unsigned long vmid,
+				     unsigned long gva, unsigned long gvsz,
+				     unsigned long order)
+{
+	unsigned long pos, hgatp;
+
+	if (PTRS_PER_PTE < (gvsz >> order)) {
+		kvm_riscv_local_hfence_vvma_all(vmid);
+		return;
+	}
+
+	hgatp = csr_swap(CSR_HGATP, vmid << HGATP_VMID_SHIFT);
+
+	for (pos = gva; pos < (gva + gvsz); pos += BIT(order)) {
+		/*
+		 * rs1 = a0 (GVA)
+		 * rs2 = zero
+		 * HFENCE.VVMA a0
+		 * 0010001 00000 01010 000 00000 1110011
+		 */
+		asm volatile ("add a0, %0, zero\n"
+			      ".word 0x22050073\n"
+			      :: "r" (pos) : "a0", "memory");
+	}
+
+	csr_write(CSR_HGATP, hgatp);
+}
+
+void kvm_riscv_local_hfence_vvma_all(unsigned long vmid)
+{
+	unsigned long hgatp;
+
+	hgatp = csr_swap(CSR_HGATP, vmid << HGATP_VMID_SHIFT);
+
+	/*
+	 * rs1 = zero
+	 * rs2 = zero
+	 * HFENCE.VVMA
+	 * 0010001 00000 00000 000 00000 1110011
+	 */
+	asm volatile (".word 0x22000073" ::: "memory");
+
+	csr_write(CSR_HGATP, hgatp);
+}
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index e87af6480dfd..2b7e27bc946c 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -693,7 +693,7 @@ static void kvm_riscv_check_vcpu_requests(struct kvm_vcpu *vcpu)
		kvm_riscv_gstage_update_hgatp(vcpu);

		if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
-			__kvm_riscv_hfence_gvma_all();
+			kvm_riscv_local_hfence_gvma_all();
	}
 }

diff --git a/arch/riscv/kvm/vmid.c b/arch/riscv/kvm/vmid.c
index 01fdc342ad76..8987e76aa6db 100644
--- a/arch/riscv/kvm/vmid.c
+++ b/arch/riscv/kvm/vmid.c
@@ -33,7 +33,7 @@ void kvm_riscv_gstage_vmid_detect(void)
	csr_write(CSR_HGATP, old);

	/* We polluted local TLB so flush all guest TLB */
-	__kvm_riscv_hfence_gvma_all();
+	kvm_riscv_local_hfence_gvma_all();

	/* We don't use VMID bits if they are not sufficient */
	if ((1UL << vmid_bits) < num_possible_cpus())
-- 
2.25.1
From: Anup Patel
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alistair Francis, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH v2 5/7] RISC-V: KVM: Reduce KVM_MAX_VCPUS value
Date: Wed, 20 Apr 2022 16:54:48 +0530
Message-Id: <20220420112450.155624-6-apatel@ventanamicro.com>
In-Reply-To: <20220420112450.155624-1-apatel@ventanamicro.com>
References: <20220420112450.155624-1-apatel@ventanamicro.com>

Currently, the KVM_MAX_VCPUS value is 16384 for RV64 and 128 for RV32.

The KVM_MAX_VCPUS value is too high for RV64 and too low for RV32
compared to other architectures (e.g. x86 sets it to 1024 and ARM64
sets it to 512). The overly high value of KVM_MAX_VCPUS on RV64 also
leads to the VCPU mask on the stack consuming 2KB.

We set KVM_MAX_VCPUS to 1024 for both RV64 and RV32 to be aligned
with other architectures.
Signed-off-by: Anup Patel Reviewed-by: Atish Patra --- arch/riscv/include/asm/kvm_host.h | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm= _host.h index 806f74dc0bfc..61d8b40e3d82 100644 --- a/arch/riscv/include/asm/kvm_host.h +++ b/arch/riscv/include/asm/kvm_host.h @@ -16,8 +16,7 @@ #include #include =20 -#define KVM_MAX_VCPUS \ - ((HGATP_VMID_MASK >> HGATP_VMID_SHIFT) + 1) +#define KVM_MAX_VCPUS 1024 =20 #define KVM_HALT_POLL_NS_DEFAULT 500000 =20 --=20 2.25.1 From nobody Mon May 11 07:03:13 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5D10DC433F5 for ; Wed, 20 Apr 2022 11:26:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1378066AbiDTL2o (ORCPT ); Wed, 20 Apr 2022 07:28:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35138 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1378035AbiDTL2S (ORCPT ); Wed, 20 Apr 2022 07:28:18 -0400 Received: from mail-pg1-x52d.google.com (mail-pg1-x52d.google.com [IPv6:2607:f8b0:4864:20::52d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8D8313B56E for ; Wed, 20 Apr 2022 04:25:29 -0700 (PDT) Received: by mail-pg1-x52d.google.com with SMTP id h5so1338225pgc.7 for ; Wed, 20 Apr 2022 04:25:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ventanamicro.com; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=trV5Y8OzdRTDJevkoCn9e0ApdxBzXB6tgm6DMboGPuU=; b=ArXeiTu4Ygj1eT7RqxgDoqu2qPa5DctUnAZWhq0RBRbrG7fsMQX9WW6qXLxjnIkY01 wlc9BxHphOU8/K52vNjBYBkXeS7B7lrtZu+bULuxaQAWceY6A5Iu5Jt1odRQqfJtp0Bx 2ATOzJ56Xl94/sYrfZbWleJhc1EG0xJFDhkUNGrbyKplZGaTHag/r+mY9cPJGRbOa+sy 
js7xU8w4/nP5gB8F5PU4Pl1OWNKGxvDtVPArj+6vXDdy5ILgytVc/eCpmy0a2pbVxg1P BFVoRI41OWI5AHVnef+39X6wBmUMKM/A7Xfrh5yi8y30B/sCVUaolaof8nQzIVzlK1Rm lMXg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=trV5Y8OzdRTDJevkoCn9e0ApdxBzXB6tgm6DMboGPuU=; b=M7uwvLOdtzSBR/oVxkT+p6el7PuiYSVCwq1dvotdixXvOAeONOWlte3QMpTHMi3vUk nkmIK2CXDq3TVf78vRVxOubXIMfhzCPMBQKwlKHCiEMnb34knafjoTqFf+gOgoi7nFqC 8KFQQAcnI55zqACVNwGIuJhfhrfEXECYKzC7MwB8xt6Ix5OJVaQk+DSOETi+uSe2IZG5 C5Yrxgf4L+KL70r0UhlxQUQMMTCi8whVfCdayrLryCfLA9op/AGIgM+eX0DI3eUBB/lL NFKemvKp/AlTUDUj3dtipK8SMsgF3ICvFs4N2AS6tC/rHdibBa9wNn41QoXMKJDe2i6D mI2g== X-Gm-Message-State: AOAM530u1LxEJ+dJI5NY0+eTw5jHgMgXL1P5Lq7koDhvS+76tWTgw0YR nPWMkw68yfLAoSNfCsb+ieRkMQ== X-Google-Smtp-Source: ABdhPJy6DhQ4M3M3dFcSsaaqw09Zoz2TEebEQG2o1NxolhgsrZZAGnyjYsKzLRdW2yJUUafkPfG0cA== X-Received: by 2002:aa7:9110:0:b0:4fa:e388:af57 with SMTP id 16-20020aa79110000000b004fae388af57mr22692652pfh.1.1650453928944; Wed, 20 Apr 2022 04:25:28 -0700 (PDT) Received: from localhost.localdomain ([122.167.88.101]) by smtp.gmail.com with ESMTPSA id u12-20020a17090a890c00b001b8efcf8e48sm22529274pjn.14.2022.04.20.04.25.24 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 20 Apr 2022 04:25:28 -0700 (PDT) From: Anup Patel To: Paolo Bonzini , Atish Patra Cc: Palmer Dabbelt , Paul Walmsley , Alistair Francis , Anup Patel , kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel Subject: [PATCH v2 6/7] RISC-V: KVM: Add remote HFENCE functions based on VCPU requests Date: Wed, 20 Apr 2022 16:54:49 +0530 Message-Id: <20220420112450.155624-7-apatel@ventanamicro.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220420112450.155624-1-apatel@ventanamicro.com> References: <20220420112450.155624-1-apatel@ventanamicro.com> MIME-Version: 1.0 
Generic KVM supports VCPU requests, which can be used to do arch-specific work in the run-loop. Introduce remote HFENCE functions which internally use VCPU requests instead of host SBI calls.

Advantages of doing remote HFENCEs as VCPU requests are:

1) Multiple VCPUs of a Guest may be running on different Host CPUs, so it
   is not always possible to determine the Host CPU mask for doing a Host
   SBI call. For example, when VCPU X wants to do an HFENCE on VCPU Y, it
   is possible that VCPU Y is blocked or in user-space (i.e. vcpu->cpu < 0).

2) To support nested virtualization, we will have a separate shadow
   G-stage for each VCPU and a common host G-stage for the entire
   Guest/VM. VCPU-request-based remote HFENCEs help us easily synchronize
   the common host G-stage and the shadow G-stage of each VCPU without
   any additional IPIs.

This is also a preparatory patch for upcoming nested virtualization support, where we will have a shadow G-stage page table for each Guest VCPU.
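The bounded per-VCPU queue with a coarser fallback described above can be sketched in plain C. This is an illustrative user-space model, not kernel code: the names hfence_enqueue/hfence_dequeue and the small MAX_HFENCE are simplifications of the patch's vcpu_hfence_enqueue()/vcpu_hfence_dequeue(), and the spinlock and KVM plumbing are omitted. The key idea is identical: a slot's nonzero type marks it occupied, so a full queue makes enqueue fail and the requester falls back to a coarser "flush all" request.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for KVM_RISCV_VCPU_MAX_HFENCE (64 in the patch). */
#define MAX_HFENCE 4

/* type == 0 (HFENCE_UNKNOWN in the patch) doubles as "slot empty". */
struct hfence {
	int type;
	unsigned long addr, size, order;
};

struct hfence_queue {
	unsigned long head, tail;
	struct hfence slot[MAX_HFENCE];
};

/* Enqueue fails when the tail slot is still occupied: the queue is full
 * and the requester must fall back to a coarser full-flush request. */
static bool hfence_enqueue(struct hfence_queue *q, const struct hfence *d)
{
	if (q->slot[q->tail].type)
		return false;
	q->slot[q->tail] = *d;
	if (++q->tail == MAX_HFENCE)
		q->tail = 0;
	return true;
}

/* Dequeue returns false once the head slot is empty (queue drained). */
static bool hfence_dequeue(struct hfence_queue *q, struct hfence *out)
{
	if (!q->slot[q->head].type)
		return false;
	*out = q->slot[q->head];
	q->slot[q->head].type = 0;	/* mark the slot free again */
	if (++q->head == MAX_HFENCE)
		q->head = 0;
	return true;
}
```

A VCPU drains this queue from its run-loop on KVM_REQ_HFENCE, exactly as kvm_riscv_hfence_process() does in the patch.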
Signed-off-by: Anup Patel
Acked-by: Atish Patra
---
 arch/riscv/include/asm/kvm_host.h |  59 ++++++++
 arch/riscv/kvm/mmu.c              |  33 +++--
 arch/riscv/kvm/tlb.c              | 227 +++++++++++++++++++++++++++++-
 arch/riscv/kvm/vcpu.c             |  24 +++-
 arch/riscv/kvm/vcpu_sbi_replace.c |  34 ++---
 arch/riscv/kvm/vcpu_sbi_v01.c     |  35 +++--
 arch/riscv/kvm/vmid.c             |  10 +-
 7 files changed, 369 insertions(+), 53 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 61d8b40e3d82..a40e88a9481c 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -26,6 +27,31 @@
 	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_VCPU_RESET		KVM_ARCH_REQ(1)
 #define KVM_REQ_UPDATE_HGATP		KVM_ARCH_REQ(2)
+#define KVM_REQ_FENCE_I			\
+	KVM_ARCH_REQ_FLAGS(3, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
+#define KVM_REQ_HFENCE_GVMA_VMID_ALL	KVM_REQ_TLB_FLUSH
+#define KVM_REQ_HFENCE_VVMA_ALL		\
+	KVM_ARCH_REQ_FLAGS(4, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
+#define KVM_REQ_HFENCE			\
+	KVM_ARCH_REQ_FLAGS(5, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
+
+enum kvm_riscv_hfence_type {
+	KVM_RISCV_HFENCE_UNKNOWN = 0,
+	KVM_RISCV_HFENCE_GVMA_VMID_GPA,
+	KVM_RISCV_HFENCE_VVMA_ASID_GVA,
+	KVM_RISCV_HFENCE_VVMA_ASID_ALL,
+	KVM_RISCV_HFENCE_VVMA_GVA,
+};
+
+struct kvm_riscv_hfence {
+	enum kvm_riscv_hfence_type type;
+	unsigned long asid;
+	unsigned long order;
+	gpa_t addr;
+	gpa_t size;
+};
+
+#define KVM_RISCV_VCPU_MAX_HFENCE	64
 
 struct kvm_vm_stat {
 	struct kvm_vm_stat_generic generic;
@@ -178,6 +204,12 @@ struct kvm_vcpu_arch {
 	/* VCPU Timer */
 	struct kvm_vcpu_timer timer;
 
+	/* HFENCE request queue */
+	spinlock_t hfence_lock;
+	unsigned long hfence_head;
+	unsigned long hfence_tail;
+	struct kvm_riscv_hfence hfence_queue[KVM_RISCV_VCPU_MAX_HFENCE];
+
 	/* MMIO instruction details */
 	struct kvm_mmio_decode mmio_decode;
 
@@ -224,6 +256,33 @@ void kvm_riscv_local_hfence_vvma_gva(unsigned long vmid,
 				     unsigned long order);
 void kvm_riscv_local_hfence_vvma_all(unsigned long vmid);
 
+void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu);
+void kvm_riscv_hfence_gvma_vmid_all_process(struct kvm_vcpu *vcpu);
+void kvm_riscv_hfence_vvma_all_process(struct kvm_vcpu *vcpu);
+void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu);
+
+void kvm_riscv_fence_i(struct kvm *kvm,
+		       unsigned long hbase, unsigned long hmask);
+void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
+				    unsigned long hbase, unsigned long hmask,
+				    gpa_t gpa, gpa_t gpsz,
+				    unsigned long order);
+void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
+				    unsigned long hbase, unsigned long hmask);
+void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
+				    unsigned long hbase, unsigned long hmask,
+				    unsigned long gva, unsigned long gvsz,
+				    unsigned long order, unsigned long asid);
+void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
+				    unsigned long hbase, unsigned long hmask,
+				    unsigned long asid);
+void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
+			       unsigned long hbase, unsigned long hmask,
+			       unsigned long gva, unsigned long gvsz,
+			       unsigned long order);
+void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
+			       unsigned long hbase, unsigned long hmask);
+
 int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 			 struct kvm_memory_slot *memslot,
 			 gpa_t gpa, unsigned long hva, bool is_write);
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 1e07603c905b..1c00695ebee7 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -18,7 +18,6 @@
 #include
 #include
 #include
-#include
 
 #ifdef CONFIG_64BIT
 static unsigned long gstage_mode = (HGATP_MODE_SV39X4 << HGATP_MODE_SHIFT);
@@ -73,13 +72,25 @@ static int gstage_page_size_to_level(unsigned long page_size, u32 *out_level)
 	return -EINVAL;
 }
 
-static int gstage_level_to_page_size(u32 level, unsigned long *out_pgsize)
+static int gstage_level_to_page_order(u32 level, unsigned long *out_pgorder)
 {
 	if (gstage_pgd_levels < level)
 		return -EINVAL;
 
-	*out_pgsize = 1UL << (12 + (level * gstage_index_bits));
+	*out_pgorder = 12 + (level * gstage_index_bits);
+	return 0;
+}
+
+static int gstage_level_to_page_size(u32 level, unsigned long *out_pgsize)
+{
+	int rc;
+	unsigned long page_order = PAGE_SHIFT;
+
+	rc = gstage_level_to_page_order(level, &page_order);
+	if (rc)
+		return rc;
+
+	*out_pgsize = BIT(page_order);
 	return 0;
 }
 
@@ -114,21 +125,13 @@ static bool gstage_get_leaf_entry(struct kvm *kvm, gpa_t addr,
 
 static void gstage_remote_tlb_flush(struct kvm *kvm, u32 level, gpa_t addr)
 {
-	unsigned long size = PAGE_SIZE;
-	struct kvm_vmid *vmid = &kvm->arch.vmid;
+	unsigned long order = PAGE_SHIFT;
 
-	if (gstage_level_to_page_size(level, &size))
+	if (gstage_level_to_page_order(level, &order))
 		return;
-	addr &= ~(size - 1);
+	addr &= ~(BIT(order) - 1);
 
-	/*
-	 * TODO: Instead of cpu_online_mask, we should only target CPUs
-	 * where the Guest/VM is running.
-	 */
-	preempt_disable();
-	sbi_remote_hfence_gvma_vmid(cpu_online_mask, addr, size,
-				    READ_ONCE(vmid->vmid));
-	preempt_enable();
+	kvm_riscv_hfence_gvma_vmid_gpa(kvm, -1UL, 0, addr, BIT(order), order);
 }
 
 static int gstage_set_pte(struct kvm *kvm, u32 level,
diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index e2d4fd610745..c0f86d09c41d 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -3,11 +3,14 @@
  * Copyright (c) 2022 Ventana Micro Systems Inc.
  */
 
-#include
+#include
+#include
 #include
 #include
 #include
+#include
 #include
+#include
 #include
 
 /*
@@ -211,3 +214,225 @@ void kvm_riscv_local_hfence_vvma_all(unsigned long vmid)
 
 	csr_write(CSR_HGATP, hgatp);
 }
+
+void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu)
+{
+	local_flush_icache_all();
+}
+
+void kvm_riscv_hfence_gvma_vmid_all_process(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vmid *vmid;
+
+	vmid = &vcpu->kvm->arch.vmid;
+	kvm_riscv_local_hfence_gvma_vmid_all(READ_ONCE(vmid->vmid));
+}
+
+void kvm_riscv_hfence_vvma_all_process(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vmid *vmid;
+
+	vmid = &vcpu->kvm->arch.vmid;
+	kvm_riscv_local_hfence_vvma_all(READ_ONCE(vmid->vmid));
+}
+
+static bool vcpu_hfence_dequeue(struct kvm_vcpu *vcpu,
+				struct kvm_riscv_hfence *out_data)
+{
+	bool ret = false;
+	struct kvm_vcpu_arch *varch = &vcpu->arch;
+
+	spin_lock(&varch->hfence_lock);
+
+	if (varch->hfence_queue[varch->hfence_head].type) {
+		memcpy(out_data, &varch->hfence_queue[varch->hfence_head],
+		       sizeof(*out_data));
+		varch->hfence_queue[varch->hfence_head].type = 0;
+
+		varch->hfence_head++;
+		if (varch->hfence_head == KVM_RISCV_VCPU_MAX_HFENCE)
+			varch->hfence_head = 0;
+
+		ret = true;
+	}
+
+	spin_unlock(&varch->hfence_lock);
+
+	return ret;
+}
+
+static bool vcpu_hfence_enqueue(struct kvm_vcpu *vcpu,
+				const struct kvm_riscv_hfence *data)
+{
+	bool ret = false;
+	struct kvm_vcpu_arch *varch = &vcpu->arch;
+
+	spin_lock(&varch->hfence_lock);
+
+	if (!varch->hfence_queue[varch->hfence_tail].type) {
+		memcpy(&varch->hfence_queue[varch->hfence_tail],
+		       data, sizeof(*data));
+
+		varch->hfence_tail++;
+		if (varch->hfence_tail == KVM_RISCV_VCPU_MAX_HFENCE)
+			varch->hfence_tail = 0;
+
+		ret = true;
+	}
+
+	spin_unlock(&varch->hfence_lock);
+
+	return ret;
+}
+
+void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu)
+{
+	struct kvm_riscv_hfence d = { 0 };
+	struct kvm_vmid *v = &vcpu->kvm->arch.vmid;
+
+	while (vcpu_hfence_dequeue(vcpu, &d)) {
+		switch (d.type) {
+		case KVM_RISCV_HFENCE_UNKNOWN:
+			break;
+		case KVM_RISCV_HFENCE_GVMA_VMID_GPA:
+			kvm_riscv_local_hfence_gvma_vmid_gpa(
+						READ_ONCE(v->vmid),
+						d.addr, d.size, d.order);
+			break;
+		case KVM_RISCV_HFENCE_VVMA_ASID_GVA:
+			kvm_riscv_local_hfence_vvma_asid_gva(
+						READ_ONCE(v->vmid), d.asid,
+						d.addr, d.size, d.order);
+			break;
+		case KVM_RISCV_HFENCE_VVMA_ASID_ALL:
+			kvm_riscv_local_hfence_vvma_asid_all(
+						READ_ONCE(v->vmid), d.asid);
+			break;
+		case KVM_RISCV_HFENCE_VVMA_GVA:
+			kvm_riscv_local_hfence_vvma_gva(
+						READ_ONCE(v->vmid),
+						d.addr, d.size, d.order);
+			break;
+		default:
+			break;
+		}
+	}
+}
+
+static void make_xfence_request(struct kvm *kvm,
+				unsigned long hbase, unsigned long hmask,
+				unsigned int req, unsigned int fallback_req,
+				const struct kvm_riscv_hfence *data)
+{
+	unsigned long i;
+	struct kvm_vcpu *vcpu;
+	unsigned int actual_req = req;
+	DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS);
+
+	bitmap_clear(vcpu_mask, 0, KVM_MAX_VCPUS);
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		if (hbase != -1UL) {
+			if (vcpu->vcpu_id < hbase)
+				continue;
+			if (!(hmask & (1UL << (vcpu->vcpu_id - hbase))))
+				continue;
+		}
+
+		bitmap_set(vcpu_mask, i, 1);
+
+		if (!data || !data->type)
+			continue;
+
+		/*
+		 * Enqueue hfence data to VCPU hfence queue. If we don't
+		 * have space in the VCPU hfence queue then fallback to
+		 * a more conservative hfence request.
+		 */
+		if (!vcpu_hfence_enqueue(vcpu, data))
+			actual_req = fallback_req;
+	}
+
+	kvm_make_vcpus_request_mask(kvm, actual_req, vcpu_mask);
+}
+
+void kvm_riscv_fence_i(struct kvm *kvm,
+		       unsigned long hbase, unsigned long hmask)
+{
+	make_xfence_request(kvm, hbase, hmask, KVM_REQ_FENCE_I,
+			    KVM_REQ_FENCE_I, NULL);
+}
+
+void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
+				    unsigned long hbase, unsigned long hmask,
+				    gpa_t gpa, gpa_t gpsz,
+				    unsigned long order)
+{
+	struct kvm_riscv_hfence data;
+
+	data.type = KVM_RISCV_HFENCE_GVMA_VMID_GPA;
+	data.asid = 0;
+	data.addr = gpa;
+	data.size = gpsz;
+	data.order = order;
+	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
+			    KVM_REQ_HFENCE_GVMA_VMID_ALL, &data);
+}
+
+void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
+				    unsigned long hbase, unsigned long hmask)
+{
+	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE_GVMA_VMID_ALL,
+			    KVM_REQ_HFENCE_GVMA_VMID_ALL, NULL);
+}
+
+void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
+				    unsigned long hbase, unsigned long hmask,
+				    unsigned long gva, unsigned long gvsz,
+				    unsigned long order, unsigned long asid)
+{
+	struct kvm_riscv_hfence data;
+
+	data.type = KVM_RISCV_HFENCE_VVMA_ASID_GVA;
+	data.asid = asid;
+	data.addr = gva;
+	data.size = gvsz;
+	data.order = order;
+	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
+			    KVM_REQ_HFENCE_VVMA_ALL, &data);
+}
+
+void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
+				    unsigned long hbase, unsigned long hmask,
+				    unsigned long asid)
+{
+	struct kvm_riscv_hfence data;
+
+	data.type = KVM_RISCV_HFENCE_VVMA_ASID_ALL;
+	data.asid = asid;
+	data.addr = data.size = data.order = 0;
+	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
+			    KVM_REQ_HFENCE_VVMA_ALL, &data);
+}
+
+void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
+			       unsigned long hbase, unsigned long hmask,
+			       unsigned long gva, unsigned long gvsz,
+			       unsigned long order)
+{
+	struct kvm_riscv_hfence data;
+
+	data.type = KVM_RISCV_HFENCE_VVMA_GVA;
+	data.asid = 0;
+	data.addr = gva;
+	data.size = gvsz;
+	data.order = order;
+	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
+			    KVM_REQ_HFENCE_VVMA_ALL, &data);
+}
+
+void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
+			       unsigned long hbase, unsigned long hmask)
+{
+	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE_VVMA_ALL,
+			    KVM_REQ_HFENCE_VVMA_ALL, NULL);
+}
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 2b7e27bc946c..9cd8f6e91c98 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -78,6 +78,10 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu)
 	WRITE_ONCE(vcpu->arch.irqs_pending, 0);
 	WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
 
+	vcpu->arch.hfence_head = 0;
+	vcpu->arch.hfence_tail = 0;
+	memset(vcpu->arch.hfence_queue, 0, sizeof(vcpu->arch.hfence_queue));
+
 	/* Reset the guest CSRs for hotplug usecase */
 	if (loaded)
 		kvm_arch_vcpu_load(vcpu, smp_processor_id());
@@ -101,6 +105,9 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	/* Setup ISA features available to VCPU */
 	vcpu->arch.isa = riscv_isa_extension_base(NULL) & KVM_RISCV_ISA_ALLOWED;
 
+	/* Setup VCPU hfence queue */
+	spin_lock_init(&vcpu->arch.hfence_lock);
+
 	/* Setup reset state of shadow SSTATUS and HSTATUS CSRs */
 	cntx = &vcpu->arch.guest_reset_context;
 	cntx->sstatus = SR_SPP | SR_SPIE;
@@ -692,8 +699,21 @@ static void kvm_riscv_check_vcpu_requests(struct kvm_vcpu *vcpu)
 		if (kvm_check_request(KVM_REQ_UPDATE_HGATP, vcpu))
 			kvm_riscv_gstage_update_hgatp(vcpu);
 
-		if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
-			kvm_riscv_local_hfence_gvma_all();
+		if (kvm_check_request(KVM_REQ_FENCE_I, vcpu))
+			kvm_riscv_fence_i_process(vcpu);
+
+		/*
+		 * The generic KVM_REQ_TLB_FLUSH is same as
+		 * KVM_REQ_HFENCE_GVMA_VMID_ALL
+		 */
+		if (kvm_check_request(KVM_REQ_HFENCE_GVMA_VMID_ALL, vcpu))
+			kvm_riscv_hfence_gvma_vmid_all_process(vcpu);
+
+		if (kvm_check_request(KVM_REQ_HFENCE_VVMA_ALL, vcpu))
+			kvm_riscv_hfence_vvma_all_process(vcpu);
+
+		if (kvm_check_request(KVM_REQ_HFENCE, vcpu))
+			kvm_riscv_hfence_process(vcpu);
 	}
 }
 
diff --git a/arch/riscv/kvm/vcpu_sbi_replace.c b/arch/riscv/kvm/vcpu_sbi_replace.c
index 3c1dcd38358e..4c034d8a606a 100644
--- a/arch/riscv/kvm/vcpu_sbi_replace.c
+++ b/arch/riscv/kvm/vcpu_sbi_replace.c
@@ -81,37 +81,31 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
 				      struct kvm_cpu_trap *utrap, bool *exit)
 {
 	int ret = 0;
-	unsigned long i;
-	struct cpumask cm;
-	struct kvm_vcpu *tmp;
 	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
 	unsigned long hmask = cp->a0;
 	unsigned long hbase = cp->a1;
 	unsigned long funcid = cp->a6;
 
-	cpumask_clear(&cm);
-	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
-		if (hbase != -1UL) {
-			if (tmp->vcpu_id < hbase)
-				continue;
-			if (!(hmask & (1UL << (tmp->vcpu_id - hbase))))
-				continue;
-		}
-		if (tmp->cpu < 0)
-			continue;
-		cpumask_set_cpu(tmp->cpu, &cm);
-	}
-
 	switch (funcid) {
 	case SBI_EXT_RFENCE_REMOTE_FENCE_I:
-		ret = sbi_remote_fence_i(&cm);
+		kvm_riscv_fence_i(vcpu->kvm, hbase, hmask);
 		break;
 	case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA:
-		ret = sbi_remote_hfence_vvma(&cm, cp->a2, cp->a3);
+		if (cp->a2 == 0 && cp->a3 == 0)
+			kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask);
+		else
+			kvm_riscv_hfence_vvma_gva(vcpu->kvm, hbase, hmask,
+						  cp->a2, cp->a3, PAGE_SHIFT);
 		break;
 	case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID:
-		ret = sbi_remote_hfence_vvma_asid(&cm, cp->a2,
-						  cp->a3, cp->a4);
+		if (cp->a2 == 0 && cp->a3 == 0)
+			kvm_riscv_hfence_vvma_asid_all(vcpu->kvm,
+						       hbase, hmask, cp->a4);
+		else
+			kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm,
+						       hbase, hmask,
+						       cp->a2, cp->a3,
+						       PAGE_SHIFT, cp->a4);
 		break;
 	case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA:
 	case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA_VMID:
diff --git a/arch/riscv/kvm/vcpu_sbi_v01.c b/arch/riscv/kvm/vcpu_sbi_v01.c
index da4d6c99c2cf..8a91a14e7139 100644
--- a/arch/riscv/kvm/vcpu_sbi_v01.c
+++ b/arch/riscv/kvm/vcpu_sbi_v01.c
@@ -23,7 +23,6 @@ static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	int i, ret = 0;
 	u64 next_cycle;
 	struct kvm_vcpu *rvcpu;
-	struct cpumask cm;
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
 
@@ -80,19 +79,29 @@ static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		if (utrap->scause)
 			break;
 
-		cpumask_clear(&cm);
-		for_each_set_bit(i, &hmask, BITS_PER_LONG) {
-			rvcpu = kvm_get_vcpu_by_id(vcpu->kvm, i);
-			if (rvcpu->cpu < 0)
-				continue;
-			cpumask_set_cpu(rvcpu->cpu, &cm);
-		}
 		if (cp->a7 == SBI_EXT_0_1_REMOTE_FENCE_I)
-			ret = sbi_remote_fence_i(&cm);
-		else if (cp->a7 == SBI_EXT_0_1_REMOTE_SFENCE_VMA)
-			ret = sbi_remote_hfence_vvma(&cm, cp->a1, cp->a2);
-		else
-			ret = sbi_remote_hfence_vvma_asid(&cm, cp->a1, cp->a2, cp->a3);
+			kvm_riscv_fence_i(vcpu->kvm, 0, hmask);
+		else if (cp->a7 == SBI_EXT_0_1_REMOTE_SFENCE_VMA) {
+			if (cp->a1 == 0 && cp->a2 == 0)
+				kvm_riscv_hfence_vvma_all(vcpu->kvm,
+							  0, hmask);
+			else
+				kvm_riscv_hfence_vvma_gva(vcpu->kvm,
+							  0, hmask,
+							  cp->a1, cp->a2,
+							  PAGE_SHIFT);
+		} else {
+			if (cp->a1 == 0 && cp->a2 == 0)
+				kvm_riscv_hfence_vvma_asid_all(vcpu->kvm,
+							       0, hmask,
+							       cp->a3);
+			else
+				kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm,
+							       0, hmask,
+							       cp->a1, cp->a2,
+							       PAGE_SHIFT,
+							       cp->a3);
+		}
 		break;
 	default:
 		ret = -EINVAL;
diff --git a/arch/riscv/kvm/vmid.c b/arch/riscv/kvm/vmid.c
index 8987e76aa6db..9f764df125db 100644
--- a/arch/riscv/kvm/vmid.c
+++ b/arch/riscv/kvm/vmid.c
@@ -11,9 +11,9 @@
 #include
 #include
 #include
+#include
 #include
 #include
-#include
 
 static unsigned long vmid_version = 1;
 static unsigned long vmid_next;
@@ -63,6 +63,11 @@ bool kvm_riscv_gstage_vmid_ver_changed(struct kvm_vmid *vmid)
 				READ_ONCE(vmid_version));
 }
 
+static void __local_hfence_gvma_all(void *info)
+{
+	kvm_riscv_local_hfence_gvma_all();
+}
+
 void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu)
 {
 	unsigned long i;
@@ -101,7 +106,8 @@ void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu)
 		 * running, we force VM exits on all host CPUs using IPI and
 		 * flush all Guest TLBs.
 		 */
-		sbi_remote_hfence_gvma(cpu_online_mask, 0, 0);
+		on_each_cpu_mask(cpu_online_mask, __local_hfence_gvma_all,
+				 NULL, 1);
 	}
 
 	vmid->vmid = vmid_next;
-- 
2.25.1

From nobody Mon May 11 07:03:13 2026
From: Anup Patel
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alistair Francis, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 7/7] RISC-V: KVM: Cleanup stale TLB entries when host CPU changes
Date: Wed, 20 Apr 2022 16:54:50 +0530
Message-Id: <20220420112450.155624-8-apatel@ventanamicro.com>
In-Reply-To: <20220420112450.155624-1-apatel@ventanamicro.com>
References: <20220420112450.155624-1-apatel@ventanamicro.com>
On RISC-V platforms with hardware VMID support, we share the same VMID across all VCPUs of a particular Guest/VM. This means we might have stale G-stage TLB entries on the current Host CPU left behind by some other VCPU of the same Guest which previously ran on this Host CPU.

To clean up these stale TLB entries, we simply flush all G-stage TLB entries by VMID whenever the underlying Host CPU changes for a VCPU.

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/include/asm/kvm_host.h |  5 +++++
 arch/riscv/kvm/tlb.c              | 23 +++++++++++++++++++++++
 arch/riscv/kvm/vcpu.c             | 11 +++++++++++
 3 files changed, 39 insertions(+)

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index a40e88a9481c..94349a5ffd34 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -166,6 +166,9 @@ struct kvm_vcpu_arch {
 	/* VCPU ran at least once */
 	bool ran_atleast_once;
 
+	/* Last Host CPU on which Guest VCPU exited */
+	int last_exit_cpu;
+
 	/* ISA feature bits (similar to MISA) */
 	unsigned long isa;
 
@@ -256,6 +259,8 @@ void kvm_riscv_local_hfence_vvma_gva(unsigned long vmid,
 				     unsigned long order);
 void kvm_riscv_local_hfence_vvma_all(unsigned long vmid);
 
+void kvm_riscv_local_tlb_sanitize(struct kvm_vcpu *vcpu);
+
 void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu);
 void kvm_riscv_hfence_gvma_vmid_all_process(struct kvm_vcpu *vcpu);
 void kvm_riscv_hfence_vvma_all_process(struct kvm_vcpu *vcpu);
diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index c0f86d09c41d..1a76d0b1907d 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -215,6 +215,29 @@ void kvm_riscv_local_hfence_vvma_all(unsigned long vmid)
 	csr_write(CSR_HGATP, hgatp);
 }
 
+void kvm_riscv_local_tlb_sanitize(struct kvm_vcpu *vcpu)
+{
+	unsigned long vmid;
+
+	if (!kvm_riscv_gstage_vmid_bits() ||
+	    vcpu->arch.last_exit_cpu == vcpu->cpu)
+		return;
+
+	/*
+	 * On RISC-V platforms with hardware VMID support, we share the
+	 * same VMID for all VCPUs of a particular Guest/VM. This means
+	 * we might have stale G-stage TLB entries on the current Host
+	 * CPU due to some other VCPU of the same Guest which ran
+	 * previously on the current Host CPU.
+	 *
+	 * To clean up stale TLB entries, we simply flush all G-stage
+	 * TLB entries by VMID whenever the underlying Host CPU changes
+	 * for a VCPU.
+	 */
+
+	vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
+	kvm_riscv_local_hfence_gvma_vmid_all(vmid);
+}
+
 void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu)
 {
 	local_flush_icache_all();
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 9cd8f6e91c98..a86710fcd2e0 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -67,6 +67,8 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu)
 	if (loaded)
 		kvm_arch_vcpu_put(vcpu);
 
+	vcpu->arch.last_exit_cpu = -1;
+
 	memcpy(csr, reset_csr, sizeof(*csr));
 
 	memcpy(cntx, reset_cntx, sizeof(*cntx));
@@ -735,6 +737,7 @@ static void noinstr kvm_riscv_vcpu_enter_exit(struct kvm_vcpu *vcpu)
 {
 	guest_state_enter_irqoff();
 	__kvm_riscv_switch_to(&vcpu->arch);
+	vcpu->arch.last_exit_cpu = vcpu->cpu;
 	guest_state_exit_irqoff();
 }
 
@@ -829,6 +832,14 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 			continue;
 		}
 
+		/*
+		 * Cleanup stale TLB entries
+		 *
+		 * Note: This should be done after G-stage VMID has been
+		 * updated using kvm_riscv_gstage_vmid_ver_changed()
+		 */
+		kvm_riscv_local_tlb_sanitize(vcpu);
+
 		guest_timing_enter_irqoff();
 
 		kvm_riscv_vcpu_enter_exit(vcpu);
-- 
2.25.1
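[Editor's note] The flush-on-migration decision above reduces to a small predicate: flush only when hardware VMIDs are in use and the VCPU resumed on a different Host CPU than the one it last exited on. The sketch below is a hypothetical user-space model; the helper name needs_vmid_flush is invented, standing in for the checks at the top of kvm_riscv_local_tlb_sanitize(), where the kernel consults kvm_riscv_gstage_vmid_bits() and vcpu->arch.last_exit_cpu.

```c
#include <assert.h>
#include <stdbool.h>

/* Model of the early-return logic in kvm_riscv_local_tlb_sanitize():
 * vmid_bits == 0 means no hardware VMID support, so a VMID-wide flush
 * is pointless; last_exit_cpu == -1 means the VCPU never ran, which
 * never compares equal to a real CPU id, so the first run flushes. */
static bool needs_vmid_flush(unsigned int vmid_bits,
			     int last_exit_cpu, int current_cpu)
{
	if (!vmid_bits)
		return false;
	return last_exit_cpu != current_cpu;
}
```

This is why the patch records vcpu->cpu into last_exit_cpu on every guest exit: the comparison is cheap and only migrations (or first runs) pay for a flush.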