From: Anup Patel <apatel@ventanamicro.com>
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones,
    Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Anup Patel
Subject: [PATCH v2 09/12] RISC-V: KVM: Introduce struct kvm_gstage_mapping
Date: Fri, 13 Jun 2025 12:27:40 +0530
Message-ID: <20250613065743.737102-10-apatel@ventanamicro.com>
In-Reply-To: <20250613065743.737102-1-apatel@ventanamicro.com>
References: <20250613065743.737102-1-apatel@ventanamicro.com>

Introduce struct kvm_gstage_mapping which represents a g-stage mapping
at a particular g-stage page table level. Also, update
kvm_riscv_gstage_map() to return the g-stage mapping upon success.
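For example (an illustrative sketch only, not part of this patch:
example_map_and_inspect() is a hypothetical caller modelled on the
gstage_page_fault() change below), a caller can now observe exactly
which PTE and level were installed:

	static int example_map_and_inspect(struct kvm_vcpu *vcpu,
					   struct kvm_memory_slot *memslot,
					   gpa_t fault_addr, unsigned long hva,
					   bool is_write)
	{
		struct kvm_gstage_mapping host_map;
		int ret;

		/* Install the mapping; host_map reports what was written. */
		ret = kvm_riscv_gstage_map(vcpu, memslot, fault_addr, hva,
					   is_write, &host_map);
		if (ret < 0)
			return ret;

		/*
		 * On success, host_map describes the installed mapping:
		 *   host_map.addr  - guest physical address of the mapping
		 *   host_map.pte   - PTE value written to the g-stage table
		 *   host_map.level - g-stage page table level of the PTE
		 */
		return ret;
	}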
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Atish Patra
---
 arch/riscv/include/asm/kvm_mmu.h |  9 ++++-
 arch/riscv/kvm/mmu.c             | 58 ++++++++++++++++++--------------
 arch/riscv/kvm/vcpu_exit.c       |  3 +-
 3 files changed, 43 insertions(+), 27 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_mmu.h b/arch/riscv/include/asm/kvm_mmu.h
index 4e1654282ee4..91c11e692dc7 100644
--- a/arch/riscv/include/asm/kvm_mmu.h
+++ b/arch/riscv/include/asm/kvm_mmu.h
@@ -8,6 +8,12 @@
 
 #include
 
+struct kvm_gstage_mapping {
+	gpa_t addr;
+	pte_t pte;
+	u32 level;
+};
+
 int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
 			     phys_addr_t hpa, unsigned long size,
 			     bool writable, bool in_atomic);
@@ -15,7 +21,8 @@ void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa,
 			      unsigned long size);
 int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 			 struct kvm_memory_slot *memslot,
-			 gpa_t gpa, unsigned long hva, bool is_write);
+			 gpa_t gpa, unsigned long hva, bool is_write,
+			 struct kvm_gstage_mapping *out_map);
 int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm);
 void kvm_riscv_gstage_free_pgd(struct kvm *kvm);
 void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu);
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index c1a3eb076df3..806614b3e46d 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -135,18 +135,18 @@ static void gstage_remote_tlb_flush(struct kvm *kvm, u32 level, gpa_t addr)
 	kvm_riscv_hfence_gvma_vmid_gpa(kvm, -1UL, 0, addr, BIT(order), order);
 }
 
-static int gstage_set_pte(struct kvm *kvm, u32 level,
-			  struct kvm_mmu_memory_cache *pcache,
-			  gpa_t addr, const pte_t *new_pte)
+static int gstage_set_pte(struct kvm *kvm,
+			  struct kvm_mmu_memory_cache *pcache,
+			  const struct kvm_gstage_mapping *map)
 {
 	u32 current_level = gstage_pgd_levels - 1;
 	pte_t *next_ptep = (pte_t *)kvm->arch.pgd;
-	pte_t *ptep = &next_ptep[gstage_pte_index(addr, current_level)];
+	pte_t *ptep = &next_ptep[gstage_pte_index(map->addr, current_level)];
 
-	if (current_level < level)
+	if (current_level < map->level)
 		return -EINVAL;
 
-	while (current_level != level) {
+	while (current_level != map->level) {
 		if (gstage_pte_leaf(ptep))
 			return -EEXIST;
 
@@ -165,13 +165,13 @@ static int gstage_set_pte(struct kvm *kvm, u32 level,
 		}
 
 		current_level--;
-		ptep = &next_ptep[gstage_pte_index(addr, current_level)];
+		ptep = &next_ptep[gstage_pte_index(map->addr, current_level)];
 	}
 
-	if (pte_val(*ptep) != pte_val(*new_pte)) {
-		set_pte(ptep, *new_pte);
+	if (pte_val(*ptep) != pte_val(map->pte)) {
+		set_pte(ptep, map->pte);
 		if (gstage_pte_leaf(ptep))
-			gstage_remote_tlb_flush(kvm, current_level, addr);
+			gstage_remote_tlb_flush(kvm, current_level, map->addr);
 	}
 
 	return 0;
@@ -181,14 +181,16 @@ static int gstage_map_page(struct kvm *kvm,
 			   struct kvm_mmu_memory_cache *pcache,
 			   gpa_t gpa, phys_addr_t hpa,
 			   unsigned long page_size,
-			   bool page_rdonly, bool page_exec)
+			   bool page_rdonly, bool page_exec,
+			   struct kvm_gstage_mapping *out_map)
 {
-	int ret;
-	u32 level = 0;
-	pte_t new_pte;
 	pgprot_t prot;
+	int ret;
 
-	ret = gstage_page_size_to_level(page_size, &level);
+	out_map->addr = gpa;
+	out_map->level = 0;
+
+	ret = gstage_page_size_to_level(page_size, &out_map->level);
 	if (ret)
 		return ret;
 
@@ -216,10 +218,10 @@ static int gstage_map_page(struct kvm *kvm,
 		else
 			prot = PAGE_WRITE;
 	}
-	new_pte = pfn_pte(PFN_DOWN(hpa), prot);
-	new_pte = pte_mkdirty(new_pte);
+	out_map->pte = pfn_pte(PFN_DOWN(hpa), prot);
+	out_map->pte = pte_mkdirty(out_map->pte);
 
-	return gstage_set_pte(kvm, level, pcache, gpa, &new_pte);
+	return gstage_set_pte(kvm, pcache, out_map);
 }
 
 enum gstage_op {
@@ -352,7 +354,6 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
 			     phys_addr_t hpa, unsigned long size,
 			     bool writable, bool in_atomic)
 {
-	pte_t pte;
 	int ret = 0;
 	unsigned long pfn;
 	phys_addr_t addr, end;
@@ -360,22 +361,25 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
 		.gfp_custom = (in_atomic) ? GFP_ATOMIC | __GFP_ACCOUNT : 0,
 		.gfp_zero = __GFP_ZERO,
 	};
+	struct kvm_gstage_mapping map;
 
 	end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK;
 	pfn = __phys_to_pfn(hpa);
 
 	for (addr = gpa; addr < end; addr += PAGE_SIZE) {
-		pte = pfn_pte(pfn, PAGE_KERNEL_IO);
+		map.addr = addr;
+		map.pte = pfn_pte(pfn, PAGE_KERNEL_IO);
+		map.level = 0;
 
 		if (!writable)
-			pte = pte_wrprotect(pte);
+			map.pte = pte_wrprotect(map.pte);
 
 		ret = kvm_mmu_topup_memory_cache(&pcache, gstage_pgd_levels);
 		if (ret)
 			goto out;
 
 		spin_lock(&kvm->mmu_lock);
-		ret = gstage_set_pte(kvm, 0, &pcache, addr, &pte);
+		ret = gstage_set_pte(kvm, &pcache, &map);
 		spin_unlock(&kvm->mmu_lock);
 		if (ret)
 			goto out;
@@ -593,7 +597,8 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 
 int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 			 struct kvm_memory_slot *memslot,
-			 gpa_t gpa, unsigned long hva, bool is_write)
+			 gpa_t gpa, unsigned long hva, bool is_write,
+			 struct kvm_gstage_mapping *out_map)
 {
 	int ret;
 	kvm_pfn_t hfn;
@@ -608,6 +613,9 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 	unsigned long vma_pagesize, mmu_seq;
 	struct page *page;
 
+	/* Setup initial state of output mapping */
+	memset(out_map, 0, sizeof(*out_map));
+
 	/* We need minimum second+third level pages */
 	ret = kvm_mmu_topup_memory_cache(pcache, gstage_pgd_levels);
 	if (ret) {
@@ -677,10 +685,10 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 	if (writable) {
 		mark_page_dirty(kvm, gfn);
 		ret = gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT,
-				      vma_pagesize, false, true);
+				      vma_pagesize, false, true, out_map);
 	} else {
 		ret = gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT,
-				      vma_pagesize, true, true);
+				      vma_pagesize, true, true, out_map);
 	}
 
 	if (ret)
diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c
index 965df528de90..6b4694bc07ea 100644
--- a/arch/riscv/kvm/vcpu_exit.c
+++ b/arch/riscv/kvm/vcpu_exit.c
@@ -15,6 +15,7 @@
 static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run,
 			     struct kvm_cpu_trap *trap)
 {
+	struct kvm_gstage_mapping host_map;
 	struct kvm_memory_slot *memslot;
 	unsigned long hva, fault_addr;
 	bool writable;
@@ -43,7 +44,7 @@ static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	}
 
 	ret = kvm_riscv_gstage_map(vcpu, memslot, fault_addr, hva,
-				   (trap->scause == EXC_STORE_GUEST_PAGE_FAULT) ? true : false);
+				   (trap->scause == EXC_STORE_GUEST_PAGE_FAULT) ? true : false, &host_map);
 	if (ret < 0)
 		return ret;
 
-- 
2.43.0