From: Anup Patel <apatel@ventanamicro.com>
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones,
    Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Anup Patel
Subject: [PATCH 10/13] RISC-V: KVM: Introduce struct kvm_gstage_mapping
Date: Thu, 5 Jun 2025 11:44:55 +0530
Message-ID: <20250605061458.196003-11-apatel@ventanamicro.com>
In-Reply-To: <20250605061458.196003-1-apatel@ventanamicro.com>
References: <20250605061458.196003-1-apatel@ventanamicro.com>

Introduce struct kvm_gstage_mapping, which represents a g-stage mapping
at a particular g-stage page table level. Also, update
kvm_riscv_gstage_map() to return the g-stage mapping upon success.
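As an illustration (not part of this patch), a caller such as the
g-stage page fault handler can consume the mapping returned by
kvm_riscv_gstage_map() roughly as follows; the vcpu, memslot,
fault_addr, hva, and is_write values are assumed to be prepared the
same way gstage_page_fault() prepares them:

	struct kvm_gstage_mapping host_map;
	int ret;

	ret = kvm_riscv_gstage_map(vcpu, memslot, fault_addr, hva,
				   is_write, &host_map);
	if (ret < 0)
		return ret;

	/*
	 * On success, host_map describes the mapping that was actually
	 * installed: host_map.addr holds the guest physical address,
	 * host_map.pte the PTE value that was written, and
	 * host_map.level the g-stage page table level of the leaf PTE.
	 */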
Signed-off-by: Anup Patel
---
 arch/riscv/include/asm/kvm_mmu.h |  9 ++++-
 arch/riscv/kvm/mmu.c             | 58 ++++++++++++++++++--------------
 arch/riscv/kvm/vcpu_exit.c       |  3 +-
 3 files changed, 43 insertions(+), 27 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_mmu.h b/arch/riscv/include/asm/kvm_mmu.h
index 4e1654282ee4..91c11e692dc7 100644
--- a/arch/riscv/include/asm/kvm_mmu.h
+++ b/arch/riscv/include/asm/kvm_mmu.h
@@ -8,6 +8,12 @@
 
 #include
 
+struct kvm_gstage_mapping {
+	gpa_t addr;
+	pte_t pte;
+	u32 level;
+};
+
 int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
			     phys_addr_t hpa, unsigned long size,
			     bool writable, bool in_atomic);
@@ -15,7 +21,8 @@ void kvm_riscv_gstage_iounmap(struct kvm *kvm,
			      gpa_t gpa, unsigned long size);
 int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
			 struct kvm_memory_slot *memslot,
-			 gpa_t gpa, unsigned long hva, bool is_write);
+			 gpa_t gpa, unsigned long hva, bool is_write,
+			 struct kvm_gstage_mapping *out_map);
 int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm);
 void kvm_riscv_gstage_free_pgd(struct kvm *kvm);
 void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu);
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index c9d87e7472fb..934c97c21130 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -135,18 +135,18 @@ static void gstage_remote_tlb_flush(struct kvm *kvm, u32 level, gpa_t addr)
 	kvm_riscv_hfence_gvma_vmid_gpa(kvm, -1UL, 0, addr, BIT(order), order);
 }
 
-static int gstage_set_pte(struct kvm *kvm, u32 level,
-			  struct kvm_mmu_memory_cache *pcache,
-			  gpa_t addr, const pte_t *new_pte)
+static int gstage_set_pte(struct kvm *kvm,
+			  struct kvm_mmu_memory_cache *pcache,
+			  const struct kvm_gstage_mapping *map)
 {
 	u32 current_level = gstage_pgd_levels - 1;
 	pte_t *next_ptep = (pte_t *)kvm->arch.pgd;
-	pte_t *ptep = &next_ptep[gstage_pte_index(addr, current_level)];
+	pte_t *ptep = &next_ptep[gstage_pte_index(map->addr, current_level)];
 
-	if (current_level < level)
+	if (current_level < map->level)
 		return -EINVAL;
 
-	while (current_level != level) {
+	while (current_level != map->level) {
 		if (gstage_pte_leaf(ptep))
 			return -EEXIST;
 
@@ -165,13 +165,13 @@ static int gstage_set_pte(struct kvm *kvm, u32 level,
 		}
 
 		current_level--;
-		ptep = &next_ptep[gstage_pte_index(addr, current_level)];
+		ptep = &next_ptep[gstage_pte_index(map->addr, current_level)];
 	}
 
-	if (pte_val(*ptep) != pte_val(*new_pte)) {
-		set_pte(ptep, *new_pte);
+	if (pte_val(*ptep) != pte_val(map->pte)) {
+		set_pte(ptep, map->pte);
 		if (gstage_pte_leaf(ptep))
-			gstage_remote_tlb_flush(kvm, current_level, addr);
+			gstage_remote_tlb_flush(kvm, current_level, map->addr);
 	}
 
 	return 0;
@@ -181,14 +181,16 @@ static int gstage_map_page(struct kvm *kvm,
 			   struct kvm_mmu_memory_cache *pcache,
 			   gpa_t gpa, phys_addr_t hpa,
 			   unsigned long page_size,
-			   bool page_rdonly, bool page_exec)
+			   bool page_rdonly, bool page_exec,
+			   struct kvm_gstage_mapping *out_map)
 {
-	int ret;
-	u32 level = 0;
-	pte_t new_pte;
 	pgprot_t prot;
+	int ret;
 
-	ret = gstage_page_size_to_level(page_size, &level);
+	out_map->addr = gpa;
+	out_map->level = 0;
+
+	ret = gstage_page_size_to_level(page_size, &out_map->level);
 	if (ret)
 		return ret;
 
@@ -216,10 +218,10 @@ static int gstage_map_page(struct kvm *kvm,
 		else
 			prot = PAGE_WRITE;
 	}
-	new_pte = pfn_pte(PFN_DOWN(hpa), prot);
-	new_pte = pte_mkdirty(new_pte);
+	out_map->pte = pfn_pte(PFN_DOWN(hpa), prot);
+	out_map->pte = pte_mkdirty(out_map->pte);
 
-	return gstage_set_pte(kvm, level, pcache, gpa, &new_pte);
+	return gstage_set_pte(kvm, pcache, out_map);
 }
 
 enum gstage_op {
@@ -350,7 +352,6 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
			     phys_addr_t hpa, unsigned long size,
			     bool writable, bool in_atomic)
 {
-	pte_t pte;
 	int ret = 0;
 	unsigned long pfn;
 	phys_addr_t addr, end;
@@ -358,22 +359,25 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
 		.gfp_custom = (in_atomic) ? GFP_ATOMIC | __GFP_ACCOUNT : 0,
 		.gfp_zero = __GFP_ZERO,
 	};
+	struct kvm_gstage_mapping map;
 
 	end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK;
 	pfn = __phys_to_pfn(hpa);
 
 	for (addr = gpa; addr < end; addr += PAGE_SIZE) {
-		pte = pfn_pte(pfn, PAGE_KERNEL_IO);
+		map.addr = addr;
+		map.pte = pfn_pte(pfn, PAGE_KERNEL_IO);
+		map.level = 0;
 
 		if (!writable)
-			pte = pte_wrprotect(pte);
+			map.pte = pte_wrprotect(map.pte);
 
 		ret = kvm_mmu_topup_memory_cache(&pcache, gstage_pgd_levels);
 		if (ret)
 			goto out;
 
 		spin_lock(&kvm->mmu_lock);
-		ret = gstage_set_pte(kvm, 0, &pcache, addr, &pte);
+		ret = gstage_set_pte(kvm, &pcache, &map);
 		spin_unlock(&kvm->mmu_lock);
 		if (ret)
 			goto out;
@@ -591,7 +595,8 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 
 int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
			 struct kvm_memory_slot *memslot,
-			 gpa_t gpa, unsigned long hva, bool is_write)
+			 gpa_t gpa, unsigned long hva, bool is_write,
+			 struct kvm_gstage_mapping *out_map)
 {
 	int ret;
 	kvm_pfn_t hfn;
@@ -606,6 +611,9 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 	unsigned long vma_pagesize, mmu_seq;
 	struct page *page;
 
+	/* Setup initial state of output mapping */
+	memset(out_map, 0, sizeof(*out_map));
+
 	/* We need minimum second+third level pages */
 	ret = kvm_mmu_topup_memory_cache(pcache, gstage_pgd_levels);
 	if (ret) {
@@ -675,10 +683,10 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 	if (writable) {
 		mark_page_dirty(kvm, gfn);
 		ret = gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT,
-				      vma_pagesize, false, true);
+				      vma_pagesize, false, true, out_map);
 	} else {
 		ret = gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT,
-				      vma_pagesize, true, true);
+				      vma_pagesize, true, true, out_map);
 	}
 
 	if (ret)
diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c
index cc82bbab0e24..4fadf2bcd070 100644
--- a/arch/riscv/kvm/vcpu_exit.c
+++ b/arch/riscv/kvm/vcpu_exit.c
@@ -14,6 +14,7 @@ static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run,
			     struct kvm_cpu_trap *trap)
 {
+	struct kvm_gstage_mapping host_map;
 	struct kvm_memory_slot *memslot;
 	unsigned long hva, fault_addr;
 	bool writable;
@@ -42,7 +43,7 @@ static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	}
 
 	ret = kvm_riscv_gstage_map(vcpu, memslot, fault_addr, hva,
-				   (trap->scause == EXC_STORE_GUEST_PAGE_FAULT) ? true : false);
+				   (trap->scause == EXC_STORE_GUEST_PAGE_FAULT) ? true : false, &host_map);
 	if (ret < 0)
 		return ret;
 
-- 
2.43.0