From: Anup Patel <apatel@ventanamicro.com>
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 12/12] RISC-V: KVM: Pass VMID as parameter to kvm_riscv_hfence_xyz() APIs
Date: Fri, 13 Jun 2025 12:27:43 +0530
Message-ID: <20250613065743.737102-13-apatel@ventanamicro.com>
In-Reply-To: <20250613065743.737102-1-apatel@ventanamicro.com>
References: <20250613065743.737102-1-apatel@ventanamicro.com>

Currently, all kvm_riscv_hfence_xyz() APIs assume the VMID to be the host
VMID of the Guest/VM, which restricts these APIs to host TLB maintenance
only. Let's allow passing the VMID as a parameter to all
kvm_riscv_hfence_xyz() APIs so that they can be re-used for nested
virtualization related TLB maintenance.
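For example, with this change a caller reads the host VMID explicitly and
passes it down, as in the SBI RFENCE handler hunks below (condensed sketch;
the shadow-VMID caller is a hypothetical future nested-virtualization user,
not part of this patch):

	/* Host path: fetch the Guest/VM's host VMID and pass it down. */
	unsigned long vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);

	kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask, vmid);

	/*
	 * Nested path (illustrative only): the same API could be invoked
	 * with a shadow VMID instead of the host VMID:
	 *
	 *	kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask, shadow_vmid);
	 */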
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
 arch/riscv/include/asm/kvm_tlb.h  | 17 ++++++---
 arch/riscv/kvm/gstage.c           |  3 +-
 arch/riscv/kvm/tlb.c              | 61 ++++++++++++++++++++-----------
 arch/riscv/kvm/vcpu_sbi_replace.c | 17 +++++----
 arch/riscv/kvm/vcpu_sbi_v01.c     | 25 ++++++-------
 5 files changed, 73 insertions(+), 50 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_tlb.h b/arch/riscv/include/asm/kvm_tlb.h
index f67e03edeaec..38a2f933ad3a 100644
--- a/arch/riscv/include/asm/kvm_tlb.h
+++ b/arch/riscv/include/asm/kvm_tlb.h
@@ -11,9 +11,11 @@
 enum kvm_riscv_hfence_type {
 	KVM_RISCV_HFENCE_UNKNOWN = 0,
 	KVM_RISCV_HFENCE_GVMA_VMID_GPA,
+	KVM_RISCV_HFENCE_GVMA_VMID_ALL,
 	KVM_RISCV_HFENCE_VVMA_ASID_GVA,
 	KVM_RISCV_HFENCE_VVMA_ASID_ALL,
 	KVM_RISCV_HFENCE_VVMA_GVA,
+	KVM_RISCV_HFENCE_VVMA_ALL
 };
 
 struct kvm_riscv_hfence {
@@ -59,21 +61,24 @@ void kvm_riscv_fence_i(struct kvm *kvm,
 void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
 				    gpa_t gpa, gpa_t gpsz,
-				    unsigned long order);
+				    unsigned long order, unsigned long vmid);
 void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask);
+				    unsigned long hbase, unsigned long hmask,
+				    unsigned long vmid);
 void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
 				    unsigned long gva, unsigned long gvsz,
-				    unsigned long order, unsigned long asid);
+				    unsigned long order, unsigned long asid,
+				    unsigned long vmid);
 void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
-				    unsigned long asid);
+				    unsigned long asid, unsigned long vmid);
 void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
 			       unsigned long hbase, unsigned long hmask,
 			       unsigned long gva, unsigned long gvsz,
-			       unsigned long order);
+			       unsigned long order, unsigned long vmid);
 void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
-			       unsigned long hbase, unsigned long hmask);
+			       unsigned long hbase, unsigned long hmask,
+			       unsigned long vmid);
 
 #endif
diff --git a/arch/riscv/kvm/gstage.c b/arch/riscv/kvm/gstage.c
index 9c7c44f09b05..24c270d6d0e2 100644
--- a/arch/riscv/kvm/gstage.c
+++ b/arch/riscv/kvm/gstage.c
@@ -117,7 +117,8 @@ static void gstage_tlb_flush(struct kvm_gstage *gstage, u32 level, gpa_t addr)
 	if (gstage->flags & KVM_GSTAGE_FLAGS_LOCAL)
 		kvm_riscv_local_hfence_gvma_vmid_gpa(gstage->vmid, addr, BIT(order), order);
 	else
-		kvm_riscv_hfence_gvma_vmid_gpa(gstage->kvm, -1UL, 0, addr, BIT(order), order);
+		kvm_riscv_hfence_gvma_vmid_gpa(gstage->kvm, -1UL, 0, addr, BIT(order), order,
+					       gstage->vmid);
 }
 
 int kvm_riscv_gstage_set_pte(struct kvm_gstage *gstage,
diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index 349fcfc93f54..3c5a70a2b927 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -251,6 +251,12 @@ void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu)
 			kvm_riscv_local_hfence_gvma_vmid_gpa(d.vmid, d.addr,
 							     d.size, d.order);
 		break;
+	case KVM_RISCV_HFENCE_GVMA_VMID_ALL:
+		if (kvm_riscv_nacl_available())
+			nacl_hfence_gvma_vmid_all(nacl_shmem(), d.vmid);
+		else
+			kvm_riscv_local_hfence_gvma_vmid_all(d.vmid);
+		break;
 	case KVM_RISCV_HFENCE_VVMA_ASID_GVA:
 		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD);
 		if (kvm_riscv_nacl_available())
@@ -276,6 +282,13 @@ void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu)
 			kvm_riscv_local_hfence_vvma_gva(d.vmid, d.addr,
 							d.size, d.order);
 		break;
+	case KVM_RISCV_HFENCE_VVMA_ALL:
+		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_RCVD);
+		if (kvm_riscv_nacl_available())
+			nacl_hfence_vvma_all(nacl_shmem(), d.vmid);
+		else
+			kvm_riscv_local_hfence_vvma_all(d.vmid);
+		break;
 	default:
 		break;
 	}
@@ -328,14 +341,13 @@ void kvm_riscv_fence_i(struct kvm *kvm,
 void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
 				    gpa_t gpa, gpa_t gpsz,
-				    unsigned long order)
+				    unsigned long order, unsigned long vmid)
 {
-	struct kvm_vmid *v = &kvm->arch.vmid;
 	struct kvm_riscv_hfence data;
 
 	data.type = KVM_RISCV_HFENCE_GVMA_VMID_GPA;
 	data.asid = 0;
-	data.vmid = READ_ONCE(v->vmid);
+	data.vmid = vmid;
 	data.addr = gpa;
 	data.size = gpsz;
 	data.order = order;
@@ -344,23 +356,28 @@ void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
 }
 
 void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask)
+				    unsigned long hbase, unsigned long hmask,
+				    unsigned long vmid)
 {
-	make_xfence_request(kvm, hbase, hmask, KVM_REQ_TLB_FLUSH,
-			    KVM_REQ_TLB_FLUSH, NULL);
+	struct kvm_riscv_hfence data = {0};
+
+	data.type = KVM_RISCV_HFENCE_GVMA_VMID_ALL;
+	data.vmid = vmid;
+	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
+			    KVM_REQ_TLB_FLUSH, &data);
 }
 
 void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
 				    unsigned long gva, unsigned long gvsz,
-				    unsigned long order, unsigned long asid)
+				    unsigned long order, unsigned long asid,
+				    unsigned long vmid)
 {
-	struct kvm_vmid *v = &kvm->arch.vmid;
 	struct kvm_riscv_hfence data;
 
 	data.type = KVM_RISCV_HFENCE_VVMA_ASID_GVA;
 	data.asid = asid;
-	data.vmid = READ_ONCE(v->vmid);
+	data.vmid = vmid;
 	data.addr = gva;
 	data.size = gvsz;
 	data.order = order;
@@ -370,15 +387,13 @@ void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
 
 void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
-				    unsigned long asid)
+				    unsigned long asid, unsigned long vmid)
 {
-	struct kvm_vmid *v = &kvm->arch.vmid;
-	struct kvm_riscv_hfence data;
+	struct kvm_riscv_hfence data = {0};
 
 	data.type = KVM_RISCV_HFENCE_VVMA_ASID_ALL;
 	data.asid = asid;
-	data.vmid = READ_ONCE(v->vmid);
-	data.addr = data.size = data.order = 0;
+	data.vmid = vmid;
 	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
 			    KVM_REQ_HFENCE_VVMA_ALL, &data);
 }
@@ -386,14 +401,13 @@ void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
 void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
 			       unsigned long hbase, unsigned long hmask,
 			       unsigned long gva, unsigned long gvsz,
-			       unsigned long order)
+			       unsigned long order, unsigned long vmid)
 {
-	struct kvm_vmid *v = &kvm->arch.vmid;
 	struct kvm_riscv_hfence data;
 
 	data.type = KVM_RISCV_HFENCE_VVMA_GVA;
 	data.asid = 0;
-	data.vmid = READ_ONCE(v->vmid);
+	data.vmid = vmid;
 	data.addr = gva;
 	data.size = gvsz;
 	data.order = order;
@@ -402,16 +416,21 @@ void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
 }
 
 void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
-			       unsigned long hbase, unsigned long hmask)
+			       unsigned long hbase, unsigned long hmask,
+			       unsigned long vmid)
 {
-	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE_VVMA_ALL,
-			    KVM_REQ_HFENCE_VVMA_ALL, NULL);
+	struct kvm_riscv_hfence data = {0};
+
+	data.type = KVM_RISCV_HFENCE_VVMA_ALL;
+	data.vmid = vmid;
+	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
+			    KVM_REQ_HFENCE_VVMA_ALL, &data);
 }
 
 int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
 {
 	kvm_riscv_hfence_gvma_vmid_gpa(kvm, -1UL, 0, gfn << PAGE_SHIFT,
 				       nr_pages << PAGE_SHIFT,
-				       PAGE_SHIFT);
+				       PAGE_SHIFT, READ_ONCE(kvm->arch.vmid.vmid));
 	return 0;
 }
diff --git a/arch/riscv/kvm/vcpu_sbi_replace.c b/arch/riscv/kvm/vcpu_sbi_replace.c
index b17fad091bab..b490ed1428a6 100644
--- a/arch/riscv/kvm/vcpu_sbi_replace.c
+++ b/arch/riscv/kvm/vcpu_sbi_replace.c
@@ -96,6 +96,7 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
 	unsigned long hmask = cp->a0;
 	unsigned long hbase = cp->a1;
 	unsigned long funcid = cp->a6;
+	unsigned long vmid;
 
 	switch (funcid) {
 	case SBI_EXT_RFENCE_REMOTE_FENCE_I:
@@ -103,22 +104,22 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
 		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_FENCE_I_SENT);
 		break;
 	case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA:
+		vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
 		if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
-			kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask);
+			kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask, vmid);
 		else
 			kvm_riscv_hfence_vvma_gva(vcpu->kvm, hbase, hmask,
-						  cp->a2, cp->a3, PAGE_SHIFT);
+						  cp->a2, cp->a3, PAGE_SHIFT, vmid);
 		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_SENT);
 		break;
 	case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID:
+		vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
 		if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
-			kvm_riscv_hfence_vvma_asid_all(vcpu->kvm,
-						       hbase, hmask, cp->a4);
+			kvm_riscv_hfence_vvma_asid_all(vcpu->kvm, hbase, hmask,
+						       cp->a4, vmid);
 		else
-			kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm,
-						       hbase, hmask,
-						       cp->a2, cp->a3,
-						       PAGE_SHIFT, cp->a4);
+			kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm, hbase, hmask, cp->a2,
+						       cp->a3, PAGE_SHIFT, cp->a4, vmid);
 		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_SENT);
 		break;
 	case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA:
diff --git a/arch/riscv/kvm/vcpu_sbi_v01.c b/arch/riscv/kvm/vcpu_sbi_v01.c
index 8f4c4fa16227..368dfddd23d9 100644
--- a/arch/riscv/kvm/vcpu_sbi_v01.c
+++ b/arch/riscv/kvm/vcpu_sbi_v01.c
@@ -23,6 +23,7 @@ static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
 	struct kvm_cpu_trap *utrap = retdata->utrap;
+	unsigned long vmid;
 
 	switch (cp->a7) {
 	case SBI_EXT_0_1_CONSOLE_GETCHAR:
@@ -78,25 +79,21 @@ static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		if (cp->a7 == SBI_EXT_0_1_REMOTE_FENCE_I)
 			kvm_riscv_fence_i(vcpu->kvm, 0, hmask);
 		else if (cp->a7 == SBI_EXT_0_1_REMOTE_SFENCE_VMA) {
+			vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
 			if (cp->a1 == 0 && cp->a2 == 0)
-				kvm_riscv_hfence_vvma_all(vcpu->kvm,
-							  0, hmask);
+				kvm_riscv_hfence_vvma_all(vcpu->kvm, 0, hmask, vmid);
 			else
-				kvm_riscv_hfence_vvma_gva(vcpu->kvm,
-							  0, hmask,
-							  cp->a1, cp->a2,
-							  PAGE_SHIFT);
+				kvm_riscv_hfence_vvma_gva(vcpu->kvm, 0, hmask, cp->a1,
+							  cp->a2, PAGE_SHIFT, vmid);
 		} else {
+			vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
 			if (cp->a1 == 0 && cp->a2 == 0)
-				kvm_riscv_hfence_vvma_asid_all(vcpu->kvm,
-							       0, hmask,
-							       cp->a3);
+				kvm_riscv_hfence_vvma_asid_all(vcpu->kvm, 0, hmask,
+							       cp->a3, vmid);
 			else
-				kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm,
-							       0, hmask,
-							       cp->a1, cp->a2,
-							       PAGE_SHIFT,
-							       cp->a3);
+				kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm, 0, hmask,
+							       cp->a1, cp->a2, PAGE_SHIFT,
+							       cp->a3, vmid);
 		}
 		break;
 	default:
-- 
2.43.0