From: Anup Patel <apatel@ventanamicro.com>
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 12/12] RISC-V: KVM: Pass VMID as parameter to kvm_riscv_hfence_xyz() APIs
Date: Wed, 18 Jun 2025 17:05:32 +0530
Message-ID: <20250618113532.471448-13-apatel@ventanamicro.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250618113532.471448-1-apatel@ventanamicro.com>
References: <20250618113532.471448-1-apatel@ventanamicro.com>

Currently, all kvm_riscv_hfence_xyz() APIs assume the VMID to be the
host VMID of the Guest/VM, which restricts these APIs to host TLB
maintenance only. Let's allow passing the VMID as a parameter to all
kvm_riscv_hfence_xyz() APIs so that they can be re-used for nested
virtualization related TLB maintenance.
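For example, the SBI SFENCE.VMA handlers below change from letting the
helper resolve the host VMID internally to resolving it at the call
site. A minimal before/after sketch (names taken from the diff below):

  /* Before: the helper internally read the host VMID of the Guest/VM. */
  kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask);

  /* After: the caller reads the host VMID and passes it explicitly,
   * so the same helper can later be invoked with a nested guest VMID.
   */
  vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
  kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask, vmid);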
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Nutty Liu
---
 arch/riscv/include/asm/kvm_tlb.h  | 17 ++++++---
 arch/riscv/kvm/gstage.c           |  3 +-
 arch/riscv/kvm/tlb.c              | 61 ++++++++++++++++++++-----------
 arch/riscv/kvm/vcpu_sbi_replace.c | 17 +++++----
 arch/riscv/kvm/vcpu_sbi_v01.c     | 25 ++++++-------
 5 files changed, 73 insertions(+), 50 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_tlb.h b/arch/riscv/include/asm/kvm_tlb.h
index f67e03edeaec..38a2f933ad3a 100644
--- a/arch/riscv/include/asm/kvm_tlb.h
+++ b/arch/riscv/include/asm/kvm_tlb.h
@@ -11,9 +11,11 @@
 enum kvm_riscv_hfence_type {
 	KVM_RISCV_HFENCE_UNKNOWN = 0,
 	KVM_RISCV_HFENCE_GVMA_VMID_GPA,
+	KVM_RISCV_HFENCE_GVMA_VMID_ALL,
 	KVM_RISCV_HFENCE_VVMA_ASID_GVA,
 	KVM_RISCV_HFENCE_VVMA_ASID_ALL,
 	KVM_RISCV_HFENCE_VVMA_GVA,
+	KVM_RISCV_HFENCE_VVMA_ALL
 };
 
 struct kvm_riscv_hfence {
@@ -59,21 +61,24 @@ void kvm_riscv_fence_i(struct kvm *kvm,
 void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
 				    gpa_t gpa, gpa_t gpsz,
-				    unsigned long order);
+				    unsigned long order, unsigned long vmid);
 void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask);
+				    unsigned long hbase, unsigned long hmask,
+				    unsigned long vmid);
 void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
 				    unsigned long gva, unsigned long gvsz,
-				    unsigned long order, unsigned long asid);
+				    unsigned long order, unsigned long asid,
+				    unsigned long vmid);
 void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
-				    unsigned long asid);
+				    unsigned long asid, unsigned long vmid);
 void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
 			       unsigned long hbase, unsigned long hmask,
 			       unsigned long gva, unsigned long gvsz,
-			       unsigned long order);
+			       unsigned long order, unsigned long vmid);
 void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
-			       unsigned long hbase, unsigned long hmask);
+			       unsigned long hbase, unsigned long hmask,
+			       unsigned long vmid);
 
 #endif
diff --git a/arch/riscv/kvm/gstage.c b/arch/riscv/kvm/gstage.c
index 9c7c44f09b05..24c270d6d0e2 100644
--- a/arch/riscv/kvm/gstage.c
+++ b/arch/riscv/kvm/gstage.c
@@ -117,7 +117,8 @@ static void gstage_tlb_flush(struct kvm_gstage *gstage, u32 level, gpa_t addr)
 	if (gstage->flags & KVM_GSTAGE_FLAGS_LOCAL)
 		kvm_riscv_local_hfence_gvma_vmid_gpa(gstage->vmid, addr, BIT(order), order);
 	else
-		kvm_riscv_hfence_gvma_vmid_gpa(gstage->kvm, -1UL, 0, addr, BIT(order), order);
+		kvm_riscv_hfence_gvma_vmid_gpa(gstage->kvm, -1UL, 0, addr, BIT(order), order,
+					       gstage->vmid);
 }
 
 int kvm_riscv_gstage_set_pte(struct kvm_gstage *gstage,
diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index 349fcfc93f54..3c5a70a2b927 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -251,6 +251,12 @@ void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu)
 			kvm_riscv_local_hfence_gvma_vmid_gpa(d.vmid, d.addr,
 							     d.size, d.order);
 		break;
+	case KVM_RISCV_HFENCE_GVMA_VMID_ALL:
+		if (kvm_riscv_nacl_available())
+			nacl_hfence_gvma_vmid_all(nacl_shmem(), d.vmid);
+		else
+			kvm_riscv_local_hfence_gvma_vmid_all(d.vmid);
+		break;
 	case KVM_RISCV_HFENCE_VVMA_ASID_GVA:
 		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD);
 		if (kvm_riscv_nacl_available())
@@ -276,6 +282,13 @@ void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu)
 			kvm_riscv_local_hfence_vvma_gva(d.vmid, d.addr,
 							d.size, d.order);
 		break;
+	case KVM_RISCV_HFENCE_VVMA_ALL:
+		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_RCVD);
+		if (kvm_riscv_nacl_available())
+			nacl_hfence_vvma_all(nacl_shmem(), d.vmid);
+		else
+			kvm_riscv_local_hfence_vvma_all(d.vmid);
+		break;
 	default:
 		break;
 	}
@@ -328,14 +341,13 @@ void kvm_riscv_fence_i(struct kvm *kvm,
 void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
 				    gpa_t gpa, gpa_t gpsz,
-				    unsigned long order)
+				    unsigned long order, unsigned long vmid)
 {
-	struct kvm_vmid *v = &kvm->arch.vmid;
 	struct kvm_riscv_hfence data;
 
 	data.type = KVM_RISCV_HFENCE_GVMA_VMID_GPA;
 	data.asid = 0;
-	data.vmid = READ_ONCE(v->vmid);
+	data.vmid = vmid;
 	data.addr = gpa;
 	data.size = gpsz;
 	data.order = order;
@@ -344,23 +356,28 @@ void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
 }
 
 void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask)
+				    unsigned long hbase, unsigned long hmask,
+				    unsigned long vmid)
 {
-	make_xfence_request(kvm, hbase, hmask, KVM_REQ_TLB_FLUSH,
-			    KVM_REQ_TLB_FLUSH, NULL);
+	struct kvm_riscv_hfence data = {0};
+
+	data.type = KVM_RISCV_HFENCE_GVMA_VMID_ALL;
+	data.vmid = vmid;
+	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
+			    KVM_REQ_TLB_FLUSH, &data);
 }
 
 void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
 				    unsigned long gva, unsigned long gvsz,
-				    unsigned long order, unsigned long asid)
+				    unsigned long order, unsigned long asid,
+				    unsigned long vmid)
 {
-	struct kvm_vmid *v = &kvm->arch.vmid;
 	struct kvm_riscv_hfence data;
 
 	data.type = KVM_RISCV_HFENCE_VVMA_ASID_GVA;
 	data.asid = asid;
-	data.vmid = READ_ONCE(v->vmid);
+	data.vmid = vmid;
 	data.addr = gva;
 	data.size = gvsz;
 	data.order = order;
@@ -370,15 +387,13 @@ void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
 
 void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
-				    unsigned long asid)
+				    unsigned long asid, unsigned long vmid)
 {
-	struct kvm_vmid *v = &kvm->arch.vmid;
-	struct kvm_riscv_hfence data;
+	struct kvm_riscv_hfence data = {0};
 
 	data.type = KVM_RISCV_HFENCE_VVMA_ASID_ALL;
 	data.asid = asid;
-	data.vmid = READ_ONCE(v->vmid);
-	data.addr = data.size = data.order = 0;
+	data.vmid = vmid;
 	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
 			    KVM_REQ_HFENCE_VVMA_ALL, &data);
 }
@@ -386,14 +401,13 @@ void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
 void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
 			       unsigned long hbase, unsigned long hmask,
 			       unsigned long gva, unsigned long gvsz,
-			       unsigned long order)
+			       unsigned long order, unsigned long vmid)
 {
-	struct kvm_vmid *v = &kvm->arch.vmid;
 	struct kvm_riscv_hfence data;
 
 	data.type = KVM_RISCV_HFENCE_VVMA_GVA;
 	data.asid = 0;
-	data.vmid = READ_ONCE(v->vmid);
+	data.vmid = vmid;
 	data.addr = gva;
 	data.size = gvsz;
 	data.order = order;
@@ -402,16 +416,21 @@ void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
 }
 
 void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
-			       unsigned long hbase, unsigned long hmask)
+			       unsigned long hbase, unsigned long hmask,
+			       unsigned long vmid)
 {
-	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE_VVMA_ALL,
-			    KVM_REQ_HFENCE_VVMA_ALL, NULL);
+	struct kvm_riscv_hfence data = {0};
+
+	data.type = KVM_RISCV_HFENCE_VVMA_ALL;
+	data.vmid = vmid;
+	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
+			    KVM_REQ_HFENCE_VVMA_ALL, &data);
 }
 
 int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
 {
 	kvm_riscv_hfence_gvma_vmid_gpa(kvm, -1UL, 0, gfn << PAGE_SHIFT,
 				       nr_pages << PAGE_SHIFT,
-				       PAGE_SHIFT);
+				       PAGE_SHIFT, READ_ONCE(kvm->arch.vmid.vmid));
 	return 0;
 }
diff --git a/arch/riscv/kvm/vcpu_sbi_replace.c b/arch/riscv/kvm/vcpu_sbi_replace.c
index b17fad091bab..b490ed1428a6 100644
--- a/arch/riscv/kvm/vcpu_sbi_replace.c
+++ b/arch/riscv/kvm/vcpu_sbi_replace.c
@@ -96,6 +96,7 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
 	unsigned long hmask = cp->a0;
 	unsigned long hbase = cp->a1;
 	unsigned long funcid = cp->a6;
+	unsigned long vmid;
 
 	switch (funcid) {
 	case SBI_EXT_RFENCE_REMOTE_FENCE_I:
@@ -103,22 +104,22 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
 		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_FENCE_I_SENT);
 		break;
 	case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA:
+		vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
 		if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
-			kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask);
+			kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask, vmid);
 		else
 			kvm_riscv_hfence_vvma_gva(vcpu->kvm, hbase, hmask,
-						  cp->a2, cp->a3, PAGE_SHIFT);
+						  cp->a2, cp->a3, PAGE_SHIFT, vmid);
 		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_SENT);
 		break;
 	case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID:
+		vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
 		if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
-			kvm_riscv_hfence_vvma_asid_all(vcpu->kvm,
-						       hbase, hmask, cp->a4);
+			kvm_riscv_hfence_vvma_asid_all(vcpu->kvm, hbase, hmask,
+						       cp->a4, vmid);
 		else
-			kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm,
-						       hbase, hmask,
-						       cp->a2, cp->a3,
-						       PAGE_SHIFT, cp->a4);
+			kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm, hbase, hmask, cp->a2,
+						       cp->a3, PAGE_SHIFT, cp->a4, vmid);
 		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_SENT);
 		break;
 	case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA:
diff --git a/arch/riscv/kvm/vcpu_sbi_v01.c b/arch/riscv/kvm/vcpu_sbi_v01.c
index 8f4c4fa16227..368dfddd23d9 100644
--- a/arch/riscv/kvm/vcpu_sbi_v01.c
+++ b/arch/riscv/kvm/vcpu_sbi_v01.c
@@ -23,6 +23,7 @@ static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
 	struct kvm_cpu_trap *utrap = retdata->utrap;
+	unsigned long vmid;
 
 	switch (cp->a7) {
 	case SBI_EXT_0_1_CONSOLE_GETCHAR:
@@ -78,25 +79,21 @@ static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		if (cp->a7 == SBI_EXT_0_1_REMOTE_FENCE_I)
 			kvm_riscv_fence_i(vcpu->kvm, 0, hmask);
 		else if (cp->a7 == SBI_EXT_0_1_REMOTE_SFENCE_VMA) {
+			vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
 			if (cp->a1 == 0 && cp->a2 == 0)
-				kvm_riscv_hfence_vvma_all(vcpu->kvm,
-							  0, hmask);
+				kvm_riscv_hfence_vvma_all(vcpu->kvm, 0, hmask, vmid);
 			else
-				kvm_riscv_hfence_vvma_gva(vcpu->kvm,
-							  0, hmask,
-							  cp->a1, cp->a2,
-							  PAGE_SHIFT);
+				kvm_riscv_hfence_vvma_gva(vcpu->kvm, 0, hmask, cp->a1,
+							  cp->a2, PAGE_SHIFT, vmid);
 		} else {
+			vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
 			if (cp->a1 == 0 && cp->a2 == 0)
-				kvm_riscv_hfence_vvma_asid_all(vcpu->kvm,
-							       0, hmask,
-							       cp->a3);
+				kvm_riscv_hfence_vvma_asid_all(vcpu->kvm, 0, hmask,
+							       cp->a3, vmid);
 			else
-				kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm,
-							       0, hmask,
-							       cp->a1, cp->a2,
-							       PAGE_SHIFT,
-							       cp->a3);
+				kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm, 0, hmask,
+							       cp->a1, cp->a2, PAGE_SHIFT,
+							       cp->a3, vmid);
 		}
 		break;
 	default:
-- 
2.43.0