From: Hui Min Mina Chou <minachou@andestech.com>
Cc: Radim Krčmář
Subject: [PATCH v3] RISC-V: KVM: flush VS-stage TLB after VCPU migration to prevent stale entries
Date: Thu, 23 Oct 2025 11:25:17 +0800
Message-ID: <20251023032517.2527193-1-minachou@andestech.com>

If multiple VCPUs of the same Guest/VM run on the same Host CPU,
hfence.vvma only flushes that Host CPU's VS-stage TLB. Other Host CPUs
may retain stale VS-stage entries. When a VCPU later migrates to a
different Host CPU, it can hit these stale GVA to GPA mappings, causing
unexpected faults in the Guest.

To fix this, kvm_riscv_gstage_vmid_sanitize() is extended to flush both
G-stage and VS-stage TLBs whenever a VCPU migrates to a different Host
CPU. This ensures that no stale VS-stage mappings remain after VCPU
migration.
Fixes: 92e450507d56 ("RISC-V: KVM: Cleanup stale TLB entries when host CPU changes")
Signed-off-by: Hui Min Mina Chou
Signed-off-by: Ben Zong-You Xie
Reviewed-by: Radim Krčmář
---
v3:
- Resolved build warning; updated header declaration and call site to
  kvm_riscv_local_tlb_sanitize
v2:
- Updated Fixes commit to 92e450507d56
- Renamed function to kvm_riscv_local_tlb_sanitize

 arch/riscv/include/asm/kvm_vmid.h | 2 +-
 arch/riscv/kvm/vcpu.c             | 2 +-
 arch/riscv/kvm/vmid.c             | 8 +++++++-
 3 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_vmid.h b/arch/riscv/include/asm/kvm_vmid.h
index ab98e1434fb7..75fb6e872ccd 100644
--- a/arch/riscv/include/asm/kvm_vmid.h
+++ b/arch/riscv/include/asm/kvm_vmid.h
@@ -22,6 +22,6 @@ unsigned long kvm_riscv_gstage_vmid_bits(void);
 int kvm_riscv_gstage_vmid_init(struct kvm *kvm);
 bool kvm_riscv_gstage_vmid_ver_changed(struct kvm_vmid *vmid);
 void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu);
-void kvm_riscv_gstage_vmid_sanitize(struct kvm_vcpu *vcpu);
+void kvm_riscv_local_tlb_sanitize(struct kvm_vcpu *vcpu);
 
 #endif
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 3ebcfffaa978..796218e4a462 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -968,7 +968,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		 * Note: This should be done after G-stage VMID has been
 		 * updated using kvm_riscv_gstage_vmid_ver_changed()
 		 */
-		kvm_riscv_gstage_vmid_sanitize(vcpu);
+		kvm_riscv_local_tlb_sanitize(vcpu);
 
 		trace_kvm_entry(vcpu);
 
diff --git a/arch/riscv/kvm/vmid.c b/arch/riscv/kvm/vmid.c
index 3b426c800480..6323f5383d36 100644
--- a/arch/riscv/kvm/vmid.c
+++ b/arch/riscv/kvm/vmid.c
@@ -125,7 +125,7 @@ void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu)
 		kvm_make_request(KVM_REQ_UPDATE_HGATP, v);
 }
 
-void kvm_riscv_gstage_vmid_sanitize(struct kvm_vcpu *vcpu)
+void kvm_riscv_local_tlb_sanitize(struct kvm_vcpu *vcpu)
 {
 	unsigned long vmid;
 
@@ -146,4 +146,10 @@ void kvm_riscv_gstage_vmid_sanitize(struct kvm_vcpu *vcpu)
 
 	vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
 	kvm_riscv_local_hfence_gvma_vmid_all(vmid);
+
+	/*
+	 * Flush VS-stage TLB entries after VCPU migration to avoid using
+	 * stale entries.
+	 */
+	kvm_riscv_local_hfence_vvma_all(vmid);
 }
-- 
2.34.1