From: Sean Christopherson
Date: Wed, 26 Feb 2025 17:48:58 -0800
Subject: [PATCH 7/7] KVM: SVM: Flush cache only on CPUs running SEV guest
Message-ID: <20250227014858.3244505-8-seanjc@google.com>
In-Reply-To: <20250227014858.3244505-1-seanjc@google.com>
References: <20250227014858.3244505-1-seanjc@google.com>
Reply-To: Sean Christopherson
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    x86@kernel.org, Sean Christopherson, Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Zheyun Shen,
    Tom Lendacky, Kevin Loughlin, Mingwei Zhang
X-Mailer: git-send-email 2.48.1.711.g2feabab25a-goog

From: Zheyun Shen

On AMD CPUs that do not enforce cache coherency for encrypted memory,
every memory page reclaimed from an SEV guest triggers WBNOINVD/WBINVD
on all CPUs, which hurts the performance of other programs on the host.

Typically, an AMD server may have 128 cores or more, while the SEV guest
might only utilize 8 of those cores. Meanwhile, the host can use
qemu-affinity to bind those 8 vCPUs to specific physical CPUs.

Therefore, recording which physical cores each vCPU has run on makes it
possible to avoid flushing the caches of all CPUs on every reclaim.

Signed-off-by: Zheyun Shen
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/sev.c | 42 +++++++++++++++++++++++++++++++++++-------
 arch/x86/kvm/svm/svm.h |  1 +
 2 files changed, 36 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 4238af23ab1b..b7a4cb728fba 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -447,6 +447,8 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
 	ret = sev_platform_init(&init_args);
 	if (ret)
 		goto e_free;
+	if (!zalloc_cpumask_var(&sev->have_run_cpus, GFP_KERNEL_ACCOUNT))
+		goto e_free;
 
 	/* This needs to happen after SEV/SNP firmware initialization. */
 	if (vm_type == KVM_X86_SNP_VM) {
@@ -706,16 +708,31 @@ static void sev_clflush_pages(struct page *pages[], unsigned long npages)
 	}
 }
 
-static void sev_writeback_caches(void)
+static void sev_writeback_caches(struct kvm *kvm)
 {
+	/*
+	 * Note, the caller is responsible for ensuring correctness if the mask
+	 * can be modified, e.g. if a CPU could be doing VMRUN.
+	 */
+	if (cpumask_empty(to_kvm_sev_info(kvm)->have_run_cpus))
+		return;
+
 	/*
 	 * Ensure that all dirty guest tagged cache entries are written back
 	 * before releasing the pages back to the system for use. CLFLUSH will
 	 * not do this without SME_COHERENT, and flushing many cache lines
 	 * individually is slower than blasting WBINVD for large VMs, so issue
-	 * WBNOINVD (or WBINVD if the "no invalidate" variant is unsupported).
+	 * WBNOINVD (or WBINVD if the "no invalidate" variant is unsupported)
+	 * on CPUs that have done VMRUN, i.e. may have dirtied data using the
+	 * VM's ASID.
+	 *
+	 * For simplicity, never remove CPUs from the bitmap. Ideally, KVM
+	 * would clear the mask when flushing caches, but doing so requires
+	 * serializing multiple calls and having responding CPUs (to the IPI)
+	 * mark themselves as still running if they are running (or about to
+	 * run) a vCPU for the VM.
 	 */
-	wbnoinvd_on_all_cpus();
+	wbnoinvd_on_many_cpus(to_kvm_sev_info(kvm)->have_run_cpus);
 }
 
 static unsigned long get_num_contig_pages(unsigned long idx,
@@ -2766,7 +2783,7 @@ int sev_mem_enc_unregister_region(struct kvm *kvm,
 		goto failed;
 	}
 
-	sev_writeback_caches();
+	sev_writeback_caches(kvm);
 
 	__unregister_enc_region_locked(kvm, region);
 
@@ -2914,6 +2931,7 @@ void sev_vm_destroy(struct kvm *kvm)
 	}
 
 	sev_asid_free(sev);
+	free_cpumask_var(sev->have_run_cpus);
 }
 
 void __init sev_set_cpu_caps(void)
@@ -3127,7 +3145,7 @@ static void sev_flush_encrypted_page(struct kvm_vcpu *vcpu, void *va)
 		return;
 
 do_sev_writeback_caches:
-	sev_writeback_caches();
+	sev_writeback_caches(vcpu->kvm);
 }
 
 void sev_guest_memory_reclaimed(struct kvm *kvm)
@@ -3140,7 +3158,7 @@ void sev_guest_memory_reclaimed(struct kvm *kvm)
 	if (!sev_guest(kvm) || sev_snp_guest(kvm))
 		return;
 
-	sev_writeback_caches();
+	sev_writeback_caches(kvm);
 }
 
 void sev_free_vcpu(struct kvm_vcpu *vcpu)
@@ -3456,7 +3474,17 @@ void sev_es_unmap_ghcb(struct vcpu_svm *svm)
 void pre_sev_run(struct vcpu_svm *svm, int cpu)
 {
 	struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, cpu);
-	unsigned int asid = sev_get_asid(svm->vcpu.kvm);
+	struct kvm *kvm = svm->vcpu.kvm;
+	unsigned int asid = sev_get_asid(kvm);
+
+	/*
+	 * To optimize cache flushes when memory is reclaimed from an SEV VM,
+	 * track physical CPUs that enter the guest for SEV VMs and thus can
+	 * have encrypted, dirty data in the cache, and flush caches only for
+	 * CPUs that have entered the guest.
+	 */
+	if (!cpumask_test_cpu(cpu, to_kvm_sev_info(kvm)->have_run_cpus))
+		cpumask_set_cpu(cpu, to_kvm_sev_info(kvm)->have_run_cpus);
 
 	/* Assign the asid allocated with this SEV guest */
 	svm->asid = asid;
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 5b159f017055..6ad18ce5a754 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -112,6 +112,7 @@ struct kvm_sev_info {
 	void *guest_req_buf; /* Bounce buffer for SNP Guest Request input */
 	void *guest_resp_buf; /* Bounce buffer for SNP Guest Request output */
 	struct mutex guest_req_mutex; /* Must acquire before using bounce buffers */
+	cpumask_var_t have_run_cpus; /* CPUs that have done VMRUN for this VM. */
 };
 
 struct kvm_svm {
-- 
2.48.1.711.g2feabab25a-goog
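
Note for readers looking at this patch outside the rest of the series: below
is a minimal, self-contained sketch of the pattern being added, i.e. allocate
a per-VM cpumask, mark the current physical CPU before entering the guest,
and target only those CPUs when a cache flush is needed.  The demo_* names
are made up for illustration, and the sketch uses the generic
on_each_cpu_mask() + WBINVD combination in place of the
wbnoinvd_on_many_cpus() helper introduced earlier in this series.

/*
 * Illustrative sketch only -- not the KVM code above.  The demo_* names
 * are hypothetical; the cpumask and IPI helpers are standard kernel APIs.
 */
#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/smp.h>
#include <asm/special_insns.h>

struct demo_vm {
	/* Physical CPUs that have entered this VM's guest context. */
	cpumask_var_t have_run_cpus;
};

static int demo_vm_init(struct demo_vm *vm)
{
	/* Zero-initialized so no CPU is flushed before the VM ever runs. */
	if (!zalloc_cpumask_var(&vm->have_run_cpus, GFP_KERNEL_ACCOUNT))
		return -ENOMEM;
	return 0;
}

static void demo_pre_run(struct demo_vm *vm, int cpu)
{
	/* Test first to avoid dirtying the cache line on every guest entry. */
	if (!cpumask_test_cpu(cpu, vm->have_run_cpus))
		cpumask_set_cpu(cpu, vm->have_run_cpus);
}

static void demo_wbinvd_ipi(void *unused)
{
	wbinvd();	/* Write back and invalidate this CPU's caches. */
}

static void demo_writeback_caches(struct demo_vm *vm)
{
	/* Nothing to flush if the VM has never run anywhere. */
	if (cpumask_empty(vm->have_run_cpus))
		return;

	/* Flush only CPUs that may hold dirty, guest-tagged cache lines. */
	on_each_cpu_mask(vm->have_run_cpus, demo_wbinvd_ipi, NULL, true);
}

static void demo_vm_destroy(struct demo_vm *vm)
{
	free_cpumask_var(vm->have_run_cpus);
}

As in the patch, the mask only ever grows; clearing CPUs from it when a
flush completes would require serializing flushers against concurrent
guest entries, which is the trade-off the comment in
sev_writeback_caches() spells out.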