From nobody Wed Jan 22 08:45:53 2025
Date: Wed, 22 Jan 2025 01:34:37 +0000
In-Reply-To: <20250122013438.731416-1-kevinloughlin@google.com>
References: <20250122001329.647970-1-kevinloughlin@google.com>
 <20250122013438.731416-1-kevinloughlin@google.com>
Message-ID: <20250122013438.731416-2-kevinloughlin@google.com>
Subject: [PATCH v4 1/2] x86, lib: Add WBNOINVD helper functions
From: Kevin Loughlin <kevinloughlin@google.com>
To: linux-kernel@vger.kernel.org
Cc: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
 dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
 seanjc@google.com, pbonzini@redhat.com, kevinloughlin@google.com,
 kirill.shutemov@linux.intel.com, kai.huang@intel.com, ubizjak@gmail.com,
 jgross@suse.com, kvm@vger.kernel.org, thomas.lendacky@amd.com,
 pgonda@google.com, sidtelang@google.com, mizhang@google.com,
 rientjes@google.com, manalinandan@google.com, szy0127@sjtu.edu.cn

In line with WBINVD usage, add WBNOINVD helper functions. For the
wbnoinvd() helper, fall back to WBINVD if X86_FEATURE_WBNOINVD is not
present.

Signed-off-by: Kevin Loughlin <kevinloughlin@google.com>
---
 arch/x86/include/asm/smp.h           |  7 +++++++
 arch/x86/include/asm/special_insns.h | 15 ++++++++++++++-
 arch/x86/lib/cache-smp.c             | 12 ++++++++++++
 3 files changed, 33 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
index ca073f40698f..ecf93a243b83 100644
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -112,6 +112,7 @@ void native_play_dead(void);
 void play_dead_common(void);
 void wbinvd_on_cpu(int cpu);
 int wbinvd_on_all_cpus(void);
+int wbnoinvd_on_all_cpus(void);
 
 void smp_kick_mwait_play_dead(void);
 
@@ -160,6 +161,12 @@ static inline int wbinvd_on_all_cpus(void)
 	return 0;
 }
 
+static inline int wbnoinvd_on_all_cpus(void)
+{
+	wbnoinvd();
+	return 0;
+}
+
 static inline struct cpumask *cpu_llc_shared_mask(int cpu)
 {
 	return (struct cpumask *)cpumask_of(0);
diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 03e7c2d49559..94640c3491d7 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -117,7 +117,20 @@ static inline void wrpkru(u32 pkru)
 
 static __always_inline void wbinvd(void)
 {
-	asm volatile("wbinvd": : :"memory");
+	asm volatile("wbinvd" : : : "memory");
+}
+
+/*
+ * Cheaper version of wbinvd(). Call when caches
+ * need to be written back but not invalidated.
+ */
+static __always_inline void wbnoinvd(void)
+{
+	/*
+	 * Use the compatible but more destructive "invalidate"
+	 * variant when no-invalidate is unavailable.
+	 */
+	alternative("wbinvd", "wbnoinvd", X86_FEATURE_WBNOINVD);
 }
 
 static inline unsigned long __read_cr4(void)
diff --git a/arch/x86/lib/cache-smp.c b/arch/x86/lib/cache-smp.c
index 7af743bd3b13..7ac5cca53031 100644
--- a/arch/x86/lib/cache-smp.c
+++ b/arch/x86/lib/cache-smp.c
@@ -20,3 +20,15 @@ int wbinvd_on_all_cpus(void)
 	return 0;
 }
 EXPORT_SYMBOL(wbinvd_on_all_cpus);
+
+static void __wbnoinvd(void *dummy)
+{
+	wbnoinvd();
+}
+
+int wbnoinvd_on_all_cpus(void)
+{
+	on_each_cpu(__wbnoinvd, NULL, 1);
+	return 0;
+}
+EXPORT_SYMBOL(wbnoinvd_on_all_cpus);
-- 
2.48.1.262.g85cc9f2d1e-goog
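For reference, WBNOINVD is encoded as a REP-prefixed WBINVD (opcode
F3 0F 09), and CPUs without X86_FEATURE_WBNOINVD ignore the F3 prefix
and execute a plain WBINVD. A minimal sketch of the same
writeback-with-fallback semantics that emits the encoding directly,
bypassing the alternatives machinery (illustrative only, not part of
the patch; wbnoinvd_raw is a made-up name):

static __always_inline void wbnoinvd_raw(void)
{
	/*
	 * WBNOINVD (F3 0F 09) writes back all dirty cache lines without
	 * invalidating them; CPUs lacking the feature ignore the F3
	 * prefix and execute WBINVD, so the fallback comes for free.
	 */
	asm volatile(".byte 0xf3, 0x0f, 0x09" ::: "memory");
}

The alternative()-based helper in the patch instead patches in the
wbnoinvd mnemonic at boot based on the CPUID feature bit.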
From nobody Wed Jan 22 08:45:53 2025
Date: Wed, 22 Jan 2025 01:34:38 +0000
In-Reply-To: <20250122013438.731416-1-kevinloughlin@google.com>
References: <20250122001329.647970-1-kevinloughlin@google.com>
 <20250122013438.731416-1-kevinloughlin@google.com>
Message-ID: <20250122013438.731416-3-kevinloughlin@google.com>
Subject: [PATCH v4 2/2] KVM: SEV: Prefer WBNOINVD over WBINVD for cache
 maintenance efficiency
From: Kevin Loughlin <kevinloughlin@google.com>
To: linux-kernel@vger.kernel.org
Cc: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
 dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
 seanjc@google.com, pbonzini@redhat.com, kevinloughlin@google.com,
 kirill.shutemov@linux.intel.com, kai.huang@intel.com, ubizjak@gmail.com,
 jgross@suse.com, kvm@vger.kernel.org, thomas.lendacky@amd.com,
 pgonda@google.com, sidtelang@google.com, mizhang@google.com,
 rientjes@google.com, manalinandan@google.com, szy0127@sjtu.edu.cn

AMD CPUs currently execute WBINVD in the host when unregistering SEV
guest memory or when deactivating SEV guests. Such cache maintenance is
performed to prevent data corruption, wherein the encrypted (C=1)
version of a dirty cache line might otherwise only be written back
after the memory is written in a different context (ex: C=0), yielding
corruption. However, WBINVD is performance-costly, especially because
it invalidates processor caches.

Strictly speaking, unless the SEV ASID is being recycled (meaning the
SNP firmware requires the use of WBINVD prior to DF_FLUSH), the cache
invalidation triggered by WBINVD is unnecessary; only the writeback is
needed to prevent data corruption in the remaining scenarios.

To improve performance in these scenarios, use WBNOINVD when available
instead of WBINVD. WBNOINVD still writes back all dirty lines
(preventing host data corruption by SEV guests) but does *not*
invalidate processor caches. Note that the implementation of wbnoinvd()
falls back to WBINVD if X86_FEATURE_WBNOINVD is unavailable.

In anticipation of forthcoming optimizations to limit the WBNOINVD only
to physical CPUs that have executed SEV guests, place the call to
wbnoinvd_on_all_cpus() in a wrapper function sev_writeback_caches().
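A rough sketch of that anticipated optimization, assuming a
hypothetical cpumask populated whenever a physical CPU enters an SEV
guest (all names here are illustrative, not part of this series):

/* Hypothetical: bit set for each CPU that has run an SEV guest. */
static cpumask_t sev_cpus_to_writeback;

static void __sev_wbnoinvd(void *dummy)
{
	wbnoinvd();
}

static void sev_writeback_caches_targeted(void)
{
	/* Write back caches only where SEV guest data may be dirty. */
	on_each_cpu_mask(&sev_cpus_to_writeback, __sev_wbnoinvd, NULL, 1);
}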
Signed-off-by: Kevin Loughlin <kevinloughlin@google.com>
Reviewed-by: Mingwei Zhang <mizhang@google.com>
---
 arch/x86/kvm/svm/sev.c | 41 +++++++++++++++++++++--------------------
 1 file changed, 21 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index fe6cc763fd51..f10f1c53345e 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -116,6 +116,7 @@ static int sev_flush_asids(unsigned int min_asid, unsigned int max_asid)
 	 */
 	down_write(&sev_deactivate_lock);
 
+	/* SNP firmware requires use of WBINVD for ASID recycling. */
 	wbinvd_on_all_cpus();
 
 	if (sev_snp_enabled)
@@ -710,6 +711,16 @@ static void sev_clflush_pages(struct page *pages[], unsigned long npages)
 	}
 }
 
+static inline void sev_writeback_caches(void)
+{
+	/*
+	 * Ensure that all dirty guest tagged cache entries are written back
+	 * before releasing the pages back to the system for use. CLFLUSH will
+	 * not do this without SME_COHERENT, so issue a WBNOINVD.
+	 */
+	wbnoinvd_on_all_cpus();
+}
+
 static unsigned long get_num_contig_pages(unsigned long idx,
 				struct page **inpages, unsigned long npages)
 {
@@ -2773,12 +2784,7 @@ int sev_mem_enc_unregister_region(struct kvm *kvm,
 		goto failed;
 	}
 
-	/*
-	 * Ensure that all guest tagged cache entries are flushed before
-	 * releasing the pages back to the system for use. CLFLUSH will
-	 * not do this, so issue a WBINVD.
-	 */
-	wbinvd_on_all_cpus();
+	sev_writeback_caches();
 
 	__unregister_enc_region_locked(kvm, region);
 
@@ -2899,12 +2905,7 @@ void sev_vm_destroy(struct kvm *kvm)
 		return;
 	}
 
-	/*
-	 * Ensure that all guest tagged cache entries are flushed before
-	 * releasing the pages back to the system for use. CLFLUSH will
-	 * not do this, so issue a WBINVD.
-	 */
-	wbinvd_on_all_cpus();
+	sev_writeback_caches();
 
 	/*
 	 * if userspace was terminated before unregistering the memory regions
@@ -3126,16 +3127,16 @@ static void sev_flush_encrypted_page(struct kvm_vcpu *vcpu, void *va)
 
 	/*
 	 * VM Page Flush takes a host virtual address and a guest ASID. Fall
-	 * back to WBINVD if this faults so as not to make any problems worse
+	 * back to WBNOINVD if this faults so as not to make any problems worse
 	 * by leaving stale encrypted data in the cache.
 	 */
 	if (WARN_ON_ONCE(wrmsrl_safe(MSR_AMD64_VM_PAGE_FLUSH, addr | asid)))
-		goto do_wbinvd;
+		goto do_sev_writeback_caches;
 
 	return;
 
-do_wbinvd:
-	wbinvd_on_all_cpus();
+do_sev_writeback_caches:
+	sev_writeback_caches();
 }
 
 void sev_guest_memory_reclaimed(struct kvm *kvm)
@@ -3144,12 +3145,12 @@ void sev_guest_memory_reclaimed(struct kvm *kvm)
 	 * With SNP+gmem, private/encrypted memory is unreachable via the
 	 * hva-based mmu notifiers, so these events are only actually
 	 * pertaining to shared pages where there is no need to perform
-	 * the WBINVD to flush associated caches.
+	 * the WBNOINVD to flush associated caches.
 	 */
 	if (!sev_guest(kvm) || sev_snp_guest(kvm))
 		return;
 
-	wbinvd_on_all_cpus();
+	sev_writeback_caches();
 }
 
 void sev_free_vcpu(struct kvm_vcpu *vcpu)
@@ -3858,7 +3859,7 @@ static int __sev_snp_update_protected_guest_state(struct kvm_vcpu *vcpu)
 	 * guest-mapped page rather than the initial one allocated
 	 * by KVM in svm->sev_es.vmsa. In theory, svm->sev_es.vmsa
 	 * could be free'd and cleaned up here, but that involves
-	 * cleanups like wbinvd_on_all_cpus() which would ideally
+	 * cleanups like sev_writeback_caches() which would ideally
 	 * be handled during teardown rather than guest boot.
 	 * Deferring that also allows the existing logic for SEV-ES
 	 * VMSAs to be re-used with minimal SNP-specific changes.
@@ -4910,7 +4911,7 @@ void sev_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end)
 
 	/*
 	 * SEV-ES avoids host/guest cache coherency issues through
-	 * WBINVD hooks issued via MMU notifiers during run-time, and
+	 * WBNOINVD hooks issued via MMU notifiers during run-time, and
 	 * KVM's VM destroy path at shutdown. Those MMU notifier events
 	 * don't cover gmem since there is no requirement to map pages
 	 * to a HVA in order to use them for a running guest. While the
-- 
2.48.1.262.g85cc9f2d1e-goog
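As a closing usage illustration (hypothetical caller, not from this
series), the net effect of the new helper for any path that returns
encrypted guest pages to the allocator:

/*
 * Hypothetical example: write back all dirty cache lines on every CPU
 * before freed pages can be reused in a different encryption context.
 * Unlike WBINVD, clean lines stay resident in the caches.
 */
static void release_encrypted_pages(struct page **pages, unsigned long npages)
{
	unsigned long i;

	wbnoinvd_on_all_cpus();

	for (i = 0; i < npages; i++)
		__free_page(pages[i]);
}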