From nobody Thu Apr  2 23:54:07 2026
Reply-To: Sean Christopherson <seanjc@google.com>
Date: Fri, 13 Feb 2026 17:26:53 -0800
In-Reply-To: <20260214012702.2368778-1-seanjc@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20260214012702.2368778-1-seanjc@google.com>
X-Mailer: git-send-email
 2.53.0.310.g728cabbaf7-goog
Message-ID: <20260214012702.2368778-8-seanjc@google.com>
Subject: [PATCH v3 07/16] KVM: SVM: Move core EFER.SVME enablement to kernel
From: Sean Christopherson <seanjc@google.com>
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	x86@kernel.org, Kiryl Shutsemau, Peter Zijlstra,
	Arnaldo Carvalho de Melo, Namhyung Kim, Sean Christopherson,
	Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, linux-coco@lists.linux.dev,
	kvm@vger.kernel.org, linux-perf-users@vger.kernel.org,
	Chao Gao, Xu Yilun, Dan Williams
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"

Move the innermost EFER.SVME logic out of KVM and into core x86 to land
the SVM support alongside VMX support.  This will allow providing a more
unified API from the kernel to KVM, and will allow moving the bulk of the
emergency disabling insanity out of KVM without having a weird split
between kernel and KVM for SVM vs. VMX.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/virt.h |  6 +++++
 arch/x86/kvm/svm/svm.c      | 33 +++++------------------
 arch/x86/virt/hw.c          | 53 +++++++++++++++++++++++++++++++++++++
 3 files changed, 65 insertions(+), 27 deletions(-)

diff --git a/arch/x86/include/asm/virt.h b/arch/x86/include/asm/virt.h
index cca0210a5c16..9a0753eaa20c 100644
--- a/arch/x86/include/asm/virt.h
+++ b/arch/x86/include/asm/virt.h
@@ -15,6 +15,12 @@ int x86_vmx_disable_virtualization_cpu(void);
 void x86_vmx_emergency_disable_virtualization_cpu(void);
 #endif
 
+#if IS_ENABLED(CONFIG_KVM_AMD)
+int x86_svm_enable_virtualization_cpu(void);
+int x86_svm_disable_virtualization_cpu(void);
+void x86_svm_emergency_disable_virtualization_cpu(void);
+#endif
+
 #else
 static __always_inline void x86_virt_init(void) {}
 #endif
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 0ae66c770ebc..5f033bf3ba83 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -478,27 +478,9 @@ static __always_inline struct sev_es_save_area *sev_es_host_save_area(struct svm
 	return &sd->save_area->host_sev_es_save;
 }
 
-static inline void kvm_cpu_svm_disable(void)
-{
-	uint64_t efer;
-
-	wrmsrq(MSR_VM_HSAVE_PA, 0);
-	rdmsrq(MSR_EFER, efer);
-	if (efer & EFER_SVME) {
-		/*
-		 * Force GIF=1 prior to disabling SVM, e.g. to ensure INIT and
-		 * NMI aren't blocked.
-		 */
-		stgi();
-		wrmsrq(MSR_EFER, efer & ~EFER_SVME);
-	}
-}
-
 static void svm_emergency_disable_virtualization_cpu(void)
 {
-	virt_rebooting = true;
-
-	kvm_cpu_svm_disable();
+	wrmsrq(MSR_VM_HSAVE_PA, 0);
 }
 
 static void svm_disable_virtualization_cpu(void)
@@ -507,7 +489,7 @@ static void svm_disable_virtualization_cpu(void)
 	if (tsc_scaling)
 		__svm_write_tsc_multiplier(SVM_TSC_RATIO_DEFAULT);
 
-	kvm_cpu_svm_disable();
+	x86_svm_disable_virtualization_cpu();
 
 	amd_pmu_disable_virt();
 }
@@ -516,12 +498,12 @@ static int svm_enable_virtualization_cpu(void)
 {
 
 	struct svm_cpu_data *sd;
-	uint64_t efer;
 	int me = raw_smp_processor_id();
+	int r;
 
-	rdmsrq(MSR_EFER, efer);
-	if (efer & EFER_SVME)
-		return -EBUSY;
+	r = x86_svm_enable_virtualization_cpu();
+	if (r)
+		return r;
 
 	sd = per_cpu_ptr(&svm_data, me);
 	sd->asid_generation = 1;
@@ -529,8 +511,6 @@ static int svm_enable_virtualization_cpu(void)
 	sd->next_asid = sd->max_asid + 1;
 	sd->min_asid = max_sev_asid + 1;
 
-	wrmsrq(MSR_EFER, efer | EFER_SVME);
-
 	wrmsrq(MSR_VM_HSAVE_PA, sd->save_area_pa);
 
 	if (static_cpu_has(X86_FEATURE_TSCRATEMSR)) {
@@ -541,7 +521,6 @@ static int svm_enable_virtualization_cpu(void)
 		__svm_write_tsc_multiplier(SVM_TSC_RATIO_DEFAULT);
 	}
 
-
 	/*
 	 * Get OSVW bits.
 	 *
diff --git a/arch/x86/virt/hw.c b/arch/x86/virt/hw.c
index dc426c2bc24a..014e9dfab805 100644
--- a/arch/x86/virt/hw.c
+++ b/arch/x86/virt/hw.c
@@ -163,6 +163,59 @@ static __init int x86_vmx_init(void)
 static __init int x86_vmx_init(void) { return -EOPNOTSUPP; }
 #endif
 
+#if IS_ENABLED(CONFIG_KVM_AMD)
+int x86_svm_enable_virtualization_cpu(void)
+{
+	u64 efer;
+
+	if (!cpu_feature_enabled(X86_FEATURE_SVM))
+		return -EOPNOTSUPP;
+
+	rdmsrq(MSR_EFER, efer);
+	if (efer & EFER_SVME)
+		return -EBUSY;
+
+	wrmsrq(MSR_EFER, efer | EFER_SVME);
+	return 0;
+}
+EXPORT_SYMBOL_FOR_KVM(x86_svm_enable_virtualization_cpu);
+
+int x86_svm_disable_virtualization_cpu(void)
+{
+	int r = -EIO;
+	u64 efer;
+
+	/*
+	 * Force GIF=1 prior to disabling SVM, e.g. to ensure INIT and
+	 * NMI aren't blocked.
+	 */
+	asm goto("1: stgi\n\t"
+		 _ASM_EXTABLE(1b, %l[fault])
+		 ::: "memory" : fault);
+	r = 0;
+
+fault:
+	rdmsrq(MSR_EFER, efer);
+	wrmsrq(MSR_EFER, efer & ~EFER_SVME);
+	return r;
+}
+EXPORT_SYMBOL_FOR_KVM(x86_svm_disable_virtualization_cpu);
+
+void x86_svm_emergency_disable_virtualization_cpu(void)
+{
+	u64 efer;
+
+	virt_rebooting = true;
+
+	rdmsrq(MSR_EFER, efer);
+	if (!(efer & EFER_SVME))
+		return;
+
+	x86_svm_disable_virtualization_cpu();
+}
+EXPORT_SYMBOL_FOR_KVM(x86_svm_emergency_disable_virtualization_cpu);
+#endif
+
 void __init x86_virt_init(void)
 {
 	x86_vmx_init();
-- 
2.53.0.310.g728cabbaf7-goog