From nobody Thu Apr  2 20:25:28 2026
Date: Thu, 26 Mar 2026 10:49:23 -0700
In-Reply-To: <20260326174944.3820245-1-jmattson@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version:
1.0
References: <20260326174944.3820245-1-jmattson@google.com>
X-Mailer: git-send-email 2.53.0.1018.g2bb0e51243-goog
Message-ID: <20260326174944.3820245-6-jmattson@google.com>
Subject: [PATCH v6 05/10] KVM: x86: nSVM: Redirect IA32_PAT accesses to either hPAT or gPAT
From: Jim Mattson
To: Paolo Bonzini, Jonathan Corbet, Shuah Khan, Sean Christopherson,
 Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Yosry Ahmed
Cc: Jim Mattson
Content-Type: text/plain; charset="utf-8"

When KVM_X86_QUIRK_NESTED_SVM_SHARED_PAT is disabled and the vCPU is in
guest mode with nested NPT enabled, guest accesses to IA32_PAT are
redirected to the gPAT register, which is stored in VMCB02's g_pat
field. Non-guest accesses (e.g. from userspace) to IA32_PAT are always
redirected to hPAT, which is stored in vcpu->arch.pat.

Directing host-initiated accesses to hPAT ensures that KVM_GET/SET_MSRS
and KVM_GET/SET_NESTED_STATE are independent of each other and can be
ordered arbitrarily during save and restore. gPAT is saved and restored
separately via KVM_GET/SET_NESTED_STATE.

Add WARN_ON_ONCE to flag any host-initiated accesses originating from
KVM itself rather than userspace.
Fixes: 15038e147247 ("KVM: SVM: obey guest PAT")
Signed-off-by: Jim Mattson
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/nested.c |  9 -------
 arch/x86/kvm/svm/svm.c    | 53 ++++++++++++++++++++++++++++++++++-----
 arch/x86/kvm/svm/svm.h    |  1 -
 3 files changed, 47 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 405209f8c4cd..14063bef36f1 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -697,15 +697,6 @@ static int nested_svm_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3,
 	return 0;
 }
 
-void nested_vmcb02_compute_g_pat(struct vcpu_svm *svm)
-{
-	if (!svm->nested.vmcb02.ptr)
-		return;
-
-	/* FIXME: merge g_pat from vmcb01 and vmcb12. */
-	vmcb_set_gpat(svm->nested.vmcb02.ptr, svm->vmcb01.ptr->save.g_pat);
-}
-
 static bool nested_vmcb12_has_lbrv(struct kvm_vcpu *vcpu)
 {
 	return guest_cpu_cap_has(vcpu, X86_FEATURE_LBRV) &&
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index af808e83173e..6cf5fa87b4d4 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2776,6 +2776,47 @@ static bool sev_es_prevent_msr_access(struct kvm_vcpu *vcpu,
 	       !msr_write_intercepted(vcpu, msr_info->index);
 }
 
+static bool svm_pat_accesses_gpat(struct kvm_vcpu *vcpu, bool from_host)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	/*
+	 * When nested NPT is enabled, L2 has a separate PAT from L1. Guest
+	 * accesses to IA32_PAT while running L2 target L2's gPAT;
+	 * host-initiated accesses always target L1's hPAT so that
+	 * KVM_GET/SET_MSRS and KVM_GET/SET_NESTED_STATE are independent of
+	 * each other and can be ordered arbitrarily during save and restore.
+	 */
+	WARN_ON_ONCE(from_host && vcpu->wants_to_run);
+	return !from_host && is_guest_mode(vcpu) && l2_has_separate_pat(svm);
+}
+
+static u64 svm_get_pat(struct kvm_vcpu *vcpu, bool from_host)
+{
+	if (svm_pat_accesses_gpat(vcpu, from_host))
+		return to_svm(vcpu)->vmcb->save.g_pat;
+	else
+		return vcpu->arch.pat;
+}
+
+static void svm_set_pat(struct kvm_vcpu *vcpu, bool from_host, u64 data)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	if (svm_pat_accesses_gpat(vcpu, from_host)) {
+		vmcb_set_gpat(svm->vmcb, data);
+		return;
+	}
+
+	svm->vcpu.arch.pat = data;
+
+	if (npt_enabled) {
+		vmcb_set_gpat(svm->vmcb01.ptr, data);
+		if (is_guest_mode(&svm->vcpu) && !nested_npt_enabled(svm))
+			vmcb_set_gpat(svm->vmcb, data);
+	}
+}
+
 static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -2892,6 +2933,9 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_AMD64_DE_CFG:
 		msr_info->data = svm->msr_decfg;
 		break;
+	case MSR_IA32_CR_PAT:
+		msr_info->data = svm_get_pat(vcpu, msr_info->host_initiated);
+		break;
 	default:
 		return kvm_get_msr_common(vcpu, msr_info);
 	}
@@ -2975,13 +3019,10 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 
 		break;
 	case MSR_IA32_CR_PAT:
-		ret = kvm_set_msr_common(vcpu, msr);
-		if (ret)
-			break;
+		if (!kvm_pat_valid(data))
+			return 1;
 
-		vmcb_set_gpat(svm->vmcb01.ptr, data);
-		if (is_guest_mode(vcpu))
-			nested_vmcb02_compute_g_pat(svm);
+		svm_set_pat(vcpu, msr->host_initiated, data);
 		break;
 	case MSR_IA32_SPEC_CTRL:
 		if (!msr->host_initiated &&
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 3588f6d3fb9b..220b8cb0c80f 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -864,7 +864,6 @@ void nested_copy_vmcb_control_to_cache(struct vcpu_svm *svm,
 void nested_copy_vmcb_save_to_cache(struct vcpu_svm *svm,
 				    struct vmcb_save_area *save);
 void nested_sync_control_from_vmcb02(struct vcpu_svm *svm);
-void nested_vmcb02_compute_g_pat(struct vcpu_svm *svm);
 void svm_switch_vmcb(struct vcpu_svm *svm, struct kvm_vmcb_info *target_vmcb);
 
 extern struct kvm_x86_nested_ops svm_nested_ops;
-- 
2.53.0.1018.g2bb0e51243-goog