From nobody Fri Dec 19 16:46:36 2025
Reply-To: Sean Christopherson
Date: Fri, 14 Feb 2025 17:09:45 -0800
In-Reply-To:
<20250215010946.1201353-1-seanjc@google.com>
Message-ID: <20250215010946.1201353-2-seanjc@google.com>
Subject: [PATCH 1/2] KVM: SVM: Set RFLAGS.IF=1 in C code, to get VMRUN out of the STI shadow
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Doug Covelli

Enable/disable local IRQs, i.e. set/clear RFLAGS.IF, in the common
svm_vcpu_enter_exit() just after/before guest_state_{enter,exit}_irqoff()
so that VMRUN is not executed in an STI shadow.  AMD CPUs have a quirk
(some would say "bug") where the STI shadow bleeds into the guest's
int_state field if a #VMEXIT occurs during injection of an event, i.e. if
the VMRUN doesn't complete before the subsequent #VMEXIT.

The spurious "interrupts masked" state is relatively benign, as it only
occurs during event injection and is transient.  Because KVM is already
injecting an event, the guest can't be in HLT, and if KVM is querying IRQ
blocking for injection, then KVM would need to force an immediate exit
anyways since injecting multiple events is impossible.

However, because KVM copies int_state verbatim from vmcb02 to vmcb12, the
spurious STI shadow is visible to L1 when running a nested VM, which can
trip sanity checks, e.g. in VMware's VMM.

Hoist the STI+CLI all the way to C code, as the aforementioned calls to
guest_state_{enter,exit}_irqoff() already inform lockdep that IRQs are
enabled/disabled, and taking a fault on VMRUN with RFLAGS.IF=1 is already
possible.  I.e. if there's kernel code that is confused by running with
RFLAGS.IF=1, then it's already a problem.
In practice, since GIF=0 also blocks NMIs, the only change in exposure to
non-KVM code (relative to surrounding VMRUN with STI+CLI) is exception
handling code, and except for the kvm_rebooting=1 case, all exceptions in
the core VM-Enter/VM-Exit path are fatal.

Opportunistically document why KVM needs to do STI in the first place.

Reported-by: Doug Covelli
Closes: https://lore.kernel.org/all/CADH9ctBs1YPmE4aCfGPNBwA10cA8RuAk2gO7542DjMZgs4uzJQ@mail.gmail.com
Fixes: f14eec0a3203 ("KVM: SVM: move more vmentry code to assembly")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/svm.c     | 14 ++++++++++++++
 arch/x86/kvm/svm/vmenter.S | 10 +---------
 2 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 7640a84e554a..fa0687711c48 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4189,6 +4189,18 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, bool spec_ctrl_in
 
 	guest_state_enter_irqoff();
 
+	/*
+	 * Set RFLAGS.IF prior to VMRUN, as the host's RFLAGS.IF at the time of
+	 * VMRUN controls whether or not physical IRQs are masked (KVM always
+	 * runs with V_INTR_MASKING_MASK).  Toggle RFLAGS.IF here to avoid the
+	 * temptation to do STI+VMRUN+CLI, as AMD CPUs bleed the STI shadow
+	 * into guest state if delivery of an event during VMRUN triggers a
+	 * #VMEXIT, and the guest_state transitions already tell lockdep that
+	 * IRQs are being enabled/disabled.  Note!  GIF=0 for the entirety of
+	 * this path, so IRQs aren't actually unmasked while running host code.
+	 */
+	local_irq_enable();
+
 	amd_clear_divider();
 
 	if (sev_es_guest(vcpu->kvm))
@@ -4197,6 +4209,8 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, bool spec_ctrl_in
 	else
 		__svm_vcpu_run(svm, spec_ctrl_intercepted);
 
+	local_irq_disable();
+
 	guest_state_exit_irqoff();
 }
 
diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S
index 2ed80aea3bb1..0c61153b275f 100644
--- a/arch/x86/kvm/svm/vmenter.S
+++ b/arch/x86/kvm/svm/vmenter.S
@@ -170,12 +170,8 @@ SYM_FUNC_START(__svm_vcpu_run)
 	mov VCPU_RDI(%_ASM_DI), %_ASM_DI
 
 	/* Enter guest mode */
-	sti
-
 3:	vmrun %_ASM_AX
 4:
-	cli
-
 	/* Pop @svm to RAX while it's the only available register. */
 	pop %_ASM_AX
 
@@ -340,12 +336,8 @@ SYM_FUNC_START(__svm_sev_es_vcpu_run)
 	mov KVM_VMCB_pa(%rax), %rax
 
 	/* Enter guest mode */
-	sti
-
 1:	vmrun %rax
-
-2:	cli
-
+2:
 	/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
 	FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT

-- 
2.48.1.601.g30ceb7b040-goog

From nobody Fri Dec 19 16:46:36 2025
Reply-To: Sean Christopherson
Date: Fri, 14 Feb 2025 17:09:46 -0800
In-Reply-To: <20250215010946.1201353-1-seanjc@google.com>
Message-ID: <20250215010946.1201353-3-seanjc@google.com>
Subject: [PATCH 2/2] KVM: selftests: Assert that STI blocking isn't set after event injection
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Doug Covelli

Add an L1 (guest) assert to the nested exceptions test to verify that KVM
doesn't put VMRUN in an STI shadow (AMD CPUs bleed the shadow into the
guest's int_state if a #VMEXIT occurs before VMRUN fully completes).

Add a similar assert to the VMX side as well, because why not.
Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/x86/nested_exceptions_test.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86/nested_exceptions_test.c b/tools/testing/selftests/kvm/x86/nested_exceptions_test.c
index 3eb0313ffa39..3641a42934ac 100644
--- a/tools/testing/selftests/kvm/x86/nested_exceptions_test.c
+++ b/tools/testing/selftests/kvm/x86/nested_exceptions_test.c
@@ -85,6 +85,7 @@ static void svm_run_l2(struct svm_test_data *svm, void *l2_code, int vector,
 
 	GUEST_ASSERT_EQ(ctrl->exit_code, (SVM_EXIT_EXCP_BASE + vector));
 	GUEST_ASSERT_EQ(ctrl->exit_info_1, error_code);
+	GUEST_ASSERT(!ctrl->int_state);
 }
 
 static void l1_svm_code(struct svm_test_data *svm)
@@ -122,6 +123,7 @@ static void vmx_run_l2(void *l2_code, int vector, uint32_t error_code)
 	GUEST_ASSERT_EQ(vmreadz(VM_EXIT_REASON), EXIT_REASON_EXCEPTION_NMI);
 	GUEST_ASSERT_EQ((vmreadz(VM_EXIT_INTR_INFO) & 0xff), vector);
 	GUEST_ASSERT_EQ(vmreadz(VM_EXIT_INTR_ERROR_CODE), error_code);
+	GUEST_ASSERT(!vmreadz(GUEST_INTERRUPTIBILITY_INFO));
 }
 
 static void l1_vmx_code(struct vmx_pages *vmx)
-- 
2.48.1.601.g30ceb7b040-goog