From nobody Tue Apr 28 02:28:07 2026
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Eric Li, David Matlack, Oliver Upton
Date: Tue, 7 Jun 2022 21:35:50 +0000
Message-Id: <20220607213604.3346000-2-seanjc@google.com>
In-Reply-To: <20220607213604.3346000-1-seanjc@google.com>
References: <20220607213604.3346000-1-seanjc@google.com>
Subject: [PATCH v5 01/15] KVM: x86: Split kvm_is_valid_cr4() and export
 only the non-vendor bits

Split the common x86 parts of kvm_is_valid_cr4(), i.e. the reserved bits
checks, into a separate helper, __kvm_is_valid_cr4(), and export only the
inner helper to vendor code in order to prevent nested VMX from calling
back into vmx_is_valid_cr4() via kvm_is_valid_cr4().  On SVM, this is a
nop as SVM doesn't place any additional restrictions on CR4.  On VMX, this
is also currently a nop, but only because nested VMX is missing checks on
reserved CR4 bits for nested VM-Enter.
That bug will be fixed in a future patch, and could simply use
kvm_is_valid_cr4() as-is, but nVMX has _another_ bug where VMXON
emulation doesn't enforce VMX's restrictions on CR0/CR4.  The cleanest
and most intuitive way to fix the VMXON bug is to use
nested_host_cr{0,4}_valid().  If the CR4 variant routes through
kvm_is_valid_cr4(), using nested_host_cr4_valid() won't do the right
thing for the VMXON case as vmx_is_valid_cr4() enforces VMX's
restrictions if and only if the vCPU is post-VMXON.

Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/nested.c |  3 ++-
 arch/x86/kvm/vmx/vmx.c    |  4 ++--
 arch/x86/kvm/x86.c        | 12 +++++++++---
 arch/x86/kvm/x86.h        |  2 +-
 4 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 88da8edbe1e1..2953939d5bf4 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -320,7 +320,8 @@ static bool __nested_vmcb_check_save(struct kvm_vcpu *vcpu,
 		return false;
 	}
 
-	if (CC(!kvm_is_valid_cr4(vcpu, save->cr4)))
+	/* Note, SVM doesn't have any additional restrictions on CR4. */
+	if (CC(!__kvm_is_valid_cr4(vcpu, save->cr4)))
 		return false;
 
 	if (CC(!kvm_valid_efer(vcpu, save->efer)))
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index fd2e707faf2b..57df799ffa29 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3187,8 +3187,8 @@ static bool vmx_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 {
 	/*
 	 * We operate under the default treatment of SMM, so VMX cannot be
-	 * enabled under SMM.  Note, whether or not VMXE is allowed at all is
-	 * handled by kvm_is_valid_cr4().
+	 * enabled under SMM.  Note, whether or not VMXE is allowed at all,
+	 * i.e. is a reserved bit, is handled by common x86 code.
 	 */
 	if ((cr4 & X86_CR4_VMXE) && is_smm(vcpu))
 		return false;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2db6f0373fa3..540651cd28d7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1079,7 +1079,7 @@ int kvm_emulate_xsetbv(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_emulate_xsetbv);
 
-bool kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+bool __kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 {
 	if (cr4 & cr4_reserved_bits)
 		return false;
@@ -1087,9 +1087,15 @@ bool kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 	if (cr4 & vcpu->arch.cr4_guest_rsvd_bits)
 		return false;
 
-	return static_call(kvm_x86_is_valid_cr4)(vcpu, cr4);
+	return true;
+}
+EXPORT_SYMBOL_GPL(__kvm_is_valid_cr4);
+
+static bool kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+{
+	return __kvm_is_valid_cr4(vcpu, cr4) &&
+	       static_call(kvm_x86_is_valid_cr4)(vcpu, cr4);
 }
-EXPORT_SYMBOL_GPL(kvm_is_valid_cr4);
 
 void kvm_post_set_cr4(struct kvm_vcpu *vcpu, unsigned long old_cr4, unsigned long cr4)
 {
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 501b884b8cc4..1926d2cb8e79 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -434,7 +434,7 @@ static inline void kvm_machine_check(void)
 void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu);
 void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu);
 int kvm_spec_ctrl_test_value(u64 value);
-bool kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
+bool __kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
 int kvm_handle_memory_failure(struct kvm_vcpu *vcpu, int r,
 			      struct x86_exception *e);
 int kvm_handle_invpcid(struct kvm_vcpu *vcpu, unsigned long type, gva_t gva);
-- 
2.36.1.255.ge46751e96f-goog

From nobody Tue Apr 28 02:28:07 2026
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Eric Li, David Matlack, Oliver Upton
Date: Tue, 7 Jun 2022 21:35:51 +0000
Message-Id: <20220607213604.3346000-3-seanjc@google.com>
In-Reply-To: <20220607213604.3346000-1-seanjc@google.com>
References: <20220607213604.3346000-1-seanjc@google.com>
Subject: [PATCH v5 02/15] KVM: nVMX: Account for KVM reserved CR4 bits in
 consistency checks

Check that the guest (L2) and host (L1) CR4 values that would be loaded
by nested VM-Enter and VM-Exit respectively are valid with respect to
KVM's (L0 host) allowed CR4 bits.  Failure to check KVM reserved bits
would allow L1 to load an illegal CR4 (or trigger hardware VM-Fail or
failed VM-Entry) by massaging guest CPUID to allow features that are not
supported by KVM.

Amusingly, KVM itself is an accomplice in its doom, as KVM adjusts L1's
MSR_IA32_VMX_CR4_FIXED1 to allow L1 to enable bits for L2 based on L1's
CPUID model.

Note, although nested_{guest,host}_cr4_valid() are _currently_ used if
and only if the vCPU is post-VMXON (nested.vmxon == true), that may not
be true in the future, e.g. emulating VMXON has a bug where it doesn't
check the allowed/required CR0/CR4 bits.

Cc: stable@vger.kernel.org
Fixes: 3899152ccbf4 ("KVM: nVMX: fix checks on CR{0,4} during virtual VMX operation")
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/nested.h b/arch/x86/kvm/vmx/nested.h
index c92cea0b8ccc..129ae4e01f7c 100644
--- a/arch/x86/kvm/vmx/nested.h
+++ b/arch/x86/kvm/vmx/nested.h
@@ -281,7 +281,8 @@ static inline bool nested_cr4_valid(struct kvm_vcpu *vcpu, unsigned long val)
 	u64 fixed0 = to_vmx(vcpu)->nested.msrs.cr4_fixed0;
 	u64 fixed1 = to_vmx(vcpu)->nested.msrs.cr4_fixed1;
 
-	return fixed_bits_valid(val, fixed0, fixed1);
+	return fixed_bits_valid(val, fixed0, fixed1) &&
+	       __kvm_is_valid_cr4(vcpu, val);
 }
 
 /* No difference in the restrictions on guest and host CR4 in VMX operation. */
-- 
2.36.1.255.ge46751e96f-goog

From nobody Tue Apr 28 02:28:07 2026
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Eric Li, David Matlack, Oliver Upton
Date: Tue, 7 Jun 2022 21:35:52 +0000
Message-Id: <20220607213604.3346000-4-seanjc@google.com>
In-Reply-To: <20220607213604.3346000-1-seanjc@google.com>
References: <20220607213604.3346000-1-seanjc@google.com>
Subject: [PATCH v5 03/15] KVM: nVMX: Inject #UD if VMXON is attempted with
 incompatible CR0/CR4

Inject a #UD if L1 attempts VMXON with a CR0 or CR4 that is disallowed
per the associated nested VMX MSRs' fixed0/1 settings.  KVM cannot rely
on hardware to perform the checks, even for the few checks that have
higher priority than VM-Exit, as (a) KVM may have forced CR0/CR4 bits in
hardware while running the guest, (b) there may be incompatible CR0/CR4
bits that have lower priority than VM-Exit, e.g. CR0.NE, and (c)
userspace may have further restricted the allowed CR0/CR4 values by
manipulating the guest's nested VMX MSRs.

Note, despite a very strong desire to throw shade at Jim, commit
70f3aac964ae ("kvm: nVMX: Remove superfluous VMX instruction fault
checks") is not to blame for the buggy behavior (though the comment...).
That commit only removed the CR0.PE, EFLAGS.VM, and COMPATIBILITY mode
checks (though it did erroneously drop the CPL check, but that has
already been remedied).  KVM may force CR0.PE=1, but will do so only
when also forcing EFLAGS.VM=1 to emulate Real Mode, i.e. hardware will
still #UD.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=216033
Fixes: ec378aeef9df ("KVM: nVMX: Implement VMXON and VMXOFF")
Reported-by: Eric Li
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 7d8cd0ebcc75..9a5b6ef16c1c 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4968,20 +4968,25 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
 		| FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX;
 
 	/*
-	 * The Intel VMX Instruction Reference lists a bunch of bits that are
-	 * prerequisite to running VMXON, most notably cr4.VMXE must be set to
-	 * 1 (see vmx_is_valid_cr4() for when we allow the guest to set this).
-	 * Otherwise, we should fail with #UD.  But most faulting conditions
-	 * have already been checked by hardware, prior to the VM-exit for
-	 * VMXON.  We do test guest cr4.VMXE because processor CR4 always has
-	 * that bit set to 1 in non-root mode.
+	 * Note, KVM cannot rely on hardware to perform the CR0/CR4 #UD checks
+	 * that have higher priority than VM-Exit (see Intel SDM's pseudocode
+	 * for VMXON), as KVM must load valid CR0/CR4 values into hardware while
+	 * running the guest, i.e. KVM needs to check the _guest_ values.
+	 *
+	 * Rely on hardware for the other two pre-VM-Exit checks, !VM86 and
+	 * !COMPATIBILITY modes.  KVM may run the guest in VM86 to emulate Real
+	 * Mode, but KVM will never take the guest out of those modes.
 	 */
-	if (!kvm_read_cr4_bits(vcpu, X86_CR4_VMXE)) {
+	if (!nested_host_cr0_valid(vcpu, kvm_read_cr0(vcpu)) ||
+	    !nested_host_cr4_valid(vcpu, kvm_read_cr4(vcpu))) {
 		kvm_queue_exception(vcpu, UD_VECTOR);
 		return 1;
 	}
 
-	/* CPL=0 must be checked manually. */
+	/*
+	 * CPL=0 and all other checks that are lower priority than VM-Exit must
+	 * be checked manually.
+	 */
 	if (vmx_get_cpl(vcpu)) {
 		kvm_inject_gp(vcpu, 0);
 		return 1;
-- 
2.36.1.255.ge46751e96f-goog

From nobody Tue Apr 28 02:28:07 2026
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Eric Li, David Matlack, Oliver Upton
Date: Tue, 7 Jun 2022 21:35:53 +0000
Message-Id: <20220607213604.3346000-5-seanjc@google.com>
In-Reply-To: <20220607213604.3346000-1-seanjc@google.com>
References: <20220607213604.3346000-1-seanjc@google.com>
Subject: [PATCH v5 04/15] KVM: nVMX: Rename handle_vm{on,off}() to
 handle_vmx{on,off}()

Rename the exit handlers for VMXON and VMXOFF to match the instruction
names; the terms "vmon" and "vmoff" are not used anywhere in Intel's
documentation, nor are they used elsewhere in KVM.

Sadly, the exit reasons are exposed to userspace and so cannot be renamed
without breaking userspace. :-(

Fixes: ec378aeef9df ("KVM: nVMX: Implement VMXON and VMXOFF")
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 9a5b6ef16c1c..00c7b00c017a 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4958,7 +4958,7 @@ static int enter_vmx_operation(struct kvm_vcpu *vcpu)
 }
 
 /* Emulate the VMXON instruction. */
-static int handle_vmon(struct kvm_vcpu *vcpu)
+static int handle_vmxon(struct kvm_vcpu *vcpu)
 {
 	int ret;
 	gpa_t vmptr;
@@ -5055,7 +5055,7 @@ static inline void nested_release_vmcs12(struct kvm_vcpu *vcpu)
 }
 
 /* Emulate the VMXOFF instruction */
-static int handle_vmoff(struct kvm_vcpu *vcpu)
+static int handle_vmxoff(struct kvm_vcpu *vcpu)
 {
 	if (!nested_vmx_check_permission(vcpu))
 		return 1;
@@ -6832,8 +6832,8 @@ __init int nested_vmx_hardware_setup(int (*exit_handlers[])(struct kvm_vcpu *))
 	exit_handlers[EXIT_REASON_VMREAD]	= handle_vmread;
 	exit_handlers[EXIT_REASON_VMRESUME]	= handle_vmresume;
 	exit_handlers[EXIT_REASON_VMWRITE]	= handle_vmwrite;
-	exit_handlers[EXIT_REASON_VMOFF]	= handle_vmoff;
-	exit_handlers[EXIT_REASON_VMON]		= handle_vmon;
+	exit_handlers[EXIT_REASON_VMOFF]	= handle_vmxoff;
+	exit_handlers[EXIT_REASON_VMON]		= handle_vmxon;
 	exit_handlers[EXIT_REASON_INVEPT]	= handle_invept;
 	exit_handlers[EXIT_REASON_INVVPID]	= handle_invvpid;
 	exit_handlers[EXIT_REASON_VMFUNC]	= handle_vmfunc;
-- 
2.36.1.255.ge46751e96f-goog

From nobody Tue Apr 28 02:28:07 2026
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Eric Li, David Matlack, Oliver Upton
Date: Tue, 7 Jun 2022 21:35:54 +0000
Message-Id: <20220607213604.3346000-6-seanjc@google.com>
In-Reply-To: <20220607213604.3346000-1-seanjc@google.com>
References: <20220607213604.3346000-1-seanjc@google.com>
Subject: [PATCH v5 05/15] KVM: nVMX: Let userspace set nVMX MSR to any
 _host_ supported value

Restrict the nVMX MSRs based on KVM's config, not based on the guest's
current config.  Using the guest's config to audit the new config
prevents userspace from restoring the original config (KVM's config) if
at any point in the past the guest's config was restricted in any way.
Fixes: 62cc6b9dc61e ("KVM: nVMX: support restore of VMX capability MSRs")
Cc: stable@vger.kernel.org
Cc: David Matlack
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c | 100 ++++++++++++++++++++------------------
 1 file changed, 52 insertions(+), 48 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 00c7b00c017a..fca30e79b3a0 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -1223,7 +1223,7 @@ static int vmx_restore_vmx_basic(struct vcpu_vmx *vmx, u64 data)
 		BIT_ULL(49) | BIT_ULL(54) | BIT_ULL(55) |
 		/* reserved */
 		BIT_ULL(31) | GENMASK_ULL(47, 45) | GENMASK_ULL(63, 56);
-	u64 vmx_basic = vmx->nested.msrs.basic;
+	u64 vmx_basic = vmcs_config.nested.basic;
 
 	if (!is_bitwise_subset(vmx_basic, data, feature_and_reserved))
 		return -EINVAL;
@@ -1246,36 +1246,42 @@ static int vmx_restore_vmx_basic(struct vcpu_vmx *vmx, u64 data)
 	return 0;
 }
 
+static void vmx_get_control_msr(struct nested_vmx_msrs *msrs, u32 msr_index,
+				u32 **low, u32 **high)
+{
+	switch (msr_index) {
+	case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
+		*low = &msrs->pinbased_ctls_low;
+		*high = &msrs->pinbased_ctls_high;
+		break;
+	case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
+		*low = &msrs->procbased_ctls_low;
+		*high = &msrs->procbased_ctls_high;
+		break;
+	case MSR_IA32_VMX_TRUE_EXIT_CTLS:
+		*low = &msrs->exit_ctls_low;
+		*high = &msrs->exit_ctls_high;
+		break;
+	case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
+		*low = &msrs->entry_ctls_low;
+		*high = &msrs->entry_ctls_high;
+		break;
+	case MSR_IA32_VMX_PROCBASED_CTLS2:
+		*low = &msrs->secondary_ctls_low;
+		*high = &msrs->secondary_ctls_high;
+		break;
+	default:
+		BUG();
+	}
+}
+
 static int
 vmx_restore_control_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
 {
-	u64 supported;
 	u32 *lowp, *highp;
+	u64 supported;
 
-	switch (msr_index) {
-	case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
-		lowp = &vmx->nested.msrs.pinbased_ctls_low;
-		highp = &vmx->nested.msrs.pinbased_ctls_high;
-		break;
-	case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
-		lowp = &vmx->nested.msrs.procbased_ctls_low;
-		highp = &vmx->nested.msrs.procbased_ctls_high;
-		break;
-	case MSR_IA32_VMX_TRUE_EXIT_CTLS:
-		lowp = &vmx->nested.msrs.exit_ctls_low;
-		highp = &vmx->nested.msrs.exit_ctls_high;
-		break;
-	case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
-		lowp = &vmx->nested.msrs.entry_ctls_low;
-		highp = &vmx->nested.msrs.entry_ctls_high;
-		break;
-	case MSR_IA32_VMX_PROCBASED_CTLS2:
-		lowp = &vmx->nested.msrs.secondary_ctls_low;
-		highp = &vmx->nested.msrs.secondary_ctls_high;
-		break;
-	default:
-		BUG();
-	}
+	vmx_get_control_msr(&vmcs_config.nested, msr_index, &lowp, &highp);
 
 	supported = vmx_control_msr(*lowp, *highp);
 
@@ -1287,6 +1293,7 @@ vmx_restore_control_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
 	if (!is_bitwise_subset(supported, data, GENMASK_ULL(63, 32)))
 		return -EINVAL;
 
+	vmx_get_control_msr(&vmx->nested.msrs, msr_index, &lowp, &highp);
 	*lowp = data;
 	*highp = data >> 32;
 	return 0;
@@ -1300,10 +1307,8 @@ static int vmx_restore_vmx_misc(struct vcpu_vmx *vmx, u64 data)
 		BIT_ULL(28) | BIT_ULL(29) | BIT_ULL(30) |
 		/* reserved */
 		GENMASK_ULL(13, 9) | BIT_ULL(31);
-	u64 vmx_misc;
-
-	vmx_misc = vmx_control_msr(vmx->nested.msrs.misc_low,
-				   vmx->nested.msrs.misc_high);
+	u64 vmx_misc = vmx_control_msr(vmcs_config.nested.misc_low,
+				       vmcs_config.nested.misc_high);
 
 	if (!is_bitwise_subset(vmx_misc, data, feature_and_reserved_bits))
 		return -EINVAL;
@@ -1331,10 +1336,8 @@ static int vmx_restore_vmx_misc(struct vcpu_vmx *vmx, u64 data)
 
 static int vmx_restore_vmx_ept_vpid_cap(struct vcpu_vmx *vmx, u64 data)
 {
-	u64 vmx_ept_vpid_cap;
-
-	vmx_ept_vpid_cap = vmx_control_msr(vmx->nested.msrs.ept_caps,
-					   vmx->nested.msrs.vpid_caps);
+	u64 vmx_ept_vpid_cap = vmx_control_msr(vmcs_config.nested.ept_caps,
+					       vmcs_config.nested.vpid_caps);
 
 	/* Every bit is either reserved or a feature bit. */
 	if (!is_bitwise_subset(vmx_ept_vpid_cap, data, -1ULL))
@@ -1345,20 +1348,21 @@ static int vmx_restore_vmx_ept_vpid_cap(struct vcpu_vmx *vmx, u64 data)
 	return 0;
 }
 
+static u64 *vmx_get_fixed0_msr(struct nested_vmx_msrs *msrs, u32 msr_index)
+{
+	switch (msr_index) {
+	case MSR_IA32_VMX_CR0_FIXED0:
+		return &msrs->cr0_fixed0;
+	case MSR_IA32_VMX_CR4_FIXED0:
+		return &msrs->cr4_fixed0;
+	default:
+		BUG();
+	}
+}
+
 static int vmx_restore_fixed0_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
 {
-	u64 *msr;
-
-	switch (msr_index) {
-	case MSR_IA32_VMX_CR0_FIXED0:
-		msr = &vmx->nested.msrs.cr0_fixed0;
-		break;
-	case MSR_IA32_VMX_CR4_FIXED0:
-		msr = &vmx->nested.msrs.cr4_fixed0;
-		break;
-	default:
-		BUG();
-	}
+	const u64 *msr = vmx_get_fixed0_msr(&vmcs_config.nested, msr_index);
 
 	/*
 	 * 1 bits (which indicates bits which "must-be-1" during VMX operation)
@@ -1367,7 +1371,7 @@ static int vmx_restore_fixed0_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
 	if (!is_bitwise_subset(data, *msr, -1ULL))
 		return -EINVAL;
 
-	*msr = data;
+	*vmx_get_fixed0_msr(&vmx->nested.msrs, msr_index) = data;
 	return 0;
 }
 
@@ -1428,7 +1432,7 @@ int vmx_set_vmx_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
 		vmx->nested.msrs.vmcs_enum = data;
 		return 0;
 	case MSR_IA32_VMX_VMFUNC:
-		if (data & ~vmx->nested.msrs.vmfunc_controls)
+		if (data & ~vmcs_config.nested.vmfunc_controls)
 			return -EINVAL;
 		vmx->nested.msrs.vmfunc_controls = data;
 		return 0;
-- 
2.36.1.255.ge46751e96f-goog

From nobody Tue Apr 28 02:28:07 2026
lindbergh.monkeyblade.net ([23.128.96.19]:60128 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1381413AbiFGXjD (ORCPT ); Tue, 7 Jun 2022 19:39:03 -0400 Received: from mail-pg1-x549.google.com (mail-pg1-x549.google.com [IPv6:2607:f8b0:4864:20::549]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AE2EA22DFB9 for ; Tue, 7 Jun 2022 14:36:28 -0700 (PDT) Received: by mail-pg1-x549.google.com with SMTP id d125-20020a636883000000b003db5e24db27so9166919pgc.13 for ; Tue, 07 Jun 2022 14:36:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=reply-to:date:in-reply-to:message-id:mime-version:references :subject:from:to:cc; bh=QDhCTBvKksOgW3yltrev1z9m075RHWrH68Rp9z9odIU=; b=eHTU7XIQc58CpLgbBEygq7L2SQypYU0MagyyVlUopG1gE8AJx0F+aZHvE2r/hK415I qjsF+JPz+x6DvnKHVEuc1z7WLvdvsp6CDXL9dK+ZA48Sv6mIak40kdM5QBe/K5hKO1GD TtdcZCGYCBp3P2992BGn3vfIpsKsvWzkEsiB5RcuttYS2oMjNrwYhJE2nn2HWYxmrMoi WB4R3K4sNeo4D5O286ILY17iJm1pEQKGPNnTUga8shI3bUNo0fWv2M7ZhkP+oAhm1yEz rK4eyhIlwxNnitq/y5Xu32ygJuUgOVbbtmQi8YE2Xu3vm2Tl/FsWOJ9KVXoxWg8Ftidp 58AA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:reply-to:date:in-reply-to:message-id :mime-version:references:subject:from:to:cc; bh=QDhCTBvKksOgW3yltrev1z9m075RHWrH68Rp9z9odIU=; b=gF/KUOTXHb4rRsF7MYu4Nhv7ePt3CcJHPXUzoWwli2ZpzMU/C6TQY9DHtzU7Za2oqo MdkBK/g7+4v0pNjyJFc/3ldIFGbbTFoc4Cjpjmdt0fZrry+ogMbMgbZSB9P3d8QwQrdG 0xX7QJNG0aS9Q6H8ggZsnDZh59X9cKDFeoyKtPNv2+DpB4d4wU9TcyH9dDjQG3kkLPTs mrrG5KPHHC0z+Ce0I9b2jRPmo/4dVxXcOYlviLeGF+0la8DBzLccMIvDRJ3zUidt2R44 RdlL5mq2AysNP1MF5+V0uR7YfM5LTi2bZ34auzelFsZzWzaWefZvzWGRLeyuVHdjwAIT mkWg== X-Gm-Message-State: AOAM533yX47uiotAS25we6tqTmYr2bDSLTnlT08p3I2w5yLNwU1S6Xy3 z9dc5OvGSYc/AGPdkJooMIAXm26JmIk= X-Google-Smtp-Source: ABdhPJx/KMFWqCTxyQ9O6UQQqXbMKyHRzRmss/NznagKocVXY9AyFHedpTFWR5JWygryIHwK8G21Q8StQbs= X-Received: from seanjc.c.googlers.com 
([fda3:e722:ac3:cc00:7f:e700:c0a8:3e5]) (user=seanjc job=sendgmr) by 2002:a17:90a:df14:b0:1e3:33ba:a94 with SMTP id gp20-20020a17090adf1400b001e333ba0a94mr33864474pjb.83.1654637788240; Tue, 07 Jun 2022 14:36:28 -0700 (PDT) Reply-To: Sean Christopherson Date: Tue, 7 Jun 2022 21:35:55 +0000 In-Reply-To: <20220607213604.3346000-1-seanjc@google.com> Message-Id: <20220607213604.3346000-7-seanjc@google.com> Mime-Version: 1.0 References: <20220607213604.3346000-1-seanjc@google.com> X-Mailer: git-send-email 2.36.1.255.ge46751e96f-goog Subject: [PATCH v5 06/15] KVM: nVMX: Keep KVM updates to BNDCFGS ctrl bits across MSR write From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Eric Li , David Matlack , Oliver Upton Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Oliver Upton Since commit 5f76f6f5ff96 ("KVM: nVMX: Do not expose MPX VMX controls when guest MPX disabled"), KVM has taken ownership of the "load IA32_BNDCFGS" and "clear IA32_BNDCFGS" VMX entry/exit controls. The ABI is that these bits must be set in the IA32_VMX_TRUE_{ENTRY,EXIT}_CTLS MSRs if the guest's CPUID supports MPX, and clear otherwise. However, commit aedbaf4f6afd ("KVM: x86: Extract kvm_update_cpuid_runtime() from kvm_update_cpuid()") partially broke KVM ownership of the aforementioned bits. Before, kvm_update_cpuid() was exercised frequently when running a guest and constantly applied its own changes to the BNDCFGS bits. Now, the BNDCFGS bits are only ever updated after a KVM_SET_CPUID/KVM_SET_CPUID2 ioctl, meaning that a subsequent MSR write from userspace will clobber these values. Uphold the old ABI by reapplying KVM's tweaks to the BNDCFGS bits after an MSR write from userspace. 
Note, the old ABI that is being preserved is a KVM hack to workaround a userspace bug; see commit 5f76f6f5ff96 ("KVM: nVMX: Do not expose MPX VMX controls when guest MPX disabled"). VMCS controls are not tied to CPUID, i.e. KVM should not be mucking with unrelated things. The argument that it's KVM's responsibility to propagate CPUID state to VMX MSRs doesn't hold water, as the MPX shenanigans are an exception, not the norm, e.g. KVM doesn't perform the following adjustments (and this is but a subset of all possible tweaks): X86_FEATURE_LM =3D> VM_EXIT_HOST_ADDR_SPACE_SIZE, VM_ENTRY_IA32E_MODE, VMX_MISC_SAVE_EFER_LMA X86_FEATURE_TSC =3D> CPU_BASED_RDTSC_EXITING, CPU_BASED_USE_TSC_OFFSETTIN= G, SECONDARY_EXEC_TSC_SCALING X86_FEATURE_INVPCID_SINGLE =3D> SECONDARY_EXEC_ENABLE_INVPCID X86_FEATURE_MWAIT =3D> CPU_BASED_MONITOR_EXITING, CPU_BASED_MWAIT_EXITING X86_FEATURE_INTEL_PT =3D> SECONDARY_EXEC_PT_CONCEAL_VMX, SECONDARY_EXEC_P= T_USE_GPA, VM_EXIT_CLEAR_IA32_RTIT_CTL, VM_ENTRY_LOAD_IA32_R= TIT_CTL X86_FEATURE_XSAVES =3D> SECONDARY_EXEC_XSAVES Fixes: aedbaf4f6afd ("KVM: x86: Extract kvm_update_cpuid_runtime() from kvm= _update_cpuid()") Signed-off-by: Oliver Upton [sean: explicitly document the original KVM hack] Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/nested.c | 9 +++++++++ arch/x86/kvm/vmx/vmx.c | 2 +- arch/x86/kvm/vmx/vmx.h | 2 ++ 3 files changed, 12 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index fca30e79b3a0..d1c21d387716 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -1296,6 +1296,15 @@ vmx_restore_control_msr(struct vcpu_vmx *vmx, u32 ms= r_index, u64 data) vmx_get_control_msr(&vmx->nested.msrs, msr_index, &lowp, &highp); *lowp =3D data; *highp =3D data >> 32; + + /* + * To preserve an old, kludgy ABI, ensure KVM fiddling with the "true" + * entry/exit controls MSRs is preserved after userspace modifications. 
+ */ + if (msr_index =3D=3D MSR_IA32_VMX_TRUE_ENTRY_CTLS || + msr_index =3D=3D MSR_IA32_VMX_TRUE_EXIT_CTLS) + nested_vmx_entry_exit_ctls_update(&vmx->vcpu); + return 0; } =20 diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 57df799ffa29..3f1671d7cbe4 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -7398,7 +7398,7 @@ static void nested_vmx_cr_fixed1_bits_update(struct k= vm_vcpu *vcpu) #undef cr4_fixed1_update } =20 -static void nested_vmx_entry_exit_ctls_update(struct kvm_vcpu *vcpu) +void nested_vmx_entry_exit_ctls_update(struct kvm_vcpu *vcpu) { struct vcpu_vmx *vmx =3D to_vmx(vcpu); =20 diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index 71bcb486e73f..576fed7e33de 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -425,6 +425,8 @@ static inline void vmx_set_intercept_for_msr(struct kvm= _vcpu *vcpu, u32 msr, =20 void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu); =20 +void nested_vmx_entry_exit_ctls_update(struct kvm_vcpu *vcpu); + /* * Note, early Intel manuals have the write-low and read-high bitmap offse= ts * the wrong way round. 
The bitmaps control MSRs 0x00000000-0x00001fff and --=20 2.36.1.255.ge46751e96f-goog
From nobody Tue Apr 28 02:28:07 2026
Reply-To: Sean Christopherson Date: Tue, 7 Jun 2022 21:35:56 +0000 In-Reply-To: <20220607213604.3346000-1-seanjc@google.com> Message-Id: <20220607213604.3346000-8-seanjc@google.com> Mime-Version: 1.0 References: <20220607213604.3346000-1-seanjc@google.com> X-Mailer: git-send-email 2.36.1.255.ge46751e96f-goog Subject: [PATCH v5 07/15] KVM: VMX: Add helper to check if the guest PMU has PERF_GLOBAL_CTRL From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Eric Li , David Matlack , Oliver Upton Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8"
Add a helper to check if the guest PMU has PERF_GLOBAL_CTRL, which is unintuitive _and_ diverges from Intel's architecturally defined behavior. Even worse, KVM currently implements the check using two different, but equivalent, checks _and_ there has been at least one attempt to add a _third_ flavor.
Link: https://lore.kernel.org/all/Yk4ugOETeo%2FqDRbW@google.com Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/pmu_intel.c | 4 ++-- arch/x86/kvm/vmx/vmx.h | 12 ++++++++++++ 2 files changed, 14 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c index 5b85320fc9f1..6ce3b066f7d9 100644 --- a/arch/x86/kvm/vmx/pmu_intel.c +++ b/arch/x86/kvm/vmx/pmu_intel.c @@ -111,7 +111,7 @@ static bool intel_pmc_is_enabled(struct kvm_pmc *pmc) { struct kvm_pmu *pmu =3D pmc_to_pmu(pmc); =20 - if (pmu->version < 2) + if (!intel_pmu_has_perf_global_ctrl(pmu)) return true; =20 return test_bit(pmc->idx, (unsigned long *)&pmu->global_ctrl); @@ -208,7 +208,7 @@ static bool intel_is_valid_msr(struct kvm_vcpu *vcpu, u= 32 msr, bool host_initiat case MSR_CORE_PERF_GLOBAL_OVF_CTRL: if (host_initiated) return true; - return pmu->version > 1; + return intel_pmu_has_perf_global_ctrl(pmu); break; case MSR_IA32_PEBS_ENABLE: if (host_initiated) diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index 576fed7e33de..215f17eb6732 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -91,6 +91,18 @@ union vmx_exit_reason { u32 full; }; =20 +static inline bool intel_pmu_has_perf_global_ctrl(struct kvm_pmu *pmu) +{ + /* + * Architecturally, Intel's SDM states that IA32_PERF_GLOBAL_CTRL is + * supported if "CPUID.0AH: EAX[7:0] > 0", i.e. if the PMU version is + * greater than zero. However, KVM only exposes and emulates the MSR + * to/for the guest if the guest PMU supports at least "Architectural + * Performance Monitoring Version 2". 
+ */ + return pmu->version > 1; +} + #define vcpu_to_lbr_desc(vcpu) (&to_vmx(vcpu)->lbr_desc) #define vcpu_to_lbr_records(vcpu) (&to_vmx(vcpu)->lbr_desc.records) =20 --=20 2.36.1.255.ge46751e96f-goog
From nobody Tue Apr 28 02:28:07 2026
Reply-To: Sean Christopherson Date: Tue, 7 Jun 2022 21:35:57 +0000 In-Reply-To: <20220607213604.3346000-1-seanjc@google.com> Message-Id: <20220607213604.3346000-9-seanjc@google.com> Mime-Version: 1.0 References: <20220607213604.3346000-1-seanjc@google.com> X-Mailer: git-send-email 2.36.1.255.ge46751e96f-goog Subject: [PATCH v5 08/15] KVM: nVMX: Keep KVM updates to PERF_GLOBAL_CTRL ctrl bits across MSR write From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Eric Li , David Matlack , Oliver Upton Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8"
From: Oliver Upton Since commit 03a8871add95 ("KVM: nVMX: Expose load IA32_PERF_GLOBAL_CTRL VM-{Entry,Exit} control"), KVM has taken ownership of the "load IA32_PERF_GLOBAL_CTRL" VMX entry/exit control bits.
The ABI is that these bits will be set in the IA32_VMX_TRUE_{ENTRY,EXIT}_CTLS MSRs if the guest's CPUID exposes a vPMU that supports the IA32_PERF_GLOBAL_CTRL MSR (CPUID.0AH:EAX[7:0] > 1), and clear otherwise. However, commit aedbaf4f6afd ("KVM: x86: Extract kvm_update_cpuid_runtime() from kvm_update_cpuid()") partially broke KVM ownership of the aforementioned bits. Before, kvm_update_cpuid() was exercised frequently when running a guest and constantly applied its own changes to the "load IA32_PERF_GLOBAL_CTRL" bits. Now, the "load IA32_PERF_GLOBAL_CTRL" bits are only ever updated after a KVM_SET_CPUID/KVM_SET_CPUID2 ioctl, meaning that a subsequent MSR write from userspace will clobber these values. Note that older kernels without commit c44d9b34701d ("KVM: x86: Invoke vendor's vcpu_after_set_cpuid() after all common updates") still require that the entry/exit controls be updated from kvm_pmu_refresh(). Leave the benign call in place to allow for cleaner backporting and punt the cleanup to a later change. Uphold the old ABI by reapplying KVM's tweaks to the "load IA32_PERF_GLOBAL_CTRL" bits after an MSR write from userspace. Note, the old ABI that is being preserved is misguided KVM behavior that was introduced by commit 03a8871add95 ("KVM: nVMX: Expose load IA32_PERF_GLOBAL_CTRL VM-{Entry,Exit} control"). KVM's bogus tweaking of VMX MSRs was first implemented by commit 5f76f6f5ff96 ("KVM: nVMX: Do not expose MPX VMX controls when guest MPX disabled") to hack around a QEMU bug, and that bad behavior was unfortunately applied to PERF_GLOBAL_CTRL before it could be stamped out. 
Fixes: aedbaf4f6afd ("KVM: x86: Extract kvm_update_cpuid_runtime() from kvm= _update_cpuid()") Reported-by: Jim Mattson Signed-off-by: Oliver Upton [sean: explicitly document the original KVM hack, set bits iff CPU supports the control] Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 3f1671d7cbe4..73ec4746a4e6 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -7413,6 +7413,20 @@ void nested_vmx_entry_exit_ctls_update(struct kvm_vc= pu *vcpu) vmx->nested.msrs.exit_ctls_high &=3D ~VM_EXIT_CLEAR_BNDCFGS; } } + + /* + * KVM supports a 1-setting of the "load IA32_PERF_GLOBAL_CTRL" + * VM-{Entry,Exit} controls if the vPMU supports IA32_PERF_GLOBAL_CTRL. + */ + if (cpu_has_load_perf_global_ctrl()) { + if (intel_pmu_has_perf_global_ctrl(vcpu_to_pmu(vcpu))) { + vmx->nested.msrs.entry_ctls_high |=3D VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CT= RL; + vmx->nested.msrs.exit_ctls_high |=3D VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL; + } else { + vmx->nested.msrs.entry_ctls_high &=3D ~VM_ENTRY_LOAD_IA32_PERF_GLOBAL_C= TRL; + vmx->nested.msrs.exit_ctls_high &=3D ~VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTR= L; + } + } } =20 static void update_intel_pt_cfg(struct kvm_vcpu *vcpu) --=20 2.36.1.255.ge46751e96f-goog
From nobody Tue Apr 28 02:28:07 2026
Reply-To: Sean Christopherson Date: Tue, 7 Jun 2022 21:35:58 +0000 In-Reply-To: <20220607213604.3346000-1-seanjc@google.com> Message-Id: <20220607213604.3346000-10-seanjc@google.com> Mime-Version: 1.0 References: <20220607213604.3346000-1-seanjc@google.com> X-Mailer: git-send-email 2.36.1.255.ge46751e96f-goog Subject: [PATCH v5 09/15] KVM: nVMX: Drop nested_vmx_pmu_refresh() From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Eric Li , David Matlack , Oliver Upton Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Oliver Upton nested_vmx_pmu_refresh() is now unneeded, as the call to nested_vmx_entry_exit_ctls_update() in vmx_vcpu_after_set_cpuid() guarantees that the VM-{Entry,Exit} control MSR changes are applied after setting CPUID. Drop all vestiges of nested_vmx_pmu_refresh(). No functional change intended. 
Signed-off-by: Oliver Upton Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/nested.c | 22 ---------------------- arch/x86/kvm/vmx/nested.h | 2 -- arch/x86/kvm/vmx/pmu_intel.c | 3 --- 3 files changed, 27 deletions(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index d1c21d387716..4ba0e5540908 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -4847,28 +4847,6 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsig= ned long exit_qualification, return 0; } =20 -void nested_vmx_pmu_refresh(struct kvm_vcpu *vcpu, - bool vcpu_has_perf_global_ctrl) -{ - struct vcpu_vmx *vmx; - - if (!nested_vmx_allowed(vcpu)) - return; - - vmx =3D to_vmx(vcpu); - if (vcpu_has_perf_global_ctrl) { - vmx->nested.msrs.entry_ctls_high |=3D - VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL; - vmx->nested.msrs.exit_ctls_high |=3D - VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL; - } else { - vmx->nested.msrs.entry_ctls_high &=3D - ~VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL; - vmx->nested.msrs.exit_ctls_high &=3D - ~VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL; - } -} - static int nested_vmx_get_vmptr(struct kvm_vcpu *vcpu, gpa_t *vmpointer, int *ret) { diff --git a/arch/x86/kvm/vmx/nested.h b/arch/x86/kvm/vmx/nested.h index 129ae4e01f7c..88b00a7359e4 100644 --- a/arch/x86/kvm/vmx/nested.h +++ b/arch/x86/kvm/vmx/nested.h @@ -32,8 +32,6 @@ int vmx_set_vmx_msr(struct kvm_vcpu *vcpu, u32 msr_index,= u64 data); int vmx_get_vmx_msr(struct nested_vmx_msrs *msrs, u32 msr_index, u64 *pdat= a); int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualific= ation, u32 vmx_instruction_info, bool wr, int len, gva_t *ret); -void nested_vmx_pmu_refresh(struct kvm_vcpu *vcpu, - bool vcpu_has_perf_global_ctrl); void nested_mark_vmcs12_pages_dirty(struct kvm_vcpu *vcpu); bool nested_vmx_check_io_bitmaps(struct kvm_vcpu *vcpu, unsigned int port, int size); diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c index 6ce3b066f7d9..515ab6594333 100644 --- 
a/arch/x86/kvm/vmx/pmu_intel.c +++ b/arch/x86/kvm/vmx/pmu_intel.c @@ -597,9 +597,6 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu) bitmap_set(pmu->all_valid_pmc_idx, INTEL_PMC_MAX_GENERIC, pmu->nr_arch_fixed_counters); =20 - nested_vmx_pmu_refresh(vcpu, - intel_is_valid_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL, false)); - if (cpuid_model_is_consistent(vcpu)) x86_perf_get_lbr(&lbr_desc->records); else --=20 2.36.1.255.ge46751e96f-goog
From nobody Tue Apr 28 02:28:07 2026
Reply-To: Sean Christopherson Date: Tue, 7 Jun 2022 21:35:59 +0000 In-Reply-To: <20220607213604.3346000-1-seanjc@google.com> Message-Id: <20220607213604.3346000-11-seanjc@google.com> Mime-Version: 1.0 References: <20220607213604.3346000-1-seanjc@google.com> X-Mailer: git-send-email 2.36.1.255.ge46751e96f-goog Subject: [PATCH v5 10/15] KVM: nVMX: Add a quirk for KVM tweaks to VMX MSRs From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Eric Li , David Matlack , Oliver Upton Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8"
From: Oliver Upton Quirk KVM's misguided behavior of manipulating VMX MSRs based on guest
CPUID state. There is no requirement, at all, that a CPU support virtualizing a feature if said feature is supported in bare metal. I.e. the VMX MSRs exist independent of CPUID for a reason. One could argue that disabling features, as KVM does for the entry/exit controls for the IA32_PERF_GLOBAL_CTRL and IA32_BNDCFGS MSRs, is correct as such a configuration is contradictory, but KVM's policy is to let userspace have full control of the guest vCPU model so long as the host kernel is not at risk. Furthermore, mucking with the VMX MSRs creates a subtle, difficult to maintain ABI as KVM must ensure that any internal changes, e.g. to how KVM handles _any_ guest CPUID changes, yield the same functional result. Suggested-by: Sean Christopherson Signed-off-by: Oliver Upton Co-developed-by: Sean Christopherson Signed-off-by: Sean Christopherson --- Documentation/virt/kvm/api.rst | 21 +++++++++++++++++++++ arch/x86/include/asm/kvm_host.h | 3 ++- arch/x86/include/uapi/asm/kvm.h | 1 + arch/x86/kvm/vmx/nested.c | 5 +++-- arch/x86/kvm/vmx/vmx.c | 3 ++- 5 files changed, 29 insertions(+), 4 deletions(-) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index 42a1984fafc8..1095692ddab7 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -7374,6 +7374,27 @@ The valid bits in cap.args[0] are: hypercall instructions. Executing the incorrect hypercall instruction will generate a #UD within the guest. + + KVM_X86_QUIRK_TWEAK_VMX_MSRS By default, during a guest CPUID updat= e, + KVM adjusts the values of select VMX M= SRs + (usually based on guest CPUID): + + - If CPUID.07H:EBX[bit 14] (MPX) is se= t, KVM + sets IA32_VMX_TRUE_ENTRY_CTLS[bit 48] + ('load IA32_BNDCFGS') and + IA32_VMX_TRUE_EXIT_CTLS[bit 55] + ('clear IA32_BNDCFGS'). Otherwise, t= hese + corresponding MSR bits are cleared. 
+ - If CPUID.0AH:EAX[bits 7:0] > 1, KVM = sets + IA32_VMX_TRUE_ENTRY_CTLS[bit 45] + ('load IA32_PERF_GLOBAL_CTRL') and + IA32_VMX_TRUE_EXIT_CTLS[bit 44] + ('load IA32_PERF_GLOBAL_CTRL'). Othe= rwise, + these corresponding MSR bits are cle= ared. + + When this quirk is disabled, KVM will = not + change the values of the aforementioned VMX + MSRs during guest CPUID updates. =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D =20 7.32 KVM_CAP_MAX_VCPU_ID diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index 6cf5d77d7896..a783c82fb902 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -2011,6 +2011,7 @@ int memslot_rmap_alloc(struct kvm_memory_slot *slot, = unsigned long npages); KVM_X86_QUIRK_LAPIC_MMIO_HOLE | \ KVM_X86_QUIRK_OUT_7E_INC_RIP | \ KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT | \ - KVM_X86_QUIRK_FIX_HYPERCALL_INSN) + KVM_X86_QUIRK_FIX_HYPERCALL_INSN | \ + KVM_X86_QUIRK_TWEAK_VMX_MSRS) =20 #endif /* _ASM_X86_KVM_HOST_H */ diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kv= m.h index 24c807c8d5f7..0705178bd93d 100644 --- a/arch/x86/include/uapi/asm/kvm.h +++ b/arch/x86/include/uapi/asm/kvm.h @@ -438,6 +438,7 @@ struct kvm_sync_regs { #define KVM_X86_QUIRK_OUT_7E_INC_RIP (1 << 3) #define KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT (1 << 4) #define KVM_X86_QUIRK_FIX_HYPERCALL_INSN (1 << 5) +#define KVM_X86_QUIRK_TWEAK_VMX_MSRS (1 << 6) =20 #define KVM_STATE_NESTED_FORMAT_VMX 0 #define KVM_STATE_NESTED_FORMAT_SVM 1 diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 4ba0e5540908..dc2f9b06b99a 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -1301,8 +1301,9 @@ vmx_restore_control_msr(struct vcpu_vmx *vmx, u32 msr= _index, u64 data) * To preserve an old, 
kludgy ABI, ensure KVM fiddling with the "true" * entry/exit controls MSRs is preserved after userspace modifications. */ - if (msr_index =3D=3D MSR_IA32_VMX_TRUE_ENTRY_CTLS || - msr_index =3D=3D MSR_IA32_VMX_TRUE_EXIT_CTLS) + if ((msr_index =3D=3D MSR_IA32_VMX_TRUE_ENTRY_CTLS || + msr_index =3D=3D MSR_IA32_VMX_TRUE_EXIT_CTLS) && + kvm_check_has_quirk(vmx->vcpu.kvm, KVM_X86_QUIRK_TWEAK_VMX_MSRS)) nested_vmx_entry_exit_ctls_update(&vmx->vcpu); =20 return 0; diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 73ec4746a4e6..4c31c8f24329 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -7522,7 +7522,8 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu = *vcpu) =20 if (nested_vmx_allowed(vcpu)) { nested_vmx_cr_fixed1_bits_update(vcpu); - nested_vmx_entry_exit_ctls_update(vcpu); + if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_TWEAK_VMX_MSRS)) + nested_vmx_entry_exit_ctls_update(vcpu); } =20 if (boot_cpu_has(X86_FEATURE_INTEL_PT) && --=20 2.36.1.255.ge46751e96f-goog From nobody Tue Apr 28 02:28:07 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 834C1CCA499 for ; Wed, 8 Jun 2022 01:29:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1388312AbiFHB1g (ORCPT ); Tue, 7 Jun 2022 21:27:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60946 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1578992AbiFGXjJ (ORCPT ); Tue, 7 Jun 2022 19:39:09 -0400 Received: from mail-pl1-x64a.google.com (mail-pl1-x64a.google.com [IPv6:2607:f8b0:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BEDEE22E6A9 for ; Tue, 7 Jun 2022 14:36:37 -0700 (PDT) Received: by mail-pl1-x64a.google.com with SMTP id d11-20020a170902cecb00b00163fe890197so10005215plg.1 for ; 
Tue, 07 Jun 2022 14:36:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=reply-to:date:in-reply-to:message-id:mime-version:references :subject:from:to:cc; bh=NQ0NXcc6zaLq7ONEOxmfQD1Om9k/BwyLb/mU8W7ZfBE=; b=psYIhDQIbOYoDfMgxj9hZcW2N+nnDfFJ5muVwPxGWwXfML0dLWXF9sGEkHhJQoDmFP 5ExBSyxrvpuj1yTHNxeYkv2QEs4voLF7o/tmPLCLxcKmQaXuzaPyFwQU4oJkpOsm2Zx2 AP5IY8rEIYsZoLE7swfYED4wr9c1vD3MRfCrztDMxwFuUfnYkzPgMueQNGK1V6hXRONP 4yvkXyQ6DFG21NGtPby+nm1UIQRl06vjw7Anv5TDAebli3SXZapowg2PuEzEveDd1L2d 3U6N9oiHio93I0lk1n83VWY3SdKqQE60ZccbRD2cFT19l0E8NdLc3J9aZGLehAbMGHdK mY6A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:reply-to:date:in-reply-to:message-id :mime-version:references:subject:from:to:cc; bh=NQ0NXcc6zaLq7ONEOxmfQD1Om9k/BwyLb/mU8W7ZfBE=; b=Te7woG4+2NQFfUYllMR1nBUo1/K5jSrwIFse9zp8zGUS/pK4IJ2VZocek9QzkLvaCZ 1pqD7SBF8+EoLaqXcPaYzipIeskzqT+74tBnETlI2S2T4gJPXK6wClV88celQCuqZAjq TZgZxSLi+MUP58aJHIR8R/br2mpc3L9JdilZv7TzFkvi1kw09pc/1auHrIRQLhdIzJwl M/+im6XVOnl9zqwPULrbP07yK3Cn0QaCtcn+J6vp4MU+Pa0fw8OlZFsZwsMV0fSIBkBe HlKpR59aUj/u/fdIOB2aCzCLzMS/vve76ISDsO5QiwgTtz3SgmEgxVlOPTB1B0peNhGX 27Wg== X-Gm-Message-State: AOAM533YKnibyoIaiuOZpfAkW0+TFL8JM1Yio/VmY2d+8jM6EFWklaK9 l9UtMKw4yR1sq5ilPFSgRAP2EpX+VyA= X-Google-Smtp-Source: ABdhPJwB35ykedAAuBKX8Jf0+rndwi9hRqmJsr0aoHx0x/f2vEZfKa4n3yLn1zl4KS3zxkYRBIpKAiXEMcg= X-Received: from seanjc.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:3e5]) (user=seanjc job=sendgmr) by 2002:a62:38d7:0:b0:51c:4c30:3aef with SMTP id f206-20020a6238d7000000b0051c4c303aefmr2369470pfa.5.1654637796967; Tue, 07 Jun 2022 14:36:36 -0700 (PDT) Reply-To: Sean Christopherson Date: Tue, 7 Jun 2022 21:36:00 +0000 In-Reply-To: <20220607213604.3346000-1-seanjc@google.com> Message-Id: <20220607213604.3346000-12-seanjc@google.com> Mime-Version: 1.0 References: <20220607213604.3346000-1-seanjc@google.com> X-Mailer: git-send-email 2.36.1.255.ge46751e96f-goog Subject: [PATCH 
v5 11/15] KVM: nVMX: Set UMIP bit CR4_FIXED1 MSR when emulating UMIP From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Eric Li , David Matlack , Oliver Upton Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Make UMIP an "allowed-1" bit CR4_FIXED1 MSR when KVM is emulating UMIP. KVM emulates UMIP for both L1 and L2, and so should enumerate that L2 is allowed to have CR4.UMIP=3D1. Not setting the bit doesn't immediately break nVMX, as KVM does set/clear the bit in CR4_FIXED1 in response to a guest CPUID update, i.e. KVM will correctly (dis)allow nested VM-Entry based on whether or not UMIP is exposed to L1. That said, KVM should enumerate the bit as being allowed from time zero, e.g. userspace will see the wrong value if the MSR is read before CPUID is written. And a future patch will quirk KVM's behavior of stuffing CR4_FIXED1 in response to guest CPUID changes, as CR4_FIXED1 is not strictly required to match the CPUID model exposed to L1. 
Fixes: 0367f205a3b7 ("KVM: vmx: add support for emulating UMIP")
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index dc2f9b06b99a..5533c2474128 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -6781,6 +6781,9 @@ void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, u32 ept_caps)
 	rdmsrl(MSR_IA32_VMX_CR0_FIXED1, msrs->cr0_fixed1);
 	rdmsrl(MSR_IA32_VMX_CR4_FIXED1, msrs->cr4_fixed1);
 
+	if (vmx_umip_emulated())
+		msrs->cr4_fixed1 |= X86_CR4_UMIP;
+
 	msrs->vmcs_enum = nested_vmx_calc_vmcs_enum_msr();
 }
 
-- 
2.36.1.255.ge46751e96f-goog

From nobody Tue Apr 28 02:28:07 2026
Reply-To: Sean Christopherson
Date: Tue, 7 Jun 2022 21:36:01 +0000
In-Reply-To: <20220607213604.3346000-1-seanjc@google.com>
Message-Id: <20220607213604.3346000-13-seanjc@google.com>
References: <20220607213604.3346000-1-seanjc@google.com>
Subject: [PATCH v5 12/15] KVM: nVMX: Extend VMX MSRs quirk to CR0/4 fixed1 bits
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Eric Li, David Matlack, Oliver Upton

Extend the VMX MSRs quirk to the CR0/4_FIXED1 MSRs, i.e. when the quirk
is disabled, allow userspace to set the MSRs and do not rewrite the MSRs
during CPUID updates.

The bits that the guest (L2 in this case) is allowed to set are not
directly tied to CPUID.  Enumerating to L1 that it can set reserved CR0/4
bits is nonsensical and will ultimately result in a failed nested VM-Entry
(KVM enforces guest reserved CR4 bits on top of the VMX MSRs), but KVM
typically doesn't police the vCPU model except when doing so is necessary
to protect the host kernel.  Further restricting CR4 bits is however a
reasonable thing to do, e.g. to work around a bug in nested
virtualization, in which case exposing a feature to L1 is ok, but letting
L2 use the feature is not.

Of course, whether or not the L1 hypervisor will actually _check_ the
FIXED1 bits is another matter entirely, e.g. KVM currently assumes all
bits that can be set in the host can also be set in the guest.

Signed-off-by: Sean Christopherson
---
 Documentation/virt/kvm/api.rst |  8 ++++++++
 arch/x86/kvm/vmx/nested.c      | 33 ++++++++++++++++++++++++++++++---
 arch/x86/kvm/vmx/vmx.c         |  6 +++---
 3 files changed, 41 insertions(+), 6 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 1095692ddab7..88d1bbae031e 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -7391,6 +7391,14 @@ The valid bits in cap.args[0] are:
                                            IA32_VMX_TRUE_EXIT_CTLS[bit 44]
                                            ('load IA32_PERF_GLOBAL_CTRL'). Otherwise,
                                            these corresponding MSR bits are cleared.
+                                          - MSR_IA32_VMX_CR0_FIXED1 is unconditionally
+                                            set to 0xffffffff
+                                          - CR4.PCE is unconditionally set in
+                                            MSR_IA32_VMX_CR4_FIXED1.
+                                          - All CR4 bits with an associated CPUID
+                                            feature flag are set in
+                                            MSR_IA32_VMX_CR4_FIXED1 if the feature is
+                                            reported as supported in guest CPUID.
 
                                           When this quirk is disabled, KVM will not
                                           change the values of the aforementioned VMX
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 5533c2474128..abce74cfefc9 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -1385,6 +1385,30 @@ static int vmx_restore_fixed0_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
 	return 0;
 }
 
+static u64 *vmx_get_fixed1_msr(struct nested_vmx_msrs *msrs, u32 msr_index)
+{
+	switch (msr_index) {
+	case MSR_IA32_VMX_CR0_FIXED1:
+		return &msrs->cr0_fixed1;
+	case MSR_IA32_VMX_CR4_FIXED1:
+		return &msrs->cr4_fixed1;
+	default:
+		BUG();
+	}
+}
+
+static int vmx_restore_fixed1_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
+{
+	const u64 *msr = vmx_get_fixed1_msr(&vmcs_config.nested, msr_index);
+
+	/* Bits that are "must-be-0" must not be set in the restored value. */
+	if (!is_bitwise_subset(*msr, data, -1ULL))
+		return -EINVAL;
+
+	*vmx_get_fixed1_msr(&vmx->nested.msrs, msr_index) = data;
+	return 0;
+}
+
 /*
  * Called when userspace is restoring VMX MSRs.
  *
@@ -1432,10 +1456,13 @@ int vmx_set_vmx_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
 	case MSR_IA32_VMX_CR0_FIXED1:
 	case MSR_IA32_VMX_CR4_FIXED1:
 		/*
-		 * These MSRs are generated based on the vCPU's CPUID, so we
-		 * do not support restoring them directly.
+		 * These MSRs are generated based on the vCPU's CPUID when KVM
+		 * "owns" the VMX MSRs; do not allow restoring them directly.
		 */
-		return -EINVAL;
+		if (kvm_check_has_quirk(vmx->vcpu.kvm, KVM_X86_QUIRK_TWEAK_VMX_MSRS))
+			return -EINVAL;
+
+		return vmx_restore_fixed1_msr(vmx, msr_index, data);
 	case MSR_IA32_VMX_EPT_VPID_CAP:
 		return vmx_restore_vmx_ept_vpid_cap(vmx, data);
 	case MSR_IA32_VMX_VMCS_ENUM:
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 4c31c8f24329..139f365ca6bb 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7520,10 +7520,10 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	      ~(FEAT_CTL_VMX_ENABLED_INSIDE_SMX |
 		FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX);
 
-	if (nested_vmx_allowed(vcpu)) {
+	if (nested_vmx_allowed(vcpu) &&
+	    kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_TWEAK_VMX_MSRS)) {
 		nested_vmx_cr_fixed1_bits_update(vcpu);
-		if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_TWEAK_VMX_MSRS))
-			nested_vmx_entry_exit_ctls_update(vcpu);
+		nested_vmx_entry_exit_ctls_update(vcpu);
 	}
 
 	if (boot_cpu_has(X86_FEATURE_INTEL_PT) &&
-- 
2.36.1.255.ge46751e96f-goog

From nobody Tue Apr 28 02:28:07 2026
Reply-To: Sean Christopherson
Date: Tue, 7 Jun 2022 21:36:02 +0000
In-Reply-To: <20220607213604.3346000-1-seanjc@google.com>
Message-Id: <20220607213604.3346000-14-seanjc@google.com>
References: <20220607213604.3346000-1-seanjc@google.com>
Subject: [PATCH v5 13/15] KVM: selftests: Add test to verify KVM's VMX MSRs quirk for controls
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Eric Li, David Matlack, Oliver Upton

Add a test to verify KVM's established ABI with respect to the entry/exit
controls for PERF_GLOBAL_CTRL and BNDCFGS.  KVM has a quirk where it
updates the VMX "true" entry/exit control MSRs to force PERF_GLOBAL_CTRL
and BNDCFGS to follow the guest CPUID model, i.e. set when supported,
clear when not, even though the MSR values are not strictly associated
with CPUID.  Note, KVM's ABI is that its modifications to the MSRs are
preserved even when userspace explicitly writes the MSRs.

Verify that KVM correctly tweaks the MSRs when the quirk is enabled (the
default behavior), and does not touch them when the quirk is disabled.
Suggested-by: Oliver Upton
Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/include/x86_64/processor.h  |   1 +
 .../selftests/kvm/include/x86_64/vmx.h        |   2 +
 .../selftests/kvm/x86_64/vmx_msrs_test.c      | 161 ++++++++++++++++++
 5 files changed, 166 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/vmx_msrs_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 0ab0e255d292..5893804b5196 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -47,6 +47,7 @@
 /x86_64/vmx_dirty_log_test
 /x86_64/vmx_exception_with_invalid_guest_state
 /x86_64/vmx_invalid_nested_guest_state
+/x86_64/vmx_msrs_test
 /x86_64/vmx_preemption_timer_test
 /x86_64/vmx_set_nested_state_test
 /x86_64/vmx_tsc_adjust_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 9a256c1f1bdf..2ee2dc55c100 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -74,6 +74,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/vmx_apic_access_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_close_while_nested_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_dirty_log_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_exception_with_invalid_guest_state
+TEST_GEN_PROGS_x86_64 += x86_64/vmx_msrs_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_invalid_nested_guest_state
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_set_nested_state_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_tsc_adjust_test
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 3fd3d58148c2..51cab9b080f7 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -98,6 +98,7 @@ struct kvm_x86_cpu_feature {
 #define X86_FEATURE_SMEP		KVM_X86_CPU_FEATURE(0x7, 0, EBX, 7)
 #define X86_FEATURE_INVPCID		KVM_X86_CPU_FEATURE(0x7, 0, EBX, 10)
 #define X86_FEATURE_RTM			KVM_X86_CPU_FEATURE(0x7, 0, EBX, 11)
+#define X86_FEATURE_MPX			KVM_X86_CPU_FEATURE(0x7, 0, EBX, 14)
 #define X86_FEATURE_SMAP		KVM_X86_CPU_FEATURE(0x7, 0, EBX, 20)
 #define X86_FEATURE_PCOMMIT		KVM_X86_CPU_FEATURE(0x7, 0, EBX, 22)
 #define X86_FEATURE_CLFLUSHOPT		KVM_X86_CPU_FEATURE(0x7, 0, EBX, 23)
diff --git a/tools/testing/selftests/kvm/include/x86_64/vmx.h b/tools/testing/selftests/kvm/include/x86_64/vmx.h
index fe0ebb790b49..5a6002b34d2b 100644
--- a/tools/testing/selftests/kvm/include/x86_64/vmx.h
+++ b/tools/testing/selftests/kvm/include/x86_64/vmx.h
@@ -80,6 +80,7 @@
 #define VM_EXIT_SAVE_IA32_EFER			0x00100000
 #define VM_EXIT_LOAD_IA32_EFER			0x00200000
 #define VM_EXIT_SAVE_VMX_PREEMPTION_TIMER	0x00400000
+#define VM_EXIT_CLEAR_BNDCFGS			0x00800000
 
 #define VM_EXIT_ALWAYSON_WITHOUT_TRUE_MSR	0x00036dff
 
@@ -90,6 +91,7 @@
 #define VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL	0x00002000
 #define VM_ENTRY_LOAD_IA32_PAT			0x00004000
 #define VM_ENTRY_LOAD_IA32_EFER			0x00008000
+#define VM_ENTRY_LOAD_BNDCFGS			0x00010000
 
 #define VM_ENTRY_ALWAYSON_WITHOUT_TRUE_MSR	0x000011ff
 
diff --git a/tools/testing/selftests/kvm/x86_64/vmx_msrs_test.c b/tools/testing/selftests/kvm/x86_64/vmx_msrs_test.c
new file mode 100644
index 000000000000..9be2c2e3acf1
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/vmx_msrs_test.c
@@ -0,0 +1,161 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * VMX control MSR test
+ *
+ * Copyright (C) 2022 Google LLC.
+ *
+ * Tests for KVM ownership of bits in the VMX entry/exit control MSRs.
+ * Checks that KVM will set owned bits where appropriate, and will not if
+ * KVM_X86_QUIRK_TWEAK_VMX_MSRS is disabled.
+ */
+
+#include "kvm_util.h"
+#include "vmx.h"
+
+#define SUBTEST_REQUIRE(f)					\
+	if (!(f)) {						\
+		print_skip("Requirement not met: %s", #f);	\
+		return;						\
+	}
+
+static bool vmx_has_ctrl(struct kvm_vcpu *vcpu, uint32_t msr, uint32_t ctrl_mask)
+{
+	return (vcpu_get_msr(vcpu, msr) >> 32) & ctrl_mask;
+}
+
+static void test_vmx_ctrl_msr(struct kvm_vcpu *vcpu,
+			      uint32_t msr, uint64_t ctrl_mask,
+			      bool quirk_enabled, bool feature_enabled)
+{
+	uint64_t ctrl_allowed1 = ctrl_mask << 32;
+	uint64_t val = vcpu_get_msr(vcpu, msr);
+
+	/*
+	 * If the quirk is enabled, KVM should have modified the MSR when the
+	 * guest's CPUID was set.  Don't assert anything when the quirk is
+	 * disabled, the value of the MSR is not known (it could be made known,
+	 * but it gets messy and the added value is minimal).
+	 */
+	TEST_ASSERT(!quirk_enabled || (!!(val & ctrl_allowed1) == feature_enabled),
+		    "KVM owns the ctrl when the quirk is enabled, want 0x%lx, got 0x%lx",
+		    feature_enabled ? ctrl_allowed1 : 0, val & ctrl_allowed1);
+
+	val |= ctrl_allowed1;
+	vcpu_set_msr(vcpu, msr, val);
+
+	val = vcpu_get_msr(vcpu, msr);
+	if (quirk_enabled)
+		TEST_ASSERT(!!(val & ctrl_allowed1) == feature_enabled,
+			    "KVM owns the ctrl when the quirk is enabled, want 0x%lx, got 0x%lx",
+			    feature_enabled ?
 ctrl_allowed1 : 0, val & ctrl_allowed1);
+	else
+		TEST_ASSERT(val & ctrl_allowed1,
+			    "KVM shouldn't clear the ctrl when the quirk is disabled");
+
+	val &= ~ctrl_allowed1;
+	vcpu_set_msr(vcpu, msr, val);
+
+	val = vcpu_get_msr(vcpu, msr);
+	if (quirk_enabled)
+		TEST_ASSERT(!!(val & ctrl_allowed1) == feature_enabled,
+			    "KVM owns the ctrl when the quirk is enabled, want 0x%lx, got 0x%lx",
+			    feature_enabled ? ctrl_allowed1 : 0, val & ctrl_allowed1);
+	else
+		TEST_ASSERT(!(val & ctrl_allowed1),
+			    "KVM shouldn't set the ctrl when the quirk is disabled");
+}
+
+static void test_vmx_ctrl_msrs_pair(struct kvm_vcpu *vcpu,
+				    bool quirk_enabled, bool feature_enabled,
+				    uint32_t entry_msr, uint64_t entry_mask,
+				    uint32_t exit_msr, uint64_t exit_mask)
+{
+	test_vmx_ctrl_msr(vcpu, entry_msr, entry_mask, quirk_enabled, feature_enabled);
+	test_vmx_ctrl_msr(vcpu, exit_msr, exit_mask, quirk_enabled, feature_enabled);
+}
+
+static void test_vmx_ctrls(struct kvm_vm *vm, struct kvm_vcpu *vcpu,
+			   uint64_t entry_ctrl, uint64_t exit_ctrl)
+{
+	/*
+	 * KVM's quirky behavior only exists for PERF_GLOBAL_CTRL and BNDCFGS,
+	 * any attempt to extend KVM's quirky behavior must be met with fierce
+	 * resistance!
+	 */
+	TEST_ASSERT(entry_ctrl == VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL ||
+		    entry_ctrl == VM_ENTRY_LOAD_BNDCFGS,
+		    "Don't let KVM expand its quirk beyond PERF_GLOBAL_CTRL and BNDCFGS");
+
+	SUBTEST_REQUIRE(vmx_has_ctrl(vcpu, MSR_IA32_VMX_TRUE_ENTRY_CTLS, entry_ctrl));
+	SUBTEST_REQUIRE(vmx_has_ctrl(vcpu, MSR_IA32_VMX_TRUE_EXIT_CTLS, exit_ctrl));
+
+	/*
+	 * Test that, when the quirk is enabled, KVM sets/clears the VMX MSR
+	 * bits based on whether or not the feature is exposed to the guest.
+	 */
+	test_vmx_ctrl_msrs_pair(vcpu, true, true,
+				MSR_IA32_VMX_TRUE_ENTRY_CTLS, entry_ctrl,
+				MSR_IA32_VMX_TRUE_EXIT_CTLS, exit_ctrl);
+
+	/* Hide the feature in CPUID. */
+	if (entry_ctrl == VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL)
+		vcpu_clear_cpuid_entry(vcpu, 0xa);
+	else
+		vcpu_clear_cpuid_feature(vcpu, X86_FEATURE_MPX);
+
+	test_vmx_ctrl_msrs_pair(vcpu, true, false,
+				MSR_IA32_VMX_TRUE_ENTRY_CTLS, entry_ctrl,
+				MSR_IA32_VMX_TRUE_EXIT_CTLS, exit_ctrl);
+
+	/*
+	 * Disable the quirk, giving userspace control of the VMX MSRs.  KVM
+	 * should not touch the MSR, i.e. should allow hiding the control when
+	 * a vPMU is supported, and should allow exposing the control when a
+	 * vPMU is not supported.
+	 */
+	vm_enable_cap(vm, KVM_CAP_DISABLE_QUIRKS2, KVM_X86_QUIRK_TWEAK_VMX_MSRS);
+
+	test_vmx_ctrl_msrs_pair(vcpu, false, false,
+				MSR_IA32_VMX_TRUE_ENTRY_CTLS, entry_ctrl,
+				MSR_IA32_VMX_TRUE_EXIT_CTLS, exit_ctrl);
+
+	/* Restore the full CPUID to expose the feature to the guest. */
+	vcpu_init_cpuid(vcpu, kvm_get_supported_cpuid());
+	test_vmx_ctrl_msrs_pair(vcpu, false, true,
+				MSR_IA32_VMX_TRUE_ENTRY_CTLS, entry_ctrl,
+				MSR_IA32_VMX_TRUE_EXIT_CTLS, exit_ctrl);
+
+	vm_enable_cap(vm, KVM_CAP_DISABLE_QUIRKS2, 0);
+}
+
+static void load_perf_global_ctrl_test(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
+{
+	SUBTEST_REQUIRE(kvm_get_cpuid_max_basic() >= 0xa);
+
+	test_vmx_ctrls(vm, vcpu, VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL,
+		       VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL);
+}
+
+static void load_and_clear_bndcfgs_test(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
+{
+	SUBTEST_REQUIRE(kvm_cpu_has(X86_FEATURE_MPX));
+
+	test_vmx_ctrls(vm, vcpu, VM_ENTRY_LOAD_BNDCFGS, VM_EXIT_CLEAR_BNDCFGS);
+}
+
+int main(void)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
+
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_DISABLE_QUIRKS2));
+	TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_VMX));
+
+	/* No need to actually do KVM_RUN, thus no guest code. */
+	vm = vm_create_with_one_vcpu(&vcpu, NULL);
+
+	load_perf_global_ctrl_test(vm, vcpu);
+	load_and_clear_bndcfgs_test(vm, vcpu);
+
+	kvm_vm_free(vm);
+}
-- 
2.36.1.255.ge46751e96f-goog

From nobody Tue Apr 28 02:28:07 2026
Reply-To: Sean Christopherson
Date: Tue, 7 Jun 2022 21:36:03 +0000
In-Reply-To: <20220607213604.3346000-1-seanjc@google.com>
Message-Id: <20220607213604.3346000-15-seanjc@google.com>
References: <20220607213604.3346000-1-seanjc@google.com>
Subject: [PATCH v5 14/15] KVM: selftests: Extend VMX MSRs test to cover CR4_FIXED1 (and its quirks)
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Eric Li, David Matlack, Oliver Upton

Extend the VMX MSRs test to verify that KVM adheres to its established
quirky ABI for CR4_FIXED1 when the VMX MSRs quirk is enabled, and that
KVM doesn't touch CR4_FIXED1 when the quirk is disabled.
Signed-off-by: Sean Christopherson --- .../selftests/kvm/include/x86_64/processor.h | 7 ++ .../selftests/kvm/x86_64/vmx_msrs_test.c | 65 +++++++++++++++++++ 2 files changed, 72 insertions(+) diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools= /testing/selftests/kvm/include/x86_64/processor.h index 51cab9b080f7..716e72bc9163 100644 --- a/tools/testing/selftests/kvm/include/x86_64/processor.h +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h @@ -87,9 +87,16 @@ struct kvm_x86_cpu_feature { #define X86_FEATURE_XSAVE KVM_X86_CPU_FEATURE(0x1, 0, ECX, 26) #define X86_FEATURE_OSXSAVE KVM_X86_CPU_FEATURE(0x1, 0, ECX, 27) #define X86_FEATURE_RDRAND KVM_X86_CPU_FEATURE(0x1, 0, ECX, 30) +#define X86_FEATURE_VME KVM_X86_CPU_FEATURE(0x1, 0, EDX, 1) +#define X86_FEATURE_DE KVM_X86_CPU_FEATURE(0x1, 0, EDX, 2) +#define X86_FEATURE_PSE KVM_X86_CPU_FEATURE(0x1, 0, EDX, 3) +#define X86_FEATURE_TSC KVM_X86_CPU_FEATURE(0x1, 0, EDX, 4) +#define X86_FEATURE_PAE KVM_X86_CPU_FEATURE(0x1, 0, EDX, 6) #define X86_FEATURE_MCE KVM_X86_CPU_FEATURE(0x1, 0, EDX, 7) #define X86_FEATURE_APIC KVM_X86_CPU_FEATURE(0x1, 0, EDX, 9) +#define X86_FEATURE_PGE KVM_X86_CPU_FEATURE(0x1, 0, EDX, 13) #define X86_FEATURE_CLFLUSH KVM_X86_CPU_FEATURE(0x1, 0, EDX, 19) +#define X86_FEATURE_FXSR KVM_X86_CPU_FEATURE(0x1, 0, EDX, 24) #define X86_FEATURE_XMM KVM_X86_CPU_FEATURE(0x1, 0, EDX, 25) #define X86_FEATURE_XMM2 KVM_X86_CPU_FEATURE(0x1, 0, EDX, 26) #define X86_FEATURE_FSGSBASE KVM_X86_CPU_FEATURE(0x7, 0, EBX, 0) diff --git a/tools/testing/selftests/kvm/x86_64/vmx_msrs_test.c b/tools/tes= ting/selftests/kvm/x86_64/vmx_msrs_test.c index 9be2c2e3acf1..c0c4252a6a03 100644 --- a/tools/testing/selftests/kvm/x86_64/vmx_msrs_test.c +++ b/tools/testing/selftests/kvm/x86_64/vmx_msrs_test.c @@ -143,6 +143,70 @@ static void load_and_clear_bndcfgs_test(struct kvm_vm = *vm, struct kvm_vcpu *vcpu test_vmx_ctrls(vm, vcpu, VM_ENTRY_LOAD_BNDCFGS, VM_EXIT_CLEAR_BNDCFGS); } =20 +static void 
+cr4_reserved_bit_test(struct kvm_vm *vm, struct kvm_vcpu *vcpu,
+		      uint64_t cr4_bit,
+		      struct kvm_x86_cpu_feature feature)
+{
+	uint64_t val;
+	int r;
+
+	if (!kvm_cpu_has(feature))
+		return;
+
+	vcpu_set_cpuid_feature(vcpu, feature);
+	val = vcpu_get_msr(vcpu, MSR_IA32_VMX_CR4_FIXED1);
+	TEST_ASSERT(val & cr4_bit,
+		    "KVM should set CR4 bit when quirk and feature are enabled");
+
+	vcpu_clear_cpuid_feature(vcpu, feature);
+	val = vcpu_get_msr(vcpu, MSR_IA32_VMX_CR4_FIXED1);
+	TEST_ASSERT(!(val & cr4_bit),
+		    "KVM should clear CR4 bit when quirk is enabled and feature is disabled");
+
+	r = _vcpu_set_msr(vcpu, MSR_IA32_VMX_CR4_FIXED1, val);
+	TEST_ASSERT(r == 0, "Writing CR4_FIXED1 should fail when quirk is enabled");
+
+	vm_enable_cap(vm, KVM_CAP_DISABLE_QUIRKS2, KVM_X86_QUIRK_TWEAK_VMX_MSRS);
+
+	val &= ~cr4_bit;
+	vcpu_set_msr(vcpu, MSR_IA32_VMX_CR4_FIXED1, val);
+
+	vcpu_set_cpuid_feature(vcpu, feature);
+	val = vcpu_get_msr(vcpu, MSR_IA32_VMX_CR4_FIXED1);
+	TEST_ASSERT(!(val & cr4_bit),
+		    "KVM shouldn't set CR4 bit when quirk is disabled");
+
+	val |= cr4_bit;
+	vcpu_set_msr(vcpu, MSR_IA32_VMX_CR4_FIXED1, val);
+
+	vcpu_clear_cpuid_feature(vcpu, feature);
+	val = vcpu_get_msr(vcpu, MSR_IA32_VMX_CR4_FIXED1);
+	TEST_ASSERT(val & cr4_bit,
+		    "KVM shouldn't clear CR4 bit when quirk is disabled");
+
+	vm_enable_cap(vm, KVM_CAP_DISABLE_QUIRKS2, 0);
+}
+
+static void cr4_reserved_bits_test(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
+{
+	cr4_reserved_bit_test(vm, vcpu, X86_CR4_VME, X86_FEATURE_VME);
+	cr4_reserved_bit_test(vm, vcpu, X86_CR4_PVI, X86_FEATURE_VME);
+	cr4_reserved_bit_test(vm, vcpu, X86_CR4_TSD, X86_FEATURE_TSC);
+	cr4_reserved_bit_test(vm, vcpu, X86_CR4_DE, X86_FEATURE_DE);
+	cr4_reserved_bit_test(vm, vcpu, X86_CR4_PSE, X86_FEATURE_PSE);
+	cr4_reserved_bit_test(vm, vcpu, X86_CR4_PAE, X86_FEATURE_PAE);
+	cr4_reserved_bit_test(vm, vcpu, X86_CR4_MCE, X86_FEATURE_MCE);
+	cr4_reserved_bit_test(vm, vcpu, X86_CR4_PGE, X86_FEATURE_PGE);
+	cr4_reserved_bit_test(vm, vcpu, X86_CR4_OSFXSR, X86_FEATURE_FXSR);
+	cr4_reserved_bit_test(vm, vcpu, X86_CR4_OSXMMEXCPT, X86_FEATURE_XMM);
+	cr4_reserved_bit_test(vm, vcpu, X86_CR4_VMXE, X86_FEATURE_VMX);
+	cr4_reserved_bit_test(vm, vcpu, X86_CR4_SMXE, X86_FEATURE_SMX);
+	cr4_reserved_bit_test(vm, vcpu, X86_CR4_PCIDE, X86_FEATURE_PCID);
+	cr4_reserved_bit_test(vm, vcpu, X86_CR4_OSXSAVE, X86_FEATURE_XSAVE);
+	cr4_reserved_bit_test(vm, vcpu, X86_CR4_FSGSBASE, X86_FEATURE_FSGSBASE);
+	cr4_reserved_bit_test(vm, vcpu, X86_CR4_SMEP, X86_FEATURE_SMEP);
+	cr4_reserved_bit_test(vm, vcpu, X86_CR4_SMAP, X86_FEATURE_SMAP);
+	cr4_reserved_bit_test(vm, vcpu, X86_CR4_PKE, X86_FEATURE_PKU);
+	cr4_reserved_bit_test(vm, vcpu, X86_CR4_UMIP, X86_FEATURE_UMIP);
+	cr4_reserved_bit_test(vm, vcpu, X86_CR4_LA57, X86_FEATURE_LA57);
+}
+
 int main(void)
 {
 	struct kvm_vcpu *vcpu;
@@ -156,6 +220,7 @@ int main(void)
 
 	load_perf_global_ctrl_test(vm, vcpu);
 	load_and_clear_bndcfgs_test(vm, vcpu);
+	cr4_reserved_bits_test(vm, vcpu);
 
 	kvm_vm_free(vm);
 }
-- 
2.36.1.255.ge46751e96f-goog
Date: Tue, 7 Jun 2022 21:36:04 +0000
In-Reply-To: <20220607213604.3346000-1-seanjc@google.com>
Message-Id: <20220607213604.3346000-16-seanjc@google.com>
Subject: [PATCH v5 15/15] KVM: selftests: Verify VMX MSRs can be restored to KVM-supported values
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Eric Li, David Matlack, Oliver Upton

Verify that KVM allows toggling VMX MSR bits to be "more" restrictive,
and also allows restoring each MSR to KVM's original, less restrictive
value.

Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/x86_64/vmx_msrs_test.c | 61 +++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/vmx_msrs_test.c b/tools/testing/selftests/kvm/x86_64/vmx_msrs_test.c
index c0c4252a6a03..99d9614999c9 100644
--- a/tools/testing/selftests/kvm/x86_64/vmx_msrs_test.c
+++ b/tools/testing/selftests/kvm/x86_64/vmx_msrs_test.c
@@ -8,6 +8,7 @@
  * that KVM will set owned bits where appropriate, and will not if
  * KVM_X86_QUIRK_TWEAK_VMX_CTRL_MSRS is disabled.
  */
+#include <linux/bitmap.h>
 
 #include "kvm_util.h"
 #include "vmx.h"
@@ -207,6 +208,65 @@ static void cr4_reserved_bits_test(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
 	cr4_reserved_bit_test(vm, vcpu, X86_CR4_LA57, X86_FEATURE_LA57);
 }
 
+static void vmx_fixed1_msr_test(struct kvm_vcpu *vcpu, uint32_t msr_index,
+				uint64_t mask)
+{
+	uint64_t val = vcpu_get_msr(vcpu, msr_index);
+	uint64_t bit;
+
+	mask &= val;
+
+	for_each_set_bit(bit, &mask, 64) {
+		vcpu_set_msr(vcpu, msr_index, val & ~BIT_ULL(bit));
+		vcpu_set_msr(vcpu, msr_index, val);
+	}
+}
+
+static void vmx_fixed0_msr_test(struct kvm_vcpu *vcpu, uint32_t msr_index,
+				uint64_t mask)
+{
+	uint64_t val = vcpu_get_msr(vcpu, msr_index);
+	uint64_t bit;
+
+	mask = ~mask | val;
+
+	for_each_clear_bit(bit, &mask, 64) {
+		vcpu_set_msr(vcpu, msr_index, val | BIT_ULL(bit));
+		vcpu_set_msr(vcpu, msr_index, val);
+	}
+}
+
+static void vmx_fixed0and1_msr_test(struct kvm_vcpu *vcpu, uint32_t msr_index)
+{
+	vmx_fixed0_msr_test(vcpu, msr_index, GENMASK_ULL(31, 0));
+	vmx_fixed1_msr_test(vcpu, msr_index, GENMASK_ULL(63, 32));
+}
+
+static void vmx_save_restore_msrs_test(struct kvm_vcpu *vcpu)
+{
+	vcpu_set_msr(vcpu, MSR_IA32_VMX_VMCS_ENUM, 0);
+	vcpu_set_msr(vcpu, MSR_IA32_VMX_VMCS_ENUM, -1ull);
+
+	vmx_fixed1_msr_test(vcpu, MSR_IA32_VMX_BASIC,
+			    BIT_ULL(49) | BIT_ULL(54) | BIT_ULL(55));
+
+	vmx_fixed1_msr_test(vcpu, MSR_IA32_VMX_MISC,
+			    BIT_ULL(5) | GENMASK_ULL(8, 6) | BIT_ULL(14) |
+			    BIT_ULL(15) | BIT_ULL(28) | BIT_ULL(29) | BIT_ULL(30));
+
+	vmx_fixed0_msr_test(vcpu, MSR_IA32_VMX_CR0_FIXED0, -1ull);
+	vmx_fixed1_msr_test(vcpu, MSR_IA32_VMX_CR0_FIXED1, -1ull);
+	vmx_fixed0_msr_test(vcpu, MSR_IA32_VMX_CR4_FIXED0, -1ull);
+	vmx_fixed1_msr_test(vcpu, MSR_IA32_VMX_CR4_FIXED1, -1ull);
+	vmx_fixed0and1_msr_test(vcpu, MSR_IA32_VMX_PROCBASED_CTLS2);
+	vmx_fixed1_msr_test(vcpu, MSR_IA32_VMX_EPT_VPID_CAP, -1ull);
+	vmx_fixed0and1_msr_test(vcpu, MSR_IA32_VMX_TRUE_PINBASED_CTLS);
+	vmx_fixed0and1_msr_test(vcpu, MSR_IA32_VMX_TRUE_PROCBASED_CTLS);
+	vmx_fixed0and1_msr_test(vcpu, MSR_IA32_VMX_TRUE_EXIT_CTLS);
+	vmx_fixed0and1_msr_test(vcpu, MSR_IA32_VMX_TRUE_ENTRY_CTLS);
+	vmx_fixed1_msr_test(vcpu, MSR_IA32_VMX_VMFUNC, -1ull);
+}
+
 int main(void)
 {
 	struct kvm_vcpu *vcpu;
@@ -221,6 +281,7 @@ int main(void)
 	load_perf_global_ctrl_test(vm, vcpu);
 	load_and_clear_bndcfgs_test(vm, vcpu);
 	cr4_reserved_bits_test(vm, vcpu);
+	vmx_save_restore_msrs_test(vcpu);
 
 	kvm_vm_free(vm);
 }
-- 
2.36.1.255.ge46751e96f-goog