From nobody Sat Feb 7 22:55:09 2026
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky
Subject: [PATCH v2 01/21] KVM: nSVM: Check instead of asserting on nested TSC scaling support
Date: Fri, 28 Jul 2023 18:15:48 -0700
Message-ID: <20230729011608.1065019-2-seanjc@google.com>
In-Reply-To: <20230729011608.1065019-1-seanjc@google.com>
References: <20230729011608.1065019-1-seanjc@google.com>

Check for nested TSC scaling support on nested SVM VMRUN instead of
asserting that TSC scaling is exposed to L1 if L1's MSR_AMD64_TSC_RATIO
has diverged from KVM's default.  Userspace can trigger the WARN at will
by writing the MSR and then updating guest CPUID to hide the feature
(modifying guest CPUID is allowed anytime before KVM_RUN).  E.g.
hacking KVM's state_test selftest to do

	vcpu_set_msr(vcpu, MSR_AMD64_TSC_RATIO, 0);
	vcpu_clear_cpuid_feature(vcpu, X86_FEATURE_TSCRATEMSR);

after restoring state in a new VM+vCPU yields an endless supply of:

 ------------[ cut here ]------------
 WARNING: CPU: 164 PID: 62565 at arch/x86/kvm/svm/nested.c:699 nested_vmcb02_prepare_control+0x3d6/0x3f0 [kvm_amd]
 Call Trace:
  enter_svm_guest_mode+0x114/0x560 [kvm_amd]
  nested_svm_vmrun+0x260/0x330 [kvm_amd]
  vmrun_interception+0x29/0x30 [kvm_amd]
  svm_invoke_exit_handler+0x35/0x100 [kvm_amd]
  svm_handle_exit+0xe7/0x180 [kvm_amd]
  kvm_arch_vcpu_ioctl_run+0x1eab/0x2570 [kvm]
  kvm_vcpu_ioctl+0x4c9/0x5b0 [kvm]
  __se_sys_ioctl+0x7a/0xc0
  __x64_sys_ioctl+0x21/0x30
  do_syscall_64+0x41/0x90
  entry_SYSCALL_64_after_hwframe+0x63/0xcd
 RIP: 0033:0x45ca1b

Note, the nested #VMEXIT path has the same flaw, but needs a different
fix and will be handled separately.

Fixes: 5228eb96a487 ("KVM: x86: nSVM: implement nested TSC scaling")
Cc: Maxim Levitsky
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/nested.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 96936ddf1b3c..0b90f5cf9df3 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -695,10 +695,9 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
 
 	vmcb02->control.tsc_offset = vcpu->arch.tsc_offset;
 
-	if (svm->tsc_ratio_msr != kvm_caps.default_tsc_scaling_ratio) {
-		WARN_ON(!svm->tsc_scaling_enabled);
+	if (svm->tsc_scaling_enabled &&
+	    svm->tsc_ratio_msr != kvm_caps.default_tsc_scaling_ratio)
 		nested_svm_update_tsc_ratio_msr(vcpu);
-	}
 
 	vmcb02->control.int_ctl =
 		(svm->nested.ctl.int_ctl & int_ctl_vmcb12_bits) |
-- 
2.41.0.487.g6d72f3e995-goog

From nobody Sat Feb 7 22:55:09 2026
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky
Subject: [PATCH v2 02/21] KVM: nSVM: Load L1's TSC multiplier based on L1 state, not L2 state
Date: Fri, 28 Jul 2023 18:15:49 -0700
Message-ID: <20230729011608.1065019-3-seanjc@google.com>
In-Reply-To: <20230729011608.1065019-1-seanjc@google.com>
References: <20230729011608.1065019-1-seanjc@google.com>

When emulating nested VM-Exit, load L1's TSC multiplier if L1's desired
ratio doesn't match the current ratio, not if the ratio L1 is using for
L2 diverges from the default.  Functionally, the end result is the same,
as KVM will run L2 with L1's multiplier if L2's multiplier is the
default, i.e. checking that L1's multiplier is loaded is equivalent to
checking if L2 has a non-default multiplier.

However, the assertion that TSC scaling is exposed to L1 is flawed, as
userspace can trigger the WARN at will by writing the MSR and then
updating guest CPUID to hide the feature (modifying guest CPUID is
allowed anytime before KVM_RUN).  E.g.
hacking KVM's state_test selftest to do

	vcpu_set_msr(vcpu, MSR_AMD64_TSC_RATIO, 0);
	vcpu_clear_cpuid_feature(vcpu, X86_FEATURE_TSCRATEMSR);

after restoring state in a new VM+vCPU yields an endless supply of:

 ------------[ cut here ]------------
 WARNING: CPU: 10 PID: 206939 at arch/x86/kvm/svm/nested.c:1105 nested_svm_vmexit+0x6af/0x720 [kvm_amd]
 Call Trace:
  nested_svm_exit_handled+0x102/0x1f0 [kvm_amd]
  svm_handle_exit+0xb9/0x180 [kvm_amd]
  kvm_arch_vcpu_ioctl_run+0x1eab/0x2570 [kvm]
  kvm_vcpu_ioctl+0x4c9/0x5b0 [kvm]
  ? trace_hardirqs_off+0x4d/0xa0
  __se_sys_ioctl+0x7a/0xc0
  __x64_sys_ioctl+0x21/0x30
  do_syscall_64+0x41/0x90
  entry_SYSCALL_64_after_hwframe+0x63/0xcd

Unlike the nested VMRUN path, hoisting the svm->tsc_scaling_enabled
check into the if-statement is wrong as KVM needs to ensure L1's
multiplier is loaded in the above scenario.  Alternatively, the
WARN_ON() could simply be deleted, but that would make KVM's behavior
even more subtle, e.g. it's not immediately obvious why it's safe to
write MSR_AMD64_TSC_RATIO when checking only tsc_ratio_msr.
Fixes: 5228eb96a487 ("KVM: x86: nSVM: implement nested TSC scaling")
Cc: Maxim Levitsky
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/nested.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 0b90f5cf9df3..c66c823ae222 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1100,8 +1100,8 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 		vmcb_mark_dirty(vmcb01, VMCB_INTERCEPTS);
 	}
 
-	if (svm->tsc_ratio_msr != kvm_caps.default_tsc_scaling_ratio) {
-		WARN_ON(!svm->tsc_scaling_enabled);
+	if (kvm_caps.has_tsc_control &&
+	    vcpu->arch.tsc_scaling_ratio != vcpu->arch.l1_tsc_scaling_ratio) {
 		vcpu->arch.tsc_scaling_ratio = vcpu->arch.l1_tsc_scaling_ratio;
 		__svm_write_tsc_multiplier(vcpu->arch.tsc_scaling_ratio);
 	}
-- 
2.41.0.487.g6d72f3e995-goog

From nobody Sat Feb 7 22:55:09 2026
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky
Subject: [PATCH v2 03/21] KVM: nSVM: Use the "outer" helper for writing multiplier to MSR_AMD64_TSC_RATIO
Date: Fri, 28 Jul 2023 18:15:50 -0700
Message-ID: <20230729011608.1065019-4-seanjc@google.com>
In-Reply-To: <20230729011608.1065019-1-seanjc@google.com>
References: <20230729011608.1065019-1-seanjc@google.com>

When emulating nested SVM transitions, use the outer helper for writing
the TSC multiplier for L2.  Using the inner helper only for one-off
cases, i.e. only for paths where KVM is NOT emulating or modifying vCPU
state, will allow for multiple cleanups:

 - Explicitly disabling preemption only in the outer helper
 - Getting the multiplier from the vCPU field in the outer helper
 - Skipping the WRMSR in the outer helper if guest state isn't loaded

Opportunistically delete an extra newline.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/nested.c | 4 ++--
 arch/x86/kvm/svm/svm.c    | 5 ++---
 arch/x86/kvm/svm/svm.h    | 2 +-
 3 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index c66c823ae222..5d5a1d7832fb 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1103,7 +1103,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 	if (kvm_caps.has_tsc_control &&
 	    vcpu->arch.tsc_scaling_ratio != vcpu->arch.l1_tsc_scaling_ratio) {
 		vcpu->arch.tsc_scaling_ratio = vcpu->arch.l1_tsc_scaling_ratio;
-		__svm_write_tsc_multiplier(vcpu->arch.tsc_scaling_ratio);
+		svm_write_tsc_multiplier(vcpu, vcpu->arch.tsc_scaling_ratio);
 	}
 
 	svm->nested.ctl.nested_cr3 = 0;
@@ -1536,7 +1536,7 @@ void nested_svm_update_tsc_ratio_msr(struct kvm_vcpu *vcpu)
 	vcpu->arch.tsc_scaling_ratio =
 		kvm_calc_nested_tsc_multiplier(vcpu->arch.l1_tsc_scaling_ratio,
 					       svm->tsc_ratio_msr);
-	__svm_write_tsc_multiplier(vcpu->arch.tsc_scaling_ratio);
+	svm_write_tsc_multiplier(vcpu, vcpu->arch.tsc_scaling_ratio);
 }
 
 /* Inverse operation of nested_copy_vmcb_control_to_cache(). asid is copied too. */
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index d381ad424554..13f316375b14 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -550,7 +550,7 @@ static int svm_check_processor_compat(void)
 	return 0;
 }
 
-void __svm_write_tsc_multiplier(u64 multiplier)
+static void __svm_write_tsc_multiplier(u64 multiplier)
 {
 	preempt_disable();
 
@@ -1110,12 +1110,11 @@ static void svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
 	vmcb_mark_dirty(svm->vmcb, VMCB_INTERCEPTS);
 }
 
-static void svm_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier)
+void svm_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier)
 {
 	__svm_write_tsc_multiplier(multiplier);
 }
 
-
 /* Evaluate instruction intercepts that depend on guest CPUID features. */
 static void svm_recalc_instruction_intercepts(struct kvm_vcpu *vcpu,
 					      struct vcpu_svm *svm)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 18af7e712a5a..7132c0a04817 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -658,7 +658,7 @@ int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
 			       bool has_error_code, u32 error_code);
 int nested_svm_exit_special(struct vcpu_svm *svm);
 void nested_svm_update_tsc_ratio_msr(struct kvm_vcpu *vcpu);
-void __svm_write_tsc_multiplier(u64 multiplier);
+void svm_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier);
 void nested_copy_vmcb_control_to_cache(struct vcpu_svm *svm,
 				       struct vmcb_control_area *control);
 void nested_copy_vmcb_save_to_cache(struct vcpu_svm *svm,
-- 
2.41.0.487.g6d72f3e995-goog

From nobody Sat Feb 7 22:55:09 2026
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky
Subject: [PATCH v2 04/21] KVM: SVM: Clean up preemption toggling related to MSR_AMD64_TSC_RATIO
Date: Fri, 28 Jul 2023 18:15:51 -0700
Message-ID: <20230729011608.1065019-5-seanjc@google.com>
In-Reply-To: <20230729011608.1065019-1-seanjc@google.com>
References: <20230729011608.1065019-1-seanjc@google.com>

Explicitly disable preemption when writing MSR_AMD64_TSC_RATIO only in
the "outer" helper, as all direct callers of the "inner" helper now run
with preemption already disabled.  And that isn't a coincidence, as the
outer helper requires a vCPU and is intended to be used when modifying
guest state and/or emulating guest instructions, which are typically
done with preemption enabled.

Direct use of the inner helper should be extremely limited, as the only
time KVM should modify MSR_AMD64_TSC_RATIO without a vCPU is when
sanitizing the MSR for a specific pCPU (currently done when
{en,dis}abling SVM).
The other direct caller is svm_prepare_switch_to_guest(), which does
have a vCPU, but is a one-off special case: KVM is about to enter the
guest on a specific pCPU and thus must have preemption disabled.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/svm.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 13f316375b14..9fc5e402636a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -552,15 +552,11 @@ static int svm_check_processor_compat(void)
 
 static void __svm_write_tsc_multiplier(u64 multiplier)
 {
-	preempt_disable();
-
 	if (multiplier == __this_cpu_read(current_tsc_ratio))
-		goto out;
+		return;
 
 	wrmsrl(MSR_AMD64_TSC_RATIO, multiplier);
 	__this_cpu_write(current_tsc_ratio, multiplier);
-out:
-	preempt_enable();
 }
 
 static void svm_hardware_disable(void)
@@ -1112,7 +1108,9 @@ static void svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
 
 void svm_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier)
 {
+	preempt_disable();
 	__svm_write_tsc_multiplier(multiplier);
+	preempt_enable();
 }
 
 /* Evaluate instruction intercepts that depend on guest CPUID features. */
-- 
2.41.0.487.g6d72f3e995-goog

From nobody Sat Feb 7 22:55:09 2026
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky
Subject: [PATCH v2 05/21] KVM: x86: Always write vCPU's current TSC offset/ratio in vendor hooks
Date: Fri, 28 Jul 2023 18:15:52 -0700
Message-ID: <20230729011608.1065019-6-seanjc@google.com>
In-Reply-To: <20230729011608.1065019-1-seanjc@google.com>
References: <20230729011608.1065019-1-seanjc@google.com>

Drop the @offset and @multiplier params from the kvm_x86_ops hooks for
propagating TSC offsets/multipliers into hardware, and instead have the
vendor implementations pull the information directly from the vCPU
structure.  The respective vCPU fields _must_ be written at the same
time in order to maintain consistent state, i.e. it's not random luck
that the value passed in by all callers is grabbed from the vCPU.
Explicitly grabbing the value from the vCPU field in SVM's implementation in particular will allow for additional cleanup without introducing even more subtle dependencies. Specifically, SVM can skip the WRMSR if guest state isn't loaded, i.e. svm_prepare_switch_to_guest() will load the correct value for the vCPU prior to entering the guest. This also reconciles KVM's handling of related values that are stored in the vCPU, as svm_write_tsc_offset() already assumes/requires the caller to have updated l1_tsc_offset. Signed-off-by: Sean Christopherson --- arch/x86/include/asm/kvm_host.h | 4 ++-- arch/x86/kvm/svm/nested.c | 4 ++-- arch/x86/kvm/svm/svm.c | 8 ++++---- arch/x86/kvm/svm/svm.h | 2 +- arch/x86/kvm/vmx/vmx.c | 8 ++++---- arch/x86/kvm/x86.c | 5 ++--- 6 files changed, 15 insertions(+), 16 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index 28bd38303d70..dad9331c5270 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1654,8 +1654,8 @@ struct kvm_x86_ops { =20 u64 (*get_l2_tsc_offset)(struct kvm_vcpu *vcpu); u64 (*get_l2_tsc_multiplier)(struct kvm_vcpu *vcpu); - void (*write_tsc_offset)(struct kvm_vcpu *vcpu, u64 offset); - void (*write_tsc_multiplier)(struct kvm_vcpu *vcpu, u64 multiplier); + void (*write_tsc_offset)(struct kvm_vcpu *vcpu); + void (*write_tsc_multiplier)(struct kvm_vcpu *vcpu); =20 /* * Retrieve somewhat arbitrary exit information. 
Intended to diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 5d5a1d7832fb..3342cc4a5189 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -1103,7 +1103,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm) if (kvm_caps.has_tsc_control && vcpu->arch.tsc_scaling_ratio !=3D vcpu->arch.l1_tsc_scaling_ratio) { vcpu->arch.tsc_scaling_ratio =3D vcpu->arch.l1_tsc_scaling_ratio; - svm_write_tsc_multiplier(vcpu, vcpu->arch.tsc_scaling_ratio); + svm_write_tsc_multiplier(vcpu); } =20 svm->nested.ctl.nested_cr3 =3D 0; @@ -1536,7 +1536,7 @@ void nested_svm_update_tsc_ratio_msr(struct kvm_vcpu = *vcpu) vcpu->arch.tsc_scaling_ratio =3D kvm_calc_nested_tsc_multiplier(vcpu->arch.l1_tsc_scaling_ratio, svm->tsc_ratio_msr); - svm_write_tsc_multiplier(vcpu, vcpu->arch.tsc_scaling_ratio); + svm_write_tsc_multiplier(vcpu); } =20 /* Inverse operation of nested_copy_vmcb_control_to_cache(). asid is copie= d too. */ diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 9fc5e402636a..c786c8e9108f 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -1097,19 +1097,19 @@ static u64 svm_get_l2_tsc_multiplier(struct kvm_vcp= u *vcpu) return svm->tsc_ratio_msr; } =20 -static void svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset) +static void svm_write_tsc_offset(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm =3D to_svm(vcpu); =20 svm->vmcb01.ptr->control.tsc_offset =3D vcpu->arch.l1_tsc_offset; - svm->vmcb->control.tsc_offset =3D offset; + svm->vmcb->control.tsc_offset =3D vcpu->arch.tsc_offset; vmcb_mark_dirty(svm->vmcb, VMCB_INTERCEPTS); } =20 -void svm_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier) +void svm_write_tsc_multiplier(struct kvm_vcpu *vcpu) { preempt_disable(); - __svm_write_tsc_multiplier(multiplier); + __svm_write_tsc_multiplier(vcpu->arch.tsc_scaling_ratio); preempt_enable(); } =20 diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 7132c0a04817..5829a1801862 100644 --- 
a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -658,7 +658,7 @@ int nested_svm_check_exception(struct vcpu_svm *svm, un= signed nr, bool has_error_code, u32 error_code); int nested_svm_exit_special(struct vcpu_svm *svm); void nested_svm_update_tsc_ratio_msr(struct kvm_vcpu *vcpu); -void svm_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier); +void svm_write_tsc_multiplier(struct kvm_vcpu *vcpu); void nested_copy_vmcb_control_to_cache(struct vcpu_svm *svm, struct vmcb_control_area *control); void nested_copy_vmcb_save_to_cache(struct vcpu_svm *svm, diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 0ecf4be2c6af..ca6194b0e35e 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -1884,14 +1884,14 @@ u64 vmx_get_l2_tsc_multiplier(struct kvm_vcpu *vcpu) return kvm_caps.default_tsc_scaling_ratio; } =20 -static void vmx_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset) +static void vmx_write_tsc_offset(struct kvm_vcpu *vcpu) { - vmcs_write64(TSC_OFFSET, offset); + vmcs_write64(TSC_OFFSET, vcpu->arch.tsc_offset); } =20 -static void vmx_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier) +static void vmx_write_tsc_multiplier(struct kvm_vcpu *vcpu) { - vmcs_write64(TSC_MULTIPLIER, multiplier); + vmcs_write64(TSC_MULTIPLIER, vcpu->arch.tsc_scaling_ratio); } =20 /* diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index a6b9bea62fb8..5a14378ed4e1 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -2615,7 +2615,7 @@ static void kvm_vcpu_write_tsc_offset(struct kvm_vcpu= *vcpu, u64 l1_offset) else vcpu->arch.tsc_offset =3D l1_offset; =20 - static_call(kvm_x86_write_tsc_offset)(vcpu, vcpu->arch.tsc_offset); + static_call(kvm_x86_write_tsc_offset)(vcpu); } =20 static void kvm_vcpu_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 l1_mu= ltiplier) @@ -2631,8 +2631,7 @@ static void kvm_vcpu_write_tsc_multiplier(struct kvm_= vcpu *vcpu, u64 l1_multipli vcpu->arch.tsc_scaling_ratio =3D l1_multiplier; =20 if 
(kvm_caps.has_tsc_control) - static_call(kvm_x86_write_tsc_multiplier)( - vcpu, vcpu->arch.tsc_scaling_ratio); + static_call(kvm_x86_write_tsc_multiplier)(vcpu); } =20 static inline bool kvm_check_tsc_unstable(void) --=20 2.41.0.487.g6d72f3e995-goog
From nobody Sat Feb 7 22:55:09 2026
Reply-To: Sean Christopherson
Date: Fri, 28 Jul 2023 18:15:53 -0700
In-Reply-To: <20230729011608.1065019-1-seanjc@google.com>
Mime-Version: 1.0
References: <20230729011608.1065019-1-seanjc@google.com>
X-Mailer: git-send-email 2.41.0.487.g6d72f3e995-goog
Message-ID: <20230729011608.1065019-7-seanjc@google.com>
Subject: [PATCH v2 06/21] KVM: nSVM: Skip writes to MSR_AMD64_TSC_RATIO if guest state isn't loaded
From: Sean Christopherson
To: Sean Christopherson , Paolo Bonzini , Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Skip writes to MSR_AMD64_TSC_RATIO that are done in the context of a vCPU if guest state isn't loaded, i.e. if KVM will update MSR_AMD64_TSC_RATIO during svm_prepare_switch_to_guest() before entering the guest.
Checking guest_state_loaded may or may not be a net positive for performance as the current_tsc_ratio cache will optimize away duplicate WRMSRs in the vast majority of scenarios. However, the cost of the check is negligible, and the real motivation is to document that KVM needs to load the vCPU's value only when running the vCPU. Signed-off-by: Sean Christopherson --- arch/x86/kvm/svm/svm.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index c786c8e9108f..64092df06f94 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -1109,7 +1109,8 @@ static void svm_write_tsc_offset(struct kvm_vcpu *vcp= u) void svm_write_tsc_multiplier(struct kvm_vcpu *vcpu) { preempt_disable(); - __svm_write_tsc_multiplier(vcpu->arch.tsc_scaling_ratio); + if (to_svm(vcpu)->guest_state_loaded) + __svm_write_tsc_multiplier(vcpu->arch.tsc_scaling_ratio); preempt_enable(); } =20 --=20 2.41.0.487.g6d72f3e995-goog From nobody Sat Feb 7 22:55:09 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C2331EB64DD for ; Sat, 29 Jul 2023 01:16:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237480AbjG2BQ5 (ORCPT ); Fri, 28 Jul 2023 21:16:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36448 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237303AbjG2BQl (ORCPT ); Fri, 28 Jul 2023 21:16:41 -0400 Received: from mail-pl1-x64a.google.com (mail-pl1-x64a.google.com [IPv6:2607:f8b0:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EF8E1421B for ; Fri, 28 Jul 2023 18:16:27 -0700 (PDT) Received: by mail-pl1-x64a.google.com with SMTP id d9443c01a7336-1bbb34b091dso18455055ad.0 for ; Fri, 28 Jul 2023 18:16:27 -0700 (PDT) 
Reply-To: Sean Christopherson
Date: Fri, 28 Jul 2023 18:15:54 -0700
In-Reply-To: <20230729011608.1065019-1-seanjc@google.com>
Mime-Version: 1.0
References: <20230729011608.1065019-1-seanjc@google.com>
X-Mailer:
git-send-email 2.41.0.487.g6d72f3e995-goog
Message-ID: <20230729011608.1065019-8-seanjc@google.com>
Subject: [PATCH v2 07/21] KVM: x86: Add a framework for enabling KVM-governed x86 features
From: Sean Christopherson
To: Sean Christopherson , Paolo Bonzini , Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Introduce yet another X86_FEATURE flag framework to manage and cache KVM governed features (for lack of a better name). "Governed" in this case means that KVM has some level of involvement and/or vested interest in whether or not an X86_FEATURE can be used by the guest. The intent of the framework is twofold: to simplify caching of guest CPUID flags that KVM needs to frequently query, and to add clarity to such caching, e.g. it isn't immediately obvious that SVM's bundle of flags for "optional nested SVM features" tracks whether or not a flag is exposed to L1. Begrudgingly define KVM_MAX_NR_GOVERNED_FEATURES for the size of the bitmap to avoid exposing governed_features.h in arch/x86/include/asm/, but add a FIXME to call out that it can and should be cleaned up once "struct kvm_vcpu_arch" is no longer exposed to the kernel at large.
Signed-off-by: Sean Christopherson --- arch/x86/include/asm/kvm_host.h | 19 +++++++++++++ arch/x86/kvm/cpuid.c | 4 +++ arch/x86/kvm/cpuid.h | 46 ++++++++++++++++++++++++++++++++ arch/x86/kvm/governed_features.h | 9 +++++++ 4 files changed, 78 insertions(+) create mode 100644 arch/x86/kvm/governed_features.h diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index dad9331c5270..007fa8bfd634 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -831,6 +831,25 @@ struct kvm_vcpu_arch { struct kvm_cpuid_entry2 *cpuid_entries; struct kvm_hypervisor_cpuid kvm_cpuid; =20 + /* + * FIXME: Drop this macro and use KVM_NR_GOVERNED_FEATURES directly + * when "struct kvm_vcpu_arch" is no longer defined in an + * arch/x86/include/asm header. The max is mostly arbitrary, i.e. + * can be increased as necessary. + */ +#define KVM_MAX_NR_GOVERNED_FEATURES BITS_PER_LONG + + /* + * Track whether or not the guest is allowed to use features that are + * governed by KVM, where "governed" means KVM needs to manage state + * and/or explicitly enable the feature in hardware. Typically, but + * not always, governed features can be used by the guest if and only + * if both KVM and userspace want to expose the feature to the guest. 
+ */ + struct { + DECLARE_BITMAP(enabled, KVM_MAX_NR_GOVERNED_FEATURES); + } governed_features; + u64 reserved_gpa_bits; int maxphyaddr; =20 diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c index 7f4d13383cf2..ef826568c222 100644 --- a/arch/x86/kvm/cpuid.c +++ b/arch/x86/kvm/cpuid.c @@ -313,6 +313,10 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *= vcpu) struct kvm_lapic *apic =3D vcpu->arch.apic; struct kvm_cpuid_entry2 *best; =20 + BUILD_BUG_ON(KVM_NR_GOVERNED_FEATURES > KVM_MAX_NR_GOVERNED_FEATURES); + bitmap_zero(vcpu->arch.governed_features.enabled, + KVM_MAX_NR_GOVERNED_FEATURES); + best =3D kvm_find_cpuid_entry(vcpu, 1); if (best && apic) { if (cpuid_entry_has(best, X86_FEATURE_TSC_DEADLINE_TIMER)) diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h index b1658c0de847..3000fbe97678 100644 --- a/arch/x86/kvm/cpuid.h +++ b/arch/x86/kvm/cpuid.h @@ -232,4 +232,50 @@ static __always_inline bool guest_pv_has(struct kvm_vc= pu *vcpu, return vcpu->arch.pv_cpuid.features & (1u << kvm_feature); } =20 +enum kvm_governed_features { +#define KVM_GOVERNED_FEATURE(x) KVM_GOVERNED_##x, +#include "governed_features.h" + KVM_NR_GOVERNED_FEATURES +}; + +static __always_inline int kvm_governed_feature_index(unsigned int x86_fea= ture) +{ + switch (x86_feature) { +#define KVM_GOVERNED_FEATURE(x) case x: return KVM_GOVERNED_##x; +#include "governed_features.h" + default: + return -1; + } +} + +static __always_inline int kvm_is_governed_feature(unsigned int x86_featur= e) +{ + return kvm_governed_feature_index(x86_feature) >=3D 0; +} + +static __always_inline void kvm_governed_feature_set(struct kvm_vcpu *vcpu, + unsigned int x86_feature) +{ + BUILD_BUG_ON(!kvm_is_governed_feature(x86_feature)); + + __set_bit(kvm_governed_feature_index(x86_feature), + vcpu->arch.governed_features.enabled); +} + +static __always_inline void kvm_governed_feature_check_and_set(struct kvm_= vcpu *vcpu, + unsigned int x86_feature) +{ + if (kvm_cpu_cap_has(x86_feature) && 
guest_cpuid_has(vcpu, x86_feature)) + kvm_governed_feature_set(vcpu, x86_feature); +} + +static __always_inline bool guest_can_use(struct kvm_vcpu *vcpu, + unsigned int x86_feature) +{ + BUILD_BUG_ON(!kvm_is_governed_feature(x86_feature)); + + return test_bit(kvm_governed_feature_index(x86_feature), + vcpu->arch.governed_features.enabled); +} + #endif diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_featu= res.h new file mode 100644 index 000000000000..40ce8e6608cd --- /dev/null +++ b/arch/x86/kvm/governed_features.h @@ -0,0 +1,9 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#if !defined(KVM_GOVERNED_FEATURE) || defined(KVM_GOVERNED_X86_FEATURE) +BUILD_BUG() +#endif + +#define KVM_GOVERNED_X86_FEATURE(x) KVM_GOVERNED_FEATURE(X86_FEATURE_##x) + +#undef KVM_GOVERNED_X86_FEATURE +#undef KVM_GOVERNED_FEATURE --=20 2.41.0.487.g6d72f3e995-goog From nobody Sat Feb 7 22:55:09 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 38968C0015E for ; Sat, 29 Jul 2023 01:17:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237275AbjG2BRL (ORCPT ); Fri, 28 Jul 2023 21:17:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36360 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237219AbjG2BQq (ORCPT ); Fri, 28 Jul 2023 21:16:46 -0400 Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com [IPv6:2607:f8b0:4864:20::114a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D3ED149EA for ; Fri, 28 Jul 2023 18:16:29 -0700 (PDT) Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-5704995f964so29395747b3.2 for ; Fri, 28 Jul 2023 18:16:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1690593388; x=1691198188; 
Reply-To: Sean Christopherson
Date: Fri, 28 Jul 2023 18:15:55 -0700
In-Reply-To: <20230729011608.1065019-1-seanjc@google.com>
Mime-Version: 1.0
References: <20230729011608.1065019-1-seanjc@google.com>
X-Mailer: git-send-email 2.41.0.487.g6d72f3e995-goog
Message-ID: <20230729011608.1065019-9-seanjc@google.com>
Subject: [PATCH
v2 08/21] KVM: x86/mmu: Use KVM-governed feature framework to track "GBPAGES enabled" From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini , Vitaly Kuznetsov Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Use the governed feature framework to track whether or not the guest can use 1GiB pages, and drop the one-off helper that wraps the surprisingly non-trivial logic surrounding 1GiB page usage in the guest. No functional change intended. Signed-off-by: Sean Christopherson --- arch/x86/kvm/cpuid.c | 17 +++++++++++++++++ arch/x86/kvm/governed_features.h | 2 ++ arch/x86/kvm/mmu/mmu.c | 20 +++----------------- 3 files changed, 22 insertions(+), 17 deletions(-) diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c index ef826568c222..f74d6c404551 100644 --- a/arch/x86/kvm/cpuid.c +++ b/arch/x86/kvm/cpuid.c @@ -312,11 +312,28 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu = *vcpu) { struct kvm_lapic *apic =3D vcpu->arch.apic; struct kvm_cpuid_entry2 *best; + bool allow_gbpages; =20 BUILD_BUG_ON(KVM_NR_GOVERNED_FEATURES > KVM_MAX_NR_GOVERNED_FEATURES); bitmap_zero(vcpu->arch.governed_features.enabled, KVM_MAX_NR_GOVERNED_FEATURES); =20 + /* + * If TDP is enabled, let the guest use GBPAGES if they're supported in + * hardware. The hardware page walker doesn't let KVM disable GBPAGES, + * i.e. won't treat them as reserved, and KVM doesn't redo the GVA->GPA + * walk for performance and complexity reasons. Not to mention KVM + * _can't_ solve the problem because GVA->GPA walks aren't visible to + * KVM once a TDP translation is installed. Mimic hardware behavior so + * that KVM's is at least consistent, i.e. doesn't randomly inject #PF. 
+ * If TDP is disabled, honor *only* guest CPUID as KVM has full control + * and can install smaller shadow pages if the host lacks 1GiB support. + */ + allow_gbpages =3D tdp_enabled ? boot_cpu_has(X86_FEATURE_GBPAGES) : + guest_cpuid_has(vcpu, X86_FEATURE_GBPAGES); + if (allow_gbpages) + kvm_governed_feature_set(vcpu, X86_FEATURE_GBPAGES); + best =3D kvm_find_cpuid_entry(vcpu, 1); if (best && apic) { if (cpuid_entry_has(best, X86_FEATURE_TSC_DEADLINE_TIMER)) diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_featu= res.h index 40ce8e6608cd..b29c15d5e038 100644 --- a/arch/x86/kvm/governed_features.h +++ b/arch/x86/kvm/governed_features.h @@ -5,5 +5,7 @@ BUILD_BUG() =20 #define KVM_GOVERNED_X86_FEATURE(x) KVM_GOVERNED_FEATURE(X86_FEATURE_##x) =20 +KVM_GOVERNED_X86_FEATURE(GBPAGES) + #undef KVM_GOVERNED_X86_FEATURE #undef KVM_GOVERNED_FEATURE diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index ec169f5c7dce..7b9104b054bc 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4808,28 +4808,13 @@ static void __reset_rsvds_bits_mask(struct rsvd_bit= s_validate *rsvd_check, } } =20 -static bool guest_can_use_gbpages(struct kvm_vcpu *vcpu) -{ - /* - * If TDP is enabled, let the guest use GBPAGES if they're supported in - * hardware. The hardware page walker doesn't let KVM disable GBPAGES, - * i.e. won't treat them as reserved, and KVM doesn't redo the GVA->GPA - * walk for performance and complexity reasons. Not to mention KVM - * _can't_ solve the problem because GVA->GPA walks aren't visible to - * KVM once a TDP translation is installed. Mimic hardware behavior so - * that KVM's is at least consistent, i.e. doesn't randomly inject #PF. - */ - return tdp_enabled ? 
boot_cpu_has(X86_FEATURE_GBPAGES) : - guest_cpuid_has(vcpu, X86_FEATURE_GBPAGES); -} - static void reset_guest_rsvds_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context) { __reset_rsvds_bits_mask(&context->guest_rsvd_check, vcpu->arch.reserved_gpa_bits, context->cpu_role.base.level, is_efer_nx(context), - guest_can_use_gbpages(vcpu), + guest_can_use(vcpu, X86_FEATURE_GBPAGES), is_cr4_pse(context), guest_cpuid_is_amd_or_hygon(vcpu)); } @@ -4906,7 +4891,8 @@ static void reset_shadow_zero_bits_mask(struct kvm_vc= pu *vcpu, __reset_rsvds_bits_mask(shadow_zero_check, reserved_hpa_bits(), context->root_role.level, context->root_role.efer_nx, - guest_can_use_gbpages(vcpu), is_pse, is_amd); + guest_can_use(vcpu, X86_FEATURE_GBPAGES), + is_pse, is_amd); =20 if (!shadow_me_mask) return; --=20 2.41.0.487.g6d72f3e995-goog From nobody Sat Feb 7 22:55:09 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C6A7BEB64DD for ; Sat, 29 Jul 2023 01:17:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237496AbjG2BQ7 (ORCPT ); Fri, 28 Jul 2023 21:16:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36464 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237341AbjG2BQm (ORCPT ); Fri, 28 Jul 2023 21:16:42 -0400 Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5273F4EDA for ; Fri, 28 Jul 2023 18:16:31 -0700 (PDT) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-583f048985bso29330407b3.2 for ; Fri, 28 Jul 2023 18:16:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1690593390; x=1691198190; 
Reply-To: Sean Christopherson
Date: Fri, 28 Jul 2023 18:15:56 -0700
In-Reply-To: <20230729011608.1065019-1-seanjc@google.com>
Mime-Version: 1.0
References: <20230729011608.1065019-1-seanjc@google.com>
X-Mailer: git-send-email 2.41.0.487.g6d72f3e995-goog
Message-ID: <20230729011608.1065019-10-seanjc@google.com>
Subject: [PATCH v2
09/21] KVM: VMX: Recompute "XSAVES enabled" only after CPUID update
From: Sean Christopherson
To: Sean Christopherson , Paolo Bonzini , Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Recompute whether or not XSAVES is enabled for the guest only if the guest's CPUID model changes instead of redoing the computation every time KVM generates vmcs01's secondary execution controls. The boot_cpu_has() and cpu_has_vmx_xsaves() checks should never change after KVM is loaded, and if they do the kernel/KVM is hosed. Opportunistically add a comment explaining _why_ XSAVES is effectively exposed to the guest if and only if XSAVE is also exposed to the guest. Practically speaking, no functional change intended (KVM will do fewer computations, but should still see the same xsaves_enabled value whenever KVM looks at it).
Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 24 +++++++++++------------- 1 file changed, 11 insertions(+), 13 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index ca6194b0e35e..307d73749185 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -4587,19 +4587,10 @@ static u32 vmx_secondary_exec_control(struct vcpu_v= mx *vmx) if (!enable_pml || !atomic_read(&vcpu->kvm->nr_memslots_dirty_logging)) exec_control &=3D ~SECONDARY_EXEC_ENABLE_PML; =20 - if (cpu_has_vmx_xsaves()) { - /* Exposing XSAVES only when XSAVE is exposed */ - bool xsaves_enabled =3D - boot_cpu_has(X86_FEATURE_XSAVE) && - guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) && - guest_cpuid_has(vcpu, X86_FEATURE_XSAVES); - - vcpu->arch.xsaves_enabled =3D xsaves_enabled; - + if (cpu_has_vmx_xsaves()) vmx_adjust_secondary_exec_control(vmx, &exec_control, SECONDARY_EXEC_XSAVES, - xsaves_enabled, false); - } + vcpu->arch.xsaves_enabled, false); =20 /* * RDPID is also gated by ENABLE_RDTSCP, turn on the control if either @@ -7722,8 +7713,15 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu= *vcpu) { struct vcpu_vmx *vmx =3D to_vmx(vcpu); =20 - /* xsaves_enabled is recomputed in vmx_compute_secondary_exec_control(). = */ - vcpu->arch.xsaves_enabled =3D false; + /* + * XSAVES is effectively enabled if and only if XSAVE is also exposed + * to the guest. XSAVES depends on CR4.OSXSAVE, and CR4.OSXSAVE can be + * set if and only if XSAVE is supported. 
+ */ + vcpu->arch.xsaves_enabled =3D cpu_has_vmx_xsaves() && + boot_cpu_has(X86_FEATURE_XSAVE) && + guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) && + guest_cpuid_has(vcpu, X86_FEATURE_XSAVES); =20 vmx_setup_uret_msrs(vmx); =20 --=20 2.41.0.487.g6d72f3e995-goog From nobody Sat Feb 7 22:55:09 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 19B12EB64DD for ; Sat, 29 Jul 2023 01:17:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237511AbjG2BRi (ORCPT ); Fri, 28 Jul 2023 21:17:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35992 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237394AbjG2BRN (ORCPT ); Fri, 28 Jul 2023 21:17:13 -0400 Received: from mail-pl1-x64a.google.com (mail-pl1-x64a.google.com [IPv6:2607:f8b0:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5DC3C44B1 for ; Fri, 28 Jul 2023 18:16:48 -0700 (PDT) Received: by mail-pl1-x64a.google.com with SMTP id d9443c01a7336-1bbd4f526caso20313835ad.3 for ; Fri, 28 Jul 2023 18:16:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1690593392; x=1691198192; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=/Va/MWC5DRY46k94Uc8qLWdNEj91Xq1TuiBAGty68F8=; b=yRqDe6hnQT3wYKRPtH+TorJeNAyO7rJdwfhdzAa8+zEHsyF8WvTsItjBPD+rKS0Bre AWLVyXBfQ54MW4JkJ0PV5hWRO1Fxd0rFnJdzJfj+6026Wy296jACG2ypQTWOPOTxnm58 hc2QwMRTYw5zBjGNrKdbTgeJQRTl0AXfMn7T05nG5aiuEMzUZmIZWtCkP+omOr++FrxB L7ENLl09CAGiAgDkJr/jQ27vcqMUjXRP0JP7gCAc28Rd7+WWeW9u2YkmTg8Sqqf2cTBy D4BXD3n8j8znlME2TL5r/kHdBzikHOsATdn4giB+mgOwpVwUEiQ07zQ9RlC8q4nywObi deTw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; 
Reply-To: Sean Christopherson
Date: Fri, 28 Jul 2023 18:15:57 -0700
In-Reply-To: <20230729011608.1065019-1-seanjc@google.com>
Mime-Version: 1.0
References: <20230729011608.1065019-1-seanjc@google.com>
X-Mailer: git-send-email 2.41.0.487.g6d72f3e995-goog
Message-ID: <20230729011608.1065019-11-seanjc@google.com>
Subject: [PATCH v2 10/21] KVM: VMX: Check KVM CPU caps, not just VMX MSR support, for XSAVE enabling
From: Sean Christopherson
To: Sean Christopherson , Paolo Bonzini , Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Check KVM CPU capabilities instead of raw VMX support for XSAVES when determining whether or not XSAVES can/should be exposed to the guest.
Practically speaking, it's nonsensical/impossible for a CPU to support "enable XSAVES" without XSAVES being supported natively. The real motivation for checking kvm_cpu_cap_has() is to allow using the governed feature's standard check-and-set logic. Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 307d73749185..e358e3fa1ced 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -7718,7 +7718,7 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu = *vcpu) * to the guest. XSAVES depends on CR4.OSXSAVE, and CR4.OSXSAVE can be * set if and only if XSAVE is supported. */ - vcpu->arch.xsaves_enabled =3D cpu_has_vmx_xsaves() && + vcpu->arch.xsaves_enabled =3D kvm_cpu_cap_has(X86_FEATURE_XSAVES) && boot_cpu_has(X86_FEATURE_XSAVE) && guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) && guest_cpuid_has(vcpu, X86_FEATURE_XSAVES); --=20 2.41.0.487.g6d72f3e995-goog
From nobody Sat Feb 7 22:55:09 2026

From: Sean Christopherson
Date: Fri, 28 Jul 2023 18:15:58 -0700
Message-ID: <20230729011608.1065019-12-seanjc@google.com>
Subject: [PATCH v2 11/21] KVM: VMX: Rename XSAVES control to follow KVM's preferred "ENABLE_XYZ"

Rename the XSAVES secondary execution control to follow KVM's preferred
style so that XSAVES related logic can use common macros that depend on
KVM's preferred style.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/vmx.h      | 2 +-
 arch/x86/kvm/vmx/capabilities.h | 2 +-
 arch/x86/kvm/vmx/hyperv.c       | 2 +-
 arch/x86/kvm/vmx/nested.c       | 6 +++---
 arch/x86/kvm/vmx/nested.h       | 2 +-
 arch/x86/kvm/vmx/vmx.c          | 2 +-
 arch/x86/kvm/vmx/vmx.h          | 2 +-
 7 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 0d02c4aafa6f..0e73616b82f3 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -71,7 +71,7 @@
 #define SECONDARY_EXEC_RDSEED_EXITING		VMCS_CONTROL_BIT(RDSEED_EXITING)
 #define SECONDARY_EXEC_ENABLE_PML		VMCS_CONTROL_BIT(PAGE_MOD_LOGGING)
 #define SECONDARY_EXEC_PT_CONCEAL_VMX		VMCS_CONTROL_BIT(PT_CONCEAL_VMX)
-#define SECONDARY_EXEC_XSAVES			VMCS_CONTROL_BIT(XSAVES)
+#define SECONDARY_EXEC_ENABLE_XSAVES		VMCS_CONTROL_BIT(XSAVES)
 #define SECONDARY_EXEC_MODE_BASED_EPT_EXEC	VMCS_CONTROL_BIT(MODE_BASED_EPT_EXEC)
 #define SECONDARY_EXEC_PT_USE_GPA		VMCS_CONTROL_BIT(PT_USE_GPA)
 #define SECONDARY_EXEC_TSC_SCALING		VMCS_CONTROL_BIT(TSC_SCALING)
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index d0abee35d7ba..41a4533f9989 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -252,7 +252,7 @@ static inline bool cpu_has_vmx_pml(void)
 static inline bool cpu_has_vmx_xsaves(void)
 {
 	return vmcs_config.cpu_based_2nd_exec_ctrl &
-		SECONDARY_EXEC_XSAVES;
+		SECONDARY_EXEC_ENABLE_XSAVES;
 }
 
 static inline bool cpu_has_vmx_waitpkg(void)
diff --git a/arch/x86/kvm/vmx/hyperv.c b/arch/x86/kvm/vmx/hyperv.c
index 79450e1ed7cf..313b8bb5b8a7 100644
--- a/arch/x86/kvm/vmx/hyperv.c
+++ b/arch/x86/kvm/vmx/hyperv.c
@@ -78,7 +78,7 @@
	 SECONDARY_EXEC_DESC |					\
	 SECONDARY_EXEC_ENABLE_RDTSCP |				\
	 SECONDARY_EXEC_ENABLE_INVPCID |			\
-	 SECONDARY_EXEC_XSAVES |				\
+	 SECONDARY_EXEC_ENABLE_XSAVES |				\
	 SECONDARY_EXEC_RDSEED_EXITING |			\
	 SECONDARY_EXEC_RDRAND_EXITING |			\
	 SECONDARY_EXEC_TSC_SCALING |				\
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 516391cc0d64..22e08d30baef 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2307,7 +2307,7 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct loaded_vmcs *vmcs0
				  SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE |
				  SECONDARY_EXEC_ENABLE_INVPCID |
				  SECONDARY_EXEC_ENABLE_RDTSCP |
-				  SECONDARY_EXEC_XSAVES |
+				  SECONDARY_EXEC_ENABLE_XSAVES |
				  SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE |
				  SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY |
				  SECONDARY_EXEC_APIC_REGISTER_VIRT |
@@ -6331,7 +6331,7 @@ static bool nested_vmx_l1_wants_exit(struct kvm_vcpu *vcpu,
		 * If if it were, XSS would have to be checked against
		 * the XSS exit bitmap in vmcs12.
		 */
-		return nested_cpu_has2(vmcs12, SECONDARY_EXEC_XSAVES);
+		return nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENABLE_XSAVES);
	case EXIT_REASON_UMWAIT:
	case EXIT_REASON_TPAUSE:
		return nested_cpu_has2(vmcs12,
@@ -6874,7 +6874,7 @@ static void nested_vmx_setup_secondary_ctls(u32 ept_caps,
		SECONDARY_EXEC_ENABLE_INVPCID |
		SECONDARY_EXEC_ENABLE_VMFUNC |
		SECONDARY_EXEC_RDSEED_EXITING |
-		SECONDARY_EXEC_XSAVES |
+		SECONDARY_EXEC_ENABLE_XSAVES |
		SECONDARY_EXEC_TSC_SCALING |
		SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE;
 
diff --git a/arch/x86/kvm/vmx/nested.h b/arch/x86/kvm/vmx/nested.h
index 96952263b029..b4b9d51438c6 100644
--- a/arch/x86/kvm/vmx/nested.h
+++ b/arch/x86/kvm/vmx/nested.h
@@ -168,7 +168,7 @@ static inline int nested_cpu_has_ept(struct vmcs12 *vmcs12)
 
 static inline bool nested_cpu_has_xsaves(struct vmcs12 *vmcs12)
 {
-	return nested_cpu_has2(vmcs12, SECONDARY_EXEC_XSAVES);
+	return nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENABLE_XSAVES);
 }
 
 static inline bool nested_cpu_has_pml(struct vmcs12 *vmcs12)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e358e3fa1ced..a0a47be2feed 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4589,7 +4589,7 @@ static u32 vmx_secondary_exec_control(struct vcpu_vmx *vmx)
 
	if (cpu_has_vmx_xsaves())
		vmx_adjust_secondary_exec_control(vmx, &exec_control,
-						  SECONDARY_EXEC_XSAVES,
+						  SECONDARY_EXEC_ENABLE_XSAVES,
						  vcpu->arch.xsaves_enabled, false);
 
	/*
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 32384ba38499..cde902b44d97 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -562,7 +562,7 @@ static inline u8 vmx_get_rvi(void)
	 SECONDARY_EXEC_APIC_REGISTER_VIRT |			\
	 SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY |			\
	 SECONDARY_EXEC_SHADOW_VMCS |				\
-	 SECONDARY_EXEC_XSAVES |				\
+	 SECONDARY_EXEC_ENABLE_XSAVES |				\
	 SECONDARY_EXEC_RDSEED_EXITING |			\
	 SECONDARY_EXEC_RDRAND_EXITING |			\
	 SECONDARY_EXEC_ENABLE_PML |				\
-- 
2.41.0.487.g6d72f3e995-goog

From: Sean Christopherson
Date: Fri, 28 Jul 2023 18:15:59 -0700
Message-ID: <20230729011608.1065019-13-seanjc@google.com>
Subject: [PATCH v2 12/21] KVM: x86: Use KVM-governed feature framework to track "XSAVES enabled"

Use the governed feature framework to track if XSAVES is "enabled", i.e.
if XSAVES can be used by the guest.  Add a comment in the SVM code to
explain the very unintuitive logic of deliberately NOT checking if XSAVES
is enumerated in the guest CPUID model.

No functional change intended.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/governed_features.h |  1 +
 arch/x86/kvm/svm/svm.c           | 17 ++++++++++++++---
 arch/x86/kvm/vmx/vmx.c           | 32 ++++++++++++++++++--------------
 arch/x86/kvm/x86.c               |  4 ++--
 4 files changed, 35 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_features.h
index b29c15d5e038..b896a64e4ac3 100644
--- a/arch/x86/kvm/governed_features.h
+++ b/arch/x86/kvm/governed_features.h
@@ -6,6 +6,7 @@ BUILD_BUG()
 #define KVM_GOVERNED_X86_FEATURE(x) KVM_GOVERNED_FEATURE(X86_FEATURE_##x)
 
 KVM_GOVERNED_X86_FEATURE(GBPAGES)
+KVM_GOVERNED_X86_FEATURE(XSAVES)
 
 #undef KVM_GOVERNED_X86_FEATURE
 #undef KVM_GOVERNED_FEATURE
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 64092df06f94..d5f8cb402eb7 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4204,9 +4204,20 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
	struct vcpu_svm *svm = to_svm(vcpu);
	struct kvm_cpuid_entry2 *best;
 
-	vcpu->arch.xsaves_enabled = guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
-				    boot_cpu_has(X86_FEATURE_XSAVE) &&
-				    boot_cpu_has(X86_FEATURE_XSAVES);
+	/*
+	 * SVM doesn't provide a way to disable just XSAVES in the guest, KVM
+	 * can only disable all variants of by disallowing CR4.OSXSAVE from
+	 * being set.  As a result, if the host has XSAVE and XSAVES, and the
+	 * guest has XSAVE enabled, the guest can execute XSAVES without
+	 * faulting.  Treat XSAVES as enabled in this case regardless of
+	 * whether it's advertised to the guest so that KVM context switches
+	 * XSS on VM-Enter/VM-Exit.  Failure to do so would effectively give
+	 * the guest read/write access to the host's XSS.
+	 */
+	if (boot_cpu_has(X86_FEATURE_XSAVE) &&
+	    boot_cpu_has(X86_FEATURE_XSAVES) &&
+	    guest_cpuid_has(vcpu, X86_FEATURE_XSAVE))
+		kvm_governed_feature_set(vcpu, X86_FEATURE_XSAVES);
 
	/* Update nrips enabled cache */
	svm->nrips_enabled = kvm_cpu_cap_has(X86_FEATURE_NRIPS) &&
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index a0a47be2feed..3100ed62615c 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4518,16 +4518,19 @@ vmx_adjust_secondary_exec_control(struct vcpu_vmx *vmx, u32 *exec_control,
  * based on a single guest CPUID bit, with a dedicated feature bit.  This also
  * verifies that the control is actually supported by KVM and hardware.
  */
-#define vmx_adjust_sec_exec_control(vmx, exec_control, name, feat_name, ctrl_name, exiting) \
-({ \
-	bool __enabled; \
- \
-	if (cpu_has_vmx_##name()) { \
-		__enabled = guest_cpuid_has(&(vmx)->vcpu, \
-					    X86_FEATURE_##feat_name); \
-		vmx_adjust_secondary_exec_control(vmx, exec_control, \
-			SECONDARY_EXEC_##ctrl_name, __enabled, exiting); \
-	} \
+#define vmx_adjust_sec_exec_control(vmx, exec_control, name, feat_name, ctrl_name, exiting) \
+({ \
+	struct kvm_vcpu *__vcpu = &(vmx)->vcpu; \
+	bool __enabled; \
+ \
+	if (cpu_has_vmx_##name()) { \
+		if (kvm_is_governed_feature(X86_FEATURE_##feat_name)) \
+			__enabled = guest_can_use(__vcpu, X86_FEATURE_##feat_name); \
+		else \
+			__enabled = guest_cpuid_has(__vcpu, X86_FEATURE_##feat_name); \
+		vmx_adjust_secondary_exec_control(vmx, exec_control, SECONDARY_EXEC_##ctrl_name,\
+						  __enabled, exiting); \
+	} \
 })
 
 /*
  * More macro magic for ENABLE_/opt-in versus _EXITING/opt-out controls.
  */
@@ -4587,10 +4590,7 @@ static u32 vmx_secondary_exec_control(struct vcpu_vmx *vmx)
	if (!enable_pml || !atomic_read(&vcpu->kvm->nr_memslots_dirty_logging))
		exec_control &= ~SECONDARY_EXEC_ENABLE_PML;
 
-	if (cpu_has_vmx_xsaves())
-		vmx_adjust_secondary_exec_control(vmx, &exec_control,
-						  SECONDARY_EXEC_ENABLE_XSAVES,
-						  vcpu->arch.xsaves_enabled, false);
+	vmx_adjust_sec_exec_feature(vmx, &exec_control, xsaves, XSAVES);
 
	/*
	 * RDPID is also gated by ENABLE_RDTSCP, turn on the control if either
@@ -4609,6 +4609,7 @@ static u32 vmx_secondary_exec_control(struct vcpu_vmx *vmx)
						  SECONDARY_EXEC_ENABLE_RDTSCP,
						  rdpid_or_rdtscp_enabled, false);
	}
+
	vmx_adjust_sec_exec_feature(vmx, &exec_control, invpcid, INVPCID);
 
	vmx_adjust_sec_exec_exiting(vmx, &exec_control, rdrand, RDRAND);
@@ -7722,6 +7723,9 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
		boot_cpu_has(X86_FEATURE_XSAVE) &&
		guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
		guest_cpuid_has(vcpu, X86_FEATURE_XSAVES);
+	if (boot_cpu_has(X86_FEATURE_XSAVE) &&
+	    guest_cpuid_has(vcpu, X86_FEATURE_XSAVE))
+		kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_XSAVES);
 
	vmx_setup_uret_msrs(vmx);
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5a14378ed4e1..201fa957ce9a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1012,7 +1012,7 @@ void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu)
		if (vcpu->arch.xcr0 != host_xcr0)
			xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.xcr0);
 
-		if (vcpu->arch.xsaves_enabled &&
+		if (guest_can_use(vcpu, X86_FEATURE_XSAVES) &&
		    vcpu->arch.ia32_xss != host_xss)
			wrmsrl(MSR_IA32_XSS, vcpu->arch.ia32_xss);
	}
@@ -1043,7 +1043,7 @@ void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu)
		if (vcpu->arch.xcr0 != host_xcr0)
			xsetbv(XCR_XFEATURE_ENABLED_MASK, host_xcr0);
 
-		if (vcpu->arch.xsaves_enabled &&
+		if (guest_can_use(vcpu, X86_FEATURE_XSAVES) &&
		    vcpu->arch.ia32_xss != host_xss)
			wrmsrl(MSR_IA32_XSS, host_xss);
	}
-- 
2.41.0.487.g6d72f3e995-goog

From: Sean Christopherson
Date: Fri, 28 Jul 2023 18:16:00 -0700
Message-ID: <20230729011608.1065019-14-seanjc@google.com>
Subject: [PATCH v2 13/21] KVM: nVMX: Use KVM-governed feature framework to track "nested VMX enabled"

Track "VMX exposed to L1" via a governed feature flag instead of using a
dedicated helper to provide the same functionality.  The main goal is to
drive convergence between VMX and SVM with respect to querying features
that are controllable via module param (SVM likes to cache nested
features); avoiding the guest CPUID lookups at runtime is just a bonus
and unlikely to provide any meaningful performance benefits.

No functional change intended.
Signed-off-by: Sean Christopherson
Reviewed-by: Yuan Yao
---
 arch/x86/kvm/governed_features.h |  1 +
 arch/x86/kvm/vmx/nested.c        |  7 ++++---
 arch/x86/kvm/vmx/vmx.c           | 21 ++++++---------------
 arch/x86/kvm/vmx/vmx.h           |  1 -
 4 files changed, 11 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_features.h
index b896a64e4ac3..22446614bf49 100644
--- a/arch/x86/kvm/governed_features.h
+++ b/arch/x86/kvm/governed_features.h
@@ -7,6 +7,7 @@ BUILD_BUG()
 
 KVM_GOVERNED_X86_FEATURE(GBPAGES)
 KVM_GOVERNED_X86_FEATURE(XSAVES)
+KVM_GOVERNED_X86_FEATURE(VMX)
 
 #undef KVM_GOVERNED_X86_FEATURE
 #undef KVM_GOVERNED_FEATURE
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 22e08d30baef..c5ec0ef51ff7 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -6426,7 +6426,7 @@ static int vmx_get_nested_state(struct kvm_vcpu *vcpu,
		vmx = to_vmx(vcpu);
		vmcs12 = get_vmcs12(vcpu);
 
-		if (nested_vmx_allowed(vcpu) &&
+		if (guest_can_use(vcpu, X86_FEATURE_VMX) &&
		    (vmx->nested.vmxon || vmx->nested.smm.vmxon)) {
			kvm_state.hdr.vmx.vmxon_pa = vmx->nested.vmxon_ptr;
			kvm_state.hdr.vmx.vmcs12_pa = vmx->nested.current_vmptr;
@@ -6567,7 +6567,7 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
		if (kvm_state->flags & ~KVM_STATE_NESTED_EVMCS)
			return -EINVAL;
	} else {
-		if (!nested_vmx_allowed(vcpu))
+		if (!guest_can_use(vcpu, X86_FEATURE_VMX))
			return -EINVAL;
 
		if (!page_address_valid(vcpu, kvm_state->hdr.vmx.vmxon_pa))
@@ -6601,7 +6601,8 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
		return -EINVAL;
 
	if ((kvm_state->flags & KVM_STATE_NESTED_EVMCS) &&
-	    (!nested_vmx_allowed(vcpu) || !vmx->nested.enlightened_vmcs_enabled))
+	    (!guest_can_use(vcpu, X86_FEATURE_VMX) ||
+	     !vmx->nested.enlightened_vmcs_enabled))
		return -EINVAL;
 
	vmx_leave_nested(vcpu);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 3100ed62615c..fdf932cfc64d 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1894,17 +1894,6 @@ static void vmx_write_tsc_multiplier(struct kvm_vcpu *vcpu)
	vmcs_write64(TSC_MULTIPLIER, vcpu->arch.tsc_scaling_ratio);
 }
 
-/*
- * nested_vmx_allowed() checks whether a guest should be allowed to use VMX
- * instructions and MSRs (i.e., nested VMX). Nested VMX is disabled for
- * all guests if the "nested" module option is off, and can also be disabled
- * for a single guest by disabling its VMX cpuid bit.
- */
-bool nested_vmx_allowed(struct kvm_vcpu *vcpu)
-{
-	return nested && guest_cpuid_has(vcpu, X86_FEATURE_VMX);
-}
-
 /*
  * Userspace is allowed to set any supported IA32_FEATURE_CONTROL regardless of
  * guest CPUID.  Note, KVM allows userspace to set "VMX in SMX" to maintain
@@ -2032,7 +2021,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
			[msr_info->index - MSR_IA32_SGXLEPUBKEYHASH0];
		break;
	case KVM_FIRST_EMULATED_VMX_MSR ... KVM_LAST_EMULATED_VMX_MSR:
-		if (!nested_vmx_allowed(vcpu))
+		if (!guest_can_use(vcpu, X86_FEATURE_VMX))
			return 1;
		if (vmx_get_vmx_msr(&vmx->nested.msrs, msr_info->index,
				    &msr_info->data))
@@ -2340,7 +2329,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
	case KVM_FIRST_EMULATED_VMX_MSR ... KVM_LAST_EMULATED_VMX_MSR:
		if (!msr_info->host_initiated)
			return 1; /* they are read-only */
-		if (!nested_vmx_allowed(vcpu))
+		if (!guest_can_use(vcpu, X86_FEATURE_VMX))
			return 1;
		return vmx_set_vmx_msr(vcpu, msr_index, data);
	case MSR_IA32_RTIT_CTL:
@@ -7727,13 +7716,15 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
	    guest_cpuid_has(vcpu, X86_FEATURE_XSAVE))
		kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_XSAVES);
 
+	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_VMX);
+
	vmx_setup_uret_msrs(vmx);
 
	if (cpu_has_secondary_exec_ctrls())
		vmcs_set_secondary_exec_control(vmx,
						vmx_secondary_exec_control(vmx));
 
-	if (nested_vmx_allowed(vcpu))
+	if (guest_can_use(vcpu, X86_FEATURE_VMX))
		vmx->msr_ia32_feature_control_valid_bits |=
			FEAT_CTL_VMX_ENABLED_INSIDE_SMX |
			FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX;
@@ -7742,7 +7733,7 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
			~(FEAT_CTL_VMX_ENABLED_INSIDE_SMX |
			  FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX);
 
-	if (nested_vmx_allowed(vcpu))
+	if (guest_can_use(vcpu, X86_FEATURE_VMX))
		nested_vmx_cr_fixed1_bits_update(vcpu);
 
	if (boot_cpu_has(X86_FEATURE_INTEL_PT) &&
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index cde902b44d97..c2130d2c8e24 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -374,7 +374,6 @@ struct kvm_vmx {
	u64 *pid_table;
 };
 
-bool nested_vmx_allowed(struct kvm_vcpu *vcpu);
 void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
			struct loaded_vmcs *buddy);
 int allocate_vpid(void);
-- 
2.41.0.487.g6d72f3e995-goog

From: Sean Christopherson
Date: Fri, 28 Jul 2023 18:16:01 -0700
Message-ID: <20230729011608.1065019-15-seanjc@google.com>
Subject: [PATCH v2 14/21] KVM: nSVM: Use KVM-governed feature framework to track "NRIPS enabled"

Track "NRIPS exposed to L1" via a governed feature flag instead of using
a dedicated bit/flag in vcpu_svm.

No functional change intended.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/governed_features.h | 1 +
 arch/x86/kvm/svm/nested.c        | 6 +++---
 arch/x86/kvm/svm/svm.c           | 4 +---
 arch/x86/kvm/svm/svm.h           | 1 -
 4 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_features.h
index 22446614bf49..722b66af412c 100644
--- a/arch/x86/kvm/governed_features.h
+++ b/arch/x86/kvm/governed_features.h
@@ -8,6 +8,7 @@ BUILD_BUG()
 KVM_GOVERNED_X86_FEATURE(GBPAGES)
 KVM_GOVERNED_X86_FEATURE(XSAVES)
 KVM_GOVERNED_X86_FEATURE(VMX)
+KVM_GOVERNED_X86_FEATURE(NRIPS)
 
 #undef KVM_GOVERNED_X86_FEATURE
 #undef KVM_GOVERNED_FEATURE
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 3342cc4a5189..9092f3f8dccf 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -716,7 +716,7 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
	 * what a nrips=0 CPU would do (L1 is responsible for advancing RIP
	 * prior to injecting the event).
	 */
-	if (svm->nrips_enabled)
+	if (guest_can_use(vcpu, X86_FEATURE_NRIPS))
		vmcb02->control.next_rip = svm->nested.ctl.next_rip;
	else if (boot_cpu_has(X86_FEATURE_NRIPS))
		vmcb02->control.next_rip = vmcb12_rip;
@@ -726,7 +726,7 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
		svm->soft_int_injected = true;
		svm->soft_int_csbase = vmcb12_csbase;
		svm->soft_int_old_rip = vmcb12_rip;
-		if (svm->nrips_enabled)
+		if (guest_can_use(vcpu, X86_FEATURE_NRIPS))
			svm->soft_int_next_rip = svm->nested.ctl.next_rip;
		else
			svm->soft_int_next_rip = vmcb12_rip;
@@ -1026,7 +1026,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
	if (vmcb12->control.exit_code != SVM_EXIT_ERR)
		nested_save_pending_event_to_vmcb12(svm, vmcb12);
 
-	if (svm->nrips_enabled)
+	if (guest_can_use(vcpu, X86_FEATURE_NRIPS))
		vmcb12->control.next_rip = vmcb02->control.next_rip;
 
	vmcb12->control.int_ctl = svm->nested.ctl.int_ctl;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index d5f8cb402eb7..7c1aa532f767 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4219,9 +4219,7 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
	    guest_cpuid_has(vcpu, X86_FEATURE_XSAVE))
		kvm_governed_feature_set(vcpu, X86_FEATURE_XSAVES);
 
-	/* Update nrips enabled cache */
-	svm->nrips_enabled = kvm_cpu_cap_has(X86_FEATURE_NRIPS) &&
-			     guest_cpuid_has(vcpu, X86_FEATURE_NRIPS);
+	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_NRIPS);
 
	svm->tsc_scaling_enabled = tsc_scaling && guest_cpuid_has(vcpu, X86_FEATURE_TSCRATEMSR);
	svm->lbrv_enabled = lbrv && guest_cpuid_has(vcpu, X86_FEATURE_LBRV);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 5829a1801862..c06de55425d4 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -259,7 +259,6 @@ struct vcpu_svm {
	bool soft_int_injected;
 
	/* optional nested SVM features that are enabled for this guest */
-	bool nrips_enabled : 1;
	bool tsc_scaling_enabled : 1;
	bool v_vmload_vmsave_enabled : 1;
	bool lbrv_enabled : 1;
-- 
2.41.0.487.g6d72f3e995-goog
bh=Y8epFYKyce1KnScIbSRkcuIoi0XcTk6uaJkGq1C7BJM=; b=XLI5+QSPdBMtnl4YvYDeitFbxgQ9lzaamfMR1OncIddMsWt9AajVQYJXpw5TzHg+Ah sjzp7KZ2VdhO6c+9qxf2bfyyZAUvSO2QIy1fUIaFCvUyEy5aE8NPPC6yd6Cw/LxlIuC6 rYx1VPo2RBC0zWIffJCRDFoS7+Gj9KRWmoyopJfxBgprLafOccNGUIdq0W8Osb5nZ675 m+/jZiF++STW+41tlLN6atySc/tLBbAbX2G+CuMw2ef9hRW/W+OGqqEzo+QWANfXuwCG w5pqNWuLL0bNJYSEdUn+P0J4rHS+UjZMt/8ht1jZ1DChJ5XzytFqeQDnzZQgCKa2GYjC CA8g== X-Gm-Message-State: ABy/qLZsbKwipmtXV/gu1fQOVxYUA8YszEnntJ7S9r6vf326Aogr7rG0 JEAlafs84KSSrIPtbpsNT7ZF0mFcvk8= X-Google-Smtp-Source: APBJJlEdj76n8IZm16b86VgVQmKE54SzKyigxPMKB+s7rxNWA+iel+HMTj+S9/jMvC/DRMAS9a6kHHLGAPI= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a17:903:2445:b0:1bb:c7bc:ce9a with SMTP id l5-20020a170903244500b001bbc7bcce9amr13662pls.10.1690593401752; Fri, 28 Jul 2023 18:16:41 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 28 Jul 2023 18:16:02 -0700 In-Reply-To: <20230729011608.1065019-1-seanjc@google.com> Mime-Version: 1.0 References: <20230729011608.1065019-1-seanjc@google.com> X-Mailer: git-send-email 2.41.0.487.g6d72f3e995-goog Message-ID: <20230729011608.1065019-16-seanjc@google.com> Subject: [PATCH v2 15/21] KVM: nSVM: Use KVM-governed feature framework to track "TSC scaling enabled" From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini , Vitaly Kuznetsov Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Track "TSC scaling exposed to L1" via a governed feature flag instead of using a dedicated bit/flag in vcpu_svm. Note, this fixes a benign bug where KVM would mark TSC scaling as exposed to L1 even if overall nested SVM supported is disabled, i.e. KVM would let L1 write MSR_AMD64_TSC_RATIO even when KVM didn't advertise TSCRATEMSR support to userspace. 
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/governed_features.h |  1 +
 arch/x86/kvm/svm/nested.c        |  2 +-
 arch/x86/kvm/svm/svm.c           | 10 ++++++----
 arch/x86/kvm/svm/svm.h           |  1 -
 4 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_features.h
index 722b66af412c..32c0469cf952 100644
--- a/arch/x86/kvm/governed_features.h
+++ b/arch/x86/kvm/governed_features.h
@@ -9,6 +9,7 @@ KVM_GOVERNED_X86_FEATURE(GBPAGES)
 KVM_GOVERNED_X86_FEATURE(XSAVES)
 KVM_GOVERNED_X86_FEATURE(VMX)
 KVM_GOVERNED_X86_FEATURE(NRIPS)
+KVM_GOVERNED_X86_FEATURE(TSCRATEMSR)

 #undef KVM_GOVERNED_X86_FEATURE
 #undef KVM_GOVERNED_FEATURE
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 9092f3f8dccf..da65948064dc 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -695,7 +695,7 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,

 	vmcb02->control.tsc_offset = vcpu->arch.tsc_offset;

-	if (svm->tsc_scaling_enabled &&
+	if (guest_can_use(vcpu, X86_FEATURE_TSCRATEMSR) &&
 	    svm->tsc_ratio_msr != kvm_caps.default_tsc_scaling_ratio)
 		nested_svm_update_tsc_ratio_msr(vcpu);

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 7c1aa532f767..2f7f7df5a591 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2755,7 +2755,8 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)

 	switch (msr_info->index) {
 	case MSR_AMD64_TSC_RATIO:
-		if (!msr_info->host_initiated && !svm->tsc_scaling_enabled)
+		if (!msr_info->host_initiated &&
+		    !guest_can_use(vcpu, X86_FEATURE_TSCRATEMSR))
 			return 1;
 		msr_info->data = svm->tsc_ratio_msr;
 		break;
@@ -2897,7 +2898,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 	switch (ecx) {
 	case MSR_AMD64_TSC_RATIO:

-		if (!svm->tsc_scaling_enabled) {
+		if (!guest_can_use(vcpu, X86_FEATURE_TSCRATEMSR)) {

 			if (!msr->host_initiated)
 				return 1;
@@ -2919,7 +2920,8 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)

 		svm->tsc_ratio_msr = data;

-		if (svm->tsc_scaling_enabled && is_guest_mode(vcpu))
+		if (guest_can_use(vcpu, X86_FEATURE_TSCRATEMSR) &&
+		    is_guest_mode(vcpu))
 			nested_svm_update_tsc_ratio_msr(vcpu);

 		break;
@@ -4220,8 +4222,8 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 		kvm_governed_feature_set(vcpu, X86_FEATURE_XSAVES);

 	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_NRIPS);
+	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_TSCRATEMSR);

-	svm->tsc_scaling_enabled = tsc_scaling && guest_cpuid_has(vcpu, X86_FEATURE_TSCRATEMSR);
 	svm->lbrv_enabled = lbrv && guest_cpuid_has(vcpu, X86_FEATURE_LBRV);

 	svm->v_vmload_vmsave_enabled = vls && guest_cpuid_has(vcpu, X86_FEATURE_V_VMSAVE_VMLOAD);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index c06de55425d4..4e7332f77702 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -259,7 +259,6 @@ struct vcpu_svm {
 	bool soft_int_injected;

 	/* optional nested SVM features that are enabled for this guest */
-	bool tsc_scaling_enabled : 1;
 	bool v_vmload_vmsave_enabled : 1;
 	bool lbrv_enabled : 1;
 	bool pause_filter_enabled : 1;
-- 
2.41.0.487.g6d72f3e995-goog

From nobody Sat Feb 7 22:55:09 2026
Reply-To: Sean Christopherson
Date: Fri, 28 Jul 2023 18:16:03 -0700
In-Reply-To: <20230729011608.1065019-1-seanjc@google.com>
References: <20230729011608.1065019-1-seanjc@google.com>
Message-ID: <20230729011608.1065019-17-seanjc@google.com>
Subject: [PATCH v2 16/21] KVM: nSVM: Use KVM-governed feature framework to track "vVM{SAVE,LOAD} enabled"
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky

Track "virtual VMSAVE/VMLOAD exposed to L1" via a governed feature flag
instead of using a dedicated bit/flag in vcpu_svm.

Opportunistically add a comment explaining why KVM disallows virtual
VMLOAD/VMSAVE when the vCPU model is Intel.

No functional change intended.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/governed_features.h |  1 +
 arch/x86/kvm/svm/nested.c        |  2 +-
 arch/x86/kvm/svm/svm.c           | 10 +++++++---
 arch/x86/kvm/svm/svm.h           |  1 -
 4 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_features.h
index 32c0469cf952..f01a95fd0071 100644
--- a/arch/x86/kvm/governed_features.h
+++ b/arch/x86/kvm/governed_features.h
@@ -10,6 +10,7 @@ KVM_GOVERNED_X86_FEATURE(XSAVES)
 KVM_GOVERNED_X86_FEATURE(VMX)
 KVM_GOVERNED_X86_FEATURE(NRIPS)
 KVM_GOVERNED_X86_FEATURE(TSCRATEMSR)
+KVM_GOVERNED_X86_FEATURE(V_VMSAVE_VMLOAD)

 #undef KVM_GOVERNED_X86_FEATURE
 #undef KVM_GOVERNED_FEATURE
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index da65948064dc..24d47ebeb0e0 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -107,7 +107,7 @@ static void nested_svm_uninit_mmu_context(struct kvm_vcpu *vcpu)

 static bool nested_vmcb_needs_vls_intercept(struct vcpu_svm *svm)
 {
-	if (!svm->v_vmload_vmsave_enabled)
+	if (!guest_can_use(&svm->vcpu, X86_FEATURE_V_VMSAVE_VMLOAD))
 		return true;

 	if (!nested_npt_enabled(svm))
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 2f7f7df5a591..39a69c1786ea 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1154,8 +1154,6 @@ static inline void init_vmcb_after_set_cpuid(struct kvm_vcpu *vcpu)

 		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_EIP, 0, 0);
 		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_ESP, 0, 0);
-
-		svm->v_vmload_vmsave_enabled = false;
 	} else {
 		/*
 		 * If hardware supports Virtual VMLOAD VMSAVE then enable it
@@ -4226,7 +4224,13 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)

 	svm->lbrv_enabled = lbrv && guest_cpuid_has(vcpu, X86_FEATURE_LBRV);

-	svm->v_vmload_vmsave_enabled = vls && guest_cpuid_has(vcpu, X86_FEATURE_V_VMSAVE_VMLOAD);
+	/*
+	 * Intercept VMLOAD if the vCPU model is Intel in order to emulate that
+	 * VMLOAD drops bits 63:32 of SYSENTER (ignoring the fact that exposing
+	 * SVM on Intel is bonkers and extremely unlikely to work).
+	 */
+	if (!guest_cpuid_is_intel(vcpu))
+		kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_V_VMSAVE_VMLOAD);

 	svm->pause_filter_enabled = kvm_cpu_cap_has(X86_FEATURE_PAUSEFILTER) &&
 				    guest_cpuid_has(vcpu, X86_FEATURE_PAUSEFILTER);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 4e7332f77702..b475241df6dc 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -259,7 +259,6 @@ struct vcpu_svm {
 	bool soft_int_injected;

 	/* optional nested SVM features that are enabled for this guest */
-	bool v_vmload_vmsave_enabled : 1;
 	bool lbrv_enabled : 1;
 	bool pause_filter_enabled : 1;
 	bool pause_threshold_enabled : 1;
-- 
2.41.0.487.g6d72f3e995-goog

From nobody Sat Feb 7 22:55:09 2026
Reply-To: Sean Christopherson
Date: Fri, 28 Jul 2023 18:16:04 -0700
In-Reply-To: <20230729011608.1065019-1-seanjc@google.com>
References: <20230729011608.1065019-1-seanjc@google.com>
Message-ID: <20230729011608.1065019-18-seanjc@google.com>
Subject: [PATCH
v2 17/21] KVM: nSVM: Use KVM-governed feature framework to track "LBRv enabled"
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky

Track "LBR virtualization exposed to L1" via a governed feature flag
instead of using a dedicated bit/flag in vcpu_svm.

Note, checking KVM's capabilities instead of the "lbrv" param means that
the code isn't strictly equivalent, as lbrv_enabled could have been set
if nested=false, whereas the governed feature cannot. But that's a
glorified nop as the feature/flag is consumed only by paths that are
gated by nSVM being enabled.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/governed_features.h |  1 +
 arch/x86/kvm/svm/nested.c        | 23 +++++++++++++----------
 arch/x86/kvm/svm/svm.c           |  7 ++++---
 arch/x86/kvm/svm/svm.h           |  1 -
 4 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_features.h
index f01a95fd0071..3a4c0e40e1e0 100644
--- a/arch/x86/kvm/governed_features.h
+++ b/arch/x86/kvm/governed_features.h
@@ -11,6 +11,7 @@ KVM_GOVERNED_X86_FEATURE(VMX)
 KVM_GOVERNED_X86_FEATURE(NRIPS)
 KVM_GOVERNED_X86_FEATURE(TSCRATEMSR)
 KVM_GOVERNED_X86_FEATURE(V_VMSAVE_VMLOAD)
+KVM_GOVERNED_X86_FEATURE(LBRV)

 #undef KVM_GOVERNED_X86_FEATURE
 #undef KVM_GOVERNED_FEATURE
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 24d47ebeb0e0..f50f74b1a04e 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -552,6 +552,7 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12
 	bool new_vmcb12 = false;
 	struct vmcb *vmcb01 = svm->vmcb01.ptr;
 	struct vmcb *vmcb02 = svm->nested.vmcb02.ptr;
+	struct kvm_vcpu *vcpu = &svm->vcpu;

 	nested_vmcb02_compute_g_pat(svm);

@@ -577,18 +578,18 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12
 		vmcb_mark_dirty(vmcb02, VMCB_DT);
 	}

-	kvm_set_rflags(&svm->vcpu, vmcb12->save.rflags | X86_EFLAGS_FIXED);
+	kvm_set_rflags(vcpu, vmcb12->save.rflags | X86_EFLAGS_FIXED);

-	svm_set_efer(&svm->vcpu, svm->nested.save.efer);
+	svm_set_efer(vcpu, svm->nested.save.efer);

-	svm_set_cr0(&svm->vcpu, svm->nested.save.cr0);
-	svm_set_cr4(&svm->vcpu, svm->nested.save.cr4);
+	svm_set_cr0(vcpu, svm->nested.save.cr0);
+	svm_set_cr4(vcpu, svm->nested.save.cr4);

 	svm->vcpu.arch.cr2 = vmcb12->save.cr2;

-	kvm_rax_write(&svm->vcpu, vmcb12->save.rax);
-	kvm_rsp_write(&svm->vcpu, vmcb12->save.rsp);
-	kvm_rip_write(&svm->vcpu, vmcb12->save.rip);
+	kvm_rax_write(vcpu, vmcb12->save.rax);
+	kvm_rsp_write(vcpu, vmcb12->save.rsp);
+	kvm_rip_write(vcpu, vmcb12->save.rip);

 	/* In case we don't even reach vcpu_run, the fields are not updated */
 	vmcb02->save.rax = vmcb12->save.rax;
@@ -602,7 +603,8 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12
 		vmcb_mark_dirty(vmcb02, VMCB_DR);
 	}

-	if (unlikely(svm->lbrv_enabled && (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK))) {
+	if (unlikely(guest_can_use(vcpu, X86_FEATURE_LBRV) &&
+		     (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK))) {
 		/*
 		 * Reserved bits of DEBUGCTL are ignored.  Be consistent with
 		 * svm_set_msr's definition of reserved bits.
@@ -734,7 +736,7 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,

 	vmcb02->control.virt_ext = vmcb01->control.virt_ext &
 				   LBR_CTL_ENABLE_MASK;
-	if (svm->lbrv_enabled)
+	if (guest_can_use(vcpu, X86_FEATURE_LBRV))
 		vmcb02->control.virt_ext |= (svm->nested.ctl.virt_ext &
 					     LBR_CTL_ENABLE_MASK);

@@ -1065,7 +1067,8 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 	if (!nested_exit_on_intr(svm))
 		kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);

-	if (unlikely(svm->lbrv_enabled && (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK))) {
+	if (unlikely(guest_can_use(vcpu, X86_FEATURE_LBRV) &&
+		     (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK))) {
 		svm_copy_lbrs(vmcb12, vmcb02);
 		svm_update_lbrv(vcpu);
 	} else if (unlikely(vmcb01->control.virt_ext & LBR_CTL_ENABLE_MASK)) {
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 39a69c1786ea..a83fa6df7c04 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -984,9 +984,11 @@ void svm_update_lbrv(struct kvm_vcpu *vcpu)
 	bool current_enable_lbrv = !!(svm->vmcb->control.virt_ext &
 				      LBR_CTL_ENABLE_MASK);

-	if (unlikely(is_guest_mode(vcpu) && svm->lbrv_enabled))
+	if (unlikely(is_guest_mode(vcpu) &&
+		     guest_can_use(vcpu, X86_FEATURE_LBRV))) {
 		if (unlikely(svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK))
 			enable_lbrv = true;
+	}

 	if (enable_lbrv == current_enable_lbrv)
 		return;
@@ -4221,8 +4223,7 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)

 	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_NRIPS);
 	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_TSCRATEMSR);
-
-	svm->lbrv_enabled = lbrv && guest_cpuid_has(vcpu, X86_FEATURE_LBRV);
+	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_LBRV);

 	/*
 	 * Intercept VMLOAD if the vCPU mode is Intel in order to emulate that
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index b475241df6dc..0e21823e8a19 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -259,7 +259,6 @@
struct vcpu_svm {
 	bool soft_int_injected;

 	/* optional nested SVM features that are enabled for this guest */
-	bool lbrv_enabled : 1;
 	bool pause_filter_enabled : 1;
 	bool pause_threshold_enabled : 1;
 	bool vgif_enabled : 1;
-- 
2.41.0.487.g6d72f3e995-goog

From nobody Sat Feb 7 22:55:09 2026
Reply-To: Sean Christopherson
Date: Fri, 28 Jul 2023 18:16:05 -0700
In-Reply-To: <20230729011608.1065019-1-seanjc@google.com>
References: <20230729011608.1065019-1-seanjc@google.com>
Message-ID: <20230729011608.1065019-19-seanjc@google.com>
Subject: [PATCH v2 18/21] KVM: nSVM: Use KVM-governed feature framework to track "Pause Filter enabled"
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky

Track "Pause Filtering is exposed to L1" via governed feature flags
instead of using dedicated bits/flags in vcpu_svm.

No functional change intended.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/governed_features.h |  2 ++
 arch/x86/kvm/svm/nested.c        | 10 ++++++++--
 arch/x86/kvm/svm/svm.c           |  7 ++-----
 arch/x86/kvm/svm/svm.h           |  2 --
 4 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_features.h
index 3a4c0e40e1e0..9afd34f30599 100644
--- a/arch/x86/kvm/governed_features.h
+++ b/arch/x86/kvm/governed_features.h
@@ -12,6 +12,8 @@ KVM_GOVERNED_X86_FEATURE(NRIPS)
 KVM_GOVERNED_X86_FEATURE(TSCRATEMSR)
 KVM_GOVERNED_X86_FEATURE(V_VMSAVE_VMLOAD)
 KVM_GOVERNED_X86_FEATURE(LBRV)
+KVM_GOVERNED_X86_FEATURE(PAUSEFILTER)
+KVM_GOVERNED_X86_FEATURE(PFTHRESHOLD)

 #undef KVM_GOVERNED_X86_FEATURE
 #undef KVM_GOVERNED_FEATURE
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index f50f74b1a04e..ac03b2bc5b2c 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -743,8 +743,14 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
 	if (!nested_vmcb_needs_vls_intercept(svm))
 		vmcb02->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;

-	pause_count12 = svm->pause_filter_enabled ? svm->nested.ctl.pause_filter_count : 0;
-	pause_thresh12 = svm->pause_threshold_enabled ? svm->nested.ctl.pause_filter_thresh : 0;
+	if (guest_can_use(vcpu, X86_FEATURE_PAUSEFILTER))
+		pause_count12 = svm->nested.ctl.pause_filter_count;
+	else
+		pause_count12 = 0;
+	if (guest_can_use(vcpu, X86_FEATURE_PFTHRESHOLD))
+		pause_thresh12 = svm->nested.ctl.pause_filter_thresh;
+	else
+		pause_thresh12 = 0;
 	if (kvm_pause_in_guest(svm->vcpu.kvm)) {
 		/* use guest values since host doesn't intercept PAUSE */
 		vmcb02->control.pause_filter_count = pause_count12;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index a83fa6df7c04..be3a11f00f4e 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4233,11 +4233,8 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	if (!guest_cpuid_is_intel(vcpu))
 		kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_V_VMSAVE_VMLOAD);

-	svm->pause_filter_enabled = kvm_cpu_cap_has(X86_FEATURE_PAUSEFILTER) &&
-				    guest_cpuid_has(vcpu, X86_FEATURE_PAUSEFILTER);
-
-	svm->pause_threshold_enabled = kvm_cpu_cap_has(X86_FEATURE_PFTHRESHOLD) &&
-				       guest_cpuid_has(vcpu, X86_FEATURE_PFTHRESHOLD);
+	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_PAUSEFILTER);
+	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_PFTHRESHOLD);

 	svm->vgif_enabled = vgif && guest_cpuid_has(vcpu, X86_FEATURE_VGIF);

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 0e21823e8a19..fb438439b61e 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -259,8 +259,6 @@ struct vcpu_svm {
 	bool soft_int_injected;

 	/* optional nested SVM features that are enabled for this guest */
-	bool pause_filter_enabled : 1;
-	bool pause_threshold_enabled : 1;
 	bool vgif_enabled : 1;
 	bool vnmi_enabled : 1;

-- 
2.41.0.487.g6d72f3e995-goog

From nobody Sat Feb 7 22:55:09 2026
Reply-To: Sean Christopherson
Date: Fri, 28 Jul 2023 18:16:06 -0700
In-Reply-To: <20230729011608.1065019-1-seanjc@google.com>
References: <20230729011608.1065019-1-seanjc@google.com>
Message-ID: <20230729011608.1065019-20-seanjc@google.com>
Subject: [PATCH v2 19/21] KVM: nSVM: Use KVM-governed feature framework to track "vGIF enabled"
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky

Track "virtual GIF exposed to L1" via a governed feature flag instead of
using a dedicated bit/flag in vcpu_svm.

Note, checking KVM's capabilities instead of the "vgif" param means that
the code isn't strictly equivalent, as vgif_enabled could have been set
if nested=false, whereas the governed feature cannot.
But that's a glorified nop as the feature/flag is consumed only by paths that are Signed-off-by: Sean Christopherson --- arch/x86/kvm/governed_features.h | 1 + arch/x86/kvm/svm/nested.c | 3 ++- arch/x86/kvm/svm/svm.c | 3 +-- arch/x86/kvm/svm/svm.h | 5 +++-- 4 files changed, 7 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_featu= res.h index 9afd34f30599..368696c2e96b 100644 --- a/arch/x86/kvm/governed_features.h +++ b/arch/x86/kvm/governed_features.h @@ -14,6 +14,7 @@ KVM_GOVERNED_X86_FEATURE(V_VMSAVE_VMLOAD) KVM_GOVERNED_X86_FEATURE(LBRV) KVM_GOVERNED_X86_FEATURE(PAUSEFILTER) KVM_GOVERNED_X86_FEATURE(PFTHRESHOLD) +KVM_GOVERNED_X86_FEATURE(VGIF) =20 #undef KVM_GOVERNED_X86_FEATURE #undef KVM_GOVERNED_FEATURE diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index ac03b2bc5b2c..dd496c9e5f91 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -660,7 +660,8 @@ static void nested_vmcb02_prepare_control(struct vcpu_s= vm *svm, * exit_int_info, exit_int_info_err, next_rip, insn_len, insn_bytes. 
 	 */
 
-	if (svm->vgif_enabled && (svm->nested.ctl.int_ctl & V_GIF_ENABLE_MASK))
+	if (guest_can_use(vcpu, X86_FEATURE_VGIF) &&
+	    (svm->nested.ctl.int_ctl & V_GIF_ENABLE_MASK))
 		int_ctl_vmcb12_bits |= (V_GIF_MASK | V_GIF_ENABLE_MASK);
 	else
 		int_ctl_vmcb01_bits |= (V_GIF_MASK | V_GIF_ENABLE_MASK);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index be3a11f00f4e..6d9bb4453f2d 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4235,8 +4235,7 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 
 	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_PAUSEFILTER);
 	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_PFTHRESHOLD);
-
-	svm->vgif_enabled = vgif && guest_cpuid_has(vcpu, X86_FEATURE_VGIF);
+	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_VGIF);
 
 	svm->vnmi_enabled = vnmi && guest_cpuid_has(vcpu, X86_FEATURE_VNMI);
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index fb438439b61e..6eb5877cc6c3 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -22,6 +22,7 @@
 #include
 #include
 
+#include "cpuid.h"
 #include "kvm_cache_regs.h"
 
 #define __sme_page_pa(x) __sme_set(page_to_pfn(x) << PAGE_SHIFT)
@@ -259,7 +260,6 @@ struct vcpu_svm {
 	bool soft_int_injected;
 
 	/* optional nested SVM features that are enabled for this guest */
-	bool vgif_enabled : 1;
 	bool vnmi_enabled : 1;
 
 	u32 ldr_reg;
@@ -485,7 +485,8 @@ static inline bool svm_is_intercept(struct vcpu_svm *svm, int bit)
 
 static inline bool nested_vgif_enabled(struct vcpu_svm *svm)
 {
-	return svm->vgif_enabled && (svm->nested.ctl.int_ctl & V_GIF_ENABLE_MASK);
+	return guest_can_use(&svm->vcpu, X86_FEATURE_VGIF) &&
+	       (svm->nested.ctl.int_ctl & V_GIF_ENABLE_MASK);
 }
 
 static inline struct vmcb *get_vgif_vmcb(struct vcpu_svm *svm)
-- 
2.41.0.487.g6d72f3e995-goog

From nobody Sat Feb 7 22:55:09 2026
Date: Fri, 28 Jul 2023 18:16:07 -0700
In-Reply-To: <20230729011608.1065019-1-seanjc@google.com>
References: <20230729011608.1065019-1-seanjc@google.com>
Message-ID: <20230729011608.1065019-21-seanjc@google.com>
Subject: [PATCH v2 20/21] KVM: nSVM: Use KVM-governed feature framework to track "vNMI enabled"
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky

Track "virtual NMI exposed to L1" via a governed feature flag instead of
using a dedicated bit/flag in vcpu_svm.

Note, checking KVM's capabilities instead of the "vnmi" param means that
the code isn't strictly equivalent, as vnmi_enabled could have been set
if nested=false, whereas the governed feature cannot.  But that's a
glorified nop as the feature/flag is consumed only by paths that are
gated by nSVM being enabled.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/governed_features.h | 1 +
 arch/x86/kvm/svm/svm.c           | 3 +--
 arch/x86/kvm/svm/svm.h           | 5 +----
 3 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_features.h
index 368696c2e96b..423a73395c10 100644
--- a/arch/x86/kvm/governed_features.h
+++ b/arch/x86/kvm/governed_features.h
@@ -15,6 +15,7 @@ KVM_GOVERNED_X86_FEATURE(LBRV)
 KVM_GOVERNED_X86_FEATURE(PAUSEFILTER)
 KVM_GOVERNED_X86_FEATURE(PFTHRESHOLD)
 KVM_GOVERNED_X86_FEATURE(VGIF)
+KVM_GOVERNED_X86_FEATURE(VNMI)
 
 #undef KVM_GOVERNED_X86_FEATURE
 #undef KVM_GOVERNED_FEATURE
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 6d9bb4453f2d..89cc9f4f3ddc 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4236,8 +4236,7 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_PAUSEFILTER);
 	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_PFTHRESHOLD);
 	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_VGIF);
-
-	svm->vnmi_enabled = vnmi && guest_cpuid_has(vcpu, X86_FEATURE_VNMI);
+	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_VNMI);
 
 	svm_recalc_instruction_intercepts(vcpu, svm);
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 6eb5877cc6c3..06400cfe2244 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -259,9 +259,6 @@ struct vcpu_svm {
 	unsigned long soft_int_next_rip;
 	bool soft_int_injected;
 
-	/* optional nested SVM features that are enabled for this guest */
-	bool vnmi_enabled : 1;
-
 	u32 ldr_reg;
 	u32 dfr_reg;
 	struct page *avic_backing_page;
@@ -537,7 +534,7 @@ static inline bool nested_npt_enabled(struct vcpu_svm *svm)
 
 static inline bool nested_vnmi_enabled(struct vcpu_svm *svm)
 {
-	return svm->vnmi_enabled &&
+	return guest_can_use(&svm->vcpu, X86_FEATURE_VNMI) &&
 	       (svm->nested.ctl.int_ctl & V_NMI_ENABLE_MASK);
 }
 
-- 
2.41.0.487.g6d72f3e995-goog

From nobody Sat Feb 7 22:55:09 2026
Date: Fri, 28 Jul 2023 18:16:08 -0700
In-Reply-To: <20230729011608.1065019-1-seanjc@google.com>
References: <20230729011608.1065019-1-seanjc@google.com>
Message-ID: <20230729011608.1065019-22-seanjc@google.com>
Subject: [PATCH v2 21/21] KVM: x86: Disallow guest CPUID lookups when IRQs are disabled
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky

Now that KVM has a framework for caching guest CPUID feature flags, add
a "rule" that IRQs must be enabled when doing guest CPUID lookups, and
enforce the rule via a lockdep assertion.  CPUID lookups are slow, and
within KVM, IRQs are only ever disabled in hot paths, e.g. the core run
loop, fast page fault handling, etc.  I.e. querying guest CPUID with
IRQs disabled, especially in the run loop, should be avoided.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/cpuid.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index f74d6c404551..4b14bd9c5637 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -11,6 +11,7 @@
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
 #include
+#include "linux/lockdep.h"
 #include
 #include
 #include
@@ -84,6 +85,18 @@ static inline struct kvm_cpuid_entry2 *cpuid_entry2_find(
 	struct kvm_cpuid_entry2 *e;
 	int i;
 
+	/*
+	 * KVM has a semi-arbitrary rule that querying the guest's CPUID model
+	 * with IRQs disabled is disallowed.  The CPUID model can legitimately
+	 * have over one hundred entries, i.e. the lookup is slow, and IRQs are
+	 * typically disabled in KVM only when KVM is in a performance critical
+	 * path, e.g. the core VM-Enter/VM-Exit run loop.  Nothing will break
+	 * if this rule is violated, this assertion is purely to flag potential
+	 * performance issues.  If this fires, consider moving the lookup out
+	 * of the hotpath, e.g. by caching information during CPUID updates.
+	 */
+	lockdep_assert_irqs_enabled();
+
	for (i = 0; i < nent; i++) {
 		e = &entries[i];
 
-- 
2.41.0.487.g6d72f3e995-goog