From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Nathan Chancellor,
    Emanuele Giuseppe Esposito, Pawan Gupta, Jim Mattson
Date: Tue, 21 Mar 2023 18:14:35 -0700
Message-ID: <20230322011440.2195485-2-seanjc@google.com>
In-Reply-To: <20230322011440.2195485-1-seanjc@google.com>
Subject: [PATCH 1/6] KVM: x86: Revert MSR_IA32_FLUSH_CMD.FLUSH_L1D enabling

Revert the recently added virtualization of MSR_IA32_FLUSH_CMD, as both
the VMX and SVM paths are fatally buggy for guests that use
MSR_IA32_FLUSH_CMD or MSR_IA32_PRED_CMD, and because the entire
foundation of the logic is flawed.

The most immediate problem is an inverted check on @cmd that results in
rejecting legal values. SVM doubles down on the bug and drops the error,
i.e. silently breaks all guest mitigations based on the command MSRs.

The next issue is that neither VMX nor SVM was updated to mark
MSR_IA32_FLUSH_CMD as a possible passthrough MSR, which isn't hugely
problematic, but does break MSR filtering and triggers a WARN on VMX
designed to catch this exact bug.

The foundational issues stem from the MSR_IA32_FLUSH_CMD code reusing
logic from MSR_IA32_PRED_CMD, which in turn was likely copied from KVM's
support for MSR_IA32_SPEC_CTRL. The copy+paste from MSR_IA32_SPEC_CTRL
was misguided, as MSR_IA32_PRED_CMD (and MSR_IA32_FLUSH_CMD) is a
write-only MSR, i.e. doesn't need the same "deferred passthrough"
shenanigans as MSR_IA32_SPEC_CTRL.

Revert all MSR_IA32_FLUSH_CMD enabling in one fell swoop so that there
is no point where KVM advertises, but does not support, L1D_FLUSH.

This reverts commits 45cf86f26148e549c5ba4a8ab32a390e4bde216e,
723d5fb0ffe4c02bd4edf47ea02c02e454719f28, and
a807b78ad04b2eaa348f52f5cc7702385b6de1ee.

Reported-by: Nathan Chancellor
Link: https://lkml.kernel.org/r/20230317190432.GA863767%40dev-arch.thelio-3990X
Cc: Emanuele Giuseppe Esposito
Cc: Pawan Gupta
Cc: Jim Mattson
Signed-off-by: Sean Christopherson
Tested-by: Mathias Krause
---
 arch/x86/kvm/cpuid.c      |  2 +-
 arch/x86/kvm/svm/svm.c    | 43 ++++++++-----------------
 arch/x86/kvm/vmx/nested.c |  3 --
 arch/x86/kvm/vmx/vmx.c    | 68 ++++++++++++++-------------------------
 4 files changed, 39 insertions(+), 77 deletions(-)
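The inverted check called out above can be demonstrated outside the kernel.
The snippet below is illustrative only (plain userspace C, not KVM code): the
buggy predicate "!(data & ~cmd)" rejects exactly the legal command value and
does not flag a reserved bit at that check, while the intended reserved-bits
test is "data & ~cmd".

#include <stdint.h>
#include <stdio.h>

#define PRED_CMD_IBPB (1ULL << 0)

/* Returns 1 if the write would be rejected at this check, 0 otherwise. */
static int buggy_check(uint64_t data)
{
	return !(data & ~PRED_CMD_IBPB);	/* inverted, as in the reverted helper */
}

static int intended_check(uint64_t data)
{
	return (data & ~PRED_CMD_IBPB) ? 1 : 0;	/* reject only reserved bits */
}

int main(void)
{
	/* Legal write of the IBPB command bit: the inverted check faults it. */
	printf("data=0x1: buggy rejects=%d, intended rejects=%d\n",
	       buggy_check(PRED_CMD_IBPB), intended_check(PRED_CMD_IBPB));
	/* Reserved bit set: the inverted check does not flag it here. */
	printf("data=0x2: buggy rejects=%d, intended rejects=%d\n",
	       buggy_check(0x2), intended_check(0x2));
	return 0;
}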
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 9583a110cf5f..599aebec2d52 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -653,7 +653,7 @@ void kvm_set_cpu_caps(void)
 		F(SPEC_CTRL_SSBD) | F(ARCH_CAPABILITIES) | F(INTEL_STIBP) |
 		F(MD_CLEAR) | F(AVX512_VP2INTERSECT) | F(FSRM) | F(SERIALIZE) |
 		F(TSXLDTRK) | F(AVX512_FP16) |
-		F(AMX_TILE) | F(AMX_INT8) | F(AMX_BF16) | F(FLUSH_L1D)
+		F(AMX_TILE) | F(AMX_INT8) | F(AMX_BF16)
 	);
 
 	/* TSC_ADJUST and ARCH_CAPABILITIES are emulated in software. */
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 70183d2271b5..252e7f37e4e2 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2869,28 +2869,6 @@ static int svm_set_vm_cr(struct kvm_vcpu *vcpu, u64 data)
 	return 0;
 }
 
-static int svm_set_msr_ia32_cmd(struct kvm_vcpu *vcpu, struct msr_data *msr,
-				bool guest_has_feat, u64 cmd,
-				int x86_feature_bit)
-{
-	struct vcpu_svm *svm = to_svm(vcpu);
-
-	if (!msr->host_initiated && !guest_has_feat)
-		return 1;
-
-	if (!(msr->data & ~cmd))
-		return 1;
-	if (!boot_cpu_has(x86_feature_bit))
-		return 1;
-	if (!msr->data)
-		return 0;
-
-	wrmsrl(msr->index, cmd);
-	set_msr_interception(vcpu, svm->msrpm, msr->index, 0, 1);
-
-	return 0;
-}
-
 static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -2965,14 +2943,19 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SPEC_CTRL, 1, 1);
 		break;
 	case MSR_IA32_PRED_CMD:
-		r = svm_set_msr_ia32_cmd(vcpu, msr,
-					 guest_has_pred_cmd_msr(vcpu),
-					 PRED_CMD_IBPB, X86_FEATURE_IBPB);
-		break;
-	case MSR_IA32_FLUSH_CMD:
-		r = svm_set_msr_ia32_cmd(vcpu, msr,
-					 guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D),
-					 L1D_FLUSH, X86_FEATURE_FLUSH_L1D);
+		if (!msr->host_initiated &&
+		    !guest_has_pred_cmd_msr(vcpu))
+			return 1;
+
+		if (data & ~PRED_CMD_IBPB)
+			return 1;
+		if (!boot_cpu_has(X86_FEATURE_IBPB))
+			return 1;
+		if (!data)
+			break;
+
+		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
+		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_PRED_CMD, 0, 1);
 		break;
 	case MSR_AMD64_VIRT_SPEC_CTRL:
 		if (!msr->host_initiated &&
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index f63b28f46a71..1bc2b80273c9 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -654,9 +654,6 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
 					 MSR_IA32_PRED_CMD, MSR_TYPE_W);
 
-	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
-					 MSR_IA32_FLUSH_CMD, MSR_TYPE_W);
-
 	kvm_vcpu_unmap(vcpu, &vmx->nested.msr_bitmap_map, false);
 
 	vmx->nested.force_msr_bitmap_recalc = false;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d7bf14abdba1..f777509ecf17 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2133,39 +2133,6 @@ static u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated)
 	return debugctl;
 }
 
-static int vmx_set_msr_ia32_cmd(struct kvm_vcpu *vcpu,
-				struct msr_data *msr_info,
-				bool guest_has_feat, u64 cmd,
-				int x86_feature_bit)
-{
-	if (!msr_info->host_initiated && !guest_has_feat)
-		return 1;
-
-	if (!(msr_info->data & ~cmd))
-		return 1;
-	if (!boot_cpu_has(x86_feature_bit))
-		return 1;
-	if (!msr_info->data)
-		return 0;
-
-	wrmsrl(msr_info->index, cmd);
-
-	/*
-	 * For non-nested:
-	 * When it's written (to non-zero) for the first time, pass
-	 * it through.
-	 *
-	 * For nested:
-	 * The handling of the MSR bitmap for L2 guests is done in
-	 * nested_vmx_prepare_msr_bitmap. We should not touch the
-	 * vmcs02.msr_bitmap here since it gets completely overwritten
-	 * in the merging.
-	 */
-	vmx_disable_intercept_for_msr(vcpu, msr_info->index, MSR_TYPE_W);
-
-	return 0;
-}
-
 /*
  * Writes msr value into the appropriate "register".
  * Returns 0 on success, non-0 otherwise.
@@ -2319,16 +2286,31 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 1;
 		goto find_uret_msr;
 	case MSR_IA32_PRED_CMD:
-		ret = vmx_set_msr_ia32_cmd(vcpu, msr_info,
-					   guest_has_pred_cmd_msr(vcpu),
-					   PRED_CMD_IBPB,
-					   X86_FEATURE_IBPB);
-		break;
-	case MSR_IA32_FLUSH_CMD:
-		ret = vmx_set_msr_ia32_cmd(vcpu, msr_info,
-					   guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D),
-					   L1D_FLUSH,
-					   X86_FEATURE_FLUSH_L1D);
+		if (!msr_info->host_initiated &&
+		    !guest_has_pred_cmd_msr(vcpu))
+			return 1;
+
+		if (data & ~PRED_CMD_IBPB)
+			return 1;
+		if (!boot_cpu_has(X86_FEATURE_IBPB))
+			return 1;
+		if (!data)
+			break;
+
+		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
+
+		/*
+		 * For non-nested:
+		 * When it's written (to non-zero) for the first time, pass
+		 * it through.
+		 *
+		 * For nested:
+		 * The handling of the MSR bitmap for L2 guests is done in
+		 * nested_vmx_prepare_msr_bitmap. We should not touch the
+		 * vmcs02.msr_bitmap here since it gets completely overwritten
+		 * in the merging.
+		 */
+		vmx_disable_intercept_for_msr(vcpu, MSR_IA32_PRED_CMD, MSR_TYPE_W);
 		break;
 	case MSR_IA32_CR_PAT:
 		if (!kvm_pat_valid(data))
-- 
2.40.0.rc2.332.ga46443480c-goog

From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Nathan Chancellor,
    Emanuele Giuseppe Esposito, Pawan Gupta, Jim Mattson
Date: Tue, 21 Mar 2023 18:14:36 -0700
Message-ID: <20230322011440.2195485-3-seanjc@google.com>
In-Reply-To: <20230322011440.2195485-1-seanjc@google.com>
Subject: [PATCH 2/6] KVM: VMX: Passthrough MSR_IA32_PRED_CMD based purely on host+guest CPUID

Passthrough MSR_IA32_PRED_CMD based purely on whether or not the MSR is
supported and enabled, i.e. don't wait until the first write. There's no
benefit to deferred passthrough, and the extra logic only adds complexity.

Signed-off-by: Sean Christopherson
Reviewed-by: Xiaoyao Li
---
 arch/x86/kvm/vmx/vmx.c | 16 +++-------------
 1 file changed, 3 insertions(+), 13 deletions(-)
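The policy this change adopts can be modeled outside the kernel as a pure
function of host and guest capability bits. The sketch below is illustrative
only (plain C with a hypothetical helper name, not the KVM implementation):

#include <stdbool.h>
#include <stdio.h>

/*
 * Eager passthrough policy: decide the write intercept from capability
 * state alone, recomputed whenever guest CPUID changes, instead of
 * dropping the intercept on the guest's first non-zero write.
 */
static bool intercept_pred_cmd_writes(bool host_has_ibpb, bool guest_has_pred_cmd)
{
	/*
	 * If the host lacks IBPB the MSR is never passed through; writes
	 * stay intercepted and are validated (or rejected) in software.
	 */
	if (!host_has_ibpb)
		return true;

	/* Otherwise intercept only when the guest isn't supposed to see it. */
	return !guest_has_pred_cmd;
}

int main(void)
{
	printf("host+guest -> intercept=%d\n", intercept_pred_cmd_writes(true, true));
	printf("host only  -> intercept=%d\n", intercept_pred_cmd_writes(true, false));
	printf("no host    -> intercept=%d\n", intercept_pred_cmd_writes(false, true));
	return 0;
}

The point is that the intercept becomes a deterministic function of CPUID
state, recomputed at CPUID-update time, rather than a latch flipped by the
guest's first write.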
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f777509ecf17..5c01c76c0d45 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2298,19 +2298,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			break;
 
 		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
-
-		/*
-		 * For non-nested:
-		 * When it's written (to non-zero) for the first time, pass
-		 * it through.
-		 *
-		 * For nested:
-		 * The handling of the MSR bitmap for L2 guests is done in
-		 * nested_vmx_prepare_msr_bitmap. We should not touch the
-		 * vmcs02.msr_bitmap here since it gets completely overwritten
-		 * in the merging.
-		 */
-		vmx_disable_intercept_for_msr(vcpu, MSR_IA32_PRED_CMD, MSR_TYPE_W);
 		break;
 	case MSR_IA32_CR_PAT:
 		if (!kvm_pat_valid(data))
@@ -7743,6 +7730,9 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	vmx_set_intercept_for_msr(vcpu, MSR_IA32_XFD_ERR, MSR_TYPE_R,
 				  !guest_cpuid_has(vcpu, X86_FEATURE_XFD));
 
+	if (boot_cpu_has(X86_FEATURE_IBPB))
+		vmx_set_intercept_for_msr(vcpu, MSR_IA32_PRED_CMD, MSR_TYPE_W,
+					  !guest_has_pred_cmd_msr(vcpu));
 
 	set_cr4_guest_host_mask(vmx);
 
-- 
2.40.0.rc2.332.ga46443480c-goog

From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Nathan Chancellor,
    Emanuele Giuseppe Esposito, Pawan Gupta, Jim Mattson
Date: Tue, 21 Mar 2023 18:14:37 -0700
Message-ID: <20230322011440.2195485-4-seanjc@google.com>
In-Reply-To: <20230322011440.2195485-1-seanjc@google.com>
Subject: [PATCH 3/6] KVM: SVM: Passthrough MSR_IA32_PRED_CMD based purely on host+guest CPUID

Passthrough MSR_IA32_PRED_CMD based purely on whether or not the MSR is
supported and enabled, i.e. don't wait until the first write. There's no
benefit to deferred passthrough, and the extra logic only adds complexity.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/svm.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 252e7f37e4e2..f757b436ffae 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2955,7 +2955,6 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 			break;
 
 		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
-		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_PRED_CMD, 0, 1);
 		break;
 	case MSR_AMD64_VIRT_SPEC_CTRL:
 		if (!msr->host_initiated &&
@@ -4151,6 +4150,10 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 
 	svm_recalc_instruction_intercepts(vcpu, svm);
 
+	if (boot_cpu_has(X86_FEATURE_IBPB))
+		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_PRED_CMD, 0,
+				     !!guest_has_pred_cmd_msr(vcpu));
+
 	/* For sev guests, the memory encryption bit is not reserved in CR3.  */
 	if (sev_guest(vcpu->kvm)) {
 		best = kvm_find_cpuid_entry(vcpu, 0x8000001F);
-- 
2.40.0.rc2.332.ga46443480c-goog

From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Nathan Chancellor,
    Emanuele Giuseppe Esposito, Pawan Gupta, Jim Mattson
Date: Tue, 21 Mar 2023 18:14:38 -0700
Message-ID: <20230322011440.2195485-5-seanjc@google.com>
In-Reply-To: <20230322011440.2195485-1-seanjc@google.com>
Subject: [PATCH 4/6] KVM: x86: Move MSR_IA32_PRED_CMD WRMSR emulation to common code

Dedup the handling of MSR_IA32_PRED_CMD across VMX and SVM by moving the
logic to kvm_set_msr_common(). Now that the MSR interception toggling is
handled as part of setting guest CPUID, the VMX and SVM paths are
identical.

Opportunistically massage the code to make it a wee bit denser.

Signed-off-by: Sean Christopherson
Reviewed-by: Xiaoyao Li
---
 arch/x86/kvm/svm/svm.c | 14 --------------
 arch/x86/kvm/vmx/vmx.c | 14 --------------
 arch/x86/kvm/x86.c     | 11 +++++++++++
 3 files changed, 11 insertions(+), 28 deletions(-)
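The ordering of the consolidated checks can be sketched as a standalone model.
This is illustrative only: the booleans stand in for the host and guest CPUID
queries, and nothing below is the actual kernel implementation.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PRED_CMD_IBPB (1ULL << 0)

/*
 * Model of emulating a write to a write-only "command" MSR: permission
 * check first, then reserved bits and host support, a zero write is a
 * no-op, and only the known command is forwarded to hardware. Returns 1
 * to signal a fault, 0 to complete the write (mirroring the set_msr
 * return convention).
 */
static int emulate_pred_cmd_write(uint64_t data, bool host_initiated,
				  bool guest_has_msr, bool host_has_ibpb)
{
	if (!host_initiated && !guest_has_msr)
		return 1;			/* MSR not exposed to this guest */
	if (!host_has_ibpb || (data & ~PRED_CMD_IBPB))
		return 1;			/* unsupported, or reserved bits set */
	if (!data)
		return 0;			/* writing 0 is architecturally a no-op */

	puts("would execute wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB)");
	return 0;
}

int main(void)
{
	printf("%d\n", emulate_pred_cmd_write(PRED_CMD_IBPB, false, true, true));	/* 0 */
	printf("%d\n", emulate_pred_cmd_write(0x2, false, true, true));		/* 1 */
	return 0;
}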
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index f757b436ffae..85bb535fc321 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2942,20 +2942,6 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 		 */
 		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SPEC_CTRL, 1, 1);
 		break;
-	case MSR_IA32_PRED_CMD:
-		if (!msr->host_initiated &&
-		    !guest_has_pred_cmd_msr(vcpu))
-			return 1;
-
-		if (data & ~PRED_CMD_IBPB)
-			return 1;
-		if (!boot_cpu_has(X86_FEATURE_IBPB))
-			return 1;
-		if (!data)
-			break;
-
-		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
-		break;
 	case MSR_AMD64_VIRT_SPEC_CTRL:
 		if (!msr->host_initiated &&
 		    !guest_cpuid_has(vcpu, X86_FEATURE_VIRT_SSBD))
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 5c01c76c0d45..29807be219b9 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2285,20 +2285,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (data & ~(TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR))
 			return 1;
 		goto find_uret_msr;
-	case MSR_IA32_PRED_CMD:
-		if (!msr_info->host_initiated &&
-		    !guest_has_pred_cmd_msr(vcpu))
-			return 1;
-
-		if (data & ~PRED_CMD_IBPB)
-			return 1;
-		if (!boot_cpu_has(X86_FEATURE_IBPB))
-			return 1;
-		if (!data)
-			break;
-
-		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
-		break;
 	case MSR_IA32_CR_PAT:
 		if (!kvm_pat_valid(data))
 			return 1;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 237c483b1230..c83ec88da043 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3617,6 +3617,17 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		vcpu->arch.perf_capabilities = data;
 		kvm_pmu_refresh(vcpu);
 		return 0;
+	case MSR_IA32_PRED_CMD:
+		if (!msr_info->host_initiated && !guest_has_pred_cmd_msr(vcpu))
+			return 1;
+
+		if (!boot_cpu_has(X86_FEATURE_IBPB) || (data & ~PRED_CMD_IBPB))
+			return 1;
+		if (!data)
+			break;
+
+		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
+		break;
 	case MSR_EFER:
 		return set_efer(vcpu, msr_info);
 	case MSR_K7_HWCR:
-- 
2.40.0.rc2.332.ga46443480c-goog

From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Nathan Chancellor,
    Emanuele Giuseppe Esposito, Pawan Gupta, Jim Mattson
Date: Tue, 21 Mar 2023 18:14:39 -0700
Message-ID: <20230322011440.2195485-6-seanjc@google.com>
In-Reply-To: <20230322011440.2195485-1-seanjc@google.com>
Subject: [PATCH 5/6] KVM: x86: Virtualize FLUSH_L1D and passthrough MSR_IA32_FLUSH_CMD

Virtualize FLUSH_L1D so that the guest can use the performant L1D flush
if one of the many mitigations might require a flush in the guest, e.g.
Linux provides an option to flush the L1D when switching mms.

Passthrough MSR_IA32_FLUSH_CMD for write when it's supported in hardware
and exposed to the guest, i.e. always let the guest write it directly if
FLUSH_L1D is fully supported.

Forward writes to hardware in host context on the off chance that KVM
ends up emulating a WRMSR, or in the really unlikely scenario where
userspace wants to force a flush. Restrict these forwarded WRMSRs to
the known command out of an abundance of caution. Passing through the
MSR means the guest can throw any and all values at hardware, but doing
so in host context is arguably a bit more dangerous.

Link: https://lkml.kernel.org/r/CALMp9eTt3xzAEoQ038bJQ9LN0ZOXrSWsN7xnNUD%2B0SS%3DWwF7Pg%40mail.gmail.com
Link: https://lore.kernel.org/all/20230201132905.549148-2-eesposit@redhat.com
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/cpuid.c      |  2 +-
 arch/x86/kvm/svm/svm.c    |  5 +++++
 arch/x86/kvm/vmx/nested.c |  3 +++
 arch/x86/kvm/vmx/vmx.c    |  5 +++++
 arch/x86/kvm/vmx/vmx.h    |  2 +-
 arch/x86/kvm/x86.c        | 12 ++++++++++++
 6 files changed, 27 insertions(+), 2 deletions(-)
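The "userspace wants to force a flush" case corresponds to a host-initiated
write delivered through KVM_SET_MSRS. Below is a hedged sketch of what that
could look like from a VMM; it assumes an already-created vCPU fd (vcpu_fd)
from the usual KVM_CREATE_VM/KVM_CREATE_VCPU sequence, and spells out the MSR
index and command bit locally since those constants may not be exported by
the uapi headers.

#include <linux/kvm.h>
#include <stdlib.h>
#include <sys/ioctl.h>

#define MSR_IA32_FLUSH_CMD	0x0000010b
#define L1D_FLUSH		(1ULL << 0)

/* Ask KVM to perform a host-initiated write of the L1D flush command. */
int force_guest_l1d_flush(int vcpu_fd)
{
	struct kvm_msrs *msrs;
	int ret;

	/* kvm_msrs ends in a flexible array, so allocate room for one entry. */
	msrs = calloc(1, sizeof(*msrs) + sizeof(struct kvm_msr_entry));
	if (!msrs)
		return -1;

	msrs->nmsrs = 1;
	msrs->entries[0].index = MSR_IA32_FLUSH_CMD;
	msrs->entries[0].data = L1D_FLUSH;

	/* KVM_SET_MSRS returns the number of MSRs successfully set. */
	ret = ioctl(vcpu_fd, KVM_SET_MSRS, msrs);
	free(msrs);
	return ret == 1 ? 0 : -1;
}

With the patch below, such a write succeeds whenever the host supports
FLUSH_L1D, even if the guest's CPUID does not expose it, because the
host_initiated path skips the guest CPUID check.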
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 599aebec2d52..9583a110cf5f 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -653,7 +653,7 @@ void kvm_set_cpu_caps(void)
 		F(SPEC_CTRL_SSBD) | F(ARCH_CAPABILITIES) | F(INTEL_STIBP) |
 		F(MD_CLEAR) | F(AVX512_VP2INTERSECT) | F(FSRM) | F(SERIALIZE) |
 		F(TSXLDTRK) | F(AVX512_FP16) |
-		F(AMX_TILE) | F(AMX_INT8) | F(AMX_BF16)
+		F(AMX_TILE) | F(AMX_INT8) | F(AMX_BF16) | F(FLUSH_L1D)
 	);
 
 	/* TSC_ADJUST and ARCH_CAPABILITIES are emulated in software. */
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 85bb535fc321..b32edaf5a74b 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -95,6 +95,7 @@ static const struct svm_direct_access_msrs {
 #endif
 	{ .index = MSR_IA32_SPEC_CTRL,			.always = false },
 	{ .index = MSR_IA32_PRED_CMD,			.always = false },
+	{ .index = MSR_IA32_FLUSH_CMD,			.always = false },
 	{ .index = MSR_IA32_LASTBRANCHFROMIP,		.always = false },
 	{ .index = MSR_IA32_LASTBRANCHTOIP,		.always = false },
 	{ .index = MSR_IA32_LASTINTFROMIP,		.always = false },
@@ -4140,6 +4141,10 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_PRED_CMD, 0,
 				     !!guest_has_pred_cmd_msr(vcpu));
 
+	if (boot_cpu_has(X86_FEATURE_FLUSH_L1D))
+		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_FLUSH_CMD, 0,
+				     !!guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D));
+
 	/* For sev guests, the memory encryption bit is not reserved in CR3.  */
 	if (sev_guest(vcpu->kvm)) {
 		best = kvm_find_cpuid_entry(vcpu, 0x8000001F);
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 1bc2b80273c9..f63b28f46a71 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -654,6 +654,9 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
 					 MSR_IA32_PRED_CMD, MSR_TYPE_W);
 
+	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
+					 MSR_IA32_FLUSH_CMD, MSR_TYPE_W);
+
 	kvm_vcpu_unmap(vcpu, &vmx->nested.msr_bitmap_map, false);
 
 	vmx->nested.force_msr_bitmap_recalc = false;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 29807be219b9..56e0c7ae961d 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -164,6 +164,7 @@ module_param(allow_smaller_maxphyaddr, bool, S_IRUGO);
 static u32 vmx_possible_passthrough_msrs[MAX_POSSIBLE_PASSTHROUGH_MSRS] = {
 	MSR_IA32_SPEC_CTRL,
 	MSR_IA32_PRED_CMD,
+	MSR_IA32_FLUSH_CMD,
 	MSR_IA32_TSC,
 #ifdef CONFIG_X86_64
 	MSR_FS_BASE,
@@ -7720,6 +7721,10 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 		vmx_set_intercept_for_msr(vcpu, MSR_IA32_PRED_CMD, MSR_TYPE_W,
 					  !guest_has_pred_cmd_msr(vcpu));
 
+	if (boot_cpu_has(X86_FEATURE_FLUSH_L1D))
+		vmx_set_intercept_for_msr(vcpu, MSR_IA32_FLUSH_CMD, MSR_TYPE_W,
+					  !guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D));
+
 	set_cr4_guest_host_mask(vmx);
 
 	vmx_write_encls_bitmap(vcpu, NULL);
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 2acdc54bc34b..cb766f65a3eb 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -369,7 +369,7 @@ struct vcpu_vmx {
 	struct lbr_desc lbr_desc;
 
 	/* Save desired MSR intercept (read: pass-through) state */
-#define MAX_POSSIBLE_PASSTHROUGH_MSRS	15
+#define MAX_POSSIBLE_PASSTHROUGH_MSRS	16
 	struct {
 		DECLARE_BITMAP(read, MAX_POSSIBLE_PASSTHROUGH_MSRS);
 		DECLARE_BITMAP(write, MAX_POSSIBLE_PASSTHROUGH_MSRS);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c83ec88da043..3c58dbae7b4c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3628,6 +3628,18 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 
 		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
 		break;
+	case MSR_IA32_FLUSH_CMD:
+		if (!msr_info->host_initiated &&
+		    !guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D))
+			return 1;
+
+		if (!boot_cpu_has(X86_FEATURE_FLUSH_L1D) || (data & ~L1D_FLUSH))
+			return 1;
+		if (!data)
+			break;
+
+		wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
+		break;
 	case MSR_EFER:
 		return set_efer(vcpu, msr_info);
 	case MSR_K7_HWCR:
-- 
2.40.0.rc2.332.ga46443480c-goog

From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Nathan Chancellor,
    Emanuele Giuseppe Esposito, Pawan Gupta, Jim Mattson
Date: Tue, 21 Mar 2023 18:14:40 -0700
Message-ID: <20230322011440.2195485-7-seanjc@google.com>
In-Reply-To: <20230322011440.2195485-1-seanjc@google.com>
Subject: [PATCH 6/6] KVM: SVM: Return the local "r" variable from svm_set_msr()

Rename "r" to "ret" and actually return it from svm_set_msr() to reduce
the probability of repeating the mistake of commit 723d5fb0ffe4 ("kvm:
svm: Add IA32_FLUSH_CMD guest support"), which set "r" thinking that it
would be propagated to the caller.

Alternatively, the declaration of "r" could be moved into the handling
of MSR_TSC_AUX, but that risks variable shadowing in the future. A
wrapper for kvm_set_user_return_msr() would allow eliding a local
variable, but that feels like delaying the inevitable.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/svm.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
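The mistake being guarded against boils down to a status captured in a local
variable that a hard-coded return silently discards. The tiny illustration
below is not the kernel code; the names are made up and do_write() merely
stands in for kvm_set_user_return_msr().

#include <stdio.h>

static int do_write(long value)
{
	return value < 0 ? 1 : 0;	/* stand-in for the real setter */
}

static int handle_write_buggy(long value)
{
	int r;

	r = do_write(value);	/* error captured ... */
	return 0;		/* ... but dropped, the bug class called out above */
}

static int handle_write_fixed(long value)
{
	int ret = 0;

	ret = do_write(value);
	return ret;		/* the single exit path propagates the status */
}

int main(void)
{
	printf("buggy: %d, fixed: %d\n",
	       handle_write_buggy(-1), handle_write_fixed(-1));
	return 0;
}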
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index b32edaf5a74b..57f241c5a371 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2873,7 +2873,7 @@ static int svm_set_vm_cr(struct kvm_vcpu *vcpu, u64 data)
 static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
-	int r;
+	int ret = 0;
 
 	u32 ecx = msr->index;
 	u64 data = msr->data;
@@ -2995,10 +2995,10 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 		 * guest via direct_access_msrs, and switch it via user return.
 		 */
 		preempt_disable();
-		r = kvm_set_user_return_msr(tsc_aux_uret_slot, data, -1ull);
+		ret = kvm_set_user_return_msr(tsc_aux_uret_slot, data, -1ull);
 		preempt_enable();
-		if (r)
-			return 1;
+		if (ret)
+			break;
 
 		svm->tsc_aux = data;
 		break;
@@ -3056,7 +3056,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 	default:
 		return kvm_set_msr_common(vcpu, msr);
 	}
-	return 0;
+	return ret;
 }
 
 static int msr_interception(struct kvm_vcpu *vcpu)
-- 
2.40.0.rc2.332.ga46443480c-goog