From nobody Sun Feb 8 15:58:15 2026
Reply-To: Sean Christopherson <seanjc@google.com>
Date: Fri, 28 Jul 2023 18:15:50 -0700
In-Reply-To: <20230729011608.1065019-1-seanjc@google.com>
Mime-Version: 1.0
References: <20230729011608.1065019-1-seanjc@google.com>
X-Mailer: git-send-email 2.41.0.487.g6d72f3e995-goog
Message-ID: <20230729011608.1065019-4-seanjc@google.com>
Subject: [PATCH v2 03/21] KVM: nSVM: Use the "outer" helper for writing multiplier to MSR_AMD64_TSC_RATIO
From: Sean Christopherson <seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky
Content-Type: text/plain; charset="utf-8"

When emulating nested SVM transitions,
use the outer helper for writing the TSC multiplier for L2.  Using the
inner helper only for one-off cases, i.e. for paths where KVM is NOT
emulating or modifying vCPU state, will allow for multiple cleanups:

 - Explicitly disabling preemption only in the outer helper
 - Getting the multiplier from the vCPU field in the outer helper
 - Skipping the WRMSR in the outer helper if guest state isn't loaded

Opportunistically delete an extra newline.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/svm/nested.c | 4 ++--
 arch/x86/kvm/svm/svm.c    | 5 ++---
 arch/x86/kvm/svm/svm.h    | 2 +-
 3 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index c66c823ae222..5d5a1d7832fb 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1103,7 +1103,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 	if (kvm_caps.has_tsc_control &&
 	    vcpu->arch.tsc_scaling_ratio != vcpu->arch.l1_tsc_scaling_ratio) {
 		vcpu->arch.tsc_scaling_ratio = vcpu->arch.l1_tsc_scaling_ratio;
-		__svm_write_tsc_multiplier(vcpu->arch.tsc_scaling_ratio);
+		svm_write_tsc_multiplier(vcpu, vcpu->arch.tsc_scaling_ratio);
 	}
 
 	svm->nested.ctl.nested_cr3 = 0;
@@ -1536,7 +1536,7 @@ void nested_svm_update_tsc_ratio_msr(struct kvm_vcpu *vcpu)
 	vcpu->arch.tsc_scaling_ratio =
 		kvm_calc_nested_tsc_multiplier(vcpu->arch.l1_tsc_scaling_ratio,
 					       svm->tsc_ratio_msr);
-	__svm_write_tsc_multiplier(vcpu->arch.tsc_scaling_ratio);
+	svm_write_tsc_multiplier(vcpu, vcpu->arch.tsc_scaling_ratio);
 }
 
 /* Inverse operation of nested_copy_vmcb_control_to_cache(). asid is copied too. */
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index d381ad424554..13f316375b14 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -550,7 +550,7 @@ static int svm_check_processor_compat(void)
 	return 0;
 }
 
-void __svm_write_tsc_multiplier(u64 multiplier)
+static void __svm_write_tsc_multiplier(u64 multiplier)
 {
 	preempt_disable();
 
@@ -1110,12 +1110,11 @@ static void svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
 	vmcb_mark_dirty(svm->vmcb, VMCB_INTERCEPTS);
 }
 
-static void svm_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier)
+void svm_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier)
 {
 	__svm_write_tsc_multiplier(multiplier);
 }
 
-
 /* Evaluate instruction intercepts that depend on guest CPUID features. */
 static void svm_recalc_instruction_intercepts(struct kvm_vcpu *vcpu,
 					      struct vcpu_svm *svm)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 18af7e712a5a..7132c0a04817 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -658,7 +658,7 @@ int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
 			       bool has_error_code, u32 error_code);
 int nested_svm_exit_special(struct vcpu_svm *svm);
 void nested_svm_update_tsc_ratio_msr(struct kvm_vcpu *vcpu);
-void __svm_write_tsc_multiplier(u64 multiplier);
+void svm_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier);
 void nested_copy_vmcb_control_to_cache(struct vcpu_svm *svm,
 				       struct vmcb_control_area *control);
 void nested_copy_vmcb_save_to_cache(struct vcpu_svm *svm,
-- 
2.41.0.487.g6d72f3e995-goog
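
For context beyond this patch: below is a rough sketch of where the outer
helper could end up once the cleanups listed in the changelog land later in
the series.  This is not code from this patch; it assumes the existing
to_svm() accessor and the vcpu_svm.guest_state_loaded flag, and that the
then-redundant multiplier parameter is eventually dropped in favor of
reading vcpu->arch.tsc_scaling_ratio directly.

/*
 * Sketch only (not part of this patch): the outer helper after the
 * series' cleanups, i.e. once it owns preemption, pulls the multiplier
 * from the vCPU, and skips the WRMSR when guest state isn't loaded.
 */
void svm_write_tsc_multiplier(struct kvm_vcpu *vcpu)
{
	preempt_disable();

	/* The MSR is per-pCPU; only write it if this vCPU's state is loaded. */
	if (to_svm(vcpu)->guest_state_loaded)
		__svm_write_tsc_multiplier(vcpu->arch.tsc_scaling_ratio);

	preempt_enable();
}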