From: David Woodhouse
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin",
Peter Anvin" , Paul Durrant , Peter Zijlstra , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Steven Rostedt , Ben Segall , Mel Gorman , Daniel Bristot de Oliveira , Valentin Schneider , Shuah Khan , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, jalliste@amazon.co.uk, sveith@amazon.de, zide.chen@intel.com, Dongli Zhang , Chenyi Qiang Subject: [RFC PATCH v3 12/21] KVM: x86: Remove implicit rdtsc() from kvm_compute_l1_tsc_offset() Date: Wed, 22 May 2024 01:17:07 +0100 Message-ID: <20240522001817.619072-13-dwmw2@infradead.org> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240522001817.619072-1-dwmw2@infradead.org> References: <20240522001817.619072-1-dwmw2@infradead.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Sender: David Woodhouse X-SRS-Rewrite: SMTP reverse-path rewritten from by casper.infradead.org. See http://www.infradead.org/rpr.html Content-Type: text/plain; charset="utf-8" From: David Woodhouse Let the callers pass the host TSC value in as an explicit parameter. This leaves some fairly obviously stupid code, which using this function to compare the guest TSC at some *other* time, with the newly-minted TSC value from rdtsc(). Unless it's being used to measure *elapsed* time, that isn't very sensible. In this case, "obviously stupid" is an improvement over being non-obviously so. No functional change intended. Signed-off-by: David Woodhouse Reviewed-by: Paul Durrant --- arch/x86/kvm/x86.c | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index ef3cd6113037..ea59694d712a 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -2601,11 +2601,12 @@ u64 kvm_scale_tsc(u64 tsc, u64 ratio) return _tsc; } =20 -static u64 kvm_compute_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 target_tsc) +static u64 kvm_compute_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 host_tsc, + u64 target_tsc) { u64 tsc; =20 - tsc =3D kvm_scale_tsc(rdtsc(), vcpu->arch.l1_tsc_scaling_ratio); + tsc =3D kvm_scale_tsc(host_tsc, vcpu->arch.l1_tsc_scaling_ratio); =20 return target_tsc - tsc; } @@ -2758,7 +2759,7 @@ static void kvm_synchronize_tsc(struct kvm_vcpu *vcpu= , u64 *user_value) bool synchronizing =3D false; =20 raw_spin_lock_irqsave(&kvm->arch.tsc_write_lock, flags); - offset =3D kvm_compute_l1_tsc_offset(vcpu, data); + offset =3D kvm_compute_l1_tsc_offset(vcpu, rdtsc(), data); ns =3D get_kvmclock_base_ns(); elapsed =3D ns - kvm->arch.last_tsc_nsec; =20 @@ -2809,7 +2810,7 @@ static void kvm_synchronize_tsc(struct kvm_vcpu *vcpu= , u64 *user_value) } else { u64 delta =3D nsec_to_cycles(vcpu, elapsed); data +=3D delta; - offset =3D kvm_compute_l1_tsc_offset(vcpu, data); + offset =3D kvm_compute_l1_tsc_offset(vcpu, rdtsc(), data); } matched =3D true; } @@ -4024,7 +4025,8 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct = msr_data *msr_info) if (msr_info->host_initiated) { kvm_synchronize_tsc(vcpu, &data); } else { - u64 adj =3D kvm_compute_l1_tsc_offset(vcpu, data) - vcpu->arch.l1_tsc_o= ffset; + u64 adj =3D kvm_compute_l1_tsc_offset(vcpu, rdtsc(), data) - + vcpu->arch.l1_tsc_offset; adjust_tsc_offset_guest(vcpu, adj); vcpu->arch.ia32_tsc_adjust_msr +=3D adj; } @@ -5098,7 +5100,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cp= u) mark_tsc_unstable("KVM discovered backwards TSC"); =20 if (kvm_check_tsc_unstable()) { - u64 offset =3D kvm_compute_l1_tsc_offset(vcpu, + u64 offset =3D 
kvm_compute_l1_tsc_offset(vcpu, rdtsc(), vcpu->arch.last_guest_tsc); kvm_vcpu_write_tsc_offset(vcpu, offset); vcpu->arch.tsc_catchup =3D 1; --=20 2.44.0
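
[Editor's illustration, not part of the patch: a minimal userspace sketch of the
calling convention this patch introduces, i.e. the caller samples the host TSC
once and passes it in explicitly instead of the function calling rdtsc() itself.
The names toy_vcpu, toy_scale_tsc() and toy_compute_l1_tsc_offset() are invented
stand-ins; real kvm_scale_tsc() does 64x64->128-bit fixed-point scaling with
vcpu->arch.l1_tsc_scaling_ratio, which is simplified to a plain multiplier here.]

#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>		/* __rdtsc() */

struct toy_vcpu {
	uint64_t l1_tsc_scaling_ratio;	/* toy: plain multiplier, not fixed point */
};

/* Toy stand-in for kvm_scale_tsc(): scale a host TSC value for the guest. */
static uint64_t toy_scale_tsc(uint64_t tsc, uint64_t ratio)
{
	return tsc * ratio;
}

/*
 * Shape of kvm_compute_l1_tsc_offset() after this patch: the host TSC is an
 * explicit argument rather than an implicit rdtsc() inside the function.
 */
static uint64_t toy_compute_l1_tsc_offset(struct toy_vcpu *vcpu, uint64_t host_tsc,
					  uint64_t target_tsc)
{
	uint64_t tsc = toy_scale_tsc(host_tsc, vcpu->l1_tsc_scaling_ratio);

	return target_tsc - tsc;
}

int main(void)
{
	struct toy_vcpu vcpu = { .l1_tsc_scaling_ratio = 1 };
	uint64_t target_guest_tsc = 1000000;

	/* The caller now decides when the host TSC is sampled... */
	uint64_t host_tsc = __rdtsc();

	/* ...and passes it in explicitly, as the converted callers in the diff do. */
	uint64_t offset = toy_compute_l1_tsc_offset(&vcpu, host_tsc, target_guest_tsc);

	printf("offset = %llu\n", (unsigned long long)offset);
	return 0;
}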