From nobody Sat Apr 11 21:27:43 2026
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: vkuznets@redhat.com, dgilbert@redhat.com, David Woodhouse, stable@vger.kernel.org
Subject: [PATCH] KVM: x86: do not report preemption if the steal time cache is stale
Date: Thu, 4 Aug 2022 10:04:06 -0400
Message-Id: <20220804140406.1335587-1-pbonzini@redhat.com>

Commit 7e2175ebd695 ("KVM: x86: Fix recording of guest steal time /
preempted status", 2021-11-11) open coded the previous call to
kvm_map_gfn, but in doing so it dropped the comparison between the
cached guest physical address and the one in the MSR.  This causes an
incorrect cache hit if the guest modifies the steal time address while
the memslots remain the same.  This can happen with kexec, in which
case the preempted bit is written at the address used by the old
kernel instead of the new one.
Cc: David Woodhouse
Cc: stable@vger.kernel.org
Fixes: 7e2175ebd695 ("KVM: x86: Fix recording of guest steal time / preempted status")
Signed-off-by: Paolo Bonzini
Reviewed-by: Vitaly Kuznetsov
---
 arch/x86/kvm/x86.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 0f3c2e034740..8ee4698cb90a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4715,6 +4715,7 @@ static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
 	struct kvm_steal_time __user *st;
 	struct kvm_memslots *slots;
 	static const u8 preempted = KVM_VCPU_PREEMPTED;
+	gpa_t gpa = vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS;
 
 	/*
 	 * The vCPU can be marked preempted if and only if the VM-Exit was on
@@ -4742,6 +4743,7 @@ static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
 	slots = kvm_memslots(vcpu->kvm);
 
 	if (unlikely(slots->generation != ghc->generation ||
+		     gpa != ghc->gpa ||
 		     kvm_is_error_hva(ghc->hva) ||
 		     !ghc->memslot))
 		return;
 
-- 
2.31.1