From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, Jim Mattson, erdemaktas@google.com, Connor Kuehl, Sean Christopherson
Subject: [RFC PATCH v5 036/104] KVM: x86/mmu: Explicitly check for MMIO spte in fast page fault
Date: Fri, 4 Mar 2022 11:48:52 -0800

From: Sean Christopherson

Explicitly check for an MMIO spte in the fast page fault flow.  TDX will
use a not-present entry for MMIO sptes, which can be mistaken for an
access-tracked spte since both have SPTE_SPECIAL_MASK set.

The fast page fault path handles changes to access bits without taking
mmu_lock, e.g. clearing the write-protect bit for dirty page tracking.
MMIO emulation is always handled on a slow path, so this change does not
affect the default VM case.

Signed-off-by: Sean Christopherson
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b68191aa39bf..9907cb759fd1 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3167,7 +3167,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 			break;
 
 		sp = sptep_to_sp(sptep);
-		if (!is_last_spte(spte, sp->role.level))
+		if (!is_last_spte(spte, sp->role.level) || is_mmio_spte(spte))
 			break;
 
 		/*
-- 
2.25.1