Subject: [RFC PATCH 09/12] KVM: TDX: Fold tdx_mem_page_record_premap_cnt() into its sole caller
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michael Roth, Yan Zhao, Ira Weiny, Vishal Annapurve, Rick Edgecombe
Date: Tue, 26 Aug 2025 17:05:19 -0700
Message-ID: <20250827000522.4022426-10-seanjc@google.com>
In-Reply-To: <20250827000522.4022426-1-seanjc@google.com>
References: <20250827000522.4022426-1-seanjc@google.com>

Fold tdx_mem_page_record_premap_cnt() into tdx_sept_set_private_spte(), as
providing a one-off helper for effectively three lines of code is at best a
wash, and splitting the code makes the comment for smp_rmb() _extremely_
confusing: the comment talks about reading kvm->arch.pre_fault_allowed
before kvm_tdx->state, but the immediately visible code does the exact
opposite.

Opportunistically rewrite the comments to more explicitly explain who is
checking what, as well as _why_ the ordering matters.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/tdx.c | 49 ++++++++++++++++++------------------------
 1 file changed, 21 insertions(+), 28 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index b7559ea1e353..e4b70c0dbda3 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1608,29 +1608,6 @@ static int tdx_mem_page_aug(struct kvm *kvm, gfn_t gfn,
 	return 0;
 }
 
-/*
- * KVM_TDX_INIT_MEM_REGION calls kvm_gmem_populate() to map guest pages; the
- * callback tdx_gmem_post_populate() then maps pages into private memory.
- * through the a seamcall TDH.MEM.PAGE.ADD(). The SEAMCALL also requires the
- * private EPT structures for the page to have been built before, which is
- * done via kvm_tdp_map_page(). nr_premapped counts the number of pages that
- * were added to the EPT structures but not added with TDH.MEM.PAGE.ADD().
- * The counter has to be zero on KVM_TDX_FINALIZE_VM, to ensure that there
- * are no half-initialized shared EPT pages.
- */
-static int tdx_mem_page_record_premap_cnt(struct kvm *kvm, gfn_t gfn,
-					  enum pg_level level, kvm_pfn_t pfn)
-{
-	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
-
-	if (KVM_BUG_ON(kvm->arch.pre_fault_allowed, kvm))
-		return -EIO;
-
-	/* nr_premapped will be decreased when tdh_mem_page_add() is called. */
-	atomic64_inc(&kvm_tdx->nr_premapped);
-	return 0;
-}
-
 static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 				     enum pg_level level, kvm_pfn_t pfn)
 {
@@ -1641,14 +1618,30 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 		return -EIO;
 
 	/*
-	 * Read 'pre_fault_allowed' before 'kvm_tdx->state'; see matching
-	 * barrier in tdx_td_finalize().
+	 * Ensure pre_fault_allowed is read by kvm_arch_vcpu_pre_fault_memory()
+	 * before kvm_tdx->state. Userspace must not be allowed to pre-fault
+	 * arbitrary memory until the initial memory image is finalized. Pairs
+	 * with the smp_wmb() in tdx_td_finalize().
 	 */
 	smp_rmb();
-	if (likely(kvm_tdx->state == TD_STATE_RUNNABLE))
-		return tdx_mem_page_aug(kvm, gfn, level, pfn);
 
-	return tdx_mem_page_record_premap_cnt(kvm, gfn, level, pfn);
+	/*
+	 * If the TD isn't finalized/runnable, then userspace is initializing
+	 * the VM image via KVM_TDX_INIT_MEM_REGION. Increment the number of
+	 * pages that need to be initialized via TDH.MEM.PAGE.ADD (PAGE.ADD
+	 * requires a pre-existing S-EPT mapping). KVM_TDX_FINALIZE_VM checks
+	 * the counter to ensure all mapped pages have been added to the image,
+	 * to prevent running the TD with uninitialized memory.
+	 */
+	if (unlikely(kvm_tdx->state != TD_STATE_RUNNABLE)) {
+		if (KVM_BUG_ON(kvm->arch.pre_fault_allowed, kvm))
+			return -EIO;
+
+		atomic64_inc(&kvm_tdx->nr_premapped);
+		return 0;
+	}
+
+	return tdx_mem_page_aug(kvm, gfn, level, pfn);
 }
 
 static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
-- 
2.51.0.268.g9569e192d0-goog
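
For readers less familiar with the smp_wmb()/smp_rmb() pairing the rewritten
comment describes, below is a minimal standalone sketch of the same
publish/observe pattern, using C11 fences in place of the kernel barriers.
It assumes, per the pairing comment, that tdx_td_finalize() sets
kvm_tdx->state to TD_STATE_RUNNABLE before setting
kvm->arch.pre_fault_allowed. The names td_state, pre_fault_allowed,
finalize_td() and map_private_page() are illustrative stand-ins, not the
real KVM code; the point is only the guarantee that once a reader observes
pre_fault_allowed == true, it must also observe TD_STATE_RUNNABLE.

/*
 * Illustrative userspace analogue of the ordering in the patch; variable
 * and function names are stand-ins, not the actual KVM/TDX symbols.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

enum { TD_STATE_INITIALIZING, TD_STATE_RUNNABLE };

static _Atomic int td_state = TD_STATE_INITIALIZING;
static atomic_bool pre_fault_allowed;

/* Writer side: conceptually what tdx_td_finalize() does. */
static void finalize_td(void)
{
	atomic_store_explicit(&td_state, TD_STATE_RUNNABLE, memory_order_relaxed);
	/* Publish TD_STATE_RUNNABLE before allowing pre-faults (the smp_wmb()). */
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&pre_fault_allowed, true, memory_order_relaxed);
}

/* Reader side: conceptually what the private-SPTE fault path does. */
static const char *map_private_page(void)
{
	bool allowed = atomic_load_explicit(&pre_fault_allowed, memory_order_relaxed);
	int state;

	/* Read pre_fault_allowed before td_state (the smp_rmb()). */
	atomic_thread_fence(memory_order_acquire);
	state = atomic_load_explicit(&td_state, memory_order_relaxed);

	if (state != TD_STATE_RUNNABLE) {
		/*
		 * The paired fences guarantee a not-yet-runnable TD is never
		 * observed together with pre_fault_allowed == true; that
		 * combination is the KVM_BUG_ON() case in the patch.
		 */
		return allowed ? "BUG: pre-fault before finalize"
			       : "PAGE.ADD path (nr_premapped++)";
	}
	return "PAGE.AUG path";
}

int main(void)
{
	printf("before finalize: %s\n", map_private_page());
	finalize_td();
	printf("after finalize:  %s\n", map_private_page());
	return 0;
}

Run single-threaded this only exercises both branches; the interesting
property is the cross-thread one stated above, which the release/acquire
fence pair provides just as the kernel's smp_wmb()/smp_rmb() pair does.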