From nobody Tue Oct 7 18:28:34 2025
Date: Mon, 7 Jul 2025 22:47:14 +0000
In-Reply-To: <20250707224720.4016504-1-jthoughton@google.com>
References: <20250707224720.4016504-1-jthoughton@google.com>
Message-ID: <20250707224720.4016504-2-jthoughton@google.com>
Subject: [PATCH v5 1/7] KVM: x86/mmu: Track TDP MMU NX huge pages separately
From: James Houghton
To: Paolo Bonzini, Sean Christopherson
Cc: Vipin Sharma, David Matlack, James Houghton, kvm@vger.kernel.org,
 linux-kernel@vger.kernel.org

From: Vipin Sharma

Introduce struct kvm_possible_nx_huge_pages to track the list of
possible NX huge pages and the number of pages on that list, with one
instance per MMU type. When calculating how many pages to zap, use the
new per-type counts instead of kvm->stat.nx_lpage_splits, which is the
sum of the two new counts.

Suggested-by: Sean Christopherson
Suggested-by: David Matlack
Signed-off-by: Vipin Sharma
Co-developed-by: James Houghton
Signed-off-by: James Houghton
---
 arch/x86/include/asm/kvm_host.h | 43 ++++++++++++++++--------
 arch/x86/kvm/mmu/mmu.c          | 58 +++++++++++++++++++++------------
 arch/x86/kvm/mmu/mmu_internal.h |  7 ++--
 arch/x86/kvm/mmu/tdp_mmu.c      |  4 +--
 4 files changed, 75 insertions(+), 37 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b4a391929cdba..d544a269c1920 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1334,6 +1334,34 @@ enum kvm_apicv_inhibit {
 	__APICV_INHIBIT_REASON(SEV),			\
 	__APICV_INHIBIT_REASON(LOGICAL_ID_ALIASED)
 
+struct kvm_possible_nx_huge_pages {
+	/*
+	 * A list of kvm_mmu_page structs that, if zapped, could possibly be
+	 * replaced by an NX huge page. A shadow page is on this list if its
+	 * existence disallows an NX huge page (nx_huge_page_disallowed is set)
+	 * and there are no other conditions that prevent a huge page, e.g.
+	 * the backing host page is huge, dirty logging is not enabled for its
+	 * memslot, etc... Note, zapping shadow pages on this list doesn't
+	 * guarantee an NX huge page will be created in its stead, e.g. if the
+	 * guest attempts to execute from the region then KVM obviously can't
+	 * create an NX huge page (without hanging the guest).
+	 */
+	struct list_head pages;
+	u64 nr_pages;
+};
+
+enum kvm_mmu_type {
+	KVM_SHADOW_MMU,
+#ifdef CONFIG_X86_64
+	KVM_TDP_MMU,
+#endif
+	KVM_NR_MMU_TYPES,
+};
+
+#ifndef CONFIG_X86_64
+#define KVM_TDP_MMU -1
+#endif
+
 struct kvm_arch {
 	unsigned long n_used_mmu_pages;
 	unsigned long n_requested_mmu_pages;
@@ -1346,18 +1374,7 @@ struct kvm_arch {
 	bool pre_fault_allowed;
 	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
 	struct list_head active_mmu_pages;
-	/*
-	 * A list of kvm_mmu_page structs that, if zapped, could possibly be
-	 * replaced by an NX huge page. A shadow page is on this list if its
-	 * existence disallows an NX huge page (nx_huge_page_disallowed is set)
-	 * and there are no other conditions that prevent a huge page, e.g.
-	 * the backing host page is huge, dirty logging is not enabled for its
-	 * memslot, etc... Note, zapping shadow pages on this list doesn't
-	 * guarantee an NX huge page will be created in its stead, e.g. if the
-	 * guest attempts to execute from the region then KVM obviously can't
-	 * create an NX huge page (without hanging the guest).
-	 */
-	struct list_head possible_nx_huge_pages;
+	struct kvm_possible_nx_huge_pages possible_nx_huge_pages[KVM_NR_MMU_TYPES];
 #ifdef CONFIG_KVM_EXTERNAL_WRITE_TRACKING
 	struct kvm_page_track_notifier_head track_notifier_head;
 #endif
@@ -1516,7 +1533,7 @@ struct kvm_arch {
 	 * is held in read mode:
 	 *  - tdp_mmu_roots (above)
 	 *  - the link field of kvm_mmu_page structs used by the TDP MMU
-	 *  - possible_nx_huge_pages;
+	 *  - possible_nx_huge_pages[KVM_TDP_MMU];
 	 *  - the possible_nx_huge_page_link field of kvm_mmu_page structs used
 	 *    by the TDP MMU
 	 * Because the lock is only taken within the MMU lock, strictly
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4e06e2e89a8fa..f44d7f3acc179 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -65,9 +65,9 @@ int __read_mostly nx_huge_pages = -1;
 static uint __read_mostly nx_huge_pages_recovery_period_ms;
 #ifdef CONFIG_PREEMPT_RT
 /* Recovery can cause latency spikes, disable it for PREEMPT_RT. */
-static uint __read_mostly nx_huge_pages_recovery_ratio = 0;
+unsigned int __read_mostly nx_huge_pages_recovery_ratio;
 #else
-static uint __read_mostly nx_huge_pages_recovery_ratio = 60;
+unsigned int __read_mostly nx_huge_pages_recovery_ratio = 60;
 #endif
 
 static int get_nx_huge_pages(char *buffer, const struct kernel_param *kp);
@@ -776,7 +776,8 @@ static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
 		kvm_flush_remote_tlbs_gfn(kvm, gfn, PG_LEVEL_4K);
 }
 
-void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp)
+void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp,
+				 enum kvm_mmu_type mmu_type)
 {
 	/*
 	 * If it's possible to replace the shadow page with an NX huge page,
@@ -790,8 +791,9 @@ void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 		return;
 
 	++kvm->stat.nx_lpage_splits;
+	++kvm->arch.possible_nx_huge_pages[mmu_type].nr_pages;
 	list_add_tail(&sp->possible_nx_huge_page_link,
-		      &kvm->arch.possible_nx_huge_pages);
+		      &kvm->arch.possible_nx_huge_pages[mmu_type].pages);
 }
 
 static void account_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp,
@@ -800,7 +802,7 @@ static void account_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp,
 	sp->nx_huge_page_disallowed = true;
 
 	if (nx_huge_page_possible)
-		track_possible_nx_huge_page(kvm, sp);
+		track_possible_nx_huge_page(kvm, sp, KVM_SHADOW_MMU);
 }
 
 static void unaccount_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
@@ -819,12 +821,14 @@ static void unaccount_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
 		kvm_mmu_gfn_allow_lpage(slot, gfn);
 }
 
-void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp)
+void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp,
+				   enum kvm_mmu_type mmu_type)
 {
 	if (list_empty(&sp->possible_nx_huge_page_link))
 		return;
 
 	--kvm->stat.nx_lpage_splits;
+	--kvm->arch.possible_nx_huge_pages[mmu_type].nr_pages;
 	list_del_init(&sp->possible_nx_huge_page_link);
 }
 
@@ -832,7 +836,7 @@ static void unaccount_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	sp->nx_huge_page_disallowed = false;
 
-	untrack_possible_nx_huge_page(kvm, sp);
+	untrack_possible_nx_huge_page(kvm, sp, KVM_SHADOW_MMU);
 }
 
 static struct kvm_memory_slot *gfn_to_memslot_dirty_bitmap(struct kvm_vcpu *vcpu,
@@ -6684,9 +6688,12 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
 
 void kvm_mmu_init_vm(struct kvm *kvm)
 {
+	int i;
+
 	kvm->arch.shadow_mmio_value = shadow_mmio_value;
 	INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
-	INIT_LIST_HEAD(&kvm->arch.possible_nx_huge_pages);
+	for (i = 0; i < KVM_NR_MMU_TYPES; ++i)
+		INIT_LIST_HEAD(&kvm->arch.possible_nx_huge_pages[i].pages);
 	spin_lock_init(&kvm->arch.mmu_unsync_pages_lock);
 
 	if (tdp_mmu_enabled)
@@ -7519,16 +7526,27 @@ static int set_nx_huge_pages_recovery_param(const char *val, const struct kernel
 	return err;
 }
 
-static void kvm_recover_nx_huge_pages(struct kvm *kvm)
+static unsigned long nx_huge_pages_to_zap(struct kvm *kvm,
+					  enum kvm_mmu_type mmu_type)
+{
+	unsigned long pages = READ_ONCE(kvm->arch.possible_nx_huge_pages[mmu_type].nr_pages);
+	unsigned int ratio = READ_ONCE(nx_huge_pages_recovery_ratio);
+
+	return ratio ? DIV_ROUND_UP(pages, ratio) : 0;
+}
+
+static void kvm_recover_nx_huge_pages(struct kvm *kvm,
+				      enum kvm_mmu_type mmu_type)
 {
-	unsigned long nx_lpage_splits = kvm->stat.nx_lpage_splits;
+	unsigned long to_zap = nx_huge_pages_to_zap(kvm, mmu_type);
+	struct list_head *nx_huge_pages;
 	struct kvm_memory_slot *slot;
-	int rcu_idx;
 	struct kvm_mmu_page *sp;
-	unsigned int ratio;
 	LIST_HEAD(invalid_list);
 	bool flush = false;
-	ulong to_zap;
+	int rcu_idx;
+
+	nx_huge_pages = &kvm->arch.possible_nx_huge_pages[mmu_type].pages;
 
 	rcu_idx = srcu_read_lock(&kvm->srcu);
 	write_lock(&kvm->mmu_lock);
@@ -7540,10 +7558,8 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm)
 	 */
 	rcu_read_lock();
 
-	ratio = READ_ONCE(nx_huge_pages_recovery_ratio);
-	to_zap = ratio ? DIV_ROUND_UP(nx_lpage_splits, ratio) : 0;
 	for ( ; to_zap; --to_zap) {
-		if (list_empty(&kvm->arch.possible_nx_huge_pages))
+		if (list_empty(nx_huge_pages))
 			break;
 
 		/*
 		 * We use a separate list instead of just using active_mmu_pages
 		 * because the number of lpage_disallowed pages is expected to
 		 * be relatively small compared to the total number of shadow
 		 * pages. And because the TDP MMU doesn't use active_mmu_pages.
 		 */
-		sp = list_first_entry(&kvm->arch.possible_nx_huge_pages,
+		sp = list_first_entry(nx_huge_pages,
 				      struct kvm_mmu_page,
 				      possible_nx_huge_page_link);
 		WARN_ON_ONCE(!sp->nx_huge_page_disallowed);
@@ -7590,7 +7606,7 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm)
 
 		if (slot && kvm_slot_dirty_track_enabled(slot))
 			unaccount_nx_huge_page(kvm, sp);
-		else if (is_tdp_mmu_page(sp))
+		else if (mmu_type == KVM_TDP_MMU)
 			flush |= kvm_tdp_mmu_zap_sp(kvm, sp);
 		else
 			kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
@@ -7621,9 +7637,10 @@ static void kvm_nx_huge_page_recovery_worker_kill(void *data)
 static bool kvm_nx_huge_page_recovery_worker(void *data)
 {
 	struct kvm *kvm = data;
+	long remaining_time;
 	bool enabled;
 	uint period;
-	long remaining_time;
+	int i;
 
 	enabled = calc_nx_huge_pages_recovery_period(&period);
 	if (!enabled)
@@ -7638,7 +7655,8 @@ static bool kvm_nx_huge_page_recovery_worker(void *data)
 	}
 
 	__set_current_state(TASK_RUNNING);
-	kvm_recover_nx_huge_pages(kvm);
+	for (i = 0; i < KVM_NR_MMU_TYPES; ++i)
+		kvm_recover_nx_huge_pages(kvm, i);
 	kvm->arch.nx_huge_page_last = get_jiffies_64();
 	return true;
 }
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index db8f33e4de624..a8fd2de13f707 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -413,7 +413,10 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
 void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
 void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_level);
 
-void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
-void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
+void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp,
+				 enum kvm_mmu_type mmu_type);
+void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp,
+				   enum kvm_mmu_type mmu_type);
 
+extern unsigned int nx_huge_pages_recovery_ratio;
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 7f3d7229b2c1f..48b070f9f4e13 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -355,7 +355,7 @@ static void tdp_mmu_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 
 	spin_lock(&kvm->arch.tdp_mmu_pages_lock);
 	sp->nx_huge_page_disallowed = false;
-	untrack_possible_nx_huge_page(kvm, sp);
+	untrack_possible_nx_huge_page(kvm, sp, KVM_TDP_MMU);
 	spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
 }
 
@@ -1303,7 +1303,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	    fault->req_level >= iter.level) {
 		spin_lock(&kvm->arch.tdp_mmu_pages_lock);
 		if (sp->nx_huge_page_disallowed)
-			track_possible_nx_huge_page(kvm, sp);
+			track_possible_nx_huge_page(kvm, sp, KVM_TDP_MMU);
 		spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
 	}
 }
-- 
2.50.0.727.gbf7dc18ff4-goog
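The batch arithmetic that nx_huge_pages_to_zap() introduces above can be
sanity-checked in isolation. The following is a minimal userspace sketch,
not KVM code; pages_to_zap() mirrors the kernel's
"ratio ? DIV_ROUND_UP(pages, ratio) : 0" expression, and the values used
are illustrative only:

#include <stdio.h>

/*
 * Zap roughly 1/ratio of the tracked pages per recovery period,
 * rounding up; a ratio of 0 means recovery is disabled.
 */
static unsigned long pages_to_zap(unsigned long pages, unsigned int ratio)
{
	return ratio ? (pages + ratio - 1) / ratio : 0;
}

int main(void)
{
	printf("%lu\n", pages_to_zap(1000, 60));	/* 17: default ratio   */
	printf("%lu\n", pages_to_zap(59, 60));		/* 1: always rounds up */
	printf("%lu\n", pages_to_zap(1000, 0));		/* 0: recovery is off  */
	return 0;
}

Because each MMU type now has its own nr_pages count, the TDP MMU and the
shadow MMU each get their own 1/ratio batch per period, rather than one
batch derived from the combined nx_lpage_splits stat.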
From nobody Tue Oct 7 18:28:34 2025
Date: Mon, 7 Jul 2025 22:47:15 +0000
In-Reply-To: <20250707224720.4016504-1-jthoughton@google.com>
References: <20250707224720.4016504-1-jthoughton@google.com>
Message-ID: <20250707224720.4016504-3-jthoughton@google.com>
Subject: [PATCH v5 2/7] KVM: x86/mmu: Rename kvm_tdp_mmu_zap_sp() to better
 indicate its purpose
From: James Houghton
To: Paolo Bonzini, Sean Christopherson
Cc: Vipin Sharma, David Matlack, James Houghton, kvm@vger.kernel.org,
 linux-kernel@vger.kernel.org

From: Vipin Sharma

kvm_tdp_mmu_zap_sp() is only used for NX huge page recovery, so rename
it to kvm_tdp_mmu_zap_possible_nx_huge_page(). In a future commit, this
function will be changed to include logic specific to NX huge page
recovery.

Signed-off-by: Vipin Sharma
Signed-off-by: James Houghton
---
 arch/x86/kvm/mmu/mmu.c     | 2 +-
 arch/x86/kvm/mmu/tdp_mmu.c | 3 ++-
 arch/x86/kvm/mmu/tdp_mmu.h | 3 ++-
 3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f44d7f3acc179..b074f7bb5cc58 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -7607,7 +7607,7 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm,
 		if (slot && kvm_slot_dirty_track_enabled(slot))
 			unaccount_nx_huge_page(kvm, sp);
 		else if (mmu_type == KVM_TDP_MMU)
-			flush |= kvm_tdp_mmu_zap_sp(kvm, sp);
+			flush |= kvm_tdp_mmu_zap_possible_nx_huge_page(kvm, sp);
 		else
 			kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 48b070f9f4e13..19907eb04a9c4 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -925,7 +925,8 @@ static void tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root,
 	rcu_read_unlock();
 }
 
-bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
+bool kvm_tdp_mmu_zap_possible_nx_huge_page(struct kvm *kvm,
+					   struct kvm_mmu_page *sp)
 {
 	u64 old_spte;
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 52acf99d40a00..bd62977c9199e 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -64,7 +64,8 @@ static inline struct kvm_mmu_page *tdp_mmu_get_root(struct kvm_vcpu *vcpu,
 }
 
 bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool flush);
-bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp);
+bool kvm_tdp_mmu_zap_possible_nx_huge_page(struct kvm *kvm,
+					   struct kvm_mmu_page *sp);
 void kvm_tdp_mmu_zap_all(struct kvm *kvm);
 void kvm_tdp_mmu_invalidate_roots(struct kvm *kvm,
 				  enum kvm_tdp_mmu_root_types root_types);
-- 
2.50.0.727.gbf7dc18ff4-goog
From nobody Tue Oct 7 18:28:34 2025
Date: Mon, 7 Jul 2025 22:47:16 +0000
In-Reply-To: <20250707224720.4016504-1-jthoughton@google.com>
References: <20250707224720.4016504-1-jthoughton@google.com>
Message-ID: <20250707224720.4016504-4-jthoughton@google.com>
Subject: [PATCH v5 3/7] KVM: x86/mmu: Recover TDP MMU NX
 huge pages using MMU read lock
From: James Houghton
To: Paolo Bonzini, Sean Christopherson
Cc: Vipin Sharma, David Matlack, James Houghton, kvm@vger.kernel.org,
 linux-kernel@vger.kernel.org

From: Vipin Sharma

Use the MMU read lock to recover TDP MMU NX huge pages. Iterate over
the huge pages list under tdp_mmu_pages_lock protection and unaccount
each page before dropping the lock.

We must not zap an SPTE if:
- The SPTE is a root page.
- The SPTE does not point at the SP's page table.

If the SPTE does not point at the SP's page table, then something else
has changed the SPTE, so we cannot safely zap it.

Warn if zapping an SPTE fails while that SPTE still points at the same
page table; this should never happen.

There is always a race between dirty logging, vCPU faults, and NX huge
page recovery for backing a gfn by an NX huge page or an executable
small page. Unaccounting sooner during the list traversal increases the
window of that race. Functionally, it is okay, because accounting
doesn't protect against the iTLB multi-hit bug; it is there purely to
prevent KVM from bouncing a gfn between two page sizes. The only
downside is that a vCPU will end up doing more work tearing down all
the child SPTEs. This should be a very rare race.

Zapping under the MMU read lock unblocks vCPUs which are waiting for
the MMU read lock. This optimization is done to solve a guest jitter
issue on a Windows VM which was observing an increase in network
latency.

Suggested-by: Sean Christopherson
Signed-off-by: Vipin Sharma
Co-developed-by: James Houghton
Signed-off-by: James Houghton
---
 arch/x86/kvm/mmu/mmu.c     | 107 ++++++++++++++++++++++++-------------
 arch/x86/kvm/mmu/tdp_mmu.c |  42 ++++++++++++---
 2 files changed, 105 insertions(+), 44 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b074f7bb5cc58..7df1b4ead705b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -7535,12 +7535,40 @@ static unsigned long nx_huge_pages_to_zap(struct kvm *kvm,
 	return ratio ? DIV_ROUND_UP(pages, ratio) : 0;
 }
 
+static bool kvm_mmu_sp_dirty_logging_enabled(struct kvm *kvm,
+					     struct kvm_mmu_page *sp)
+{
+	struct kvm_memory_slot *slot = NULL;
+
+	/*
+	 * Since gfn_to_memslot() is relatively expensive, it helps to skip it
+	 * if the test cannot possibly return true. On the other hand, if any
+	 * memslot has logging enabled, chances are good that all of them do,
+	 * in which case unaccount_nx_huge_page() is much cheaper than zapping
+	 * the page.
+	 *
+	 * If a memslot update is in progress, reading an incorrect value of
+	 * kvm->nr_memslots_dirty_logging is not a problem: if it is becoming
+	 * zero, gfn_to_memslot() will be done unnecessarily; if it is becoming
+	 * nonzero, the page will be zapped unnecessarily. Either way, this
+	 * only affects efficiency in racy situations, and not correctness.
+	 */
+	if (atomic_read(&kvm->nr_memslots_dirty_logging)) {
+		struct kvm_memslots *slots;
+
+		slots = kvm_memslots_for_spte_role(kvm, sp->role);
+		slot = __gfn_to_memslot(slots, sp->gfn);
+		WARN_ON_ONCE(!slot);
+	}
+	return slot && kvm_slot_dirty_track_enabled(slot);
+}
+
 static void kvm_recover_nx_huge_pages(struct kvm *kvm,
-				      enum kvm_mmu_type mmu_type)
+				      const enum kvm_mmu_type mmu_type)
 {
 	unsigned long to_zap = nx_huge_pages_to_zap(kvm, mmu_type);
+	bool is_tdp_mmu = mmu_type == KVM_TDP_MMU;
 	struct list_head *nx_huge_pages;
-	struct kvm_memory_slot *slot;
 	struct kvm_mmu_page *sp;
 	LIST_HEAD(invalid_list);
 	bool flush = false;
@@ -7549,7 +7577,10 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm,
 	nx_huge_pages = &kvm->arch.possible_nx_huge_pages[mmu_type].pages;
 
 	rcu_idx = srcu_read_lock(&kvm->srcu);
-	write_lock(&kvm->mmu_lock);
+	if (is_tdp_mmu)
+		read_lock(&kvm->mmu_lock);
+	else
+		write_lock(&kvm->mmu_lock);
 
 	/*
 	 * Zapping TDP MMU shadow pages, including the remote TLB flush, must
@@ -7559,8 +7590,17 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm,
 	rcu_read_lock();
 
 	for ( ; to_zap; --to_zap) {
-		if (list_empty(nx_huge_pages))
+#ifdef CONFIG_X86_64
+		if (is_tdp_mmu)
+			spin_lock(&kvm->arch.tdp_mmu_pages_lock);
+#endif
+		if (list_empty(nx_huge_pages)) {
+#ifdef CONFIG_X86_64
+			if (is_tdp_mmu)
+				spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
+#endif
 			break;
+		}
 
 		/*
 		 * We use a separate list instead of just using active_mmu_pages
@@ -7575,50 +7615,40 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm,
 		WARN_ON_ONCE(!sp->nx_huge_page_disallowed);
 		WARN_ON_ONCE(!sp->role.direct);
 
+		unaccount_nx_huge_page(kvm, sp);
+
+#ifdef CONFIG_X86_64
+		if (is_tdp_mmu)
+			spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
+#endif
+
 		/*
-		 * Unaccount and do not attempt to recover any NX Huge Pages
-		 * that are being dirty tracked, as they would just be faulted
-		 * back in as 4KiB pages. The NX Huge Pages in this slot will be
-		 * recovered, along with all the other huge pages in the slot,
-		 * when dirty logging is disabled.
-		 *
-		 * Since gfn_to_memslot() is relatively expensive, it helps to
-		 * skip it if it the test cannot possibly return true. On the
-		 * other hand, if any memslot has logging enabled, chances are
-		 * good that all of them do, in which case unaccount_nx_huge_page()
-		 * is much cheaper than zapping the page.
-		 *
-		 * If a memslot update is in progress, reading an incorrect value
-		 * of kvm->nr_memslots_dirty_logging is not a problem: if it is
-		 * becoming zero, gfn_to_memslot() will be done unnecessarily; if
-		 * it is becoming nonzero, the page will be zapped unnecessarily.
-		 * Either way, this only affects efficiency in racy situations,
-		 * and not correctness.
+		 * Do not attempt to recover any NX Huge Pages that are being
+		 * dirty tracked, as they would just be faulted back in as 4KiB
+		 * pages. The NX Huge Pages in this slot will be recovered,
+		 * along with all the other huge pages in the slot, when dirty
+		 * logging is disabled.
 		 */
-		slot = NULL;
-		if (atomic_read(&kvm->nr_memslots_dirty_logging)) {
-			struct kvm_memslots *slots;
+		if (!kvm_mmu_sp_dirty_logging_enabled(kvm, sp)) {
+			if (is_tdp_mmu)
+				flush |= kvm_tdp_mmu_zap_possible_nx_huge_page(kvm, sp);
+			else
+				kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
 
-			slots = kvm_memslots_for_spte_role(kvm, sp->role);
-			slot = __gfn_to_memslot(slots, sp->gfn);
-			WARN_ON_ONCE(!slot);
 		}
 
-		if (slot && kvm_slot_dirty_track_enabled(slot))
-			unaccount_nx_huge_page(kvm, sp);
-		else if (mmu_type == KVM_TDP_MMU)
-			flush |= kvm_tdp_mmu_zap_possible_nx_huge_page(kvm, sp);
-		else
-			kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
 		WARN_ON_ONCE(sp->nx_huge_page_disallowed);
 
 		if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
 			kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush);
 			rcu_read_unlock();
 
-			cond_resched_rwlock_write(&kvm->mmu_lock);
-			flush = false;
+			if (is_tdp_mmu)
+				cond_resched_rwlock_read(&kvm->mmu_lock);
+			else
+				cond_resched_rwlock_write(&kvm->mmu_lock);
 
+			flush = false;
 			rcu_read_lock();
 		}
 	}
@@ -7626,7 +7656,10 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm,
 
 	rcu_read_unlock();
 
-	write_unlock(&kvm->mmu_lock);
+	if (is_tdp_mmu)
+		read_unlock(&kvm->mmu_lock);
+	else
+		write_unlock(&kvm->mmu_lock);
 	srcu_read_unlock(&kvm->srcu, rcu_idx);
 }
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 19907eb04a9c4..31d921705dee7 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -928,21 +928,49 @@ static void tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root,
 bool kvm_tdp_mmu_zap_possible_nx_huge_page(struct kvm *kvm,
 					   struct kvm_mmu_page *sp)
 {
-	u64 old_spte;
+	struct tdp_iter iter = {
+		.old_spte = sp->ptep ? kvm_tdp_mmu_read_spte(sp->ptep) : 0,
+		.sptep = sp->ptep,
+		.level = sp->role.level + 1,
+		.gfn = sp->gfn,
+		.as_id = kvm_mmu_page_as_id(sp),
+	};
+
+	lockdep_assert_held_read(&kvm->mmu_lock);
+
+	if (WARN_ON_ONCE(!is_tdp_mmu_page(sp)))
+		return false;
 
 	/*
-	 * This helper intentionally doesn't allow zapping a root shadow page,
-	 * which doesn't have a parent page table and thus no associated entry.
+	 * Root shadow pages don't have a parent page table and thus no
+	 * associated entry, but they can never be possible NX huge pages.
 	 */
 	if (WARN_ON_ONCE(!sp->ptep))
 		return false;
 
-	old_spte = kvm_tdp_mmu_read_spte(sp->ptep);
-	if (WARN_ON_ONCE(!is_shadow_present_pte(old_spte)))
+	/*
+	 * Since mmu_lock is held in read mode, it's possible another task has
+	 * already modified the SPTE. Zap the SPTE if and only if the SPTE
+	 * points at the SP's page table, as checking shadow-present isn't
+	 * sufficient, e.g. the SPTE could be replaced by a leaf SPTE, or even
+	 * another SP. Note, spte_to_child_pt() also checks that the SPTE is
+	 * shadow-present, i.e. guards against zapping a frozen SPTE.
+	 */
+	if ((tdp_ptep_t)sp->spt != spte_to_child_pt(iter.old_spte, iter.level))
 		return false;
 
-	tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte,
-			 SHADOW_NONPRESENT_VALUE, sp->gfn, sp->role.level + 1);
+	/*
+	 * If a different task modified the SPTE, then it should be impossible
+	 * for the SPTE to still be used for the to-be-zapped SP. Non-leaf
+	 * SPTEs don't have Dirty bits, KVM always sets the Accessed bit when
+	 * creating non-leaf SPTEs, and all other bits are immutable for non-
+	 * leaf SPTEs, i.e. the only legal operations for non-leaf SPTEs are
+	 * zapping and replacement.
+	 */
+	if (tdp_mmu_set_spte_atomic(kvm, &iter, SHADOW_NONPRESENT_VALUE)) {
+		WARN_ON_ONCE((tdp_ptep_t)sp->spt ==
+			     spte_to_child_pt(iter.old_spte, iter.level));
+		return false;
+	}
 
 	return true;
 }
-- 
2.50.0.727.gbf7dc18ff4-goog
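The heart of the read-lock scheme in this patch is "zap the SPTE only if
it still points at this SP's page table, and make the check and the zap
one atomic step". A userspace analogue of that pattern, using a C11
compare-exchange where the kernel uses tdp_mmu_set_spte_atomic(); the
names and values here are illustrative, not KVM's:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy "SPTE": 0 is non-present, anything else "points at" a child table. */
static _Atomic uint64_t spte;

/*
 * Zap iff the SPTE still references expected_child. Under a shared
 * (read) lock another task may have replaced the SPTE already, so a
 * plain read-then-write would race; the compare-exchange fails instead
 * of clobbering the other task's update.
 */
static bool zap_if_unchanged(uint64_t expected_child)
{
	uint64_t old = expected_child;

	return atomic_compare_exchange_strong(&spte, &old, 0);
}

int main(void)
{
	atomic_store(&spte, 0x1000);
	printf("%d\n", zap_if_unchanged(0x1000));	/* 1: zapped */
	atomic_store(&spte, 0x2000);			/* concurrently replaced */
	printf("%d\n", zap_if_unchanged(0x1000));	/* 0: left untouched */
	return 0;
}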
From nobody Tue Oct 7 18:28:34 2025
Date: Mon, 7 Jul 2025 22:47:17 +0000
In-Reply-To: <20250707224720.4016504-1-jthoughton@google.com>
References: <20250707224720.4016504-1-jthoughton@google.com>
Message-ID: <20250707224720.4016504-5-jthoughton@google.com>
Subject: [PATCH v5 4/7] KVM: x86/mmu: Only grab RCU lock for nx hugepage
 recovery for TDP MMU
From: James Houghton
To: Paolo Bonzini, Sean Christopherson
Cc: Vipin Sharma, David Matlack, James Houghton, kvm@vger.kernel.org,
 linux-kernel@vger.kernel.org

Now that we have separate paths for the TDP MMU, it is trivial to only
grab rcu_read_lock() for the TDP MMU case. We do not need to grab it
for the shadow MMU, as pages are not RCU-freed in that case.

Signed-off-by: James Houghton
---
 arch/x86/kvm/mmu/mmu.c | 33 ++++++++++++++++++---------------
 1 file changed, 18 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7df1b4ead705b..c8f7dd747d524 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -7577,17 +7577,18 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm,
 	nx_huge_pages = &kvm->arch.possible_nx_huge_pages[mmu_type].pages;
 
 	rcu_idx = srcu_read_lock(&kvm->srcu);
-	if (is_tdp_mmu)
+	if (is_tdp_mmu) {
 		read_lock(&kvm->mmu_lock);
-	else
+		/*
+		 * Zapping TDP MMU shadow pages, including the remote TLB
+		 * flush, must be done under RCU protection, because the
+		 * pages are freed via RCU callback.
+		 */
+		rcu_read_lock();
+	} else {
 		write_lock(&kvm->mmu_lock);
+	}
 
-	/*
-	 * Zapping TDP MMU shadow pages, including the remote TLB flush, must
-	 * be done under RCU protection, because the pages are freed via RCU
-	 * callback.
-	 */
-	rcu_read_lock();
 
 	for ( ; to_zap; --to_zap) {
 #ifdef CONFIG_X86_64
@@ -7641,25 +7642,27 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm,
 
 		if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
 			kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush);
-			rcu_read_unlock();
 
-			if (is_tdp_mmu)
+			if (is_tdp_mmu) {
+				rcu_read_unlock();
 				cond_resched_rwlock_read(&kvm->mmu_lock);
-			else
+				rcu_read_lock();
+			} else {
 				cond_resched_rwlock_write(&kvm->mmu_lock);
+			}
 
 			flush = false;
-			rcu_read_lock();
 		}
 	}
 	kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush);
 
-	rcu_read_unlock();
 
-	if (is_tdp_mmu)
+	if (is_tdp_mmu) {
+		rcu_read_unlock();
 		read_unlock(&kvm->mmu_lock);
-	else
+	} else {
 		write_unlock(&kvm->mmu_lock);
+	}
 	srcu_read_unlock(&kvm->srcu, rcu_idx);
 }
 
-- 
2.50.0.727.gbf7dc18ff4-goog
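The locking shape this patch leaves behind is easier to see without the
zapping logic. A userspace sketch with a pthread rwlock standing in for
mmu_lock and empty stubs standing in for RCU (kernel RCU has no direct
userspace equivalent here, so this only models the nesting discipline):

#include <pthread.h>
#include <sched.h>
#include <stdbool.h>

static pthread_rwlock_t mmu_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Stand-ins for rcu_read_lock()/rcu_read_unlock(). */
static void rcu_enter(void) { }
static void rcu_exit(void)  { }

static void recover(bool is_tdp_mmu, int batches)
{
	if (is_tdp_mmu) {
		pthread_rwlock_rdlock(&mmu_lock);
		rcu_enter();		/* only TDP MMU pages are RCU-freed */
	} else {
		pthread_rwlock_wrlock(&mmu_lock);	/* no RCU needed */
	}

	while (batches--) {
		/* ... zap one possible NX huge page ... */

		/* Reschedule point: leave RCU before possibly blocking. */
		if (is_tdp_mmu) {
			rcu_exit();
			sched_yield();	/* models cond_resched_rwlock_read() */
			rcu_enter();
		}
	}

	if (is_tdp_mmu)
		rcu_exit();		/* exit RCU before dropping the lock */
	pthread_rwlock_unlock(&mmu_lock);
}

int main(void)
{
	recover(true, 3);
	recover(false, 3);
	return 0;
}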
From nobody Tue Oct 7 18:28:34 2025
Date: Mon, 7 Jul 2025 22:47:18 +0000
In-Reply-To: <20250707224720.4016504-1-jthoughton@google.com>
References: <20250707224720.4016504-1-jthoughton@google.com>
Message-ID: <20250707224720.4016504-6-jthoughton@google.com>
Subject: [PATCH v5 5/7] KVM: selftests: Introduce a selftest to measure
 execution performance
From: James Houghton
To: Paolo Bonzini, Sean Christopherson
Cc: Vipin Sharma, David Matlack, James Houghton, kvm@vger.kernel.org,
 linux-kernel@vger.kernel.org

From: David Matlack

Introduce a new selftest, execute_perf_test, that uses the
perf_test_util framework to measure the performance of executing code
within a VM. This test is similar to the other perf_test_util-based
tests in that it spins up a variable number of vCPUs and runs them
concurrently, accessing memory.

In order to support execution, extend perf_test_util to populate guest
memory with return instructions rather than random garbage. This way
memory can be executed simply by calling it.

Currently only x86_64 supports execution, but other architectures can
easily be added by providing their own return instruction.
Signed-off-by: David Matlack
Signed-off-by: James Houghton
---
 tools/testing/selftests/kvm/Makefile.kvm      |   1 +
 .../testing/selftests/kvm/execute_perf_test.c | 199 ++++++++++++++++++
 .../testing/selftests/kvm/include/memstress.h |   4 +
 tools/testing/selftests/kvm/lib/memstress.c   |  25 ++-
 4 files changed, 227 insertions(+), 2 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/execute_perf_test.c

diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index 38b95998e1e6b..0dc435e944632 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -137,6 +137,7 @@ TEST_GEN_PROGS_x86 += x86/recalc_apic_map_test
 TEST_GEN_PROGS_x86 += access_tracking_perf_test
 TEST_GEN_PROGS_x86 += coalesced_io_test
 TEST_GEN_PROGS_x86 += dirty_log_perf_test
+TEST_GEN_PROGS_x86 += execute_perf_test
 TEST_GEN_PROGS_x86 += guest_memfd_test
 TEST_GEN_PROGS_x86 += hardware_disable_test
 TEST_GEN_PROGS_x86 += memslot_modification_stress_test
diff --git a/tools/testing/selftests/kvm/execute_perf_test.c b/tools/testing/selftests/kvm/execute_perf_test.c
new file mode 100644
index 0000000000000..f7cbfd8184497
--- /dev/null
+++ b/tools/testing/selftests/kvm/execute_perf_test.c
@@ -0,0 +1,199 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <inttypes.h>
+#include <limits.h>
+#include <pthread.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <time.h>
+
+#include "kvm_util.h"
+#include "test_util.h"
+#include "memstress.h"
+#include "guest_modes.h"
+#include "ucall_common.h"
+
+/* Global variable used to synchronize all of the vCPU threads. */
+static int iteration;
+
+/* Set to true when vCPU threads should exit. */
+static bool done;
+
+/* The iteration that was last completed by each vCPU. */
+static int vcpu_last_completed_iteration[KVM_MAX_VCPUS];
+
+/* Whether to overlap the regions of memory vCPUs access. */
+static bool overlap_memory_access;
+
+struct test_params {
+	/* The backing source for the region of memory. */
+	enum vm_mem_backing_src_type backing_src;
+
+	/* The amount of memory to allocate for each vCPU. */
+	uint64_t vcpu_memory_bytes;
+
+	/* The number of vCPUs to create in the VM. */
+	int nr_vcpus;
+
+	/* The number of execute iterations the test will run. */
+	int iterations;
+};
+
+static void assert_ucall(struct kvm_vcpu *vcpu, uint64_t expected_ucall)
+{
+	struct ucall uc = {};
+
+	TEST_ASSERT(expected_ucall == get_ucall(vcpu, &uc),
+		    "Guest exited unexpectedly (expected ucall %" PRIu64
+		    ", got %" PRIu64 ")",
+		    expected_ucall, uc.cmd);
+}
+
+static bool spin_wait_for_next_iteration(int *current_iteration)
+{
+	int last_iteration = *current_iteration;
+
+	do {
+		if (READ_ONCE(done))
+			return false;
+
+		*current_iteration = READ_ONCE(iteration);
+	} while (last_iteration == *current_iteration);
+
+	return true;
+}
+
+static void vcpu_thread_main(struct memstress_vcpu_args *vcpu_args)
+{
+	struct kvm_vcpu *vcpu = vcpu_args->vcpu;
+	int current_iteration = 0;
+
+	while (spin_wait_for_next_iteration(&current_iteration)) {
+		vcpu_run(vcpu);
+		assert_ucall(vcpu, UCALL_SYNC);
+		vcpu_last_completed_iteration[vcpu->id] = current_iteration;
+	}
+}
+
+static void spin_wait_for_vcpu(struct kvm_vcpu *vcpu, int target_iteration)
+{
+	while (READ_ONCE(vcpu_last_completed_iteration[vcpu->id]) !=
+	       target_iteration) {
+		continue;
+	}
+}
+
+static void run_iteration(struct kvm_vm *vm, const char *description)
+{
+	struct timespec ts_elapsed;
+	struct timespec ts_start;
+	struct kvm_vcpu *vcpu;
+	int next_iteration;
+
+	/* Kick off the vCPUs by incrementing iteration. */
+	next_iteration = ++iteration;
+
+	clock_gettime(CLOCK_MONOTONIC, &ts_start);
+
+	/* Wait for all vCPUs to finish the iteration. */
+	list_for_each_entry(vcpu, &vm->vcpus, list)
+		spin_wait_for_vcpu(vcpu, next_iteration);
+
+	ts_elapsed = timespec_elapsed(ts_start);
+	pr_info("%-30s: %ld.%09lds\n",
+		description, ts_elapsed.tv_sec, ts_elapsed.tv_nsec);
+}
+
+static void run_test(enum vm_guest_mode mode, void *arg)
+{
+	struct test_params *params = arg;
+	struct kvm_vm *vm;
+	int i;
+
+	vm = memstress_create_vm(mode, params->nr_vcpus,
+				 params->vcpu_memory_bytes, 1,
+				 params->backing_src, !overlap_memory_access);
+
+	memstress_start_vcpu_threads(params->nr_vcpus, vcpu_thread_main);
+
+	pr_info("\n");
+
+	memstress_set_write_percent(vm, 100);
+	run_iteration(vm, "Populating memory");
+
+	run_iteration(vm, "Writing to memory");
+
+	memstress_set_execute(vm, true);
+	for (i = 0; i < params->iterations; ++i)
+		run_iteration(vm, "Executing from memory");
+
+	/* Set done to signal the vCPU threads to exit */
+	done = true;
+
+	memstress_join_vcpu_threads(params->nr_vcpus);
+	memstress_destroy_vm(vm);
+}
+
+static void help(char *name)
+{
+	puts("");
+	printf("usage: %s [-h] [-m mode] [-b vcpu_bytes] [-v nr_vcpus] [-o] "
+	       "[-s mem_type] [-i iterations]\n",
+	       name);
+	puts("");
+	printf(" -h: Display this help message.");
+	guest_modes_help();
+	printf(" -b: specify the size of the memory region which should be\n"
+	       "     dirtied by each vCPU. e.g. 10M or 3G.\n"
+	       "     (default: 1G)\n");
+	printf(" -i: specify the number of iterations to execute from memory.\n");
+	printf(" -v: specify the number of vCPUs to run.\n");
+	printf(" -o: Overlap guest memory accesses instead of partitioning\n"
+	       "     them into a separate region of memory for each vCPU.\n");
+	backing_src_help("-s");
+	puts("");
+	exit(0);
+}
+
+int main(int argc, char *argv[])
+{
+	struct test_params params = {
+		.backing_src = DEFAULT_VM_MEM_SRC,
+		.vcpu_memory_bytes = DEFAULT_PER_VCPU_MEM_SIZE,
+		.nr_vcpus = 1,
+		.iterations = 1,
+	};
+	int opt;
+
+	guest_modes_append_default();
+
+	while ((opt = getopt(argc, argv, "hm:b:i:v:os:")) != -1) {
+		switch (opt) {
+		case 'm':
+			guest_modes_cmdline(optarg);
+			break;
+		case 'b':
+			params.vcpu_memory_bytes = parse_size(optarg);
+			break;
+		case 'i':
+			params.iterations = atoi(optarg);
+			break;
+		case 'v':
+			params.nr_vcpus = atoi(optarg);
+			break;
+		case 'o':
+			overlap_memory_access = true;
+			break;
+		case 's':
+			params.backing_src = parse_backing_src_type(optarg);
+			break;
+		case 'h':
+		default:
+			help(argv[0]);
+			break;
+		}
+	}
+
+	for_each_guest_mode(run_test, &params);
+
+	return 0;
+}
diff --git a/tools/testing/selftests/kvm/include/memstress.h b/tools/testing/selftests/kvm/include/memstress.h
index 9071eb6dea60a..ab2a0c05e3fd2 100644
--- a/tools/testing/selftests/kvm/include/memstress.h
+++ b/tools/testing/selftests/kvm/include/memstress.h
@@ -50,6 +50,9 @@ struct memstress_args {
 	/* Test is done, stop running vCPUs. */
 	bool stop_vcpus;
 
+	/* If vCPUs should execute from memory. */
+	bool execute;
+
 	struct memstress_vcpu_args vcpu_args[KVM_MAX_VCPUS];
 };
 
@@ -63,6 +66,7 @@ void memstress_destroy_vm(struct kvm_vm *vm);
 
 void memstress_set_write_percent(struct kvm_vm *vm, uint32_t write_percent);
 void memstress_set_random_access(struct kvm_vm *vm, bool random_access);
+void memstress_set_execute(struct kvm_vm *vm, bool execute);
 
 void memstress_start_vcpu_threads(int vcpus, void (*vcpu_fn)(struct memstress_vcpu_args *));
 void memstress_join_vcpu_threads(int vcpus);
diff --git a/tools/testing/selftests/kvm/lib/memstress.c b/tools/testing/selftests/kvm/lib/memstress.c
index 313277486a1de..49677742ec92d 100644
--- a/tools/testing/selftests/kvm/lib/memstress.c
+++ b/tools/testing/selftests/kvm/lib/memstress.c
@@ -40,6 +40,16 @@ static bool all_vcpu_threads_running;
 
 static struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
 
+/*
+ * When writing to guest memory, write the opcode for the `ret` instruction
+ * so that subsequent iterations can exercise instruction fetch by calling
+ * the memory.
+ *
+ * NOTE: Non-x86 architectures would need to use different values here to
+ * support execute.
+ */
+#define RETURN_OPCODE 0xC3
+
 /*
  * Continuously write to the first 8 bytes of each page in the
  * specified region.
@@ -75,8 +85,10 @@ void memstress_guest_code(uint32_t vcpu_idx)
 
 		addr = gva + (page * args->guest_page_size);
 
-		if (__guest_random_bool(&rand_state, args->write_percent))
-			*(uint64_t *)addr = 0x0123456789ABCDEF;
+		if (args->execute)
+			((void (*)(void)) addr)();
+		else if (__guest_random_bool(&rand_state, args->write_percent))
+			*(uint64_t *)addr = RETURN_OPCODE;
 		else
 			READ_ONCE(*(uint64_t *)addr);
 	}
@@ -259,6 +271,15 @@ void __weak memstress_setup_nested(struct kvm_vm *vm, int nr_vcpus, struct kvm_v
 	exit(KSFT_SKIP);
 }
 
+void memstress_set_execute(struct kvm_vm *vm, bool execute)
+{
+#ifndef __x86_64__
+	TEST_FAIL("Execute not supported on this architecture; see RETURN_OPCODE.");
+#endif
+	memstress_args.execute = execute;
+	sync_global_to_guest(vm, memstress_args);
+}
+
 static void *vcpu_thread_main(void *data)
 {
 	struct vcpu_thread *vcpu = data;
-- 
2.50.0.727.gbf7dc18ff4-goog
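The selftest's core trick, filling guest memory with `ret` (0xC3) and
then calling into it, can be demonstrated outside KVM in a few lines. A
standalone x86-64 Linux sketch, assuming the host permits an anonymous
mapping that is simultaneously writable and executable:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	/* One executable page, every byte a `ret`, as in memstress. */
	unsigned char *page = mmap(NULL, 4096,
				   PROT_READ | PROT_WRITE | PROT_EXEC,
				   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (page == MAP_FAILED)
		return 1;
	memset(page, 0xC3, 4096);

	/*
	 * Calling any offset executes a single `ret` and comes straight
	 * back, exercising instruction fetch on that page just like the
	 * guest's ((void (*)(void)) addr)() call above.
	 */
	((void (*)(void))page)();
	((void (*)(void))(page + 123))();
	puts("executed from data-filled page");
	return 0;
}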
From nobody Tue Oct 7 18:28:34 2025
Date: Mon, 7 Jul 2025 22:47:19 +0000
In-Reply-To: <20250707224720.4016504-1-jthoughton@google.com>
References: <20250707224720.4016504-1-jthoughton@google.com>
Message-ID: <20250707224720.4016504-7-jthoughton@google.com>
Subject: [PATCH v5 6/7] KVM: selftests: Provide extra mmap flags in vm_mem_add()
From: James Houghton
To: Paolo Bonzini, Sean Christopherson
Cc: Vipin Sharma, David Matlack, James Houghton, kvm@vger.kernel.org, linux-kernel@vger.kernel.org

The immediate application here is to allow selftests to pass MAP_POPULATE
(to measure fault time without also measuring the cost of allocating guest
memory).

Signed-off-by: James Houghton
---
 tools/testing/selftests/kvm/include/kvm_util.h |  3 ++-
 tools/testing/selftests/kvm/lib/kvm_util.c     | 15 +++++++++------
 .../kvm/x86/private_mem_conversions_test.c     |  2 +-
 3 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index bee65ca087217..4aafd5bf786e2 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -629,7 +629,8 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 				 uint32_t flags);
 void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
 		uint64_t guest_paddr, uint32_t slot, uint64_t npages,
-		uint32_t flags, int guest_memfd_fd, uint64_t guest_memfd_offset);
+		uint32_t flags, int guest_memfd_fd, uint64_t guest_memfd_offset,
+		int extra_mmap_flags);
 
 #ifndef vm_arch_has_protected_memory
 static inline bool vm_arch_has_protected_memory(struct kvm_vm *vm)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index a055343a7bf75..8157a0fd7f8b3 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -977,13 +977,15 @@ void vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flags
 /* FIXME: This thing needs to be ripped apart and rewritten. */
 void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
 		uint64_t guest_paddr, uint32_t slot, uint64_t npages,
-		uint32_t flags, int guest_memfd, uint64_t guest_memfd_offset)
+		uint32_t flags, int guest_memfd, uint64_t guest_memfd_offset,
+		int extra_mmap_flags)
 {
 	int ret;
 	struct userspace_mem_region *region;
 	size_t backing_src_pagesz = get_backing_src_pagesz(src_type);
 	size_t mem_size = npages * vm->page_size;
 	size_t alignment;
+	int mmap_flags;
 
 	TEST_REQUIRE_SET_USER_MEMORY_REGION2();
 
@@ -1066,9 +1068,11 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
 		region->fd = kvm_memfd_alloc(region->mmap_size,
 					     src_type == VM_MEM_SRC_SHARED_HUGETLB);
 
+	mmap_flags = vm_mem_backing_src_alias(src_type)->flag |
+		     extra_mmap_flags;
+
 	region->mmap_start = mmap(NULL, region->mmap_size,
-				  PROT_READ | PROT_WRITE,
-				  vm_mem_backing_src_alias(src_type)->flag,
+				  PROT_READ | PROT_WRITE, mmap_flags,
 				  region->fd, 0);
 	TEST_ASSERT(region->mmap_start != MAP_FAILED,
 		    __KVM_SYSCALL_ERROR("mmap()", (int)(unsigned long)MAP_FAILED));
@@ -1143,8 +1147,7 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
 	/* If shared memory, create an alias. */
 	if (region->fd >= 0) {
 		region->mmap_alias = mmap(NULL, region->mmap_size,
-					  PROT_READ | PROT_WRITE,
-					  vm_mem_backing_src_alias(src_type)->flag,
+					  PROT_READ | PROT_WRITE, mmap_flags,
 					  region->fd, 0);
 		TEST_ASSERT(region->mmap_alias != MAP_FAILED,
 			    __KVM_SYSCALL_ERROR("mmap()", (int)(unsigned long)MAP_FAILED));
@@ -1159,7 +1162,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 			uint64_t guest_paddr, uint32_t slot,
 			uint64_t npages, uint32_t flags)
 {
-	vm_mem_add(vm, src_type, guest_paddr, slot, npages, flags, -1, 0);
+	vm_mem_add(vm, src_type, guest_paddr, slot, npages, flags, -1, 0, 0);
 }
 
 /*
diff --git a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
index 82a8d88b5338e..637e9e57fce46 100644
--- a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
@@ -399,7 +399,7 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t
 	for (i = 0; i < nr_memslots; i++)
 		vm_mem_add(vm, src_type, BASE_DATA_GPA + slot_size * i,
 			   BASE_DATA_SLOT + i, slot_size / vm->page_size,
-			   KVM_MEM_GUEST_MEMFD, memfd, slot_size * i);
+			   KVM_MEM_GUEST_MEMFD, memfd, slot_size * i, 0);
 
 	for (i = 0; i < nr_vcpus; i++) {
 		uint64_t gpa = BASE_DATA_GPA + i * per_cpu_size;
-- 
2.50.0.727.gbf7dc18ff4-goog
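For readers unfamiliar with the flag: MAP_POPULATE asks the kernel to populate the mapping's page tables at mmap() time, so the first access to each page does not take a demand fault; this is exactly the allocation noise the next patch wants to keep out of its fault-latency measurement. A rough standalone sketch of the effect (illustrative only; names, the 256 MiB size, and the fixed 4096-byte page stride are arbitrary choices, not from the series):

#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* Time the first write to each page of an anonymous mapping. */
static uint64_t first_touch_ns(int extra_flags, size_t len)
{
	unsigned char *mem = mmap(NULL, len, PROT_READ | PROT_WRITE,
				  MAP_PRIVATE | MAP_ANONYMOUS | extra_flags,
				  -1, 0);
	uint64_t start, elapsed;
	size_t i;

	if (mem == MAP_FAILED)
		return 0;

	start = now_ns();
	/* One write per page: each demand-faults unless pre-populated. */
	for (i = 0; i < len; i += 4096)
		mem[i] = 1;
	elapsed = now_ns() - start;

	munmap(mem, len);
	return elapsed;
}

int main(void)
{
	size_t len = 256 << 20;	/* 256 MiB */

	printf("lazy touch:   %llu ns\n",
	       (unsigned long long)first_touch_ns(0, len));
	printf("MAP_POPULATE: %llu ns\n",
	       (unsigned long long)first_touch_ns(MAP_POPULATE, len));
	return 0;
}

The populated case should show the touch loop running close to memory speed, since the allocation work was pulled forward into mmap() itself.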
From nobody Tue Oct 7 18:28:34 2025
Date: Mon, 7 Jul 2025 22:47:20 +0000
In-Reply-To: <20250707224720.4016504-1-jthoughton@google.com>
References: <20250707224720.4016504-1-jthoughton@google.com>
Message-ID: <20250707224720.4016504-8-jthoughton@google.com>
Subject: [PATCH v5 7/7] KVM: selftests: Add an NX huge pages jitter test
From: James Houghton
To: Paolo Bonzini, Sean Christopherson
Cc: Vipin Sharma, David Matlack, James Houghton, kvm@vger.kernel.org, linux-kernel@vger.kernel.org

Add a test that checks how much NX huge page recovery affects vCPUs that
are faulting on pages not undergoing NX huge page recovery.

To do this, the test uses a single vCPU to touch all of guest memory,
switching between writing and executing after every 1G of guest memory.
Only the writes are timed. With this setup, while the guest is in the
middle of writing to a 1G region, NX huge page recovery (provided it is
configured to be aggressive enough) will start to recover huge pages in
the previous 1G region.

Signed-off-by: James Houghton
---
 tools/testing/selftests/kvm/Makefile.kvm      |   1 +
 .../kvm/x86/nx_huge_pages_perf_test.c         | 223 ++++++++++++++++++
 2 files changed, 224 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86/nx_huge_pages_perf_test.c

diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index 0dc435e944632..4b5be9f0bac5b 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -88,6 +88,7 @@ TEST_GEN_PROGS_x86 += x86/kvm_buslock_test
 TEST_GEN_PROGS_x86 += x86/monitor_mwait_test
 TEST_GEN_PROGS_x86 += x86/nested_emulation_test
 TEST_GEN_PROGS_x86 += x86/nested_exceptions_test
+TEST_GEN_PROGS_x86 += x86/nx_huge_pages_perf_test
 TEST_GEN_PROGS_x86 += x86/platform_info_test
 TEST_GEN_PROGS_x86 += x86/pmu_counters_test
 TEST_GEN_PROGS_x86 += x86/pmu_event_filter_test
diff --git a/tools/testing/selftests/kvm/x86/nx_huge_pages_perf_test.c b/tools/testing/selftests/kvm/x86/nx_huge_pages_perf_test.c
new file mode 100644
index 0000000000000..e33e913ec7dfa
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86/nx_huge_pages_perf_test.c
@@ -0,0 +1,223 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * nx_huge_pages_perf_test
+ *
+ * Copyright (C) 2025, Google LLC.
+ *
+ * Performance test for NX huge page recovery.
+ *
+ * This test checks for long faults on allocated pages when NX huge page
+ * recovery is taking place on pages mapped by the VM.
+ */
+#include <inttypes.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <time.h>
+#include <sys/mman.h>
+
+#include "test_util.h"
+
+#include "kvm_util.h"
+#include "processor.h"
+#include "ucall_common.h"
+
+/* Default guest test virtual memory offset */
+#define DEFAULT_GUEST_TEST_MEM		0xc0000000
+
+/* Default size (2GB) of the memory for testing */
+#define DEFAULT_TEST_MEM_SIZE		(2UL << 30)
+
+/*
+ * Guest virtual memory offset of the testing memory slot.
+ * Must not conflict with identity mapped test code.
+ */
+static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
+
+static struct kvm_vcpu *vcpu;
+
+struct test_params {
+	enum vm_mem_backing_src_type backing_src;
+	uint64_t memory_bytes;
+};
+
+struct guest_args {
+	uint64_t guest_page_size;
+	uint64_t pages;
+};
+
+static struct guest_args guest_args;
+
+#define RETURN_OPCODE 0xC3
+
+static void guest_code(int vcpu_idx)
+{
+	struct guest_args *args = &guest_args;
+	uint64_t page_size = args->guest_page_size;
+	uint64_t max_cycles = 0UL;
+	volatile char *gva;
+	uint64_t page;
+
+	for (page = 0; page < args->pages; ++page) {
+		gva = (volatile char *)guest_test_virt_mem + page * page_size;
+
+		/*
+		 * To time the jitter on all faults on pages that are not
+		 * undergoing NX huge page recovery, only execute on every
+		 * other 1G region, and only time the non-executing pass.
+		 */
+		if (page & (1UL << 18)) {
+			uint64_t tsc1, tsc2;
+
+			tsc1 = rdtsc();
+			*gva = 0;
+			tsc2 = rdtsc();
+
+			if (tsc2 - tsc1 > max_cycles)
+				max_cycles = tsc2 - tsc1;
+		} else {
+			*gva = RETURN_OPCODE;
+			((void (*)(void)) gva)();
+		}
+	}
+
+	GUEST_SYNC1(max_cycles);
+}
+
+struct kvm_vm *create_vm(uint64_t memory_bytes,
+			 enum vm_mem_backing_src_type backing_src)
+{
+	uint64_t backing_src_pagesz = get_backing_src_pagesz(backing_src);
+	struct guest_args *args = &guest_args;
+	uint64_t guest_num_pages;
+	uint64_t region_end_gfn;
+	uint64_t gpa, size;
+	struct kvm_vm *vm;
+
+	args->guest_page_size = getpagesize();
+
+	guest_num_pages = vm_adjust_num_guest_pages(VM_MODE_DEFAULT,
+				memory_bytes / args->guest_page_size);
+
+	TEST_ASSERT(memory_bytes % getpagesize() == 0,
+		    "Guest memory size is not host page size aligned.");
+
+	vm = __vm_create_with_one_vcpu(&vcpu, guest_num_pages, guest_code);
+
+	/* Put the test region at the top of guest physical memory. */
+	region_end_gfn = vm->max_gfn + 1;
+
+	/*
+	 * If there should be more memory in the guest test region than there
+	 * can be pages in the guest, it will definitely cause problems.
+	 */
+	TEST_ASSERT(guest_num_pages < region_end_gfn,
+		    "Requested more guest memory than address space allows.\n"
+		    "    guest pages: %" PRIx64 " max gfn: %" PRIx64
+		    " wss: %" PRIx64,
+		    guest_num_pages, region_end_gfn - 1, memory_bytes);
+
+	gpa = (region_end_gfn - guest_num_pages - 1) * args->guest_page_size;
+	gpa = align_down(gpa, backing_src_pagesz);
+
+	size = guest_num_pages * args->guest_page_size;
+	pr_info("guest physical test memory: [0x%lx, 0x%lx)\n",
+		gpa, gpa + size);
+
+	/*
+	 * Pass in MAP_POPULATE, because we are trying to test how long
+	 * we have to wait for a pending NX huge page recovery to take.
+	 * We do not want to also wait for GUP itself.
+	 */
+	vm_mem_add(vm, backing_src, gpa, 1,
+		   guest_num_pages, 0, -1, 0, MAP_POPULATE);
+
+	virt_map(vm, guest_test_virt_mem, gpa, guest_num_pages);
+
+	args->pages = guest_num_pages;
+
+	/* Export the shared variables to the guest. */
+	sync_global_to_guest(vm, guest_args);
+
+	return vm;
+}
+
+static void run_vcpu(struct kvm_vcpu *vcpu)
+{
+	struct timespec ts_elapsed;
+	struct timespec ts_start;
+	struct ucall uc = {};
+	int ret;
+
+	clock_gettime(CLOCK_MONOTONIC, &ts_start);
+
+	ret = _vcpu_run(vcpu);
+
+	ts_elapsed = timespec_elapsed(ts_start);
+
+	TEST_ASSERT(ret == 0, "vcpu_run failed: %d", ret);
+
+	TEST_ASSERT(get_ucall(vcpu, &uc) == UCALL_SYNC,
+		    "Invalid guest sync status: %" PRIu64, uc.cmd);
+
+	pr_info("Duration: %ld.%09lds\n",
+		ts_elapsed.tv_sec, ts_elapsed.tv_nsec);
+	pr_info("Max fault latency: %" PRIu64 " cycles\n", uc.args[0]);
+}
+
+static void run_test(struct test_params *params)
+{
+	/*
+	 * The fault + execute pattern in the guest relies on having more than
+	 * 1GiB to use.
+	 */
+	TEST_ASSERT(params->memory_bytes > PAGE_SIZE << 18,
+		    "Must use more than 1GiB of memory.");
+
+	create_vm(params->memory_bytes, params->backing_src);
+
+	pr_info("\n");
+
+	run_vcpu(vcpu);
+}
+
+static void help(char *name)
+{
+	puts("");
+	printf("usage: %s [-h] [-b bytes] [-s mem_type]\n", name);
+	puts("");
+	printf(" -h: Display this help message.\n");
+	printf(" -b: specify the size of the memory region which should be\n"
+	       "     dirtied by the guest. e.g. 2048M or 3G.\n"
+	       "     (default: 2G, must be greater than 1G)\n");
+	backing_src_help("-s");
+	puts("");
+	exit(0);
+}
+
+int main(int argc, char *argv[])
+{
+	struct test_params params = {
+		.backing_src = DEFAULT_VM_MEM_SRC,
+		.memory_bytes = DEFAULT_TEST_MEM_SIZE,
+	};
+	int opt;
+
+	while ((opt = getopt(argc, argv, "hb:s:")) != -1) {
+		switch (opt) {
+		case 'b':
+			params.memory_bytes = parse_size(optarg);
+			break;
+		case 's':
+			params.backing_src = parse_backing_src_type(optarg);
+			break;
+		case 'h':
+		default:
+			help(argv[0]);
+			break;
+		}
+	}
+
+	run_test(&params);
+}
-- 
2.50.0.727.gbf7dc18ff4-goog