Date: Wed, 18 Jun 2025 04:24:12 +0000
In-Reply-To:
<20250618042424.330664-1-jthoughton@google.com>
References: <20250618042424.330664-1-jthoughton@google.com>
Message-ID: <20250618042424.330664-4-jthoughton@google.com>
Subject: [PATCH v3 03/15] KVM: arm64: x86: Require "struct kvm_page_fault" for memory fault exits
From: James Houghton
To: Paolo Bonzini, Sean Christopherson, Oliver Upton
Cc: Jonathan Corbet, Marc Zyngier, Yan Zhao, James Houghton, Nikita Kalyazin, Anish Moorthy, Peter Gonda, Peter Xu, David Matlack, wei.w.wang@intel.com, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev

From: Sean Christopherson

Now that both arm64 and x86 define "struct kvm_page_fault" with a base
set of fields, rework kvm_prepare_memory_fault_exit() to take a
kvm_page_fault structure instead of passing in a pile of parameters.
Guard the related code with CONFIG_KVM_GENERIC_PAGE_FAULT to play nice
with architectures that don't yet support kvm_page_fault.

Rather than define a common kvm_page_fault and kvm_arch_page_fault
child, simply assert that the handful of required fields are provided
by the arch-defined structure.  Unlike vCPUs and VMs, the number of
common fields is expected to be small, and letting arch code fully
define the structure allows for maximum flexibility with respect to
const, layout, etc.

No functional change intended.
Signed-off-by: Sean Christopherson
Signed-off-by: James Houghton
---
 arch/arm64/kvm/Kconfig          |  1 +
 arch/x86/kvm/Kconfig            |  1 +
 arch/x86/kvm/mmu/mmu.c          |  8 ++++----
 arch/x86/kvm/mmu/mmu_internal.h | 10 +---------
 include/linux/kvm_host.h        | 26 ++++++++++++++++++++------
 virt/kvm/Kconfig                |  3 +++
 6 files changed, 30 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 713248f240e03..3c299735b1668 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -37,6 +37,7 @@ menuconfig KVM
 	select HAVE_KVM_VCPU_RUN_PID_CHANGE
 	select SCHED_INFO
 	select GUEST_PERF_EVENTS if PERF_EVENTS
+	select KVM_GENERIC_PAGE_FAULT
 	help
 	  Support hosting virtualized guest machines.

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index 2eeffcec53828..2d5966f15738d 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -48,6 +48,7 @@ config KVM_X86
 	select KVM_GENERIC_PRE_FAULT_MEMORY
 	select KVM_GENERIC_PRIVATE_MEM if KVM_SW_PROTECTED_VM
 	select KVM_WERROR if WERROR
+	select KVM_GENERIC_PAGE_FAULT

 config KVM
 	tristate "Kernel-based Virtual Machine (KVM) support"
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index cbc84c6abc2e3..a4439e9e07268 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3429,7 +3429,7 @@ static int kvm_handle_noslot_fault(struct kvm_vcpu *vcpu,
 	gva_t gva = fault->is_tdp ? 0 : fault->addr;

 	if (fault->is_private) {
-		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
+		kvm_prepare_memory_fault_exit(vcpu, fault);
 		return -EFAULT;
 	}

@@ -4499,14 +4499,14 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
 	int max_order, r;

 	if (!kvm_slot_can_be_private(fault->slot)) {
-		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
+		kvm_prepare_memory_fault_exit(vcpu, fault);
 		return -EFAULT;
 	}

 	r = kvm_gmem_get_pfn(vcpu->kvm, fault->slot, fault->gfn, &fault->pfn,
 			     &fault->refcounted_page, &max_order);
 	if (r) {
-		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
+		kvm_prepare_memory_fault_exit(vcpu, fault);
 		return r;
 	}

@@ -4586,7 +4586,7 @@ static int kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
 	 * private vs. shared mismatch.
 	 */
 	if (fault->is_private != kvm_mem_is_private(kvm, fault->gfn)) {
-		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
+		kvm_prepare_memory_fault_exit(vcpu, fault);
 		return -EFAULT;
 	}

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 384fc4d0bfec0..c15060ed6e8be 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -269,14 +269,6 @@ enum {
  */
 static_assert(RET_PF_CONTINUE == 0);

-static inline void kvm_mmu_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
-						     struct kvm_page_fault *fault)
-{
-	kvm_prepare_memory_fault_exit(vcpu, fault->gfn << PAGE_SHIFT,
-				      PAGE_SIZE, fault->write, fault->exec,
-				      fault->is_private);
-}
-
 static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 					u64 err, bool prefetch, int *emulation_type, u8 *level)
@@ -329,7 +321,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 	 */
 	if (r == RET_PF_EMULATE && fault.is_private) {
 		pr_warn_ratelimited("kvm: unexpected emulation request on private memory\n");
-		kvm_mmu_prepare_memory_fault_exit(vcpu, &fault);
+		kvm_prepare_memory_fault_exit(vcpu, &fault);
 		return -EFAULT;
 	}

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 3bde4fb5c6aa4..9a85500cd5c50 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2497,20 +2497,34 @@ static inline void kvm_account_pgtable_pages(void *virt, int nr)
 /* Max number of entries allowed for each kvm dirty ring */
 #define KVM_DIRTY_RING_MAX_ENTRIES  65536

+#ifdef CONFIG_KVM_GENERIC_PAGE_FAULT
+
+#define KVM_ASSERT_TYPE_IS(type_t, x)					\
+do {									\
+	type_t __maybe_unused tmp;					\
+									\
+	BUILD_BUG_ON(!__types_ok(tmp, x) || !__typecheck(tmp, x));	\
+} while (0)
+
 static inline void kvm_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
-						 gpa_t gpa, gpa_t size,
-						 bool is_write, bool is_exec,
-						 bool is_private)
+						 struct kvm_page_fault *fault)
 {
+	KVM_ASSERT_TYPE_IS(gfn_t, fault->gfn);
+	KVM_ASSERT_TYPE_IS(bool, fault->exec);
+	KVM_ASSERT_TYPE_IS(bool, fault->write);
+	KVM_ASSERT_TYPE_IS(bool, fault->is_private);
+	KVM_ASSERT_TYPE_IS(struct kvm_memory_slot *, fault->slot);
+
 	vcpu->run->exit_reason = KVM_EXIT_MEMORY_FAULT;
-	vcpu->run->memory_fault.gpa = gpa;
-	vcpu->run->memory_fault.size = size;
+	vcpu->run->memory_fault.gpa = fault->gfn << PAGE_SHIFT;
+	vcpu->run->memory_fault.size = PAGE_SIZE;

 	/* RWX flags are not (yet) defined or communicated to userspace. */
 	vcpu->run->memory_fault.flags = 0;
-	if (is_private)
+	if (fault->is_private)
 		vcpu->run->memory_fault.flags |= KVM_MEMORY_EXIT_FLAG_PRIVATE;
 }
+#endif

 #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
 static inline unsigned long kvm_get_memory_attributes(struct kvm *kvm, gfn_t gfn)
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 727b542074e7e..28ed6b241578b 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -128,3 +128,6 @@ config HAVE_KVM_ARCH_GMEM_PREPARE
 config HAVE_KVM_ARCH_GMEM_INVALIDATE
 	bool
 	depends on KVM_PRIVATE_MEM
+
+config KVM_GENERIC_PAGE_FAULT
+	bool
-- 
2.50.0.rc2.692.g299adb8693-goog