From: Paolo Bonzini
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson, Alexander Viro, Christian Brauner, "Matthew Wilcox (Oracle)", Andrew Morton
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Xiaoyao Li, Xu Yilun, Chao Peng, Fuad Tabba, Jarkko Sakkinen, Anish Moorthy, David Matlack, Yu Zhang, Isaku Yamahata, Mickaël Salaün, Vlastimil Babka, Vishal Annapurve, Ackerley Tng, Maciej Szmigiero, David Hildenbrand, Quentin Perret, Michael Roth, Wang, Liam Merwick, Isaku Yamahata, "Kirill A. Shutemov"
Shutemov" Subject: [PATCH 01/34] KVM: Tweak kvm_hva_range and hva_handler_t to allow reusing for gfn ranges Date: Sun, 5 Nov 2023 17:30:04 +0100 Message-ID: <20231105163040.14904-2-pbonzini@redhat.com> In-Reply-To: <20231105163040.14904-1-pbonzini@redhat.com> References: <20231105163040.14904-1-pbonzini@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.6 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Sean Christopherson Rework and rename "struct kvm_hva_range" into "kvm_mmu_notifier_range" so that the structure can be used to handle notifications that operate on gfn context, i.e. that aren't tied to a host virtual address. Rename the handler typedef too (arguably it should always have been gfn_handler_t). Practically speaking, this is a nop for 64-bit kernels as the only meaningful change is to store start+end as u64s instead of unsigned longs. Reviewed-by: Paolo Bonzini Reviewed-by: Xiaoyao Li Signed-off-by: Sean Christopherson Reviewed-by: Fuad Tabba Tested-by: Fuad Tabba Message-Id: <20231027182217.3615211-2-seanjc@google.com> Signed-off-by: Paolo Bonzini Reviewed-by: Kai Huang --- virt/kvm/kvm_main.c | 34 +++++++++++++++++++--------------- 1 file changed, 19 insertions(+), 15 deletions(-) diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 486800a7024b..0524933856d4 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -541,18 +541,22 @@ static inline struct kvm *mmu_notifier_to_kvm(struct = mmu_notifier *mn) return container_of(mn, struct kvm, mmu_notifier); } =20 -typedef bool (*hva_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range= ); +typedef bool (*gfn_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range= ); =20 typedef void (*on_lock_fn_t)(struct kvm *kvm, unsigned long start, unsigned long end); =20 typedef void (*on_unlock_fn_t)(struct kvm *kvm); =20 -struct kvm_hva_range { - unsigned long start; - unsigned long end; +struct kvm_mmu_notifier_range { + /* + * 64-bit addresses, as KVM notifiers can operate on host virtual + * addresses (unsigned long) and guest physical addresses (64-bit). 
+ */ + u64 start; + u64 end; union kvm_mmu_notifier_arg arg; - hva_handler_t handler; + gfn_handler_t handler; on_lock_fn_t on_lock; on_unlock_fn_t on_unlock; bool flush_on_ret; @@ -581,7 +585,7 @@ static const union kvm_mmu_notifier_arg KVM_MMU_NOTIFIE= R_NO_ARG; node =3D interval_tree_iter_next(node, start, last)) \ =20 static __always_inline int __kvm_handle_hva_range(struct kvm *kvm, - const struct kvm_hva_range *range) + const struct kvm_mmu_notifier_range *range) { bool ret =3D false, locked =3D false; struct kvm_gfn_range gfn_range; @@ -608,9 +612,9 @@ static __always_inline int __kvm_handle_hva_range(struc= t kvm *kvm, unsigned long hva_start, hva_end; =20 slot =3D container_of(node, struct kvm_memory_slot, hva_node[slots->nod= e_idx]); - hva_start =3D max(range->start, slot->userspace_addr); - hva_end =3D min(range->end, slot->userspace_addr + - (slot->npages << PAGE_SHIFT)); + hva_start =3D max_t(unsigned long, range->start, slot->userspace_addr); + hva_end =3D min_t(unsigned long, range->end, + slot->userspace_addr + (slot->npages << PAGE_SHIFT)); =20 /* * To optimize for the likely case where the address @@ -660,10 +664,10 @@ static __always_inline int kvm_handle_hva_range(struc= t mmu_notifier *mn, unsigned long start, unsigned long end, union kvm_mmu_notifier_arg arg, - hva_handler_t handler) + gfn_handler_t handler) { struct kvm *kvm =3D mmu_notifier_to_kvm(mn); - const struct kvm_hva_range range =3D { + const struct kvm_mmu_notifier_range range =3D { .start =3D start, .end =3D end, .arg =3D arg, @@ -680,10 +684,10 @@ static __always_inline int kvm_handle_hva_range(struc= t mmu_notifier *mn, static __always_inline int kvm_handle_hva_range_no_flush(struct mmu_notifi= er *mn, unsigned long start, unsigned long end, - hva_handler_t handler) + gfn_handler_t handler) { struct kvm *kvm =3D mmu_notifier_to_kvm(mn); - const struct kvm_hva_range range =3D { + const struct kvm_mmu_notifier_range range =3D { .start =3D start, .end =3D end, .handler =3D handler, @@ -771,7 +775,7 @@ static int kvm_mmu_notifier_invalidate_range_start(stru= ct mmu_notifier *mn, const struct mmu_notifier_range *range) { struct kvm *kvm =3D mmu_notifier_to_kvm(mn); - const struct kvm_hva_range hva_range =3D { + const struct kvm_mmu_notifier_range hva_range =3D { .start =3D range->start, .end =3D range->end, .handler =3D kvm_unmap_gfn_range, @@ -835,7 +839,7 @@ static void kvm_mmu_notifier_invalidate_range_end(struc= t mmu_notifier *mn, const struct mmu_notifier_range *range) { struct kvm *kvm =3D mmu_notifier_to_kvm(mn); - const struct kvm_hva_range hva_range =3D { + const struct kvm_mmu_notifier_range hva_range =3D { .start =3D range->start, .end =3D range->end, .handler =3D (void *)kvm_null_fn, --=20 2.39.1
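
For readers who just want the end state, below is a minimal, standalone sketch of the renamed structure and handler typedef as they look after this patch. It is reconstructed from the hunks above, not copied from the tree: the stub definitions of u64, struct kvm, struct kvm_gfn_range and union kvm_mmu_notifier_arg (including its placeholder member) are illustrative stand-ins for the real kernel types, and anything past flush_on_ret is omitted because the diff does not show it.

/*
 * Illustrative sketch only (not the kernel source): mirrors the
 * post-patch layout shown in the diff above, with stub types standing
 * in for the real definitions in kvm_main.c / kvm_host.h.
 */
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t u64;			/* stand-in for the kernel's u64 */

struct kvm;				/* opaque stand-in */
struct kvm_gfn_range;			/* opaque stand-in */

union kvm_mmu_notifier_arg {
	unsigned long placeholder;	/* hypothetical member; the real union is defined elsewhere */
};

/* Renamed from hva_handler_t: handlers are not inherently HVA-based. */
typedef bool (*gfn_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range);

typedef void (*on_lock_fn_t)(struct kvm *kvm, unsigned long start,
			     unsigned long end);
typedef void (*on_unlock_fn_t)(struct kvm *kvm);

/* Renamed from struct kvm_hva_range. */
struct kvm_mmu_notifier_range {
	/*
	 * start/end are u64 rather than unsigned long so the same
	 * structure can describe host virtual address ranges and
	 * 64-bit guest physical address ranges.
	 */
	u64 start;
	u64 end;
	union kvm_mmu_notifier_arg arg;
	gfn_handler_t handler;
	on_lock_fn_t on_lock;
	on_unlock_fn_t on_unlock;
	bool flush_on_ret;
	/* remaining fields omitted; the hunks above stop here */
};

As the commit message notes, the wider start/end is a nop in practice on 64-bit kernels, where unsigned long is already 64 bits; the HVA-based callers clamp back to unsigned long via max_t()/min_t(), as the third hunk shows.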