From: Sean Christopherson
Reply-To: Sean Christopherson
Date: Fri, 27 Oct 2023 11:21:45 -0700
Subject: [PATCH v13 03/35] KVM: Use gfn instead of hva for mmu_notifier_retry
Message-ID: <20231027182217.3615211-4-seanjc@google.com>
In-Reply-To: <20231027182217.3615211-1-seanjc@google.com>
References: <20231027182217.3615211-1-seanjc@google.com>
X-Mailer: git-send-email 2.42.0.820.g83a721a137-goog
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Huacai Chen,
    Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt,
    Albert Ou, Sean Christopherson, Alexander Viro, Christian Brauner,
    "Matthew Wilcox (Oracle)", Andrew Morton
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev,
    linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Xiaoyao Li, Xu Yilun, Chao Peng,
    Fuad Tabba, Jarkko Sakkinen, Anish Moorthy, David Matlack, Yu Zhang,
    Isaku Yamahata, Mickaël Salaün, Vlastimil Babka, Vishal Annapurve,
    Ackerley Tng, Maciej Szmigiero, David Hildenbrand, Quentin Perret,
    Michael Roth, Wang, Liam Merwick, Isaku Yamahata, Kirill A. Shutemov

From: Chao Peng

Currently, the mmu_notifier invalidate path records an hva range, which
mmu_invalidate_retry_hva() then checks against in the page fault
handling path.  However, a page fault on the soon-to-be-introduced
private memory may not have an associated hva, so checking the gfn
(gpa) makes more sense.  For existing hva-based shared memory, the gfn
is expected to work just as well.  The only downside is that when
multiple gfns alias a single hva, the current algorithm of checking
multiple ranges could result in a much larger range being rejected.
Such aliasing should be uncommon, so the impact is expected to be
small.

Suggested-by: Sean Christopherson
Cc: Xu Yilun
Signed-off-by: Chao Peng
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
[sean: convert vmx_set_apic_access_page_addr() to gfn-based API]
Signed-off-by: Sean Christopherson
Reviewed-by: Paolo Bonzini
---
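For context, the consumer side of this conversion follows the pattern
sketched below.  This is a simplified illustration of the x86 fault
path, not literal code from this patch: the pfn lookup and most error
handling are elided, "fault" is the usual struct kvm_page_fault, and
the write-lock variant is shown.

	/*
	 * Snapshot the invalidation sequence count before looking up
	 * the pfn, then recheck under mmu_lock before installing the
	 * SPTE.
	 */
	fault->mmu_seq = vcpu->kvm->mmu_invalidate_seq;
	smp_rmb();	/* order the sequence read before the pfn lookup */

	/* ... resolve fault->pfn (and fault->hva, if any) unlocked ... */

	write_lock(&vcpu->kvm->mmu_lock);

	/* The retry check is now keyed on fault->gfn, not fault->hva. */
	if (mmu_invalidate_retry_gfn(vcpu->kvm, fault->mmu_seq, fault->gfn)) {
		/* An overlapping invalidation ran; drop the pfn and retry. */
		write_unlock(&vcpu->kvm->mmu_lock);
		return RET_PF_RETRY;
	}

	/* ... install the SPTE and unlock ... */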
 arch/x86/kvm/mmu/mmu.c   | 10 ++++++----
 arch/x86/kvm/vmx/vmx.c   | 11 +++++------
 include/linux/kvm_host.h | 33 ++++++++++++++++++++------------
 virt/kvm/kvm_main.c      | 41 +++++++++++++++++++++++++++++++---------
 4 files changed, 64 insertions(+), 31 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f7901cb4d2fa..d33657d61d80 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3056,7 +3056,7 @@ static void direct_pte_prefetch(struct kvm_vcpu *vcpu, u64 *sptep)
  *
  * There are several ways to safely use this helper:
  *
- * - Check mmu_invalidate_retry_hva() after grabbing the mapping level, before
+ * - Check mmu_invalidate_retry_gfn() after grabbing the mapping level, before
  *   consuming it.  In this case, mmu_lock doesn't need to be held during the
  *   lookup, but it does need to be held while checking the MMU notifier.
  *
@@ -4358,7 +4358,7 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
 		return true;
 
 	return fault->slot &&
-	       mmu_invalidate_retry_hva(vcpu->kvm, fault->mmu_seq, fault->hva);
+	       mmu_invalidate_retry_gfn(vcpu->kvm, fault->mmu_seq, fault->gfn);
 }
 
 static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
@@ -6245,7 +6245,9 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 
 	write_lock(&kvm->mmu_lock);
 
-	kvm_mmu_invalidate_begin(kvm, 0, -1ul);
+	kvm_mmu_invalidate_begin(kvm);
+
+	kvm_mmu_invalidate_range_add(kvm, gfn_start, gfn_end);
 
 	flush = kvm_rmap_zap_gfn_range(kvm, gfn_start, gfn_end);
 
@@ -6255,7 +6257,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 	if (flush)
 		kvm_flush_remote_tlbs_range(kvm, gfn_start, gfn_end - gfn_start);
 
-	kvm_mmu_invalidate_end(kvm, 0, -1ul);
+	kvm_mmu_invalidate_end(kvm);
 
 	write_unlock(&kvm->mmu_lock);
 }
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 72e3943f3693..6e502ba93141 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6757,10 +6757,10 @@ static void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu)
 		return;
 
 	/*
-	 * Grab the memslot so that the hva lookup for the mmu_notifier retry
-	 * is guaranteed to use the same memslot as the pfn lookup, i.e. rely
-	 * on the pfn lookup's validation of the memslot to ensure a valid hva
-	 * is used for the retry check.
+	 * Explicitly grab the memslot using KVM's internal slot ID to ensure
+	 * KVM doesn't unintentionally grab a userspace memslot.  It _should_
+	 * be impossible for userspace to create a memslot for the APIC when
+	 * APICv is enabled, but paranoia won't hurt in this case.
 	 */
 	slot = id_to_memslot(slots, APIC_ACCESS_PAGE_PRIVATE_MEMSLOT);
 	if (!slot || slot->flags & KVM_MEMSLOT_INVALID)
@@ -6785,8 +6785,7 @@ static void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu)
 		return;
 
 	read_lock(&vcpu->kvm->mmu_lock);
-	if (mmu_invalidate_retry_hva(kvm, mmu_seq,
-				     gfn_to_hva_memslot(slot, gfn))) {
+	if (mmu_invalidate_retry_gfn(kvm, mmu_seq, gfn)) {
 		kvm_make_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu);
 		read_unlock(&vcpu->kvm->mmu_lock);
 		goto out;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index fb6c6109fdca..11d091688346 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -787,8 +787,8 @@ struct kvm {
 	struct mmu_notifier mmu_notifier;
 	unsigned long mmu_invalidate_seq;
 	long mmu_invalidate_in_progress;
-	unsigned long mmu_invalidate_range_start;
-	unsigned long mmu_invalidate_range_end;
+	gfn_t mmu_invalidate_range_start;
+	gfn_t mmu_invalidate_range_end;
 #endif
 	struct list_head devices;
 	u64 manual_dirty_log_protect;
@@ -1392,10 +1392,9 @@ void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc);
 void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 #endif
 
-void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start,
-			      unsigned long end);
-void kvm_mmu_invalidate_end(struct kvm *kvm, unsigned long start,
-			    unsigned long end);
+void kvm_mmu_invalidate_begin(struct kvm *kvm);
+void kvm_mmu_invalidate_range_add(struct kvm *kvm, gfn_t start, gfn_t end);
+void kvm_mmu_invalidate_end(struct kvm *kvm);
 
 long kvm_arch_dev_ioctl(struct file *filp,
 			unsigned int ioctl, unsigned long arg);
@@ -1970,9 +1969,9 @@ static inline int mmu_invalidate_retry(struct kvm *kvm, unsigned long mmu_seq)
 	return 0;
 }
 
-static inline int mmu_invalidate_retry_hva(struct kvm *kvm,
+static inline int mmu_invalidate_retry_gfn(struct kvm *kvm,
 					   unsigned long mmu_seq,
-					   unsigned long hva)
+					   gfn_t gfn)
 {
 	lockdep_assert_held(&kvm->mmu_lock);
 	/*
@@ -1981,10 +1980,20 @@ static inline int mmu_invalidate_retry_hva(struct kvm *kvm,
 	 * that might be being invalidated.  Note that it may include some false
 	 * positives, due to shortcuts when handling concurrent invalidations.
 	 */
-	if (unlikely(kvm->mmu_invalidate_in_progress) &&
-	    hva >= kvm->mmu_invalidate_range_start &&
-	    hva < kvm->mmu_invalidate_range_end)
-		return 1;
+	if (unlikely(kvm->mmu_invalidate_in_progress)) {
+		/*
+		 * Dropping mmu_lock after bumping mmu_invalidate_in_progress
+		 * but before updating the range is a KVM bug.
+		 */
+		if (WARN_ON_ONCE(kvm->mmu_invalidate_range_start == INVALID_GPA ||
+				 kvm->mmu_invalidate_range_end == INVALID_GPA))
+			return 1;
+
+		if (gfn >= kvm->mmu_invalidate_range_start &&
+		    gfn < kvm->mmu_invalidate_range_end)
+			return 1;
+	}
+
 	if (kvm->mmu_invalidate_seq != mmu_seq)
 		return 1;
 	return 0;
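A concrete illustration of the "much larger range" caveat from the
changelog: when more than one range is added during a single
invalidation window, the pre-existing accumulation logic (unchanged,
and mostly elided as context in the kvm_main.c hunks below) widens the
recorded range to the min/max hull of all inputs rather than their
union.  A standalone worked example with made-up gfn values, not code
from this patch:

	#include <stdio.h>

	typedef unsigned long long gfn_t;

	static gfn_t min_gfn(gfn_t a, gfn_t b) { return a < b ? a : b; }
	static gfn_t max_gfn(gfn_t a, gfn_t b) { return a > b ? a : b; }

	int main(void)
	{
		/* First range_add(): invalidate gfns [0x100, 0x200). */
		gfn_t start = 0x100, end = 0x200;

		/* Second range_add() of [0x10000, 0x10100) merges via min/max. */
		start = min_gfn(start, 0x10000);	/* still 0x100 */
		end = max_gfn(end, 0x10100);		/* now 0x10100 */

		/*
		 * The recorded range is now [0x100, 0x10100), so a fault at
		 * gfn 0x5000 retries even though neither input covers it.
		 */
		printf("recorded range: [%#llx, %#llx)\n", start, end);
		return 0;
	}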
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 5a97e6c7d9c2..1a577a25de47 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -543,9 +543,7 @@ static inline struct kvm *mmu_notifier_to_kvm(struct mmu_notifier *mn)
 
 typedef bool (*gfn_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range);
 
-typedef void (*on_lock_fn_t)(struct kvm *kvm, unsigned long start,
-			     unsigned long end);
-
+typedef void (*on_lock_fn_t)(struct kvm *kvm);
 typedef void (*on_unlock_fn_t)(struct kvm *kvm);
 
 struct kvm_mmu_notifier_range {
@@ -637,7 +635,8 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
 			locked = true;
 			KVM_MMU_LOCK(kvm);
 			if (!IS_KVM_NULL_FN(range->on_lock))
-				range->on_lock(kvm, range->start, range->end);
+				range->on_lock(kvm);
+
 			if (IS_KVM_NULL_FN(range->handler))
 				break;
 		}
@@ -742,16 +741,29 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
 	kvm_handle_hva_range(mn, address, address + 1, arg, kvm_change_spte_gfn);
 }
 
-void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start,
-			      unsigned long end)
+void kvm_mmu_invalidate_begin(struct kvm *kvm)
 {
+	lockdep_assert_held_write(&kvm->mmu_lock);
 	/*
 	 * The count increase must become visible at unlock time as no
 	 * spte can be established without taking the mmu_lock and
 	 * count is also read inside the mmu_lock critical section.
 	 */
 	kvm->mmu_invalidate_in_progress++;
+
 	if (likely(kvm->mmu_invalidate_in_progress == 1)) {
+		kvm->mmu_invalidate_range_start = INVALID_GPA;
+		kvm->mmu_invalidate_range_end = INVALID_GPA;
+	}
+}
+
+void kvm_mmu_invalidate_range_add(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	lockdep_assert_held_write(&kvm->mmu_lock);
+
+	WARN_ON_ONCE(!kvm->mmu_invalidate_in_progress);
+
+	if (likely(kvm->mmu_invalidate_range_start == INVALID_GPA)) {
 		kvm->mmu_invalidate_range_start = start;
 		kvm->mmu_invalidate_range_end = end;
 	} else {
@@ -771,6 +783,12 @@ void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start,
 	}
 }
 
+static bool kvm_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
+{
+	kvm_mmu_invalidate_range_add(kvm, range->start, range->end);
+	return kvm_unmap_gfn_range(kvm, range);
+}
+
 static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 					const struct mmu_notifier_range *range)
 {
@@ -778,7 +796,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 	const struct kvm_mmu_notifier_range hva_range = {
 		.start		= range->start,
 		.end		= range->end,
-		.handler	= kvm_unmap_gfn_range,
+		.handler	= kvm_mmu_unmap_gfn_range,
 		.on_lock	= kvm_mmu_invalidate_begin,
 		.on_unlock	= kvm_arch_guest_memory_reclaimed,
 		.flush_on_ret	= true,
@@ -817,8 +835,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 	return 0;
 }
 
-void kvm_mmu_invalidate_end(struct kvm *kvm, unsigned long start,
-			    unsigned long end)
+void kvm_mmu_invalidate_end(struct kvm *kvm)
 {
 	/*
 	 * This sequence increase will notify the kvm page fault that
@@ -834,6 +851,12 @@ void kvm_mmu_invalidate_end(struct kvm *kvm, unsigned long start,
 	 */
 	kvm->mmu_invalidate_in_progress--;
 	KVM_BUG_ON(kvm->mmu_invalidate_in_progress < 0, kvm);
+
+	/*
+	 * Assert that at least one range was added between start() and end().
+	 * Not adding a range isn't fatal, but it is a KVM bug.
+	 */
+	WARN_ON_ONCE(kvm->mmu_invalidate_range_start == INVALID_GPA);
 }
 
 static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
-- 
2.42.0.820.g83a721a137-goog
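Taken together, the conversion leaves an invalidating caller with the
shape below.  This is a minimal sketch distilled from the
kvm_zap_gfn_range() hunk above, not new code; in the mmu_notifier path
the same steps happen indirectly, with on_lock invoking
kvm_mmu_invalidate_begin() and kvm_mmu_unmap_gfn_range() supplying the
range_add() for each overlapping memslot range.

	write_lock(&kvm->mmu_lock);

	kvm_mmu_invalidate_begin(kvm);	/* outermost begin resets the range to INVALID_GPA */
	kvm_mmu_invalidate_range_add(kvm, gfn_start, gfn_end);

	/* ... zap SPTEs and flush TLBs for [gfn_start, gfn_end) ... */

	kvm_mmu_invalidate_end(kvm);	/* bumps mmu_invalidate_seq */
	write_unlock(&kvm->mmu_lock);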