From nobody Sat Feb 7 23:48:08 2026
From: David Stevens
To: Sean Christopherson
Cc: Marc Zyngier, Michael Ellerman, Peter Xu, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    kvm@vger.kernel.org
Subject: [PATCH v7 1/8] KVM: Assert that a page's refcount is elevated when marking accessed/dirty
Date: Tue, 4 Jul 2023 16:50:46 +0900
Message-ID: <20230704075054.3344915-2-stevensd@google.com>
In-Reply-To: <20230704075054.3344915-1-stevensd@google.com>
References: <20230704075054.3344915-1-stevensd@google.com>
From: Sean Christopherson

Assert that a page's refcount is elevated, i.e. that _something_ holds a
reference to the page, when KVM marks a page as accessed and/or dirty.
KVM typically doesn't hold a reference to pages that are mapped into the
guest, e.g. to allow page migration, compaction, swap, etc., and instead
relies on mmu_notifiers to react to changes in the primary MMU.

Incorrect handling of mmu_notifier events (or similar mechanisms) can
result in KVM keeping a mapping beyond the lifetime of the backing page,
i.e. can (and often does) result in use-after-free.  Yelling if KVM marks
a freed page as accessed/dirty doesn't prevent badness, as KVM usually
only does A/D updates when unmapping memory from the guest, i.e. the
assertion fires well after an underlying bug has occurred.  But yelling
does help detect, triage, and debug use-after-free bugs.

Note, the assertion must use page_count(), NOT page_ref_count()!  For
hugepages, the returned struct page may be a tail page and thus not have
its own refcount.

Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index b838c8f71349..371bd783ff2b 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2885,6 +2885,19 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);
 
 static bool kvm_is_ad_tracked_page(struct page *page)
 {
+        /*
+         * Assert that KVM isn't attempting to mark a freed page as Accessed or
+         * Dirty, i.e. that KVM's MMU doesn't have a use-after-free bug.  KVM
+         * (typically) doesn't pin pages that are mapped in KVM's MMU, and
+         * instead relies on mmu_notifiers to know when a mapping needs to be
+         * zapped/invalidated.  Unmapping from KVM's MMU must happen _before_
+         * KVM returns from its mmu_notifier, i.e. the page should have an
+         * elevated refcount at this point even though KVM doesn't hold a
+         * reference of its own.
+         */
+        if (WARN_ON_ONCE(!page_count(page)))
+                return false;
+
         /*
          * Per page-flags.h, pages tagged PG_reserved "should in general not be
          * touched (e.g. set dirty) except by its owner".
-- 
2.41.0.255.g8b1d071c50-goog
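A minimal, illustrative sketch (not part of the patch) of why the assertion
above must use page_count() rather than page_ref_count().  The helper names
are hypothetical; the point is that page_count() resolves the compound head,
so a hugepage tail page reports the head's refcount, while page_ref_count()
reads the tail page's own (unused) _refcount.

#include <linux/mm.h>
#include <linux/page_ref.h>

/* Correct: page_count() follows the compound head, so a tail page of a
 * hugepage reports the reference count that is actually being tracked. */
static bool refcount_is_elevated(struct page *page)
{
        return page_count(page) > 0;
}

/* Broken for hugepage tails: page_ref_count() reads this specific page's
 * _refcount, which for a tail page is not where references are counted. */
static bool refcount_is_elevated_buggy(struct page *page)
{
        return page_ref_count(page) > 0;
}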
From nobody Sat Feb 7 23:48:08 2026
From: David Stevens
To: Sean Christopherson
Cc: Marc Zyngier, Michael Ellerman, Peter Xu, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    kvm@vger.kernel.org, David Stevens
Subject: [PATCH v7 2/8] KVM: Introduce __kvm_follow_pfn function
Date: Tue, 4 Jul 2023 16:50:47 +0900
Message-ID: <20230704075054.3344915-3-stevensd@google.com>
In-Reply-To: <20230704075054.3344915-1-stevensd@google.com>
References: <20230704075054.3344915-1-stevensd@google.com>
From: David Stevens

Introduce __kvm_follow_pfn, which will replace __gfn_to_pfn_memslot.
__kvm_follow_pfn refactors the old API's arguments into a struct and,
where possible, combines the boolean arguments into a single flags
argument.

Signed-off-by: David Stevens
---
 include/linux/kvm_host.h |  16 ++++
 virt/kvm/kvm_main.c      | 171 ++++++++++++++++++++++-----------
 virt/kvm/kvm_mm.h        |   3 +-
 virt/kvm/pfncache.c      |   8 +-
 4 files changed, 122 insertions(+), 76 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 9d3ac7720da9..ef2763c2b12e 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -97,6 +97,7 @@
 #define KVM_PFN_ERR_HWPOISON   (KVM_PFN_ERR_MASK + 1)
 #define KVM_PFN_ERR_RO_FAULT   (KVM_PFN_ERR_MASK + 2)
 #define KVM_PFN_ERR_SIGPENDING (KVM_PFN_ERR_MASK + 3)
+#define KVM_PFN_ERR_NEEDS_IO   (KVM_PFN_ERR_MASK + 4)
 
 /*
  * error pfns indicate that the gfn is in slot but faild to
@@ -1156,6 +1157,21 @@ unsigned long gfn_to_hva_memslot_prot(struct kvm_memory_slot *slot, gfn_t gfn,
 void kvm_release_page_clean(struct page *page);
 void kvm_release_page_dirty(struct page *page);
 
+struct kvm_follow_pfn {
+        const struct kvm_memory_slot *slot;
+        gfn_t gfn;
+        unsigned int flags;
+        bool atomic;
+        /* Allow a read fault to create a writeable mapping. */
+        bool allow_write_mapping;
+
+        /* Outputs of __kvm_follow_pfn */
+        hva_t hva;
+        bool writable;
+};
+
+kvm_pfn_t __kvm_follow_pfn(struct kvm_follow_pfn *foll);
+
 kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn);
 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
                           bool *writable);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 371bd783ff2b..b13f22861d2f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2486,24 +2486,22 @@ static inline int check_user_page_hwpoison(unsigned long addr)
  * true indicates success, otherwise false is returned.  It's also the
  * only part that runs if we can in atomic context.
  */
-static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
-                            bool *writable, kvm_pfn_t *pfn)
+static bool hva_to_pfn_fast(struct kvm_follow_pfn *foll, kvm_pfn_t *pfn)
 {
         struct page *page[1];
+        bool write_fault = foll->flags & FOLL_WRITE;
 
         /*
          * Fast pin a writable pfn only if it is a write fault request
          * or the caller allows to map a writable pfn for a read fault
          * request.
          */
-        if (!(write_fault || writable))
+        if (!(write_fault || foll->allow_write_mapping))
                 return false;
 
-        if (get_user_page_fast_only(addr, FOLL_WRITE, page)) {
+        if (get_user_page_fast_only(foll->hva, FOLL_WRITE, page)) {
                 *pfn = page_to_pfn(page[0]);
-
-                if (writable)
-                        *writable = true;
+                foll->writable = foll->allow_write_mapping;
                 return true;
         }
 
@@ -2514,35 +2512,26 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
  * The slow path to get the pfn of the specified host virtual address,
  * 1 indicates success, -errno is returned if error is detected.
*/ -static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fau= lt, - bool interruptible, bool *writable, kvm_pfn_t *pfn) +static int hva_to_pfn_slow(struct kvm_follow_pfn *foll, kvm_pfn_t *pfn) { - unsigned int flags =3D FOLL_HWPOISON; + unsigned int flags =3D FOLL_HWPOISON | FOLL_GET | foll->flags; struct page *page; int npages; =20 might_sleep(); =20 - if (writable) - *writable =3D write_fault; - - if (write_fault) - flags |=3D FOLL_WRITE; - if (async) - flags |=3D FOLL_NOWAIT; - if (interruptible) - flags |=3D FOLL_INTERRUPTIBLE; - - npages =3D get_user_pages_unlocked(addr, 1, &page, flags); + npages =3D get_user_pages_unlocked(foll->hva, 1, &page, flags); if (npages !=3D 1) return npages; =20 + foll->writable =3D (foll->flags & FOLL_WRITE) && foll->allow_write_mappin= g; + /* map read fault as writable if possible */ - if (unlikely(!write_fault) && writable) { + if (unlikely(!foll->writable) && foll->allow_write_mapping) { struct page *wpage; =20 - if (get_user_page_fast_only(addr, FOLL_WRITE, &wpage)) { - *writable =3D true; + if (get_user_page_fast_only(foll->hva, FOLL_WRITE, &wpage)) { + foll->writable =3D true; put_page(page); page =3D wpage; } @@ -2572,23 +2561,23 @@ static int kvm_try_get_pfn(kvm_pfn_t pfn) return get_page_unless_zero(page); } =20 -static int hva_to_pfn_remapped(struct vm_area_struct *vma, - unsigned long addr, bool write_fault, - bool *writable, kvm_pfn_t *p_pfn) +static int hva_to_pfn_remapped(struct vm_area_struct *vma, struct kvm_foll= ow_pfn *foll, + kvm_pfn_t *p_pfn) { kvm_pfn_t pfn; pte_t *ptep; spinlock_t *ptl; + bool write_fault =3D foll->flags & FOLL_WRITE; int r; =20 - r =3D follow_pte(vma->vm_mm, addr, &ptep, &ptl); + r =3D follow_pte(vma->vm_mm, foll->hva, &ptep, &ptl); if (r) { /* * get_user_pages fails for VM_IO and VM_PFNMAP vmas and does * not call the fault handler, so do it here. */ bool unlocked =3D false; - r =3D fixup_user_fault(current->mm, addr, + r =3D fixup_user_fault(current->mm, foll->hva, (write_fault ? FAULT_FLAG_WRITE : 0), &unlocked); if (unlocked) @@ -2596,7 +2585,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct = *vma, if (r) return r; =20 - r =3D follow_pte(vma->vm_mm, addr, &ptep, &ptl); + r =3D follow_pte(vma->vm_mm, foll->hva, &ptep, &ptl); if (r) return r; } @@ -2606,8 +2595,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct = *vma, goto out; } =20 - if (writable) - *writable =3D pte_write(*ptep); + foll->writable =3D pte_write(*ptep) && foll->allow_write_mapping; pfn =3D pte_pfn(*ptep); =20 /* @@ -2652,24 +2640,22 @@ static int hva_to_pfn_remapped(struct vm_area_struc= t *vma, * 2): @write_fault =3D false && @writable, @writable will tell the caller * whether the mapping is writable. 
*/ -kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible, - bool *async, bool write_fault, bool *writable) +kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *foll) { struct vm_area_struct *vma; kvm_pfn_t pfn; int npages, r; =20 /* we can do it either atomically or asynchronously, not both */ - BUG_ON(atomic && async); + BUG_ON(foll->atomic && (foll->flags & FOLL_NOWAIT)); =20 - if (hva_to_pfn_fast(addr, write_fault, writable, &pfn)) + if (hva_to_pfn_fast(foll, &pfn)) return pfn; =20 - if (atomic) + if (foll->atomic) return KVM_PFN_ERR_FAULT; =20 - npages =3D hva_to_pfn_slow(addr, async, write_fault, interruptible, - writable, &pfn); + npages =3D hva_to_pfn_slow(foll, &pfn); if (npages =3D=3D 1) return pfn; if (npages =3D=3D -EINTR) @@ -2677,83 +2663,122 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool atom= ic, bool interruptible, =20 mmap_read_lock(current->mm); if (npages =3D=3D -EHWPOISON || - (!async && check_user_page_hwpoison(addr))) { + (!(foll->flags & FOLL_NOWAIT) && check_user_page_hwpoison(foll->hva= ))) { pfn =3D KVM_PFN_ERR_HWPOISON; goto exit; } =20 retry: - vma =3D vma_lookup(current->mm, addr); + vma =3D vma_lookup(current->mm, foll->hva); =20 if (vma =3D=3D NULL) pfn =3D KVM_PFN_ERR_FAULT; else if (vma->vm_flags & (VM_IO | VM_PFNMAP)) { - r =3D hva_to_pfn_remapped(vma, addr, write_fault, writable, &pfn); + r =3D hva_to_pfn_remapped(vma, foll, &pfn); if (r =3D=3D -EAGAIN) goto retry; if (r < 0) pfn =3D KVM_PFN_ERR_FAULT; } else { - if (async && vma_is_valid(vma, write_fault)) - *async =3D true; - pfn =3D KVM_PFN_ERR_FAULT; + if ((foll->flags & FOLL_NOWAIT) && + vma_is_valid(vma, foll->flags & FOLL_WRITE)) + pfn =3D KVM_PFN_ERR_NEEDS_IO; + else + pfn =3D KVM_PFN_ERR_FAULT; } exit: mmap_read_unlock(current->mm); return pfn; } =20 -kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t g= fn, - bool atomic, bool interruptible, bool *async, - bool write_fault, bool *writable, hva_t *hva) +kvm_pfn_t __kvm_follow_pfn(struct kvm_follow_pfn *foll) { - unsigned long addr =3D __gfn_to_hva_many(slot, gfn, NULL, write_fault); - - if (hva) - *hva =3D addr; + foll->hva =3D __gfn_to_hva_many(foll->slot, foll->gfn, NULL, + foll->flags & FOLL_WRITE); =20 - if (addr =3D=3D KVM_HVA_ERR_RO_BAD) { - if (writable) - *writable =3D false; + if (foll->hva =3D=3D KVM_HVA_ERR_RO_BAD) return KVM_PFN_ERR_RO_FAULT; - } =20 - if (kvm_is_error_hva(addr)) { - if (writable) - *writable =3D false; + if (kvm_is_error_hva(foll->hva)) return KVM_PFN_NOSLOT; - } =20 - /* Do not map writable pfn in the readonly memslot. 
*/ - if (writable && memslot_is_readonly(slot)) { - *writable =3D false; - writable =3D NULL; - } + if (memslot_is_readonly(foll->slot)) + foll->allow_write_mapping =3D false; + + return hva_to_pfn(foll); +} +EXPORT_SYMBOL_GPL(__kvm_follow_pfn); =20 - return hva_to_pfn(addr, atomic, interruptible, async, write_fault, - writable); +kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t g= fn, + bool atomic, bool interruptible, bool *async, + bool write_fault, bool *writable, hva_t *hva) +{ + kvm_pfn_t pfn; + struct kvm_follow_pfn foll =3D { + .slot =3D slot, + .gfn =3D gfn, + .flags =3D 0, + .atomic =3D atomic, + .allow_write_mapping =3D !!writable, + }; + + if (write_fault) + foll.flags |=3D FOLL_WRITE; + if (async) + foll.flags |=3D FOLL_NOWAIT; + if (interruptible) + foll.flags |=3D FOLL_INTERRUPTIBLE; + + pfn =3D __kvm_follow_pfn(&foll); + if (pfn =3D=3D KVM_PFN_ERR_NEEDS_IO) { + *async =3D true; + pfn =3D KVM_PFN_ERR_FAULT; + } + if (hva) + *hva =3D foll.hva; + if (writable) + *writable =3D foll.writable; + return pfn; } EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot); =20 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault, bool *writable) { - return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, false, - NULL, write_fault, writable, NULL); + kvm_pfn_t pfn; + struct kvm_follow_pfn foll =3D { + .slot =3D gfn_to_memslot(kvm, gfn), + .gfn =3D gfn, + .flags =3D write_fault ? FOLL_WRITE : 0, + .allow_write_mapping =3D !!writable, + }; + pfn =3D __kvm_follow_pfn(&foll); + if (writable) + *writable =3D foll.writable; + return pfn; } EXPORT_SYMBOL_GPL(gfn_to_pfn_prot); =20 kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn) { - return __gfn_to_pfn_memslot(slot, gfn, false, false, NULL, true, - NULL, NULL); + struct kvm_follow_pfn foll =3D { + .slot =3D slot, + .gfn =3D gfn, + .flags =3D FOLL_WRITE, + }; + return __kvm_follow_pfn(&foll); } EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot); =20 kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gf= n_t gfn) { - return __gfn_to_pfn_memslot(slot, gfn, true, false, NULL, true, - NULL, NULL); + struct kvm_follow_pfn foll =3D { + .slot =3D slot, + .gfn =3D gfn, + .flags =3D FOLL_WRITE, + .atomic =3D true, + }; + return __kvm_follow_pfn(&foll); } EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot_atomic); =20 diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h index 180f1a09e6ba..ed896aee5396 100644 --- a/virt/kvm/kvm_mm.h +++ b/virt/kvm/kvm_mm.h @@ -20,8 +20,7 @@ #define KVM_MMU_UNLOCK(kvm) spin_unlock(&(kvm)->mmu_lock) #endif /* KVM_HAVE_MMU_RWLOCK */ =20 -kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible, - bool *async, bool write_fault, bool *writable); +kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *foll); =20 #ifdef CONFIG_HAVE_KVM_PFNCACHE void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index 2d6aba677830..e3fefa753a51 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -144,6 +144,12 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_ca= che *gpc) kvm_pfn_t new_pfn =3D KVM_PFN_ERR_FAULT; void *new_khva =3D NULL; unsigned long mmu_seq; + struct kvm_follow_pfn foll =3D { + .slot =3D gpc->memslot, + .gfn =3D gpa_to_gfn(gpc->gpa), + .flags =3D FOLL_WRITE, + .hva =3D gpc->uhva, + }; =20 lockdep_assert_held(&gpc->refresh_lock); =20 @@ -183,7 +189,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cac= he *gpc) } =20 /* We always request a writeable mapping */ - new_pfn =3D 
hva_to_pfn(gpc->uhva, false, false, NULL, true, NULL);
+        new_pfn = hva_to_pfn(&foll);
         if (is_error_noslot_pfn(new_pfn))
                 goto out_error;
 
-- 
2.41.0.255.g8b1d071c50-goog
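A hedged sketch, for illustration only, of how a caller converts from the old
boolean-heavy API to the struct introduced in the patch above.  The wrapper
name example_gfn_to_pfn() is hypothetical; the fields and semantics are taken
from the hunks above (FOLL_GET is still implicit at this point in the series).

#include <linux/kvm_host.h>

static kvm_pfn_t example_gfn_to_pfn(const struct kvm_memory_slot *slot,
                                    gfn_t gfn, bool write_fault, bool *writable)
{
        struct kvm_follow_pfn foll = {
                .slot = slot,
                .gfn = gfn,
                /* The old write_fault boolean becomes a FOLL_WRITE flag. */
                .flags = write_fault ? FOLL_WRITE : 0,
                /* The old "bool *writable" out-parameter becomes an input
                 * permission plus the foll.writable output consumed below. */
                .allow_write_mapping = !!writable,
        };
        kvm_pfn_t pfn = __kvm_follow_pfn(&foll);

        if (writable)
                *writable = foll.writable;
        /* foll.hva now reports the host virtual address that was resolved. */
        return pfn;
}

Collapsing the separate atomic/interruptible/async/write booleans into one
struct also lets later patches in the series add new knobs without touching
every call site again.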
From nobody Sat Feb 7 23:48:08 2026
From: David Stevens
To: Sean Christopherson
Cc: Marc Zyngier, Michael Ellerman, Peter Xu, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    kvm@vger.kernel.org, David Stevens
Subject: [PATCH v7 3/8] KVM: Make __kvm_follow_pfn not imply FOLL_GET
Date: Tue, 4 Jul 2023 16:50:48 +0900
Message-ID: <20230704075054.3344915-4-stevensd@google.com>
In-Reply-To: <20230704075054.3344915-1-stevensd@google.com>
References: <20230704075054.3344915-1-stevensd@google.com>

From: David Stevens

Make it so that __kvm_follow_pfn does not imply FOLL_GET.  This allows
callers to resolve a gfn when the associated pfn has a valid struct page
that isn't being actively refcounted (e.g. tail pages of non-compound
higher order pages).  For a caller to safely omit FOLL_GET, all usages of
the returned pfn must be guarded by an mmu notifier.

This also adds an is_refcounted_page out parameter to kvm_follow_pfn that
is set when the returned pfn has an associated struct page with a valid
refcount.  Callers that don't pass FOLL_GET should remember this value
and use it to avoid places like kvm_is_ad_tracked_page that assume a
non-zero refcount.

Signed-off-by: David Stevens
---
 include/linux/kvm_host.h | 10 ++++++
 virt/kvm/kvm_main.c      | 67 +++++++++++++++++++++-------------------
 virt/kvm/pfncache.c      |  2 +-
 3 files changed, 47 insertions(+), 32 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ef2763c2b12e..a45308c7d2d9 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1157,6 +1157,9 @@ unsigned long gfn_to_hva_memslot_prot(struct kvm_memory_slot *slot, gfn_t gfn,
 void kvm_release_page_clean(struct page *page);
 void kvm_release_page_dirty(struct page *page);
 
+void kvm_set_page_accessed(struct page *page);
+void kvm_set_page_dirty(struct page *page);
+
 struct kvm_follow_pfn {
         const struct kvm_memory_slot *slot;
         gfn_t gfn;
@@ -1164,10 +1167,17 @@ struct kvm_follow_pfn {
         bool atomic;
         /* Allow a read fault to create a writeable mapping. */
         bool allow_write_mapping;
+        /*
+         * Usage of the returned pfn will be guarded by a mmu notifier. Must
+         * be true if FOLL_GET is not set.
+         */
+        bool guarded_by_mmu_notifier;
 
         /* Outputs of __kvm_follow_pfn */
         hva_t hva;
         bool writable;
+        /* True if the returned pfn is for a page with a valid refcount.
*/ + bool is_refcounted_page; }; =20 kvm_pfn_t __kvm_follow_pfn(struct kvm_follow_pfn *foll); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index b13f22861d2f..0f7b41f220b6 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -2502,6 +2502,9 @@ static bool hva_to_pfn_fast(struct kvm_follow_pfn *fo= ll, kvm_pfn_t *pfn) if (get_user_page_fast_only(foll->hva, FOLL_WRITE, page)) { *pfn =3D page_to_pfn(page[0]); foll->writable =3D foll->allow_write_mapping; + foll->is_refcounted_page =3D true; + if (!(foll->flags & FOLL_GET)) + put_page(page[0]); return true; } =20 @@ -2525,6 +2528,7 @@ static int hva_to_pfn_slow(struct kvm_follow_pfn *fol= l, kvm_pfn_t *pfn) return npages; =20 foll->writable =3D (foll->flags & FOLL_WRITE) && foll->allow_write_mappin= g; + foll->is_refcounted_page =3D true; =20 /* map read fault as writable if possible */ if (unlikely(!foll->writable) && foll->allow_write_mapping) { @@ -2537,6 +2541,8 @@ static int hva_to_pfn_slow(struct kvm_follow_pfn *fol= l, kvm_pfn_t *pfn) } } *pfn =3D page_to_pfn(page); + if (!(foll->flags & FOLL_GET)) + put_page(page); return npages; } =20 @@ -2551,16 +2557,6 @@ static bool vma_is_valid(struct vm_area_struct *vma,= bool write_fault) return true; } =20 -static int kvm_try_get_pfn(kvm_pfn_t pfn) -{ - struct page *page =3D kvm_pfn_to_refcounted_page(pfn); - - if (!page) - return 1; - - return get_page_unless_zero(page); -} - static int hva_to_pfn_remapped(struct vm_area_struct *vma, struct kvm_foll= ow_pfn *foll, kvm_pfn_t *p_pfn) { @@ -2568,6 +2564,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct = *vma, struct kvm_follow_pfn pte_t *ptep; spinlock_t *ptl; bool write_fault =3D foll->flags & FOLL_WRITE; + struct page *page; int r; =20 r =3D follow_pte(vma->vm_mm, foll->hva, &ptep, &ptl); @@ -2599,24 +2596,27 @@ static int hva_to_pfn_remapped(struct vm_area_struc= t *vma, struct kvm_follow_pfn pfn =3D pte_pfn(*ptep); =20 /* - * Get a reference here because callers of *hva_to_pfn* and - * *gfn_to_pfn* ultimately call kvm_release_pfn_clean on the - * returned pfn. This is only needed if the VMA has VM_MIXEDMAP - * set, but the kvm_try_get_pfn/kvm_release_pfn_clean pair will - * simply do nothing for reserved pfns. - * - * Whoever called remap_pfn_range is also going to call e.g. - * unmap_mapping_range before the underlying pages are freed, - * causing a call to our MMU notifier. + * Now deal with reference counting. If kvm_pfn_to_refcounted_page + * returns NULL, then there's no refcount to worry about. * - * Certain IO or PFNMAP mappings can be backed with valid - * struct pages, but be allocated without refcounting e.g., - * tail pages of non-compound higher order allocations, which - * would then underflow the refcount when the caller does the - * required put_page. Don't allow those pages here. + * Otherwise, certain IO or PFNMAP mappings can be backed with valid + * struct pages but be allocated without refcounting e.g., tail pages of + * non-compound higher order allocations. If FOLL_GET is set and we + * increment such a refcount, then when that pfn is eventually passed to + * kvm_release_pfn_clean, its refcount would hit zero and be incorrectly + * freed. Therefore don't allow those pages here when FOLL_GET is set. 
         */
-        if (!kvm_try_get_pfn(pfn))
+        page = kvm_pfn_to_refcounted_page(pfn);
+        if (!page)
+                goto out;
+
+        if (get_page_unless_zero(page)) {
+                foll->is_refcounted_page = true;
+                if (!(foll->flags & FOLL_GET))
+                        put_page(page);
+        } else if (foll->flags & FOLL_GET) {
                 r = -EFAULT;
+        }
 
 out:
         pte_unmap_unlock(ptep, ptl);
@@ -2693,6 +2693,9 @@ kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *foll)
 
 kvm_pfn_t __kvm_follow_pfn(struct kvm_follow_pfn *foll)
 {
+        if (WARN_ON_ONCE(!(foll->flags & FOLL_GET) && !foll->guarded_by_mmu_notifier))
+                return KVM_PFN_ERR_FAULT;
+
         foll->hva = __gfn_to_hva_many(foll->slot, foll->gfn, NULL,
                                       foll->flags & FOLL_WRITE);
 
@@ -2717,7 +2720,7 @@ kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
         struct kvm_follow_pfn foll = {
                 .slot = slot,
                 .gfn = gfn,
-                .flags = 0,
+                .flags = FOLL_GET,
                 .atomic = atomic,
                 .allow_write_mapping = !!writable,
         };
@@ -2749,7 +2752,7 @@ kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
         struct kvm_follow_pfn foll = {
                 .slot = gfn_to_memslot(kvm, gfn),
                 .gfn = gfn,
-                .flags = write_fault ? FOLL_WRITE : 0,
+                .flags = FOLL_GET | (write_fault ? FOLL_WRITE : 0),
                 .allow_write_mapping = !!writable,
         };
         pfn = __kvm_follow_pfn(&foll);
@@ -2764,7 +2767,7 @@ kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn)
         struct kvm_follow_pfn foll = {
                 .slot = slot,
                 .gfn = gfn,
-                .flags = FOLL_WRITE,
+                .flags = FOLL_GET | FOLL_WRITE,
         };
         return __kvm_follow_pfn(&foll);
 }
@@ -2775,7 +2778,7 @@ kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gfn_t gfn)
         struct kvm_follow_pfn foll = {
                 .slot = slot,
                 .gfn = gfn,
-                .flags = FOLL_WRITE,
+                .flags = FOLL_GET | FOLL_WRITE,
                 .atomic = true,
         };
         return __kvm_follow_pfn(&foll);
@@ -2930,17 +2933,19 @@ static bool kvm_is_ad_tracked_page(struct page *page)
         return !PageReserved(page);
 }
 
-static void kvm_set_page_dirty(struct page *page)
+void kvm_set_page_dirty(struct page *page)
 {
         if (kvm_is_ad_tracked_page(page))
                 SetPageDirty(page);
 }
+EXPORT_SYMBOL_GPL(kvm_set_page_dirty);
 
-static void kvm_set_page_accessed(struct page *page)
+void kvm_set_page_accessed(struct page *page)
 {
         if (kvm_is_ad_tracked_page(page))
                 mark_page_accessed(page);
 }
+EXPORT_SYMBOL_GPL(kvm_set_page_accessed);
 
 void kvm_release_page_clean(struct page *page)
 {
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index e3fefa753a51..87caafce3dd0 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -147,7 +147,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
         struct kvm_follow_pfn foll = {
                 .slot = gpc->memslot,
                 .gfn = gpa_to_gfn(gpc->gpa),
-                .flags = FOLL_WRITE,
+                .flags = FOLL_WRITE | FOLL_GET,
                 .hva = gpc->uhva,
         };
 
-- 
2.41.0.255.g8b1d071c50-goog
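A hedged sketch, not taken from the series, of the contract the patch above
establishes for callers that omit FOLL_GET: they must promise that every use
of the pfn is covered by KVM's mmu notifier, and they should consult
foll.is_refcounted_page before touching the struct page.  The function name
example_map_gfn_without_get() is hypothetical.

#include <linux/kvm_host.h>

static kvm_pfn_t example_map_gfn_without_get(const struct kvm_memory_slot *slot,
                                             gfn_t gfn)
{
        struct kvm_follow_pfn foll = {
                .slot = slot,
                .gfn = gfn,
                .flags = FOLL_WRITE,            /* note: no FOLL_GET */
                /* Required when FOLL_GET is absent; otherwise
                 * __kvm_follow_pfn WARNs and returns KVM_PFN_ERR_FAULT. */
                .guarded_by_mmu_notifier = true,
        };
        kvm_pfn_t pfn = __kvm_follow_pfn(&foll);

        if (is_error_noslot_pfn(pfn))
                return pfn;

        /* Only touch the struct page if it actually has a live refcount;
         * this is safe only because the mapping is torn down from the mmu
         * notifier before the backing page can be freed. */
        if (foll.is_refcounted_page)
                kvm_set_page_accessed(pfn_to_page(pfn));

        return pfn;
}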
From nobody Sat Feb 7 23:48:08 2026
From: David Stevens
To: Sean Christopherson
Cc: Marc Zyngier, Michael Ellerman, Peter Xu, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    kvm@vger.kernel.org, David Stevens
Subject: [PATCH v7 4/8] KVM: x86/mmu: Migrate to __kvm_follow_pfn
Date: Tue, 4 Jul 2023 16:50:49 +0900
Message-ID: <20230704075054.3344915-5-stevensd@google.com>
In-Reply-To: <20230704075054.3344915-1-stevensd@google.com>
References: <20230704075054.3344915-1-stevensd@google.com>

From: David Stevens

Migrate from __gfn_to_pfn_memslot to __kvm_follow_pfn.
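A hedged sketch, not part of the patch, of the two-step lookup pattern the
hunks below adopt in __kvm_faultin_pfn: first try with FOLL_NOWAIT, and only
fall back to a blocking, interruptible lookup if the page needs I/O.  The
helper name example_faultin_pfn() is hypothetical, and the async page fault
hand-off that the real code attempts in between is omitted here.

#include <linux/kvm_host.h>

static kvm_pfn_t example_faultin_pfn(const struct kvm_memory_slot *slot,
                                     gfn_t gfn, bool write)
{
        struct kvm_follow_pfn foll = {
                .slot = slot,
                .gfn = gfn,
                .flags = FOLL_GET | FOLL_NOWAIT | (write ? FOLL_WRITE : 0),
                .allow_write_mapping = true,
        };
        kvm_pfn_t pfn = __kvm_follow_pfn(&foll);

        if (pfn != KVM_PFN_ERR_NEEDS_IO)
                return pfn;     /* resolved, no slot, or a hard error */

        /* The page has to be faulted in (e.g. from disk); retry without
         * NOWAIT, but let a pending fatal signal abort the wait. */
        foll.flags &= ~FOLL_NOWAIT;
        foll.flags |= FOLL_INTERRUPTIBLE;
        return __kvm_follow_pfn(&foll);
}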
Signed-off-by: David Stevens
---
 arch/x86/kvm/mmu/mmu.c | 35 +++++++++++++++++++++----------
 1 file changed, 25 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index ec169f5c7dce..e44ab512c3a1 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4296,7 +4296,12 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
         struct kvm_memory_slot *slot = fault->slot;
-        bool async;
+        struct kvm_follow_pfn foll = {
+                .slot = slot,
+                .gfn = fault->gfn,
+                .flags = FOLL_GET | (fault->write ? FOLL_WRITE : 0),
+                .allow_write_mapping = true,
+        };
 
         /*
          * Retry the page fault if the gfn hit a memslot that is being deleted
@@ -4325,12 +4330,14 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
                 return RET_PF_EMULATE;
         }
 
-        async = false;
-        fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, false, &async,
-                                          fault->write, &fault->map_writable,
-                                          &fault->hva);
-        if (!async)
-                return RET_PF_CONTINUE; /* *pfn has correct page already */
+        foll.flags |= FOLL_NOWAIT;
+        fault->pfn = __kvm_follow_pfn(&foll);
+
+        if (!is_error_noslot_pfn(fault->pfn))
+                goto success;
+
+        if (fault->pfn != KVM_PFN_ERR_NEEDS_IO)
+                return RET_PF_CONTINUE;
 
         if (!fault->prefetch && kvm_can_do_async_pf(vcpu)) {
                 trace_kvm_try_async_get_page(fault->addr, fault->gfn);
@@ -4348,9 +4355,17 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
          * to wait for IO.  Note, gup always bails if it is unable to quickly
          * get a page and a fatal signal, i.e. SIGKILL, is pending.
          */
-        fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, true, NULL,
-                                          fault->write, &fault->map_writable,
-                                          &fault->hva);
+        foll.flags |= FOLL_INTERRUPTIBLE;
+        foll.flags &= ~FOLL_NOWAIT;
+        fault->pfn = __kvm_follow_pfn(&foll);
+
+        if (!is_error_noslot_pfn(fault->pfn))
+                goto success;
+
+        return RET_PF_CONTINUE;
+success:
+        fault->hva = foll.hva;
+        fault->map_writable = foll.writable;
         return RET_PF_CONTINUE;
 }
 
-- 
2.41.0.255.g8b1d071c50-goog
From nobody Sat Feb 7 23:48:08 2026
From: David Stevens
To: Sean Christopherson
Cc: Marc Zyngier, Michael Ellerman, Peter Xu, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    kvm@vger.kernel.org, David Stevens
Subject: [PATCH v7 5/8] KVM: x86/mmu: Don't pass FOLL_GET to __kvm_follow_pfn
Date: Tue, 4 Jul 2023 16:50:50 +0900
Message-ID: <20230704075054.3344915-6-stevensd@google.com>
In-Reply-To: <20230704075054.3344915-1-stevensd@google.com>
References: <20230704075054.3344915-1-stevensd@google.com>

From: David Stevens

Stop passing FOLL_GET to __kvm_follow_pfn.  This allows the host to map
memory into the guest that is backed by un-refcounted struct pages - for
example, higher order non-compound pages allocated by the amdgpu driver
via ttm_pool_alloc_page.

The bulk of this change is tracking the is_refcounted_page flag so that
non-refcounted pages don't trigger page_count() == 0 warnings.  This is
done by storing the flag in an unused bit in the sptes.
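A hedged, illustrative distillation (not code from the patch) of how the new
SPTE bit is consumed when a mapping is torn down: accessed/dirty state is only
propagated to the struct page when the SPTE says the page was refcounted.
is_refcounted_page_pte() and SPTE_MMU_PAGE_REFCOUNTED are introduced by the
hunks below; the helper name example_drop_spte() and the exact include set are
hypothetical (these are KVM x86 internal headers).

#include "mmu.h"
#include "spte.h"

static void example_drop_spte(u64 old_spte)
{
        kvm_pfn_t pfn = spte_to_pfn(old_spte);

        /* Without the REFCOUNTED bit there may be no struct page at all, or
         * its refcount may not track this mapping. */
        if (!is_refcounted_page_pte(old_spte))
                return;

        if (is_accessed_spte(old_spte))
                kvm_set_page_accessed(pfn_to_page(pfn));

        if (is_dirty_spte(old_spte))
                kvm_set_page_dirty(pfn_to_page(pfn));
}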
Signed-off-by: David Stevens --- arch/x86/kvm/mmu/mmu.c | 44 +++++++++++++++++++++------------ arch/x86/kvm/mmu/mmu_internal.h | 1 + arch/x86/kvm/mmu/paging_tmpl.h | 9 ++++--- arch/x86/kvm/mmu/spte.c | 4 ++- arch/x86/kvm/mmu/spte.h | 12 ++++++++- arch/x86/kvm/mmu/tdp_mmu.c | 22 ++++++++++------- 6 files changed, 62 insertions(+), 30 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index e44ab512c3a1..b1607e314497 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -553,12 +553,14 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte) =20 if (is_accessed_spte(old_spte) && !is_accessed_spte(new_spte)) { flush =3D true; - kvm_set_pfn_accessed(spte_to_pfn(old_spte)); + if (is_refcounted_page_pte(old_spte)) + kvm_set_page_accessed(pfn_to_page(spte_to_pfn(old_spte))); } =20 if (is_dirty_spte(old_spte) && !is_dirty_spte(new_spte)) { flush =3D true; - kvm_set_pfn_dirty(spte_to_pfn(old_spte)); + if (is_refcounted_page_pte(old_spte)) + kvm_set_page_dirty(pfn_to_page(spte_to_pfn(old_spte))); } =20 return flush; @@ -596,14 +598,18 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm,= u64 *sptep) * before they are reclaimed. Sanity check that, if the pfn is backed * by a refcounted page, the refcount is elevated. */ - page =3D kvm_pfn_to_refcounted_page(pfn); - WARN_ON(page && !page_count(page)); + if (is_refcounted_page_pte(old_spte)) { + page =3D kvm_pfn_to_refcounted_page(pfn); + WARN_ON(!page || !page_count(page)); + } =20 - if (is_accessed_spte(old_spte)) - kvm_set_pfn_accessed(pfn); + if (is_refcounted_page_pte(old_spte)) { + if (is_accessed_spte(old_spte)) + kvm_set_page_accessed(pfn_to_page(pfn)); =20 - if (is_dirty_spte(old_spte)) - kvm_set_pfn_dirty(pfn); + if (is_dirty_spte(old_spte)) + kvm_set_page_dirty(pfn_to_page(pfn)); + } =20 return old_spte; } @@ -639,8 +645,8 @@ static bool mmu_spte_age(u64 *sptep) * Capture the dirty status of the page, so that it doesn't get * lost when the SPTE is marked for access tracking. 
*/ - if (is_writable_pte(spte)) - kvm_set_pfn_dirty(spte_to_pfn(spte)); + if (is_writable_pte(spte) && is_refcounted_page_pte(spte)) + kvm_set_page_dirty(pfn_to_page(spte_to_pfn(spte))); =20 spte =3D mark_spte_for_access_track(spte); mmu_spte_update_no_track(sptep, spte); @@ -1278,8 +1284,8 @@ static bool spte_wrprot_for_clear_dirty(u64 *sptep) { bool was_writable =3D test_and_clear_bit(PT_WRITABLE_SHIFT, (unsigned long *)sptep); - if (was_writable && !spte_ad_enabled(*sptep)) - kvm_set_pfn_dirty(spte_to_pfn(*sptep)); + if (was_writable && !spte_ad_enabled(*sptep) && is_refcounted_page_pte(*s= ptep)) + kvm_set_page_dirty(pfn_to_page(spte_to_pfn(*sptep))); =20 return was_writable; } @@ -2937,6 +2943,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct= kvm_memory_slot *slot, bool host_writable =3D !fault || fault->map_writable; bool prefetch =3D !fault || fault->prefetch; bool write_fault =3D fault && fault->write; + bool is_refcounted =3D !fault || fault->is_refcounted_page; =20 pgprintk("%s: spte %llx write_fault %d gfn %llx\n", __func__, *sptep, write_fault, gfn); @@ -2969,7 +2976,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct= kvm_memory_slot *slot, } =20 wrprot =3D make_spte(vcpu, sp, slot, pte_access, gfn, pfn, *sptep, prefet= ch, - true, host_writable, &spte); + true, host_writable, is_refcounted, &spte); =20 if (*sptep =3D=3D spte) { ret =3D RET_PF_SPURIOUS; @@ -4299,8 +4306,9 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, s= truct kvm_page_fault *fault struct kvm_follow_pfn foll =3D { .slot =3D slot, .gfn =3D fault->gfn, - .flags =3D FOLL_GET | (fault->write ? FOLL_WRITE : 0), + .flags =3D fault->write ? FOLL_WRITE : 0, .allow_write_mapping =3D true, + .guarded_by_mmu_notifier =3D true, }; =20 /* @@ -4317,6 +4325,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, s= truct kvm_page_fault *fault fault->slot =3D NULL; fault->pfn =3D KVM_PFN_NOSLOT; fault->map_writable =3D false; + fault->is_refcounted_page =3D false; return RET_PF_CONTINUE; } /* @@ -4366,6 +4375,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, s= truct kvm_page_fault *fault success: fault->hva =3D foll.hva; fault->map_writable =3D foll.writable; + fault->is_refcounted_page =3D foll.is_refcounted_page; return RET_PF_CONTINUE; } =20 @@ -4451,7 +4461,8 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, s= truct kvm_page_fault *fault =20 out_unlock: write_unlock(&vcpu->kvm->mmu_lock); - kvm_release_pfn_clean(fault->pfn); + if (fault->is_refcounted_page) + kvm_set_page_accessed(pfn_to_page(fault->pfn)); return r; } =20 @@ -4529,7 +4540,8 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vc= pu, =20 out_unlock: read_unlock(&vcpu->kvm->mmu_lock); - kvm_release_pfn_clean(fault->pfn); + if (fault->is_refcounted_page) + kvm_set_page_accessed(pfn_to_page(fault->pfn)); return r; } #endif diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_interna= l.h index d39af5639ce9..55790085884f 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -240,6 +240,7 @@ struct kvm_page_fault { kvm_pfn_t pfn; hva_t hva; bool map_writable; + bool is_refcounted_page; =20 /* * Indicates the guest is trying to write a gfn that contains one or diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index 0662e0278e70..3284e7bd9619 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -829,7 +829,8 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, str= uct kvm_page_fault *fault =20 out_unlock: 
write_unlock(&vcpu->kvm->mmu_lock); - kvm_release_pfn_clean(fault->pfn); + if (fault->is_refcounted_page) + kvm_set_page_accessed(pfn_to_page(fault->pfn)); return r; } =20 @@ -883,7 +884,7 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, s= truct kvm_mmu *mmu, */ static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp= , int i) { - bool host_writable; + bool host_writable, is_refcounted; gpa_t first_pte_gpa; u64 *sptep, spte; struct kvm_memory_slot *slot; @@ -940,10 +941,12 @@ static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, st= ruct kvm_mmu_page *sp, int sptep =3D &sp->spt[i]; spte =3D *sptep; host_writable =3D spte & shadow_host_writable_mask; + // TODO: is this correct? + is_refcounted =3D spte & SPTE_MMU_PAGE_REFCOUNTED; slot =3D kvm_vcpu_gfn_to_memslot(vcpu, gfn); make_spte(vcpu, sp, slot, pte_access, gfn, spte_to_pfn(spte), spte, true, false, - host_writable, &spte); + host_writable, is_refcounted, &spte); =20 return mmu_spte_update(sptep, spte); } diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index cf2c6426a6fc..46c681dc45e6 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -138,7 +138,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_pa= ge *sp, const struct kvm_memory_slot *slot, unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch, bool can_unsync, - bool host_writable, u64 *new_spte) + bool host_writable, bool is_refcounted, u64 *new_spte) { int level =3D sp->role.level; u64 spte =3D SPTE_MMU_PRESENT_MASK; @@ -188,6 +188,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_pa= ge *sp, =20 if (level > PG_LEVEL_4K) spte |=3D PT_PAGE_SIZE_MASK; + else if (is_refcounted) + spte |=3D SPTE_MMU_PAGE_REFCOUNTED; =20 if (shadow_memtype_mask) spte |=3D static_call(kvm_x86_get_mt_mask)(vcpu, gfn, diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index 1279db2eab44..be93dd061ae3 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -95,6 +95,11 @@ static_assert(!(EPT_SPTE_MMU_WRITABLE & SHADOW_ACC_TRACK= _SAVED_MASK)); /* Defined only to keep the above static asserts readable. */ #undef SHADOW_ACC_TRACK_SAVED_MASK =20 +/* + * Indicates that the SPTE refers to a page with a valid refcount. + */ +#define SPTE_MMU_PAGE_REFCOUNTED BIT_ULL(59) + /* * Due to limited space in PTEs, the MMIO generation is a 19 bit subset of * the memslots generation and is derived as follows: @@ -332,6 +337,11 @@ static inline bool is_dirty_spte(u64 spte) return dirty_mask ? 
spte & dirty_mask : spte & PT_WRITABLE_MASK; } =20 +static inline bool is_refcounted_page_pte(u64 spte) +{ + return spte & SPTE_MMU_PAGE_REFCOUNTED; +} + static inline u64 get_rsvd_bits(struct rsvd_bits_validate *rsvd_check, u64= pte, int level) { @@ -462,7 +472,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_pa= ge *sp, const struct kvm_memory_slot *slot, unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch, bool can_unsync, - bool host_writable, u64 *new_spte); + bool host_writable, bool is_refcounted, u64 *new_spte); u64 make_huge_page_split_spte(struct kvm *kvm, u64 huge_spte, union kvm_mmu_page_role role, int index); u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled); diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 512163d52194..a9b1b14d2e26 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -474,6 +474,7 @@ static void handle_changed_spte(struct kvm *kvm, int as= _id, gfn_t gfn, bool was_leaf =3D was_present && is_last_spte(old_spte, level); bool is_leaf =3D is_present && is_last_spte(new_spte, level); bool pfn_changed =3D spte_to_pfn(old_spte) !=3D spte_to_pfn(new_spte); + bool is_refcounted =3D is_refcounted_page_pte(old_spte); =20 WARN_ON(level > PT64_ROOT_MAX_LEVEL); WARN_ON(level < PG_LEVEL_4K); @@ -538,9 +539,9 @@ static void handle_changed_spte(struct kvm *kvm, int as= _id, gfn_t gfn, if (is_leaf !=3D was_leaf) kvm_update_page_stats(kvm, level, is_leaf ? 1 : -1); =20 - if (was_leaf && is_dirty_spte(old_spte) && + if (was_leaf && is_dirty_spte(old_spte) && is_refcounted && (!is_present || !is_dirty_spte(new_spte) || pfn_changed)) - kvm_set_pfn_dirty(spte_to_pfn(old_spte)); + kvm_set_page_dirty(pfn_to_page(spte_to_pfn(old_spte))); =20 /* * Recursively handle child PTs if the change removed a subtree from @@ -552,9 +553,9 @@ static void handle_changed_spte(struct kvm *kvm, int as= _id, gfn_t gfn, (is_leaf || !is_present || WARN_ON_ONCE(pfn_changed))) handle_removed_pt(kvm, spte_to_child_pt(old_spte, level), shared); =20 - if (was_leaf && is_accessed_spte(old_spte) && + if (was_leaf && is_accessed_spte(old_spte) && is_refcounted && (!is_present || !is_accessed_spte(new_spte) || pfn_changed)) - kvm_set_pfn_accessed(spte_to_pfn(old_spte)); + kvm_set_page_accessed(pfn_to_page(spte_to_pfn(old_spte))); } =20 /* @@ -988,8 +989,9 @@ static int tdp_mmu_map_handle_target_level(struct kvm_v= cpu *vcpu, new_spte =3D make_mmio_spte(vcpu, iter->gfn, ACC_ALL); else wrprot =3D make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn, - fault->pfn, iter->old_spte, fault->prefetch, true, - fault->map_writable, &new_spte); + fault->pfn, iter->old_spte, fault->prefetch, true, + fault->map_writable, fault->is_refcounted_page, + &new_spte); =20 if (new_spte =3D=3D iter->old_spte) ret =3D RET_PF_SPURIOUS; @@ -1205,8 +1207,9 @@ static bool age_gfn_range(struct kvm *kvm, struct tdp= _iter *iter, * Capture the dirty status of the page, so that it doesn't get * lost when the SPTE is marked for access tracking. 
         */
-        if (is_writable_pte(iter->old_spte))
-                kvm_set_pfn_dirty(spte_to_pfn(iter->old_spte));
+        if (is_writable_pte(iter->old_spte) &&
+            is_refcounted_page_pte(iter->old_spte))
+                kvm_set_page_dirty(pfn_to_page(spte_to_pfn(iter->old_spte)));
 
         new_spte = mark_spte_for_access_track(iter->old_spte);
         iter->old_spte = kvm_tdp_mmu_write_spte(iter->sptep,
@@ -1626,7 +1629,8 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root,
                 trace_kvm_tdp_mmu_spte_changed(iter.as_id, iter.gfn, iter.level,
                                                iter.old_spte,
                                                iter.old_spte & ~dbit);
-                kvm_set_pfn_dirty(spte_to_pfn(iter.old_spte));
+                if (is_refcounted_page_pte(iter.old_spte))
+                        kvm_set_page_dirty(pfn_to_page(spte_to_pfn(iter.old_spte)));
         }
 
         rcu_read_unlock();
-- 
2.41.0.255.g8b1d071c50-goog
From nobody Sat Feb 7 23:48:08 2026
From: David Stevens
X-Google-Original-From: David Stevens
To: Sean Christopherson
Cc: Marc Zyngier, Michael Ellerman, Peter Xu,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org, David Stevens
Subject: [PATCH v7 6/8] KVM: arm64: Migrate to __kvm_follow_pfn
Date: Tue, 4 Jul 2023 16:50:51 +0900
Message-ID: <20230704075054.3344915-7-stevensd@google.com>
In-Reply-To: <20230704075054.3344915-1-stevensd@google.com>
References: <20230704075054.3344915-1-stevensd@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: David Stevens

Migrate from __gfn_to_pfn_memslot to __kvm_follow_pfn.

Signed-off-by: David Stevens
---
 arch/arm64/kvm/mmu.c | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 6db9ef288ec3..c706530d304d 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1334,7 +1334,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  unsigned long fault_status)
 {
 	int ret = 0;
-	bool write_fault, writable, force_pte = false;
+	bool write_fault = kvm_is_write_fault(vcpu);
+	bool force_pte = false;
 	bool exec_fault, mte_allowed;
 	bool device = false;
 	unsigned long mmu_seq;
@@ -1342,16 +1343,19 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
 	struct vm_area_struct *vma;
 	short vma_shift;
-	gfn_t gfn;
 	kvm_pfn_t pfn;
 	bool logging_active = memslot_is_logging(memslot);
 	unsigned long fault_level = kvm_vcpu_trap_get_fault_level(vcpu);
 	long vma_pagesize, fault_granule;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
+	struct kvm_follow_pfn foll = {
+		.slot = memslot,
+		.flags = FOLL_GET | (write_fault ? FOLL_WRITE : 0),
+		.allow_write_mapping = true,
+	};
 
 	fault_granule = 1UL << ARM64_HW_PGTABLE_LEVEL_SHIFT(fault_level);
-	write_fault = kvm_is_write_fault(vcpu);
 	exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu);
 	VM_BUG_ON(write_fault && exec_fault);
 
@@ -1425,7 +1429,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE)
 		fault_ipa &= ~(vma_pagesize - 1);
 
-	gfn = fault_ipa >> PAGE_SHIFT;
+	foll.gfn = fault_ipa >> PAGE_SHIFT;
 	mte_allowed = kvm_vma_mte_allowed(vma);
 
 	/* Don't use the VMA after the unlock -- it may have vanished */
@@ -1433,7 +1437,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 	/*
 	 * Read mmu_invalidate_seq so that KVM can detect if the results of
-	 * vma_lookup() or __gfn_to_pfn_memslot() become stale prior to
+	 * vma_lookup() or __kvm_follow_pfn() become stale prior to
 	 * acquiring kvm->mmu_lock.
 	 *
 	 * Rely on mmap_read_unlock() for an implicit smp_rmb(), which pairs
@@ -1442,8 +1446,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	mmu_seq = vcpu->kvm->mmu_invalidate_seq;
 	mmap_read_unlock(current->mm);
 
-	pfn = __gfn_to_pfn_memslot(memslot, gfn, false, false, NULL,
-				   write_fault, &writable, NULL);
+	pfn = __kvm_follow_pfn(&foll);
 	if (pfn == KVM_PFN_ERR_HWPOISON) {
 		kvm_send_hwpoison_signal(hva, vma_shift);
 		return 0;
@@ -1468,7 +1471,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		 * Only actually map the page as writable if this was a write
 		 * fault.
 		 */
-		writable = false;
+		foll.writable = false;
 	}
 
 	if (exec_fault && device)
@@ -1508,7 +1511,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		}
 	}
 
-	if (writable)
+	if (foll.writable)
 		prot |= KVM_PGTABLE_PROT_W;
 
 	if (exec_fault)
@@ -1534,9 +1537,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 					     KVM_PGTABLE_WALK_SHARED);
 
 	/* Mark the page dirty only if the fault is handled successfully */
-	if (writable && !ret) {
+	if (foll.writable && !ret) {
 		kvm_set_pfn_dirty(pfn);
-		mark_page_dirty_in_slot(kvm, memslot, gfn);
+		mark_page_dirty_in_slot(kvm, memslot, foll.gfn);
 	}
 
 out_unlock:
-- 
2.41.0.255.g8b1d071c50-goog
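The arm64 conversion above is representative of how each fault handler in the
series is reworked: the separate gfn, write_fault and &writable arguments to
__gfn_to_pfn_memslot() are folded into a struct kvm_follow_pfn on the stack,
and the write-permission result is read back out of foll.writable. A condensed
sketch of the new call shape, under the assumption that the hwpoison, device
and exec-fault handling of the real user_mem_abort() can be left out (field
and flag names are the ones this series defines; everything else here is
illustrative):

	struct kvm_follow_pfn foll = {
		.slot = memslot,
		.gfn = fault_ipa >> PAGE_SHIFT,
		.flags = FOLL_GET | (write_fault ? FOLL_WRITE : 0),
		.allow_write_mapping = true,
	};
	kvm_pfn_t pfn;

	pfn = __kvm_follow_pfn(&foll);
	/* error pfns (hwpoison, faults) are handled as before, not shown */
	if (foll.writable) {	/* replaces the old bool *writable out-param */
		prot |= KVM_PGTABLE_PROT_W;
		/* in the real code this runs only after the map succeeds */
		mark_page_dirty_in_slot(kvm, memslot, foll.gfn);
	}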
From nobody Sat Feb 7 23:48:08 2026
From: David Stevens
X-Google-Original-From: David Stevens
To: Sean Christopherson
Cc: Marc Zyngier, Michael Ellerman, Peter Xu,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org, David Stevens
Subject: [PATCH v7 7/8] KVM: PPC: Migrate to __kvm_follow_pfn
Date: Tue, 4 Jul 2023 16:50:52 +0900
Message-ID: <20230704075054.3344915-8-stevensd@google.com>
In-Reply-To: <20230704075054.3344915-1-stevensd@google.com>
References: <20230704075054.3344915-1-stevensd@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: David Stevens

Migrate from __gfn_to_pfn_memslot to __kvm_follow_pfn. As part of the
refactoring, remove the now-redundant calls to get_user_page_fast_only(),
since the check for !async && !atomic was removed from the KVM generic
code in commit b9b33da2aa74. Also remove the kvm_ro parameter, because
the KVM generic code now handles read-only memslots.

Signed-off-by: David Stevens
---
I have checked that this patch compiles, but I don't have the hardware
to test it myself.
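The shape of the conversion in both book3s fault paths below is the same: the
open-coded get_user_page_fast_only() fast path plus the __gfn_to_pfn_memslot()
fallback collapse into a single __kvm_follow_pfn() call, with the "did we get
write access" answer read back from foll.writable instead of a bool pointer.
A condensed sketch of that pattern, assuming the surrounding fault handling
can be elided (variable names follow the diff below; nothing here is new API
beyond what this series introduces):

	struct kvm_follow_pfn foll = {
		.slot = memslot,
		.gfn = gfn,
		.flags = FOLL_GET | (writing ? FOLL_WRITE : 0),
		.allow_write_mapping = true,
	};
	struct page *page = NULL;
	kvm_pfn_t pfn;

	pfn = __kvm_follow_pfn(&foll);
	if (is_error_noslot_pfn(pfn))
		return -EFAULT;
	write_ok = foll.writable;	/* was: the &write_ok out-parameter */
	hva = foll.hva;			/* was: gfn_to_hva_memslot() */
	if (pfn_valid(pfn)) {
		page = pfn_to_page(pfn);
		if (PageReserved(page))
			page = NULL;
	}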
 arch/powerpc/include/asm/kvm_book3s.h  |  2 +-
 arch/powerpc/kvm/book3s_64_mmu_hv.c    | 38 +++++++++-----------
 arch/powerpc/kvm/book3s_64_mmu_radix.c | 50 +++++++++++---------------
 arch/powerpc/kvm/book3s_hv_nested.c    |  4 +--
 4 files changed, 38 insertions(+), 56 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index bbf5e2c5fe09..bf48c511e700 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -202,7 +202,7 @@ extern bool kvmppc_hv_handle_set_rc(struct kvm *kvm, bool nested,
 extern int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
 			unsigned long gpa,
 			struct kvm_memory_slot *memslot,
-			bool writing, bool kvm_ro,
+			bool writing,
 			pte_t *inserted_pte, unsigned int *levelp);
 extern int kvmppc_init_vm_radix(struct kvm *kvm);
 extern void kvmppc_free_radix(struct kvm *kvm);
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 7f765d5ad436..9a4715e73937 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -523,6 +523,9 @@ int kvmppc_book3s_hv_page_fault(struct kvm_vcpu *vcpu,
 	unsigned long rcbits;
 	long mmio_update;
 	pte_t pte, *ptep;
+	struct kvm_follow_pfn foll = {
+		.allow_write_mapping = true,
+	};
 
 	if (kvm_is_radix(kvm))
 		return kvmppc_book3s_radix_page_fault(vcpu, ea, dsisr);
@@ -599,29 +602,20 @@ int kvmppc_book3s_hv_page_fault(struct kvm_vcpu *vcpu,
 	page = NULL;
 	writing = (dsisr & DSISR_ISSTORE) != 0;
 	/* If writing != 0, then the HPTE must allow writing, if we get here */
-	write_ok = writing;
-	hva = gfn_to_hva_memslot(memslot, gfn);
 
-	/*
-	 * Do a fast check first, since __gfn_to_pfn_memslot doesn't
-	 * do it with !atomic && !async, which is how we call it.
-	 * We always ask for write permission since the common case
-	 * is that the page is writable.
-	 */
-	if (get_user_page_fast_only(hva, FOLL_WRITE, &page)) {
-		write_ok = true;
-	} else {
-		/* Call KVM generic code to do the slow-path check */
-		pfn = __gfn_to_pfn_memslot(memslot, gfn, false, false, NULL,
-					   writing, &write_ok, NULL);
-		if (is_error_noslot_pfn(pfn))
-			return -EFAULT;
-		page = NULL;
-		if (pfn_valid(pfn)) {
-			page = pfn_to_page(pfn);
-			if (PageReserved(page))
-				page = NULL;
-		}
+	foll.slot = memslot;
+	foll.gfn = gfn;
+	foll.flags = FOLL_GET | (writing ? FOLL_WRITE : 0);
+	pfn = __kvm_follow_pfn(&foll);
+	if (is_error_noslot_pfn(pfn))
+		return -EFAULT;
+	page = NULL;
+	write_ok = foll.writable;
+	hva = foll.hva;
+	if (pfn_valid(pfn)) {
+		page = pfn_to_page(pfn);
+		if (PageReserved(page))
+			page = NULL;
 	}
 
 	/*
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 461307b89c3a..339d1efcb6c9 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -815,47 +815,39 @@ bool kvmppc_hv_handle_set_rc(struct kvm *kvm, bool nested, bool writing,
 int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
 				   unsigned long gpa,
 				   struct kvm_memory_slot *memslot,
-				   bool writing, bool kvm_ro,
+				   bool writing,
 				   pte_t *inserted_pte, unsigned int *levelp)
 {
 	struct kvm *kvm = vcpu->kvm;
 	struct page *page = NULL;
 	unsigned long mmu_seq;
-	unsigned long hva, gfn = gpa >> PAGE_SHIFT;
-	bool upgrade_write = false;
-	bool *upgrade_p = &upgrade_write;
+	unsigned long hva, pfn, gfn = gpa >> PAGE_SHIFT;
+	bool upgrade_write;
 	pte_t pte, *ptep;
 	unsigned int shift, level;
 	int ret;
 	bool large_enable;
+	struct kvm_follow_pfn foll = {
+		.slot = memslot,
+		.gfn = gfn,
+		.flags = FOLL_GET | (writing ? FOLL_WRITE : 0),
+		.allow_write_mapping = true,
+	};
 
 	/* used to check for invalidations in progress */
 	mmu_seq = kvm->mmu_invalidate_seq;
 	smp_rmb();
 
-	/*
-	 * Do a fast check first, since __gfn_to_pfn_memslot doesn't
-	 * do it with !atomic && !async, which is how we call it.
-	 * We always ask for write permission since the common case
-	 * is that the page is writable.
-	 */
-	hva = gfn_to_hva_memslot(memslot, gfn);
-	if (!kvm_ro && get_user_page_fast_only(hva, FOLL_WRITE, &page)) {
-		upgrade_write = true;
-	} else {
-		unsigned long pfn;
-
-		/* Call KVM generic code to do the slow-path check */
-		pfn = __gfn_to_pfn_memslot(memslot, gfn, false, false, NULL,
-					   writing, upgrade_p, NULL);
-		if (is_error_noslot_pfn(pfn))
-			return -EFAULT;
-		page = NULL;
-		if (pfn_valid(pfn)) {
-			page = pfn_to_page(pfn);
-			if (PageReserved(page))
-				page = NULL;
-		}
+	pfn = __kvm_follow_pfn(&foll);
+	if (is_error_noslot_pfn(pfn))
+		return -EFAULT;
+	page = NULL;
+	hva = foll.hva;
+	upgrade_write = foll.writable;
+	if (pfn_valid(pfn)) {
+		page = pfn_to_page(pfn);
+		if (PageReserved(page))
+			page = NULL;
 	}
 
 	/*
@@ -944,7 +936,6 @@ int kvmppc_book3s_radix_page_fault(struct kvm_vcpu *vcpu,
 	struct kvm_memory_slot *memslot;
 	long ret;
 	bool writing = !!(dsisr & DSISR_ISSTORE);
-	bool kvm_ro = false;
 
 	/* Check for unusual errors */
 	if (dsisr & DSISR_UNSUPP_MMU) {
@@ -997,7 +988,6 @@ int kvmppc_book3s_radix_page_fault(struct kvm_vcpu *vcpu,
 					      ea, DSISR_ISSTORE | DSISR_PROTFAULT);
 			return RESUME_GUEST;
 		}
-		kvm_ro = true;
 	}
 
 	/* Failed to set the reference/change bits */
@@ -1015,7 +1005,7 @@ int kvmppc_book3s_radix_page_fault(struct kvm_vcpu *vcpu,
 
 	/* Try to insert a pte */
 	ret = kvmppc_book3s_instantiate_page(vcpu, gpa, memslot, writing,
-					     kvm_ro, NULL, NULL);
+					     NULL, NULL);
 
 	if (ret == 0 || ret == -EAGAIN)
 		ret = RESUME_GUEST;
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 377d0b4a05ee..6d531051df04 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -1497,7 +1497,6 @@ static long int __kvmhv_nested_page_fault(struct kvm_vcpu *vcpu,
 	unsigned long n_gpa, gpa, gfn, perm = 0UL;
 	unsigned int shift, l1_shift, level;
 	bool writing = !!(dsisr & DSISR_ISSTORE);
-	bool kvm_ro = false;
 	long int ret;
 
 	if (!gp->l1_gr_to_hr) {
@@ -1577,7 +1576,6 @@ static long int __kvmhv_nested_page_fault(struct kvm_vcpu *vcpu,
 					      ea, DSISR_ISSTORE | DSISR_PROTFAULT);
 			return RESUME_GUEST;
 		}
-		kvm_ro = true;
 	}
 
 	/* 2. Find the host pte for this L1 guest real address */
@@ -1599,7 +1597,7 @@ static long int __kvmhv_nested_page_fault(struct kvm_vcpu *vcpu,
 	if (!pte_present(pte) || (writing && !(pte_val(pte) & _PAGE_WRITE))) {
 		/* No suitable pte found -> try to insert a mapping */
 		ret = kvmppc_book3s_instantiate_page(vcpu, gpa, memslot,
-					writing, kvm_ro, &pte, &level);
+					writing, &pte, &level);
 		if (ret == -EAGAIN)
 			return RESUME_GUEST;
 		else if (ret)
-- 
2.41.0.255.g8b1d071c50-goog
From nobody Sat Feb 7 23:48:08 2026
From: David Stevens
X-Google-Original-From: David Stevens
To: Sean Christopherson
Cc: Marc Zyngier, Michael Ellerman, Peter Xu,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org, David Stevens
Subject: [PATCH v7 8/8] KVM: remove __gfn_to_pfn_memslot
Date: Tue, 4 Jul 2023 16:50:53 +0900
Message-ID: <20230704075054.3344915-9-stevensd@google.com>
In-Reply-To: <20230704075054.3344915-1-stevensd@google.com>
References: <20230704075054.3344915-1-stevensd@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: David Stevens

All callers have been migrated to __kvm_follow_pfn.

Signed-off-by: David Stevens
---
 virt/kvm/kvm_main.c | 33 ---------------------------------
 1 file changed, 33 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 0f7b41f220b6..5b5afd70f239 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2712,39 +2712,6 @@ kvm_pfn_t __kvm_follow_pfn(struct kvm_follow_pfn *foll)
 }
 EXPORT_SYMBOL_GPL(__kvm_follow_pfn);
 
-kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
-			       bool atomic, bool interruptible, bool *async,
-			       bool write_fault, bool *writable, hva_t *hva)
-{
-	kvm_pfn_t pfn;
-	struct kvm_follow_pfn foll = {
-		.slot = slot,
-		.gfn = gfn,
-		.flags = FOLL_GET,
-		.atomic = atomic,
-		.allow_write_mapping = !!writable,
-	};
-
-	if (write_fault)
-		foll.flags |= FOLL_WRITE;
-	if (async)
-		foll.flags |= FOLL_NOWAIT;
-	if (interruptible)
-		foll.flags |= FOLL_INTERRUPTIBLE;
-
-	pfn = __kvm_follow_pfn(&foll);
-	if (pfn == KVM_PFN_ERR_NEEDS_IO) {
-		*async = true;
-		pfn = KVM_PFN_ERR_FAULT;
-	}
-	if (hva)
-		*hva = foll.hva;
-	if (writable)
-		*writable = foll.writable;
-	return pfn;
-}
-EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot);
-
 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 			  bool *writable)
 {
-- 
2.41.0.255.g8b1d071c50-goog
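With the wrapper gone, the translation it used to perform remains a useful
reference for converting any remaining caller. The sketch below simply
restates what the deleted function did, so every name is either one of the
old __gfn_to_pfn_memslot() parameters or a field of this series' struct
kvm_follow_pfn; nothing here is new API:

	struct kvm_follow_pfn foll = {
		.slot = slot,
		.gfn = gfn,
		.flags = FOLL_GET,
		.atomic = atomic,			/* old "atomic" argument */
		.allow_write_mapping = !!writable,	/* old "writable" out-param */
	};

	if (write_fault)
		foll.flags |= FOLL_WRITE;
	if (async)
		foll.flags |= FOLL_NOWAIT;		/* old "async" behaviour */
	if (interruptible)
		foll.flags |= FOLL_INTERRUPTIBLE;

	pfn = __kvm_follow_pfn(&foll);
	/*
	 * KVM_PFN_ERR_NEEDS_IO now reports what "*async = true" used to:
	 * the page needs I/O to be faulted in and FOLL_NOWAIT prevented
	 * waiting for it, so the caller must complete the fault elsewhere.
	 * foll.hva and foll.writable replace the hva/writable out-params.
	 */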