From nobody Thu Sep 19 01:05:53 2024
Reply-To: Sean Christopherson
Date: Fri, 26 Jul 2024 16:51:21 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
References: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-13-seanjc@google.com>
Subject: [PATCH v12 12/84] KVM: Drop @atomic param from gfn=>pfn and hva=>pfn APIs
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
	Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank,
	Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
	linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, David Matlack, David Stevens

Drop @atomic from the myriad "to_pfn" APIs now that all callers pass
"false".

No functional change intended.

Signed-off-by: Sean Christopherson
Reviewed-by: Alex Bennée
---
 Documentation/virt/kvm/locking.rst     |  4 +--
 arch/arm64/kvm/mmu.c                   |  2 +-
 arch/powerpc/kvm/book3s_64_mmu_hv.c    |  2 +-
 arch/powerpc/kvm/book3s_64_mmu_radix.c |  2 +-
 arch/x86/kvm/mmu/mmu.c                 | 12 ++++-----
 include/linux/kvm_host.h               |  4 +--
 virt/kvm/kvm_main.c                    | 36 +++++---------------------
 virt/kvm/kvm_mm.h                      |  4 +--
 virt/kvm/pfncache.c                    |  2 +-
 9 files changed, 22 insertions(+), 46 deletions(-)

diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
index 8b3bb9fe60bf..9af511e7aa53 100644
--- a/Documentation/virt/kvm/locking.rst
+++ b/Documentation/virt/kvm/locking.rst
@@ -126,8 +126,8 @@ We dirty-log for gfn1, that means gfn2 is lost in dirty-bitmap.
 For direct sp, we can easily avoid it since the spte of direct sp is fixed
 to gfn.  For indirect sp, we disabled fast page fault for simplicity.
 
-A solution for indirect sp could be to pin the gfn, for example via
-kvm_vcpu_gfn_to_pfn_atomic, before the cmpxchg.  After the pinning:
+A solution for indirect sp could be to pin the gfn before the cmpxchg.  After
+the pinning:
 
 - We have held the refcount of pfn; that means the pfn can not be freed and
   be reused for another gfn.
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 6981b1bc0946..30dd62f56a11 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1562,7 +1562,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	mmu_seq = vcpu->kvm->mmu_invalidate_seq;
 	mmap_read_unlock(current->mm);
 
-	pfn = __gfn_to_pfn_memslot(memslot, gfn, false, false, NULL,
+	pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
 				   write_fault, &writable, NULL);
 	if (pfn == KVM_PFN_ERR_HWPOISON) {
 		kvm_send_hwpoison_signal(hva, vma_shift);
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 1b51b1c4713b..8cd02ca4b1b8 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -613,7 +613,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_vcpu *vcpu,
 			write_ok = true;
 		} else {
 			/* Call KVM generic code to do the slow-path check */
-			pfn = __gfn_to_pfn_memslot(memslot, gfn, false, false, NULL,
+			pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
 						   writing, &write_ok, NULL);
 			if (is_error_noslot_pfn(pfn))
 				return -EFAULT;
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 408d98f8a514..26a969e935e3 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -852,7 +852,7 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
 	unsigned long pfn;
 
 	/* Call KVM generic code to do the slow-path check */
-	pfn = __gfn_to_pfn_memslot(memslot, gfn, false, false, NULL,
+	pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
 				   writing, upgrade_p, NULL);
 	if (is_error_noslot_pfn(pfn))
 		return -EFAULT;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c1914f02c5e1..d76390ef49b2 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4334,9 +4334,9 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 		return kvm_faultin_pfn_private(vcpu, fault);
 
 	async = false;
-	fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, false, false,
-					  &async, fault->write,
-					  &fault->map_writable, &fault->hva);
+	fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, false, &async,
+					  fault->write, &fault->map_writable,
+					  &fault->hva);
 	if (!async)
 		return RET_PF_CONTINUE; /* *pfn has correct page already */
 
@@ -4356,9 +4356,9 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	 * to wait for IO.  Note, gup always bails if it is unable to quickly
 	 * get a page and a fatal signal, i.e. SIGKILL, is pending.
 	 */
-	fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, false, true,
-					  NULL, fault->write,
-					  &fault->map_writable, &fault->hva);
+	fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, true, NULL,
+					  fault->write, &fault->map_writable,
+					  &fault->hva);
 	return RET_PF_CONTINUE;
 }
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 79fed9fea638..6d4503e8eabe 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1217,9 +1217,8 @@ kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn);
 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 			  bool *writable);
 kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn);
-kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gfn_t gfn);
 kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
-			       bool atomic, bool interruptible, bool *async,
+			       bool interruptible, bool *async,
 			       bool write_fault, bool *writable, hva_t *hva);
 
 void kvm_release_pfn_clean(kvm_pfn_t pfn);
@@ -1300,7 +1299,6 @@ void mark_page_dirty(struct kvm *kvm, gfn_t gfn);
 
 struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu);
 struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn);
-kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn);
 kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn);
 int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map);
 void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 803299778cf8..84c73b4fc804 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2929,7 +2929,6 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 /*
  * Pin guest page in memory and return its pfn.
  * @addr: host virtual address which maps memory to the guest
- * @atomic: whether this function is forbidden from sleeping
  * @interruptible: whether the process can be interrupted by non-fatal signals
  * @async: whether this function need to wait IO complete if the
  *         host page is not in the memory
@@ -2941,22 +2940,16 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 * 2): @write_fault = false && @writable, @writable will tell the caller
 *     whether the mapping is writable.
 */
-kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible,
-		     bool *async, bool write_fault, bool *writable)
+kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool *async,
+		     bool write_fault, bool *writable)
 {
 	struct vm_area_struct *vma;
 	kvm_pfn_t pfn;
 	int npages, r;
 
-	/* we can do it either atomically or asynchronously, not both */
-	BUG_ON(atomic && async);
-
 	if (hva_to_pfn_fast(addr, write_fault, writable, &pfn))
 		return pfn;
 
-	if (atomic)
-		return KVM_PFN_ERR_FAULT;
-
 	npages = hva_to_pfn_slow(addr, async, write_fault, interruptible,
 				 writable, &pfn);
 	if (npages == 1)
@@ -2993,7 +2986,7 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible,
 }
 
 kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
-			       bool atomic, bool interruptible, bool *async,
+			       bool interruptible, bool *async,
 			       bool write_fault, bool *writable, hva_t *hva)
 {
 	unsigned long addr = __gfn_to_hva_many(slot, gfn, NULL, write_fault);
@@ -3015,39 +3008,24 @@ kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
 		writable = NULL;
 	}
 
-	return hva_to_pfn(addr, atomic, interruptible, async, write_fault,
-			  writable);
+	return hva_to_pfn(addr, interruptible, async, write_fault, writable);
 }
 EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot);
 
 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 			  bool *writable)
 {
-	return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, false,
-				    NULL, write_fault, writable, NULL);
+	return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, NULL,
+				    write_fault, writable, NULL);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_prot);
 
 kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	return __gfn_to_pfn_memslot(slot, gfn, false, false, NULL, true,
-				    NULL, NULL);
+	return __gfn_to_pfn_memslot(slot, gfn, false, NULL, true, NULL, NULL);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot);
 
-kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gfn_t gfn)
-{
-	return __gfn_to_pfn_memslot(slot, gfn, true, false, NULL, true,
-				    NULL, NULL);
-}
-EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot_atomic);
-
-kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn)
-{
-	return gfn_to_pfn_memslot_atomic(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn);
-}
-EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn_atomic);
-
 kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn)
 {
 	return gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn);
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index 715f19669d01..a3fa86f60d6c 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -20,8 +20,8 @@
 #define KVM_MMU_UNLOCK(kvm)		spin_unlock(&(kvm)->mmu_lock)
 #endif /* KVM_HAVE_MMU_RWLOCK */
 
-kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible,
-		     bool *async, bool write_fault, bool *writable);
+kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool *async,
+		     bool write_fault, bool *writable);
 
 #ifdef CONFIG_HAVE_KVM_PFNCACHE
 void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index f0039efb9e1e..58c706a610e5 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -198,7 +198,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 	}
 
 	/* We always request a writeable mapping */
-	new_pfn = hva_to_pfn(gpc->uhva, false, false, NULL, true, NULL);
+	new_pfn = hva_to_pfn(gpc->uhva, false, NULL, true, NULL);
 	if (is_error_noslot_pfn(new_pfn))
 		goto out_error;
 
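
For illustration (not part of the patch): a minimal sketch of a synchronous
caller under the new signature. The surrounding variables (slot, gfn,
writable) are hypothetical; the parameter order follows the post-patch
declaration of __gfn_to_pfn_memslot() above, where dropping @atomic leaves
only @interruptible plus the optional out-params.

	kvm_pfn_t pfn;
	bool writable;

	/*
	 * Blocking lookup: no @atomic to pass anymore; NULL @async means
	 * wait for I/O instead of bailing, NULL @hva means the caller
	 * doesn't need the host virtual address back.
	 */
	pfn = __gfn_to_pfn_memslot(slot, gfn, false /* interruptible */,
				   NULL /* async */, true /* write_fault */,
				   &writable, NULL /* hva */);
	if (is_error_noslot_pfn(pfn))
		return -EFAULT;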
-- 
2.46.0.rc1.232.g9752f9e123-goog