From: Chao Peng <chao.p.peng@linux.intel.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, qemu-devel@nongnu.org
Subject: [PATCH v4 09/12] KVM: Handle page fault for private memory
Date: Tue, 18 Jan 2022 21:21:18 +0800
Message-Id: <20220118132121.31388-10-chao.p.peng@linux.intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220118132121.31388-1-chao.p.peng@linux.intel.com>
References: <20220118132121.31388-1-chao.p.peng@linux.intel.com>
Cc: Wanpeng Li,
	luto@kernel.org, david@redhat.com, "J . Bruce Fields", dave.hansen@intel.com,
	"H . Peter Anvin", Chao Peng, ak@linux.intel.com, Jonathan Corbet,
	Joerg Roedel, x86@kernel.org, Hugh Dickins, Ingo Molnar, Borislav Petkov,
	jun.nakajima@intel.com, Thomas Gleixner, Vitaly Kuznetsov, Jim Mattson,
	Sean Christopherson, Jeff Layton, Yu Zhang, Paolo Bonzini, Andrew Morton,
	"Kirill A . Shutemov"

When a page fault happens for a memslot with KVM_MEM_PRIVATE, KVM calls
kvm_memfile_get_pfn(), which in turn calls into the memfile_pfn_ops
callbacks defined for each memslot, to request the pfn from the memory
backing store.

One assumption is that private pages are persistent and pre-allocated in
the private memory fd (the backing store), so KVM uses this information
as the indicator of whether a page is private or shared (i.e. the private
fd is the final source of truth as to whether or not a GPA is private).

Depending on whether the access is private or shared, KVM takes different
paths:
  - For private access, KVM checks whether the page is already allocated
    in the memory backing store. If it is, KVM establishes the mapping;
    otherwise it exits to userspace to convert the shared page to a
    private one.
  - For shared access, KVM likewise checks whether the page is already
    allocated in the memory backing store. If it is, KVM exits to
    userspace to convert the private page to a shared one; otherwise the
    access is treated as traditional hva-based shared memory, and KVM
    lets the existing code obtain a pfn with get_user_pages() and
    establish the mapping.

Signed-off-by: Yu Zhang
Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
---
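Note (illustration only, not part of the patch): the routing policy in
the commit message condenses to the standalone sketch below, where
"allocated" means kvm_memfile_get_pfn() found the gfn in the private fd.
Any mismatch between the access type and the backing-store state bounces
to userspace for conversion; a match lets KVM map the page itself.

	#include <stdbool.h>

	/*
	 * Sketch of the fault-routing policy described above
	 * (illustrative only): returns true if KVM can establish the
	 * mapping itself, false if it must exit to userspace for a
	 * shared <-> private conversion.
	 */
	static bool can_map_without_exit(bool private_access,
					 bool allocated_in_private_fd)
	{
		if (private_access)
			/* page missing: userspace converts shared -> private */
			return allocated_in_private_fd;

		/* page still private: userspace converts private -> shared */
		return !allocated_in_private_fd;
	}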
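Similarly illustrative: userspace consumes the KVM_EXIT_MEMORY_ERROR exit
that kvm_faultin_pfn_private() fills in below. A minimal sketch of the
VMM side, assuming this run-loop framing; handle_conversion_to_private()
and handle_conversion_to_shared() are hypothetical stand-ins for whatever
conversion mechanism the VMM implements, the flag value is illustrative,
and the struct simply mirrors the vcpu->run->memory fields set by the
patch.

	#include <stdint.h>

	#define KVM_MEMORY_EXIT_FLAG_PRIVATE	(1u << 0)	/* value illustrative */

	/* Mirrors the vcpu->run->memory fields set by this patch. */
	struct memory_exit_info {
		uint32_t flags;		/* KVM_MEMORY_EXIT_FLAG_PRIVATE or 0 */
		uint32_t padding;
		uint64_t gpa;		/* faulting guest physical address */
		uint64_t size;		/* PAGE_SIZE in this patch */
	};

	/* Hypothetical VMM conversion hooks. */
	void handle_conversion_to_private(uint64_t gpa, uint64_t size);
	void handle_conversion_to_shared(uint64_t gpa, uint64_t size);

	/* Dispatch one KVM_EXIT_MEMORY_ERROR exit. */
	static void handle_memory_exit(const struct memory_exit_info *mem)
	{
		if (mem->flags & KVM_MEMORY_EXIT_FLAG_PRIVATE)
			/* private access, page not yet in the private fd */
			handle_conversion_to_private(mem->gpa, mem->size);
		else
			/* shared access, page still allocated as private */
			handle_conversion_to_shared(mem->gpa, mem->size);
	}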
 arch/x86/kvm/mmu/mmu.c         | 73 ++++++++++++++++++++++++++++++++--
 arch/x86/kvm/mmu/paging_tmpl.h | 11 +++--
 2 files changed, 77 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1d275e9d76b5..df526ab7e657 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2873,6 +2873,9 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;
 
+	if (kvm_slot_is_private(slot))
+		return max_level;
+
 	host_level = host_pfn_mapping_level(kvm, gfn, pfn, slot);
 	return min(host_level, max_level);
 }
@@ -3903,7 +3906,59 @@ static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 				  kvm_vcpu_gfn_to_hva(vcpu, gfn), &arch);
 }
 
-static bool kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault, int *r)
+static bool kvm_vcpu_is_private_gfn(struct kvm_vcpu *vcpu, gfn_t gfn)
+{
+	/*
+	 * At this time private gfn has not been supported yet. Other patch
+	 * that enables it should change this.
+	 */
+	return false;
+}
+
+static bool kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
+				    struct kvm_page_fault *fault,
+				    bool *is_private_pfn, int *r)
+{
+	int order;
+	unsigned int flags = 0;
+	struct kvm_memory_slot *slot = fault->slot;
+	long pfn = kvm_memfile_get_pfn(slot, fault->gfn, &order);
+
+	if (kvm_vcpu_is_private_gfn(vcpu, fault->addr >> PAGE_SHIFT)) {
+		if (pfn < 0)
+			flags |= KVM_MEMORY_EXIT_FLAG_PRIVATE;
+		else {
+			fault->pfn = pfn;
+			if (slot->flags & KVM_MEM_READONLY)
+				fault->map_writable = false;
+			else
+				fault->map_writable = true;
+
+			if (order == 0)
+				fault->max_level = PG_LEVEL_4K;
+			*is_private_pfn = true;
+			*r = RET_PF_FIXED;
+			return true;
+		}
+	} else {
+		if (pfn < 0)
+			return false;
+
+		kvm_memfile_put_pfn(slot, pfn);
+	}
+
+	vcpu->run->exit_reason = KVM_EXIT_MEMORY_ERROR;
+	vcpu->run->memory.flags = flags;
+	vcpu->run->memory.padding = 0;
+	vcpu->run->memory.gpa = fault->gfn << PAGE_SHIFT;
+	vcpu->run->memory.size = PAGE_SIZE;
+	fault->pfn = -1;
+	*r = -1;
+	return true;
+}
+
+static bool kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
+			    bool *is_private_pfn, int *r)
 {
 	struct kvm_memory_slot *slot = fault->slot;
 	bool async;
@@ -3937,6 +3992,10 @@ static bool kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 		}
 	}
 
+	if (kvm_slot_is_private(slot) &&
+	    kvm_faultin_pfn_private(vcpu, fault, is_private_pfn, r))
+		return *r == RET_PF_FIXED ? false : true;
+
 	async = false;
 	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, &async,
 					  fault->write, &fault->map_writable,
@@ -3997,6 +4056,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu);
 
 	unsigned long mmu_seq;
+	bool is_private_pfn = false;
 	int r;
 
 	fault->gfn = fault->addr >> PAGE_SHIFT;
@@ -4016,7 +4076,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	mmu_seq = vcpu->kvm->mmu_notifier_seq;
 	smp_rmb();
 
-	if (kvm_faultin_pfn(vcpu, fault, &r))
+	if (kvm_faultin_pfn(vcpu, fault, &is_private_pfn, &r))
 		return r;
 
 	if (handle_abnormal_pfn(vcpu, fault, ACC_ALL, &r))
@@ -4029,7 +4089,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	else
 		write_lock(&vcpu->kvm->mmu_lock);
 
-	if (is_page_fault_stale(vcpu, fault, mmu_seq))
+	if (!is_private_pfn && is_page_fault_stale(vcpu, fault, mmu_seq))
 		goto out_unlock;
 
 	r = make_mmu_pages_available(vcpu);
@@ -4046,7 +4106,12 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 		read_unlock(&vcpu->kvm->mmu_lock);
 	else
 		write_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(fault->pfn);
+
+	if (is_private_pfn)
+		kvm_memfile_put_pfn(fault->slot, fault->pfn);
+	else
+		kvm_release_pfn_clean(fault->pfn);
+
 	return r;
 }
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 5b5bdac97c7b..a1d26b50a5ec 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -825,6 +825,8 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	int r;
 	unsigned long mmu_seq;
 	bool is_self_change_mapping;
+	bool is_private_pfn = false;
+
 
 	pgprintk("%s: addr %lx err %x\n", __func__, fault->addr, fault->error_code);
 	WARN_ON_ONCE(fault->is_tdp);
@@ -873,7 +875,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	mmu_seq = vcpu->kvm->mmu_notifier_seq;
 	smp_rmb();
 
-	if (kvm_faultin_pfn(vcpu, fault, &r))
+	if (kvm_faultin_pfn(vcpu, fault, &is_private_pfn, &r))
 		return r;
 
 	if (handle_abnormal_pfn(vcpu, fault, walker.pte_access, &r))
@@ -901,7 +903,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	r = RET_PF_RETRY;
 	write_lock(&vcpu->kvm->mmu_lock);
 
-	if (is_page_fault_stale(vcpu, fault, mmu_seq))
+	if (!is_private_pfn && is_page_fault_stale(vcpu, fault, mmu_seq))
 		goto out_unlock;
 
 	kvm_mmu_audit(vcpu, AUDIT_PRE_PAGE_FAULT);
@@ -913,7 +915,10 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 
 out_unlock:
 	write_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(fault->pfn);
+	if (is_private_pfn)
+		kvm_memfile_put_pfn(fault->slot, fault->pfn);
+	else
+		kvm_release_pfn_clean(fault->pfn);
 	return r;
 }
 
-- 
2.17.1