From nobody Thu Apr 25 06:03:40 2024
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Date: Wed, 10 Jul 2019 17:17:33 +0100
Message-ID: <20190710161733.39119-1-paul.durrant@citrix.com>
X-Mailer: git-send-email 2.20.1.2.gb21ebb671
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH] xen/mm.h: add helper function to test-and-clear _PGC_allocated
List-Id: Xen developer discussion
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
 Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Paul Durrant,
 Tamas K Lengyel, Jan Beulich, Volodymyr Babchuk, Roger Pau Monné
Content-Type: text/plain; charset="utf-8"
Sender: "Xen-devel"

The _PGC_allocated flag is set on a page when it is assigned to a domain,
along with an initial reference count of 1. To drop this initial reference
it is necessary to test-and-clear _PGC_allocated and then only drop the
reference if the test-and-clear succeeds. This is open-coded in many
places. It is also unsafe to test-and-clear _PGC_allocated unless the
caller holds an additional reference.

This patch adds a helper function, clear_assignment_reference(), to
replace all of the open-coded test-and-clear/put_page occurrences, and
incorporates into it an ASSERT() that an additional page reference is
held.
Signed-off-by: Paul Durrant
---
Cc: Stefano Stabellini
Cc: Julien Grall
Cc: Volodymyr Babchuk
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Ian Jackson
Cc: Jan Beulich
Cc: Konrad Rzeszutek Wilk
Cc: Tim Deegan
Cc: Wei Liu
Cc: "Roger Pau Monné"
Cc: Tamas K Lengyel
Cc: George Dunlap
---
 xen/arch/arm/domain.c         |  4 +---
 xen/arch/x86/domain.c         |  3 +--
 xen/arch/x86/hvm/ioreq.c      | 11 ++---------
 xen/arch/x86/mm.c             |  3 +--
 xen/arch/x86/mm/mem_sharing.c |  9 +++------
 xen/arch/x86/mm/p2m-pod.c     |  4 +---
 xen/arch/x86/mm/p2m.c         |  3 +--
 xen/common/grant_table.c      |  3 +--
 xen/common/memory.c           |  5 ++---
 xen/common/xenoprof.c         |  3 +--
 xen/include/xen/mm.h          | 11 +++++++++++
 11 files changed, 25 insertions(+), 34 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 4f44d5c742..78700d6f08 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -926,9 +926,7 @@ static int relinquish_memory(struct domain *d, struct page_list_head *list)
              */
             continue;
 
-        if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
-            put_page(page);
-
+        clear_assignment_reference(page);
         put_page(page);
 
         if ( hypercall_preempt_check() )
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 147f96a09e..c8c51d5f76 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1939,8 +1939,7 @@ static int relinquish_memory(
             BUG();
         }
 
-        if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
-            put_page(page);
+        clear_assignment_reference(page);
 
         /*
          * Forcibly invalidate top-most, still valid page tables at this point
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 7a80cfb28b..129f9fddbc 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -398,8 +398,7 @@ static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
     return 0;
 
  fail:
-    if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
-        put_page(page);
+    clear_assignment_reference(page);
     put_page_and_type(page);
 
     return -ENOMEM;
@@ -418,13 +417,7 @@ static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
     unmap_domain_page_global(iorp->va);
     iorp->va = NULL;
 
-    /*
-     * Check whether we need to clear the allocation reference before
-     * dropping the explicit references taken by get_page_and_type().
-     */
-    if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
-        put_page(page);
-
+    clear_assignment_reference(page);
     put_page_and_type(page);
 }
 
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index df2c0130f1..9fe66a6d26 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -498,8 +498,7 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
 
 void free_shared_domheap_page(struct page_info *page)
 {
-    if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
-        put_page(page);
+    clear_assignment_reference(page);
     if ( !test_and_clear_bit(_PGC_xen_heap, &page->count_info) )
         ASSERT_UNREACHABLE();
     page->u.inuse.type_info = 0;
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index f16a3f5324..7a643aed53 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1000,8 +1000,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
     mem_sharing_page_unlock(firstpg);
 
     /* Free the client page */
-    if(test_and_clear_bit(_PGC_allocated, &cpage->count_info))
-        put_page(cpage);
+    clear_assignment_reference(cpage);
     put_page(cpage);
 
     /* We managed to free a domain page. */
@@ -1082,8 +1081,7 @@ int mem_sharing_add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle
                 ret = -EOVERFLOW;
                 goto err_unlock;
             }
-            if ( test_and_clear_bit(_PGC_allocated, &cpage->count_info) )
-                put_page(cpage);
+            clear_assignment_reference(cpage);
             put_page(cpage);
         }
     }
@@ -1177,8 +1175,7 @@ int __mem_sharing_unshare_page(struct domain *d,
             domain_crash(d);
             return -EOVERFLOW;
         }
-        if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
-            put_page(page);
+        clear_assignment_reference(page);
         put_page(page);
     }
     put_gfn(d, gfn);
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 4313863066..2e22764950 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -274,9 +274,7 @@ p2m_pod_set_cache_target(struct p2m_domain *p2m, unsigned long pod_target, int p
         if ( test_and_clear_bit(_PGT_pinned, &(page+i)->u.inuse.type_info) )
             put_page_and_type(page + i);
 
-        if ( test_and_clear_bit(_PGC_allocated, &(page+i)->count_info) )
-            put_page(page + i);
-
+        clear_assignment_reference(page + i);
         put_page(page + i);
 
         if ( preemptible && pod_target != p2m->pod.count &&
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 4c9954867c..ce6859d51b 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1609,8 +1609,7 @@ int p2m_mem_paging_evict(struct domain *d, unsigned long gfn_l)
         goto out_put;
 
     /* Decrement guest domain's ref count of the page */
-    if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
-        put_page(page);
+    clear_assignment_reference(page);
 
     /* Remove mapping from p2m table */
     ret = p2m_set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_4K,
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index e6a0f30a4b..5ae85e3dad 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1707,8 +1707,7 @@ gnttab_unpopulate_status_frames(struct domain *d, struct grant_table *gt)
     }
 
     BUG_ON(page_get_owner(pg) != d);
-    if ( test_and_clear_bit(_PGC_allocated, &pg->count_info) )
-        put_page(pg);
+    clear_assignment_reference(pg);
 
     if ( pg->count_info & ~PGC_xen_heap )
     {
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 03db7bfa9e..ab19a4ca86 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -388,9 +388,8 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
      * For this purpose (and to match populate_physmap() behavior), the page
      * is kept allocated.
      */
-    if ( !rc && !is_domain_direct_mapped(d) &&
-         test_and_clear_bit(_PGC_allocated, &page->count_info) )
-        put_page(page);
+    if ( !rc && !is_domain_direct_mapped(d) )
+        clear_assignment_reference(page);
 
     put_page(page);
 
diff --git a/xen/common/xenoprof.c b/xen/common/xenoprof.c
index 8a72e382e6..262d537074 100644
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -173,8 +173,7 @@ unshare_xenoprof_page_with_guest(struct xenoprof *x)
         struct page_info *page = mfn_to_page(mfn_add(mfn, i));
 
         BUG_ON(page_get_owner(page) != current->domain);
-        if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
-            put_page(page);
+        clear_assignment_reference(page);
     }
 }
 
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index a57974ae51..1c36c74b8c 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -658,4 +658,15 @@ static inline void share_xen_page_with_privileged_guests(
     share_xen_page_with_guest(page, dom_xen, flags);
 }
 
+static inline void clear_assignment_reference(struct page_info *page)
+{
+    /*
+     * It is unsafe to clear _PGC_allocated without holding an additional
+     * reference.
+     */
+    ASSERT((page->count_info & PGC_count_mask) > 1);
+    if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
+        put_page(page);
+}
+
 #endif /* __XEN_MM_H__ */
-- 
2.20.1.2.gb21ebb671

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel