From nobody Fri Apr 19 03:26:57 2024
From: Andrew Cooper
To: Xen-devel
CC: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu,
    Stefano Stabellini, Julien Grall, Volodymyr Babchuk, Bertrand Marquis,
    Henry Wang, Anthony PERARD
Subject: [PATCH 1/4] xen: Introduce non-broken hypercalls for the paging mempool size
Date: Thu, 17 Nov 2022 01:08:01 +0000
Message-ID: <20221117010804.9384-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20221117010804.9384-1-andrew.cooper3@citrix.com>
References: <20221117010804.9384-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The existing XEN_DOMCTL_SHADOW_OP_{GET,SET}_ALLOCATION have problems:

 * All set_allocation() flavours have an overflow-before-widen bug when
   calculating "sc->mb << (20 - PAGE_SHIFT)".
 * All flavours have a granularity of 1M.  This was tolerable when the size
   of the pool could only be set at the same granularity, but is broken now
   that ARM has a 16-page stopgap allocation in use.
 * All get_allocation() flavours round up, and in particular turn 0 into 1,
   meaning the get op returns junk before a successful set op.
 * The x86 flavours reject the hypercalls before the VM has vCPUs allocated,
   despite the pool size being a domain property.
 * Even the hypercall names are long-obsolete.

Implement a better interface, which can first be used to unit test the
behaviour, and subsequently to correct the broken implementation.  The old
interface will be retired in due course.

The unit of bytes (as opposed to pages) is a deliberate API/ABI improvement
to more easily support multiple page granularities.

This is part of XSA-409 / CVE-2022-33747.

Signed-off-by: Andrew Cooper
Release-acked-by: Henry Wang
Acked-by: Anthony PERARD
Reviewed-by: Jan Beulich # hypervisor
Reviewed-by: Stefano Stabellini
---
CC: Jan Beulich
CC: Roger Pau Monné
CC: Wei Liu
CC: Stefano Stabellini
CC: Julien Grall
CC: Volodymyr Babchuk
CC: Bertrand Marquis
CC: Henry Wang
CC: Anthony PERARD

v2:
 * s/p2m/paging/
 * Fix overflow-before-widen in ARM's arch_get_p2m_mempool_size()
 * Fix overflow-before-widen in both {hap,shadow}_get_allocation_bytes()
 * Leave a TODO about x86/PV, drop assertion.
 * Check for long->int truncation in x86's arch_set_paging_mempool_size()

Future TODOs:
 * x86 shadow still rounds up.  This is buggy as it's a simultaneous
   equation with tot_pages, which varies over time with ballooning.
 * x86 PV is weird.
   There is no toolstack interaction with the shadow pool size, but the
   "shadow" pool does come into existence when logdirty (or pv-l1tf) is
   first enabled.
 * The shadow+hap logic is in desperate need of deduping.
---
 tools/include/xenctrl.h                          |   3 +++
 tools/libs/ctrl/xc_domain.c                      |  29 ++++++++++++++++++++++++++
 xen/arch/arm/p2m.c                               |  26 +++++++++++++++++++++++
 xen/arch/x86/include/asm/hap.h                   |   1 +
 xen/arch/x86/include/asm/shadow.h                |   4 ++++
 xen/arch/x86/mm/hap/hap.c                        |  11 ++++++++++
 xen/arch/x86/mm/paging.c                         |  43 +++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/shadow/common.c                  |  11 ++++++++++
 xen/common/domctl.c                              |  14 +++++++++++++
 xen/include/public/domctl.h                      |  24 +++++++++++++++++++++-
 xen/include/xen/domain.h                         |   3 +++
 11 files changed, 168 insertions(+), 1 deletion(-)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 0c8b4c3aa7a5..23037874d3d5 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -893,6 +893,9 @@ long long xc_logdirty_control(xc_interface *xch,
                               unsigned int mode,
                               xc_shadow_op_stats_t *stats);
 
+int xc_get_paging_mempool_size(xc_interface *xch, uint32_t domid, uint64_t *size);
+int xc_set_paging_mempool_size(xc_interface *xch, uint32_t domid, uint64_t size);
+
 int xc_sched_credit_domain_set(xc_interface *xch,
                                uint32_t domid,
                                struct xen_domctl_sched_credit *sdom);
diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index 14c0420c35be..e939d0715739 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -706,6 +706,35 @@ long long xc_logdirty_control(xc_interface *xch,
     return (rc == 0) ? domctl.u.shadow_op.pages : rc;
 }
 
+int xc_get_paging_mempool_size(xc_interface *xch, uint32_t domid, uint64_t *size)
+{
+    int rc;
+    struct xen_domctl domctl = {
+        .cmd    = XEN_DOMCTL_get_paging_mempool_size,
+        .domain = domid,
+    };
+
+    rc = do_domctl(xch, &domctl);
+    if ( rc )
+        return rc;
+
+    *size = domctl.u.paging_mempool.size;
+    return 0;
+}
+
+int xc_set_paging_mempool_size(xc_interface *xch, uint32_t domid, uint64_t size)
+{
+    struct xen_domctl domctl = {
+        .cmd    = XEN_DOMCTL_set_paging_mempool_size,
+        .domain = domid,
+        .u.paging_mempool = {
+            .size = size,
+        },
+    };
+
+    return do_domctl(xch, &domctl);
+}
+
 int xc_domain_setmaxmem(xc_interface *xch,
                         uint32_t domid,
                         uint64_t max_memkb)
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 94d3b60b1387..8c1972e58227 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -100,6 +100,13 @@ unsigned int p2m_get_allocation(struct domain *d)
     return ROUNDUP(nr_pages, 1 << (20 - PAGE_SHIFT)) >> (20 - PAGE_SHIFT);
 }
 
+/* Return the size of the pool, in bytes. */
+int arch_get_paging_mempool_size(struct domain *d, uint64_t *size)
+{
+    *size = (uint64_t)ACCESS_ONCE(d->arch.paging.p2m_total_pages) << PAGE_SHIFT;
+    return 0;
+}
+
 /*
  * Set the pool of pages to the required number of pages.
  * Returns 0 for success, non-zero for failure.
@@ -157,6 +164,25 @@ int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
     return 0;
 }
 
+int arch_set_paging_mempool_size(struct domain *d, uint64_t size)
+{
+    unsigned long pages = size >> PAGE_SHIFT;
+    bool preempted = false;
+    int rc;
+
+    if ( (size & ~PAGE_MASK) ||            /* Non page-sized request? */
+         pages != (size >> PAGE_SHIFT) )   /* 32-bit overflow? */
+        return -EINVAL;
+
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, pages, &preempted);
+    spin_unlock(&d->arch.paging.lock);
+
+    ASSERT(preempted == (rc == -ERESTART));
+
+    return rc;
+}
+
 int p2m_teardown_allocation(struct domain *d)
 {
     int ret = 0;
diff --git a/xen/arch/x86/include/asm/hap.h b/xen/arch/x86/include/asm/hap.h
index 90dece29deca..14d2f212dab9 100644
--- a/xen/arch/x86/include/asm/hap.h
+++ b/xen/arch/x86/include/asm/hap.h
@@ -47,6 +47,7 @@ int hap_track_dirty_vram(struct domain *d,
 extern const struct paging_mode *hap_paging_get_mode(struct vcpu *);
 int hap_set_allocation(struct domain *d, unsigned int pages, bool *preempted);
 unsigned int hap_get_allocation(struct domain *d);
+int hap_get_allocation_bytes(struct domain *d, uint64_t *size);
 
 #endif /* XEN_HAP_H */
 
diff --git a/xen/arch/x86/include/asm/shadow.h b/xen/arch/x86/include/asm/shadow.h
index 1365fe480518..dad876d29499 100644
--- a/xen/arch/x86/include/asm/shadow.h
+++ b/xen/arch/x86/include/asm/shadow.h
@@ -97,6 +97,8 @@ void shadow_blow_tables_per_domain(struct domain *d);
 int shadow_set_allocation(struct domain *d, unsigned int pages,
                           bool *preempted);
 
+int shadow_get_allocation_bytes(struct domain *d, uint64_t *size);
+
 #else /* !CONFIG_SHADOW_PAGING */
 
 #define shadow_vcpu_teardown(v) ASSERT(is_pv_vcpu(v))
@@ -108,6 +110,8 @@ int shadow_set_allocation(struct domain *d, unsigned int pages,
     ({ ASSERT_UNREACHABLE(); -EOPNOTSUPP; })
 #define shadow_set_allocation(d, pages, preempted) \
     ({ ASSERT_UNREACHABLE(); -EOPNOTSUPP; })
+#define shadow_get_allocation_bytes(d, size) \
+    ({ ASSERT_UNREACHABLE(); -EOPNOTSUPP; })
 
 static inline void sh_remove_shadows(struct domain *d, mfn_t gmfn,
                                      int fast, int all) {}
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index f809ea9aa6ae..0fc1b1d9aced 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -345,6 +345,17 @@ unsigned int hap_get_allocation(struct domain *d)
             + ((pg & ((1 << (20 - PAGE_SHIFT)) - 1)) ? 1 : 0));
 }
 
+int hap_get_allocation_bytes(struct domain *d, uint64_t *size)
+{
+    unsigned long pages = d->arch.paging.hap.total_pages;
+
+    pages += d->arch.paging.hap.p2m_pages;
+
+    *size = pages << PAGE_SHIFT;
+
+    return 0;
+}
+
 /* Set the pool of pages to the required number of pages.
  * Returns 0 for success, non-zero for failure. */
 int hap_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index 3a355eee9ca3..8d579fa9a3e8 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -977,6 +977,49 @@ int __init paging_set_allocation(struct domain *d, unsigned int pages,
 }
 #endif
 
+int arch_get_paging_mempool_size(struct domain *d, uint64_t *size)
+{
+    int rc;
+
+    if ( is_pv_domain(d) )      /* TODO: Relax in due course */
+        return -EOPNOTSUPP;
+
+    if ( hap_enabled(d) )
+        rc = hap_get_allocation_bytes(d, size);
+    else
+        rc = shadow_get_allocation_bytes(d, size);
+
+    return rc;
+}
+
+int arch_set_paging_mempool_size(struct domain *d, uint64_t size)
+{
+    unsigned long pages = size >> PAGE_SHIFT;
+    bool preempted = false;
+    int rc;
+
+    if ( is_pv_domain(d) )      /* TODO: Relax in due course */
+        return -EOPNOTSUPP;
+
+    if ( size & ~PAGE_MASK ||               /* Non page-sized request? */
+         pages != (unsigned int)pages )     /* Overflow $X_set_allocation()? */
+        return -EINVAL;
+
+    paging_lock(d);
+    if ( hap_enabled(d) )
+        rc = hap_set_allocation(d, pages, &preempted);
+    else
+        rc = shadow_set_allocation(d, pages, &preempted);
+    paging_unlock(d);
+
+    /*
+     * TODO: Adjust $X_set_allocation() so this is true.
+       ASSERT(preempted == (rc == -ERESTART));
+     */
+
+    return preempted ? -ERESTART : rc;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index badfd53c6b23..a8404f97f668 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1427,6 +1427,17 @@ static unsigned int shadow_get_allocation(struct domain *d)
             + ((pg & ((1 << (20 - PAGE_SHIFT)) - 1)) ? 1 : 0));
 }
 
+int shadow_get_allocation_bytes(struct domain *d, uint64_t *size)
+{
+    unsigned long pages = d->arch.paging.shadow.total_pages;
+
+    pages += d->arch.paging.shadow.p2m_pages;
+
+    *size = pages << PAGE_SHIFT;
+
+    return 0;
+}
+
 /**************************************************************************/
 /* Hash table for storing the guest->shadow mappings.
  * The table itself is an array of pointers to shadows; the shadows are then
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 69fb9abd346f..ad71ad8a4cc5 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -874,6 +874,20 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         ret = iommu_do_domctl(op, d, u_domctl);
         break;
 
+    case XEN_DOMCTL_get_paging_mempool_size:
+        ret = arch_get_paging_mempool_size(d, &op->u.paging_mempool.size);
+        if ( !ret )
+            copyback = 1;
+        break;
+
+    case XEN_DOMCTL_set_paging_mempool_size:
+        ret = arch_set_paging_mempool_size(d, op->u.paging_mempool.size);
+
+        if ( ret == -ERESTART )
+            ret = hypercall_create_continuation(
+                __HYPERVISOR_domctl, "h", u_domctl);
+        break;
+
     default:
         ret = arch_do_domctl(op, d, u_domctl);
         break;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index b2ae839c3632..d4072761791a 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -214,7 +214,10 @@ struct xen_domctl_getpageframeinfo3 {
 /* Return the bitmap but do not modify internal copy. */
 #define XEN_DOMCTL_SHADOW_OP_PEEK      12
 
-/* Memory allocation accessors. */
+/*
+ * Memory allocation accessors.  These APIs are broken and will be removed.
+ * Use XEN_DOMCTL_{get,set}_paging_mempool_size instead.
+ */
 #define XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION   30
 #define XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION   31
 
@@ -946,6 +949,22 @@ struct xen_domctl_cacheflush {
     xen_pfn_t start_pfn, nr_pfns;
 };
 
+/*
+ * XEN_DOMCTL_get_paging_mempool_size / XEN_DOMCTL_set_paging_mempool_size.
+ *
+ * Get or set the paging memory pool size.  The size is in bytes.
+ *
+ * This is a dedicated pool of memory for Xen to use while managing the guest,
+ * typically containing pagetables.  As such, there is an implementation
+ * specific minimum granularity.
+ *
+ * The set operation can fail mid-way through the request (e.g. Xen running
+ * out of memory, no free memory to reclaim from the pool, etc.).
+ */
+struct xen_domctl_paging_mempool {
+    uint64_aligned_t size; /* IN/OUT.  Size in bytes. */
+};
+
 #if defined(__i386__) || defined(__x86_64__)
 struct xen_domctl_vcpu_msr {
     uint32_t index;
@@ -1274,6 +1293,8 @@ struct xen_domctl {
 #define XEN_DOMCTL_get_cpu_policy                82
 #define XEN_DOMCTL_set_cpu_policy                83
 #define XEN_DOMCTL_vmtrace_op                    84
+#define XEN_DOMCTL_get_paging_mempool_size       85
+#define XEN_DOMCTL_set_paging_mempool_size       86
 #define XEN_DOMCTL_gdbsx_guestmemio     1000
 #define XEN_DOMCTL_gdbsx_pausevcpu      1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu    1002
@@ -1335,6 +1356,7 @@ struct xen_domctl {
         struct xen_domctl_psr_alloc         psr_alloc;
         struct xen_domctl_vuart_op          vuart_op;
         struct xen_domctl_vmtrace_op        vmtrace_op;
+        struct xen_domctl_paging_mempool    paging_mempool;
         uint8_t                             pad[128];
     } u;
 };
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 2c8116afba27..0de9cbc1696d 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -98,6 +98,9 @@ void arch_get_info_guest(struct vcpu *, vcpu_guest_context_u);
 int arch_initialise_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg);
 int default_initialise_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg);
 
+int arch_get_paging_mempool_size(struct domain *d, uint64_t *size /* bytes */);
+int arch_set_paging_mempool_size(struct domain *d, uint64_t size /* bytes */);
+
 int domain_relinquish_resources(struct domain *d);
 
 void dump_pageframe_info(struct domain *d);
-- 
2.11.0
From nobody Fri Apr 19 03:26:57 2024
From: Andrew Cooper
To: Xen-devel
CC: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu,
    Stefano Stabellini, Julien Grall, Volodymyr Babchuk, Bertrand Marquis,
    Henry Wang, Anthony PERARD
Subject: [PATCH 2/4] tools/tests: Unit test for paging mempool size
Date: Thu, 17 Nov 2022 01:08:02 +0000
Message-ID:
 <20221117010804.9384-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20221117010804.9384-1-andrew.cooper3@citrix.com>
References: <20221117010804.9384-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Exercise some basic functionality of the new
xc_{get,set}_paging_mempool_size() hypercalls.

This passes on x86, but currently fails on ARM.  ARM will be fixed up in
future patches.

This is part of XSA-409 / CVE-2022-33747.

Signed-off-by: Andrew Cooper
Release-acked-by: Henry Wang
Acked-by: Anthony PERARD
Acked-by: Jan Beulich
---
CC: Jan Beulich
CC: Roger Pau Monné
CC: Wei Liu
CC: Stefano Stabellini
CC: Julien Grall
CC: Volodymyr Babchuk
CC: Bertrand Marquis
CC: Henry Wang
CC: Anthony PERARD

x86 Shadow is complicated because of how it behaves for PV guests, and
because of how it forms a simultaneous equation with tot_pages.  This will
require more work to untangle.
v2:
 * s/p2m/paging/
 * Fix CFLAGS_libxenforeginmemory typo
---
 tools/tests/Makefile                             |   1 +
 tools/tests/paging-mempool/.gitignore            |   1 +
 tools/tests/paging-mempool/Makefile              |  42 ++++++
 tools/tests/paging-mempool/test-paging-mempool.c | 181 +++++++++++++++++++++++
 4 files changed, 225 insertions(+)
 create mode 100644 tools/tests/paging-mempool/.gitignore
 create mode 100644 tools/tests/paging-mempool/Makefile
 create mode 100644 tools/tests/paging-mempool/test-paging-mempool.c

diff --git a/tools/tests/Makefile b/tools/tests/Makefile
index d99146d56a64..1319c3a9d88c 100644
--- a/tools/tests/Makefile
+++ b/tools/tests/Makefile
@@ -11,6 +11,7 @@ endif
 SUBDIRS-y += xenstore
 SUBDIRS-y += depriv
 SUBDIRS-y += vpci
+SUBDIRS-y += paging-mempool
 
 .PHONY: all clean install distclean uninstall
 all clean distclean install uninstall: %: subdirs-%
diff --git a/tools/tests/paging-mempool/.gitignore b/tools/tests/paging-mempool/.gitignore
new file mode 100644
index 000000000000..2f9305b7cc07
--- /dev/null
+++ b/tools/tests/paging-mempool/.gitignore
@@ -0,0 +1 @@
+test-paging-mempool
diff --git a/tools/tests/paging-mempool/Makefile b/tools/tests/paging-mempool/Makefile
new file mode 100644
index 000000000000..5d49497710e0
--- /dev/null
+++ b/tools/tests/paging-mempool/Makefile
@@ -0,0 +1,42 @@
+XEN_ROOT = $(CURDIR)/../../..
+include $(XEN_ROOT)/tools/Rules.mk
+
+TARGET := test-paging-mempool
+
+.PHONY: all
+all: $(TARGET)
+
+.PHONY: clean
+clean:
+	$(RM) -- *.o $(TARGET) $(DEPS_RM)
+
+.PHONY: distclean
+distclean: clean
+	$(RM) -- *~
+
+.PHONY: install
+install: all
+	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
+	$(INSTALL_PROG) $(TARGET) $(DESTDIR)$(LIBEXEC_BIN)
+
+.PHONY: uninstall
+uninstall:
+	$(RM) -- $(DESTDIR)$(LIBEXEC_BIN)/$(TARGET)
+
+CFLAGS += $(CFLAGS_xeninclude)
+CFLAGS += $(CFLAGS_libxenctrl)
+CFLAGS += $(CFLAGS_libxenforeignmemory)
+CFLAGS += $(CFLAGS_libxengnttab)
+CFLAGS += $(APPEND_CFLAGS)
+
+LDFLAGS += $(LDLIBS_libxenctrl)
+LDFLAGS += $(LDLIBS_libxenforeignmemory)
+LDFLAGS += $(LDLIBS_libxengnttab)
+LDFLAGS += $(APPEND_LDFLAGS)
+
+%.o: Makefile
+
+$(TARGET): test-paging-mempool.o
+	$(CC) -o $@ $< $(LDFLAGS)
+
+-include $(DEPS_INCLUDE)
diff --git a/tools/tests/paging-mempool/test-paging-mempool.c b/tools/tests/paging-mempool/test-paging-mempool.c
new file mode 100644
index 000000000000..942a2fde19c7
--- /dev/null
+++ b/tools/tests/paging-mempool/test-paging-mempool.c
@@ -0,0 +1,181 @@
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+static unsigned int nr_failures;
+#define fail(fmt, ...)                          \
+({                                              \
+    nr_failures++;                              \
+    (void)printf(fmt, ##__VA_ARGS__);           \
+})
+
+static xc_interface *xch;
+static uint32_t domid;
+
+static struct xen_domctl_createdomain create = {
+    .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
+    .max_vcpus = 1,
+    .max_grant_frames = 1,
+    .grant_opts = XEN_DOMCTL_GRANT_version(1),
+
+    .arch = {
+#if defined(__x86_64__) || defined(__i386__)
+        .emulation_flags = XEN_X86_EMU_LAPIC,
+#endif
+    },
+};
+
+static uint64_t default_mempool_size_bytes =
+#if defined(__x86_64__) || defined(__i386__)
+    256 << 12; /* Only x86 HAP for now.  x86 Shadow needs more work. */
+#elif defined(__arm__) || defined(__aarch64__)
+    16 << 12;
+#endif
+
+static void run_tests(void)
+{
+    xen_pfn_t physmap[] = { 0 };
+    uint64_t size_bytes, old_size_bytes;
+    int rc;
+
+    printf("Test default mempool size\n");
+
+    rc = xc_get_paging_mempool_size(xch, domid, &size_bytes);
+    if ( rc )
+        return fail("  Fail: get mempool size: %d - %s\n",
+                    errno, strerror(errno));
+
+    printf("mempool size %"PRIu64" bytes (%"PRIu64"kB, %"PRIu64"MB)\n",
+           size_bytes, size_bytes >> 10, size_bytes >> 20);
+
+    /*
+     * Check that the domain has the expected default allocation size.  This
+     * will fail if the logic in Xen is altered without an equivalent
+     * adjustment here.
+     */
+    if ( size_bytes != default_mempool_size_bytes )
+        return fail("  Fail: size %"PRIu64" != expected size %"PRIu64"\n",
+                    size_bytes, default_mempool_size_bytes);
+
+    printf("Test that allocate doesn't alter pool size\n");
+
+    /*
+     * Populate the domain with some RAM.  This will cause more of the mempool
+     * to be used.
+     */
+    old_size_bytes = size_bytes;
+
+    rc = xc_domain_setmaxmem(xch, domid, -1);
+    if ( rc )
+        return fail("  Fail: setmaxmem: %d - %s\n",
+                    errno, strerror(errno));
+
+    rc = xc_domain_populate_physmap_exact(xch, domid, 1, 0, 0, physmap);
+    if ( rc )
+        return fail("  Fail: populate physmap: %d - %s\n",
+                    errno, strerror(errno));
+
+    /*
+     * Re-get the p2m size.  Should not have changed as a consequence of
+     * populate physmap.
+     */
+    rc = xc_get_paging_mempool_size(xch, domid, &size_bytes);
+    if ( rc )
+        return fail("  Fail: get mempool size: %d - %s\n",
+                    errno, strerror(errno));
+
+    if ( old_size_bytes != size_bytes )
+        return fail("  Fail: mempool size changed %"PRIu64" => %"PRIu64"\n",
                    old_size_bytes, size_bytes);
+
+    printf("Test bad set size\n");
+
+    /*
+     * Check that setting a non-page size results in failure.
+     */
+    rc = xc_set_paging_mempool_size(xch, domid, size_bytes + 1);
+    if ( rc != -1 || errno != EINVAL )
+        return fail("  Fail: Bad set size: expected -1/EINVAL, got %d/%d - %s\n",
+                    rc, errno, strerror(errno));
+
+    printf("Test very large set size\n");
+
+    /*
+     * Check that setting a large P2M size succeeds.  This is expecting to
+     * trigger continuations.
+     */
+    rc = xc_set_paging_mempool_size(xch, domid, 64 << 20);
+    if ( rc )
+        return fail("  Fail: Set size 64MB: %d - %s\n",
+                    errno, strerror(errno));
+
+    /*
+     * Check that the reported size matches what set consumed.
+     */
+    rc = xc_get_paging_mempool_size(xch, domid, &size_bytes);
+    if ( rc )
+        return fail("  Fail: get p2m mempool size: %d - %s\n",
+                    errno, strerror(errno));
+
+    if ( size_bytes != 64 << 20 )
+        return fail("  Fail: expected mempool size %u, got %"PRIu64"\n",
+                    64 << 20, size_bytes);
+}
+
+int main(int argc, char **argv)
+{
+    int rc;
+
+    printf("Paging mempool tests\n");
+
+    xch = xc_interface_open(NULL, NULL, 0);
+
+    if ( !xch )
+        err(1, "xc_interface_open");
+
+    rc = xc_domain_create(xch, &domid, &create);
+    if ( rc )
+    {
+        if ( errno == EINVAL || errno == EOPNOTSUPP )
+            printf("  Skip: %d - %s\n", errno, strerror(errno));
+        else
+            fail("  Domain create failure: %d - %s\n",
+                 errno, strerror(errno));
+        goto out;
+    }
+
+    printf("  Created d%u\n", domid);
+
+    run_tests();
+
+    rc = xc_domain_destroy(xch, domid);
+    if ( rc )
+        fail("  Failed to destroy domain: %d - %s\n",
+             errno, strerror(errno));
+
+ out:
+    return !!nr_failures;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.11.0
From: Andrew Cooper
To: Xen-devel
CC: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu,
    Stefano Stabellini, Julien Grall, Volodymyr Babchuk,
    Bertrand Marquis, Henry Wang, Anthony PERARD
Subject: [PATCH 3/4] xen/arm, libxl: Revert XEN_DOMCTL_shadow_op; use p2m mempool hypercalls
Date: Thu, 17 Nov 2022 01:08:03 +0000
Message-ID: <20221117010804.9384-4-andrew.cooper3@citrix.com>
In-Reply-To: <20221117010804.9384-1-andrew.cooper3@citrix.com>
References: <20221117010804.9384-1-andrew.cooper3@citrix.com>

This reverts most of commit cf2a68d2ffbc3ce95e01449d46180bddb10d24a0, and
bits of cbea5a1149ca7fd4b7cdbfa3ec2e4f109b601ff7.
First of all, with ARM borrowing x86's implementation, the logic to set the
pool size should have been common, not duplicated.  Introduce
libxl__domain_set_p2m_pool_size() as a shared implementation, and use it from
the ARM and x86 paths.  It is left as an exercise to the reader to judge how
libxl/xl can reasonably function without the ability to query the pool
size...

Remove ARM's p2m_domctl() infrastructure now the functionality has been
replaced with a working and unit tested interface.

This is part of XSA-409 / CVE-2022-33747.

Signed-off-by: Andrew Cooper
Release-acked-by: Henry Wang
Reviewed-by: Anthony PERARD
Reviewed-by: Stefano Stabellini
---
CC: Jan Beulich
CC: Roger Pau Monné
CC: Wei Liu
CC: Stefano Stabellini
CC: Julien Grall
CC: Volodymyr Babchuk
CC: Bertrand Marquis
CC: Henry Wang
CC: Anthony PERARD

v2:
 * s/p2m/paging/
 * Fix get/set typo in libxl__domain_set_p2m_pool_size()
---
 tools/libs/light/libxl_arm.c      | 14 +----------
 tools/libs/light/libxl_dom.c      | 19 ++++++++++++++
 tools/libs/light/libxl_internal.h |  3 +++
 tools/libs/light/libxl_x86.c      | 15 ++---------
 xen/arch/arm/domctl.c             | 53 ---------------------------------------
 xen/arch/arm/include/asm/p2m.h    |  1 -
 xen/arch/arm/p2m.c                |  8 ------
 7 files changed, 25 insertions(+), 88 deletions(-)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 2a5e93c28403..2f5615263543 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -209,19 +209,7 @@ int libxl__arch_domain_create(libxl__gc *gc,
                              libxl__domain_build_state *state,
                              uint32_t domid)
 {
-    libxl_ctx *ctx = libxl__gc_owner(gc);
-    unsigned int shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
-
-    int r = xc_shadow_control(ctx->xch, domid,
-                              XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
-                              &shadow_mb, 0);
-    if (r) {
-        LOGED(ERROR, domid,
-              "Failed to set %u MiB shadow allocation", shadow_mb);
-        return ERROR_FAIL;
-    }
-
-    return 0;
+    return libxl__domain_set_p2m_pool_size(gc, d_config, domid);
 }
 
 int libxl__arch_extra_memory(libxl__gc *gc,
diff --git a/tools/libs/light/libxl_dom.c b/tools/libs/light/libxl_dom.c
index 2abaab439c4f..f8f7b7e81837 100644
--- a/tools/libs/light/libxl_dom.c
+++ b/tools/libs/light/libxl_dom.c
@@ -1448,6 +1448,25 @@ int libxl_userdata_unlink(libxl_ctx *ctx, uint32_t domid,
     return rc;
 }
 
+int libxl__domain_set_p2m_pool_size(
+    libxl__gc *gc, libxl_domain_config *d_config, uint32_t domid)
+{
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    uint64_t shadow_mem;
+
+    shadow_mem = d_config->b_info.shadow_memkb;
+    shadow_mem <<= 10;
+
+    int r = xc_set_paging_mempool_size(ctx->xch, domid, shadow_mem);
+    if (r) {
+        LOGED(ERROR, domid,
+              "Failed to set paging mempool size to %"PRIu64"kB", shadow_mem);
+        return ERROR_FAIL;
+    }
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index cb9e8b3b8b5a..f31164bc6c0d 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -4864,6 +4864,9 @@ int libxl__is_domid_recent(libxl__gc *gc, uint32_t domid, bool *recent);
 /* os-specific implementation of setresuid() */
 int libxl__setresuid(uid_t ruid, uid_t euid, uid_t suid);
 
+_hidden int libxl__domain_set_p2m_pool_size(
+    libxl__gc *gc, libxl_domain_config *d_config, uint32_t domid);
+
 #endif
 
 /*
diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
index 7c5ee74443e5..99aba51d05df 100644
--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -538,20 +538,9 @@ int libxl__arch_domain_create(libxl__gc *gc,
         xc_domain_set_time_offset(ctx->xch, domid, rtc_timeoffset);
 
     if (d_config->b_info.type != LIBXL_DOMAIN_TYPE_PV) {
-        unsigned int shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb,
-                                             1024);
-        int r = xc_shadow_control(ctx->xch, domid,
-                                  XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
-                                  &shadow_mb, 0);
-
-        if (r) {
-            LOGED(ERROR, domid,
-                  "Failed to set %u MiB %s allocation",
-                  shadow_mb,
-                  libxl_defbool_val(d_config->c_info.hap) ? "HAP" : "shadow");
-            ret = ERROR_FAIL;
+        ret = libxl__domain_set_p2m_pool_size(gc, d_config, domid);
+        if (ret)
             goto out;
-        }
     }
 
     if (d_config->c_info.type == LIBXL_DOMAIN_TYPE_PV &&
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index c8fdeb124084..1baf25c3d98b 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -47,64 +47,11 @@ static int handle_vuart_init(struct domain *d,
     return rc;
 }
 
-static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
-                       XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
-{
-    long rc;
-    bool preempted = false;
-
-    if ( unlikely(d == current->domain) )
-    {
-        printk(XENLOG_ERR "Tried to do a p2m domctl op on itself.\n");
-        return -EINVAL;
-    }
-
-    if ( unlikely(d->is_dying) )
-    {
-        printk(XENLOG_ERR "Tried to do a p2m domctl op on dying domain %u\n",
-               d->domain_id);
-        return -EINVAL;
-    }
-
-    switch ( sc->op )
-    {
-    case XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION:
-    {
-        /* Allow and handle preemption */
-        spin_lock(&d->arch.paging.lock);
-        rc = p2m_set_allocation(d, sc->mb << (20 - PAGE_SHIFT), &preempted);
-        spin_unlock(&d->arch.paging.lock);
-
-        if ( preempted )
-            /* Not finished.  Set up to re-run the call. */
-            rc = hypercall_create_continuation(__HYPERVISOR_domctl, "h",
-                                               u_domctl);
-        else
-            /* Finished.  Return the new allocation. */
-            sc->mb = p2m_get_allocation(d);
-
-        return rc;
-    }
-    case XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION:
-    {
-        sc->mb = p2m_get_allocation(d);
-        return 0;
-    }
-    default:
-    {
-        printk(XENLOG_ERR "Bad p2m domctl op %u\n", sc->op);
-        return -EINVAL;
-    }
-    }
-}
-
 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     switch ( domctl->cmd )
     {
-    case XEN_DOMCTL_shadow_op:
-        return p2m_domctl(d, &domctl->u.shadow_op, u_domctl);
     case XEN_DOMCTL_cacheflush:
     {
         gfn_t s = _gfn(domctl->u.cacheflush.start_pfn);
diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index c8f14d13c2c5..91df922e1c9f 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -222,7 +222,6 @@ void p2m_restore_state(struct vcpu *n);
 /* Print debugging/statistical info about a domain's p2m */
 void p2m_dump_info(struct domain *d);
 
-unsigned int p2m_get_allocation(struct domain *d);
 int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted);
 int p2m_teardown_allocation(struct domain *d);
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 8c1972e58227..b2f7e8d804aa 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -92,14 +92,6 @@ static void p2m_free_page(struct domain *d, struct page_info *pg)
     spin_unlock(&d->arch.paging.lock);
 }
 
-/* Return the size of the pool, rounded up to the nearest MB */
-unsigned int p2m_get_allocation(struct domain *d)
-{
-    unsigned long nr_pages = ACCESS_ONCE(d->arch.paging.p2m_total_pages);
-
-    return ROUNDUP(nr_pages, 1 << (20 - PAGE_SHIFT)) >> (20 - PAGE_SHIFT);
-}
-
 /* Return the size of the pool, in bytes. */
 int arch_get_paging_mempool_size(struct domain *d, uint64_t *size)
 {
-- 
2.11.0

From nobody Fri Apr 19 03:26:57 2024
From: Andrew Cooper
To: Xen-devel
CC: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu,
    Stefano Stabellini, Julien Grall, Volodymyr Babchuk,
    Bertrand Marquis, Henry Wang, Anthony PERARD
Subject: [PATCH 4/4] xen/arm: Correct the p2m pool size calculations
Date: Thu, 17 Nov 2022 01:08:04 +0000
Message-ID: <20221117010804.9384-5-andrew.cooper3@citrix.com>
In-Reply-To: <20221117010804.9384-1-andrew.cooper3@citrix.com>
References: <20221117010804.9384-1-andrew.cooper3@citrix.com>

Allocating or freeing p2m pages doesn't alter the size of the mempool; only
the split between free and used pages.

Right now, the hypercalls operate on the free subset of the pool, meaning that
XEN_DOMCTL_get_paging_mempool_size varies with time as the guest shuffles its
physmap, and XEN_DOMCTL_set_paging_mempool_size ignores the used subset of the
pool and lets the guest grow unbounded.

This fixes test-paging-mempool on ARM so that the behaviour matches x86.

This is part of XSA-409 / CVE-2022-33747.

Fixes: cbea5a1149ca ("xen/arm: Allocate and free P2M pages from the P2M pool")
Signed-off-by: Andrew Cooper
Reviewed-by: Julien Grall
Release-acked-by: Henry Wang
---
CC: Jan Beulich
CC: Roger Pau Monné
CC: Wei Liu
CC: Stefano Stabellini
CC: Julien Grall
CC: Volodymyr Babchuk
CC: Bertrand Marquis
CC: Henry Wang
CC: Anthony PERARD
---
 xen/arch/arm/p2m.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index b2f7e8d804aa..9bc5443d9e8a 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -72,7 +72,6 @@ static struct page_info *p2m_alloc_page(struct domain *d)
             spin_unlock(&d->arch.paging.lock);
             return NULL;
         }
-        d->arch.paging.p2m_total_pages--;
     }
     spin_unlock(&d->arch.paging.lock);
 
@@ -85,10 +84,7 @@ static void p2m_free_page(struct domain *d, struct page_info *pg)
     if ( is_hardware_domain(d) )
         free_domheap_page(pg);
     else
-    {
-        d->arch.paging.p2m_total_pages++;
         page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
-    }
     spin_unlock(&d->arch.paging.lock);
 }
 
-- 
2.11.0