From: "Julian Vetter" <julian.vetter@vates.tech>
Subject: [PATCH v5 1/3] ioreq: Unify buf and non-buf ioreq page management
X-Mailer: git-send-email 2.51.0
To: xen-devel@lists.xenproject.org
Cc: "Jan Beulich", "Andrew Cooper", "Roger Pau Monné", "Anthony PERARD",
 "Michal Orzel", "Julien Grall", "Stefano Stabellini", "Julian Vetter"
Message-Id: <20260316111653.178104-2-julian.vetter@vates.tech>
In-Reply-To: <20260316111653.178104-1-julian.vetter@vates.tech>
References: <20260316111653.178104-1-julian.vetter@vates.tech>
Date: Mon, 16 Mar 2026 11:17:00 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Switch the ioreq page mapping in hvm_map_ioreq_gfn() from
prepare_ring_for_helper() / __map_domain_page_global() to explicit
vmap(), aligning it with ioreq_server_alloc_mfn(), which already
allocates domain-heap pages and will now also map them via vmap().

With both paths using vmap(), vmap_to_page() can recover the
struct page_info * uniformly during teardown, removing the need to
cache the page pointer in struct ioreq_page. So, drop the 'page'
field from struct ioreq_page and update all callers accordingly.

Signed-off-by: Julian Vetter <julian.vetter@vates.tech>
---
Changes in v5:
- New patch that unifies the buf and non-buf code paths
---
 xen/arch/x86/hvm/ioreq.c | 57 ++++++++++++++++++++++++++++++++--------
 xen/common/ioreq.c       | 36 +++++++++++++------------
 xen/include/xen/ioreq.h  |  1 -
 3 files changed, 65 insertions(+), 29 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index a5fa97e149..145dcba5c1 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include

 #include

@@ -128,8 +129,9 @@ static void hvm_unmap_ioreq_gfn(struct ioreq_server *s, bool buf)
     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
         return;

-    destroy_ring_for_helper(&iorp->va, iorp->page);
-    iorp->page = NULL;
+    put_page_and_type(vmap_to_page(iorp->va));
+    vunmap(iorp->va);
+    iorp->va = NULL;

     hvm_free_ioreq_gfn(s, iorp->gfn);
     iorp->gfn = INVALID_GFN;
@@ -139,9 +141,13 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct page_info *page;
+    p2m_type_t p2mt;
+    gfn_t base_gfn;
+    mfn_t mfn;
     int rc;

-    if ( iorp->page )
+    if ( iorp->va )
     {
         /*
          * If a page has already been allocated (which will happen on
@@ -157,17 +163,45 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
     if ( d->is_dying )
         return -EINVAL;

-    iorp->gfn = hvm_alloc_ioreq_gfn(s);
+    base_gfn = hvm_alloc_ioreq_gfn(s);

-    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+    if ( gfn_eq(base_gfn, INVALID_GFN) )
         return -ENOMEM;

-    rc = prepare_ring_for_helper(d, gfn_x(iorp->gfn), &iorp->page,
-                                 &iorp->va);
-
+    /*
+     * vmap() is used for the Xen-side mapping so that vmap_to_page() can
+     * recover the struct page_info * during teardown, consistent with
+     * ioreq_server_alloc_mfn().
+     */
+    rc = check_get_page_from_gfn(d, base_gfn, false, &p2mt, &page);
     if ( rc )
-        hvm_unmap_ioreq_gfn(s, buf);
+    {
+        if ( rc == -EAGAIN )
+            rc = -ENOENT;
+        goto fail;
+    }
+
+    if ( !get_page_type(page, PGT_writable_page) )
+    {
+        put_page(page);
+        rc = -EINVAL;
+        goto fail;
+    }
+
+    mfn = page_to_mfn(page);
+    iorp->va = vmap(&mfn, 1);
+    if ( !iorp->va )
+    {
+        put_page_and_type(page);
+        rc = -ENOMEM;
+        goto fail;
+    }
+
+    iorp->gfn = base_gfn;
+    return 0;

+ fail:
+    hvm_free_ioreq_gfn(s, base_gfn);
     return rc;
 }

@@ -179,7 +213,7 @@ static void hvm_remove_ioreq_gfn(struct ioreq_server *s, bool buf)
     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
         return;

-    if ( p2m_remove_page(d, iorp->gfn, page_to_mfn(iorp->page), 0) )
+    if ( p2m_remove_page(d, iorp->gfn, page_to_mfn(vmap_to_page(iorp->va)), 0) )
         domain_crash(d);
     clear_page(iorp->va);
 }
@@ -195,7 +229,8 @@ static int hvm_add_ioreq_gfn(struct ioreq_server *s, bool buf)

     clear_page(iorp->va);

-    rc = p2m_add_page(d, iorp->gfn, page_to_mfn(iorp->page), 0, p2m_ram_rw);
+    rc = p2m_add_page(d, iorp->gfn, page_to_mfn(vmap_to_page(iorp->va)), 0,
+                      p2m_ram_rw);
     if ( rc == 0 )
         paging_mark_pfn_dirty(d, _pfn(gfn_x(iorp->gfn)));

diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index f5fd30ce12..5b026fc1b2 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -17,11 +17,11 @@
  */

 #include
-#include
 #include
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -262,8 +262,9 @@ static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
 {
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     struct page_info *page;
+    mfn_t mfn;

-    if ( iorp->page )
+    if ( iorp->va )
     {
         /*
          * If a guest frame has already been mapped (which may happen
@@ -291,11 +292,11 @@ static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
         return -ENODATA;
     }

-    iorp->va = __map_domain_page_global(page);
+    mfn = page_to_mfn(page);
+    iorp->va = vmap(&mfn, 1);
     if ( !iorp->va )
         goto fail;

-    iorp->page = page;
     clear_page(iorp->va);
     return 0;

@@ -309,14 +310,13 @@ static void ioreq_server_free_mfn(struct ioreq_server *s, bool buf)
 {
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    struct page_info *page = iorp->page;
+    struct page_info *page;

-    if ( !page )
+    if ( !iorp->va )
         return;

-    iorp->page = NULL;
-
-    unmap_domain_page_global(iorp->va);
+    page = vmap_to_page(iorp->va);
+    vunmap(iorp->va);
     iorp->va = NULL;

     put_page_alloc_ref(page);
@@ -333,7 +333,8 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)

     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
-        if ( (s->ioreq.page == page) || (s->bufioreq.page == page) )
+        if ( (s->ioreq.va && vmap_to_page(s->ioreq.va) == page) ||
+             (s->bufioreq.va && vmap_to_page(s->bufioreq.va) == page) )
         {
             found = true;
             break;
@@ -626,11 +627,12 @@ static void ioreq_server_deinit(struct ioreq_server *s)
     /*
      * NOTE: It is safe to call both arch_ioreq_server_unmap_pages() and
      *       ioreq_server_free_pages() in that order.
-     *       This is because the former will do nothing if the pages
-     *       are not mapped, leaving the page to be freed by the latter.
-     *       However if the pages are mapped then the former will set
-     *       the page_info pointer to NULL, meaning the latter will do
-     *       nothing.
+     *       arch_ioreq_server_unmap_pages() handles the GFN-mapped path
+     *       (iorp->gfn != INVALID_GFN) and clears iorp->va on completion,
+     *       so ioreq_server_free_pages() will find iorp->va == NULL and
+     *       do nothing. Conversely, pages allocated via the resource path
+     *       have iorp->gfn == INVALID_GFN, so arch_ioreq_server_unmap_pages()
+     *       is a no-op and ioreq_server_free_pages() handles the teardown.
      */
     arch_ioreq_server_unmap_pages(s);
     ioreq_server_free_pages(s);
@@ -819,12 +821,12 @@ int ioreq_server_get_frame(struct domain *d, ioservid_t id,
         if ( !HANDLE_BUFIOREQ(s) )
             goto out;

-        *mfn = page_to_mfn(s->bufioreq.page);
+        *mfn = page_to_mfn(vmap_to_page(s->bufioreq.va));
         rc = 0;
         break;

     case XENMEM_resource_ioreq_server_frame_ioreq(0):
-        *mfn = page_to_mfn(s->ioreq.page);
+        *mfn = page_to_mfn(vmap_to_page(s->ioreq.va));
         rc = 0;
         break;

diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index e86f0869fa..d63fa4729e 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -25,7 +25,6 @@

 struct ioreq_page {
     gfn_t gfn;
-    struct page_info *page;
     void *va;
 };

-- 
2.51.0

-- 
Julian Vetter | Vates
Hypervisor & Kernel Developer
XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech
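The whole patch rests on one contract: once a page is mapped with vmap(), its struct page_info * can be recovered from the virtual address alone via vmap_to_page(), so nothing else needs to cache the page pointer. Below is a minimal, standalone userspace sketch of that contract — not Xen code; the toy_* names, the lookup table, and the use of malloc() as a stand-in for establishing a mapping are all illustrative, since Xen's real vmap() derives the page from its page-table bookkeeping rather than a side table.

```c
/* Toy model of the vmap()/vmap_to_page() pairing (NOT Xen code).
 * A small table stands in for Xen's vmap bookkeeping: mapping a
 * page records va -> page, so teardown can recover the descriptor
 * from the va alone and no cached 'page' field is needed. */
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct page_info { unsigned long mfn; };   /* illustrative descriptor */

#define MAX_MAPPINGS 16
static struct { void *va; struct page_info *pg; } vmap_table[MAX_MAPPINGS];

/* Map one page; remember which descriptor backs the returned va. */
static void *toy_vmap(struct page_info *pg)
{
    for ( size_t i = 0; i < MAX_MAPPINGS; i++ )
        if ( !vmap_table[i].va )
        {
            vmap_table[i].va = malloc(4096); /* stand-in for a mapping */
            vmap_table[i].pg = pg;
            return vmap_table[i].va;
        }
    return NULL;
}

/* Recover the backing page from the va, like vmap_to_page(). */
static struct page_info *toy_vmap_to_page(void *va)
{
    for ( size_t i = 0; i < MAX_MAPPINGS; i++ )
        if ( va && vmap_table[i].va == va )
            return vmap_table[i].pg;
    return NULL;
}

/* Tear down the mapping; afterwards the va resolves to no page. */
static void toy_vunmap(void *va)
{
    for ( size_t i = 0; i < MAX_MAPPINGS; i++ )
        if ( va && vmap_table[i].va == va )
        {
            free(va);
            vmap_table[i].va = NULL;
            vmap_table[i].pg = NULL;
            return;
        }
}
```

Usage mirrors the patched teardown paths: holding only iorp->va, code can do `page = toy_vmap_to_page(va); toy_vunmap(va);` and then release the page reference — exactly the shape of the new ioreq_server_free_mfn().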