From nobody Tue May 5 08:52:53 2026
List-Id: Xen developer discussion
From: "Julian Vetter" <julian.vetter@vates.tech>
Subject: [PATCH v6 1/3] ioreq: switch ioreq page allocation to vmap
To: xen-devel@lists.xenproject.org
Cc:
 "Jan Beulich", "Andrew Cooper", "Roger Pau Monné", "Anthony PERARD",
 "Michal Orzel", "Julien Grall", "Stefano Stabellini", "Julian Vetter"
Message-Id: <20260420093820.825969-2-julian.vetter@vates.tech>
In-Reply-To: <20260420093820.825969-1-julian.vetter@vates.tech>
References: <20260420093820.825969-1-julian.vetter@vates.tech>
Date: Mon, 20 Apr 2026 09:38:31 +0000

Switch the Xen-side ioreq page mapping from prepare_ring_for_helper() /
map_domain_page_global() to explicit vmap(), to ensure vmap_to_page() can
recover the struct page_info * uniformly during teardown. This is a
prerequisite for multi-page ioreq support: the non-buf ioreq region will
need to span multiple pages for domains with more vCPUs than fit in a
single page, and vmap() is the natural interface for contiguous
multi-page Xen VA mappings.

In non-debug builds map_domain_page_global() uses the directmap for low
MFNs rather than vmap(), so this change has a small overhead in the
common case. Debug builds already used vmap() indirectly.

With both paths using vmap(), vmap_to_page() can recover the
struct page_info * uniformly, so drop the 'page' field from
struct ioreq_page and update all callers accordingly.
Signed-off-by: Julian Vetter <julian.vetter@vates.tech>
---
Changes in v6:
- Updated commit message to clearly specify why these changes are made
- Added comment to say that this is {prepare,destroy}_ring_for_helper()
  just using vmap_to_page() + v{map,unmap}()
- Kept proper ordering in ioreq_server_free_mfn(), first clearing the va
  pointer before unmapping
---
 xen/arch/x86/hvm/ioreq.c | 55 +++++++++++++++++++++++++++++++++-------
 xen/common/ioreq.c       | 34 +++++++++++++------------
 xen/include/xen/ioreq.h  |  1 -
 3 files changed, 64 insertions(+), 26 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index a5fa97e149..3cabec141c 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include

 #include
@@ -128,8 +129,13 @@ static void hvm_unmap_ioreq_gfn(struct ioreq_server *s, bool buf)
     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
         return;

-    destroy_ring_for_helper(&iorp->va, iorp->page);
-    iorp->page = NULL;
+    /* Equivalent to destroy_ring_for_helper(), using vmap_to_page(). */
+    if ( iorp->va )
+    {
+        put_page_and_type(vmap_to_page(iorp->va));
+        vunmap(iorp->va);
+        iorp->va = NULL;
+    }

     hvm_free_ioreq_gfn(s, iorp->gfn);
     iorp->gfn = INVALID_GFN;
@@ -139,9 +145,12 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct page_info *page;
+    p2m_type_t p2mt;
+    mfn_t mfn;
     int rc;

-    if ( iorp->page )
+    if ( iorp->va )
     {
         /*
          * If a page has already been allocated (which will happen on
@@ -162,12 +171,40 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
         return -ENOMEM;

-    rc = prepare_ring_for_helper(d, gfn_x(iorp->gfn), &iorp->page,
-                                 &iorp->va);
-
+    /*
+     * Equivalent to prepare_ring_for_helper() using vmap(). Using vmap()
+     * rather than map_domain_page_global() ensures vmap_to_page() can
+     * recover the struct page_info * uniformly at teardown, which is
+     * needed to support multi-page ioreq mappings (see nr_ioreq_pages()).
+     */
+    rc = check_get_page_from_gfn(d, iorp->gfn, false, &p2mt, &page);
     if ( rc )
-        hvm_unmap_ioreq_gfn(s, buf);
+    {
+        if ( rc == -EAGAIN )
+            rc = -ENOENT;
+        goto fail;
+    }
+
+    if ( !get_page_type(page, PGT_writable_page) )
+    {
+        put_page(page);
+        rc = -EINVAL;
+        goto fail;
+    }
+
+    mfn = page_to_mfn(page);
+    iorp->va = vmap(&mfn, 1);
+    if ( !iorp->va )
+    {
+        put_page_and_type(page);
+        rc = -ENOMEM;
+        goto fail;
+    }
+
+    return 0;

+ fail:
+    hvm_unmap_ioreq_gfn(s, buf);
     return rc;
 }

@@ -179,7 +216,7 @@ static void hvm_remove_ioreq_gfn(struct ioreq_server *s, bool buf)
     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
         return;

-    if ( p2m_remove_page(d, iorp->gfn, page_to_mfn(iorp->page), 0) )
+    if ( p2m_remove_page(d, iorp->gfn, vmap_to_mfn(iorp->va), 0) )
         domain_crash(d);
     clear_page(iorp->va);
 }
@@ -195,7 +232,7 @@ static int hvm_add_ioreq_gfn(struct ioreq_server *s, bool buf)

     clear_page(iorp->va);

-    rc = p2m_add_page(d, iorp->gfn, page_to_mfn(iorp->page), 0, p2m_ram_rw);
+    rc = p2m_add_page(d, iorp->gfn, vmap_to_mfn(iorp->va), 0, p2m_ram_rw);
     if ( rc == 0 )
         paging_mark_pfn_dirty(d, _pfn(gfn_x(iorp->gfn)));

diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index f5fd30ce12..d8d02167b4 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -17,11 +17,11 @@
  */

 #include
-#include
 #include
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -262,8 +262,9 @@ static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
 {
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     struct page_info *page;
+    mfn_t mfn;

-    if ( iorp->page )
+    if ( iorp->va )
     {
         /*
          * If a guest frame has already been mapped (which may happen
@@ -291,11 +292,11 @@ static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
         return -ENODATA;
     }

-    iorp->va = __map_domain_page_global(page);
+    mfn = page_to_mfn(page);
+    iorp->va = vmap(&mfn, 1);
     if ( !iorp->va )
         goto fail;

-    iorp->page = page;
     clear_page(iorp->va);
     return 0;

@@ -309,15 +310,16 @@ static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
 static void ioreq_server_free_mfn(struct ioreq_server *s, bool buf)
 {
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    struct page_info *page = iorp->page;
+    struct page_info *page;
+    void *va;

-    if ( !page )
+    if ( !iorp->va )
         return;

-    iorp->page = NULL;
-
-    unmap_domain_page_global(iorp->va);
+    va = iorp->va;
+    page = vmap_to_page(va);
     iorp->va = NULL;
+    vunmap(va);

     put_page_alloc_ref(page);
     put_page_and_type(page);
@@ -333,7 +335,8 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)

     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
-        if ( (s->ioreq.page == page) || (s->bufioreq.page == page) )
+        if ( (s->ioreq.va && vmap_to_page(s->ioreq.va) == page) ||
+             (s->bufioreq.va && vmap_to_page(s->bufioreq.va) == page) )
         {
             found = true;
             break;
@@ -627,10 +630,9 @@ static void ioreq_server_deinit(struct ioreq_server *s)
      * NOTE: It is safe to call both arch_ioreq_server_unmap_pages() and
      *       ioreq_server_free_pages() in that order.
      *       This is because the former will do nothing if the pages
-     *       are not mapped, leaving the page to be freed by the latter.
-     *       However if the pages are mapped then the former will set
-     *       the page_info pointer to NULL, meaning the latter will do
-     *       nothing.
+     *       are not mapped, leaving the pages to be freed by the latter.
+     *       However if the pages are mapped then the former will clear
+     *       iorp->va, meaning the latter will do nothing.
      */
     arch_ioreq_server_unmap_pages(s);
     ioreq_server_free_pages(s);
@@ -819,12 +821,12 @@ int ioreq_server_get_frame(struct domain *d, ioservid_t id,
         if ( !HANDLE_BUFIOREQ(s) )
             goto out;

-        *mfn = page_to_mfn(s->bufioreq.page);
+        *mfn = vmap_to_mfn(s->bufioreq.va);
         rc = 0;
         break;

     case XENMEM_resource_ioreq_server_frame_ioreq(0):
-        *mfn = page_to_mfn(s->ioreq.page);
+        *mfn = vmap_to_mfn(s->ioreq.va);
         rc = 0;
         break;

diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index e86f0869fa..d63fa4729e 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -25,7 +25,6 @@

 struct ioreq_page {
     gfn_t gfn;
-    struct page_info *page;
     void *va;
 };

-- 
2.53.0

-- 
Julian Vetter | Vates Hypervisor & Kernel Developer
XCP-ng & Xen Orchestra - Vates solutions
web: https://vates.tech

From nobody Tue May 5 08:52:53 2026
From: "Julian Vetter" <julian.vetter@vates.tech>
Subject: [PATCH v6 2/3] ioreq: Indent ioreq_server_alloc_mfn() body one level deeper
To: xen-devel@lists.xenproject.org
Cc: "Jan Beulich", "Andrew Cooper", "Roger Pau Monné", "Anthony PERARD",
 "Michal Orzel", "Julien Grall", "Stefano Stabellini", "Julian Vetter"
Message-Id: <20260420093820.825969-3-julian.vetter@vates.tech>
In-Reply-To: <20260420093820.825969-1-julian.vetter@vates.tech>
References: <20260420093820.825969-1-julian.vetter@vates.tech>
Date: Mon, 20 Apr 2026 09:38:31 +0000

No functional change. This adds a wrapping block to prepare for the loop
that the subsequent patch introduces to handle multiple ioreq pages.

Signed-off-by: Julian Vetter <julian.vetter@vates.tech>
---
Changes in v6:
- Dropped the indentation change for ioreq_server_free_mfn, because the
  modifications in the next patch don't really merit the change anymore
---
 xen/common/ioreq.c | 28 +++++++++++++++-------------
 1 file changed, 15 insertions(+), 13 deletions(-)

diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index d8d02167b4..bae9b99c99 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -277,22 +277,24 @@ static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
         return 0;
     }

-    page = alloc_domheap_page(s->target, MEMF_no_refcount);
+    {
+        page = alloc_domheap_page(s->target, MEMF_no_refcount);

-    if ( !page )
-        return -ENOMEM;
+        if ( !page )
+            return -ENOMEM;

-    if ( !get_page_and_type(page, s->target, PGT_writable_page) )
-    {
-        /*
-         * The domain can't possibly know about this page yet, so failure
-         * here is a clear indication of something fishy going on.
-         */
-        domain_crash(s->emulator);
-        return -ENODATA;
-    }
+        if ( !get_page_and_type(page, s->target, PGT_writable_page) )
+        {
+            /*
+             * The domain can't possibly know about this page yet, so failure
+             * here is a clear indication of something fishy going on.
+             */
+            domain_crash(s->emulator);
+            return -ENODATA;
+        }

-    mfn = page_to_mfn(page);
+        mfn = page_to_mfn(page);
+    }
     iorp->va = vmap(&mfn, 1);
     if ( !iorp->va )
         goto fail;
-- 
2.53.0

-- 
Julian Vetter | Vates Hypervisor & Kernel Developer
XCP-ng & Xen Orchestra - Vates solutions
web: https://vates.tech

From nobody Tue May 5 08:52:53 2026
From: "Julian Vetter" <julian.vetter@vates.tech>
Subject: [PATCH v6 3/3] x86/ioreq: Extend ioreq server to support multiple ioreq pages
To: xen-devel@lists.xenproject.org
Cc: "Jan Beulich", "Andrew Cooper", "Roger Pau Monné", "Anthony PERARD",
 "Michal Orzel", "Julien Grall", "Stefano Stabellini", "Julian Vetter"
Message-Id: <20260420093820.825969-4-julian.vetter@vates.tech>
In-Reply-To: <20260420093820.825969-1-julian.vetter@vates.tech>
References: <20260420093820.825969-1-julian.vetter@vates.tech>
Date: Mon, 20 Apr 2026 09:38:32 +0000

As the number of vCPUs grows, a single ioreq page of 128 slots may not
be sufficient. Add support for allocating and mapping multiple ioreq
pages so that the ioreq region can scale with d->max_vcpus.

Introduce nr_ioreq_pages() to compute the number of pages required for a
given domain, and IOREQ_NR_PAGES_MAX as a compile-time upper bound
(based on HVM_MAX_VCPUS). ioreq_server_alloc_mfn() is updated to
allocate nr_ioreq_pages() pages and map them contiguously via vmap().
is_ioreq_server_page() iterates over all ioreq pages when checking page
ownership.
ioreq_server_get_frame() allows callers to retrieve any ioreq page by
index via the XENMEM_acquire_resource interface. On x86, the legacy GFN
mapping path (hvm_map_ioreq_gfn) is limited to a single ioreq page;
device models requiring more ioreq slots must use the resource mapping
interface (XENMEM_acquire_resource).

Signed-off-by: Julian Vetter <julian.vetter@vates.tech>
---
Changes in v6:
- Adapted the comment to not mention the guest, but the device model
- Replaced the dynamic allocation for the mfns array by a static array
- Fixed error handling in ioreq_server_alloc_mfn, using an extra
  nr_alloc variable to track the already allocated pages
- Dropped unnecessary void casts
---
 xen/arch/x86/hvm/ioreq.c |  8 ++++
 xen/common/ioreq.c       | 93 ++++++++++++++++++++++++++++------------
 xen/include/xen/ioreq.h  | 12 ++++++
 3 files changed, 86 insertions(+), 27 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 3cabec141c..ee679bdf5a 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -166,6 +166,14 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
     if ( d->is_dying )
         return -EINVAL;

+    /*
+     * The legacy GFN path supports only a single ioreq page. Device models
+     * requiring more ioreq slots must use the resource mapping interface
+     * (XENMEM_acquire_resource).
+     */
+    if ( !buf && nr_ioreq_pages(d) > 1 )
+        return -EOPNOTSUPP;
+
     iorp->gfn = hvm_alloc_ioreq_gfn(s);

     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index bae9b99c99..3a08e77597 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -261,8 +261,11 @@ bool vcpu_ioreq_handle_completion(struct vcpu *v)
 static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
 {
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    struct page_info *page;
-    mfn_t mfn;
+    unsigned int i, nr_alloc = 0, nr_pages = buf ? 1 : nr_ioreq_pages(s->target);
+    mfn_t mfns[IOREQ_NR_PAGES_MAX] = {};
+    int rc;
+
+    ASSERT(nr_pages <= IOREQ_NR_PAGES_MAX);

     if ( iorp->va )
     {
@@ -277,11 +280,16 @@ static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
         return 0;
     }

+    for ( i = 0; i < nr_pages; i++ )
     {
-        page = alloc_domheap_page(s->target, MEMF_no_refcount);
+        struct page_info *page = alloc_domheap_page(s->target,
+                                                    MEMF_no_refcount);

         if ( !page )
-            return -ENOMEM;
+        {
+            rc = -ENOMEM;
+            goto fail;
+        }

         if ( !get_page_and_type(page, s->target, PGT_writable_page) )
         {
@@ -290,41 +298,59 @@ static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
              * here is a clear indication of something fishy going on.
              */
             domain_crash(s->emulator);
-            return -ENODATA;
+            rc = -ENODATA;
+            goto fail;
         }

-        mfn = page_to_mfn(page);
+        mfns[nr_alloc++] = page_to_mfn(page);
     }
-    iorp->va = vmap(&mfn, 1);
+
+    iorp->va = vmap(mfns, nr_pages);
     if ( !iorp->va )
+    {
+        rc = -ENOMEM;
         goto fail;
+    }

-    clear_page(iorp->va);
+    memset(iorp->va, 0, nr_pages * PAGE_SIZE);
     return 0;

 fail:
-    put_page_alloc_ref(page);
-    put_page_and_type(page);
+    while ( nr_alloc-- )
+    {
+        struct page_info *page = mfn_to_page(mfns[nr_alloc]);
+
+        put_page_alloc_ref(page);
+        put_page_and_type(page);
+    }

-    return -ENOMEM;
+    return rc;
 }

 static void ioreq_server_free_mfn(struct ioreq_server *s, bool buf)
 {
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    struct page_info *page;
+    unsigned int i, nr_pages = buf ? 1 : nr_ioreq_pages(s->target);
+    struct page_info *pages[IOREQ_NR_PAGES_MAX];
     void *va;

     if ( !iorp->va )
         return;

+    ASSERT(nr_pages <= IOREQ_NR_PAGES_MAX);
+
+    for ( i = 0; i < nr_pages; i++ )
+        pages[i] = vmap_to_page(iorp->va + i * PAGE_SIZE);
+
     va = iorp->va;
-    page = vmap_to_page(va);
     iorp->va = NULL;
     vunmap(va);

-    put_page_alloc_ref(page);
-    put_page_and_type(page);
+    for ( i = 0; i < nr_pages; i++ )
+    {
+        put_page_alloc_ref(pages[i]);
+        put_page_and_type(pages[i]);
+    }
 }

 bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
@@ -337,12 +363,25 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)

     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
-        if ( (s->ioreq.va && vmap_to_page(s->ioreq.va) == page) ||
-             (s->bufioreq.va && vmap_to_page(s->bufioreq.va) == page) )
+        unsigned int i;
+
+        if ( s->bufioreq.va && vmap_to_page(s->bufioreq.va) == page )
         {
             found = true;
             break;
         }
+
+        for ( i = 0; i < nr_ioreq_pages(d) && s->ioreq.va; i++ )
+        {
+            if ( vmap_to_page(s->ioreq.va + i * PAGE_SIZE) == page )
+            {
+                found = true;
+                break;
+            }
+        }
+
+        if ( found )
+            break;
     }

     rspin_unlock(&d->ioreq_server.lock);
@@ -816,26 +855,26 @@ int ioreq_server_get_frame(struct domain *d, ioservid_t id,
     if ( rc )
         goto out;

-    switch ( idx )
+    if ( idx == XENMEM_resource_ioreq_server_frame_bufioreq )
     {
-    case XENMEM_resource_ioreq_server_frame_bufioreq:
         rc = -ENOENT;
         if ( !HANDLE_BUFIOREQ(s) )
             goto out;

         *mfn = vmap_to_mfn(s->bufioreq.va);
         rc = 0;
-        break;
+    }
+    else if ( idx >= XENMEM_resource_ioreq_server_frame_ioreq(0) &&
+              idx < XENMEM_resource_ioreq_server_frame_ioreq(nr_ioreq_pages(d)) )
+    {
+        unsigned int page_idx = idx - XENMEM_resource_ioreq_server_frame_ioreq(0);

-    case XENMEM_resource_ioreq_server_frame_ioreq(0):
-        *mfn = vmap_to_mfn(s->ioreq.va);
+        ASSERT(page_idx < nr_ioreq_pages(d));
+        *mfn = vmap_to_mfn(s->ioreq.va + page_idx * PAGE_SIZE);
         rc = 0;
-
break; - - default: - rc =3D -EINVAL; - break; } + else + rc =3D -EINVAL; =20 out: rspin_unlock(&d->ioreq_server.lock); diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h index d63fa4729e..d2a08c2371 100644 --- a/xen/include/xen/ioreq.h +++ b/xen/include/xen/ioreq.h @@ -35,6 +35,18 @@ struct ioreq_vcpu { bool pending; }; =20 +/* + * Maximum number of ioreq pages, based on the maximum number + * of vCPUs and the number of ioreq slots per page. + */ +#define IOREQ_NR_PAGES_MAX \ + DIV_ROUND_UP(HVM_MAX_VCPUS, PAGE_SIZE / sizeof(ioreq_t)) + +static inline unsigned int nr_ioreq_pages(const struct domain *d) +{ + return DIV_ROUND_UP(d->max_vcpus, PAGE_SIZE / sizeof(ioreq_t)); +} + #define NR_IO_RANGE_TYPES (XEN_DMOP_IO_RANGE_PCI + 1) #define MAX_NR_IO_RANGES 256 =20 --=20 2.53.0 -- Julian Vetter | Vates Hypervisor & Kernel Developer XCP-ng & Xen Orchestra - Vates solutions web: https://vates.tech