From nobody Mon Mar 23 21:25:17 2026
From: "Julian Vetter" <julian.vetter@vates.tech>
Subject: [PATCH v5 3/3] x86/ioreq: Extend ioreq server to support multiple
 ioreq pages
X-Mailer: git-send-email 2.51.0
To: xen-devel@lists.xenproject.org
Cc: "Jan Beulich", "Andrew Cooper", "Roger Pau Monné", "Anthony PERARD",
 "Michal Orzel", "Julien Grall", "Stefano Stabellini", "Julian Vetter"
Message-Id: <20260316111653.178104-4-julian.vetter@vates.tech>
In-Reply-To: <20260316111653.178104-1-julian.vetter@vates.tech>
References: <20260316111653.178104-1-julian.vetter@vates.tech>
Date: Mon, 16 Mar 2026 11:17:00 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

A domain with more than (PAGE_SIZE / sizeof(ioreq_t)) vCPUs needs more
than one ioreq page to hold all per-vCPU ioreq slots. In order to
support this, a number of changes have been made:

1. Add nr_ioreq_pages() to compute the required number of pages,
   defined as DIV_ROUND_UP(d->max_vcpus, PAGE_SIZE / sizeof(ioreq_t)).

2. ioreq_server_alloc_mfn() now allocates nr_ioreq_pages() pages for
   the non-buf case, builds an mfn_t array, and calls vmap() to map
   them contiguously. The buf path remains single-page.

3. ioreq_server_free_mfn() uses vmap_size() to determine how many
   pages to release.

4. is_ioreq_server_page() loops over all mapped ioreq pages using
   vmap_size() and vmap_to_page() with per-page offsets.

5. ioreq_server_get_frame() now handles idx values in the range
   [XENMEM_resource_ioreq_server_frame_ioreq(0),
   XENMEM_resource_ioreq_server_frame_ioreq(nr_pages - 1)], returning
   the MFN via vmap_to_mfn() with the appropriate page offset.

The legacy GFN path (hvm_map_ioreq_gfn) is restricted to a single
page. Domains with more vCPUs must use XENMEM_acquire_resource.
Signed-off-by: Julian Vetter
---
Changes in v5:
- Reduced complexity a lot because there is no distinction between the
  buf and !buf cases
- Directly use va and gfn from struct ioreq_page, dropped additional
  members in struct ioreq_server
---
 xen/arch/x86/hvm/ioreq.c |   8 +++
 xen/common/ioreq.c       | 103 +++++++++++++++++++++++++++++----------
 xen/include/xen/ioreq.h  |   6 +++
 3 files changed, 90 insertions(+), 27 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 145dcba5c1..872247e300 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -163,6 +163,14 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
     if ( d->is_dying )
         return -EINVAL;

+    /*
+     * The legacy GFN path supports only a single ioreq page. Guests requiring
+     * more ioreq slots must use the resource mapping interface
+     * (XENMEM_acquire_resource).
+     */
+    if ( !buf && nr_ioreq_pages(d) > 1 )
+        return -EOPNOTSUPP;
+
     base_gfn = hvm_alloc_ioreq_gfn(s);

     if ( gfn_eq(base_gfn, INVALID_GFN) )
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index b22f656701..71fac2bc7b 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -261,8 +261,9 @@ bool vcpu_ioreq_handle_completion(struct vcpu *v)
 static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
 {
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    struct page_info *page;
-    mfn_t mfn;
+    unsigned int i, nr_pages = buf ? 1 : nr_ioreq_pages(s->target);
+    mfn_t *mfns;
+    int rc;

     if ( iorp->va )
     {
@@ -277,11 +278,20 @@ static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
         return 0;
     }

+    mfns = xmalloc_array(mfn_t, nr_pages);
+    if ( !mfns )
+        return -ENOMEM;
+
+    for ( i = 0; i < nr_pages; i++ )
     {
-        page = alloc_domheap_page(s->target, MEMF_no_refcount);
+        struct page_info *page = alloc_domheap_page(s->target,
+                                                    MEMF_no_refcount);

         if ( !page )
-            return -ENOMEM;
+        {
+            rc = -ENOMEM;
+            goto fail;
+        }

         if ( !get_page_and_type(page, s->target, PGT_writable_page) )
         {
@@ -290,41 +300,60 @@ static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
              * here is a clear indication of something fishy going on.
              */
             domain_crash(s->emulator);
-            return -ENODATA;
+            rc = -ENODATA;
+            goto fail;
         }

-        mfn = page_to_mfn(page);
+        mfns[i] = page_to_mfn(page);
     }
-    iorp->va = vmap(&mfn, 1);
+
+    iorp->va = vmap(mfns, nr_pages);
     if ( !iorp->va )
+    {
+        rc = -ENOMEM;
         goto fail;
+    }
+
+    xfree(mfns);
+
+    for ( i = 0; i < nr_pages; i++ )
+        clear_page((char *)iorp->va + i * PAGE_SIZE);

-    clear_page(iorp->va);
     return 0;

 fail:
-    put_page_alloc_ref(page);
-    put_page_and_type(page);
+    while ( i-- )
+    {
+        struct page_info *page = mfn_to_page(mfns[i]);
+
+        put_page_alloc_ref(page);
+        put_page_and_type(page);
+    }
+    xfree(mfns);

-    return -ENOMEM;
+    return rc;
 }

 static void ioreq_server_free_mfn(struct ioreq_server *s, bool buf)
 {
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    struct page_info *page;
+    unsigned int i, nr_pages;

     if ( !iorp->va )
         return;

+    nr_pages = vmap_size(iorp->va);
+
+    for ( i = 0; i < nr_pages; i++ )
     {
-        page = vmap_to_page(iorp->va);
-        vunmap(iorp->va);
-        iorp->va = NULL;
+        struct page_info *page = vmap_to_page(iorp->va + i * PAGE_SIZE);

         put_page_alloc_ref(page);
         put_page_and_type(page);
     }
+
+    vunmap(iorp->va);
+    iorp->va = NULL;
 }

 bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
@@ -337,12 +366,28 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)

     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
-        if ( (s->ioreq.va && vmap_to_page(s->ioreq.va) == page) ||
-             (s->bufioreq.va && vmap_to_page(s->bufioreq.va) == page) )
+        if ( s->bufioreq.va && vmap_to_page(s->bufioreq.va) == page )
         {
             found = true;
             break;
         }
+
+        if ( s->ioreq.va )
+        {
+            unsigned int i;
+
+            for ( i = 0; i < vmap_size(s->ioreq.va); i++ )
+            {
+                if ( vmap_to_page(s->ioreq.va + i * PAGE_SIZE) == page )
+                {
+                    found = true;
+                    break;
+                }
+            }
+
+            if ( found )
+                break;
+        }
     }

     rspin_unlock(&d->ioreq_server.lock);
@@ -818,26 +863,30 @@ int ioreq_server_get_frame(struct domain *d, ioservid_t id,
     if ( rc )
         goto out;

-    switch ( idx )
+    if ( idx == XENMEM_resource_ioreq_server_frame_bufioreq )
     {
-    case XENMEM_resource_ioreq_server_frame_bufioreq:
         rc = -ENOENT;
         if ( !HANDLE_BUFIOREQ(s) )
             goto out;

         *mfn = page_to_mfn(vmap_to_page(s->bufioreq.va));
         rc = 0;
-        break;
+    }
+    else if ( idx >= XENMEM_resource_ioreq_server_frame_ioreq(0) &&
+              idx < XENMEM_resource_ioreq_server_frame_ioreq(nr_ioreq_pages(d)) )
+    {
+        unsigned int page_idx = idx - XENMEM_resource_ioreq_server_frame_ioreq(0);
+        if ( page_idx >= vmap_size(s->ioreq.va) )
+        {
+            rc = -EINVAL;
+            goto out;
+        }

-    case XENMEM_resource_ioreq_server_frame_ioreq(0):
-        *mfn = page_to_mfn(vmap_to_page(s->ioreq.va));
+        *mfn = vmap_to_mfn(s->ioreq.va + page_idx * PAGE_SIZE);
         rc = 0;
-        break;
-
-    default:
-        rc = -EINVAL;
-        break;
     }
+    else
+        rc = -EINVAL;

 out:
     rspin_unlock(&d->ioreq_server.lock);
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index d63fa4729e..c12480472d 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -19,6 +19,7 @@
 #ifndef __XEN_IOREQ_H__
 #define __XEN_IOREQ_H__

+#include
 #include

 #include
@@ -82,6 +83,11 @@ static inline bool ioreq_needs_completion(const ioreq_t *ioreq)
 #define HANDLE_BUFIOREQ(s) \
     ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)

+static inline unsigned int nr_ioreq_pages(const struct domain *d)
+{
+    return DIV_ROUND_UP(d->max_vcpus, PAGE_SIZE / sizeof(ioreq_t));
+}
+
 bool domain_has_ioreq_server(const struct domain *d);

 bool vcpu_ioreq_pending(struct vcpu *v);
--
2.51.0

--
Julian Vetter | Vates Hypervisor & Kernel Developer
XCP-ng & Xen Orchestra - Vates solutions
web: https://vates.tech