From nobody Wed Feb 11 01:00:15 2026
From: "Julian Vetter" <julian.vetter@vates.tech>
Subject: [PATCH] x86/ioreq: Extend ioreq server to support multiple ioreq pages
X-Mailer: git-send-email 2.51.0
To: xen-devel@lists.xenproject.org
Cc: "Jan Beulich", "Andrew Cooper", "Roger Pau Monné", "Anthony PERARD",
 "Michal Orzel", "Julien Grall", "Stefano Stabellini", "Julian Vetter"
Message-Id: <20260209123025.2628513-1-julian.vetter@vates.tech>
Date: Mon, 09 Feb 2026 12:30:33 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

A single shared ioreq page provides PAGE_SIZE / sizeof(struct ioreq) = 128
slots, which limits HVM guests to 128 vCPUs. To support more vCPUs, extend
the ioreq server to allocate multiple contiguous ioreq pages, sized from
the maximum number of vCPUs.

This patch replaces the single ioreq_page with an array of pages
(ioreq_pages) and extends the GFN allocation to find runs of contiguous
free GFNs for multi-page mappings. All existing single-page paths
(bufioreq, legacy clients) remain unchanged.
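For illustration only (not part of the patch), the per-vCPU slot addressing
that get_ioreq() uses below boils down to the following standalone sketch;
PAGE_SIZE, the 32-byte ioreq_t size and an HVM_MAX_VCPUS of 256 are assumed
values chosen just for this example:

    /* Illustrative only: mirrors the vcpu_id -> (page, slot) mapping. */
    #include <stdio.h>

    #define PAGE_SIZE       4096u
    #define IOREQ_SIZE      32u   /* assumed sizeof(struct ioreq) */
    #define IOREQS_PER_PAGE (PAGE_SIZE / IOREQ_SIZE)   /* 128 slots/page */
    #define HVM_MAX_VCPUS   256u  /* hypothetical raised vCPU limit */
    #define NR_IOREQ_PAGES  ((HVM_MAX_VCPUS + IOREQS_PER_PAGE - 1) / \
                             IOREQS_PER_PAGE)

    int main(void)
    {
        unsigned int vcpu_id = 130;                        /* example vCPU */
        unsigned int page_idx = vcpu_id / IOREQS_PER_PAGE; /* ioreq page   */
        unsigned int slot_idx = vcpu_id % IOREQS_PER_PAGE; /* slot in page */

        printf("vcpu %u -> page %u, slot %u of %u page(s)\n",
               vcpu_id, page_idx, slot_idx, NR_IOREQ_PAGES);
        return 0;
    }

With those assumptions this prints "vcpu 130 -> page 1, slot 2 of 2
page(s)", i.e. vCPUs beyond 127 simply spill into the next ioreq page.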
Signed-off-by: Julian Vetter <julian.vetter@vates.tech>
---
 xen/arch/x86/hvm/ioreq.c | 160 ++++++++++++++++++++++++++++++---------
 xen/common/ioreq.c       | 145 +++++++++++++++++++++++------------
 xen/include/xen/ioreq.h  |  13 +++-
 3 files changed, 230 insertions(+), 88 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index a5fa97e149..a5c2a4baca 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -71,6 +71,38 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s)
     return INVALID_GFN;
 }
 
+static gfn_t hvm_alloc_ioreq_gfns(struct ioreq_server *s,
+                                  unsigned int nr_pages)
+{
+    struct domain *d = s->target;
+    unsigned long mask = d->arch.hvm.ioreq_gfn.mask;
+    unsigned int i, run;
+
+    /* Find nr_pages consecutive set bits */
+    for ( i = 0, run = 0; i < BITS_PER_LONG; i++ )
+    {
+        if ( test_bit(i, &mask) )
+        {
+            if ( ++run == nr_pages )
+            {
+                /* Found a run - clear all bits and return base GFN */
+                unsigned int start = i - nr_pages + 1;
+                for ( unsigned int j = start; j <= i; j++ )
+                    clear_bit(j, &d->arch.hvm.ioreq_gfn.mask);
+                return _gfn(d->arch.hvm.ioreq_gfn.base + start);
+            }
+        }
+        else
+            run = 0;
+    }
+
+    /* Fall back to legacy for single page only */
+    if ( nr_pages == 1 )
+        return hvm_alloc_legacy_ioreq_gfn(s);
+
+    return INVALID_GFN;
+}
+
 static gfn_t hvm_alloc_ioreq_gfn(struct ioreq_server *s)
 {
     struct domain *d = s->target;
@@ -121,52 +153,95 @@ static void hvm_free_ioreq_gfn(struct ioreq_server *s, gfn_t gfn)
     }
 }
 
+static void hvm_free_ioreq_gfns(struct ioreq_server *s, gfn_t gfn,
+                                unsigned int nr_pages)
+{
+    unsigned int i;
+
+    for ( i = 0; i < nr_pages; i++ )
+        hvm_free_ioreq_gfn(s, _gfn(gfn_x(gfn) + i));
+}
+
 static void hvm_unmap_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
-    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    unsigned int i, nr_pages = buf ? 1 : NR_IOREQ_PAGES;
 
-    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-        return;
+    for ( i = 0; i < nr_pages; i++ )
+    {
+        struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreqs.page[i];
+
+        if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+            continue;
 
-    destroy_ring_for_helper(&iorp->va, iorp->page);
-    iorp->page = NULL;
+        destroy_ring_for_helper(&iorp->va, iorp->page);
+        iorp->page = NULL;
 
-    hvm_free_ioreq_gfn(s, iorp->gfn);
-    iorp->gfn = INVALID_GFN;
+        hvm_free_ioreq_gfn(s, iorp->gfn);
+        iorp->gfn = INVALID_GFN;
+    }
 }
 
 static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
-    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    unsigned int i, nr_pages = buf ? 1 : NR_IOREQ_PAGES;
+    gfn_t base_gfn;
     int rc;
 
-    if ( iorp->page )
+    /* Check if already mapped */
+    for ( i = 0; i < nr_pages; i++ )
     {
-        /*
-         * If a page has already been allocated (which will happen on
-         * demand if ioreq_server_get_frame() is called), then
-         * mapping a guest frame is not permitted.
-         */
-        if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-            return -EPERM;
+        struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreqs.page[i];
 
-        return 0;
+        if ( iorp->page )
+        {
+            /*
+             * If a page has already been allocated (which will happen on
+             * demand if ioreq_server_get_frame() is called), then
+             * mapping a guest frame is not permitted.
+             */
+            if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+                return -EPERM;
+
+            return 0;
+        }
     }
 
     if ( d->is_dying )
         return -EINVAL;
 
-    iorp->gfn = hvm_alloc_ioreq_gfn(s);
+    /* Allocate contiguous GFNs for all pages */
+    base_gfn = buf ? hvm_alloc_ioreq_gfn(s) : hvm_alloc_ioreq_gfns(s, nr_pages);
 
-    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+    if ( gfn_eq(base_gfn, INVALID_GFN) )
        return -ENOMEM;
 
-    rc = prepare_ring_for_helper(d, gfn_x(iorp->gfn), &iorp->page,
-                                 &iorp->va);
+    /* Map each page */
+    for ( i = 0; i < nr_pages; i++ )
+    {
+        struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreqs.page[i];
+
+        iorp->gfn = _gfn(gfn_x(base_gfn) + i);
 
-    if ( rc )
-        hvm_unmap_ioreq_gfn(s, buf);
+        rc = prepare_ring_for_helper(d, gfn_x(iorp->gfn), &iorp->page,
+                                     &iorp->va);
+        if ( rc )
+            goto fail;
+    }
+
+    return 0;
+
+fail:
+    /* Unmap any pages we successfully mapped */
+    while ( i-- > 0 )
+    {
+        struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreqs.page[i];
+
+        destroy_ring_for_helper(&iorp->va, iorp->page);
+        iorp->page = NULL;
+        iorp->gfn = INVALID_GFN;
+    }
+    hvm_free_ioreq_gfns(s, base_gfn, nr_pages);
 
     return rc;
 }
@@ -174,32 +249,43 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
 static void hvm_remove_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
-    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    unsigned int i, nr_pages = buf ? 1 : NR_IOREQ_PAGES;
 
-    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-        return;
+    for ( i = 0; i < nr_pages; i++ )
+    {
+        struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreqs.page[i];
+
+        if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+            continue;
 
-    if ( p2m_remove_page(d, iorp->gfn, page_to_mfn(iorp->page), 0) )
-        domain_crash(d);
-    clear_page(iorp->va);
+        if ( p2m_remove_page(d, iorp->gfn, page_to_mfn(iorp->page), 0) )
+            domain_crash(d);
+        clear_page(iorp->va);
+    }
 }
 
 static int hvm_add_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
-    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    unsigned int i, nr_pages = buf ? 1 : NR_IOREQ_PAGES;
     int rc;
 
-    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-        return 0;
+    for ( i = 0; i < nr_pages; i++ )
+    {
+        struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreqs.page[i];
 
-    clear_page(iorp->va);
+        if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+            continue;
 
-    rc = p2m_add_page(d, iorp->gfn, page_to_mfn(iorp->page), 0, p2m_ram_rw);
-    if ( rc == 0 )
-        paging_mark_pfn_dirty(d, _pfn(gfn_x(iorp->gfn)));
+        clear_page(iorp->va);
 
-    return rc;
+        rc = p2m_add_page(d, iorp->gfn, page_to_mfn(iorp->page), 0, p2m_ram_rw);
+        if ( rc )
+            return rc;
+
+        paging_mark_pfn_dirty(d, _pfn(gfn_x(iorp->gfn)));
+    }
+    return 0;
 }
 
 int arch_ioreq_server_map_pages(struct ioreq_server *s)
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index f5fd30ce12..13c638db53 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -95,12 +95,15 @@ static struct ioreq_server *get_ioreq_server(const struct domain *d,
 
 static ioreq_t *get_ioreq(struct ioreq_server *s, struct vcpu *v)
 {
-    shared_iopage_t *p = s->ioreq.va;
+    unsigned int vcpu_id = v->vcpu_id;
+    unsigned int page_idx = vcpu_id / IOREQS_PER_PAGE;
+    unsigned int slot_idx = vcpu_id % IOREQS_PER_PAGE;
+    shared_iopage_t *p = s->ioreqs.page[page_idx].va;
 
     ASSERT((v == current) || !vcpu_runnable(v));
     ASSERT(p != NULL);
 
-    return &p->vcpu_ioreq[v->vcpu_id];
+    return &p->vcpu_ioreq[slot_idx];
 }
 
 /*
@@ -260,84 +263,120 @@ bool vcpu_ioreq_handle_completion(struct vcpu *v)
 
 static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
 {
-    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     struct page_info *page;
+    unsigned int i, j, nr_pages = buf ? 1 : NR_IOREQ_PAGES;
 
-    if ( iorp->page )
+    for ( i = 0; i < nr_pages; i++ )
     {
-        /*
-         * If a guest frame has already been mapped (which may happen
-         * on demand if ioreq_server_get_info() is called), then
-         * allocating a page is not permitted.
-         */
-        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
-            return -EPERM;
+        struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreqs.page[i];
 
-        return 0;
-    }
+        if ( iorp->page )
+        {
+            /*
+             * If a guest frame has already been mapped (which may happen
+             * on demand if ioreq_server_get_info() is called), then
+             * allocating a page is not permitted.
+             */
+            if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
+                return -EPERM;
+            continue; /* Already allocated */
+        }
 
-    page = alloc_domheap_page(s->target, MEMF_no_refcount);
+        page = alloc_domheap_page(s->target, MEMF_no_refcount);
+        if ( !page )
+            goto fail;
 
-    if ( !page )
-        return -ENOMEM;
+        if ( !get_page_and_type(page, s->target, PGT_writable_page) )
+        {
+            /*
+             * The domain can't possibly know about this page yet, so failure
+             * here is a clear indication of something fishy going on.
+             */
+            put_page_alloc_ref(page);
+            domain_crash(s->emulator);
+            return -ENODATA;
+        }
 
-    if ( !get_page_and_type(page, s->target, PGT_writable_page) )
-    {
-        /*
-         * The domain can't possibly know about this page yet, so failure
-         * here is a clear indication of something fishy going on.
-         */
-        domain_crash(s->emulator);
-        return -ENODATA;
-    }
+        /* Assign early so cleanup can find it */
+        iorp->page = page;
 
-    iorp->va = __map_domain_page_global(page);
-    if ( !iorp->va )
-        goto fail;
+        iorp->va = __map_domain_page_global(page);
+        if ( !iorp->va )
+            goto fail;
+
+        clear_page(iorp->va);
+    }
 
-    iorp->page = page;
-    clear_page(iorp->va);
     return 0;
 
- fail:
-    put_page_alloc_ref(page);
-    put_page_and_type(page);
+fail:
+    /* Free all previously allocated pages */
+    for ( j = 0; j <= i; j++ )
+    {
+        struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreqs.page[j];
+        if ( iorp->page )
+        {
+            if ( iorp->va )
+                unmap_domain_page_global(iorp->va);
+            iorp->va = NULL;
+            put_page_alloc_ref(iorp->page);
+            put_page_and_type(iorp->page);
+            iorp->page = NULL;
+        }
+    }
 
     return -ENOMEM;
 }
 
 static void ioreq_server_free_mfn(struct ioreq_server *s, bool buf)
 {
-    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    struct page_info *page = iorp->page;
+    unsigned int i, nr_pages = buf ? 1 : NR_IOREQ_PAGES;
 
-    if ( !page )
-        return;
+    for ( i = 0; i < nr_pages; i++ )
+    {
+        struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreqs.page[i];
+        struct page_info *page = iorp->page;
 
-    iorp->page = NULL;
+        if ( !page )
+            continue;
+
+        iorp->page = NULL;
 
-    unmap_domain_page_global(iorp->va);
-    iorp->va = NULL;
+        unmap_domain_page_global(iorp->va);
+        iorp->va = NULL;
 
-    put_page_alloc_ref(page);
-    put_page_and_type(page);
+        put_page_alloc_ref(page);
+        put_page_and_type(page);
+    }
 }
 
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
 {
     const struct ioreq_server *s;
-    unsigned int id;
+    unsigned int id, i;
     bool found = false;
 
     rspin_lock(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
-        if ( (s->ioreq.page == page) || (s->bufioreq.page == page) )
+        if ( s->bufioreq.page == page )
        {
             found = true;
             break;
         }
+
+        for ( i = 0; i < NR_IOREQ_PAGES; i++ )
+        {
+            if ( s->ioreqs.page[i].page == page )
+            {
+                found = true;
+                break;
+            }
+        }
+
+        if ( found )
+            break;
     }
 
     rspin_unlock(&d->ioreq_server.lock);
@@ -348,9 +387,11 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
 static void ioreq_server_update_evtchn(struct ioreq_server *s,
                                        struct ioreq_vcpu *sv)
 {
+    unsigned int page_idx = sv->vcpu->vcpu_id / IOREQS_PER_PAGE;
+
     ASSERT(spin_is_locked(&s->lock));
 
-    if ( s->ioreq.va != NULL )
+    if ( s->ioreqs.page[page_idx].va != NULL )
     {
         ioreq_t *p = get_ioreq(s, sv->vcpu);
 
@@ -579,6 +620,7 @@ static int ioreq_server_init(struct ioreq_server *s,
 {
     struct domain *currd = current->domain;
     struct vcpu *v;
+    unsigned int i;
     int rc;
 
     s->target = d;
@@ -590,7 +632,8 @@ static int ioreq_server_init(struct ioreq_server *s,
     INIT_LIST_HEAD(&s->ioreq_vcpu_list);
     spin_lock_init(&s->bufioreq_lock);
 
-    s->ioreq.gfn = INVALID_GFN;
+    for ( i = 0; i < NR_IOREQ_PAGES; i++ )
+        s->ioreqs.page[i].gfn = INVALID_GFN;
     s->bufioreq.gfn = INVALID_GFN;
 
     rc = ioreq_server_alloc_rangesets(s, id);
@@ -768,8 +811,9 @@ static int ioreq_server_get_info(struct domain *d, ioservid_t id,
         goto out;
     }
 
+    /* Just return the first ioreq page because the region is contiguous */
     if ( ioreq_gfn )
-        *ioreq_gfn = gfn_x(s->ioreq.gfn);
+        *ioreq_gfn = gfn_x(s->ioreqs.page[0].gfn);
 
     if ( HANDLE_BUFIOREQ(s) )
     {
@@ -822,12 +866,13 @@ int ioreq_server_get_frame(struct domain *d, ioservid_t id,
         *mfn = page_to_mfn(s->bufioreq.page);
         rc = 0;
         break;
+    case XENMEM_resource_ioreq_server_frame_ioreq(0) ...
+         XENMEM_resource_ioreq_server_frame_ioreq(NR_IOREQ_PAGES - 1):
+        unsigned int page_idx = idx - XENMEM_resource_ioreq_server_frame_ioreq(0);
 
-    case XENMEM_resource_ioreq_server_frame_ioreq(0):
-        *mfn = page_to_mfn(s->ioreq.page);
+        *mfn = page_to_mfn(s->ioreqs.page[page_idx].page);
         rc = 0;
         break;
-
     default:
         rc = -EINVAL;
         break;
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index e86f0869fa..8604311cb4 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -19,9 +19,16 @@
 #ifndef __XEN_IOREQ_H__
 #define __XEN_IOREQ_H__
 
+#include
 #include
 
 #include
+#include
+#include
+
+/* 4096 / 32 = 128 ioreq slots per page */
+#define IOREQS_PER_PAGE (PAGE_SIZE / sizeof(struct ioreq))
+#define NR_IOREQ_PAGES DIV_ROUND_UP(HVM_MAX_VCPUS, IOREQS_PER_PAGE)
 
 struct ioreq_page {
     gfn_t gfn;
@@ -29,6 +36,10 @@ struct ioreq_page {
     void *va;
 };
 
+struct ioreq_pages {
+    struct ioreq_page page[NR_IOREQ_PAGES];
+};
+
 struct ioreq_vcpu {
     struct list_head list_entry;
     struct vcpu *vcpu;
@@ -45,7 +56,7 @@ struct ioreq_server {
     /* Lock to serialize toolstack modifications */
     spinlock_t lock;
 
-    struct ioreq_page ioreq;
+    struct ioreq_pages ioreqs;
     struct list_head ioreq_vcpu_list;
     struct ioreq_page bufioreq;
 
-- 
2.51.0

--
Julian Vetter | Vates
Hypervisor & Kernel Developer
XCP-ng & Xen Orchestra - Vates solutions
web: https://vates.tech