From nobody Mon Apr 13 03:43:31 2026
From: "Julian Vetter" <julian.vetter@vates.tech>
Subject: [PATCH v2 1/3] x86/ioreq: Add missing put_page_alloc_ref(page)
To: xen-devel@lists.xenproject.org
Cc: "Jan Beulich", "Andrew Cooper", "Roger Pau Monné", "Anthony PERARD",
 "Michal Orzel", "Julien Grall", "Stefano Stabellini", "Julian Vetter"
Message-Id: <20260216134334.3510048-2-julian.vetter@vates.tech>
In-Reply-To: <20260216134334.3510048-1-julian.vetter@vates.tech>
References: <20260216134334.3510048-1-julian.vetter@vates.tech>
Date: Mon, 16 Feb 2026 13:43:57 +0000
Content-Type: text/plain; charset="utf-8"

The page was allocated with MEMF_no_refcount. That flag means the page
carries no regular reference, but it does still hold the allocation
reference. If get_page_and_type() fails, that allocation reference must
still be released; otherwise the page leaks. domain_crash() does not
free individual pages; it only marks the domain for destruction. Domain
teardown will eventually free domain heap pages, but only those it can
find, and a dangling allocation reference prevents such a page from
being fully freed during domain cleanup.

Signed-off-by: Julian Vetter <julian.vetter@vates.tech>
---
Changes in v2:
- New patch
---
 xen/common/ioreq.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index f5fd30ce12..5d722c8d4e 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -287,6 +287,7 @@ static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
          * The domain can't possibly know about this page yet, so failure
          * here is a clear indication of something fishy going on.
          */
+        put_page_alloc_ref(page);
         domain_crash(s->emulator);
         return -ENODATA;
     }
-- 
2.51.0

-- 
Julian Vetter | Vates Hypervisor & Kernel Developer
XCP-ng & Xen Orchestra - Vates solutions
web: https://vates.tech
From nobody Mon Apr 13 03:43:31 2026
From: "Julian Vetter" <julian.vetter@vates.tech>
Subject: [PATCH v2 2/3] x86/ioreq: Prepare spacing for upcoming patch
To: xen-devel@lists.xenproject.org
Cc: "Jan Beulich", "Andrew Cooper", "Roger Pau Monné", "Anthony PERARD",
 "Michal Orzel", "Julien Grall", "Stefano Stabellini", "Julian Vetter"
Message-Id: <20260216134334.3510048-3-julian.vetter@vates.tech>
In-Reply-To: <20260216134334.3510048-1-julian.vetter@vates.tech>
References: <20260216134334.3510048-1-julian.vetter@vates.tech>
Date: Mon, 16 Feb 2026 13:43:59 +0000
Content-Type: text/plain; charset="utf-8"

This patch only changes indentation, to make the next patch easier to
review.

Signed-off-by: Julian Vetter <julian.vetter@vates.tech>
---
Changes in v2:
- New patch
---
 xen/arch/x86/hvm/ioreq.c | 86 ++++++++++++++++++++++------------------
 1 file changed, 47 insertions(+), 39 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index a5fa97e149..5ebc48dbd4 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -125,14 +125,16 @@ static void hvm_unmap_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
 
-    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-        return;
+    {
+        if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+            return;
 
-    destroy_ring_for_helper(&iorp->va, iorp->page);
-    iorp->page = NULL;
+        destroy_ring_for_helper(&iorp->va, iorp->page);
+        iorp->page = NULL;
 
-    hvm_free_ioreq_gfn(s, iorp->gfn);
-    iorp->gfn = INVALID_GFN;
+        hvm_free_ioreq_gfn(s, iorp->gfn);
+        iorp->gfn = INVALID_GFN;
+    }
 }
 
 static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
@@ -141,34 +143,36 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     int rc;
 
-    if ( iorp->page ) {
-        /*
-         * If a page has already been allocated (which will happen on
-         * demand if ioreq_server_get_frame() is called), then
-         * mapping a guest frame is not permitted.
-         */
-        if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-            return -EPERM;
-
-        return 0;
-    }
+    {
+        if ( iorp->page )
+        {
+            /*
+             * If a page has already been allocated (which will happen on
+             * demand if ioreq_server_get_frame() is called), then
+             * mapping a guest frame is not permitted.
+             */
+            if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+                return -EPERM;
+
+            return 0;
+        }
 
-    if ( d->is_dying )
-        return -EINVAL;
+        if ( d->is_dying )
+            return -EINVAL;
 
-    iorp->gfn = hvm_alloc_ioreq_gfn(s);
+        iorp->gfn = hvm_alloc_ioreq_gfn(s);
 
-    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-        return -ENOMEM;
+        if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+            return -ENOMEM;
 
-    rc = prepare_ring_for_helper(d, gfn_x(iorp->gfn), &iorp->page,
-                                 &iorp->va);
+        rc = prepare_ring_for_helper(d, gfn_x(iorp->gfn), &iorp->page,
+                                     &iorp->va);
 
-    if ( rc )
-        hvm_unmap_ioreq_gfn(s, buf);
+        if ( rc )
+            hvm_unmap_ioreq_gfn(s, buf);
 
-    return rc;
+        return rc;
+    }
 }
 
 static void hvm_remove_ioreq_gfn(struct ioreq_server *s, bool buf)
@@ -176,12 +180,14 @@ static void hvm_remove_ioreq_gfn(struct ioreq_server *s, bool buf)
     struct domain *d = s->target;
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
 
-    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-        return;
+    {
+        if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+            return;
 
-    if ( p2m_remove_page(d, iorp->gfn, page_to_mfn(iorp->page), 0) )
-        domain_crash(d);
-    clear_page(iorp->va);
+        if ( p2m_remove_page(d, iorp->gfn, page_to_mfn(iorp->page), 0) )
+            domain_crash(d);
+        clear_page(iorp->va);
+    }
 }
 
 static int hvm_add_ioreq_gfn(struct ioreq_server *s, bool buf)
@@ -190,16 +196,18 @@ static int hvm_add_ioreq_gfn(struct ioreq_server *s, bool buf)
     struct domain *d = s->target;
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     int rc;
 
-    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-        return 0;
+    {
+        if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+            return 0;
 
-    clear_page(iorp->va);
+        clear_page(iorp->va);
 
-    rc = p2m_add_page(d, iorp->gfn, page_to_mfn(iorp->page), 0, p2m_ram_rw);
-    if ( rc == 0 )
-        paging_mark_pfn_dirty(d, _pfn(gfn_x(iorp->gfn)));
+        rc = p2m_add_page(d, iorp->gfn, page_to_mfn(iorp->page), 0, p2m_ram_rw);
+        if ( rc == 0 )
+            paging_mark_pfn_dirty(d, _pfn(gfn_x(iorp->gfn)));
 
-    return rc;
+        return rc;
+    }
 }
 
 int arch_ioreq_server_map_pages(struct ioreq_server *s)
-- 
2.51.0

-- 
Julian Vetter | Vates Hypervisor & Kernel Developer
XCP-ng & Xen Orchestra - Vates solutions
web: https://vates.tech
From nobody Mon Apr 13 03:43:31 2026
From: "Julian Vetter" <julian.vetter@vates.tech>
Subject: [PATCH v2 3/3] x86/ioreq: Extend ioreq server to support multiple ioreq pages
To: xen-devel@lists.xenproject.org
Cc: "Jan Beulich", "Andrew Cooper", "Roger Pau Monné", "Anthony PERARD",
 "Michal Orzel", "Julien Grall", "Stefano Stabellini", "Julian Vetter"
Message-Id: <20260216134334.3510048-4-julian.vetter@vates.tech>
In-Reply-To: <20260216134334.3510048-1-julian.vetter@vates.tech>
References: <20260216134334.3510048-1-julian.vetter@vates.tech>
Date: Mon, 16 Feb 2026 13:43:59 +0000
Content-Type: text/plain; charset="utf-8"

A single shared ioreq page provides PAGE_SIZE/sizeof(ioreq_t) = 128
slots, limiting HVM guests to 128 vCPUs. To support more vCPUs, extend
the ioreq server to use xvzalloc_array() for allocating a contiguous
virtual array of ioreq_t slots sized to d->max_vcpus, backed by
potentially non-contiguous physical pages. For the GFN-mapped path
(x86), individual pages are mapped via prepare_ring_for_helper() and
then combined into a single contiguous VA using vmap(). The number of
ioreq pages is computed at runtime via nr_ioreq_pages(d) =
DIV_ROUND_UP(d->max_vcpus, IOREQS_PER_PAGE), so small VMs only
allocate one page. All existing single-page paths (bufioreq, legacy
clients) remain unchanged.

Mark the now-unused shared_iopage_t in the public header as deprecated.
Signed-off-by: Julian Vetter <julian.vetter@vates.tech>
---
Changes in v2:
- Use xvzalloc_array to allocate the contiguous region
- Removed unnecessary includes
- nr_ioreq_pages is now based on d->max_vcpus and not the HVM_MAX_VCPUS
  define
- Reduced indentation by 1 level in hvm_alloc_ioreq_gfns
- Added blank lines between declarations and statements
- Added comment why we can just return in hvm_add_ioreq_gfn without
  rollback
---
 xen/arch/x86/hvm/ioreq.c       | 198 ++++++++++++++++++++++++++++++++-
 xen/common/ioreq.c             |  95 ++++++++++++----
 xen/include/public/hvm/ioreq.h |   5 +
 xen/include/xen/ioreq.h        |  13 ++-
 4 files changed, 285 insertions(+), 26 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 5ebc48dbd4..a77f00dd96 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -6,6 +6,7 @@
 
 #include
+#include
 #include
 #include
 #include
@@ -15,6 +16,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #include
@@ -89,6 +91,39 @@ static gfn_t hvm_alloc_ioreq_gfn(struct ioreq_server *s)
     return hvm_alloc_legacy_ioreq_gfn(s);
 }
 
+static gfn_t hvm_alloc_ioreq_gfns(struct ioreq_server *s,
+                                  unsigned int nr_pages)
+{
+    struct domain *d = s->target;
+    unsigned long mask;
+    unsigned int i, run;
+
+    if ( nr_pages == 1 )
+        return hvm_alloc_ioreq_gfn(s);
+
+    /* Find nr_pages consecutive set bits */
+    mask = d->arch.hvm.ioreq_gfn.mask;
+
+    for ( i = 0, run = 0; i < BITS_PER_LONG; i++ )
+    {
+        if ( !test_bit(i, &mask) )
+            run = 0;
+        else if ( ++run == nr_pages )
+        {
+            /* Found a run - clear all bits and return base GFN */
+            unsigned int start = i - nr_pages + 1;
+            unsigned int j;
+
+            for ( j = start; j <= i; j++ )
+                clear_bit(j, &d->arch.hvm.ioreq_gfn.mask);
+
+            return _gfn(d->arch.hvm.ioreq_gfn.base + start);
+        }
+    }
+
+    return INVALID_GFN;
+}
+
 static bool hvm_free_legacy_ioreq_gfn(struct ioreq_server *s,
                                       gfn_t gfn)
 {
@@ -121,11 +156,23 @@ static void hvm_free_ioreq_gfn(struct ioreq_server *s, gfn_t gfn)
     }
 }
 
+static void hvm_free_ioreq_gfns(struct ioreq_server *s, gfn_t gfn,
+                                unsigned int nr_pages)
+{
+    unsigned int i;
+
+    for ( i = 0; i < nr_pages; i++ )
+        hvm_free_ioreq_gfn(s, gfn_add(gfn, i));
+}
+
 static void hvm_unmap_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
-    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    unsigned int i, nr_pages;
 
+    if ( buf )
     {
+        struct ioreq_page *iorp = &s->bufioreq;
+
         if ( gfn_eq(iorp->gfn, INVALID_GFN) )
             return;
 
@@ -134,16 +181,41 @@ static void hvm_unmap_ioreq_gfn(struct ioreq_server *s, bool buf)
 
         hvm_free_ioreq_gfn(s, iorp->gfn);
         iorp->gfn = INVALID_GFN;
+        return;
+    }
+
+    if ( gfn_eq(s->ioreq_gfn, INVALID_GFN) )
+        return;
+
+    nr_pages = nr_ioreq_pages(s->target);
+
+    for ( i = 0; i < nr_pages; i++ )
+    {
+        struct page_info *pg = vmap_to_page((char *)s->ioreq +
+                                            i * PAGE_SIZE);
+
+        put_page_and_type(pg);
+        put_page(pg);
     }
+
+    vunmap(s->ioreq);
+    s->ioreq = NULL;
+
+    hvm_free_ioreq_gfns(s, s->ioreq_gfn, nr_pages);
+    s->ioreq_gfn = INVALID_GFN;
 }
 
 static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
-    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    unsigned int i, nr_pages;
+    gfn_t base_gfn;
+    mfn_t *mfns;
     int rc;
 
+    if ( buf )
     {
+        struct ioreq_page *iorp = &s->bufioreq;
+
         if ( iorp->page )
         {
             /*
@@ -173,30 +245,122 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
 
         return rc;
     }
+
+    /* ioreq: multi-page with contiguous VA */
+    if ( s->ioreq )
+    {
+        if ( gfn_eq(s->ioreq_gfn, INVALID_GFN) )
+            return -EPERM;
+
+        return 0;
+    }
+
+    if ( d->is_dying )
+        return -EINVAL;
+
+    nr_pages = nr_ioreq_pages(d);
+    base_gfn = hvm_alloc_ioreq_gfns(s, nr_pages);
+
+    if ( gfn_eq(base_gfn, INVALID_GFN) )
+        return -ENOMEM;
+
+    mfns = xmalloc_array(mfn_t, nr_pages);
+    if ( !mfns )
+    {
+        hvm_free_ioreq_gfns(s, base_gfn, nr_pages);
+        return -ENOMEM;
+    }
+
+    /*
+     * Use prepare_ring_for_helper() to obtain page and type references
+     * for each GFN. Discard its per-page VA immediately, as all pages
+     * will be combined into a single contiguous VA via vmap() below.
+     */
+    for ( i = 0; i < nr_pages; i++ )
+    {
+        struct page_info *pg;
+        void *va;
+
+        rc = prepare_ring_for_helper(d, gfn_x(base_gfn) + i, &pg, &va);
+        if ( rc )
+            goto fail;
+
+        /* Discard per-page VA */
+        unmap_domain_page_global(va);
+        mfns[i] = page_to_mfn(pg);
+    }
+
+    /* Map all mfns as single contiguous VA */
+    s->ioreq = vmap(mfns, nr_pages);
+    if ( !s->ioreq )
+    {
+        rc = -ENOMEM;
+        goto fail;
+    }
+
+    s->ioreq_gfn = base_gfn;
+    xfree(mfns);
+
+    return 0;
+
+ fail:
+    while ( i-- > 0 )
+    {
+        struct page_info *pg = mfn_to_page(mfns[i]);
+
+        put_page_and_type(pg);
+        put_page(pg);
+    }
+
+    hvm_free_ioreq_gfns(s, base_gfn, nr_pages);
+    xfree(mfns);
+
+    return rc;
 }
 
 static void hvm_remove_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
-    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    unsigned int i, nr_pages;
 
+    if ( buf )
     {
+        struct ioreq_page *iorp = &s->bufioreq;
+
         if ( gfn_eq(iorp->gfn, INVALID_GFN) )
             return;
 
         if ( p2m_remove_page(d, iorp->gfn, page_to_mfn(iorp->page), 0) )
             domain_crash(d);
         clear_page(iorp->va);
+        return;
+    }
+
+    if ( gfn_eq(s->ioreq_gfn, INVALID_GFN) )
+        return;
+
+    nr_pages = nr_ioreq_pages(d);
+
+    for ( i = 0; i < nr_pages; i++ )
+    {
+        gfn_t gfn = gfn_add(s->ioreq_gfn, i);
+        struct page_info *pg = vmap_to_page((char *)s->ioreq +
+                                            i * PAGE_SIZE);
+
+        if ( p2m_remove_page(d, gfn, page_to_mfn(pg), 0) )
+            domain_crash(d);
     }
+
+    memset(s->ioreq, 0, nr_pages * PAGE_SIZE);
 }
 
 static int hvm_add_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
-    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    unsigned int i, nr_pages;
     int rc;
 
+    if ( buf )
     {
+        struct ioreq_page *iorp = &s->bufioreq;
+
         if ( gfn_eq(iorp->gfn, INVALID_GFN) )
             return 0;
 
@@ -208,6 +372,32 @@ static int hvm_add_ioreq_gfn(struct ioreq_server *s, bool buf)
 
         return rc;
     }
+
+    if ( gfn_eq(s->ioreq_gfn, INVALID_GFN) )
+        return 0;
+
+    nr_pages = nr_ioreq_pages(d);
+    memset(s->ioreq, 0, nr_pages * PAGE_SIZE);
+
+    for ( i = 0; i < nr_pages; i++ )
+    {
+        gfn_t gfn = gfn_add(s->ioreq_gfn, i);
+        struct page_info *pg = vmap_to_page((char *)s->ioreq +
+                                            i * PAGE_SIZE);
+
+        rc = p2m_add_page(d, gfn, page_to_mfn(pg), 0, p2m_ram_rw);
+        if ( rc )
+            /*
+             * No rollback of previously added pages: The caller
+             * (arch_ioreq_server_disable) has no error handling path,
+             * and partial failure here will be cleaned up when the
+             * ioreq server is eventually destroyed.
+             */
+            return rc;
+
+        paging_mark_pfn_dirty(d, _pfn(gfn_x(gfn)));
+    }
+
+    return 0;
 }
 
 int arch_ioreq_server_map_pages(struct ioreq_server *s)
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 5d722c8d4e..0ad86d3af3 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -95,12 +96,10 @@ static struct ioreq_server *get_ioreq_server(const struct domain *d,
 
 static ioreq_t *get_ioreq(struct ioreq_server *s, struct vcpu *v)
 {
-    shared_iopage_t *p = s->ioreq.va;
-
     ASSERT((v == current) || !vcpu_runnable(v));
-    ASSERT(p != NULL);
+    ASSERT(s->ioreq != NULL);
 
-    return &p->vcpu_ioreq[v->vcpu_id];
+    return &s->ioreq[v->vcpu_id];
 }
 
 /*
@@ -260,9 +259,32 @@ bool vcpu_ioreq_handle_completion(struct vcpu *v)
 
 static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
 {
-    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp;
     struct page_info *page;
 
+    if ( !buf )
+    {
+        if ( s->ioreq )
+        {
+            /*
+             * If a guest frame has already been mapped (which may happen
+             * on demand if ioreq_server_get_info() is called), then
+             * allocating a page is not permitted.
+             */
+            if ( !gfn_eq(s->ioreq_gfn, INVALID_GFN) )
+                return -EPERM;
+
+            return 0;
+        }
+
+        s->ioreq = xvzalloc_array(ioreq_t, s->target->max_vcpus);
+
+        return s->ioreq ? 0 : -ENOMEM;
+    }
+
+    /* bufioreq: single page allocation */
+    iorp = &s->bufioreq;
+
     if ( iorp->page )
     {
         /*
@@ -309,8 +331,17 @@ static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
 
 static void ioreq_server_free_mfn(struct ioreq_server *s, bool buf)
 {
-    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    struct page_info *page = iorp->page;
+    struct ioreq_page *iorp;
+    struct page_info *page;
+
+    if ( !buf )
+    {
+        XVFREE(s->ioreq);
+        return;
+    }
+
+    iorp = &s->bufioreq;
+    page = iorp->page;
 
     if ( !page )
         return;
@@ -334,11 +365,29 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
-        if ( (s->ioreq.page == page) || (s->bufioreq.page == page) )
+        if ( s->bufioreq.page == page )
         {
             found = true;
             break;
         }
+
+        if ( s->ioreq )
+        {
+            unsigned int i;
+
+            for ( i = 0; i < nr_ioreq_pages(d); i++ )
+            {
+                if ( vmap_to_page((char *)s->ioreq +
+                                  i * PAGE_SIZE) == page )
+                {
+                    found = true;
+                    break;
+                }
+            }
+
+            if ( found )
+                break;
+        }
     }
 
     rspin_unlock(&d->ioreq_server.lock);
@@ -351,7 +400,7 @@ static void ioreq_server_update_evtchn(struct ioreq_server *s,
 {
     ASSERT(spin_is_locked(&s->lock));
 
-    if ( s->ioreq.va != NULL )
+    if ( s->ioreq != NULL )
     {
         ioreq_t *p = get_ioreq(s, sv->vcpu);
 
@@ -591,7 +640,7 @@ static int ioreq_server_init(struct ioreq_server *s,
     INIT_LIST_HEAD(&s->ioreq_vcpu_list);
     spin_lock_init(&s->bufioreq_lock);
 
-    s->ioreq.gfn = INVALID_GFN;
+    s->ioreq_gfn = INVALID_GFN;
     s->bufioreq.gfn = INVALID_GFN;
 
     rc = ioreq_server_alloc_rangesets(s, id);
@@ -770,7 +819,7 @@ static int ioreq_server_get_info(struct domain *d, ioservid_t id,
     }
 
     if ( ioreq_gfn )
-        *ioreq_gfn = gfn_x(s->ioreq.gfn);
+        *ioreq_gfn = gfn_x(s->ioreq_gfn);
 
     if ( HANDLE_BUFIOREQ(s) )
     {
@@ -813,26 +862,30 @@ int ioreq_server_get_frame(struct domain *d, ioservid_t id,
     if ( rc )
         goto out;
 
-    switch ( idx )
+    if ( idx == XENMEM_resource_ioreq_server_frame_bufioreq )
     {
-    case XENMEM_resource_ioreq_server_frame_bufioreq:
         rc = -ENOENT;
         if ( !HANDLE_BUFIOREQ(s) )
             goto out;
 
         *mfn = page_to_mfn(s->bufioreq.page);
         rc = 0;
-        break;
-
-    case XENMEM_resource_ioreq_server_frame_ioreq(0):
-        *mfn = page_to_mfn(s->ioreq.page);
-        rc =
0; - break; + } + else if (( idx >=3D XENMEM_resource_ioreq_server_frame_ioreq(0) ) && + ( idx < XENMEM_resource_ioreq_server_frame_ioreq(nr_ioreq_pag= es(d)) )) + { + unsigned int page_idx =3D idx - XENMEM_resource_ioreq_server_frame= _ioreq(0); =20 - default: rc =3D -EINVAL; - break; + if ( idx >=3D XENMEM_resource_ioreq_server_frame_ioreq(0) && + page_idx < nr_ioreq_pages(d) && s->ioreq ) + { + *mfn =3D vmap_to_mfn((char *)s->ioreq + page_idx * PAGE_SIZE); + rc =3D 0; + } } + else + rc =3D -EINVAL; =20 out: rspin_unlock(&d->ioreq_server.lock); diff --git a/xen/include/public/hvm/ioreq.h b/xen/include/public/hvm/ioreq.h index 7a6bc760d0..1c1a9e61ae 100644 --- a/xen/include/public/hvm/ioreq.h +++ b/xen/include/public/hvm/ioreq.h @@ -49,6 +49,11 @@ struct ioreq { }; typedef struct ioreq ioreq_t; =20 +/* + * Deprecated: shared_iopage is no longer used by Xen internally. + * The ioreq server now uses a dynamically sized ioreq_t array + * to support more than 128 vCPUs. + */ struct shared_iopage { struct ioreq vcpu_ioreq[1]; }; diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h index e86f0869fa..a4c7621f3f 100644 --- a/xen/include/xen/ioreq.h +++ b/xen/include/xen/ioreq.h @@ -19,9 +19,19 @@ #ifndef __XEN_IOREQ_H__ #define __XEN_IOREQ_H__ =20 +#include #include =20 #include +#include + +/* 4096 / 32 =3D 128 ioreq slots per page */ +#define IOREQS_PER_PAGE (PAGE_SIZE / sizeof(ioreq_t)) + +static inline unsigned int nr_ioreq_pages(const struct domain *d) +{ + return DIV_ROUND_UP(d->max_vcpus, IOREQS_PER_PAGE); +} =20 struct ioreq_page { gfn_t gfn; @@ -45,7 +55,8 @@ struct ioreq_server { /* Lock to serialize toolstack modifications */ spinlock_t lock; =20 - struct ioreq_page ioreq; + ioreq_t *ioreq; + gfn_t ioreq_gfn; struct list_head ioreq_vcpu_list; struct ioreq_page bufioreq; =20 --=20 2.51.0 -- Julian Vetter | Vates Hypervisor & Kernel Developer XCP-ng & Xen Orchestra - Vates solutions web: https://vates.tech