From: David Woodhouse
To: Peter Maydell
Cc: qemu-devel@nongnu.org, Paolo Bonzini, Paul Durrant, Joao Martins,
    Ankur Arora, Stefano Stabellini, vikram.garhwal@amd.com,
    Anthony Perard, xen-devel@lists.xenproject.org, Juan Quintela,
    "Dr . David Alan Gilbert"
Subject: [PULL 15/27] hw/xen: Use XEN_PAGE_SIZE in PV backend drivers
Date: Tue, 7 Mar 2023 18:26:55 +0000
Message-Id: <20230307182707.2298618-16-dwmw2@infradead.org>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230307182707.2298618-1-dwmw2@infradead.org>
References: <20230307182707.2298618-1-dwmw2@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: David Woodhouse

XC_PAGE_SIZE comes from the actual Xen libraries, while XEN_PAGE_SIZE
is provided by QEMU itself in xen_backend_ops.h. For backends which may
be built for emulation mode, use the latter.

Signed-off-by: David Woodhouse
Reviewed-by: Paul Durrant
---
 hw/block/dataplane/xen-block.c |  8 ++++----
 hw/display/xenfb.c             | 12 ++++++------
 hw/net/xen_nic.c               | 12 ++++++------
 hw/usb/xen-usb.c               |  8 ++++----
 4 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index e55b713002..8322a1de82 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -101,9 +101,9 @@ static XenBlockRequest *xen_block_start_request(XenBlockDataPlane *dataplane)
          * re-use requests, allocate the memory once here. It will be freed
          * xen_block_dataplane_destroy() when the request list is freed.
          */
-        request->buf = qemu_memalign(XC_PAGE_SIZE,
+        request->buf = qemu_memalign(XEN_PAGE_SIZE,
                                      BLKIF_MAX_SEGMENTS_PER_REQUEST *
-                                     XC_PAGE_SIZE);
+                                     XEN_PAGE_SIZE);
         dataplane->requests_total++;
         qemu_iovec_init(&request->v, 1);
     } else {
@@ -185,7 +185,7 @@ static int xen_block_parse_request(XenBlockRequest *request)
             goto err;
         }
         if (request->req.seg[i].last_sect * dataplane->sector_size >=
-            XC_PAGE_SIZE) {
+            XEN_PAGE_SIZE) {
             error_report("error: page crossing");
             goto err;
         }
@@ -740,7 +740,7 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
 
     dataplane->protocol = protocol;
 
-    ring_size = XC_PAGE_SIZE * dataplane->nr_ring_ref;
+    ring_size = XEN_PAGE_SIZE * dataplane->nr_ring_ref;
     switch (dataplane->protocol) {
     case BLKIF_PROTOCOL_NATIVE:
     {
diff --git a/hw/display/xenfb.c b/hw/display/xenfb.c
index 2c4016fcbd..0074a9b6f8 100644
--- a/hw/display/xenfb.c
+++ b/hw/display/xenfb.c
@@ -489,13 +489,13 @@ static int xenfb_map_fb(struct XenFB *xenfb)
     }
 
     if (xenfb->pixels) {
-        munmap(xenfb->pixels, xenfb->fbpages * XC_PAGE_SIZE);
+        munmap(xenfb->pixels, xenfb->fbpages * XEN_PAGE_SIZE);
         xenfb->pixels = NULL;
     }
 
-    xenfb->fbpages = DIV_ROUND_UP(xenfb->fb_len, XC_PAGE_SIZE);
+    xenfb->fbpages = DIV_ROUND_UP(xenfb->fb_len, XEN_PAGE_SIZE);
     n_fbdirs = xenfb->fbpages * mode / 8;
-    n_fbdirs = DIV_ROUND_UP(n_fbdirs, XC_PAGE_SIZE);
+    n_fbdirs = DIV_ROUND_UP(n_fbdirs, XEN_PAGE_SIZE);
 
     pgmfns = g_new0(xen_pfn_t, n_fbdirs);
     fbmfns = g_new0(xen_pfn_t, xenfb->fbpages);
@@ -528,8 +528,8 @@ static int xenfb_configure_fb(struct XenFB *xenfb, size_t fb_len_lim,
 {
     size_t mfn_sz = sizeof_field(struct xenfb_page, pd[0]);
     size_t pd_len = sizeof_field(struct xenfb_page, pd) / mfn_sz;
-    size_t fb_pages = pd_len * XC_PAGE_SIZE / mfn_sz;
-    size_t fb_len_max = fb_pages * XC_PAGE_SIZE;
+    size_t fb_pages = pd_len * XEN_PAGE_SIZE / mfn_sz;
+    size_t fb_len_max = fb_pages * XEN_PAGE_SIZE;
     int max_width, max_height;
 
     if (fb_len_lim > fb_len_max) {
@@ -930,7 +930,7 @@ static void fb_disconnect(struct XenLegacyDevice *xendev)
      * instead. This releases the guest pages and keeps qemu happy.
      */
     qemu_xen_foreignmem_unmap(fb->pixels, fb->fbpages);
-    fb->pixels = mmap(fb->pixels, fb->fbpages * XC_PAGE_SIZE,
+    fb->pixels = mmap(fb->pixels, fb->fbpages * XEN_PAGE_SIZE,
                       PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANON,
                       -1, 0);
     if (fb->pixels == MAP_FAILED) {
diff --git a/hw/net/xen_nic.c b/hw/net/xen_nic.c
index 166d03787d..9bbf6599fc 100644
--- a/hw/net/xen_nic.c
+++ b/hw/net/xen_nic.c
@@ -145,7 +145,7 @@ static void net_tx_packets(struct XenNetDev *netdev)
             continue;
         }
 
-        if ((txreq.offset + txreq.size) > XC_PAGE_SIZE) {
+        if ((txreq.offset + txreq.size) > XEN_PAGE_SIZE) {
             xen_pv_printf(&netdev->xendev, 0, "error: page crossing\n");
             net_tx_error(netdev, &txreq, rc);
             continue;
@@ -171,7 +171,7 @@ static void net_tx_packets(struct XenNetDev *netdev)
         if (txreq.flags & NETTXF_csum_blank) {
             /* have read-only mapping -> can't fill checksum in-place */
             if (!tmpbuf) {
-                tmpbuf = g_malloc(XC_PAGE_SIZE);
+                tmpbuf = g_malloc(XEN_PAGE_SIZE);
             }
             memcpy(tmpbuf, page + txreq.offset, txreq.size);
             net_checksum_calculate(tmpbuf, txreq.size, CSUM_ALL);
@@ -243,9 +243,9 @@ static ssize_t net_rx_packet(NetClientState *nc, const uint8_t *buf, size_t size
     if (rc == rp || RING_REQUEST_CONS_OVERFLOW(&netdev->rx_ring, rc)) {
         return 0;
     }
-    if (size > XC_PAGE_SIZE - NET_IP_ALIGN) {
+    if (size > XEN_PAGE_SIZE - NET_IP_ALIGN) {
         xen_pv_printf(&netdev->xendev, 0, "packet too big (%lu > %ld)",
-                      (unsigned long)size, XC_PAGE_SIZE - NET_IP_ALIGN);
+                      (unsigned long)size, XEN_PAGE_SIZE - NET_IP_ALIGN);
         return -1;
     }
 
@@ -348,8 +348,8 @@ static int net_connect(struct XenLegacyDevice *xendev)
         netdev->txs = NULL;
         return -1;
     }
-    BACK_RING_INIT(&netdev->tx_ring, netdev->txs, XC_PAGE_SIZE);
-    BACK_RING_INIT(&netdev->rx_ring, netdev->rxs, XC_PAGE_SIZE);
+    BACK_RING_INIT(&netdev->tx_ring, netdev->txs, XEN_PAGE_SIZE);
+    BACK_RING_INIT(&netdev->rx_ring, netdev->rxs, XEN_PAGE_SIZE);
 
     xen_be_bind_evtchn(&netdev->xendev);
 
diff --git a/hw/usb/xen-usb.c b/hw/usb/xen-usb.c
index a770a64cb4..66cb3f7c24 100644
--- a/hw/usb/xen-usb.c
+++ b/hw/usb/xen-usb.c
@@ -161,7 +161,7 @@ static int usbback_gnttab_map(struct usbback_req *usbback_req)
 
     for (i = 0; i < nr_segs; i++) {
         if ((unsigned)usbback_req->req.seg[i].offset +
-            (unsigned)usbback_req->req.seg[i].length > XC_PAGE_SIZE) {
+            (unsigned)usbback_req->req.seg[i].length > XEN_PAGE_SIZE) {
             xen_pv_printf(xendev, 0, "segment crosses page boundary\n");
             return -EINVAL;
         }
@@ -185,7 +185,7 @@ static int usbback_gnttab_map(struct usbback_req *usbback_req)
 
         for (i = 0; i < usbback_req->nr_buffer_segs; i++) {
             seg = usbback_req->req.seg + i;
-            addr = usbback_req->buffer + i * XC_PAGE_SIZE + seg->offset;
+            addr = usbback_req->buffer + i * XEN_PAGE_SIZE + seg->offset;
             qemu_iovec_add(&usbback_req->packet.iov, addr, seg->length);
         }
     }
@@ -902,8 +902,8 @@ static int usbback_connect(struct XenLegacyDevice *xendev)
     usbif->conn_ring_ref = conn_ring_ref;
     urb_sring = usbif->urb_sring;
     conn_sring = usbif->conn_sring;
-    BACK_RING_INIT(&usbif->urb_ring, urb_sring, XC_PAGE_SIZE);
-    BACK_RING_INIT(&usbif->conn_ring, conn_sring, XC_PAGE_SIZE);
+    BACK_RING_INIT(&usbif->urb_ring, urb_sring, XEN_PAGE_SIZE);
+    BACK_RING_INIT(&usbif->conn_ring, conn_sring, XEN_PAGE_SIZE);
 
     xen_be_bind_evtchn(xendev);
 
-- 
2.39.0
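
For readers unfamiliar with the two constants the commit message contrasts,
here is a minimal, self-contained sketch of the relationship being assumed:
both macros name the same 4 KiB page size fixed by the Xen ABI, but
XC_PAGE_SIZE comes from the Xen control-library headers (so it needs the real
Xen libraries at build time) while XEN_PAGE_SIZE is QEMU's own definition in
hw/xen/xen_backend_ops.h. The definitions below are written out locally for
illustration only (the real QEMU definitions may differ in form), and the
nr_ring_ref value is a hypothetical example, not taken from the patch.

/*
 * Illustrative only, not part of the patch.  XEN_PAGE_SIZE is spelled
 * out here so the example compiles standalone; in QEMU it would come
 * from hw/xen/xen_backend_ops.h and is available even for backends
 * built for Xen emulation, where no Xen library headers exist.
 */
#include <stdio.h>

#define XEN_PAGE_SHIFT 12
#define XEN_PAGE_SIZE  (1UL << XEN_PAGE_SHIFT)   /* 4096 bytes */

int main(void)
{
    unsigned int nr_ring_ref = 4;   /* hypothetical multi-page ring */
    unsigned long ring_size = XEN_PAGE_SIZE * nr_ring_ref;

    /* Same arithmetic xen-block uses above when sizing its shared ring. */
    printf("%u grant pages -> %lu byte ring\n", nr_ring_ref, ring_size);
    return 0;
}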