From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Julien Grall, Wei Liu, Anthony PERARD, Juergen Gross, Stefano Stabellini, Julien Grall, Volodymyr Babchuk, Bertrand Marquis, Oleksandr Tyshchenko
Subject: [PATCH V6 2/2] libxl: Introduce basic virtio-mmio support on Arm
Date: Wed, 8 Dec 2021 18:59:44 +0200
Message-Id: <1638982784-14390-3-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1638982784-14390-1-git-send-email-olekstysh@gmail.com>
References: <1638982784-14390-1-git-send-email-olekstysh@gmail.com>

From: Julien Grall

This patch introduces helpers to allocate the Virtio MMIO params (IRQ and
memory region) for a device and to create the corresponding device node in
the guest device-tree with the allocated params. In order to deal with
multiple Virtio devices, reserve corresponding ranges. For now, we reserve
1MB for the memory regions and 10 SPIs.

As these helpers should be used for every Virtio device attached to the
guest, call them for the Virtio disk(s).

Please note that with statically allocated Virtio IRQs there is a risk of a
clash with the physical IRQs of passthrough devices. This is acceptable for
the first version, but we should consider allocating the Virtio IRQs
automatically. Thankfully, we know in advance which IRQs will be used for
passthrough, so we are able to choose non-clashing ones.
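Below is a minimal standalone sketch (not part of the patch itself) of the
static allocation scheme described above. It assumes the GUEST_VIRTIO_MMIO_*
constants this patch adds to xen/include/public/arch-arm.h and the 0x200
per-device region size chosen in libxl_arm.c, and shows how successive
virtio disks would receive their MMIO base and SPI:

/* Illustrative sketch of the static "bump" allocation out of the reserved ranges. */
#include <stdio.h>
#include <stdint.h>

#define GUEST_VIRTIO_MMIO_BASE      0x02000000ULL  /* start of the reserved window */
#define GUEST_VIRTIO_MMIO_SIZE      0x00100000ULL  /* 1MB reserved for virtio-mmio */
#define GUEST_VIRTIO_MMIO_SPI_FIRST 33
#define GUEST_VIRTIO_MMIO_SPI_LAST  43
#define VIRTIO_MMIO_DEV_SIZE        0x200ULL       /* per-device region size */

int main(void)
{
    uint64_t base = GUEST_VIRTIO_MMIO_BASE;
    uint32_t irq = GUEST_VIRTIO_MMIO_SPI_FIRST;
    int dev;

    /* Hand out params for three hypothetical virtio disks, in order. */
    for (dev = 0; dev < 3; dev++) {
        if (base + VIRTIO_MMIO_DEV_SIZE >
                GUEST_VIRTIO_MMIO_BASE + GUEST_VIRTIO_MMIO_SIZE ||
            irq > GUEST_VIRTIO_MMIO_SPI_LAST) {
            fprintf(stderr, "out of reserved virtio-mmio resources\n");
            return 1;
        }
        printf("disk %d: base 0x%llx, SPI %u\n",
               dev, (unsigned long long)base, irq);
        base += VIRTIO_MMIO_DEV_SIZE;
        irq++;
    }
    return 0;
}

With a single virtio disk the guest therefore ends up with a virtio-mmio
device at 0x02000000 (region size 0x200) on edge-triggered SPI 33, which
matches what make_virtio_mmio_node() below writes into the guest device-tree.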
Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko
Reviewed-by: Henry Wang
Tested-by: Jiamei Xie
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - was squashed with:
     "[RFC PATCH V1 09/12] libxl: Handle virtio-mmio irq in more correct way"
     "[RFC PATCH V1 11/12] libxl: Insert "dma-coherent" property into virtio-mmio device node"
     "[RFC PATCH V1 12/12] libxl: Fix duplicate memory node in DT"
   - move VirtIO MMIO #define-s to xen/include/public/arch-arm.h

Changes V1 -> V2:
   - update the author of a patch

Changes V2 -> V3:
   - no changes

Changes V3 -> V4:
   - no changes

Changes V4 -> V5:
   - split the changes, change the order of the patches
   - drop an extra "virtio" configuration option
   - update patch description
   - use CONTAINER_OF instead of own implementation
   - reserve ranges for Virtio MMIO params and put them in correct location
   - create helpers to allocate Virtio MMIO params, add corresponding
     sanity-checks
   - add comment why MMIO size 0x200 is chosen
   - update debug print
   - drop Wei's T-b

Changes V5 -> V6:
   - rebase on current staging
---
 tools/libs/light/libxl_arm.c  | 131 +++++++++++++++++++++++++++++++++++++++++
 xen/include/public/arch-arm.h |   7 +++
 2 files changed, 136 insertions(+), 2 deletions(-)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index eef1de0..d475249 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -8,6 +8,56 @@
 #include
 #include
 
+/*
+ * There are no clear requirements for the total size of the Virtio MMIO region.
+ * The size of the control registers is 0x100 and the device-specific
+ * configuration registers start at offset 0x100; however, their size depends
+ * on the device and the driver. Pick the biggest known size at the moment to
+ * cover most of the devices (also consider allowing the user to configure the
+ * size via the config file for the ones not conforming with the proposed value).
+ */
+#define VIRTIO_MMIO_DEV_SIZE   xen_mk_ullong(0x200)
+
+static uint64_t virtio_mmio_base;
+static uint32_t virtio_mmio_irq;
+
+static void init_virtio_mmio_params(void)
+{
+    virtio_mmio_base = GUEST_VIRTIO_MMIO_BASE;
+    virtio_mmio_irq = GUEST_VIRTIO_MMIO_SPI_FIRST;
+}
+
+static uint64_t alloc_virtio_mmio_base(libxl__gc *gc)
+{
+    uint64_t base = virtio_mmio_base;
+
+    /* Make sure we have enough reserved resources */
+    if ((virtio_mmio_base + VIRTIO_MMIO_DEV_SIZE >
+        GUEST_VIRTIO_MMIO_BASE + GUEST_VIRTIO_MMIO_SIZE)) {
+        LOG(ERROR, "Ran out of reserved range for Virtio MMIO BASE 0x%"PRIx64"\n",
+            virtio_mmio_base);
+        return 0;
+    }
+    virtio_mmio_base += VIRTIO_MMIO_DEV_SIZE;
+
+    return base;
+}
+
+static uint32_t alloc_virtio_mmio_irq(libxl__gc *gc)
+{
+    uint32_t irq = virtio_mmio_irq;
+
+    /* Make sure we have enough reserved resources */
+    if (virtio_mmio_irq > GUEST_VIRTIO_MMIO_SPI_LAST) {
+        LOG(ERROR, "Ran out of reserved range for Virtio MMIO IRQ %u\n",
+            virtio_mmio_irq);
+        return 0;
+    }
+    virtio_mmio_irq++;
+
+    return irq;
+}
+
 static const char *gicv_to_string(libxl_gic_version gic_version)
 {
     switch (gic_version) {
@@ -26,8 +76,8 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
 {
     uint32_t nr_spis = 0;
     unsigned int i;
-    uint32_t vuart_irq;
-    bool vuart_enabled = false;
+    uint32_t vuart_irq, virtio_irq = 0;
+    bool vuart_enabled = false, virtio_enabled = false;
 
     /*
      * If pl011 vuart is enabled then increment the nr_spis to allow allocation
@@ -39,6 +89,35 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         vuart_enabled = true;
     }
 
+    /*
+     * Virtio MMIO params are non-unique across the whole system and must be
+     * initialized for every new guest.
+     */
+    init_virtio_mmio_params();
+    for (i = 0; i < d_config->num_disks; i++) {
+        libxl_device_disk *disk = &d_config->disks[i];
+
+        if (disk->virtio) {
+            disk->base = alloc_virtio_mmio_base(gc);
+            if (!disk->base)
+                return ERROR_FAIL;
+
+            disk->irq = alloc_virtio_mmio_irq(gc);
+            if (!disk->irq)
+                return ERROR_FAIL;
+
+            if (virtio_irq < disk->irq)
+                virtio_irq = disk->irq;
+            virtio_enabled = true;
+
+            LOG(DEBUG, "Allocate Virtio MMIO params for Vdev %s: IRQ %u BASE 0x%"PRIx64,
+                disk->vdev, disk->irq, disk->base);
+        }
+    }
+
+    if (virtio_enabled)
+        nr_spis += (virtio_irq - 32) + 1;
+
     for (i = 0; i < d_config->b_info.num_irqs; i++) {
         uint32_t irq = d_config->b_info.irqs[i];
         uint32_t spi;
@@ -58,6 +137,13 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
             return ERROR_FAIL;
         }
 
+        /* The same check as for vpl011 */
+        if (virtio_enabled &&
+            (irq >= GUEST_VIRTIO_MMIO_SPI_FIRST && irq <= virtio_irq)) {
+            LOG(ERROR, "Physical IRQ %u conflicting with Virtio MMIO IRQ range\n", irq);
+            return ERROR_FAIL;
+        }
+
         if (irq < 32)
             continue;
 
@@ -787,6 +873,39 @@ static int make_vpci_node(libxl__gc *gc, void *fdt,
     return 0;
 }
 
+
+static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
+                                 uint64_t base, uint32_t irq)
+{
+    int res;
+    gic_interrupt intr;
+    /* Placeholder for virtio@ + a 64-bit number + \0 */
+    char buf[24];
+
+    snprintf(buf, sizeof(buf), "virtio@%"PRIx64, base);
+    res = fdt_begin_node(fdt, buf);
+    if (res) return res;
+
+    res = fdt_property_compat(gc, fdt, 1, "virtio,mmio");
+    if (res) return res;
+
+    res = fdt_property_regs(gc, fdt, GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
+                            1, base, VIRTIO_MMIO_DEV_SIZE);
+    if (res) return res;
+
+    set_interrupt(intr, irq, 0xf, DT_IRQ_TYPE_EDGE_RISING);
+    res = fdt_property_interrupts(gc, fdt, &intr, 1);
+    if (res) return res;
+
+    res = fdt_property(fdt, "dma-coherent", NULL, 0);
+    if (res) return res;
+
+    res = fdt_end_node(fdt);
+    if (res) return res;
+
+    return 0;
+}
+
 static const struct arch_info *get_arch_info(libxl__gc *gc,
                                              const struct xc_dom_image *dom)
 {
@@ -988,6 +1107,7 @@ static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_config *d_config,
     size_t fdt_size = 0;
     int pfdt_size = 0;
     libxl_domain_build_info *const info = &d_config->b_info;
+    unsigned int i;
 
     const libxl_version_info *vers;
     const struct arch_info *ainfo;
@@ -1094,6 +1214,13 @@ next_resize:
         if (d_config->num_pcidevs)
             FDT( make_vpci_node(gc, fdt, ainfo, dom) );
 
+        for (i = 0; i < d_config->num_disks; i++) {
+            libxl_device_disk *disk = &d_config->disks[i];
+
+            if (disk->virtio)
+                FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq) );
+        }
+
         if (pfdt)
             FDT( copy_partial_fdt(gc, fdt, pfdt) );
 
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 94b3151..6dc55df 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -398,6 +398,10 @@ typedef uint64_t xen_callback_t;
 
 /* Physical Address Space */
 
+/* Virtio MMIO mappings */
+#define GUEST_VIRTIO_MMIO_BASE   xen_mk_ullong(0x02000000)
+#define GUEST_VIRTIO_MMIO_SIZE   xen_mk_ullong(0x00100000)
+
 /*
  * vGIC mappings: Only one set of mapping is used by the guest.
  * Therefore they can overlap.
@@ -484,6 +488,9 @@ typedef uint64_t xen_callback_t;
 
 #define GUEST_VPL011_SPI        32
 
+#define GUEST_VIRTIO_MMIO_SPI_FIRST   33
+#define GUEST_VIRTIO_MMIO_SPI_LAST    43
+
 /* PSCI functions */
 #define PSCI_cpu_suspend        0
 #define PSCI_cpu_off            1
-- 
2.7.4
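As a worked example of the nr_spis arithmetic in
libxl__arch_domain_prepare_config() above: GIC interrupt ID 32 is the first
SPI, so covering the highest allocated virtio IRQ requires (irq - 32) + 1
SPIs. The sketch below is illustrative only (not part of the patch) and
assumes a hypothetical guest with two virtio disks:

/* Illustrative sketch: nr_spis needed to cover the virtio-mmio IRQs. */
#include <stdio.h>
#include <stdint.h>

#define GUEST_VIRTIO_MMIO_SPI_FIRST 33  /* value added by this patch */

int main(void)
{
    unsigned int num_virtio_disks = 2;  /* hypothetical configuration */

    /* IRQs are handed out sequentially, so the highest one is FIRST + n - 1. */
    uint32_t virtio_irq = GUEST_VIRTIO_MMIO_SPI_FIRST + num_virtio_disks - 1;

    /* SPIs are numbered from 32, so SPIs 32..virtio_irq must be provided. */
    uint32_t nr_spis = (virtio_irq - 32) + 1;

    printf("highest virtio SPI: %u, nr_spis: %u\n", virtio_irq, nr_spis);  /* 34, 3 */
    return 0;
}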