From: Val Packett
To: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Eugenio Pérez
Cc: Val Packett, Marek Marczykowski-Górecki, Viresh Kumar, xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, virtualization@lists.linux.dev
Subject: [RFC PATCH] virtio-mmio: add xenbus probing
Date: Wed, 29 Apr 2026 10:52:17 -0300
Message-ID: <20260429141339.74472-1-val@invisiblethingslab.com>
X-Mailer: git-send-email 2.53.0

The experimental virtio-mmio support for Xen was initially developed on
aarch64, so device trees were used to configure the mmio devices, with
arbitrary vGIC interrupts used by the hypervisor.
On x86_64 however, the only reasonable way to interrupt the guest is
over Xen event channels, which can only be acquired by children of
xenbus, the virtual bus driven by Xen's configuration database,
XenStore. It is also a more convenient and "Xen-ish" way to provision
devices.

Implement a xenbus client for virtio-mmio which negotiates an event
channel and provides it as a platform IRQ to the virtio-mmio driver.

Signed-off-by: Val Packett
---
Hi,

I've been working on porting virtio-mmio support from Arm to x86_64,
with the goal of running vhost-user-gpu to power Wayland/GPU
integration for Qubes OS. (I'm aware of various proposals for
alternative virtio transports, but virtio-mmio seems to be the only one
that *is* upstream already and just works.)

Setting up virtio-mmio through xenbus, initially motivated just by
event channels being the only real way to get interrupts working on
HVM, turned out to be quite pleasant :)

I'd like to get some early feedback on this patch, particularly on the
general questions:

* is this whole thing acceptable in general?
* should it be extracted into a different file?
* (from the Xen side) any input on the xenstore keys, what goes where?
* anything else to keep in mind?

It does seem simple enough, so hopefully this can be done.

The corresponding userspace-side WIP is available at:
https://github.com/QubesOS/xen-vhost-frontend

And the required DMOP for firing the evtchn events will be sent to
xen-devel shortly as well.

Thanks,
~val
---
 drivers/virtio/Kconfig       |   7 ++
 drivers/virtio/virtio_mmio.c | 177 ++++++++++++++++++++++++++++++++++-
 2 files changed, 183 insertions(+), 1 deletion(-)

diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
index ce5bc0d9ea28..56bc2b10526b 100644
--- a/drivers/virtio/Kconfig
+++ b/drivers/virtio/Kconfig
@@ -171,6 +171,13 @@ config VIRTIO_MMIO_CMDLINE_DEVICES
 
 	  If unsure, say 'N'.
 
+config VIRTIO_MMIO_XENBUS
+	bool "Memory mapped virtio devices on Xen xenbus"
+	depends on VIRTIO_MMIO && XEN
+	select XEN_XENBUS_FRONTEND
+	help
+	  Allow virtio-mmio device instantiation for Xen guests via xenbus.
+
 config VIRTIO_DMA_SHARED_BUFFER
 	tristate
 	depends on DMA_SHARED_BUFFER
diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
index 595c2274fbb5..32295284bdbf 100644
--- a/drivers/virtio/virtio_mmio.c
+++ b/drivers/virtio/virtio_mmio.c
@@ -70,6 +70,11 @@
 #include <uapi/linux/virtio_mmio.h>
 #include <linux/virtio_ring.h>
 
+#ifdef CONFIG_VIRTIO_MMIO_XENBUS
+#include <xen/xen.h>
+#include <xen/xenbus.h>
+#include <xen/events.h>
+#endif
 
 
 /* The alignment to use between consumer and producer parts of vring.
@@ -810,13 +815,183 @@ static struct platform_driver virtio_mmio_driver = {
 	},
 };
 
+#ifdef CONFIG_VIRTIO_MMIO_XENBUS
+struct virtio_mmio_xen_info {
+	struct resource resources[2];
+	unsigned int evtchn;
+	struct platform_device *pdev;
+};
+
+static int virtio_mmio_xen_probe(struct xenbus_device *dev,
+				 const struct xenbus_device_id *id)
+{
+	int err;
+	long long base, size;
+	char *mem;
+	struct virtio_mmio_xen_info *info;
+	struct xenbus_transaction xbt;
+
+	/* TODO: allocate an unused address here and pass it to the host instead */
+	err = xenbus_scanf(XBT_NIL, dev->otherend, "base", "0x%llx",
+			   &base);
+	if (err < 0) {
+		xenbus_dev_fatal(dev, err, "reading base");
+		return -EINVAL;
+	}
+
+	mem = xenbus_read(XBT_NIL, dev->otherend, "size", NULL);
+	if (XENBUS_IS_ERR_READ(mem))
+		return PTR_ERR(mem);
+	size = memparse(mem, NULL);
+	kfree(mem);
+
+	info = kzalloc_obj(*info);
+	if (!info) {
+		xenbus_dev_fatal(dev, -ENOMEM, "allocating info structure");
+		return -ENOMEM;
+	}
+
+	info->resources[0].flags = IORESOURCE_MEM;
+	info->resources[0].start = base;
+	info->resources[0].end = base + size - 1;
+
+	err = xenbus_alloc_evtchn(dev, &info->evtchn);
+	if (err) {
+		xenbus_dev_fatal(dev, err, "xenbus_alloc_evtchn");
+		goto error_info;
+	}
+
+	err = bind_evtchn_to_irq(info->evtchn);
+	if (err <= 0) {
+		xenbus_dev_fatal(dev, err, "bind_evtchn_to_irq");
+		goto error_evtchan;
+	}
+
+	info->resources[1].flags = IORESOURCE_IRQ;
+	info->resources[1].start = info->resources[1].end = err;
+
+again:
+	err = xenbus_transaction_start(&xbt);
+	if (err) {
+		xenbus_dev_fatal(dev, err, "starting transaction");
+		goto error_irq;
+	}
+
+	err = xenbus_printf(xbt, dev->nodename, "event-channel", "%u",
+			    info->evtchn);
+	if (err) {
+		xenbus_transaction_end(xbt, 1);
+		xenbus_dev_fatal(dev, err, "%s", "writing event-channel");
+		goto error_irq;
+	}
+
+	err = xenbus_transaction_end(xbt, 0);
+	if (err) {
+		if (err == -EAGAIN)
+			goto again;
+		xenbus_dev_fatal(dev, err, "completing transaction");
+		goto error_irq;
+	}
+
+	dev_set_drvdata(&dev->dev, info);
+	xenbus_switch_state(dev, XenbusStateInitialised);
+	return 0;
+
+error_irq:
+	unbind_from_irqhandler(info->resources[1].start, info);
+error_evtchan:
+	xenbus_free_evtchn(dev, info->evtchn);
+error_info:
+	kfree(info);
+
+	return err;
+}
+
+static void virtio_mmio_xen_backend_changed(struct xenbus_device *dev,
+					    enum xenbus_state backend_state)
+{
+	struct virtio_mmio_xen_info *info = dev_get_drvdata(&dev->dev);
+
+	switch (backend_state) {
+	case XenbusStateInitialising:
+	case XenbusStateInitWait:
+	case XenbusStateInitialised:
+	case XenbusStateReconfiguring:
+	case XenbusStateReconfigured:
+	case XenbusStateUnknown:
+		break;
+
+	case XenbusStateConnected:
+		if (dev->state != XenbusStateInitialised) {
+			dev_warn(&dev->dev, "state %d on connect", dev->state);
+			break;
+		}
+		info->pdev = platform_device_register_resndata(&dev->dev,
+				"virtio-mmio", PLATFORM_DEVID_AUTO,
+				info->resources, ARRAY_SIZE(info->resources),
+				NULL, 0);
+		xenbus_switch_state(dev, XenbusStateConnected);
+		break;
+
+	case XenbusStateClosed:
+		if (dev->state == XenbusStateClosed)
+			break;
+		fallthrough;	/* Missed the backend's Closing state. */
+	case XenbusStateClosing:
+		platform_device_unregister(info->pdev);
+		xenbus_switch_state(dev, XenbusStateClosed);
+		break;
+
+	default:
+		xenbus_dev_fatal(dev, -EINVAL, "saw state %d at frontend",
+				 backend_state);
+		break;
+	}
+}
+
+static void virtio_mmio_xen_remove(struct xenbus_device *dev)
+{
+	struct virtio_mmio_xen_info *info = dev_get_drvdata(&dev->dev);
+
+	kfree(info);
+	dev_set_drvdata(&dev->dev, NULL);
+}
+
+static const struct xenbus_device_id virtio_mmio_xen_ids[] = {
+	{ "virtio" },
+	{ "" },
+};
+
+static struct xenbus_driver virtio_mmio_xen_driver = {
+	.ids = virtio_mmio_xen_ids,
+	.probe = virtio_mmio_xen_probe,
+	.otherend_changed = virtio_mmio_xen_backend_changed,
+	.remove = virtio_mmio_xen_remove,
+};
+#endif
+
 static int __init virtio_mmio_init(void)
 {
-	return platform_driver_register(&virtio_mmio_driver);
+	int ret;
+
+	ret = platform_driver_register(&virtio_mmio_driver);
+	if (ret)
+		return ret;
+
+#ifdef CONFIG_VIRTIO_MMIO_XENBUS
+	if (xen_domain())
+		ret = xenbus_register_frontend(&virtio_mmio_xen_driver);
+#endif
+
+	return ret;
 }
 
 static void __exit virtio_mmio_exit(void)
 {
+#ifdef CONFIG_VIRTIO_MMIO_XENBUS
+	if (xen_domain())
+		xenbus_unregister_driver(&virtio_mmio_xen_driver);
+#endif
+
 	platform_driver_unregister(&virtio_mmio_driver);
 	vm_unregister_cmdline_devices();
 }
-- 
2.53.0