From nobody Thu Oct 2 01:59:14 2025
From: Jason Wang
To: mst@redhat.com, jasowang@redhat.com, xuanzhuo@linux.alibaba.com, eperezma@redhat.com
Cc: virtualization@lists.linux.dev, linux-kernel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH V7 1/2] vdpa: introduce map ops
Date: Wed, 24 Sep 2025 15:00:44 +0800
Message-ID: <20250924070045.10361-2-jasowang@redhat.com>
In-Reply-To: <20250924070045.10361-1-jasowang@redhat.com>
References: <20250924070045.10361-1-jasowang@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Virtio core allows the transport to provide device- or transport-specific
mapping functions. This patch adds that support to vDPA by simply allowing
the vDPA parent to register a virtio_map_ops.

Reviewed-by: Christoph Hellwig
Signed-off-by: Jason Wang
---
 drivers/vdpa/alibaba/eni_vdpa.c          |  3 ++-
 drivers/vdpa/ifcvf/ifcvf_main.c          |  3 ++-
 drivers/vdpa/mlx5/net/mlx5_vnet.c        |  2 +-
 drivers/vdpa/octeon_ep/octep_vdpa_main.c |  4 ++--
 drivers/vdpa/pds/vdpa_dev.c              |  3 ++-
 drivers/vdpa/solidrun/snet_main.c        |  4 ++--
 drivers/vdpa/vdpa.c                      |  3 +++
 drivers/vdpa/vdpa_sim/vdpa_sim.c         |  2 +-
 drivers/vdpa/vdpa_user/vduse_dev.c       |  3 ++-
 drivers/vdpa/virtio_pci/vp_vdpa.c        |  3 ++-
 drivers/virtio/virtio_vdpa.c             |  4 +++-
 include/linux/vdpa.h                     | 10 +++++++---
 12 files changed, 29 insertions(+), 15 deletions(-)

diff --git a/drivers/vdpa/alibaba/eni_vdpa.c b/drivers/vdpa/alibaba/eni_vdpa.c
index 54aea086d08c..e476504db0c8 100644
--- a/drivers/vdpa/alibaba/eni_vdpa.c
+++ b/drivers/vdpa/alibaba/eni_vdpa.c
@@ -478,7 +478,8 @@ static int eni_vdpa_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 		return ret;
 
 	eni_vdpa = vdpa_alloc_device(struct eni_vdpa, vdpa,
-				     dev, &eni_vdpa_ops, 1, 1, NULL, false);
+				     dev, &eni_vdpa_ops, NULL,
+				     1, 1, NULL, false);
 	if (IS_ERR(eni_vdpa)) {
 		ENI_ERR(pdev, "failed to allocate vDPA structure\n");
 		return PTR_ERR(eni_vdpa);
diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c
index 979d188d74ee..6658dc74d915 100644
--- a/drivers/vdpa/ifcvf/ifcvf_main.c
+++ b/drivers/vdpa/ifcvf/ifcvf_main.c
@@ -705,7 +705,8 @@ static int ifcvf_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
 	vf = &ifcvf_mgmt_dev->vf;
 	pdev = vf->pdev;
 	adapter = vdpa_alloc_device(struct ifcvf_adapter, vdpa,
-				    &pdev->dev, &ifc_vdpa_ops, 1, 1, NULL, false);
+				    &pdev->dev, &ifc_vdpa_ops,
+				    NULL, 1, 1, NULL, false);
 	if (IS_ERR(adapter)) {
 		IFCVF_ERR(pdev, "Failed to allocate vDPA structure");
 		return PTR_ERR(adapter);
diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index a7e76f175914..82034efb74fc 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -3882,7 +3882,7 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
 	}
 
 	ndev = vdpa_alloc_device(struct mlx5_vdpa_net, mvdev.vdev, mdev->device, &mgtdev->vdpa_ops,
-				 MLX5_VDPA_NUMVQ_GROUPS, MLX5_VDPA_NUM_AS, name, false);
+				 NULL, MLX5_VDPA_NUMVQ_GROUPS, MLX5_VDPA_NUM_AS, name, false);
 	if (IS_ERR(ndev))
 		return PTR_ERR(ndev);
 
diff --git a/drivers/vdpa/octeon_ep/octep_vdpa_main.c b/drivers/vdpa/octeon_ep/octep_vdpa_main.c
index 5818dae133a3..9e8d07078606 100644
--- a/drivers/vdpa/octeon_ep/octep_vdpa_main.c
+++ b/drivers/vdpa/octeon_ep/octep_vdpa_main.c
@@ -508,8 +508,8 @@ static int octep_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
 	u64 device_features;
 	int ret;
 
-	oct_vdpa = vdpa_alloc_device(struct octep_vdpa, vdpa, &pdev->dev, &octep_vdpa_ops, 1, 1,
-				     NULL, false);
+	oct_vdpa = vdpa_alloc_device(struct octep_vdpa, vdpa, &pdev->dev, &octep_vdpa_ops,
+				     NULL, 1, 1, NULL, false);
 	if (IS_ERR(oct_vdpa)) {
 		dev_err(&pdev->dev, "Failed to allocate vDPA structure for octep vdpa device");
 		return PTR_ERR(oct_vdpa);
diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c
index 63d82263fb52..36f61cc96e21 100644
--- a/drivers/vdpa/pds/vdpa_dev.c
+++ b/drivers/vdpa/pds/vdpa_dev.c
@@ -632,7 +632,8 @@ static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
 	}
 
 	pdsv = vdpa_alloc_device(struct pds_vdpa_device, vdpa_dev,
-				 dev, &pds_vdpa_ops, 1, 1, name, false);
+				 dev, &pds_vdpa_ops, NULL,
+				 1, 1, name, false);
 	if (IS_ERR(pdsv)) {
 		dev_err(dev, "Failed to allocate vDPA structure: %pe\n", pdsv);
 		return PTR_ERR(pdsv);
diff --git a/drivers/vdpa/solidrun/snet_main.c b/drivers/vdpa/solidrun/snet_main.c
index 39050aab147f..4588211d57eb 100644
--- a/drivers/vdpa/solidrun/snet_main.c
+++ b/drivers/vdpa/solidrun/snet_main.c
@@ -1008,8 +1008,8 @@ static int snet_vdpa_probe_vf(struct pci_dev *pdev)
 	}
 
 	/* Allocate vdpa device */
-	snet = vdpa_alloc_device(struct snet, vdpa, &pdev->dev, &snet_config_ops, 1, 1, NULL,
-				 false);
+	snet = vdpa_alloc_device(struct snet, vdpa, &pdev->dev, &snet_config_ops,
+				 NULL, 1, 1, NULL, false);
 	if (!snet) {
 		SNET_ERR(pdev, "Failed to allocate a vdpa device\n");
 		ret = -ENOMEM;
diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
index c71debeb8471..34874beb0152 100644
--- a/drivers/vdpa/vdpa.c
+++ b/drivers/vdpa/vdpa.c
@@ -142,6 +142,7 @@ static void vdpa_release_dev(struct device *d)
  * initialized but before registered.
  * @parent: the parent device
  * @config: the bus operations that is supported by this device
+ * @map: the map operations that is supported by this device
  * @ngroups: number of groups supported by this device
  * @nas: number of address spaces supported by this device
  * @size: size of the parent structure that contains private data
@@ -156,6 +157,7 @@ static void vdpa_release_dev(struct device *d)
  */
 struct vdpa_device *__vdpa_alloc_device(struct device *parent,
 					const struct vdpa_config_ops *config,
+					const struct virtio_map_ops *map,
 					unsigned int ngroups, unsigned int nas,
 					size_t size, const char *name,
 					bool use_va)
@@ -187,6 +189,7 @@ struct vdpa_device *__vdpa_alloc_device(struct device *parent,
 	vdev->dev.release = vdpa_release_dev;
 	vdev->index = err;
 	vdev->config = config;
+	vdev->map = map;
 	vdev->features_valid = false;
 	vdev->use_va = use_va;
 	vdev->ngroups = ngroups;
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index 22ee53538444..c1c6431950e1 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -215,7 +215,7 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr,
 	else
 		ops = &vdpasim_config_ops;
 
-	vdpa = __vdpa_alloc_device(NULL, ops,
+	vdpa = __vdpa_alloc_device(NULL, ops, NULL,
 				   dev_attr->ngroups, dev_attr->nas,
 				   dev_attr->alloc_size,
 				   dev_attr->name, use_va);
diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
index f68ed569394c..ad782d20a8ed 100644
--- a/drivers/vdpa/vdpa_user/vduse_dev.c
+++ b/drivers/vdpa/vdpa_user/vduse_dev.c
@@ -2009,7 +2009,8 @@ static int vduse_dev_init_vdpa(struct vduse_dev *dev, const char *name)
 		return -EEXIST;
 
 	vdev = vdpa_alloc_device(struct vduse_vdpa, vdpa, dev->dev,
-				 &vduse_vdpa_config_ops, 1, 1, name, true);
+				 &vduse_vdpa_config_ops, NULL,
+				 1, 1, name, true);
 	if (IS_ERR(vdev))
 		return PTR_ERR(vdev);
 
diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
index 242641c0f2bd..17a19a728c9c 100644
--- a/drivers/vdpa/virtio_pci/vp_vdpa.c
+++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
@@ -511,7 +511,8 @@ static int vp_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
 	int ret, i;
 
 	vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa,
-				    dev, &vp_vdpa_ops, 1, 1, name, false);
+				    dev, &vp_vdpa_ops, NULL,
+				    1, 1, name, false);
 
 	if (IS_ERR(vp_vdpa)) {
 		dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n");
diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c
index 8b27c6e8eebb..2464fa655f09 100644
--- a/drivers/virtio/virtio_vdpa.c
+++ b/drivers/virtio/virtio_vdpa.c
@@ -466,9 +466,11 @@ static int virtio_vdpa_probe(struct vdpa_device *vdpa)
 	if (!vd_dev)
 		return -ENOMEM;
 
-	vd_dev->vdev.dev.parent = vdpa_get_map(vdpa).dma_dev;
+	vd_dev->vdev.dev.parent = vdpa->map ? &vdpa->dev :
+				  vdpa_get_map(vdpa).dma_dev;
 	vd_dev->vdev.dev.release = virtio_vdpa_release_dev;
 	vd_dev->vdev.config = &virtio_vdpa_config_ops;
+	vd_dev->vdev.map = vdpa->map;
 	vd_dev->vdpa = vdpa;
 
 	vd_dev->vdev.id.device = ops->get_device_id(vdpa);
diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index ae0451945851..4cf21d6e9cfd 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -76,6 +76,7 @@ struct vdpa_mgmt_dev;
  *		because core frees it; use driver_set_override() to
  *		set or clear it.
  * @config: the configuration ops for this device.
+ * @map: the map ops for this device
  * @cf_lock: Protects get and set access to configuration layout.
  * @index: device index
  * @features_valid: were features initialized? for legacy guests
@@ -91,6 +92,7 @@ struct vdpa_device {
 	union virtio_map vmap;
 	const char *driver_override;
 	const struct vdpa_config_ops *config;
+	const struct virtio_map_ops *map;
 	struct rw_semaphore cf_lock; /* Protects get/set config */
 	unsigned int index;
 	bool features_valid;
@@ -447,6 +449,7 @@ struct vdpa_config_ops {
 
 struct vdpa_device *__vdpa_alloc_device(struct device *parent,
 					const struct vdpa_config_ops *config,
+					const struct virtio_map_ops *map,
 					unsigned int ngroups, unsigned int nas,
 					size_t size, const char *name,
 					bool use_va);
@@ -458,6 +461,7 @@ struct vdpa_device *__vdpa_alloc_device(struct device *parent,
 * @member: the name of struct vdpa_device within the @dev_struct
 * @parent: the parent device
 * @config: the bus operations that is supported by this device
+ * @map: the map operations that is supported by this device
 * @ngroups: the number of virtqueue groups supported by this device
 * @nas: the number of address spaces
 * @name: name of the vdpa device
@@ -465,10 +469,10 @@ struct vdpa_device *__vdpa_alloc_device(struct device *parent,
 *
 * Return allocated data structure or ERR_PTR upon error
 */
-#define vdpa_alloc_device(dev_struct, member, parent, config, ngroups, nas, \
-			  name, use_va) \
+#define vdpa_alloc_device(dev_struct, member, parent, config, map, \
+			  ngroups, nas, name, use_va) \
 	container_of((__vdpa_alloc_device( \
-			parent, config, ngroups, nas, \
+			parent, config, map, ngroups, nas, \
 			(sizeof(dev_struct) + \
 			 BUILD_BUG_ON_ZERO(offsetof( \
 				dev_struct, member))), name, use_va)), \
-- 
2.31.1
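
For illustration, here is a minimal sketch (not part of the series) of how a
vDPA parent driver could use the extended allocator: it either registers its
own virtio_map_ops or passes NULL to keep relying on the DMA device, exactly
as the converted drivers above do. All foo_* identifiers are hypothetical;
the op signatures follow the virtio_map_ops usage shown in patch 2/2.

	/*
	 * Sketch only: a hypothetical parent registering device-specific
	 * mapping ops. foo_vdpa, foo_config_ops and foo_translate are
	 * placeholders, not real kernel symbols.
	 */
	static dma_addr_t foo_map_page(union virtio_map token, struct page *page,
				       unsigned long offset, size_t size,
				       enum dma_data_direction dir,
				       unsigned long attrs)
	{
		/* Translate using whatever mapping state the token carries. */
		return foo_translate(token, page, offset, size, dir);
	}

	static const struct virtio_map_ops foo_map_ops = {
		.map_page = foo_map_page,
		/* remaining ops (unmap_page, alloc, free, ...) elided */
	};

	static int foo_vdpa_probe(struct device *dev)
	{
		struct foo_vdpa *foo;

		/* New signature: the map ops pointer follows the config ops. */
		foo = vdpa_alloc_device(struct foo_vdpa, vdpa, dev,
					&foo_config_ops, &foo_map_ops,
					1, 1, NULL, false);
		if (IS_ERR(foo))
			return PTR_ERR(foo);

		/* Parents that still perform real DMA pass NULL instead. */
		return 0;
	}
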
From nobody Thu Oct 2 01:59:14 2025
From: Jason Wang
To: mst@redhat.com, jasowang@redhat.com, xuanzhuo@linux.alibaba.com, eperezma@redhat.com
Cc: virtualization@lists.linux.dev, linux-kernel@vger.kernel.org
Subject: [PATCH V7 2/2] vduse: switch to use virtio map API instead of DMA API
Date: Wed, 24 Sep 2025 15:00:45 +0800
Message-ID: <20250924070045.10361-3-jasowang@redhat.com>
In-Reply-To: <20250924070045.10361-1-jasowang@redhat.com>
References: <20250924070045.10361-1-jasowang@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Without support for device-specific mappings in virtio, VDUSE had to trick
the DMA API to make the virtio-vdpa transport work: the vDPA device was
advertised as a DMA device with VDUSE-specific dma_ops even though it does
not perform DMA at all.

Fix this by using the new mapping operations supported by virtio and vDPA:
VDUSE now advertises its own mapping operations to virtio via virtio-vdpa,
so the DMA API is no longer needed and the iova domain is used as the
mapping token instead.

Signed-off-by: Jason Wang
---
 drivers/vdpa/Kconfig                 |  8 +--
 drivers/vdpa/vdpa_user/iova_domain.c |  2 +-
 drivers/vdpa/vdpa_user/iova_domain.h |  2 +-
 drivers/vdpa/vdpa_user/vduse_dev.c   | 78 ++++++++++++++--------------
 include/linux/virtio.h               |  4 ++
 5 files changed, 46 insertions(+), 48 deletions(-)

diff --git a/drivers/vdpa/Kconfig b/drivers/vdpa/Kconfig
index 559fb9d3271f..857cf288c876 100644
--- a/drivers/vdpa/Kconfig
+++ b/drivers/vdpa/Kconfig
@@ -34,13 +34,7 @@ config VDPA_SIM_BLOCK
 
 config VDPA_USER
 	tristate "VDUSE (vDPA Device in Userspace) support"
-	depends on EVENTFD && MMU && HAS_DMA
-	#
-	# This driver incorrectly tries to override the dma_ops. It should
-	# never have done that, but for now keep it working on architectures
-	# that use dma ops
-	#
-	depends on ARCH_HAS_DMA_OPS
+	depends on EVENTFD && MMU
 	select VHOST_IOTLB
 	select IOMMU_IOVA
 	help
diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
index 58116f89d8da..ccaed24b7ef8 100644
--- a/drivers/vdpa/vdpa_user/iova_domain.c
+++ b/drivers/vdpa/vdpa_user/iova_domain.c
@@ -447,7 +447,7 @@ void vduse_domain_unmap_page(struct vduse_iova_domain *domain,
 
 void *vduse_domain_alloc_coherent(struct vduse_iova_domain *domain,
 				  size_t size, dma_addr_t *dma_addr,
-				  gfp_t flag, unsigned long attrs)
+				  gfp_t flag)
 {
 	struct iova_domain *iovad = &domain->consistent_iovad;
 	unsigned long limit = domain->iova_limit;
diff --git a/drivers/vdpa/vdpa_user/iova_domain.h b/drivers/vdpa/vdpa_user/iova_domain.h
index 7f3f0928ec78..1f3c30be272a 100644
--- a/drivers/vdpa/vdpa_user/iova_domain.h
+++ b/drivers/vdpa/vdpa_user/iova_domain.h
@@ -64,7 +64,7 @@ void vduse_domain_unmap_page(struct vduse_iova_domain *domain,
 
 void *vduse_domain_alloc_coherent(struct vduse_iova_domain *domain,
 				  size_t size, dma_addr_t *dma_addr,
-				  gfp_t flag, unsigned long attrs);
+				  gfp_t flag);
 
 void vduse_domain_free_coherent(struct vduse_iova_domain *domain, size_t size,
 				void *vaddr, dma_addr_t dma_addr,
diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
index ad782d20a8ed..e7bced0b5542 100644
--- a/drivers/vdpa/vdpa_user/vduse_dev.c
+++ b/drivers/vdpa/vdpa_user/vduse_dev.c
@@ -814,59 +814,53 @@ static const struct vdpa_config_ops vduse_vdpa_config_ops = {
 	.free			= vduse_vdpa_free,
 };
 
-static void vduse_dev_sync_single_for_device(struct device *dev,
+static void vduse_dev_sync_single_for_device(union virtio_map token,
					     dma_addr_t dma_addr, size_t size,
					     enum dma_data_direction dir)
 {
-	struct vduse_dev *vdev = dev_to_vduse(dev);
-	struct vduse_iova_domain *domain = vdev->domain;
+	struct vduse_iova_domain *domain = token.iova_domain;
 
 	vduse_domain_sync_single_for_device(domain, dma_addr, size, dir);
 }
 
-static void vduse_dev_sync_single_for_cpu(struct device *dev,
+static void vduse_dev_sync_single_for_cpu(union virtio_map token,
					  dma_addr_t dma_addr, size_t size,
					  enum dma_data_direction dir)
 {
-	struct vduse_dev *vdev = dev_to_vduse(dev);
-	struct vduse_iova_domain *domain = vdev->domain;
+	struct vduse_iova_domain *domain = token.iova_domain;
 
 	vduse_domain_sync_single_for_cpu(domain, dma_addr, size, dir);
 }
 
-static dma_addr_t vduse_dev_map_page(struct device *dev, struct page *page,
+static dma_addr_t vduse_dev_map_page(union virtio_map token, struct page *page,
				     unsigned long offset, size_t size,
				     enum dma_data_direction dir,
				     unsigned long attrs)
 {
-	struct vduse_dev *vdev = dev_to_vduse(dev);
-	struct vduse_iova_domain *domain = vdev->domain;
+	struct vduse_iova_domain *domain = token.iova_domain;
 
 	return vduse_domain_map_page(domain, page, offset, size, dir, attrs);
 }
 
-static void vduse_dev_unmap_page(struct device *dev, dma_addr_t dma_addr,
-				 size_t size, enum dma_data_direction dir,
-				 unsigned long attrs)
+static void vduse_dev_unmap_page(union virtio_map token, dma_addr_t dma_addr,
+				 size_t size, enum dma_data_direction dir,
+				 unsigned long attrs)
 {
-	struct vduse_dev *vdev = dev_to_vduse(dev);
-	struct vduse_iova_domain *domain = vdev->domain;
+	struct vduse_iova_domain *domain = token.iova_domain;
 
 	return vduse_domain_unmap_page(domain, dma_addr, size, dir, attrs);
 }
 
-static void *vduse_dev_alloc_coherent(struct device *dev, size_t size,
-				      dma_addr_t *dma_addr, gfp_t flag,
-				      unsigned long attrs)
+static void *vduse_dev_alloc_coherent(union virtio_map token, size_t size,
+				      dma_addr_t *dma_addr, gfp_t flag)
 {
-	struct vduse_dev *vdev = dev_to_vduse(dev);
-	struct vduse_iova_domain *domain = vdev->domain;
+	struct vduse_iova_domain *domain = token.iova_domain;
 	unsigned long iova;
 	void *addr;
 
 	*dma_addr = DMA_MAPPING_ERROR;
 	addr = vduse_domain_alloc_coherent(domain, size,
-					   (dma_addr_t *)&iova, flag, attrs);
+					   (dma_addr_t *)&iova, flag);
 	if (!addr)
 		return NULL;
 
@@ -875,31 +869,45 @@ static void *vduse_dev_alloc_coherent(struct device *dev, size_t size,
 	return addr;
 }
 
-static void vduse_dev_free_coherent(struct device *dev, size_t size,
-				    void *vaddr, dma_addr_t dma_addr,
-				    unsigned long attrs)
+static void vduse_dev_free_coherent(union virtio_map token, size_t size,
+				    void *vaddr, dma_addr_t dma_addr,
+				    unsigned long attrs)
 {
-	struct vduse_dev *vdev = dev_to_vduse(dev);
-	struct vduse_iova_domain *domain = vdev->domain;
+	struct vduse_iova_domain *domain = token.iova_domain;
 
 	vduse_domain_free_coherent(domain, size, vaddr, dma_addr, attrs);
 }
 
-static size_t vduse_dev_max_mapping_size(struct device *dev)
+static bool vduse_dev_need_sync(union virtio_map token, dma_addr_t dma_addr)
 {
-	struct vduse_dev *vdev = dev_to_vduse(dev);
-	struct vduse_iova_domain *domain = vdev->domain;
+	struct vduse_iova_domain *domain = token.iova_domain;
+
+	return dma_addr < domain->bounce_size;
+}
+
+static int vduse_dev_mapping_error(union virtio_map token, dma_addr_t dma_addr)
+{
+	if (unlikely(dma_addr == DMA_MAPPING_ERROR))
+		return -ENOMEM;
+	return 0;
+}
+
+static size_t vduse_dev_max_mapping_size(union virtio_map token)
+{
+	struct vduse_iova_domain *domain = token.iova_domain;
 
 	return domain->bounce_size;
 }
 
-static const struct dma_map_ops vduse_dev_dma_ops = {
+static const struct virtio_map_ops vduse_map_ops = {
 	.sync_single_for_device = vduse_dev_sync_single_for_device,
 	.sync_single_for_cpu = vduse_dev_sync_single_for_cpu,
 	.map_page = vduse_dev_map_page,
 	.unmap_page = vduse_dev_unmap_page,
 	.alloc = vduse_dev_alloc_coherent,
 	.free = vduse_dev_free_coherent,
+	.need_sync = vduse_dev_need_sync,
+	.mapping_error = vduse_dev_mapping_error,
 	.max_mapping_size = vduse_dev_max_mapping_size,
 };
 
@@ -2003,27 +2011,18 @@ static struct vduse_mgmt_dev *vduse_mgmt;
 static int vduse_dev_init_vdpa(struct vduse_dev *dev, const char *name)
 {
 	struct vduse_vdpa *vdev;
-	int ret;
 
 	if (dev->vdev)
 		return -EEXIST;
 
 	vdev = vdpa_alloc_device(struct vduse_vdpa, vdpa, dev->dev,
-				 &vduse_vdpa_config_ops, NULL,
+				 &vduse_vdpa_config_ops, &vduse_map_ops,
 				 1, 1, name, true);
 	if (IS_ERR(vdev))
 		return PTR_ERR(vdev);
 
 	dev->vdev = vdev;
 	vdev->dev = dev;
-	vdev->vdpa.dev.dma_mask = &vdev->vdpa.dev.coherent_dma_mask;
-	ret = dma_set_mask_and_coherent(&vdev->vdpa.dev, DMA_BIT_MASK(64));
-	if (ret) {
-		put_device(&vdev->vdpa.dev);
-		return ret;
-	}
-	set_dma_ops(&vdev->vdpa.dev, &vduse_dev_dma_ops);
-	vdev->vdpa.vmap.dma_dev = &vdev->vdpa.dev;
 	vdev->vdpa.mdev = &vduse_mgmt->mgmt_dev;
 
 	return 0;
@@ -2056,6 +2055,7 @@ static int vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
 		return -ENOMEM;
 	}
 
+	dev->vdev->vdpa.vmap.iova_domain = dev->domain;
 	ret = _vdpa_register_device(&dev->vdev->vdpa, dev->vq_num);
 	if (ret) {
 		put_device(&dev->vdev->vdpa.dev);
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 3386a4a8d06b..96c66126c074 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -41,9 +41,13 @@ struct virtqueue {
 	void *priv;
 };
 
+struct vduse_iova_domain;
+
 union virtio_map {
	/* Device that performs DMA */
	struct device *dma_dev;
+	/* VDUSE specific mapping data */
+	struct vduse_iova_domain *iova_domain;
 };
 
 int virtqueue_add_outbuf(struct virtqueue *vq,
-- 
2.31.1
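
To make the token plumbing concrete, here is an illustrative sketch (an
assumption-laden example, not code from virtio core or from this series) of
how a mapping call can be routed either through a parent's virtio_map_ops or
through the regular DMA API, depending on which member of union virtio_map
the backend filled in. For VDUSE the token carries the iova_domain set in
vdpa_dev_add() above; for other parents it remains the DMA device. The
example_map_page helper and its argument list are hypothetical.

	/*
	 * Sketch only: dispatch a page mapping through device-specific map
	 * ops when they exist, otherwise fall back to the DMA API using the
	 * DMA device stored in the token.
	 */
	static dma_addr_t example_map_page(const struct virtio_map_ops *map_ops,
					   union virtio_map token,
					   struct page *page,
					   unsigned long offset, size_t size,
					   enum dma_data_direction dir)
	{
		if (map_ops)
			/* Device-specific path, e.g. VDUSE's vduse_map_ops. */
			return map_ops->map_page(token, page, offset, size,
						 dir, 0);

		/* Default path: the token names the device that does DMA. */
		return dma_map_page(token.dma_dev, page, offset, size, dir);
	}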