From nobody Sun Feb 8 15:46:31 2026
Date: Tue, 30 Dec 2025 05:16:03 -0500
From: "Michael S. Tsirkin"
To: linux-kernel@vger.kernel.org
Cc: Cong Wang, Jonathan Corbet, Olivia Mackall, Herbert Xu, Jason Wang,
	Paolo Bonzini, Stefan Hajnoczi, Eugenio Pérez, "James E.J. Bottomley",
	"Martin K. Petersen", Gerd Hoffmann, Xuan Zhuo, Marek Szyprowski,
	Robin Murphy, Stefano Garzarella, "David S. Miller", Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Simon Horman, Petr Tesarik,
	Leon Romanovsky, Jason Gunthorpe, linux-doc@vger.kernel.org,
	linux-crypto@vger.kernel.org, virtualization@lists.linux.dev,
	linux-scsi@vger.kernel.org, iommu@lists.linux.dev, kvm@vger.kernel.org,
	netdev@vger.kernel.org
Subject: [PATCH RFC 06/13] virtio: add virtqueue_add_inbuf_cache_clean API
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Add virtqueue_add_inbuf_cache_clean() for passing DMA_ATTR_CPU_CACHE_CLEAN
to virtqueue operations. This suppresses DMA debug cacheline overlap
warnings for buffers where proper cache management is ensured by the
caller.

Signed-off-by: Michael S. Tsirkin
---
 drivers/virtio/virtio_ring.c | 72 ++++++++++++++++++++++++++----------
 include/linux/virtio.h       |  5 +++
 2 files changed, 58 insertions(+), 19 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 1832ea7982a6..19a4a8cd22f9 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -382,7 +382,7 @@ static int vring_mapping_error(const struct vring_virtqueue *vq,
 /* Map one sg entry.
  */
 static int vring_map_one_sg(const struct vring_virtqueue *vq, struct scatterlist *sg,
 			    enum dma_data_direction direction, dma_addr_t *addr,
-			    u32 *len, bool premapped)
+			    u32 *len, bool premapped, unsigned long attr)
 {
 	if (premapped) {
 		*addr = sg_dma_address(sg);
@@ -410,7 +410,7 @@ static int vring_map_one_sg(const struct vring_virtqueue *vq, struct scatterlist
 	 */
 	*addr = virtqueue_map_page_attrs(&vq->vq, sg_page(sg),
 					 sg->offset, sg->length,
-					 direction, 0);
+					 direction, attr);
 
 	if (vring_mapping_error(vq, *addr))
 		return -ENOMEM;
@@ -539,7 +539,8 @@ static inline int virtqueue_add_split(struct vring_virtqueue *vq,
 				      void *data,
 				      void *ctx,
 				      bool premapped,
-				      gfp_t gfp)
+				      gfp_t gfp,
+				      unsigned long attr)
 {
 	struct vring_desc_extra *extra;
 	struct scatterlist *sg;
@@ -605,7 +606,8 @@ static inline int virtqueue_add_split(struct vring_virtqueue *vq,
 			dma_addr_t addr;
 			u32 len;
 
-			if (vring_map_one_sg(vq, sg, DMA_TO_DEVICE, &addr, &len, premapped))
+			if (vring_map_one_sg(vq, sg, DMA_TO_DEVICE, &addr, &len,
+					     premapped, attr))
 				goto unmap_release;
 
 			prev = i;
@@ -622,7 +624,8 @@ static inline int virtqueue_add_split(struct vring_virtqueue *vq,
 			dma_addr_t addr;
 			u32 len;
 
-			if (vring_map_one_sg(vq, sg, DMA_FROM_DEVICE, &addr, &len, premapped))
+			if (vring_map_one_sg(vq, sg, DMA_FROM_DEVICE, &addr, &len,
+					     premapped, attr))
 				goto unmap_release;
 
 			prev = i;
@@ -1315,7 +1318,8 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 					 unsigned int in_sgs,
 					 void *data,
 					 bool premapped,
-					 gfp_t gfp)
+					 gfp_t gfp,
+					 unsigned long attr)
 {
 	struct vring_desc_extra *extra;
 	struct vring_packed_desc *desc;
@@ -1346,7 +1350,7 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
 			if (vring_map_one_sg(vq, sg, n < out_sgs ?
 					     DMA_TO_DEVICE : DMA_FROM_DEVICE,
-					     &addr, &len, premapped))
+					     &addr, &len, premapped, attr))
 				goto unmap_release;
 
 			desc[i].flags = cpu_to_le16(n < out_sgs ?
@@ -1441,7 +1445,8 @@ static inline int virtqueue_add_packed(struct vring_virtqueue *vq,
 				       void *data,
 				       void *ctx,
 				       bool premapped,
-				       gfp_t gfp)
+				       gfp_t gfp,
+				       unsigned long attr)
 {
 	struct vring_packed_desc *desc;
 	struct scatterlist *sg;
@@ -1466,7 +1471,7 @@ static inline int virtqueue_add_packed(struct vring_virtqueue *vq,
 
 	if (virtqueue_use_indirect(vq, total_sg)) {
 		err = virtqueue_add_indirect_packed(vq, sgs, total_sg, out_sgs,
-						    in_sgs, data, premapped, gfp);
+						    in_sgs, data, premapped, gfp, attr);
 		if (err != -ENOMEM) {
 			END_USE(vq);
 			return err;
@@ -1502,7 +1507,7 @@ static inline int virtqueue_add_packed(struct vring_virtqueue *vq,
 
 			if (vring_map_one_sg(vq, sg, n < out_sgs ?
 					     DMA_TO_DEVICE : DMA_FROM_DEVICE,
-					     &addr, &len, premapped))
+					     &addr, &len, premapped, attr))
 				goto unmap_release;
 
 			flags = cpu_to_le16(vq->packed.avail_used_flags |
@@ -2244,14 +2249,17 @@ static inline int virtqueue_add(struct virtqueue *_vq,
 				void *data,
 				void *ctx,
 				bool premapped,
-				gfp_t gfp)
+				gfp_t gfp,
+				unsigned long attr)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
 	return vq->packed_ring ?
 		virtqueue_add_packed(vq, sgs, total_sg,
-				     out_sgs, in_sgs, data, ctx, premapped, gfp) :
+				     out_sgs, in_sgs, data, ctx, premapped, gfp,
+				     attr) :
 		virtqueue_add_split(vq, sgs, total_sg,
-				    out_sgs, in_sgs, data, ctx, premapped, gfp);
+				    out_sgs, in_sgs, data, ctx, premapped, gfp,
+				    attr);
 }
 
 /**
@@ -2289,7 +2297,7 @@ int virtqueue_add_sgs(struct virtqueue *_vq,
 			total_sg++;
 	}
 	return virtqueue_add(_vq, sgs, total_sg, out_sgs, in_sgs,
-			     data, NULL, false, gfp);
+			     data, NULL, false, gfp, 0);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_sgs);
 
@@ -2311,7 +2319,7 @@ int virtqueue_add_outbuf(struct virtqueue *vq,
 			 void *data,
 			 gfp_t gfp)
 {
-	return virtqueue_add(vq, &sg, num, 1, 0, data, NULL, false, gfp);
+	return virtqueue_add(vq, &sg, num, 1, 0, data, NULL, false, gfp, 0);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_outbuf);
 
@@ -2334,7 +2342,7 @@ int virtqueue_add_outbuf_premapped(struct virtqueue *vq,
 				   void *data,
 				   gfp_t gfp)
 {
-	return virtqueue_add(vq, &sg, num, 1, 0, data, NULL, true, gfp);
+	return virtqueue_add(vq, &sg, num, 1, 0, data, NULL, true, gfp, 0);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_outbuf_premapped);
 
@@ -2356,10 +2364,36 @@ int virtqueue_add_inbuf(struct virtqueue *vq,
 			void *data,
 			gfp_t gfp)
 {
-	return virtqueue_add(vq, &sg, num, 0, 1, data, NULL, false, gfp);
+	return virtqueue_add(vq, &sg, num, 0, 1, data, NULL, false, gfp, 0);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_inbuf);
 
+/**
+ * virtqueue_add_inbuf_cache_clean - expose input buffers with cache clean hint
+ * @vq: the struct virtqueue we're talking about.
+ * @sg: scatterlist (must be well-formed and terminated!)
+ * @num: the number of entries in @sg writable by other side
+ * @data: the token identifying the buffer.
+ * @gfp: how to do memory allocations (if necessary).
+ *
+ * Adds DMA_ATTR_CPU_CACHE_CLEAN attribute to suppress overlapping cacheline
+ * warnings in DMA debug builds. Has no effect in production builds.
+ *
+ * Caller must ensure we don't call this with other virtqueue operations
+ * at the same time (except where noted).
+ *
+ * Returns zero or a negative error (ie. ENOSPC, ENOMEM, EIO).
+ */
+int virtqueue_add_inbuf_cache_clean(struct virtqueue *vq,
+				    struct scatterlist *sg, unsigned int num,
+				    void *data,
+				    gfp_t gfp)
+{
+	return virtqueue_add(vq, &sg, num, 0, 1, data, NULL, false, gfp,
+			     DMA_ATTR_CPU_CACHE_CLEAN);
+}
+EXPORT_SYMBOL_GPL(virtqueue_add_inbuf_cache_clean);
+
 /**
  * virtqueue_add_inbuf_ctx - expose input buffers to other end
  * @vq: the struct virtqueue we're talking about.
@@ -2380,7 +2414,7 @@ int virtqueue_add_inbuf_ctx(struct virtqueue *vq,
 			    void *ctx,
 			    gfp_t gfp)
 {
-	return virtqueue_add(vq, &sg, num, 0, 1, data, ctx, false, gfp);
+	return virtqueue_add(vq, &sg, num, 0, 1, data, ctx, false, gfp, 0);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_inbuf_ctx);
 
@@ -2405,7 +2439,7 @@ int virtqueue_add_inbuf_premapped(struct virtqueue *vq,
 				  void *ctx,
 				  gfp_t gfp)
 {
-	return virtqueue_add(vq, &sg, num, 0, 1, data, ctx, true, gfp);
+	return virtqueue_add(vq, &sg, num, 0, 1, data, ctx, true, gfp, 0);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_inbuf_premapped);
 
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 3626eb694728..63bb05ece8c5 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -62,6 +62,11 @@ int virtqueue_add_inbuf(struct virtqueue *vq,
 			void *data,
 			gfp_t gfp);
 
+int virtqueue_add_inbuf_cache_clean(struct virtqueue *vq,
+				    struct scatterlist sg[], unsigned int num,
+				    void *data,
+				    gfp_t gfp);
+
 int virtqueue_add_inbuf_ctx(struct virtqueue *vq,
 			    struct scatterlist sg[], unsigned int num,
 			    void *data,
-- 
MST