Date: Mon, 5 Jan 2026 03:23:13 -0500
From: "Michael S. Tsirkin"
To: linux-kernel@vger.kernel.org
Cc: Cong Wang, Jonathan Corbet, Olivia Mackall, Herbert Xu, Jason Wang,
    Paolo Bonzini, Stefan Hajnoczi, Eugenio Pérez, "James E.J. Bottomley",
    "Martin K. Petersen", Gerd Hoffmann, Xuan Zhuo, Marek Szyprowski,
    Robin Murphy, Stefano Garzarella, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, Simon Horman, Petr Tesarik,
    Leon Romanovsky, Jason Gunthorpe, Bartosz Golaszewski,
    linux-doc@vger.kernel.org, linux-crypto@vger.kernel.org,
    virtualization@lists.linux.dev, linux-scsi@vger.kernel.org,
    iommu@lists.linux.dev, kvm@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH v2 06/15] virtio: add virtqueue_add_inbuf_cache_clean API

Add virtqueue_add_inbuf_cache_clean() for passing DMA_ATTR_CPU_CACHE_CLEAN
to virtqueue operations. This suppresses DMA debug cacheline overlap
warnings for buffers where proper cache management is ensured by the
caller.

Signed-off-by: Michael S. Tsirkin
---
 drivers/virtio/virtio_ring.c | 83 ++++++++++++++++++++++++++----------
 include/linux/virtio.h       |  5 +++
 2 files changed, 65 insertions(+), 23 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 95e320b23624..4fe0f78df5ec 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -174,7 +174,8 @@ struct virtqueue_ops {
 	int (*add)(struct vring_virtqueue *vq, struct scatterlist *sgs[],
 		   unsigned int total_sg, unsigned int out_sgs,
 		   unsigned int in_sgs, void *data,
-		   void *ctx, bool premapped, gfp_t gfp);
+		   void *ctx, bool premapped, gfp_t gfp,
+		   unsigned long attr);
 	void *(*get)(struct vring_virtqueue *vq, unsigned int *len, void **ctx);
 	bool (*kick_prepare)(struct vring_virtqueue *vq);
 	void (*disable_cb)(struct vring_virtqueue *vq);
@@ -444,7 +445,7 @@ static int vring_mapping_error(const struct vring_virtqueue *vq,
 /* Map one sg entry. */
 static int vring_map_one_sg(const struct vring_virtqueue *vq, struct scatterlist *sg,
 			    enum dma_data_direction direction, dma_addr_t *addr,
-			    u32 *len, bool premapped)
+			    u32 *len, bool premapped, unsigned long attr)
 {
 	if (premapped) {
 		*addr = sg_dma_address(sg);
@@ -472,7 +473,7 @@ static int vring_map_one_sg(const struct vring_virtqueue *vq, struct scatterlist
 	 */
 	*addr = virtqueue_map_page_attrs(&vq->vq, sg_page(sg),
 					 sg->offset, sg->length,
-					 direction, 0);
+					 direction, attr);
 
 	if (vring_mapping_error(vq, *addr))
 		return -ENOMEM;
@@ -603,7 +604,8 @@ static inline int virtqueue_add_split(struct vring_virtqueue *vq,
 				      void *data,
 				      void *ctx,
 				      bool premapped,
-				      gfp_t gfp)
+				      gfp_t gfp,
+				      unsigned long attr)
 {
 	struct vring_desc_extra *extra;
 	struct scatterlist *sg;
@@ -675,7 +677,8 @@ static inline int virtqueue_add_split(struct vring_virtqueue *vq,
 			if (++sg_count != total_sg)
 				flags |= VRING_DESC_F_NEXT;
 
-			if (vring_map_one_sg(vq, sg, DMA_TO_DEVICE, &addr, &len, premapped))
+			if (vring_map_one_sg(vq, sg, DMA_TO_DEVICE, &addr, &len,
+					     premapped, attr))
 				goto unmap_release;
 
 			/* Note that we trust indirect descriptor
@@ -694,7 +697,8 @@ static inline int virtqueue_add_split(struct vring_virtqueue *vq,
 			if (++sg_count != total_sg)
 				flags |= VRING_DESC_F_NEXT;
 
-			if (vring_map_one_sg(vq, sg, DMA_FROM_DEVICE, &addr, &len, premapped))
+			if (vring_map_one_sg(vq, sg, DMA_FROM_DEVICE, &addr, &len,
+					     premapped, attr))
 				goto unmap_release;
 
 			/* Note that we trust indirect descriptor
@@ -1487,7 +1491,8 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 					 void *data,
 					 bool premapped,
 					 gfp_t gfp,
-					 u16 id)
+					 u16 id,
+					 unsigned long attr)
 {
 	struct vring_desc_extra *extra;
 	struct vring_packed_desc *desc;
@@ -1516,7 +1521,7 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
 			if (vring_map_one_sg(vq, sg, n < out_sgs ?
 					     DMA_TO_DEVICE : DMA_FROM_DEVICE,
-					     &addr, &len, premapped))
+					     &addr, &len, premapped, attr))
 				goto unmap_release;
 
 			desc[i].flags = cpu_to_le16(n < out_sgs ?
@@ -1615,7 +1620,8 @@ static inline int virtqueue_add_packed(struct vring_virtqueue *vq,
 				       void *data,
 				       void *ctx,
 				       bool premapped,
-				       gfp_t gfp)
+				       gfp_t gfp,
+				       unsigned long attr)
 {
 	struct vring_packed_desc *desc;
 	struct scatterlist *sg;
@@ -1642,8 +1648,8 @@ static inline int virtqueue_add_packed(struct vring_virtqueue *vq,
 		id = vq->free_head;
 		BUG_ON(id == vq->packed.vring.num);
 		err = virtqueue_add_indirect_packed(vq, sgs, total_sg, out_sgs,
-						    in_sgs, data, premapped,
-						    gfp, id);
+						    in_sgs, data, premapped, gfp,
+						    id, attr);
 		if (err != -ENOMEM) {
 			END_USE(vq);
 			return err;
@@ -1679,7 +1685,7 @@ static inline int virtqueue_add_packed(struct vring_virtqueue *vq,
 
 			if (vring_map_one_sg(vq, sg, n < out_sgs ?
 					     DMA_TO_DEVICE : DMA_FROM_DEVICE,
-					     &addr, &len, premapped))
+					     &addr, &len, premapped, attr))
 				goto unmap_release;
 
 			flags = cpu_to_le16(vq->packed.avail_used_flags |
@@ -1772,7 +1778,8 @@ static inline int virtqueue_add_packed_in_order(struct vring_virtqueue *vq,
 						void *data,
 						void *ctx,
 						bool premapped,
-						gfp_t gfp)
+						gfp_t gfp,
+						unsigned long attr)
 {
 	struct vring_packed_desc *desc;
 	struct scatterlist *sg;
@@ -1799,7 +1806,8 @@ static inline int virtqueue_add_packed_in_order(struct vring_virtqueue *vq,
 	if (virtqueue_use_indirect(vq, total_sg)) {
 		err = virtqueue_add_indirect_packed(vq, sgs, total_sg, out_sgs,
 						    in_sgs, data, premapped, gfp,
-						    vq->packed.next_avail_idx);
+						    vq->packed.next_avail_idx,
+						    attr);
 		if (err != -ENOMEM) {
 			END_USE(vq);
 			return err;
@@ -1838,7 +1846,7 @@ static inline int virtqueue_add_packed_in_order(struct vring_virtqueue *vq,
 
 			if (vring_map_one_sg(vq, sg, n < out_sgs ?
 					     DMA_TO_DEVICE : DMA_FROM_DEVICE,
-					     &addr, &len, premapped))
+					     &addr, &len, premapped, attr))
 				goto unmap_release;
 
 			flags |= cpu_to_le16(vq->packed.avail_used_flags);
@@ -2781,13 +2789,14 @@ static inline int virtqueue_add(struct virtqueue *_vq,
 			  void *data,
 			  void *ctx,
 			  bool premapped,
-			  gfp_t gfp)
+			  gfp_t gfp,
+			  unsigned long attr)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
 	return VIRTQUEUE_CALL(vq, add, sgs, total_sg, out_sgs, in_sgs,
 			      data,
-			      ctx, premapped, gfp);
+			      ctx, premapped, gfp, attr);
 }
 
 /**
@@ -2825,7 +2834,7 @@ int virtqueue_add_sgs(struct virtqueue *_vq,
 			total_sg++;
 	}
 	return virtqueue_add(_vq, sgs, total_sg, out_sgs, in_sgs,
-			     data, NULL, false, gfp);
+			     data, NULL, false, gfp, 0);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_sgs);
 
@@ -2847,7 +2856,7 @@ int virtqueue_add_outbuf(struct virtqueue *vq,
 			 void *data,
 			 gfp_t gfp)
 {
-	return virtqueue_add(vq, &sg, num, 1, 0, data, NULL, false, gfp);
+	return virtqueue_add(vq, &sg, num, 1, 0, data, NULL, false, gfp, 0);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_outbuf);
 
@@ -2870,7 +2879,7 @@ int virtqueue_add_outbuf_premapped(struct virtqueue *vq,
 				   void *data,
 				   gfp_t gfp)
 {
-	return virtqueue_add(vq, &sg, num, 1, 0, data, NULL, true, gfp);
+	return virtqueue_add(vq, &sg, num, 1, 0, data, NULL, true, gfp, 0);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_outbuf_premapped);
 
@@ -2892,10 +2901,38 @@ int virtqueue_add_inbuf(struct virtqueue *vq,
 			void *data,
 			gfp_t gfp)
 {
-	return virtqueue_add(vq, &sg, num, 0, 1, data, NULL, false, gfp);
+	return virtqueue_add(vq, &sg, num, 0, 1, data, NULL, false, gfp, 0);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_inbuf);
 
+/**
+ * virtqueue_add_inbuf_cache_clean - expose input buffers with cache clean
+ * @vq: the struct virtqueue we're talking about.
+ * @sg: scatterlist (must be well-formed and terminated!)
+ * @num: the number of entries in @sg writable by other side
+ * @data: the token identifying the buffer.
+ * @gfp: how to do memory allocations (if necessary).
+ *
+ * Same as virtqueue_add_inbuf but passes DMA_ATTR_CPU_CACHE_CLEAN to indicate
+ * that the CPU will not dirty any cacheline overlapping this buffer while it
+ * is available, and to suppress overlapping cacheline warnings in DMA debug
+ * builds.
+ *
+ * Caller must ensure we don't call this with other virtqueue operations
+ * at the same time (except where noted).
+ *
+ * Returns zero or a negative error (ie. ENOSPC, ENOMEM, EIO).
+ */
+int virtqueue_add_inbuf_cache_clean(struct virtqueue *vq,
+				    struct scatterlist *sg, unsigned int num,
+				    void *data,
+				    gfp_t gfp)
+{
+	return virtqueue_add(vq, &sg, num, 0, 1, data, NULL, false, gfp,
+			     DMA_ATTR_CPU_CACHE_CLEAN);
+}
+EXPORT_SYMBOL_GPL(virtqueue_add_inbuf_cache_clean);
+
 /**
  * virtqueue_add_inbuf_ctx - expose input buffers to other end
  * @vq: the struct virtqueue we're talking about.
@@ -2916,7 +2953,7 @@ int virtqueue_add_inbuf_ctx(struct virtqueue *vq,
 			    void *ctx,
 			    gfp_t gfp)
 {
-	return virtqueue_add(vq, &sg, num, 0, 1, data, ctx, false, gfp);
+	return virtqueue_add(vq, &sg, num, 0, 1, data, ctx, false, gfp, 0);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_inbuf_ctx);
 
@@ -2941,7 +2978,7 @@ int virtqueue_add_inbuf_premapped(struct virtqueue *vq,
 				  void *ctx,
 				  gfp_t gfp)
 {
-	return virtqueue_add(vq, &sg, num, 0, 1, data, ctx, true, gfp);
+	return virtqueue_add(vq, &sg, num, 0, 1, data, ctx, true, gfp, 0);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_inbuf_premapped);
 
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 3626eb694728..63bb05ece8c5 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -62,6 +62,11 @@ int virtqueue_add_inbuf(struct virtqueue *vq,
 			void *data,
 			gfp_t gfp);
 
+int virtqueue_add_inbuf_cache_clean(struct virtqueue *vq,
+				    struct scatterlist sg[], unsigned int num,
+				    void *data,
+				    gfp_t gfp);
+
 int virtqueue_add_inbuf_ctx(struct virtqueue *vq,
 			    struct scatterlist sg[], unsigned int num,
 			    void *data,

-- 
MST
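[Editor's note: for context, a caller that owns its receive buffers end to end might use the new helper as sketched below. This is an illustrative sketch only, not part of the patch; the driver function name and buffer are hypothetical, while virtqueue_add_inbuf_cache_clean(), sg_init_one(), and virtqueue_kick() are the real kernel APIs.]

```c
/* Hypothetical driver sketch: queue a receive buffer whose cachelines
 * the CPU promises not to dirty while the device owns it, so DMA debug
 * builds need not warn about overlapping-cacheline access.
 */
static int demo_queue_rx_buf(struct virtqueue *vq, void *buf, unsigned int len)
{
	struct scatterlist sg;
	int err;

	sg_init_one(&sg, buf, len);

	/* Like virtqueue_add_inbuf(), but the mapping is done with
	 * DMA_ATTR_CPU_CACHE_CLEAN: the caller guarantees no CPU write
	 * touches a cacheline overlapping buf until the buffer is
	 * returned by virtqueue_get_buf().
	 */
	err = virtqueue_add_inbuf_cache_clean(vq, &sg, 1, buf, GFP_ATOMIC);
	if (err)
		return err;

	virtqueue_kick(vq);
	return 0;
}
```

The contract is entirely on the caller: the attribute does not change cache maintenance performed by the DMA layer, it only records the caller's promise and silences the corresponding DMA debug warning.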