From nobody Sat May 18 21:16:03 2024
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Christoph Hellwig, Marek Szyprowski, Robin Murphy,
    Joerg Roedel, Will Deacon, "Rafael J. Wysocki", Magnus Karlsson,
    nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org,
    netdev@vger.kernel.org, iommu@lists.linux.dev, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v4 1/7] dma: compile-out DMA sync op calls when not used
Date: Tue, 23 Apr 2024 15:58:26 +0200
Message-ID: <20240423135832.2271696-2-aleksander.lobakin@intel.com>
In-Reply-To: <20240423135832.2271696-1-aleksander.lobakin@intel.com>
References: <20240423135832.2271696-1-aleksander.lobakin@intel.com>

Some platforms do have DMA, but DMA there is always direct and coherent.
Currently, even on such platforms DMA sync operations are compiled and
called.

Add a new hidden Kconfig symbol, DMA_NEED_SYNC, and set it only when
either sync operations are needed or DMA ops, swiotlb, or DMA debug is
enabled. Compile the global dma_sync_*() and dma_need_sync() only when
it's set; otherwise, provide empty inline stubs.
The change allows for future optimizations of DMA sync calls depending
on runtime conditions.

Signed-off-by: Alexander Lobakin
---
 kernel/dma/Kconfig          |  5 +++
 include/linux/dma-mapping.h | 62 ++++++++++++++++++++-----------------
 kernel/dma/mapping.c        | 22 +++++++------
 3 files changed, 50 insertions(+), 39 deletions(-)

diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index d62f5957f36b..c06e56be0ca1 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -107,6 +107,11 @@ config DMA_BOUNCE_UNALIGNED_KMALLOC
 	bool
 	depends on SWIOTLB

+config DMA_NEED_SYNC
+	def_bool ARCH_HAS_SYNC_DMA_FOR_DEVICE || ARCH_HAS_SYNC_DMA_FOR_CPU || \
+		 ARCH_HAS_SYNC_DMA_FOR_CPU_ALL || DMA_API_DEBUG || DMA_OPS || \
+		 SWIOTLB
+
 config DMA_RESTRICTED_POOL
 	bool "DMA Restricted Pool"
 	depends on OF && OF_RESERVED_MEM && SWIOTLB

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 4a658de44ee9..a569b56b25e2 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -117,14 +117,6 @@ dma_addr_t dma_map_resource(struct device *dev, phys_addr_t phys_addr,
 		size_t size, enum dma_data_direction dir, unsigned long attrs);
 void dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size,
 		enum dma_data_direction dir, unsigned long attrs);
-void dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr, size_t size,
-		enum dma_data_direction dir);
-void dma_sync_single_for_device(struct device *dev, dma_addr_t addr,
-		size_t size, enum dma_data_direction dir);
-void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
-		int nelems, enum dma_data_direction dir);
-void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
-		int nelems, enum dma_data_direction dir);
 void *dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
 		gfp_t flag, unsigned long attrs);
 void dma_free_attrs(struct device *dev, size_t size, void *cpu_addr,
@@ -147,7 +139,6 @@ u64 dma_get_required_mask(struct device *dev);
 bool dma_addressing_limited(struct device *dev);
 size_t dma_max_mapping_size(struct device *dev);
 size_t dma_opt_mapping_size(struct device *dev);
-bool dma_need_sync(struct device *dev, dma_addr_t dma_addr);
 unsigned long dma_get_merge_boundary(struct device *dev);
 struct sg_table *dma_alloc_noncontiguous(struct device *dev, size_t size,
 		enum dma_data_direction dir, gfp_t gfp, unsigned long attrs);
@@
-195,22 +186,6 @@ static inline void dma_unmap_resource(struct device *d= ev, dma_addr_t addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { } -static inline void dma_sync_single_for_cpu(struct device *dev, dma_addr_t = addr, - size_t size, enum dma_data_direction dir) -{ -} -static inline void dma_sync_single_for_device(struct device *dev, - dma_addr_t addr, size_t size, enum dma_data_direction dir) -{ -} -static inline void dma_sync_sg_for_cpu(struct device *dev, - struct scatterlist *sg, int nelems, enum dma_data_direction dir) -{ -} -static inline void dma_sync_sg_for_device(struct device *dev, - struct scatterlist *sg, int nelems, enum dma_data_direction dir) -{ -} static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_add= r) { return -ENOMEM; @@ -277,10 +252,6 @@ static inline size_t dma_opt_mapping_size(struct devic= e *dev) { return 0; } -static inline bool dma_need_sync(struct device *dev, dma_addr_t dma_addr) -{ - return false; -} static inline unsigned long dma_get_merge_boundary(struct device *dev) { return 0; @@ -310,6 +281,39 @@ static inline int dma_mmap_noncontiguous(struct device= *dev, } #endif /* CONFIG_HAS_DMA */ =20 +#if defined(CONFIG_HAS_DMA) && defined(CONFIG_DMA_NEED_SYNC) +void dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr, size_t s= ize, + enum dma_data_direction dir); +void dma_sync_single_for_device(struct device *dev, dma_addr_t addr, + size_t size, enum dma_data_direction dir); +void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, + int nelems, enum dma_data_direction dir); +void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, + int nelems, enum dma_data_direction dir); +bool dma_need_sync(struct device *dev, dma_addr_t dma_addr); +#else /* !CONFIG_HAS_DMA || !CONFIG_DMA_NEED_SYNC */ +static inline void dma_sync_single_for_cpu(struct device *dev, dma_addr_t = addr, + size_t size, enum dma_data_direction dir) +{ +} +static inline void dma_sync_single_for_device(struct device *dev, + dma_addr_t addr, size_t size, enum dma_data_direction dir) +{ +} +static inline void dma_sync_sg_for_cpu(struct device *dev, + struct scatterlist *sg, int nelems, enum dma_data_direction dir) +{ +} +static inline void dma_sync_sg_for_device(struct device *dev, + struct scatterlist *sg, int nelems, enum dma_data_direction dir) +{ +} +static inline bool dma_need_sync(struct device *dev, dma_addr_t dma_addr) +{ + return false; +} +#endif /* !CONFIG_HAS_DMA || !CONFIG_DMA_NEED_SYNC */ + struct page *dma_alloc_pages(struct device *dev, size_t size, dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp); void dma_free_pages(struct device *dev, size_t size, struct page *page, diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index 58db8fd70471..c78b78e95a26 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -329,6 +329,7 @@ void dma_unmap_resource(struct device *dev, dma_addr_t = addr, size_t size, } EXPORT_SYMBOL(dma_unmap_resource); =20 +#ifdef CONFIG_DMA_NEED_SYNC void dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr, size_t s= ize, enum dma_data_direction dir) { @@ -385,6 +386,17 @@ void dma_sync_sg_for_device(struct device *dev, struct= scatterlist *sg, } EXPORT_SYMBOL(dma_sync_sg_for_device); =20 +bool dma_need_sync(struct device *dev, dma_addr_t dma_addr) +{ + const struct dma_map_ops *ops =3D get_dma_ops(dev); + + if (dma_map_direct(dev, ops)) + return dma_direct_need_sync(dev, dma_addr); + return ops->sync_single_for_cpu || ops->sync_single_for_device; +} 
+EXPORT_SYMBOL_GPL(dma_need_sync);
+#endif /* CONFIG_DMA_NEED_SYNC */
+
 /*
  * The whole dma_get_sgtable() idea is fundamentally unsafe - it seems
  * that the intention is to allow exporting memory allocated via the
@@ -841,16 +853,6 @@ size_t dma_opt_mapping_size(struct device *dev)
 }
 EXPORT_SYMBOL_GPL(dma_opt_mapping_size);

-bool dma_need_sync(struct device *dev, dma_addr_t dma_addr)
-{
-	const struct dma_map_ops *ops = get_dma_ops(dev);
-
-	if (dma_map_direct(dev, ops))
-		return dma_direct_need_sync(dev, dma_addr);
-	return ops->sync_single_for_cpu || ops->sync_single_for_device;
-}
-EXPORT_SYMBOL_GPL(dma_need_sync);
-
 unsigned long dma_get_merge_boundary(struct device *dev)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
--
2.44.0
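The compile-out pattern patch 1 relies on can be sketched in a few lines of
standalone C. All names here (EXAMPLE_NEED_SYNC, example_sync_for_cpu) are
invented for illustration and are not the kernel's; the point is only that,
with the config symbol off, the helper becomes an empty static inline stub
and every call site compiles away.

#include <stdio.h>

/* stand-in for the hidden Kconfig symbol; comment it out to get the stubs */
#define EXAMPLE_NEED_SYNC 1

#ifdef EXAMPLE_NEED_SYNC
static void example_sync_for_cpu(const void *buf, unsigned long size)
{
        /* real cache maintenance / bounce-buffer work would live here */
        (void)buf;
        printf("syncing %lu bytes\n", size);
}
#else
static inline void example_sync_for_cpu(const void *buf, unsigned long size)
{
        /* empty stub: the compiler drops the call entirely */
        (void)buf;
        (void)size;
}
#endif

int main(void)
{
        char buf[64] = { 0 };

        example_sync_for_cpu(buf, sizeof(buf));
        return 0;
}

Building this once with and once without the define shows the call vanishing
from the generated code, which is the effect the empty dma_sync_*() stubs
have on kernels without DMA_NEED_SYNC.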
From nobody Sat May 18 21:16:03 2024
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Christoph Hellwig, Marek Szyprowski, Robin Murphy,
    Joerg Roedel, Will Deacon, "Rafael J. Wysocki", Magnus Karlsson,
    nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org,
    netdev@vger.kernel.org, iommu@lists.linux.dev, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v4 2/7] dma: avoid redundant calls for sync operations
Date: Tue, 23 Apr 2024 15:58:27 +0200
Message-ID: <20240423135832.2271696-3-aleksander.lobakin@intel.com>
In-Reply-To: <20240423135832.2271696-1-aleksander.lobakin@intel.com>
References: <20240423135832.2271696-1-aleksander.lobakin@intel.com>

Quite often, devices do not need dma_sync operations, on x86_64 at least.
Indeed, when dev_is_dma_coherent(dev) is true and dev_use_swiotlb(dev) is
false, iommu_dma_sync_single_for_cpu() and friends do nothing.

However, indirectly calling them when CONFIG_RETPOLINE=y consumes about
10% of cycles on a CPU receiving packets from softirq at ~100Gbit rate.
Even when CONFIG_RETPOLINE is not set, there is a cost of about 3%.

Add a dev->dma_need_sync boolean and turn it off during the device
initialization (dma_set_mask()) depending on the setup:
dev_is_dma_coherent() for direct DMA, !(sync_single_for_device ||
sync_single_for_cpu), or the new dma_map_ops flag, %DMA_F_CAN_SKIP_SYNC,
advertised for non-NULL DMA ops.
Then later, if/when swiotlb is used for the first time, the flag is set
back on from swiotlb_tbl_map_single().

On iavf, the UDP trafficgen test with XDP_DROP in skb mode shows a +3-5%
increase for direct DMA.

Suggested-by: Christoph Hellwig # direct DMA shortcut
Co-developed-by: Eric Dumazet
Signed-off-by: Eric Dumazet
Signed-off-by: Alexander Lobakin
---
 include/linux/device.h      |  4 +++
 include/linux/dma-map-ops.h | 12 ++++++++
 include/linux/dma-mapping.h | 53 +++++++++++++++++++++++++++++----
 kernel/dma/mapping.c        | 55 +++++++++++++++++++++++++++++--------
 kernel/dma/swiotlb.c        |  6 ++++
 5 files changed, 113 insertions(+), 17 deletions(-)

diff --git a/include/linux/device.h b/include/linux/device.h
index b9f5464f44ed..ed95b829f05b 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -691,6 +691,7 @@ struct device_physical_location {
 *		and optionall (if the coherent mask is large enough) also
 *		for dma allocations. This flag is managed by the dma ops
 *		instance from ->dma_supported.
+ * @dma_need_sync: The device needs performing DMA sync operations.
 *
 * At the lowest level, every device in a Linux system is represented by an
 * instance of struct device.
The device structure contains the information @@ -803,6 +804,9 @@ struct device { #ifdef CONFIG_DMA_OPS_BYPASS bool dma_ops_bypass : 1; #endif +#ifdef CONFIG_DMA_NEED_SYNC + bool dma_need_sync:1; +#endif }; =20 /** diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h index 4abc60f04209..4893cb89cb52 100644 --- a/include/linux/dma-map-ops.h +++ b/include/linux/dma-map-ops.h @@ -18,8 +18,11 @@ struct iommu_ops; * * DMA_F_PCI_P2PDMA_SUPPORTED: Indicates the dma_map_ops implementation can * handle PCI P2PDMA pages in the map_sg/unmap_sg operation. + * DMA_F_CAN_SKIP_SYNC: DMA sync operations can be skipped if the device is + * coherent and it's not an SWIOTLB buffer. */ #define DMA_F_PCI_P2PDMA_SUPPORTED (1 << 0) +#define DMA_F_CAN_SKIP_SYNC (1 << 1) =20 struct dma_map_ops { unsigned int flags; @@ -273,6 +276,15 @@ static inline bool dev_is_dma_coherent(struct device *= dev) } #endif /* CONFIG_ARCH_HAS_DMA_COHERENCE_H */ =20 +static inline void dma_reset_need_sync(struct device *dev) +{ +#ifdef CONFIG_DMA_NEED_SYNC + /* Reset it only once so that the function can be called on hotpath */ + if (unlikely(!dev->dma_need_sync)) + dev->dma_need_sync =3D true; +#endif +} + /* * Check whether potential kmalloc() buffers are safe for non-coherent DMA. */ diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index a569b56b25e2..eb4e15893b6c 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -282,16 +282,59 @@ static inline int dma_mmap_noncontiguous(struct devic= e *dev, #endif /* CONFIG_HAS_DMA */ =20 #if defined(CONFIG_HAS_DMA) && defined(CONFIG_DMA_NEED_SYNC) -void dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr, size_t s= ize, +void __dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr, size_t= size, enum dma_data_direction dir); -void dma_sync_single_for_device(struct device *dev, dma_addr_t addr, +void __dma_sync_single_for_device(struct device *dev, dma_addr_t addr, size_t size, enum dma_data_direction dir); -void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, +void __dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems, enum dma_data_direction dir); -void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, +void __dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems, enum dma_data_direction dir); -bool dma_need_sync(struct device *dev, dma_addr_t dma_addr); +bool __dma_need_sync(struct device *dev, dma_addr_t dma_addr); + +static inline bool dma_dev_need_sync(const struct device *dev) +{ + /* Always call DMA sync operations when debugging is enabled */ + return dev->dma_need_sync || IS_ENABLED(CONFIG_DMA_API_DEBUG); +} + +static inline void dma_sync_single_for_cpu(struct device *dev, dma_addr_t = addr, + size_t size, enum dma_data_direction dir) +{ + if (dma_dev_need_sync(dev)) + __dma_sync_single_for_cpu(dev, addr, size, dir); +} + +static inline void dma_sync_single_for_device(struct device *dev, + dma_addr_t addr, size_t size, enum dma_data_direction dir) +{ + if (dma_dev_need_sync(dev)) + __dma_sync_single_for_device(dev, addr, size, dir); +} + +static inline void dma_sync_sg_for_cpu(struct device *dev, + struct scatterlist *sg, int nelems, enum dma_data_direction dir) +{ + if (dma_dev_need_sync(dev)) + __dma_sync_sg_for_cpu(dev, sg, nelems, dir); +} + +static inline void dma_sync_sg_for_device(struct device *dev, + struct scatterlist *sg, int nelems, enum dma_data_direction dir) +{ + if (dma_dev_need_sync(dev)) + 
__dma_sync_sg_for_device(dev, sg, nelems, dir); +} + +static inline bool dma_need_sync(struct device *dev, dma_addr_t dma_addr) +{ + return dma_dev_need_sync(dev) ? __dma_need_sync(dev, dma_addr) : false; +} #else /* !CONFIG_HAS_DMA || !CONFIG_DMA_NEED_SYNC */ +static inline bool dma_dev_need_sync(const struct device *dev) +{ + return false; +} static inline void dma_sync_single_for_cpu(struct device *dev, dma_addr_t = addr, size_t size, enum dma_data_direction dir) { diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index c78b78e95a26..3524bc92c37f 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -330,7 +330,7 @@ void dma_unmap_resource(struct device *dev, dma_addr_t = addr, size_t size, EXPORT_SYMBOL(dma_unmap_resource); =20 #ifdef CONFIG_DMA_NEED_SYNC -void dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr, size_t s= ize, +void __dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr, size_t= size, enum dma_data_direction dir) { const struct dma_map_ops *ops =3D get_dma_ops(dev); @@ -342,9 +342,9 @@ void dma_sync_single_for_cpu(struct device *dev, dma_ad= dr_t addr, size_t size, ops->sync_single_for_cpu(dev, addr, size, dir); debug_dma_sync_single_for_cpu(dev, addr, size, dir); } -EXPORT_SYMBOL(dma_sync_single_for_cpu); +EXPORT_SYMBOL(__dma_sync_single_for_cpu); =20 -void dma_sync_single_for_device(struct device *dev, dma_addr_t addr, +void __dma_sync_single_for_device(struct device *dev, dma_addr_t addr, size_t size, enum dma_data_direction dir) { const struct dma_map_ops *ops =3D get_dma_ops(dev); @@ -356,9 +356,9 @@ void dma_sync_single_for_device(struct device *dev, dma= _addr_t addr, ops->sync_single_for_device(dev, addr, size, dir); debug_dma_sync_single_for_device(dev, addr, size, dir); } -EXPORT_SYMBOL(dma_sync_single_for_device); +EXPORT_SYMBOL(__dma_sync_single_for_device); =20 -void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, +void __dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems, enum dma_data_direction dir) { const struct dma_map_ops *ops =3D get_dma_ops(dev); @@ -370,9 +370,9 @@ void dma_sync_sg_for_cpu(struct device *dev, struct sca= tterlist *sg, ops->sync_sg_for_cpu(dev, sg, nelems, dir); debug_dma_sync_sg_for_cpu(dev, sg, nelems, dir); } -EXPORT_SYMBOL(dma_sync_sg_for_cpu); +EXPORT_SYMBOL(__dma_sync_sg_for_cpu); =20 -void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, +void __dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems, enum dma_data_direction dir) { const struct dma_map_ops *ops =3D get_dma_ops(dev); @@ -384,18 +384,47 @@ void dma_sync_sg_for_device(struct device *dev, struc= t scatterlist *sg, ops->sync_sg_for_device(dev, sg, nelems, dir); debug_dma_sync_sg_for_device(dev, sg, nelems, dir); } -EXPORT_SYMBOL(dma_sync_sg_for_device); +EXPORT_SYMBOL(__dma_sync_sg_for_device); =20 -bool dma_need_sync(struct device *dev, dma_addr_t dma_addr) +bool __dma_need_sync(struct device *dev, dma_addr_t dma_addr) { const struct dma_map_ops *ops =3D get_dma_ops(dev); =20 if (dma_map_direct(dev, ops)) + /* + * dma_need_sync could've been reset on first SWIOTLB buffer + * mapping, but @dma_addr is not necessary an SWIOTLB buffer. + * In this case, fall back to more granular check. 
+ */ return dma_direct_need_sync(dev, dma_addr); - return ops->sync_single_for_cpu || ops->sync_single_for_device; + return true; } -EXPORT_SYMBOL_GPL(dma_need_sync); -#endif /* CONFIG_DMA_NEED_SYNC */ +EXPORT_SYMBOL_GPL(__dma_need_sync); + +static void dma_setup_need_sync(struct device *dev) +{ + const struct dma_map_ops *ops =3D get_dma_ops(dev); + + if (dma_map_direct(dev, ops) || (ops->flags & DMA_F_CAN_SKIP_SYNC)) + /* + * dma_need_sync will be reset to %true on first SWIOTLB buffer + * mapping, if any. During the device initialization, it's + * enough to check only for the DMA coherence. + */ + dev->dma_need_sync =3D !dev_is_dma_coherent(dev); + else if (!ops->sync_single_for_device && !ops->sync_single_for_cpu && + !ops->sync_sg_for_device && !ops->sync_sg_for_cpu) + /* + * Synchronization is not possible when none of DMA sync ops + * is set. + */ + dev->dma_need_sync =3D false; + else + dev->dma_need_sync =3D true; +} +#else /* !CONFIG_DMA_NEED_SYNC */ +static inline void dma_setup_need_sync(struct device *dev) { } +#endif /* !CONFIG_DMA_NEED_SYNC */ =20 /* * The whole dma_get_sgtable() idea is fundamentally unsafe - it seems @@ -785,6 +814,8 @@ int dma_set_mask(struct device *dev, u64 mask) =20 arch_dma_set_mask(dev, mask); *dev->dma_mask =3D mask; + dma_setup_need_sync(dev); + return 0; } EXPORT_SYMBOL(dma_set_mask); diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index a5e0dfc44d24..3b9dddbcdda7 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -1379,6 +1379,12 @@ phys_addr_t swiotlb_tbl_map_single(struct device *de= v, phys_addr_t orig_addr, return (phys_addr_t)DMA_MAPPING_ERROR; } =20 + /* + * If dma_need_sync wasn't set, reset it on first SWIOTLB buffer + * mapping to always sync SWIOTLB buffers. + */ + dma_reset_need_sync(dev); + /* * Save away the mapping from the original address to the DMA address. * This is needed when we sync the memory. 
Then we sync the buffer if
--
2.44.0
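The effect of the per-device flag added in patch 2 can be modelled in
standalone C. The struct and function names below are invented and only
mirror the idea, not the kernel's API: the hot path tests one boolean
inline and only then takes the expensive out-of-line call, while the first
bounce-buffer mapping arms the flag again.

#include <stdbool.h>
#include <stdio.h>

struct model_dev {
        bool dma_need_sync;     /* decided once at init, re-armed by bounce buffers */
};

static void model_slow_sync(struct model_dev *dev, unsigned long addr,
                            unsigned long len)
{
        /* stands in for the out-of-line, possibly indirect sync call */
        (void)dev;
        printf("sync %lu bytes at %#lx\n", len, addr);
}

static inline void model_sync_for_cpu(struct model_dev *dev, unsigned long addr,
                                      unsigned long len)
{
        if (dev->dma_need_sync)         /* one predictable branch on the hot path */
                model_slow_sync(dev, addr, len);
}

static inline void model_map_bounce_buffer(struct model_dev *dev)
{
        /* mirrors the swiotlb hook: bounce buffers always need syncing */
        if (!dev->dma_need_sync)
                dev->dma_need_sync = true;
}

int main(void)
{
        struct model_dev dev = { .dma_need_sync = false };      /* coherent device */

        model_sync_for_cpu(&dev, 0x1000, 256);  /* skipped entirely */
        model_map_bounce_buffer(&dev);          /* first bounce-buffer mapping */
        model_sync_for_cpu(&dev, 0x1000, 256);  /* now goes through the slow path */
        return 0;
}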
From nobody Sat May 18 21:16:03 2024
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Christoph Hellwig, Marek Szyprowski, Robin Murphy,
    Joerg Roedel, Will Deacon, "Rafael J. Wysocki", Magnus Karlsson,
    nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org,
    netdev@vger.kernel.org, iommu@lists.linux.dev, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v4 3/7] iommu/dma: avoid expensive indirect calls for sync operations
Date: Tue, 23 Apr 2024 15:58:28 +0200
Message-ID: <20240423135832.2271696-4-aleksander.lobakin@intel.com>
In-Reply-To: <20240423135832.2271696-1-aleksander.lobakin@intel.com>
References: <20240423135832.2271696-1-aleksander.lobakin@intel.com>

When IOMMU is on, the actual synchronization happens in the same cases
as with direct DMA. Advertise %DMA_F_CAN_SKIP_SYNC in IOMMU DMA to skip
the indirect sync op calls for non-SWIOTLB buffers.

perf profile before the patch:

      18.53%  [kernel]  [k] gq_rx_skb
      14.77%  [kernel]  [k] napi_reuse_skb
       8.95%  [kernel]  [k] skb_release_data
       5.42%  [kernel]  [k] dev_gro_receive
       5.37%  [kernel]  [k] memcpy
  <*>  5.26%  [kernel]  [k] iommu_dma_sync_sg_for_cpu
       4.78%  [kernel]  [k] tcp_gro_receive
  <*>  4.42%  [kernel]  [k] iommu_dma_sync_sg_for_device
       4.12%  [kernel]  [k] ipv6_gro_receive
       3.65%  [kernel]  [k] gq_pool_get
       3.25%  [kernel]  [k] skb_gro_receive
       2.07%  [kernel]  [k] napi_gro_frags
       1.98%  [kernel]  [k] tcp6_gro_receive
       1.27%  [kernel]  [k] gq_rx_prep_buffers
       1.18%  [kernel]  [k] gq_rx_napi_handler
       0.99%  [kernel]  [k] csum_partial
       0.74%  [kernel]  [k] csum_ipv6_magic
       0.72%  [kernel]  [k] free_pcp_prepare
       0.60%  [kernel]  [k] __napi_poll
       0.58%  [kernel]  [k] net_rx_action
       0.56%  [kernel]  [k] read_tsc
  <*>  0.50%  [kernel]  [k] __x86_indirect_thunk_r11
       0.45%  [kernel]  [k] memset

After the patch, the lines marked with <*> no longer show up, and the
overall CPU usage looks much better (~60% instead of ~72%):

      25.56%  [kernel]  [k] gq_rx_skb
       9.90%  [kernel]  [k] napi_reuse_skb
       7.39%  [kernel]  [k] dev_gro_receive
       6.78%  [kernel]  [k] memcpy
       6.53%  [kernel]  [k] skb_release_data
       6.39%  [kernel]  [k] tcp_gro_receive
       5.71%  [kernel]  [k] ipv6_gro_receive
       4.35%  [kernel]  [k] napi_gro_frags
       4.34%  [kernel]  [k] skb_gro_receive
       3.50%  [kernel]  [k] gq_pool_get
       3.08%  [kernel]  [k] gq_rx_napi_handler
       2.35%  [kernel]  [k] tcp6_gro_receive
       2.06%  [kernel]  [k] gq_rx_prep_buffers
       1.32%  [kernel]  [k] csum_partial
       0.93%  [kernel]  [k] csum_ipv6_magic
       0.65%  [kernel]  [k] net_rx_action

On iavf, this yields +10% of Mpps on Rx. It also unblocks batched
allocations of XSk buffers when IOMMU is active.
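A rough standalone model of this opt-in (names are placeholders, not the
kernel's dma_map_ops API): the ops table advertises a capability bit, and
device setup leaves syncing enabled only when the bit is missing or the
device is not coherent.

#include <stdbool.h>
#include <stdio.h>

#define MODEL_F_CAN_SKIP_SYNC   (1u << 1)

struct model_ops {
        unsigned int flags;
};

struct model_dev {
        const struct model_ops *ops;
        bool coherent;
        bool dma_need_sync;
};

static void model_setup_need_sync(struct model_dev *dev)
{
        if (dev->ops->flags & MODEL_F_CAN_SKIP_SYNC)
                /* capable ops: only non-coherent devices keep syncing */
                dev->dma_need_sync = !dev->coherent;
        else
                /* stay conservative for ops that did not opt in */
                dev->dma_need_sync = true;
}

int main(void)
{
        static const struct model_ops capable = { .flags = MODEL_F_CAN_SKIP_SYNC };
        struct model_dev dev = { .ops = &capable, .coherent = true };

        model_setup_need_sync(&dev);
        printf("dma_need_sync = %d\n", dev.dma_need_sync);      /* prints 0 */
        return 0;
}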
Co-developed-by: Eric Dumazet
Signed-off-by: Eric Dumazet
Acked-by: Robin Murphy
Signed-off-by: Alexander Lobakin
---
 drivers/iommu/dma-iommu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index e4cb26f6a943..0516e3e859b5 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1720,7 +1720,8 @@ static size_t iommu_dma_max_mapping_size(struct device *dev)
 }

 static const struct dma_map_ops iommu_dma_ops = {
-	.flags			= DMA_F_PCI_P2PDMA_SUPPORTED,
+	.flags			= DMA_F_PCI_P2PDMA_SUPPORTED |
+				  DMA_F_CAN_SKIP_SYNC,
 	.alloc			= iommu_dma_alloc,
 	.free			= iommu_dma_free,
 	.alloc_pages		= dma_common_alloc_pages,
--
2.44.0
From nobody Sat May 18 21:16:03 2024
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Christoph Hellwig, Marek Szyprowski, Robin Murphy,
    Joerg Roedel, Will Deacon, "Rafael J. Wysocki", Magnus Karlsson,
    nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org,
    netdev@vger.kernel.org, iommu@lists.linux.dev, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v4 4/7] page_pool: make sure frag API fields don't span between cachelines
Date: Tue, 23 Apr 2024 15:58:29 +0200
Message-ID: <20240423135832.2271696-5-aleksander.lobakin@intel.com>
In-Reply-To: <20240423135832.2271696-1-aleksander.lobakin@intel.com>
References: <20240423135832.2271696-1-aleksander.lobakin@intel.com>

After commit 5027ec19f104 ("net: page_pool: split the page_pool_params
into fast and slow") made &page_pool contain only "hot" params at the
start, a cacheline boundary chops the frag API fields group in the
middle again.

To avoid fiddling with this each time the fast params get expanded or
shrunk, just align the frag fields to `4 * sizeof(long)`, the closest
upper pow-2 to their actual size (2 longs + 1 int). This ensures 16-byte
alignment on 32-bit architectures and 32-byte alignment on 64-bit ones,
avoiding unnecessary false sharing.

::pages_state_hold_cnt is used quite intensively on the hotpath no
matter whether the frag API is used, so move it to the newly created
hole in the first cacheline.
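The alignment argument above can be checked outside the kernel with plain
C11; the struct below is only an illustration with made-up field names, not
&page_pool itself. Aligning the group to 4 * sizeof(long) while keeping it
no larger than that guarantees it can never straddle a cacheline boundary,
and the static asserts play the role of the CACHELINE_ASSERT_GROUP_*()
checks.

#include <stdalign.h>
#include <stddef.h>
#include <stdio.h>

struct model_pool {
        unsigned int hold_cnt;          /* hot counter moved up into the hole */

        /* group aligned so it cannot be split by a cacheline boundary */
        alignas(4 * sizeof(long)) long frag_users;
        void *frag_page;
        unsigned int frag_offset;
};

/* the group starts on a 4 * sizeof(long) boundary ... */
_Static_assert(offsetof(struct model_pool, frag_users) % (4 * sizeof(long)) == 0,
               "frag group is misaligned");
/* ... and fits inside one such unit, so it spans at most one cacheline */
_Static_assert(offsetof(struct model_pool, frag_offset) + sizeof(unsigned int) -
               offsetof(struct model_pool, frag_users) <= 4 * sizeof(long),
               "frag group too large");

int main(void)
{
        printf("frag group starts at offset %zu\n",
               offsetof(struct model_pool, frag_users));
        return 0;
}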
Signed-off-by: Alexander Lobakin
---
 include/net/page_pool/types.h | 12 +++++++++++-
 net/core/page_pool.c          | 10 ++++++++++
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index a6ebed002216..548321f7c49d 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -130,12 +130,22 @@ struct page_pool {
 	struct page_pool_params_fast p;

 	int cpuid;
+	u32 pages_state_hold_cnt;
 	bool has_init_callback;

+	/* The following block must stay within one cacheline. On 32-bit
+	 * systems, sizeof(long) == sizeof(int), so that the block size is
+	 * ``3 * sizeof(long)``. On 64-bit systems, the actual size is
+	 * ``2 * sizeof(long) + sizeof(int)``. The closest pow-2 to both of
+	 * them is ``4 * sizeof(long)``, so just use that one for simplicity.
+	 * Having it aligned to a cacheline boundary may be excessive and
+	 * doesn't bring any good.
+	 */
+	__cacheline_group_begin(frag) __aligned(4 * sizeof(long));
 	long frag_users;
 	struct page *frag_page;
 	unsigned int frag_offset;
-	u32 pages_state_hold_cnt;
+	__cacheline_group_end(frag);

 	struct delayed_work release_dw;
 	void (*disconnect)(void *pool);

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 273c24429bce..35c9d61853c8 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -172,12 +172,22 @@ static void page_pool_producer_unlock(struct page_pool *pool,
 	spin_unlock_bh(&pool->ring.producer_lock);
 }

+static void page_pool_struct_check(void)
+{
+	CACHELINE_ASSERT_GROUP_MEMBER(struct page_pool, frag, frag_users);
+	CACHELINE_ASSERT_GROUP_MEMBER(struct page_pool, frag, frag_page);
+	CACHELINE_ASSERT_GROUP_MEMBER(struct page_pool, frag, frag_offset);
+	CACHELINE_ASSERT_GROUP_SIZE(struct page_pool, frag, 4 * sizeof(long));
+}
+
 static int page_pool_init(struct page_pool *pool,
 			  const struct page_pool_params *params,
 			  int cpuid)
 {
 	unsigned int ring_qsize = 1024; /* Default */

+	page_pool_struct_check();
+
 	memcpy(&pool->p, &params->fast, sizeof(pool->p));
 	memcpy(&pool->slow, &params->slow, sizeof(pool->slow));

--
2.44.0
From nobody Sat May 18 21:16:03 2024
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Christoph Hellwig, Marek Szyprowski, Robin Murphy,
    Joerg Roedel, Will Deacon, "Rafael J. Wysocki", Magnus Karlsson,
    nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org,
    netdev@vger.kernel.org, iommu@lists.linux.dev, linux-kernel@vger.kernel.org,
    Alexander Duyck
Subject: [PATCH net-next v4 5/7] page_pool: don't use driver-set flags field directly
Date: Tue, 23 Apr 2024 15:58:30 +0200
Message-ID: <20240423135832.2271696-6-aleksander.lobakin@intel.com>
In-Reply-To: <20240423135832.2271696-1-aleksander.lobakin@intel.com>
References: <20240423135832.2271696-1-aleksander.lobakin@intel.com>

page_pool::p holds the driver-defined params, copied directly from the
structure passed to page_pool_create(). The structure isn't meant to be
modified by the Page Pool core code, and this might even look
confusing[0][1].

In order to be able to alter some flags, let's define our own internal
fields the same way as the already existing one (::has_init_callback).
They are defined as bits in the driver-set params, so keep them as bits
here as well to avoid wasting a whole byte per flag. Almost 30 bits are
still free for future extensions.

We could've defined only the new flags here, or only the ones we may
need to alter, but checking some flags in one place and others in
another doesn't sound convenient or intuitive.

::flags passed by the driver can now go to the "slow" PP params.
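A standalone sketch of this approach, with names invented purely for
illustration: the driver-visible flags word is parsed once at init time
into internal one-bit booleans, and only those booleans are consulted
afterwards.

#include <stdbool.h>
#include <stdio.h>

#define MODEL_FLAG_DMA_MAP      (1u << 0)
#define MODEL_FLAG_DMA_SYNC     (1u << 1)

struct model_params {
        unsigned int flags;     /* set by the driver, never modified afterwards */
};

struct model_pool {
        bool dma_map:1;         /* internal copies, free to tweak later */
        bool dma_sync:1;
};

static int model_pool_init(struct model_pool *pool, const struct model_params *p)
{
        if ((p->flags & MODEL_FLAG_DMA_SYNC) && !(p->flags & MODEL_FLAG_DMA_MAP))
                return -1;      /* sync without mapping makes no sense */

        pool->dma_map = p->flags & MODEL_FLAG_DMA_MAP;
        pool->dma_sync = p->flags & MODEL_FLAG_DMA_SYNC;
        return 0;
}

int main(void)
{
        struct model_params params = { .flags = MODEL_FLAG_DMA_MAP | MODEL_FLAG_DMA_SYNC };
        struct model_pool pool;

        if (!model_pool_init(&pool, &params))
                printf("map=%d sync=%d\n", pool.dma_map, pool.dma_sync);
        return 0;
}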
Suggested-by: Jakub Kicinski Link[0]: https://lore.kernel.org/netdev/20230703133207.4f0c54ce@kernel.org Suggested-by: Alexander Duyck Link[1]: https://lore.kernel.org/netdev/CAKgT0UfZCGnWgOH96E4GV3ZP6LLbROHM7S= HE8NKwq+exX+Gk_Q@mail.gmail.com Signed-off-by: Alexander Lobakin --- include/net/page_pool/types.h | 13 ++++++++--- net/core/page_pool.c | 41 +++++++++++++++++++---------------- 2 files changed, 32 insertions(+), 22 deletions(-) diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h index 548321f7c49d..b088d131aeb0 100644 --- a/include/net/page_pool/types.h +++ b/include/net/page_pool/types.h @@ -45,7 +45,6 @@ struct pp_alloc_cache { =20 /** * struct page_pool_params - page pool parameters - * @flags: PP_FLAG_DMA_MAP, PP_FLAG_DMA_SYNC_DEV * @order: 2^order pages on allocation * @pool_size: size of the ptr_ring * @nid: NUMA node id to allocate from pages from @@ -55,10 +54,11 @@ struct pp_alloc_cache { * @dma_dir: DMA mapping direction * @max_len: max DMA sync memory size for PP_FLAG_DMA_SYNC_DEV * @offset: DMA sync address offset for PP_FLAG_DMA_SYNC_DEV + * @netdev: corresponding &net_device for Netlink introspection + * @flags: PP_FLAG_DMA_MAP, PP_FLAG_DMA_SYNC_DEV, PP_FLAG_SYSTEM_POOL */ struct page_pool_params { struct_group_tagged(page_pool_params_fast, fast, - unsigned int flags; unsigned int order; unsigned int pool_size; int nid; @@ -70,6 +70,7 @@ struct page_pool_params { ); struct_group_tagged(page_pool_params_slow, slow, struct net_device *netdev; + unsigned int flags; /* private: used by test code only */ void (*init_callback)(struct page *page, void *arg); void *init_arg; @@ -131,7 +132,13 @@ struct page_pool { =20 int cpuid; u32 pages_state_hold_cnt; - bool has_init_callback; + + bool has_init_callback:1; /* slow::init_callback is set */ + bool dma_map:1; /* Perform DMA mapping */ + bool dma_sync:1; /* Perform DMA sync */ +#ifdef CONFIG_PAGE_POOL_STATS + bool system:1; /* This is a global percpu pool */ +#endif =20 /* The following block must stay within one cacheline. On 32-bit * systems, sizeof(long) =3D=3D sizeof(int), so that the block size is diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 35c9d61853c8..6cf26a68fa91 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -194,7 +194,7 @@ static int page_pool_init(struct page_pool *pool, pool->cpuid =3D cpuid; =20 /* Validate only known flags were used */ - if (pool->p.flags & ~(PP_FLAG_ALL)) + if (pool->slow.flags & ~PP_FLAG_ALL) return -EINVAL; =20 if (pool->p.pool_size) @@ -208,22 +208,26 @@ static int page_pool_init(struct page_pool *pool, * DMA_BIDIRECTIONAL is for allowing page used for DMA sending, * which is the XDP_TX use-case. 
*/ - if (pool->p.flags & PP_FLAG_DMA_MAP) { + if (pool->slow.flags & PP_FLAG_DMA_MAP) { if ((pool->p.dma_dir !=3D DMA_FROM_DEVICE) && (pool->p.dma_dir !=3D DMA_BIDIRECTIONAL)) return -EINVAL; + + pool->dma_map =3D true; } =20 - if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) { + if (pool->slow.flags & PP_FLAG_DMA_SYNC_DEV) { /* In order to request DMA-sync-for-device the page * needs to be mapped */ - if (!(pool->p.flags & PP_FLAG_DMA_MAP)) + if (!(pool->slow.flags & PP_FLAG_DMA_MAP)) return -EINVAL; =20 if (!pool->p.max_len) return -EINVAL; =20 + pool->dma_sync =3D true; + /* pool->p.offset has to be set according to the address * offset used by the DMA engine to start copying rx data */ @@ -232,7 +236,7 @@ static int page_pool_init(struct page_pool *pool, pool->has_init_callback =3D !!pool->slow.init_callback; =20 #ifdef CONFIG_PAGE_POOL_STATS - if (!(pool->p.flags & PP_FLAG_SYSTEM_POOL)) { + if (!(pool->slow.flags & PP_FLAG_SYSTEM_POOL)) { pool->recycle_stats =3D alloc_percpu(struct page_pool_recycle_stats); if (!pool->recycle_stats) return -ENOMEM; @@ -242,12 +246,13 @@ static int page_pool_init(struct page_pool *pool, * (also percpu) page pool instance. */ pool->recycle_stats =3D &pp_system_recycle_stats; + pool->system =3D true; } #endif =20 if (ptr_ring_init(&pool->ring, ring_qsize, GFP_KERNEL) < 0) { #ifdef CONFIG_PAGE_POOL_STATS - if (!(pool->p.flags & PP_FLAG_SYSTEM_POOL)) + if (!pool->system) free_percpu(pool->recycle_stats); #endif return -ENOMEM; @@ -258,7 +263,7 @@ static int page_pool_init(struct page_pool *pool, /* Driver calling page_pool_create() also call page_pool_destroy() */ refcount_set(&pool->user_cnt, 1); =20 - if (pool->p.flags & PP_FLAG_DMA_MAP) + if (pool->dma_map) get_device(pool->p.dev); =20 return 0; @@ -268,11 +273,11 @@ static void page_pool_uninit(struct page_pool *pool) { ptr_ring_cleanup(&pool->ring, NULL); =20 - if (pool->p.flags & PP_FLAG_DMA_MAP) + if (pool->dma_map) put_device(pool->p.dev); =20 #ifdef CONFIG_PAGE_POOL_STATS - if (!(pool->p.flags & PP_FLAG_SYSTEM_POOL)) + if (!pool->system) free_percpu(pool->recycle_stats); #endif } @@ -424,7 +429,7 @@ static bool page_pool_dma_map(struct page_pool *pool, s= truct page *page) if (page_pool_set_dma_addr(page, dma)) goto unmap_failed; =20 - if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) + if (pool->dma_sync) page_pool_dma_sync_for_device(pool, page, pool->p.max_len); =20 return true; @@ -470,8 +475,7 @@ static struct page *__page_pool_alloc_page_order(struct= page_pool *pool, if (unlikely(!page)) return NULL; =20 - if ((pool->p.flags & PP_FLAG_DMA_MAP) && - unlikely(!page_pool_dma_map(pool, page))) { + if (pool->dma_map && unlikely(!page_pool_dma_map(pool, page))) { put_page(page); return NULL; } @@ -491,8 +495,8 @@ static struct page *__page_pool_alloc_pages_slow(struct= page_pool *pool, gfp_t gfp) { const int bulk =3D PP_ALLOC_CACHE_REFILL; - unsigned int pp_flags =3D pool->p.flags; unsigned int pp_order =3D pool->p.order; + bool dma_map =3D pool->dma_map; struct page *page; int i, nr_pages; =20 @@ -517,8 +521,7 @@ static struct page *__page_pool_alloc_pages_slow(struct= page_pool *pool, */ for (i =3D 0; i < nr_pages; i++) { page =3D pool->alloc.cache[i]; - if ((pp_flags & PP_FLAG_DMA_MAP) && - unlikely(!page_pool_dma_map(pool, page))) { + if (dma_map && unlikely(!page_pool_dma_map(pool, page))) { put_page(page); continue; } @@ -590,7 +593,7 @@ void __page_pool_release_page_dma(struct page_pool *poo= l, struct page *page) { dma_addr_t dma; =20 - if (!(pool->p.flags & PP_FLAG_DMA_MAP)) + if (!pool->dma_map) /* 
Always account for inflight pages, even if we didn't * map them */ @@ -673,7 +676,7 @@ static bool __page_pool_page_can_be_recycled(const stru= ct page *page) } =20 /* If the page refcnt =3D=3D 1, this will try to recycle the page. - * if PP_FLAG_DMA_SYNC_DEV is set, we'll try to sync the DMA area for + * If pool->dma_sync is set, we'll try to sync the DMA area for * the configured size min(dma_sync_size, pool->max_len). * If the page refcnt !=3D 1, then the page will be returned to memory * subsystem. @@ -696,7 +699,7 @@ __page_pool_put_page(struct page_pool *pool, struct pag= e *page, if (likely(__page_pool_page_can_be_recycled(page))) { /* Read barrier done in page_ref_count / READ_ONCE */ =20 - if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) + if (pool->dma_sync) page_pool_dma_sync_for_device(pool, page, dma_sync_size); =20 @@ -837,7 +840,7 @@ static struct page *page_pool_drain_frag(struct page_po= ol *pool, return NULL; =20 if (__page_pool_page_can_be_recycled(page)) { - if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) + if (pool->dma_sync) page_pool_dma_sync_for_device(pool, page, -1); =20 return page; --=20 2.44.0 From nobody Sat May 18 21:16:03 2024 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.10]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D035F13C9A4; Tue, 23 Apr 2024 13:59:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.10 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1713880763; cv=none; b=NWs2m6WFf90RwbllA+jAnFtFNAFHbriidglZp6dXHYXV+7WFTs3LkeWmyczKrvAfiVX5pOx4mggr3xk8iaMh7zsxrGC5sFXT172C42srjvR5+cN7A/iX5aPq/4RCFBNiPWsEGAx6dqBXNFvU7IPdftH4Nsn6Ux/EDXtiQ8GYnrs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1713880763; c=relaxed/simple; bh=/kkHgP++5WmB+SKjIXakdZj5fr6XaWKzHwhooynuFYM=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=UgpxhBSof9+UCl/7v8H/N+IrIWQQ/mGH4dcp5C0FvD2oZWPDxmZK3DxY3icr0fi0vl/UaVYc316p5NRlvJyzxCVKV9atIFStKuwdZqSDWuA/02F74cwiKoOJkn+vTjY/m524cx0iIPdHR8AW96RdrxOgH7IbLYO0ZY3oAGy+VwI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=M2MUQ+3l; arc=none smtp.client-ip=198.175.65.10 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="M2MUQ+3l" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1713880762; x=1745416762; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=/kkHgP++5WmB+SKjIXakdZj5fr6XaWKzHwhooynuFYM=; b=M2MUQ+3lkkCflPOwWCUJ+jmD4Q7gnPTHkysc48KhbdIAVPjXx6zP6Ouy m6kQm0wLbSNJnilqC580dDyWoSthGCh8Wpo0wSaP9SvPLYgMzHKXO5neU 1jy4q1UYRv/l+zWXTtDjAmGR9D477XvVhxnc6DRcmyH/PWsrOB3a4m8g5 Zr2GjREzhYgX6mQLBOXhMfLyuhYxwFw1Qzg5bNBd3xEfLQIlYhNzJ/jyh CuPgDj338ZR+s5j2X4is8Q40h8tQk3x4MBHw0ASaafqnwFAAmpKWytTc3 3uBoFhKNIHJ7SA5ghzgvd11BG6myYFfR4vsE2MqN3r3pjizKBPO2MK7gn A==; X-CSE-ConnectionGUID: JtLSyKymRqWn6a3HaZKHyw== X-CSE-MsgGUID: JKuLJ21TQVu1WQN2SWtDRA== X-IronPort-AV: 
E=McAfee;i="6600,9927,11053"; a="26921563" X-IronPort-AV: E=Sophos;i="6.07,222,1708416000"; d="scan'208";a="26921563" Received: from fmviesa008.fm.intel.com ([10.60.135.148]) by orvoesa102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Apr 2024 06:59:14 -0700 X-CSE-ConnectionGUID: lkVA5DDYRPGD/ekFDj17gw== X-CSE-MsgGUID: BzZEYJTbQ0qBakydqpjSGw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.07,222,1708416000"; d="scan'208";a="24431890" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmviesa008.fm.intel.com with ESMTP; 23 Apr 2024 06:59:10 -0700 From: Alexander Lobakin To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Christoph Hellwig , Marek Szyprowski , Robin Murphy , Joerg Roedel , Will Deacon , "Rafael J. Wysocki" , Magnus Karlsson , nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, iommu@lists.linux.dev, linux-kernel@vger.kernel.org Subject: [PATCH net-next v4 6/7] page_pool: check for DMA sync shortcut earlier Date: Tue, 23 Apr 2024 15:58:31 +0200 Message-ID: <20240423135832.2271696-7-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240423135832.2271696-1-aleksander.lobakin@intel.com> References: <20240423135832.2271696-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" We can save a couple more function calls in the Page Pool code if we check for dma_need_sync() earlier, just when we test pp->p.dma_sync. Move both these checks into an inline wrapper and call the PP wrapper over the generic DMA sync function only when both are true. You can't cache the result of dma_need_sync() in &page_pool, as it may change anytime if an SWIOTLB buffer is allocated or mapped. 
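The wrapper described above can be modelled in standalone C with invented
names: both cheap conditions are tested in an inline helper, so the
out-of-line sync routine is entered only when the pool asked for syncing
and the device actually needs it.

#include <stdbool.h>
#include <stdio.h>

struct model_dev  { bool dma_need_sync; };
struct model_pool { struct model_dev *dev; bool dma_sync; };

static void __model_sync_for_device(const struct model_pool *pool,
                                    unsigned long addr, unsigned int len)
{
        (void)pool;
        printf("sync %u bytes at %#lx\n", len, addr);
}

static inline void model_sync_for_device(const struct model_pool *pool,
                                         unsigned long addr, unsigned int len)
{
        /* both tests are plain loads plus branches, no indirect calls */
        if (pool->dma_sync && pool->dev->dma_need_sync)
                __model_sync_for_device(pool, addr, len);
}

int main(void)
{
        struct model_dev dev = { .dma_need_sync = false };
        struct model_pool pool = { .dev = &dev, .dma_sync = true };

        model_sync_for_device(&pool, 0x3000, 128);      /* skipped: device is coherent */
        dev.dma_need_sync = true;
        model_sync_for_device(&pool, 0x3000, 128);      /* now hits the slow path */
        return 0;
}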
Signed-off-by: Alexander Lobakin
---
 net/core/page_pool.c | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 6cf26a68fa91..87319c6365e0 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -398,16 +398,24 @@ static struct page *__page_pool_get_cached(struct page_pool *pool)
 	return page;
 }
 
-static void page_pool_dma_sync_for_device(const struct page_pool *pool,
-					   const struct page *page,
-					   unsigned int dma_sync_size)
+static void __page_pool_dma_sync_for_device(const struct page_pool *pool,
+					    const struct page *page,
+					    u32 dma_sync_size)
 {
 	dma_addr_t dma_addr = page_pool_get_dma_addr(page);
 
 	dma_sync_size = min(dma_sync_size, pool->p.max_len);
-	dma_sync_single_range_for_device(pool->p.dev, dma_addr,
-					 pool->p.offset, dma_sync_size,
-					 pool->p.dma_dir);
+	__dma_sync_single_for_device(pool->p.dev, dma_addr + pool->p.offset,
+				     dma_sync_size, pool->p.dma_dir);
+}
+
+static __always_inline void
+page_pool_dma_sync_for_device(const struct page_pool *pool,
+			      const struct page *page,
+			      u32 dma_sync_size)
+{
+	if (pool->dma_sync && dma_dev_need_sync(pool->p.dev))
+		__page_pool_dma_sync_for_device(pool, page, dma_sync_size);
 }
 
 static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
@@ -429,8 +437,7 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
 	if (page_pool_set_dma_addr(page, dma))
 		goto unmap_failed;
 
-	if (pool->dma_sync)
-		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
+	page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
 
 	return true;
 
@@ -699,9 +706,7 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
 	if (likely(__page_pool_page_can_be_recycled(page))) {
 		/* Read barrier done in page_ref_count / READ_ONCE */
 
-		if (pool->dma_sync)
-			page_pool_dma_sync_for_device(pool, page,
-						      dma_sync_size);
+		page_pool_dma_sync_for_device(pool, page, dma_sync_size);
 
 		if (allow_direct && page_pool_recycle_in_cache(page, pool))
 			return NULL;
@@ -840,9 +845,7 @@ static struct page *page_pool_drain_frag(struct page_pool *pool,
 		return NULL;
 
 	if (__page_pool_page_can_be_recycled(page)) {
-		if (pool->dma_sync)
-			page_pool_dma_sync_for_device(pool, page, -1);
-
+		page_pool_dma_sync_for_device(pool, page, -1);
 		return page;
 	}
 
-- 
2.44.0

From nobody Sat May 18 21:16:03 2024
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Christoph Hellwig, Marek Szyprowski, Robin Murphy,
    Joerg Roedel, Will Deacon, "Rafael J. Wysocki", Magnus Karlsson,
    nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org,
    netdev@vger.kernel.org, iommu@lists.linux.dev,
    linux-kernel@vger.kernel.org
Subject: [PATCH net-next v4 7/7] xsk: use generic DMA sync shortcut instead
 of a custom one
Date: Tue, 23 Apr 2024 15:58:32 +0200
Message-ID: <20240423135832.2271696-8-aleksander.lobakin@intel.com>
In-Reply-To: <20240423135832.2271696-1-aleksander.lobakin@intel.com>
References: <20240423135832.2271696-1-aleksander.lobakin@intel.com>

The XSk infra has been using its own DMA sync shortcut to try to avoid
redundant function calls. Now that there is a generic one, remove the
custom implementation and rely on the generic helpers.

xsk_buff_dma_sync_for_cpu() doesn't need the second argument anymore;
remove it.
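For illustration (mirroring the driver hunks below, variable names are
just examples), a typical Rx call site only loses the pool argument:

	/* before: callers had to pass the pool for the custom shortcut */
	xsk_buff_set_size(xdp, size);
	xsk_buff_dma_sync_for_cpu(xdp, rx_ring->xsk_pool);

	/* after: the shortcut is taken inside the generic DMA helpers,
	 * so the wrapper needs only the buffer
	 */
	xsk_buff_set_size(xdp, size);
	xsk_buff_dma_sync_for_cpu(xdp);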
Signed-off-by: Alexander Lobakin
---
 include/net/xdp_sock_drv.h                    |  7 ++---
 include/net/xsk_buff_pool.h                   | 14 +++-------
 drivers/net/ethernet/engleder/tsnep_main.c    |  2 +-
 .../net/ethernet/freescale/dpaa2/dpaa2-xsk.c  |  2 +-
 drivers/net/ethernet/intel/i40e/i40e_xsk.c    |  2 +-
 drivers/net/ethernet/intel/ice/ice_xsk.c      |  2 +-
 drivers/net/ethernet/intel/igc/igc_main.c     |  2 +-
 drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c  |  2 +-
 .../ethernet/mellanox/mlx5/core/en/xsk/rx.c   |  4 +--
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   |  2 +-
 drivers/net/ethernet/netronome/nfp/nfd3/xsk.c |  2 +-
 .../net/ethernet/stmicro/stmmac/stmmac_main.c |  2 +-
 net/xdp/xsk_buff_pool.c                       | 28 ++-----------------
 13 files changed, 20 insertions(+), 51 deletions(-)

diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h
index c9aec9ab6191..0a5dca2b2b3f 100644
--- a/include/net/xdp_sock_drv.h
+++ b/include/net/xdp_sock_drv.h
@@ -219,13 +219,10 @@ static inline struct xsk_tx_metadata *xsk_buff_get_metadata(struct xsk_buff_pool
 	return meta;
 }
 
-static inline void xsk_buff_dma_sync_for_cpu(struct xdp_buff *xdp, struct xsk_buff_pool *pool)
+static inline void xsk_buff_dma_sync_for_cpu(struct xdp_buff *xdp)
 {
 	struct xdp_buff_xsk *xskb = container_of(xdp, struct xdp_buff_xsk, xdp);
 
-	if (!pool->dma_need_sync)
-		return;
-
 	xp_dma_sync_for_cpu(xskb);
 }
 
@@ -402,7 +399,7 @@ static inline struct xsk_tx_metadata *xsk_buff_get_metadata(struct xsk_buff_pool
 	return NULL;
 }
 
-static inline void xsk_buff_dma_sync_for_cpu(struct xdp_buff *xdp, struct xsk_buff_pool *pool)
+static inline void xsk_buff_dma_sync_for_cpu(struct xdp_buff *xdp)
 {
 }
 
diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
index 99dd7376df6a..bacb33f1e3e5 100644
--- a/include/net/xsk_buff_pool.h
+++ b/include/net/xsk_buff_pool.h
@@ -43,7 +43,6 @@ struct xsk_dma_map {
 	refcount_t users;
 	struct list_head list; /* Protected by the RTNL_LOCK */
 	u32 dma_pages_cnt;
-	bool dma_need_sync;
 };
 
 struct xsk_buff_pool {
@@ -82,7 +81,6 @@ struct xsk_buff_pool {
 	u8 tx_metadata_len; /* inherited from umem */
 	u8 cached_need_wakeup;
 	bool uses_need_wakeup;
-	bool dma_need_sync;
 	bool unaligned;
 	bool tx_sw_csum;
 	void *addrs;
@@ -155,21 +153,17 @@ static inline dma_addr_t xp_get_frame_dma(struct xdp_buff_xsk *xskb)
 	return xskb->frame_dma;
 }
 
-void xp_dma_sync_for_cpu_slow(struct xdp_buff_xsk *xskb);
 static inline void xp_dma_sync_for_cpu(struct xdp_buff_xsk *xskb)
 {
-	xp_dma_sync_for_cpu_slow(xskb);
+	dma_sync_single_for_cpu(xskb->pool->dev, xskb->dma,
+				xskb->pool->frame_len,
+				DMA_BIDIRECTIONAL);
 }
 
-void xp_dma_sync_for_device_slow(struct xsk_buff_pool *pool, dma_addr_t dma,
-				 size_t size);
 static inline void xp_dma_sync_for_device(struct xsk_buff_pool *pool,
 					  dma_addr_t dma, size_t size)
 {
-	if (!pool->dma_need_sync)
-		return;
-
-	xp_dma_sync_for_device_slow(pool, dma, size);
+	dma_sync_single_for_device(pool->dev, dma, size, DMA_BIDIRECTIONAL);
 }
 
 /* Masks for xdp_umem_page flags.
diff --git a/drivers/net/ethernet/engleder/tsnep_main.c b/drivers/net/ethernet/engleder/tsnep_main.c
index 4b15af6b7122..44da335d66bd 100644
--- a/drivers/net/ethernet/engleder/tsnep_main.c
+++ b/drivers/net/ethernet/engleder/tsnep_main.c
@@ -1587,7 +1587,7 @@ static int tsnep_rx_poll_zc(struct tsnep_rx *rx, struct napi_struct *napi,
 			length = __le32_to_cpu(entry->desc_wb->properties) &
 				 TSNEP_DESC_LENGTH_MASK;
 			xsk_buff_set_size(entry->xdp, length - ETH_FCS_LEN);
-			xsk_buff_dma_sync_for_cpu(entry->xdp, rx->xsk_pool);
+			xsk_buff_dma_sync_for_cpu(entry->xdp);
 
 			/* RX metadata with timestamps is in front of actual data,
 			 * subtract metadata size to get length of actual data and
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-xsk.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-xsk.c
index 051748b997f3..a466c2379146 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-xsk.c
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-xsk.c
@@ -55,7 +55,7 @@ static u32 dpaa2_xsk_run_xdp(struct dpaa2_eth_priv *priv,
 	xdp_set_data_meta_invalid(xdp_buff);
 	xdp_buff->rxq = &ch->xdp_rxq;
 
-	xsk_buff_dma_sync_for_cpu(xdp_buff, ch->xsk_pool);
+	xsk_buff_dma_sync_for_cpu(xdp_buff);
 	xdp_act = bpf_prog_run_xdp(xdp_prog, xdp_buff);
 
 	/* xdp.data pointer may have changed */
diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
index a85b425794df..4e885df789ef 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
@@ -482,7 +482,7 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 
 		bi = *i40e_rx_bi(rx_ring, next_to_process);
 		xsk_buff_set_size(bi, size);
-		xsk_buff_dma_sync_for_cpu(bi, rx_ring->xsk_pool);
+		xsk_buff_dma_sync_for_cpu(bi);
 
 		if (!first)
 			first = bi;
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index aa81d1162b81..7541f223bf4f 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -878,7 +878,7 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
 			       ICE_RX_FLX_DESC_PKT_LEN_M;
 
 		xsk_buff_set_size(xdp, size);
-		xsk_buff_dma_sync_for_cpu(xdp, xsk_pool);
+		xsk_buff_dma_sync_for_cpu(xdp);
 
 		if (!first) {
 			first = xdp;
diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
index d9bd001af7ba..303404752deb 100644
--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -2812,7 +2812,7 @@ static int igc_clean_rx_irq_zc(struct igc_q_vector *q_vector, const int budget)
 		}
 
 		bi->xdp->data_end = bi->xdp->data + size;
-		xsk_buff_dma_sync_for_cpu(bi->xdp, ring->xsk_pool);
+		xsk_buff_dma_sync_for_cpu(bi->xdp);
 
 		res = __igc_xdp_run_prog(adapter, prog, bi->xdp);
 		switch (res) {
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
index 397cb773fabb..3e3b471e53f0 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
@@ -303,7 +303,7 @@ int ixgbe_clean_rx_irq_zc(struct ixgbe_q_vector *q_vector,
 		}
 
 		bi->xdp->data_end = bi->xdp->data + size;
-		xsk_buff_dma_sync_for_cpu(bi->xdp, rx_ring->xsk_pool);
+		xsk_buff_dma_sync_for_cpu(bi->xdp);
 		xdp_res = ixgbe_run_xdp_zc(adapter, rx_ring, bi->xdp);
 
 		if (likely(xdp_res & (IXGBE_XDP_TX | IXGBE_XDP_REDIR))) {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
index b8dd74453655..1b7132fa70de 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
@@ -270,7 +270,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 	/* mxbuf->rq is set on allocation, but cqe is per-packet so set it here */
 	mxbuf->cqe = cqe;
 	xsk_buff_set_size(&mxbuf->xdp, cqe_bcnt);
-	xsk_buff_dma_sync_for_cpu(&mxbuf->xdp, rq->xsk_pool);
+	xsk_buff_dma_sync_for_cpu(&mxbuf->xdp);
 	net_prefetch(mxbuf->xdp.data);
 
 	/* Possible flows:
@@ -319,7 +319,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
 	/* mxbuf->rq is set on allocation, but cqe is per-packet so set it here */
 	mxbuf->cqe = cqe;
 	xsk_buff_set_size(&mxbuf->xdp, cqe_bcnt);
-	xsk_buff_dma_sync_for_cpu(&mxbuf->xdp, rq->xsk_pool);
+	xsk_buff_dma_sync_for_cpu(&mxbuf->xdp);
 	net_prefetch(mxbuf->xdp.data);
 
 	prog = rcu_dereference(rq->xdp_prog);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index d601b5faaed5..b5333da20e8a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -917,7 +917,7 @@ INDIRECT_CALLABLE_SCOPE bool mlx5e_post_rx_wqes(struct mlx5e_rq *rq)
 
 	if (!rq->xsk_pool) {
 		count = mlx5e_refill_rx_wqes(rq, head, wqe_bulk);
-	} else if (likely(!rq->xsk_pool->dma_need_sync)) {
+	} else if (likely(!dma_dev_need_sync(rq->pdev))) {
 		mlx5e_xsk_free_rx_wqes(rq, head, wqe_bulk);
 		count = mlx5e_xsk_alloc_rx_wqes_batched(rq, head, wqe_bulk);
 	} else {
diff --git a/drivers/net/ethernet/netronome/nfp/nfd3/xsk.c b/drivers/net/ethernet/netronome/nfp/nfd3/xsk.c
index 45be6954d5aa..01cfa9cc1b5e 100644
--- a/drivers/net/ethernet/netronome/nfp/nfd3/xsk.c
+++ b/drivers/net/ethernet/netronome/nfp/nfd3/xsk.c
@@ -184,7 +184,7 @@ nfp_nfd3_xsk_rx(struct nfp_net_rx_ring *rx_ring, int budget,
 		xrxbuf->xdp->data += meta_len;
 		xrxbuf->xdp->data_end = xrxbuf->xdp->data + pkt_len;
 		xdp_set_data_meta_invalid(xrxbuf->xdp);
-		xsk_buff_dma_sync_for_cpu(xrxbuf->xdp, r_vec->xsk_pool);
+		xsk_buff_dma_sync_for_cpu(xrxbuf->xdp);
 		net_prefetch(xrxbuf->xdp->data);
 
 		if (meta_len) {
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 2ea9f0fa0cf9..cfda1a0956e4 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -5354,7 +5354,7 @@ static int stmmac_rx_zc(struct stmmac_priv *priv, int limit, u32 queue)
 
 		/* RX buffer is good and fit into a XSK pool buffer */
 		buf->xdp->data_end = buf->xdp->data + buf1_len;
-		xsk_buff_dma_sync_for_cpu(buf->xdp, rx_q->xsk_pool);
+		xsk_buff_dma_sync_for_cpu(buf->xdp);
 
 		prog = READ_ONCE(priv->xdp_prog);
 		res = __stmmac_xdp_run_prog(priv, prog, buf->xdp);
diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index ce60ecd48a4d..b2cce6dbe6d8 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -338,7 +338,6 @@ static struct xsk_dma_map *xp_create_dma_map(struct device *dev, struct net_devi
 
 	dma_map->netdev = netdev;
 	dma_map->dev = dev;
-	dma_map->dma_need_sync = false;
 	dma_map->dma_pages_cnt = nr_pages;
 	refcount_set(&dma_map->users, 1);
 	list_add(&dma_map->list, &umem->xsk_dma_list);
@@ -424,7 +423,6 @@ static int xp_init_dma_info(struct xsk_buff_pool *pool, struct xsk_dma_map *dma_
 
 	pool->dev = dma_map->dev;
 	pool->dma_pages_cnt = dma_map->dma_pages_cnt;
-	pool->dma_need_sync = dma_map->dma_need_sync;
 	memcpy(pool->dma_pages, dma_map->dma_pages,
 	       pool->dma_pages_cnt * sizeof(*pool->dma_pages));
 
@@ -460,8 +458,6 @@ int xp_dma_map(struct xsk_buff_pool *pool, struct device *dev,
 			__xp_dma_unmap(dma_map, attrs);
 			return -ENOMEM;
 		}
-		if (dma_need_sync(dev, dma))
-			dma_map->dma_need_sync = true;
 		dma_map->dma_pages[i] = dma;
 	}
 
@@ -557,11 +553,8 @@ struct xdp_buff *xp_alloc(struct xsk_buff_pool *pool)
 	xskb->xdp.data_meta = xskb->xdp.data;
 	xskb->xdp.flags = 0;
 
-	if (pool->dma_need_sync) {
-		dma_sync_single_range_for_device(pool->dev, xskb->dma, 0,
-						 pool->frame_len,
-						 DMA_BIDIRECTIONAL);
-	}
+	xp_dma_sync_for_device(pool, xskb->dma, pool->frame_len);
+
 	return &xskb->xdp;
 }
 EXPORT_SYMBOL(xp_alloc);
@@ -633,7 +626,7 @@ u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
 {
 	u32 nb_entries1 = 0, nb_entries2;
 
-	if (unlikely(pool->dma_need_sync)) {
+	if (unlikely(dma_dev_need_sync(pool->dev))) {
 		struct xdp_buff *buff;
 
 		/* Slow path */
@@ -693,18 +686,3 @@ dma_addr_t xp_raw_get_dma(struct xsk_buff_pool *pool, u64 addr)
 	       (addr & ~PAGE_MASK);
 }
 EXPORT_SYMBOL(xp_raw_get_dma);
-
-void xp_dma_sync_for_cpu_slow(struct xdp_buff_xsk *xskb)
-{
-	dma_sync_single_range_for_cpu(xskb->pool->dev, xskb->dma, 0,
-				      xskb->pool->frame_len, DMA_BIDIRECTIONAL);
-}
-EXPORT_SYMBOL(xp_dma_sync_for_cpu_slow);
-
-void xp_dma_sync_for_device_slow(struct xsk_buff_pool *pool, dma_addr_t dma,
-				 size_t size)
-{
-	dma_sync_single_range_for_device(pool->dev, dma, 0,
-					 size, DMA_BIDIRECTIONAL);
-}
-EXPORT_SYMBOL(xp_dma_sync_for_device_slow);
-- 
2.44.0