From: Chao Gao
To: iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Cc: Chao Gao, Christoph Hellwig, Marek Szyprowski, Robin Murphy
Subject: [PATCH 1/3] swiotlb: remove unused fields in io_tlb_mem
Date: Fri, 15 Jul 2022 18:45:33 +0800
Message-Id: <20220715104535.1053907-2-chao.gao@intel.com>
In-Reply-To: <20220715104535.1053907-1-chao.gao@intel.com>
References: <20220715104535.1053907-1-chao.gao@intel.com>

Commit 20347fca71a3 ("swiotlb: split up the global swiotlb lock") split
io_tlb_mem into multiple areas, each with its own lock and index. The
global ones are no longer used, so remove them.

Signed-off-by: Chao Gao
---
 include/linux/swiotlb.h | 5 -----
 kernel/dma/swiotlb.c    | 2 --
 2 files changed, 7 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index f65ff1930120..d3ae03edbbd2 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -79,11 +79,8 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t phys,
  * @used:	The number of used IO TLB block.
  * @list:	The free list describing the number of free entries available
  *		from each index.
- * @index:	The index to start searching in the next round.
  * @orig_addr:	The original address corresponding to a mapped entry.
  * @alloc_size:	Size of the allocated buffer.
- * @lock:	The lock to protect the above data structures in the map and
- *		unmap calls.
  * @debugfs:	The dentry to debugfs.
  * @late_alloc:	%true if allocated using the page allocator
  * @force_bounce: %true if swiotlb bouncing is forced
@@ -97,8 +94,6 @@ struct io_tlb_mem {
 	void *vaddr;
 	unsigned long nslabs;
 	unsigned long used;
-	unsigned int index;
-	spinlock_t lock;
 	struct dentry *debugfs;
 	bool late_alloc;
 	bool force_bounce;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index dcf1459ce723..0d0f99146360 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -253,14 +253,12 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
 	mem->nslabs = nslabs;
 	mem->start = start;
 	mem->end = mem->start + bytes;
-	mem->index = 0;
 	mem->late_alloc = late_alloc;
 	mem->nareas = nareas;
 	mem->area_nslabs = nslabs / mem->nareas;
 
 	mem->force_bounce = swiotlb_force_bounce || (flags & SWIOTLB_FORCE);
 
-	spin_lock_init(&mem->lock);
 	for (i = 0; i < mem->nareas; i++) {
 		spin_lock_init(&mem->areas[i].lock);
 		mem->areas[i].index = 0;
-- 
2.25.1
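For readers following along outside the kernel tree, a minimal userspace model may help show why the global fields are dead after the per-area split: every search index and lock now lives in a per-area struct, so initialization touches only per-area state. All names here (`tlb_mem`, `tlb_area`, `tlb_mem_init`) are illustrative stand-ins, not the kernel's types:

```c
#include <stdlib.h>

/* Simplified model of the post-20347fca71a3 layout: each area carries
 * its own lock word and search index, so a mem-wide index/lock (the
 * fields this patch removes) would have no remaining users. */
struct tlb_area {
	int lock;           /* stand-in for spinlock_t */
	unsigned int index; /* per-area search start */
};

struct tlb_mem {
	unsigned long nslabs;
	unsigned int nareas;
	unsigned long area_nslabs;
	struct tlb_area *areas;
};

/* Mirrors the shape of swiotlb_init_io_tlb_mem() after the patch:
 * only per-area state is set up. Returns 0 on success, -1 on OOM. */
int tlb_mem_init(struct tlb_mem *mem, unsigned long nslabs,
		 unsigned int nareas)
{
	unsigned int i;

	mem->areas = calloc(nareas, sizeof(*mem->areas));
	if (!mem->areas)
		return -1;
	mem->nslabs = nslabs;
	mem->nareas = nareas;
	mem->area_nslabs = nslabs / nareas;
	for (i = 0; i < nareas; i++) {
		mem->areas[i].lock = 0;
		mem->areas[i].index = 0;
	}
	return 0;
}
```

A slot search in this layout starts from `areas[i].index` under `areas[i].lock`, which is why the removed globals had become pure dead weight.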
From: Chao Gao
To: iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Cc: Chao Gao, Christoph Hellwig, Marek Szyprowski, Robin Murphy
Subject: [PATCH 2/3] swiotlb: consolidate rounding up default_nslabs
Date: Fri, 15 Jul 2022 18:45:34 +0800
Message-Id: <20220715104535.1053907-3-chao.gao@intel.com>
In-Reply-To: <20220715104535.1053907-1-chao.gao@intel.com>
References: <20220715104535.1053907-1-chao.gao@intel.com>

default_nslabs is rounded up in two places with exactly the same
comments. Add a simple wrapper to reduce the duplicated code/comments.
This prepares for adding more logic to the round-up.

No functional change intended.

Signed-off-by: Chao Gao
---
 kernel/dma/swiotlb.c | 34 ++++++++++++++++++----------------
 1 file changed, 18 insertions(+), 16 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 0d0f99146360..9ab87d6d47bc 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -88,6 +88,22 @@ struct io_tlb_area {
 	spinlock_t lock;
 };
 
+/*
+ * Round up number of slabs to the next power of 2. The last area is going
+ * be smaller than the rest if default_nslabs is not power of two.
+ *
+ * Return true if default_nslabs is rounded up.
+ */
+static bool round_up_default_nslabs(void)
+{
+	if (!default_nareas || is_power_of_2(default_nslabs))
+		return false;
+
+	default_nslabs = roundup_pow_of_two(default_nslabs);
+
+	return true;
+}
+
 static void swiotlb_adjust_nareas(unsigned int nareas)
 {
 	if (!is_power_of_2(nareas))
@@ -96,16 +112,9 @@ static void swiotlb_adjust_nareas(unsigned int nareas)
 	default_nareas = nareas;
 
 	pr_info("area num %d.\n", nareas);
-	/*
-	 * Round up number of slabs to the next power of 2.
-	 * The last area is going be smaller than the rest if
-	 * default_nslabs is not power of two.
-	 */
-	if (nareas && !is_power_of_2(default_nslabs)) {
-		default_nslabs = roundup_pow_of_two(default_nslabs);
+	if (round_up_default_nslabs())
 		pr_info("SWIOTLB bounce buffer size roundup to %luMB",
 			(default_nslabs << IO_TLB_SHIFT) >> 20);
-	}
 }
 
 static int __init
@@ -154,17 +163,10 @@ void __init swiotlb_adjust_size(unsigned long size)
 	if (default_nslabs != IO_TLB_DEFAULT_SIZE >> IO_TLB_SHIFT)
 		return;
 
-	/*
-	 * Round up number of slabs to the next power of 2.
-	 * The last area is going be smaller than the rest if
-	 * default_nslabs is not power of two.
-	 */
 	size = ALIGN(size, IO_TLB_SIZE);
 	default_nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE);
-	if (default_nareas) {
-		default_nslabs = roundup_pow_of_two(default_nslabs);
+	if (round_up_default_nslabs())
 		size = default_nslabs << IO_TLB_SHIFT;
-	}
 
 	pr_info("SWIOTLB bounce buffer size adjusted to %luMB", size >> 20);
 }
-- 
2.25.1
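The helper's decision logic can be sketched in plain C outside the kernel. Here `is_pow2`/`roundup_pow2` stand in for the kernel's `is_power_of_2`/`roundup_pow_of_two`, and the global `default_nslabs`/`default_nareas` are passed explicitly; all names are illustrative:

```c
#include <stdbool.h>

/* True iff n is a nonzero power of two. */
bool is_pow2(unsigned long n)
{
	return n && (n & (n - 1)) == 0;
}

/* Smallest power of two >= n (for n >= 1). */
unsigned long roundup_pow2(unsigned long n)
{
	unsigned long p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

/* Userspace sketch of round_up_default_nslabs(): updates *nslabs and
 * returns true only when areas are in use and the slab count is not
 * already a power of two. */
bool round_up_nslabs(unsigned long *nslabs, unsigned int nareas)
{
	if (!nareas || is_pow2(*nslabs))
		return false;

	*nslabs = roundup_pow2(*nslabs);
	return true;
}
```

For example, 96 slabs with 4 areas rounds up to 128, while 128 slabs (already a power of two) or a zero area count leaves the value untouched, exactly the two early-return conditions the kernel helper folds into one line.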
From: Chao Gao
To: iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Cc: Chao Gao, Christoph Hellwig, Marek Szyprowski, Robin Murphy
Subject: [PATCH 3/3] swiotlb: ensure a segment doesn't cross the area boundary
Date: Fri, 15 Jul 2022 18:45:35 +0800
Message-Id: <20220715104535.1053907-4-chao.gao@intel.com>
In-Reply-To: <20220715104535.1053907-1-chao.gao@intel.com>
References: <20220715104535.1053907-1-chao.gao@intel.com>

Free slots tracking assumes that slots in a segment can be allocated to
fulfill a request. This implies that slots in a segment should belong to
the same area. Although the possibility of a violation is low, it is
better to explicitly enforce that segments won't span multiple areas, by
adjusting the number of slabs when configuring areas.

Signed-off-by: Chao Gao
---
 kernel/dma/swiotlb.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 9ab87d6d47bc..70fd73fc357a 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -91,12 +91,21 @@ struct io_tlb_area {
 /*
  * Round up number of slabs to the next power of 2. The last area is going
  * be smaller than the rest if default_nslabs is not power of two.
+ * The number of slot in an area should be a multiple of IO_TLB_SEGSIZE,
+ * otherwise a segment may span two or more areas. It conflicts with free
+ * contiguous slots tracking: free slots are treated contiguous no matter
+ * whether they cross an area boundary.
  *
  * Return true if default_nslabs is rounded up.
  */
 static bool round_up_default_nslabs(void)
 {
-	if (!default_nareas || is_power_of_2(default_nslabs))
+	if (!default_nareas)
+		return false;
+
+	if (default_nslabs < IO_TLB_SEGSIZE * default_nareas)
+		default_nslabs = IO_TLB_SEGSIZE * default_nareas;
+	else if (is_power_of_2(default_nslabs))
 		return false;
 
 	default_nslabs = roundup_pow_of_two(default_nslabs);
-- 
2.25.1
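The adjusted helper can likewise be sketched in userspace C to check the invariant it is meant to enforce: after the round-up, each area holds a whole number of segment-sized chunks, so no segment straddles an area boundary. `SEGSIZE` is a fixed stand-in for `IO_TLB_SEGSIZE`, the function names are illustrative, and the invariant claim assumes nareas is a power of two (as the kernel adjusts it to be):

```c
#include <stdbool.h>

#define SEGSIZE 128UL /* stand-in for IO_TLB_SEGSIZE */

/* True iff n is a nonzero power of two. */
bool is_pow2(unsigned long n)
{
	return n && (n & (n - 1)) == 0;
}

/* Smallest power of two >= n (for n >= 1). */
unsigned long roundup_pow2(unsigned long n)
{
	unsigned long p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

/* Sketch of the patched round_up_default_nslabs(): first floor the
 * slab count at one full segment per area, then round up to a power
 * of two. With nareas itself a power of two, nslabs / nareas comes
 * out as a power-of-two multiple of SEGSIZE, so every area contains
 * whole segments only. */
bool round_up_nslabs(unsigned long *nslabs, unsigned int nareas)
{
	if (!nareas)
		return false;

	if (*nslabs < SEGSIZE * nareas)
		*nslabs = SEGSIZE * nareas;
	else if (is_pow2(*nslabs))
		return false;

	*nslabs = roundup_pow2(*nslabs);
	return true;
}
```

For instance, 100 slabs over 4 areas is first floored to 512 (one 128-slot segment per area), and 1000 slabs over 2 areas rounds up to 1024; in both cases each area's slab count divides evenly into segments.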