From: Chao Gao
To: iommu@lists.linux.dev, linux-kernel@vger.kernel.org
Cc: Chao Gao, Christoph Hellwig, Marek Szyprowski, Robin Murphy
Subject: [RESEND PATCH 1/3] swiotlb: remove unused fields in io_tlb_mem
Date: Mon, 18 Jul 2022 09:16:05 +0800
Message-Id: <20220718011608.106289-2-chao.gao@intel.com>
In-Reply-To: <20220718011608.106289-1-chao.gao@intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Commit 20347fca71a3 ("swiotlb: split up the global swiotlb lock") splits
io_tlb_mem into multiple areas. Each area has its own lock and index. The
global ones are no longer used, so remove them.

Signed-off-by: Chao Gao
---
 include/linux/swiotlb.h | 5 -----
 kernel/dma/swiotlb.c    | 2 --
 2 files changed, 7 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index f65ff1930120..d3ae03edbbd2 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -79,11 +79,8 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t phys,
  * @used:	The number of used IO TLB block.
  * @list:	The free list describing the number of free entries available
  *		from each index.
- * @index:	The index to start searching in the next round.
  * @orig_addr:	The original address corresponding to a mapped entry.
  * @alloc_size:	Size of the allocated buffer.
- * @lock:	The lock to protect the above data structures in the map and
- *		unmap calls.
  * @debugfs:	The dentry to debugfs.
  * @late_alloc:	%true if allocated using the page allocator
  * @force_bounce: %true if swiotlb bouncing is forced
@@ -97,8 +94,6 @@ struct io_tlb_mem {
 	void *vaddr;
 	unsigned long nslabs;
 	unsigned long used;
-	unsigned int index;
-	spinlock_t lock;
 	struct dentry *debugfs;
 	bool late_alloc;
 	bool force_bounce;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index dcf1459ce723..0d0f99146360 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -253,14 +253,12 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
 	mem->nslabs = nslabs;
 	mem->start = start;
 	mem->end = mem->start + bytes;
-	mem->index = 0;
 	mem->late_alloc = late_alloc;
 	mem->nareas = nareas;
 	mem->area_nslabs = nslabs / mem->nareas;
 
 	mem->force_bounce = swiotlb_force_bounce || (flags & SWIOTLB_FORCE);
 
-	spin_lock_init(&mem->lock);
 	for (i = 0; i < mem->nareas; i++) {
 		spin_lock_init(&mem->areas[i].lock);
 		mem->areas[i].index = 0;
-- 
2.25.1
From: Chao Gao
To: iommu@lists.linux.dev, linux-kernel@vger.kernel.org
Cc: Chao Gao, Christoph Hellwig, Marek Szyprowski, Robin Murphy
Subject: [RESEND PATCH 2/3] swiotlb: consolidate rounding up default_nslabs
Date: Mon, 18 Jul 2022 09:16:06 +0800
Message-Id: <20220718011608.106289-3-chao.gao@intel.com>
In-Reply-To: <20220718011608.106289-1-chao.gao@intel.com>

default_nslabs is rounded up in two places with exactly the same comment.
Add a simple wrapper to reduce the duplicated code and comments. This is
preparation for adding more logic to the round-up.

No functional change intended.
Signed-off-by: Chao Gao
---
 kernel/dma/swiotlb.c | 34 ++++++++++++++++++----------------
 1 file changed, 18 insertions(+), 16 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 0d0f99146360..9ab87d6d47bc 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -88,6 +88,22 @@ struct io_tlb_area {
 	spinlock_t lock;
 };
 
+/*
+ * Round up number of slabs to the next power of 2. The last area is going
+ * to be smaller than the rest if default_nslabs is not a power of two.
+ *
+ * Return true if default_nslabs is rounded up.
+ */
+static bool round_up_default_nslabs(void)
+{
+	if (!default_nareas || is_power_of_2(default_nslabs))
+		return false;
+
+	default_nslabs = roundup_pow_of_two(default_nslabs);
+
+	return true;
+}
+
 static void swiotlb_adjust_nareas(unsigned int nareas)
 {
 	if (!is_power_of_2(nareas))
@@ -96,16 +112,9 @@ static void swiotlb_adjust_nareas(unsigned int nareas)
 	default_nareas = nareas;
 
 	pr_info("area num %d.\n", nareas);
-	/*
-	 * Round up number of slabs to the next power of 2.
-	 * The last area is going be smaller than the rest if
-	 * default_nslabs is not power of two.
-	 */
-	if (nareas && !is_power_of_2(default_nslabs)) {
-		default_nslabs = roundup_pow_of_two(default_nslabs);
+	if (round_up_default_nslabs())
 		pr_info("SWIOTLB bounce buffer size roundup to %luMB",
 			(default_nslabs << IO_TLB_SHIFT) >> 20);
-	}
 }
 
 static int __init
@@ -154,17 +163,10 @@ void __init swiotlb_adjust_size(unsigned long size)
 	if (default_nslabs != IO_TLB_DEFAULT_SIZE >> IO_TLB_SHIFT)
 		return;
 
-	/*
-	 * Round up number of slabs to the next power of 2.
-	 * The last area is going be smaller than the rest if
-	 * default_nslabs is not power of two.
-	 */
 	size = ALIGN(size, IO_TLB_SIZE);
 	default_nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE);
-	if (default_nareas) {
-		default_nslabs = roundup_pow_of_two(default_nslabs);
+	if (round_up_default_nslabs())
 		size = default_nslabs << IO_TLB_SHIFT;
-	}
 
 	pr_info("SWIOTLB bounce buffer size adjusted to %luMB", size >> 20);
 }
-- 
2.25.1
From: Chao Gao
To: iommu@lists.linux.dev, linux-kernel@vger.kernel.org
Cc: Chao Gao, Christoph Hellwig, Marek Szyprowski, Robin Murphy
Subject: [RESEND PATCH 3/3] swiotlb: ensure a segment doesn't cross the area boundary
Date: Mon, 18 Jul 2022 09:16:07 +0800
Message-Id: <20220718011608.106289-4-chao.gao@intel.com>
In-Reply-To: <20220718011608.106289-1-chao.gao@intel.com>

Free-slot tracking assumes that all slots in a segment can be allocated
together to fulfill a request, which implies that the slots of a segment
should belong to the same area. Although the possibility of a violation
is low, it is better to explicitly enforce that a segment never spans
multiple areas, by adjusting the number of slabs when the areas are
configured.

Signed-off-by: Chao Gao
---
 kernel/dma/swiotlb.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 9ab87d6d47bc..70fd73fc357a 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -91,12 +91,21 @@ struct io_tlb_area {
 /*
  * Round up number of slabs to the next power of 2. The last area is going
  * to be smaller than the rest if default_nslabs is not a power of two.
+ * The number of slots in an area should be a multiple of IO_TLB_SEGSIZE,
+ * otherwise a segment may span two or more areas. That conflicts with the
+ * free contiguous slot tracking: free slots are treated as contiguous no
+ * matter whether they cross an area boundary.
  *
  * Return true if default_nslabs is rounded up.
  */
 static bool round_up_default_nslabs(void)
 {
-	if (!default_nareas || is_power_of_2(default_nslabs))
+	if (!default_nareas)
+		return false;
+
+	if (default_nslabs < IO_TLB_SEGSIZE * default_nareas)
+		default_nslabs = IO_TLB_SEGSIZE * default_nareas;
+	else if (is_power_of_2(default_nslabs))
 		return false;
 
 	default_nslabs = roundup_pow_of_two(default_nslabs);
-- 
2.25.1