From: Robin Murphy
To: m.szyprowski@samsung.com, akpm@linux-foundation.org, vbabka@suse.cz,
	david@kernel.org
Cc: bhe@redhat.com, iommu@lists.linux-foundation.org, linux-mm@kvack.org,
	vladimir.kondratiev@mobileye.com, s-adivi@ti.com,
	linux-kernel@vger.kernel.org, lorenzo.stoakes@oracle.com,
	Liam.Howlett@oracle.com,
	rppt@kernel.org, surenb@google.com, mhocko@suse.com,
	jackmanb@google.com, hannes@cmpxchg.org, ziy@nvidia.com
Subject: [PATCH 3/3] dma/pool: Avoid allocating redundant pools
Date: Mon, 12 Jan 2026 15:46:38 +0000
Message-Id: <8ab8d8a620dee0109f33f5cb63d6bfeed35aac37.1768230104.git.robin.murphy@arm.com>

On smaller systems, e.g. embedded arm64, it is common for all memory to
end up in ZONE_DMA32 or even ZONE_DMA. In such cases it is redundant to
allocate a nominal pool for an empty higher zone that just ends up
coming from a lower zone that should already have its own pool anyway.
We already have logic to skip allocating a ZONE_DMA pool when that is
empty, so generalise that to save memory in the case of other zones
too.

Signed-off-by: Robin Murphy
---
 kernel/dma/pool.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 2645cfb5718b..c5da29ad010c 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -184,6 +184,12 @@ static __init struct gen_pool *__dma_atomic_pool_init(size_t pool_size,
 	return pool;
 }
 
+#ifdef CONFIG_ZONE_DMA32
+#define has_managed_dma32 has_managed_zone(ZONE_DMA32)
+#else
+#define has_managed_dma32 false
+#endif
+
 static int __init dma_atomic_pool_init(void)
 {
 	int ret = 0;
@@ -199,17 +205,20 @@ static int __init dma_atomic_pool_init(void)
 	}
 	INIT_WORK(&atomic_pool_work, atomic_pool_work_fn);
 
-	atomic_pool_kernel = __dma_atomic_pool_init(atomic_pool_size,
+	/* All memory might be in the DMA zone(s) to begin with */
+	if (has_managed_zone(ZONE_NORMAL)) {
+		atomic_pool_kernel = __dma_atomic_pool_init(atomic_pool_size,
 						    GFP_KERNEL);
-	if (!atomic_pool_kernel)
-		ret = -ENOMEM;
+		if (!atomic_pool_kernel)
+			ret = -ENOMEM;
+	}
 	if (has_managed_dma()) {
 		atomic_pool_dma = __dma_atomic_pool_init(atomic_pool_size,
						GFP_KERNEL | GFP_DMA);
 		if (!atomic_pool_dma)
 			ret = -ENOMEM;
 	}
-	if (IS_ENABLED(CONFIG_ZONE_DMA32)) {
+	if (has_managed_dma32) {
 		atomic_pool_dma32 = __dma_atomic_pool_init(atomic_pool_size,
						GFP_KERNEL | GFP_DMA32);
 		if (!atomic_pool_dma32)
@@ -228,7 +237,7 @@ static inline struct gen_pool *dma_guess_pool(struct gen_pool *prev, gfp_t gfp)
 			return atomic_pool_dma ?: atomic_pool_dma32 ?: atomic_pool_kernel;
 		if (gfp & GFP_DMA32)
 			return atomic_pool_dma32 ?: atomic_pool_dma ?: atomic_pool_kernel;
-		return atomic_pool_kernel;
+		return atomic_pool_kernel ?: atomic_pool_dma32 ?: atomic_pool_dma;
 	}
 	if (prev == atomic_pool_kernel)
 		return atomic_pool_dma32 ? atomic_pool_dma32 : atomic_pool_dma;
-- 
2.34.1