From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: David Hildenbrand, Alexander Potapenko, Andrew Morton, Brendan Jackman,
	Christoph Lameter, Dennis Zhou, Dmitry Vyukov,
	dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	iommu@lists.linux.dev, io-uring@vger.kernel.org, Jason Gunthorpe,
	Jens Axboe, Johannes Weiner, John Hubbard, kasan-dev@googlegroups.com,
	kvm@vger.kernel.org, "Liam R. Howlett", Linus Torvalds,
	linux-arm-kernel@axis.com, linux-arm-kernel@lists.infradead.org,
	linux-crypto@vger.kernel.org, linux-ide@vger.kernel.org,
	linux-kselftest@vger.kernel.org, linux-mips@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-mm@kvack.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, Lorenzo Stoakes, Marco Elver,
	Marek Szyprowski, Michal Hocko, Mike Rapoport, Muchun Song,
	netdev@vger.kernel.org, Oscar Salvador, Peter Xu, Robin Murphy,
	Suren Baghdasaryan, Tejun Heo, virtualization@lists.linux.dev,
	Vlastimil Babka, wireguard@lists.zx2c4.com, x86@kernel.org, Zi Yan
Subject: [PATCH RFC 21/35] mm/cma: refuse handing out non-contiguous page ranges
Date: Thu, 21 Aug 2025 22:06:47 +0200
Message-ID: <20250821200701.1329277-22-david@redhat.com>
X-Mailer: git-send-email 2.50.1
In-Reply-To: <20250821200701.1329277-1-david@redhat.com>
References: <20250821200701.1329277-1-david@redhat.com>

Let's disallow handing out PFN ranges with non-contiguous pages, so we
can remove the nth-page usage in __cma_alloc(), and so callers don't
have to worry about this corner case when blindly iterating pages.

This is really only a problem in configs with SPARSEMEM but without
SPARSEMEM_VMEMMAP, and only when we would cross memory sections in some
cases. Will this cause harm? Probably not, because it's mostly 32bit
that does not support SPARSEMEM_VMEMMAP.

If this ever becomes a problem we could look into allocating the memmap
for the memory sections spanned by a single CMA region in one go from
memblock.
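To make the corner case concrete: without SPARSEMEM_VMEMMAP, the memmap
is allocated per memory section, so pfn_to_page() conceptually behaves
like the following simplified sketch (section_memmap and
model_pfn_to_page are made-up names for illustration, not the real
implementation):

	/*
	 * Simplified model of SPARSEMEM without SPARSEMEM_VMEMMAP: every
	 * memory section has its own memmap allocation, and the memmaps
	 * of adjacent sections are not necessarily adjacent in memory.
	 */
	static struct page *section_memmap[NR_MEM_SECTIONS];

	static struct page *model_pfn_to_page(unsigned long pfn)
	{
		return section_memmap[pfn / PAGES_PER_SECTION] +
		       pfn % PAGES_PER_SECTION;
	}

So for a PFN range that crosses a section boundary,
model_pfn_to_page(pfn) + n can differ from model_pfn_to_page(pfn + n),
which is exactly the situation page_range_contiguous() detects and
cma_range_alloc() now skips.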
Signed-off-by: David Hildenbrand
Reviewed-by: Alexandru Elisei
---
 include/linux/mm.h |  6 ++++++
 mm/cma.c           | 36 +++++++++++++++++++++++-------------
 mm/util.c          | 33 +++++++++++++++++++++++++++++++++
 3 files changed, 62 insertions(+), 13 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ef360b72cb05c..f59ad1f9fc792 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -209,9 +209,15 @@ extern unsigned long sysctl_user_reserve_kbytes;
 extern unsigned long sysctl_admin_reserve_kbytes;
 
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
+bool page_range_contiguous(const struct page *page, unsigned long nr_pages);
 #define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
 #else
 #define nth_page(page,n) ((page) + (n))
+static inline bool page_range_contiguous(const struct page *page,
+		unsigned long nr_pages)
+{
+	return true;
+}
 #endif
 
 /* to align the pointer to the (next) page boundary */
diff --git a/mm/cma.c b/mm/cma.c
index 2ffa4befb99ab..1119fa2830008 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -780,10 +780,8 @@ static int cma_range_alloc(struct cma *cma, struct cma_memrange *cmr,
 		unsigned long count, unsigned int align,
 		struct page **pagep, gfp_t gfp)
 {
-	unsigned long mask, offset;
-	unsigned long pfn = -1;
-	unsigned long start = 0;
 	unsigned long bitmap_maxno, bitmap_no, bitmap_count;
+	unsigned long start, pfn, mask, offset;
 	int ret = -EBUSY;
 	struct page *page = NULL;
 
@@ -795,7 +793,7 @@ static int cma_range_alloc(struct cma *cma, struct cma_memrange *cmr,
 	if (bitmap_count > bitmap_maxno)
 		goto out;
 
-	for (;;) {
+	for (start = 0; ; start = bitmap_no + mask + 1) {
 		spin_lock_irq(&cma->lock);
 		/*
 		 * If the request is larger than the available number
@@ -812,6 +810,22 @@ static int cma_range_alloc(struct cma *cma, struct cma_memrange *cmr,
 			spin_unlock_irq(&cma->lock);
 			break;
 		}
+
+		pfn = cmr->base_pfn + (bitmap_no << cma->order_per_bit);
+		page = pfn_to_page(pfn);
+
+		/*
+		 * Do not hand out page ranges that are not contiguous, so
+		 * callers can just iterate the pages without having to worry
+		 * about these corner cases.
+		 */
+		if (!page_range_contiguous(page, count)) {
+			spin_unlock_irq(&cma->lock);
+			pr_warn_ratelimited("%s: %s: skipping incompatible area [0x%lx-0x%lx]",
+					    __func__, cma->name, pfn, pfn + count - 1);
+			continue;
+		}
+
 		bitmap_set(cmr->bitmap, bitmap_no, bitmap_count);
 		cma->available_count -= count;
 		/*
@@ -821,29 +835,25 @@ static int cma_range_alloc(struct cma *cma, struct cma_memrange *cmr,
 		 */
 		spin_unlock_irq(&cma->lock);
 
-		pfn = cmr->base_pfn + (bitmap_no << cma->order_per_bit);
 		mutex_lock(&cma->alloc_mutex);
 		ret = alloc_contig_range(pfn, pfn + count, ACR_FLAGS_CMA, gfp);
 		mutex_unlock(&cma->alloc_mutex);
-		if (ret == 0) {
-			page = pfn_to_page(pfn);
+		if (!ret)
 			break;
-		}
 
 		cma_clear_bitmap(cma, cmr, pfn, count);
 		if (ret != -EBUSY)
 			break;
 
 		pr_debug("%s(): memory range at pfn 0x%lx %p is busy, retrying\n",
-			 __func__, pfn, pfn_to_page(pfn));
+			 __func__, pfn, page);
 
 		trace_cma_alloc_busy_retry(cma->name, pfn, pfn_to_page(pfn),
 					   count, align);
-		/* try again with a bit different memory target */
-		start = bitmap_no + mask + 1;
 	}
 out:
-	*pagep = page;
+	if (!ret)
+		*pagep = page;
 	return ret;
 }
 
@@ -882,7 +892,7 @@ static struct page *__cma_alloc(struct cma *cma, unsigned long count,
 	 */
 	if (page) {
 		for (i = 0; i < count; i++)
-			page_kasan_tag_reset(nth_page(page, i));
+			page_kasan_tag_reset(page + i);
 	}
 
 	if (ret && !(gfp & __GFP_NOWARN)) {
diff --git a/mm/util.c b/mm/util.c
index d235b74f7aff7..0bf349b19b652 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -1280,4 +1280,37 @@ unsigned int folio_pte_batch(struct folio *folio, pte_t *ptep, pte_t pte,
 {
 	return folio_pte_batch_flags(folio, NULL, ptep, &pte, max_nr, 0);
 }
+
+#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
+/**
+ * page_range_contiguous - test whether the page range is contiguous
+ * @page: the start of the page range.
+ * @nr_pages: the number of pages in the range.
+ *
+ * Test whether the page range is contiguous, such that they can be iterated
+ * naively, corresponding to iterating a contiguous PFN range.
+ *
+ * This function should primarily only be used for debug checks, or when
+ * working with page ranges that are not naturally contiguous (e.g., pages
+ * within a folio are).
+ *
+ * Returns true if contiguous, otherwise false.
+ */
+bool page_range_contiguous(const struct page *page, unsigned long nr_pages)
+{
+	const unsigned long start_pfn = page_to_pfn(page);
+	const unsigned long end_pfn = start_pfn + nr_pages;
+	unsigned long pfn;
+
+	/*
+	 * The memmap is allocated per memory section. We need to check
+	 * each involved memory section once.
+	 */
+	for (pfn = ALIGN(start_pfn, PAGES_PER_SECTION);
+	     pfn < end_pfn; pfn += PAGES_PER_SECTION)
+		if (unlikely(page + (pfn - start_pfn) != pfn_to_page(pfn)))
+			return false;
+	return true;
+}
+#endif
 #endif /* CONFIG_MMU */
-- 
2.50.1
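
As an illustration of the guarantee this gives (not part of the patch),
a caller can now iterate a CMA allocation with plain pointer
arithmetic; minimal sketch, where my_prep_page() is a made-up helper:

	struct page *page = cma_alloc(cma, count, align, false);
	unsigned long i;

	/*
	 * The returned pages are guaranteed to be contiguous in the
	 * memmap, so "page + i" is always valid and nth_page() is no
	 * longer needed.
	 */
	if (page) {
		for (i = 0; i < count; i++)
			my_prep_page(page + i);
	}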