From nobody Fri Dec 19 16:07:21 2025
Date: Fri, 28 Feb 2025 18:29:02 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
References: <20250228182928.2645936-1-fvdl@google.com>
Message-ID: <20250228182928.2645936-2-fvdl@google.com>
Subject: [PATCH v5 01/27] mm/cma: export total and free number of pages for CMA areas
From: Frank van der Linden
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com,
 roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com,
 Frank van der Linden, Oscar Salvador

In addition to the number of allocations and releases, system
management software may want to know the size of CMA areas, and how
many pages are available in them. This information is currently not
available, so export it in total_pages and available_pages,
respectively.

The name 'available_pages' was picked over 'free_pages' because
'free' implies that the pages are unused. But they might not be; they
just haven't been allocated through cma_alloc.

The number of available pages is tracked regardless of
CONFIG_CMA_SYSFS, allowing for a few minor shortcuts in the code,
avoiding bitmap operations.

Reviewed-by: Oscar Salvador
Signed-off-by: Frank van der Linden
---
 Documentation/ABI/testing/sysfs-kernel-mm-cma | 13 +++++++++++
 mm/cma.c                                      | 22 ++++++++++++++-----
 mm/cma.h                                      |  1 +
 mm/cma_debug.c                                |  5 +----
 mm/cma_sysfs.c                                | 20 +++++++++++++++++
 5 files changed, 51 insertions(+), 10 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-kernel-mm-cma b/Documentation/ABI/testing/sysfs-kernel-mm-cma
index dfd755201142..aaf2a5d8b13b 100644
--- a/Documentation/ABI/testing/sysfs-kernel-mm-cma
+++ b/Documentation/ABI/testing/sysfs-kernel-mm-cma
@@ -29,3 +29,16 @@ Date: Feb 2024
 Contact: Anshuman Khandual
 Description:
 		the number of pages CMA API succeeded to release
+
+What:		/sys/kernel/mm/cma/<cma-heap-name>/total_pages
+Date:		Jun 2024
+Contact:	Frank van der Linden
+Description:
+		The size of the CMA area in pages.
+
+What:		/sys/kernel/mm/cma/<cma-heap-name>/available_pages
+Date:		Jun 2024
+Contact:	Frank van der Linden
+Description:
+		The number of pages in the CMA area that are still
+		available for CMA allocation.
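The two attributes documented above behave like any other read-only
sysfs file. As an illustration only (not part of the patch), a minimal
userspace reader; the area name "hugetlb0" is a hypothetical example,
actual names vary per system:

/* Illustrative only: read the new CMA counters from sysfs. */
#include <stdio.h>

int main(void)
{
	/* "hugetlb0" is a made-up CMA area name for this sketch. */
	static const char *files[] = {
		"/sys/kernel/mm/cma/hugetlb0/total_pages",
		"/sys/kernel/mm/cma/hugetlb0/available_pages",
	};
	unsigned long val;

	for (int i = 0; i < 2; i++) {
		FILE *f = fopen(files[i], "r");

		if (!f) {
			perror(files[i]);
			return 1;
		}
		if (fscanf(f, "%lu", &val) == 1)
			printf("%s: %lu\n", files[i], val);
		fclose(f);
	}
	return 0;
}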
diff --git a/mm/cma.c b/mm/cma.c
index de5bc0c81fc2..95a8788e54d3 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -86,6 +86,7 @@ static void cma_clear_bitmap(struct cma *cma, unsigned long pfn,
 
 	spin_lock_irqsave(&cma->lock, flags);
 	bitmap_clear(cma->bitmap, bitmap_no, bitmap_count);
+	cma->available_count += count;
 	spin_unlock_irqrestore(&cma->lock, flags);
 }
 
@@ -133,7 +134,7 @@ static void __init cma_activate_area(struct cma *cma)
 			free_reserved_page(pfn_to_page(pfn));
 	}
 	totalcma_pages -= cma->count;
-	cma->count = 0;
+	cma->available_count = cma->count = 0;
 	pr_err("CMA area %s could not be activated\n", cma->name);
 }
 
@@ -206,7 +207,7 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 		snprintf(cma->name, CMA_MAX_NAME, "cma%d\n", cma_area_count);
 
 	cma->base_pfn = PFN_DOWN(base);
-	cma->count = size >> PAGE_SHIFT;
+	cma->available_count = cma->count = size >> PAGE_SHIFT;
 	cma->order_per_bit = order_per_bit;
 	*res_cma = cma;
 	cma_area_count++;
@@ -390,7 +391,7 @@ static void cma_debug_show_areas(struct cma *cma)
 {
 	unsigned long next_zero_bit, next_set_bit, nr_zero;
 	unsigned long start = 0;
-	unsigned long nr_part, nr_total = 0;
+	unsigned long nr_part;
 	unsigned long nbits = cma_bitmap_maxno(cma);
 
 	spin_lock_irq(&cma->lock);
@@ -402,12 +403,12 @@ static void cma_debug_show_areas(struct cma *cma)
 		next_set_bit = find_next_bit(cma->bitmap, nbits, next_zero_bit);
 		nr_zero = next_set_bit - next_zero_bit;
 		nr_part = nr_zero << cma->order_per_bit;
-		pr_cont("%s%lu@%lu", nr_total ? "+" : "", nr_part,
+		pr_cont("%s%lu@%lu", start ? "+" : "", nr_part,
 			next_zero_bit);
-		nr_total += nr_part;
 		start = next_zero_bit + nr_zero;
 	}
-	pr_cont("=> %lu free of %lu total pages\n", nr_total, cma->count);
+	pr_cont("=> %lu free of %lu total pages\n", cma->available_count,
+		cma->count);
 	spin_unlock_irq(&cma->lock);
 }
 
@@ -444,6 +445,14 @@ static struct page *__cma_alloc(struct cma *cma, unsigned long count,
 
 	for (;;) {
 		spin_lock_irq(&cma->lock);
+		/*
+		 * If the request is larger than the available number
+		 * of pages, stop right away.
+		 */
+		if (count > cma->available_count) {
+			spin_unlock_irq(&cma->lock);
+			break;
+		}
 		bitmap_no = bitmap_find_next_zero_area_off(cma->bitmap,
 				bitmap_maxno, start, bitmap_count, mask,
 				offset);
@@ -452,6 +461,7 @@ static struct page *__cma_alloc(struct cma *cma, unsigned long count,
 			break;
 		}
 		bitmap_set(cma->bitmap, bitmap_no, bitmap_count);
+		cma->available_count -= count;
 		/*
 		 * It's safe to drop the lock here. We've marked this region for
 		 * our exclusive use.
		 * If the migration fails we will take the lock again
		 * and unmark it.
		 */

diff --git a/mm/cma.h b/mm/cma.h
index 8485ef893e99..3dd3376ae980 100644
--- a/mm/cma.h
+++ b/mm/cma.h
@@ -13,6 +13,7 @@ struct cma_kobject {
 struct cma {
 	unsigned long   base_pfn;
 	unsigned long   count;
+	unsigned long	available_count;
 	unsigned long   *bitmap;
 	unsigned int order_per_bit; /* Order of pages represented by one bit */
 	spinlock_t	lock;
diff --git a/mm/cma_debug.c b/mm/cma_debug.c
index 602fff89b15f..89236f22230a 100644
--- a/mm/cma_debug.c
+++ b/mm/cma_debug.c
@@ -34,13 +34,10 @@ DEFINE_DEBUGFS_ATTRIBUTE(cma_debugfs_fops, cma_debugfs_get, NULL, "%llu\n");
 static int cma_used_get(void *data, u64 *val)
 {
 	struct cma *cma = data;
-	unsigned long used;
 
 	spin_lock_irq(&cma->lock);
-	/* pages counter is smaller than sizeof(int) */
-	used = bitmap_weight(cma->bitmap, (int)cma_bitmap_maxno(cma));
+	*val = cma->count - cma->available_count;
 	spin_unlock_irq(&cma->lock);
-	*val = (u64)used << cma->order_per_bit;
 
 	return 0;
 }
diff --git a/mm/cma_sysfs.c b/mm/cma_sysfs.c
index f50db3973171..97acd3e5a6a5 100644
--- a/mm/cma_sysfs.c
+++ b/mm/cma_sysfs.c
@@ -62,6 +62,24 @@ static ssize_t release_pages_success_show(struct kobject *kobj,
 }
 CMA_ATTR_RO(release_pages_success);
 
+static ssize_t total_pages_show(struct kobject *kobj,
+			struct kobj_attribute *attr, char *buf)
+{
+	struct cma *cma = cma_from_kobj(kobj);
+
+	return sysfs_emit(buf, "%lu\n", cma->count);
+}
+CMA_ATTR_RO(total_pages);
+
+static ssize_t available_pages_show(struct kobject *kobj,
+			struct kobj_attribute *attr, char *buf)
+{
+	struct cma *cma = cma_from_kobj(kobj);
+
+	return sysfs_emit(buf, "%lu\n", cma->available_count);
+}
+CMA_ATTR_RO(available_pages);
+
 static void cma_kobj_release(struct kobject *kobj)
 {
 	struct cma *cma = cma_from_kobj(kobj);
@@ -75,6 +93,8 @@ static struct attribute *cma_attrs[] = {
 	&alloc_pages_success_attr.attr,
 	&alloc_pages_fail_attr.attr,
 	&release_pages_success_attr.attr,
+	&total_pages_attr.attr,
+	&available_pages_attr.attr,
 	NULL,
};
 ATTRIBUTE_GROUPS(cma);
-- 
2.48.1.711.g2feabab25a-goog
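A note on the "minor shortcuts" mentioned in the patch description
above: they amount to replacing bitmap scans with counter arithmetic.
A freestanding sketch of the idea (illustrative only, not the kernel
code; the toy_* names are made up):

/*
 * Illustrative sketch: pairing an "available pages" counter with the
 * allocation bitmap turns "how many pages are used?" into O(1)
 * arithmetic instead of a bitmap_weight() scan, and lets an allocator
 * reject oversized requests before searching the bitmap at all.
 */
#include <stdio.h>

struct toy_cma {
	unsigned long count;		/* total pages in the area */
	unsigned long available;	/* pages not currently allocated */
};

static unsigned long toy_used(const struct toy_cma *c)
{
	return c->count - c->available;	/* replaces a bitmap scan */
}

static int toy_alloc(struct toy_cma *c, unsigned long n)
{
	if (n > c->available)		/* early bail-out, no bitmap search */
		return -1;
	c->available -= n;		/* bitmap bookkeeping would follow */
	return 0;
}

int main(void)
{
	struct toy_cma c = { .count = 1024, .available = 1024 };

	toy_alloc(&c, 256);
	printf("used %lu of %lu pages\n", toy_used(&c), c.count);
	return 0;
}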
From nobody Fri Dec 19 16:07:21 2025
Date: Fri, 28 Feb 2025 18:29:03 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
References: <20250228182928.2645936-1-fvdl@google.com>
Message-ID: <20250228182928.2645936-3-fvdl@google.com>
Subject: [PATCH v5 02/27] mm, cma: support multiple contiguous ranges, if requested
From: Frank van der Linden
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com,
 roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com,
 Frank van der Linden, Arnd Bergmann

Currently, CMA manages one range of physically contiguous memory.
Creation of larger CMA areas with hugetlb_cma may run into gaps in
physical memory, so that it is not possible to allocate one
contiguous physical range from memblock when creating the CMA area.
This can happen, for example, on an AMD system with more than 1TB of
memory, where there will be a gap just below the 1TB (40-bit DMA)
line. If most of memory has been set aside for potential hugetlb CMA
allocation, cma_declare_contiguous_nid will fail.

hugetlb_cma doesn't need the entire area to be one physically
contiguous range. It just cares about being able to get physically
contiguous chunks of a certain size (e.g. 1G), and it is fine to have
the CMA area backed by multiple physical ranges, as long as it gets
1G contiguous allocations.

Multi-range support is implemented by introducing an array of ranges,
instead of just one big one. Each range has its own bitmap.
Effectively, the allocate and release operations work as before, just
per-range. So, instead of going through one large bitmap, they now go
through a number of smaller ones.

The maximum number of supported ranges is 8, as defined in
CMA_MAX_RANGES.

Since some current users of CMA expect a CMA area to just use one
physically contiguous range, only allow for multiple ranges if a new
interface, cma_declare_contiguous_multi, is used. The other
interfaces will work as before, creating only CMA areas with one
range.

cma_declare_contiguous_multi works as follows, mimicking the default
"bottom-up, above 4G" reservation approach:

0) Try cma_declare_contiguous_nid, which will use only one
   region. If this succeeds, return. This makes sure that for
   all the cases that currently work, the behavior remains
   unchanged even if the caller switches from
   cma_declare_contiguous_nid to cma_declare_contiguous_multi.
1) Select the largest free memblock ranges above 4G, with
   a maximum number of CMA_MAX_RANGES.
2) If the selected ranges, at most CMA_MAX_RANGES of them, do not
   add up to the total size requested, return -ENOMEM.
3) Sort the selected ranges by base address.
4) Reserve them bottom-up until we get what we wanted.

A freestanding sketch of steps 1-3 follows the end of this patch.

Cc: Arnd Bergmann
Signed-off-by: Frank van der Linden
---
 Documentation/admin-guide/mm/cma_debugfs.rst |  10 +-
 include/linux/cma.h                          |   3 +
 mm/cma.c                                     | 594 +++++++++++++++----
 mm/cma.h                                     |  27 +-
 mm/cma_debug.c                               |  56 +-
 5 files changed, 550 insertions(+), 140 deletions(-)

diff --git a/Documentation/admin-guide/mm/cma_debugfs.rst b/Documentation/admin-guide/mm/cma_debugfs.rst
index 7367e6294ef6..4120e9cb0cd5 100644
--- a/Documentation/admin-guide/mm/cma_debugfs.rst
+++ b/Documentation/admin-guide/mm/cma_debugfs.rst
@@ -12,10 +12,16 @@ its CMA name like below:
 
 The structure of the files created under that directory is as follows:
 
- - [RO] base_pfn: The base PFN (Page Frame Number) of the zone.
+ - [RO] base_pfn: The base PFN (Page Frame Number) of the CMA area.
+   This is the same as ranges/0/base_pfn.
  - [RO] count: Amount of memory in the CMA area.
  - [RO] order_per_bit: Order of pages represented by one bit.
- - [RO] bitmap: The bitmap of page states in the zone.
+ - [RO] bitmap: The bitmap of allocated pages in the area.
+   This is the same as ranges/0/bitmap.
+ - [RO] ranges/N/base_pfn: The base PFN of contiguous range N
+   in the CMA area.
+ - [RO] ranges/N/bitmap: The bitmap of allocated pages in
+   range N in the CMA area.
  - [WO] alloc: Allocate N pages from that CMA area. For example::
 
	echo 5 > <debugfs>/cma/<cma_name>/alloc
diff --git a/include/linux/cma.h b/include/linux/cma.h
index d15b64f51336..863427c27dc2 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -40,6 +40,9 @@ static inline int __init cma_declare_contiguous(phys_addr_t base,
 	return cma_declare_contiguous_nid(base, size, limit, alignment,
 			order_per_bit, fixed, name, res_cma, NUMA_NO_NODE);
 }
+extern int __init cma_declare_contiguous_multi(phys_addr_t size,
+			phys_addr_t align, unsigned int order_per_bit,
+			const char *name, struct cma **res_cma, int nid);
 extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 					unsigned int order_per_bit,
 					const char *name,
diff --git a/mm/cma.c b/mm/cma.c
index 95a8788e54d3..34caa6b29c99 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -18,6 +18,7 @@
 
 #include <linux/memblock.h>
 #include <linux/err.h>
+#include <linux/list.h>
 #include <linux/mm.h>
 #include <linux/sizes.h>
 #include <linux/slab.h>
@@ -35,9 +36,16 @@ struct cma cma_areas[MAX_CMA_AREAS];
 unsigned int cma_area_count;
 static DEFINE_MUTEX(cma_mutex);
 
+static int __init __cma_declare_contiguous_nid(phys_addr_t base,
+			phys_addr_t size, phys_addr_t limit,
+			phys_addr_t alignment, unsigned int order_per_bit,
+			bool fixed, const char *name, struct cma **res_cma,
+			int nid);
+
 phys_addr_t cma_get_base(const struct cma *cma)
 {
-	return PFN_PHYS(cma->base_pfn);
+	WARN_ON_ONCE(cma->nranges != 1);
+	return PFN_PHYS(cma->ranges[0].base_pfn);
 }
 
 unsigned long cma_get_size(const struct cma *cma)
@@ -63,9 +71,10 @@ static unsigned long cma_bitmap_aligned_mask(const struct cma *cma,
  * The value returned is represented in order_per_bits.
  */
 static unsigned long cma_bitmap_aligned_offset(const struct cma *cma,
+					       const struct cma_memrange *cmr,
 					       unsigned int align_order)
 {
-	return (cma->base_pfn & ((1UL << align_order) - 1))
+	return (cmr->base_pfn & ((1UL << align_order) - 1))
 		>> cma->order_per_bit;
 }
 
@@ -75,46 +84,57 @@ static unsigned long cma_bitmap_pages_to_bits(const struct cma *cma,
 	return ALIGN(pages, 1UL << cma->order_per_bit) >> cma->order_per_bit;
 }
 
-static void cma_clear_bitmap(struct cma *cma, unsigned long pfn,
-			     unsigned long count)
+static void cma_clear_bitmap(struct cma *cma, const struct cma_memrange *cmr,
+			     unsigned long pfn, unsigned long count)
 {
 	unsigned long bitmap_no, bitmap_count;
 	unsigned long flags;
 
-	bitmap_no = (pfn - cma->base_pfn) >> cma->order_per_bit;
+	bitmap_no = (pfn - cmr->base_pfn) >> cma->order_per_bit;
 	bitmap_count = cma_bitmap_pages_to_bits(cma, count);
 
 	spin_lock_irqsave(&cma->lock, flags);
-	bitmap_clear(cma->bitmap, bitmap_no, bitmap_count);
+	bitmap_clear(cmr->bitmap, bitmap_no, bitmap_count);
 	cma->available_count += count;
 	spin_unlock_irqrestore(&cma->lock, flags);
 }
 
 static void __init cma_activate_area(struct cma *cma)
 {
-	unsigned long base_pfn = cma->base_pfn, pfn;
+	unsigned long pfn, base_pfn;
+	int allocrange, r;
 	struct zone *zone;
+	struct cma_memrange *cmr;
+
+	for (allocrange = 0; allocrange < cma->nranges; allocrange++) {
+		cmr = &cma->ranges[allocrange];
+		cmr->bitmap = bitmap_zalloc(cma_bitmap_maxno(cma, cmr),
+					    GFP_KERNEL);
+		if (!cmr->bitmap)
+			goto cleanup;
+	}
 
-	cma->bitmap = bitmap_zalloc(cma_bitmap_maxno(cma), GFP_KERNEL);
-	if (!cma->bitmap)
-		goto out_error;
+	for (r = 0; r < cma->nranges; r++) {
+		cmr = &cma->ranges[r];
+		base_pfn = cmr->base_pfn;
 
-	/*
-	 * alloc_contig_range() requires the pfn range specified to be in the
-	 * same zone. Simplify by forcing the entire CMA resv range to be in the
-	 * same zone.
-	 */
-	WARN_ON_ONCE(!pfn_valid(base_pfn));
-	zone = page_zone(pfn_to_page(base_pfn));
-	for (pfn = base_pfn + 1; pfn < base_pfn + cma->count; pfn++) {
-		WARN_ON_ONCE(!pfn_valid(pfn));
-		if (page_zone(pfn_to_page(pfn)) != zone)
-			goto not_in_zone;
-	}
+		/*
+		 * alloc_contig_range() requires the pfn range specified
+		 * to be in the same zone. Simplify by forcing the entire
+		 * CMA resv range to be in the same zone.
+		 */
+		WARN_ON_ONCE(!pfn_valid(base_pfn));
+		zone = page_zone(pfn_to_page(base_pfn));
+		for (pfn = base_pfn + 1; pfn < base_pfn + cmr->count; pfn++) {
+			WARN_ON_ONCE(!pfn_valid(pfn));
+			if (page_zone(pfn_to_page(pfn)) != zone)
+				goto cleanup;
+		}
 
-	for (pfn = base_pfn; pfn < base_pfn + cma->count;
-	     pfn += pageblock_nr_pages)
-		init_cma_reserved_pageblock(pfn_to_page(pfn));
+		for (pfn = base_pfn; pfn < base_pfn + cmr->count;
+		     pfn += pageblock_nr_pages)
+			init_cma_reserved_pageblock(pfn_to_page(pfn));
+	}
 
 	spin_lock_init(&cma->lock);
 
@@ -125,13 +145,19 @@ static void __init cma_activate_area(struct cma *cma)
 
 	return;
 
-not_in_zone:
-	bitmap_free(cma->bitmap);
-out_error:
+cleanup:
+	for (r = 0; r < allocrange; r++)
+		bitmap_free(cma->ranges[r].bitmap);
+
 	/* Expose all pages to the buddy, they are useless for CMA. */
 	if (!cma->reserve_pages_on_error) {
-		for (pfn = base_pfn; pfn < base_pfn + cma->count; pfn++)
-			free_reserved_page(pfn_to_page(pfn));
+		for (r = 0; r < allocrange; r++) {
+			cmr = &cma->ranges[r];
+			for (pfn = cmr->base_pfn;
+			     pfn < cmr->base_pfn + cmr->count;
+			     pfn++)
+				free_reserved_page(pfn_to_page(pfn));
+		}
 	}
 	totalcma_pages -= cma->count;
 	cma->available_count = cma->count = 0;
@@ -154,6 +180,43 @@ void __init cma_reserve_pages_on_error(struct cma *cma)
 	cma->reserve_pages_on_error = true;
 }
 
+static int __init cma_new_area(const char *name, phys_addr_t size,
+			       unsigned int order_per_bit,
+			       struct cma **res_cma)
+{
+	struct cma *cma;
+
+	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
+		pr_err("Not enough slots for CMA reserved regions!\n");
+		return -ENOSPC;
+	}
+
+	/*
+	 * Each reserved area must be initialised later, when more kernel
+	 * subsystems (like slab allocator) are available.
+	 */
+	cma = &cma_areas[cma_area_count];
+	cma_area_count++;
+
+	if (name)
+		snprintf(cma->name, CMA_MAX_NAME, "%s", name);
+	else
+		snprintf(cma->name, CMA_MAX_NAME, "cma%d\n", cma_area_count);
+
+	cma->available_count = cma->count = size >> PAGE_SHIFT;
+	cma->order_per_bit = order_per_bit;
+	*res_cma = cma;
+	totalcma_pages += cma->count;
+
+	return 0;
+}
+
+static void __init cma_drop_area(struct cma *cma)
+{
+	totalcma_pages -= cma->count;
+	cma_area_count--;
+}
+
 /**
  * cma_init_reserved_mem() - create custom contiguous area from reserved memory
  * @base: Base address of the reserved area
@@ -172,13 +235,9 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 				 struct cma **res_cma)
 {
 	struct cma *cma;
+	int ret;
 
 	/* Sanity checks */
-	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
-		pr_err("Not enough slots for CMA reserved regions!\n");
-		return -ENOSPC;
-	}
-
 	if (!size || !memblock_is_region_reserved(base, size))
 		return -EINVAL;
 
@@ -195,25 +254,261 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 	if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
 		return -EINVAL;
 
+	ret = cma_new_area(name, size, order_per_bit, &cma);
+	if (ret != 0)
+		return ret;
+
+	cma->ranges[0].base_pfn = PFN_DOWN(base);
+	cma->ranges[0].count = cma->count;
+	cma->nranges = 1;
+
+	*res_cma = cma;
+
+	return 0;
+}
+
+/*
+ * Structure used while walking physical memory ranges and finding out
+ * which one(s) to use for a CMA area.
+ */
+struct cma_init_memrange {
+	phys_addr_t base;
+	phys_addr_t size;
+	struct list_head list;
+};
+
+/*
+ * Work array used during CMA initialization.
+ */
+static struct cma_init_memrange memranges[CMA_MAX_RANGES] __initdata;
+
+static bool __init revsizecmp(struct cma_init_memrange *mlp,
+			      struct cma_init_memrange *mrp)
+{
+	return mlp->size > mrp->size;
+}
+
+static bool __init basecmp(struct cma_init_memrange *mlp,
+			   struct cma_init_memrange *mrp)
+{
+	return mlp->base < mrp->base;
+}
+
+/*
+ * Helper function to create sorted lists.
+ */
+static void __init list_insert_sorted(
+	struct list_head *ranges,
+	struct cma_init_memrange *mrp,
+	bool (*cmp)(struct cma_init_memrange *lh, struct cma_init_memrange *rh))
+{
+	struct list_head *mp;
+	struct cma_init_memrange *mlp;
+
+	if (list_empty(ranges))
+		list_add(&mrp->list, ranges);
+	else {
+		list_for_each(mp, ranges) {
+			mlp = list_entry(mp, struct cma_init_memrange, list);
+			if (cmp(mlp, mrp))
+				break;
+		}
+		__list_add(&mrp->list, mlp->list.prev, &mlp->list);
+	}
+}
+
+/*
+ * Create CMA areas with a total size of @total_size. A normal allocation
+ * for one area is tried first. If that fails, the biggest memblock
+ * ranges above 4G are selected, and allocated bottom up.
+ *
+ * The complexity here is not great, but this function will only be
+ * called during boot, and the lists operated on have fewer than
+ * CMA_MAX_RANGES elements (default value: 8).
+ */
+int __init cma_declare_contiguous_multi(phys_addr_t total_size,
+			phys_addr_t align, unsigned int order_per_bit,
+			const char *name, struct cma **res_cma, int nid)
+{
+	phys_addr_t start, end;
+	phys_addr_t size, sizesum, sizeleft;
+	struct cma_init_memrange *mrp, *mlp, *failed;
+	struct cma_memrange *cmrp;
+	LIST_HEAD(ranges);
+	LIST_HEAD(final_ranges);
+	struct list_head *mp, *next;
+	int ret, nr = 1;
+	u64 i;
+	struct cma *cma;
+
 	/*
-	 * Each reserved area must be initialised later, when more kernel
-	 * subsystems (like slab allocator) are available.
+	 * First, try it the normal way, producing just one range.
+	 */
-	cma = &cma_areas[cma_area_count];
+	ret = __cma_declare_contiguous_nid(0, total_size, 0, align,
+			order_per_bit, false, name, res_cma, nid);
+	if (ret != -ENOMEM)
+		goto out;
 
-	if (name)
-		snprintf(cma->name, CMA_MAX_NAME, name);
-	else
-		snprintf(cma->name, CMA_MAX_NAME,  "cma%d\n", cma_area_count);
+	/*
+	 * Couldn't find one range that fits our needs, so try multiple
+	 * ranges.
+	 *
+	 * No need to do the alignment checks here, the call to
+	 * cma_declare_contiguous_nid above would have caught
+	 * any issues. With the checks, we know that:
+	 *
+	 * - @align is a power of 2
+	 * - @align is >= pageblock alignment
+	 * - @size is aligned to @align and to @order_per_bit
+	 *
+	 * So, as long as we create ranges that have a base
+	 * aligned to @align, and a size that is aligned to
+	 * both @align and @order_to_bit, things will work out.
+	 */
+	nr = 0;
+	sizesum = 0;
+	failed = NULL;
 
-	cma->base_pfn = PFN_DOWN(base);
-	cma->available_count = cma->count = size >> PAGE_SHIFT;
-	cma->order_per_bit = order_per_bit;
+	ret = cma_new_area(name, total_size, order_per_bit, &cma);
+	if (ret != 0)
+		goto out;
+
+	align = max_t(phys_addr_t, align, CMA_MIN_ALIGNMENT_BYTES);
+	/*
+	 * Create a list of ranges above 4G, largest range first.
+	 */
+	for_each_free_mem_range(i, nid, MEMBLOCK_NONE, &start, &end, NULL) {
+		if (upper_32_bits(start) == 0)
+			continue;
+
+		start = ALIGN(start, align);
+		if (start >= end)
+			continue;
+
+		end = ALIGN_DOWN(end, align);
+		if (end <= start)
+			continue;
+
+		size = end - start;
+		size = ALIGN_DOWN(size, (PAGE_SIZE << order_per_bit));
+		if (!size)
+			continue;
+		sizesum += size;
+
+		pr_debug("consider %016llx - %016llx\n", (u64)start, (u64)end);
+
+		/*
+		 * If we haven't yet used the maximum number of
+		 * areas, grab a new one.
+		 *
+		 * If all slots are used, check whether this range is
+		 * at least as large as the smallest one recorded so
+		 * far; if so, let it replace the smallest element.
+		 */
+		if (nr < CMA_MAX_RANGES)
+			mrp = &memranges[nr++];
+		else {
+			mrp = list_last_entry(&ranges,
+					      struct cma_init_memrange, list);
+			if (size < mrp->size)
+				continue;
+			list_del(&mrp->list);
+			sizesum -= mrp->size;
+			pr_debug("deleted %016llx - %016llx from the list\n",
+				 (u64)mrp->base, (u64)mrp->base + size);
+		}
+		mrp->base = start;
+		mrp->size = size;
+
+		/*
+		 * Now do a sorted insert.
+		 */
+		list_insert_sorted(&ranges, mrp, revsizecmp);
+		pr_debug("added %016llx - %016llx to the list\n",
+			 (u64)mrp->base, (u64)mrp->base + size);
+		pr_debug("total size now %llu\n", (u64)sizesum);
+	}
+
+	/*
+	 * There is not enough room in the CMA_MAX_RANGES largest
+	 * ranges, so bail out.
+	 */
+	if (sizesum < total_size) {
+		cma_drop_area(cma);
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	/*
+	 * Found ranges that provide enough combined space.
+	 * Now, sort them by address, smallest first, because we
+	 * want to mimic a bottom-up memblock allocation.
+	 */
+	sizesum = 0;
+	list_for_each_safe(mp, next, &ranges) {
+		mlp = list_entry(mp, struct cma_init_memrange, list);
+		list_del(mp);
+		list_insert_sorted(&final_ranges, mlp, basecmp);
+		sizesum += mlp->size;
+		if (sizesum >= total_size)
+			break;
+	}
+
+	/*
+	 * Walk the final list, and add a CMA range for
+	 * each range, possibly not using the last one fully.
+	 */
+	nr = 0;
+	sizeleft = total_size;
+	list_for_each(mp, &final_ranges) {
+		mlp = list_entry(mp, struct cma_init_memrange, list);
+		size = min(sizeleft, mlp->size);
+		if (memblock_reserve(mlp->base, size)) {
+			/*
+			 * Unexpected error. Could go on to
+			 * the next one, but just abort to
+			 * be safe.
+			 */
+			failed = mlp;
+			break;
+		}
+
+		pr_debug("created region %d: %016llx - %016llx\n",
+			 nr, (u64)mlp->base, (u64)mlp->base + size);
+		cmrp = &cma->ranges[nr++];
+		cmrp->base_pfn = PHYS_PFN(mlp->base);
+		cmrp->count = size >> PAGE_SHIFT;
+
+		sizeleft -= size;
+		if (sizeleft == 0)
+			break;
+	}
+
+	if (failed) {
+		list_for_each(mp, &final_ranges) {
+			mlp = list_entry(mp, struct cma_init_memrange, list);
+			if (mlp == failed)
+				break;
+			memblock_phys_free(mlp->base, mlp->size);
+		}
+		cma_drop_area(cma);
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	cma->nranges = nr;
 	*res_cma = cma;
-	cma_area_count++;
-	totalcma_pages += cma->count;
 
-	return 0;
+out:
+	if (ret != 0)
+		pr_err("Failed to reserve %lu MiB\n",
+		       (unsigned long)total_size / SZ_1M);
+	else
+		pr_info("Reserved %lu MiB in %d range%s\n",
+			(unsigned long)total_size / SZ_1M, nr,
+			nr > 1 ? "s" : "");
+
+	return ret;
 }
 
 /**
@@ -241,6 +536,26 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
 			phys_addr_t alignment, unsigned int order_per_bit,
 			bool fixed, const char *name, struct cma **res_cma,
 			int nid)
+{
+	int ret;
+
+	ret = __cma_declare_contiguous_nid(base, size, limit, alignment,
+			order_per_bit, fixed, name, res_cma, nid);
+	if (ret != 0)
+		pr_err("Failed to reserve %ld MiB\n",
+		       (unsigned long)size / SZ_1M);
+	else
+		pr_info("Reserved %ld MiB at %pa\n",
+			(unsigned long)size / SZ_1M, &base);
+
+	return ret;
+}
+
+static int __init __cma_declare_contiguous_nid(phys_addr_t base,
+			phys_addr_t size, phys_addr_t limit,
+			phys_addr_t alignment, unsigned int order_per_bit,
+			bool fixed, const char *name, struct cma **res_cma,
+			int nid)
 {
 	phys_addr_t memblock_end = memblock_end_of_DRAM();
 	phys_addr_t highmem_start;
@@ -273,10 +588,9 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
 	/* Sanitise input arguments. */
 	alignment = max_t(phys_addr_t, alignment, CMA_MIN_ALIGNMENT_BYTES);
 	if (fixed && base & (alignment - 1)) {
-		ret = -EINVAL;
 		pr_err("Region at %pa must be aligned to %pa bytes\n",
 		       &base, &alignment);
-		goto err;
+		return -EINVAL;
 	}
 	base = ALIGN(base, alignment);
 	size = ALIGN(size, alignment);
@@ -294,10 +608,9 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
 	 * low/high memory boundary.
	 */
 	if (fixed && base < highmem_start && base + size > highmem_start) {
-		ret = -EINVAL;
 		pr_err("Region at %pa defined on low/high memory boundary (%pa)\n",
 		       &base, &highmem_start);
-		goto err;
+		return -EINVAL;
 	}
 
 	/*
@@ -309,18 +622,16 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
 		limit = memblock_end;
 
 	if (base + size > limit) {
-		ret = -EINVAL;
 		pr_err("Size (%pa) of region at %pa exceeds limit (%pa)\n",
 		       &size, &base, &limit);
-		goto err;
+		return -EINVAL;
 	}
 
 	/* Reserve memory */
 	if (fixed) {
 		if (memblock_is_region_reserved(base, size) ||
 		    memblock_reserve(base, size) < 0) {
-			ret = -EBUSY;
-			goto err;
+			return -EBUSY;
 		}
 	} else {
 		phys_addr_t addr = 0;
@@ -357,10 +668,8 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
 		if (!addr) {
 			addr = memblock_alloc_range_nid(size, alignment, base,
 					limit, nid, true);
-			if (!addr) {
-				ret = -ENOMEM;
-				goto err;
-			}
+			if (!addr)
+				return -ENOMEM;
 		}
 
 		/*
@@ -373,75 +682,67 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
 
 	ret = cma_init_reserved_mem(base, size, order_per_bit, name, res_cma);
 	if (ret)
-		goto free_mem;
-
-	pr_info("Reserved %ld MiB at %pa on node %d\n", (unsigned long)size / SZ_1M,
-		&base, nid);
-	return 0;
+		memblock_phys_free(base, size);
 
-free_mem:
-	memblock_phys_free(base, size);
-err:
-	pr_err("Failed to reserve %ld MiB on node %d\n", (unsigned long)size / SZ_1M,
-	       nid);
 	return ret;
 }
 
 static void cma_debug_show_areas(struct cma *cma)
 {
 	unsigned long next_zero_bit, next_set_bit, nr_zero;
-	unsigned long start = 0;
+	unsigned long start;
 	unsigned long nr_part;
-	unsigned long nbits = cma_bitmap_maxno(cma);
+	unsigned long nbits;
+	int r;
+	struct cma_memrange *cmr;
 
 	spin_lock_irq(&cma->lock);
 	pr_info("number of available pages: ");
-	for (;;) {
-		next_zero_bit = find_next_zero_bit(cma->bitmap, nbits, start);
-		if (next_zero_bit >= nbits)
-			break;
-		next_set_bit = find_next_bit(cma->bitmap, nbits, next_zero_bit);
-		nr_zero = next_set_bit - next_zero_bit;
-		nr_part = nr_zero << cma->order_per_bit;
-		pr_cont("%s%lu@%lu", start ? "+" : "", nr_part,
-			next_zero_bit);
-		start = next_zero_bit + nr_zero;
+	for (r = 0; r < cma->nranges; r++) {
+		cmr = &cma->ranges[r];
+
+		start = 0;
+		nbits = cma_bitmap_maxno(cma, cmr);
+
+		pr_info("range %d: ", r);
+		for (;;) {
+			next_zero_bit = find_next_zero_bit(cmr->bitmap,
+							   nbits, start);
+			if (next_zero_bit >= nbits)
+				break;
+			next_set_bit = find_next_bit(cmr->bitmap, nbits,
+						     next_zero_bit);
+			nr_zero = next_set_bit - next_zero_bit;
+			nr_part = nr_zero << cma->order_per_bit;
+			pr_cont("%s%lu@%lu", start ? "+" : "", nr_part,
+				next_zero_bit);
+			start = next_zero_bit + nr_zero;
+		}
+		pr_info("\n");
 	}
 	pr_cont("=> %lu free of %lu total pages\n", cma->available_count,
 		cma->count);
 	spin_unlock_irq(&cma->lock);
 }
 
-static struct page *__cma_alloc(struct cma *cma, unsigned long count,
-				unsigned int align, gfp_t gfp)
+static int cma_range_alloc(struct cma *cma, struct cma_memrange *cmr,
+			   unsigned long count, unsigned int align,
+			   struct page **pagep, gfp_t gfp)
 {
 	unsigned long mask, offset;
 	unsigned long pfn = -1;
 	unsigned long start = 0;
 	unsigned long bitmap_maxno, bitmap_no, bitmap_count;
-	unsigned long i;
+	int ret = -EBUSY;
 	struct page *page = NULL;
-	int ret = -ENOMEM;
-	const char *name = cma ? cma->name : NULL;
-
-	trace_cma_alloc_start(name, count, align);
-
-	if (!cma || !cma->count || !cma->bitmap)
-		return page;
-
-	pr_debug("%s(cma %p, name: %s, count %lu, align %d)\n", __func__,
-		 (void *)cma, cma->name, count, align);
-
-	if (!count)
-		return page;
 
 	mask = cma_bitmap_aligned_mask(cma, align);
-	offset = cma_bitmap_aligned_offset(cma, align);
-	bitmap_maxno = cma_bitmap_maxno(cma);
+	offset = cma_bitmap_aligned_offset(cma, cmr, align);
+	bitmap_maxno = cma_bitmap_maxno(cma, cmr);
 	bitmap_count = cma_bitmap_pages_to_bits(cma, count);
 
 	if (bitmap_count > bitmap_maxno)
-		return page;
+		goto out;
 
 	for (;;) {
 		spin_lock_irq(&cma->lock);
@@ -453,14 +754,14 @@ static struct page *__cma_alloc(struct cma *cma, unsigned long count,
 			spin_unlock_irq(&cma->lock);
 			break;
 		}
-		bitmap_no = bitmap_find_next_zero_area_off(cma->bitmap,
+		bitmap_no = bitmap_find_next_zero_area_off(cmr->bitmap,
 				bitmap_maxno, start, bitmap_count, mask,
 				offset);
 		if (bitmap_no >= bitmap_maxno) {
 			spin_unlock_irq(&cma->lock);
 			break;
 		}
-		bitmap_set(cma->bitmap, bitmap_no, bitmap_count);
+		bitmap_set(cmr->bitmap, bitmap_no, bitmap_count);
 		cma->available_count -= count;
 		/*
 		 * It's safe to drop the lock here. We've marked this region for
@@ -469,7 +770,7 @@ static struct page *__cma_alloc(struct cma *cma, unsigned long count,
 		 */
 		spin_unlock_irq(&cma->lock);
 
-		pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
+		pfn = cmr->base_pfn + (bitmap_no << cma->order_per_bit);
 		mutex_lock(&cma_mutex);
 		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA, gfp);
 		mutex_unlock(&cma_mutex);
@@ -478,7 +779,7 @@ static struct page *__cma_alloc(struct cma *cma, unsigned long count,
 			break;
 		}
 
-		cma_clear_bitmap(cma, pfn, count);
+		cma_clear_bitmap(cma, cmr, pfn, count);
 		if (ret != -EBUSY)
 			break;
 
@@ -490,6 +791,38 @@ static struct page *__cma_alloc(struct cma *cma, unsigned long count,
 		/* try again with a bit different memory target */
 		start = bitmap_no + mask + 1;
 	}
+out:
+	*pagep = page;
+	return ret;
+}
+
+static struct page *__cma_alloc(struct cma *cma, unsigned long count,
+				unsigned int align, gfp_t gfp)
+{
+	struct page *page = NULL;
+	int ret = -ENOMEM, r;
+	unsigned long i;
+	const char *name = cma ? cma->name : NULL;
+
+	trace_cma_alloc_start(name, count, align);
+
+	if (!cma || !cma->count)
+		return page;
+
+	pr_debug("%s(cma %p, name: %s, count %lu, align %d)\n", __func__,
+		 (void *)cma, cma->name, count, align);
+
+	if (!count)
+		return page;
+
+	for (r = 0; r < cma->nranges; r++) {
+		page = NULL;
+
+		ret = cma_range_alloc(cma, &cma->ranges[r], count, align,
+				      &page, gfp);
+		if (ret != -EBUSY || page)
+			break;
+	}
 
 	/*
 	 * CMA can allocate multiple page blocks, which results in different
@@ -508,7 +841,8 @@ static struct page *__cma_alloc(struct cma *cma, unsigned long count,
 	}
 
 	pr_debug("%s(): returned %p\n", __func__, page);
-	trace_cma_alloc_finish(name, pfn, page, count, align, ret);
+	trace_cma_alloc_finish(name, page ? page_to_pfn(page) : 0,
+			       page, count, align, ret);
 	if (page) {
 		count_vm_event(CMA_ALLOC_SUCCESS);
 		cma_sysfs_account_success_pages(cma, count);
@@ -551,20 +885,31 @@ struct folio *cma_alloc_folio(struct cma *cma, int order, gfp_t gfp)
 bool cma_pages_valid(struct cma *cma, const struct page *pages,
 		     unsigned long count)
 {
-	unsigned long pfn;
+	unsigned long pfn, end;
+	int r;
+	struct cma_memrange *cmr;
+	bool ret;
 
-	if (!cma || !pages)
+	if (!cma || !pages || count > cma->count)
 		return false;
 
 	pfn = page_to_pfn(pages);
+	ret = false;
 
-	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count) {
-		pr_debug("%s(page %p, count %lu)\n", __func__,
-			 (void *)pages, count);
-		return false;
+	for (r = 0; r < cma->nranges; r++) {
+		cmr = &cma->ranges[r];
+		end = cmr->base_pfn + cmr->count;
+		if (pfn >= cmr->base_pfn && pfn < end) {
+			ret = pfn + count <= end;
+			break;
+		}
 	}
 
-	return true;
+	if (!ret)
+		pr_debug("%s(page %p, count %lu)\n",
+			 __func__, (void *)pages, count);
+
+	return ret;
 }
 
 /**
@@ -580,19 +925,32 @@ bool cma_pages_valid(struct cma *cma, const struct page *pages,
 bool cma_release(struct cma *cma, const struct page *pages,
 		 unsigned long count)
 {
-	unsigned long pfn;
+	struct cma_memrange *cmr;
+	unsigned long pfn, end_pfn;
+	int r;
+
+	pr_debug("%s(page %p, count %lu)\n", __func__, (void *)pages, count);
 
 	if (!cma_pages_valid(cma, pages, count))
 		return false;
 
-	pr_debug("%s(page %p, count %lu)\n", __func__, (void *)pages, count);
-
 	pfn = page_to_pfn(pages);
+	end_pfn = pfn + count;
+
+	for (r = 0; r < cma->nranges; r++) {
+		cmr = &cma->ranges[r];
+		if (pfn >= cmr->base_pfn &&
+		    pfn < (cmr->base_pfn + cmr->count)) {
+			VM_BUG_ON(end_pfn > cmr->base_pfn + cmr->count);
+			break;
+		}
+	}
 
-	VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
+	if (r == cma->nranges)
+		return false;
 
 	free_contig_range(pfn, count);
-	cma_clear_bitmap(cma, pfn, count);
+	cma_clear_bitmap(cma, cmr, pfn, count);
 	cma_sysfs_account_release_pages(cma, count);
 	trace_cma_release(cma->name, pfn, pages, count);
 
diff --git a/mm/cma.h b/mm/cma.h
index 3dd3376ae980..5f39dd1aac91 100644
--- a/mm/cma.h
+++ b/mm/cma.h
@@ -10,19 +10,35 @@ struct cma_kobject {
 	struct cma *cma;
 };
 
+/*
+ * Multi-range support. This can be useful if the size of the allocation
+ * is not expected to be larger than the alignment (like with hugetlb_cma),
+ * and the total amount of memory requested, while smaller than the total
+ * amount of memory available, is large enough that it doesn't fit in a
+ * single physical memory range because of memory holes.
+ */
+struct cma_memrange {
+	unsigned long base_pfn;
+	unsigned long count;
+	unsigned long *bitmap;
+#ifdef CONFIG_CMA_DEBUGFS
+	struct debugfs_u32_array dfs_bitmap;
+#endif
+};
+#define CMA_MAX_RANGES 8
+
 struct cma {
-	unsigned long   base_pfn;
 	unsigned long   count;
 	unsigned long	available_count;
-	unsigned long   *bitmap;
 	unsigned int order_per_bit; /* Order of pages represented by one bit */
 	spinlock_t	lock;
 #ifdef CONFIG_CMA_DEBUGFS
 	struct hlist_head mem_head;
 	spinlock_t mem_head_lock;
-	struct debugfs_u32_array dfs_bitmap;
 #endif
 	char name[CMA_MAX_NAME];
+	int nranges;
+	struct cma_memrange ranges[CMA_MAX_RANGES];
 #ifdef CONFIG_CMA_SYSFS
 	/* the number of CMA page successful allocations */
 	atomic64_t nr_pages_succeeded;
@@ -39,9 +55,10 @@ struct cma {
 extern struct cma cma_areas[MAX_CMA_AREAS];
 extern unsigned int cma_area_count;
 
-static inline unsigned long cma_bitmap_maxno(struct cma *cma)
+static inline unsigned long cma_bitmap_maxno(struct cma *cma,
+					     struct cma_memrange *cmr)
 {
-	return cma->count >> cma->order_per_bit;
+	return cmr->count >> cma->order_per_bit;
 }
 
 #ifdef CONFIG_CMA_SYSFS
diff --git a/mm/cma_debug.c b/mm/cma_debug.c
index 89236f22230a..fdf899532ca0 100644
--- a/mm/cma_debug.c
+++ b/mm/cma_debug.c
@@ -46,17 +46,26 @@ DEFINE_DEBUGFS_ATTRIBUTE(cma_used_fops, cma_used_get, NULL, "%llu\n");
 static int cma_maxchunk_get(void *data, u64 *val)
 {
 	struct cma *cma = data;
+	struct cma_memrange *cmr;
 	unsigned long maxchunk = 0;
-	unsigned long start, end = 0;
-	unsigned long bitmap_maxno = cma_bitmap_maxno(cma);
+	unsigned long start, end;
+	unsigned long bitmap_maxno;
+	int r;
 
 	spin_lock_irq(&cma->lock);
-	for (;;) {
-		start = find_next_zero_bit(cma->bitmap, bitmap_maxno, end);
-		if (start >= bitmap_maxno)
-			break;
-		end = find_next_bit(cma->bitmap, bitmap_maxno, start);
-		maxchunk = max(end - start, maxchunk);
+	for (r = 0; r < cma->nranges; r++) {
+		cmr = &cma->ranges[r];
+		bitmap_maxno = cma_bitmap_maxno(cma, cmr);
+		end = 0;
+		for (;;) {
+			start = find_next_zero_bit(cmr->bitmap,
+						   bitmap_maxno, end);
+			if (start >= bitmap_maxno)
+				break;
+			end = find_next_bit(cmr->bitmap, bitmap_maxno,
+					    start);
+			maxchunk = max(end - start, maxchunk);
+		}
 	}
 	spin_unlock_irq(&cma->lock);
 	*val = (u64)maxchunk << cma->order_per_bit;
@@ -159,24 +168,41 @@ DEFINE_DEBUGFS_ATTRIBUTE(cma_alloc_fops, NULL, cma_alloc_write, "%llu\n");
 
 static void cma_debugfs_add_one(struct cma *cma, struct dentry *root_dentry)
 {
-	struct dentry *tmp;
+	struct dentry *tmp, *dir, *rangedir;
+	int r;
+	char rdirname[12];
+	struct cma_memrange *cmr;
 
 	tmp = debugfs_create_dir(cma->name, root_dentry);
 
 	debugfs_create_file("alloc", 0200, tmp, cma, &cma_alloc_fops);
 	debugfs_create_file("free", 0200, tmp, cma, &cma_free_fops);
-	debugfs_create_file("base_pfn", 0444, tmp,
-			    &cma->base_pfn, &cma_debugfs_fops);
 	debugfs_create_file("count", 0444, tmp, &cma->count, &cma_debugfs_fops);
 	debugfs_create_file("order_per_bit", 0444, tmp,
 			    &cma->order_per_bit, &cma_debugfs_fops);
 	debugfs_create_file("used", 0444, tmp, cma, &cma_used_fops);
 	debugfs_create_file("maxchunk", 0444, tmp, cma, &cma_maxchunk_fops);
 
-	cma->dfs_bitmap.array = (u32 *)cma->bitmap;
-	cma->dfs_bitmap.n_elements = DIV_ROUND_UP(cma_bitmap_maxno(cma),
-						  BITS_PER_BYTE * sizeof(u32));
-	debugfs_create_u32_array("bitmap", 0444, tmp, &cma->dfs_bitmap);
+	rangedir = debugfs_create_dir("ranges", tmp);
+	for (r = 0; r < cma->nranges; r++) {
+		cmr = &cma->ranges[r];
"%d", r); + dir =3D debugfs_create_dir(rdirname, rangedir); + debugfs_create_file("base_pfn", 0444, dir, + &cmr->base_pfn, &cma_debugfs_fops); + cmr->dfs_bitmap.array =3D (u32 *)cmr->bitmap; + cmr->dfs_bitmap.n_elements =3D + DIV_ROUND_UP(cma_bitmap_maxno(cma, cmr), + BITS_PER_BYTE * sizeof(u32)); + debugfs_create_u32_array("bitmap", 0444, dir, + &cmr->dfs_bitmap); + } + + /* + * Backward compatible symlinks to range 0 for base_pfn and bitmap. + */ + debugfs_create_symlink("base_pfn", tmp, "ranges/0/base_pfn"); + debugfs_create_symlink("bitmap", tmp, "ranges/0/bitmap"); } =20 static int __init cma_debugfs_init(void) --=20 2.48.1.711.g2feabab25a-goog From nobody Fri Dec 19 16:07:21 2025 Received: from mail-pl1-f202.google.com (mail-pl1-f202.google.com [209.85.214.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DE974276051 for ; Fri, 28 Feb 2025 18:29:54 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740767396; cv=none; b=IyN7mhEwxZY+1YU2FXEV9DOAnwGpTNKSadR8F9JEFHlREIlsHNzHL4Mn3PJDWGSzzo/JuxQGjkn4LgjfmtGYvxLi/lZ9Sv/XCkdQGEi7LTrEG+1JL9NiqQglloJpnEQHunJKuMlLMtp3VYUnKqBZfGZlyRg1W+Dmq/IN+6EFtQE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740767396; c=relaxed/simple; bh=s0deOUnNgMIFiGIX8UNyQZNO2RKvZSA/G/yOBXDqhdk=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=PXU3hYfQK4sDsDAtryfwS7wOvHglUKkma8zJRmZmt2k4y1ouj3SdRM3Xv/qu/XfzhR6qu7S8MLJXvsMkzj2Zx8j4rhqgEjDbkUJxRUCpBxANxhajuvOu3vKrV8ntZuMeZtpGxkcQ1D3VA9opybt42BwIxbk8rA7An7k2tyOXTIo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--fvdl.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=e2RVXD+v; arc=none smtp.client-ip=209.85.214.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--fvdl.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="e2RVXD+v" Received: by mail-pl1-f202.google.com with SMTP id d9443c01a7336-22328fb6cbfso44745285ad.2 for ; Fri, 28 Feb 2025 10:29:54 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1740767394; x=1741372194; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=cv0B4/uThibyDXPGrCU6CzEy8xZoS0FJOJ+cAKPL8G4=; b=e2RVXD+vmKeg6JaEbngDcna7T+WMYhxk0WKLlUgJD740Bw0So2cqUDsAOKVyHJqzV7 Cdg4qEG1IV+pvMzDG6XpY22z+fzOrT+Act2cXkSTyed9QLQQtkjdOzsVNRIAlqrEgRYF VLP6wrF+449Fjr3bHO1fuEHbVCQskqbhQyqjnLs7+d7AyaBvyyq2LoiiBkoWteVL/F/1 +4LUOMRwuMWiHld4Scd5jkNAoviLvmTt51fsCQC0yiQVaOna/57yAkXS3HYfXQUXxhkO DWQPS8L80zNfIV2CZDMsrFN/GvEmr7stKBwfV1iNCOEdcYuNvWeCTj28zZ+xxYCGTWUl RImw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1740767394; x=1741372194; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=cv0B4/uThibyDXPGrCU6CzEy8xZoS0FJOJ+cAKPL8G4=; 
From nobody Fri Dec 19 16:07:21 2025
Date: Fri, 28 Feb 2025 18:29:04 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
References: <20250228182928.2645936-1-fvdl@google.com>
Message-ID: <20250228182928.2645936-4-fvdl@google.com>
Subject: [PATCH v5 03/27] mm/cma: introduce cma_intersects function
From: Frank van der Linden
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com,
 roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com,
 Frank van der Linden, Heiko Carstens, Vasily Gorbik,
 Alexander Gordeev, linux-s390@vger.kernel.org

Now that CMA areas can have multiple physical ranges, code can't
assume that a CMA struct represents a base_pfn plus a size, as
returned from cma_get_base.

Most cases are fine, though, since they all explicitly refer to CMA
areas that were created using existing interfaces
(cma_declare_contiguous_nid or cma_init_reserved_mem), which
guarantees that they have just one physical range.

An exception is the s390 code, which walks all CMA areas to see if
they intersect with a range of memory that is about to be
hot-removed. So, in the future, it might run into multi-range areas.
To keep this check working, define a cma_intersects function. This
just checks if a physaddr range intersects any of the ranges. Use it
in the s390 check.
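For reference, the per-range check reduces to the classic "neither
range lies entirely before the other" test. A standalone sketch
(illustrative only; the toy_ name is made up, and it mirrors the two
early-continue conditions in the patch below):

#include <stdbool.h>

/* Illustrative only: a region [start, end] intersects a CMA range
 * [rstart, rend) unless it lies entirely below or above it. */
static bool toy_intersects(unsigned long rstart, unsigned long rend,
			   unsigned long start, unsigned long end)
{
	if (end < rstart)	/* region ends before the range begins */
		return false;
	if (start >= rend)	/* region begins at or past the range end */
		return false;
	return true;
}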
Cc: Heiko Carstens
Cc: Vasily Gorbik
Cc: Alexander Gordeev
Cc: linux-s390@vger.kernel.org
Acked-by: Alexander Gordeev
Signed-off-by: Frank van der Linden
---
 arch/s390/mm/init.c | 13 +++++--------
 include/linux/cma.h |  1 +
 mm/cma.c            | 21 +++++++++++++++++++++
 3 files changed, 27 insertions(+), 8 deletions(-)

diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index f2298f7a3f21..d88cb1c13f7d 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -239,16 +239,13 @@ struct s390_cma_mem_data {
 static int s390_cma_check_range(struct cma *cma, void *data)
 {
 	struct s390_cma_mem_data *mem_data;
-	unsigned long start, end;
 
 	mem_data = data;
-	start = cma_get_base(cma);
-	end = start + cma_get_size(cma);
-	if (end < mem_data->start)
-		return 0;
-	if (start >= mem_data->end)
-		return 0;
-	return -EBUSY;
+
+	if (cma_intersects(cma, mem_data->start, mem_data->end))
+		return -EBUSY;
+
+	return 0;
 }
 
 static int s390_cma_mem_notifier(struct notifier_block *nb,
diff --git a/include/linux/cma.h b/include/linux/cma.h
index 863427c27dc2..03d85c100dcc 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -53,6 +53,7 @@ extern bool cma_pages_valid(struct cma *cma, const struct page *pages, unsigned
 extern bool cma_release(struct cma *cma, const struct page *pages, unsigned long count);
 
 extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
+extern bool cma_intersects(struct cma *cma, unsigned long start, unsigned long end);
 
 extern void cma_reserve_pages_on_error(struct cma *cma);
 
diff --git a/mm/cma.c b/mm/cma.c
index 34caa6b29c99..8dc46bfa3819 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -978,3 +978,24 @@ int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data)
 
 	return 0;
 }
+
+bool cma_intersects(struct cma *cma, unsigned long start, unsigned long end)
+{
+	int r;
+	struct cma_memrange *cmr;
+	unsigned long rstart, rend;
+
+	for (r = 0; r < cma->nranges; r++) {
+		cmr = &cma->ranges[r];
+
+		rstart = PFN_PHYS(cmr->base_pfn);
+		rend = PFN_PHYS(cmr->base_pfn + cmr->count);
+		if (end < rstart)
+			continue;
+		if (start >= rend)
+			continue;
+		return true;
+	}
+
+	return false;
+}
-- 
2.48.1.711.g2feabab25a-goog
From nobody Fri Dec 19 16:07:21 2025
Date: Fri, 28 Feb 2025 18:29:05 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
References: <20250228182928.2645936-1-fvdl@google.com>
Message-ID: <20250228182928.2645936-5-fvdl@google.com>
Subject: [PATCH v5 04/27] mm, hugetlb: use cma_declare_contiguous_multi
From: Frank van der Linden
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com,
 roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com,
 Frank van der Linden

hugetlb_cma is fine with using multiple CMA ranges, as long as it can
get its gigantic pages allocated from them.
So, use cma_declare_contiguous_multi to allow for multiple ranges, increasing the chances of getting what we want on systems with gaps in physical memory. Signed-off-by: Frank van der Linden --- mm/hugetlb.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 163190e89ea1..fadfacf56066 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -7738,9 +7738,8 @@ void __init hugetlb_cma_reserve(int order) * may be returned to CMA allocator in the case of * huge page demotion. */ - res =3D cma_declare_contiguous_nid(0, size, 0, - PAGE_SIZE << order, - HUGETLB_PAGE_ORDER, false, name, + res =3D cma_declare_contiguous_multi(size, PAGE_SIZE << order, + HUGETLB_PAGE_ORDER, name, &hugetlb_cma[nid], nid); if (res) { pr_warn("hugetlb_cma: reservation failed: err %d, node %d", --=20 2.48.1.711.g2feabab25a-goog From nobody Fri Dec 19 16:07:21 2025 Received: from mail-pj1-f73.google.com (mail-pj1-f73.google.com [209.85.216.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 19266277812 for ; Fri, 28 Feb 2025 18:29:57 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740767399; cv=none; b=M+tjOy+c95Dm7OVr52PfjjHs6cmV/HvsVNA1XjaEcbLJo9LbIqWT3OzumUzhSUwnKmwcooEI0xWk4VU3HBHiF9VcTElLURH1RAkcQZgEJgRXHjxBXUbg5hG+Z5Vx182vB8PRyFYKtM96L0LEX8D1TdD1w9T/dVbqPcHKUs71wBI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740767399; c=relaxed/simple; bh=HY2CGcpkWtS+ntQTMmiNkH2cZKOFeEFwZNwW9vji9Mo=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=kGb1XqQxeWbWdr4n09iL3XEYfNWTn76/wE0CeEfGYdhlCfEXYJLiBxGFURYBEvxT+Q0D0+FsS2Q3ccyWwAekph285GCr/0ALvSCKeGwwUyeaeS5DSlIVaRQR0nL9KuZoo5IHVXBWOWb2z85EJATMpo3jBqd5CGcNj9m9EfIOXC8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--fvdl.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=qXwaL0i2; arc=none smtp.client-ip=209.85.216.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--fvdl.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="qXwaL0i2" Received: by mail-pj1-f73.google.com with SMTP id 98e67ed59e1d1-2fc1eabf4f7so5260953a91.1 for ; Fri, 28 Feb 2025 10:29:57 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1740767397; x=1741372197; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=MFqvkI2A59BbRTUcT+L5ulTUxSAi2JTL7XhWtVP7K+w=; b=qXwaL0i2bV+s5VXeoHAUyKYcMVxs/QQhTnnalMqGGWaMIXWESQIX9ucdpigep+Oees SyaoJA2RPyJYFkYmXWEUwNKcmVTKbYZjkYKjfbhKkjR2JtdJmbXekKpuzvbuC1DH5vwz K2EQalggCbdPU1TJlEYkTGihllUZ8wlc4Gw8qOz33iCMzTLL6wB0UJcrhGGI2/AOrBeB OJcVVLEZ3eORFfJfje3IPtZlVBg0JK8uRuPprcYYfO4B2EI5QNybpRCcymKr9zWGrl0v yADOlIhTPlmYqD9K/jSLVGmIFcgYHj1Mn+ZNUN20x/CB1hEsSXM4Q/F352m4Q10FZk6r U+3A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1740767397; x=1741372197; 
h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=MFqvkI2A59BbRTUcT+L5ulTUxSAi2JTL7XhWtVP7K+w=; b=qro/2TZLW/4bFmoQfXj61niS0DkRjRGJdBSlo7glcDTwG1BrD5e+YiWg5HcAZmaLZL 4i/MqhA4uLbqx0MRjK9BmtQ0ozFLn4lXDC3sqPK7o1+Qj0blP+5N2Rt+vCC36sXmSruC 6mBx/ceCHep/B843zPmrBRf1YD8oNY1zmktLkhv/P9y0XcVs8nAQr3cy2cqX84L/SZld ilQYerI7XyC9ViX0ex6n+JS1kYZDlA8J2XypzxEEItWfvr7NEvX4LfCu0lhjfyck5Zh4 qo7b8KX8a8yqFfe3OSSwgBJ3Io4I0ouFOwT7PwGugfb3X2GTDZLPaMFc8iG9tynLvFkT d0HA== X-Forwarded-Encrypted: i=1; AJvYcCU4qTl5v6MYGJY4VYl5ColKH+oc9/FCOM2iJtkwppnZp7n8ldcXr12P9O72JOZfFFjKXBnpTF/vcWS0iUc=@vger.kernel.org X-Gm-Message-State: AOJu0YxPlPkYbq/4tnx70CkwVCZDH6EsQreWOwuveVvYBPl1F8pXZPxx 4BQ0XL0iq9ZXtmj7GZlkdxisj/OQR34FDTPyJZyrX5dVSlNu8l8bdbDccuYf+0uWMpKo0w== X-Google-Smtp-Source: AGHT+IHeteXGZQA011CNg8cQpzXHdzrwZLVc5nXKBfxk4YN+FprIvtPh1vjqeNwkbz2HLtK+Z+J7QiKW X-Received: from pjbhl3.prod.google.com ([2002:a17:90b:1343:b0:2fc:11a0:c53f]) (user=fvdl job=prod-delivery.src-stubby-dispatcher) by 2002:a17:90b:4c11:b0:2fe:baa3:b8b9 with SMTP id 98e67ed59e1d1-2febab2bdd6mr7106444a91.4.1740767397292; Fri, 28 Feb 2025 10:29:57 -0800 (PST) Date: Fri, 28 Feb 2025 18:29:06 +0000 In-Reply-To: <20250228182928.2645936-1-fvdl@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250228182928.2645936-1-fvdl@google.com> X-Mailer: git-send-email 2.48.1.711.g2feabab25a-goog Message-ID: <20250228182928.2645936-6-fvdl@google.com> Subject: [PATCH v5 05/27] mm/hugetlb: remove redundant __ClearPageReserved From: Frank van der Linden To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com, roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com, Frank van der Linden , Oscar Salvador Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" In hugetlb_folio_init_tail_vmemmap, the reserved flag is cleared for the tail page just before it is zeroed out, which is redundant. Remove the __ClearPageReserved call. 
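A minimal standalone sketch of why the call is dead code (plain C, not kernel code; the flag and init function are stand-ins for PG_reserved and __init_single_page):

#include <assert.h>

#define FLAG_RESERVED (1u << 0)

struct fake_page { unsigned int flags; };

static void init_single(struct fake_page *p)
{
	p->flags = 0;	/* full re-init clobbers any earlier flag change */
}

int main(void)
{
	struct fake_page a = { .flags = FLAG_RESERVED };
	struct fake_page b = { .flags = FLAG_RESERVED };

	a.flags &= ~FLAG_RESERVED;	/* the redundant clear */
	init_single(&a);
	init_single(&b);		/* same end state without it */
	assert(a.flags == b.flags);
	return 0;
}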
Reviewed-by: Oscar Salvador Signed-off-by: Frank van der Linden --- mm/hugetlb.c | 1 - 1 file changed, 1 deletion(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index fadfacf56066..d6d7ebc75b86 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -3198,7 +3198,6 @@ static void __init hugetlb_folio_init_tail_vmemmap(st= ruct folio *folio, for (pfn =3D head_pfn + start_page_number; pfn < end_pfn; pfn++) { struct page *page =3D pfn_to_page(pfn); =20 - __ClearPageReserved(folio_page(folio, pfn - head_pfn)); __init_single_page(page, pfn, zone, nid); prep_compound_tail((struct page *)folio, pfn - head_pfn); ret =3D page_ref_freeze(page, 1); --=20 2.48.1.711.g2feabab25a-goog From nobody Fri Dec 19 16:07:21 2025 Received: from mail-pj1-f73.google.com (mail-pj1-f73.google.com [209.85.216.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C7CAA27FE7F for ; Fri, 28 Feb 2025 18:29:59 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740767401; cv=none; b=dgX9AU5JjmqjrGGQEQ/LBAQtNd0TVUgF6+1/pSazQ+PjGdii6SaFeq2lCF2tU66nuhTSrKR8cf8PjLHoerMLkeMUDtZkkO157qL1GNXAS5dzFmGpEOvqvQQwAdfsaZiZzWIHkBk5FOMkdul9jBdbJbJAmKBMk4l12vaVDZUmovw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740767401; c=relaxed/simple; bh=uXyYOXDU0aeoS4KbS7TXHQWHQaLj/wkyH4OQB/o5oDA=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=VHLQ2BFQGUDvettxR8M5zwH9cysR1xMstEzcm1ylMBWYFvF3OFY4trAsqMtdVb9BrmF6H5H/UaaExED/SBoUQLjWoHUNT/Bi/aJbCTGNDTYQRFa7IGE9ezhdyhH83CftpmTHi5q61PKBfPDfA5PEpMJnCQG1FBH1JGZoARGAURA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--fvdl.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=rDSlDzTH; arc=none smtp.client-ip=209.85.216.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--fvdl.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="rDSlDzTH" Received: by mail-pj1-f73.google.com with SMTP id 98e67ed59e1d1-2feb8d29740so2891793a91.1 for ; Fri, 28 Feb 2025 10:29:59 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1740767399; x=1741372199; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=OPcdQrMWcNv7Zau8OEpT7S16MfDg/G7TA8+N5ucsG+E=; b=rDSlDzTHiXd6EMUONnN34Uy0gllSgpx9iZFcAbgZ4ofKQyFpRtevOnUrIbNw9x+oGS 0QCornZaRPRxBmRoM9ZiWWE51YZB0ktLgd9DfhUsQTPO+lxqUYxy+tu3Cxqp3ag3tMBo DckBWJGiwt4Sa70OjdVIP+la5m4zpvw3NhQ6quq+c7lsLcH+E62x458H5enxTPYi1LCq IHllIdRjD8PONggtA6e+eSxfkXYqzfSBzVf2q4fL9XXd6qt/OqVK7ewQFhP8tCzQ1NH3 /MBgB0diENryszCQKhjvczoFOYRMKw0KceQk5P5U+ZRDER5Y/sWh3bpdUZloCZkB/mKv +DBg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1740767399; x=1741372199; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=OPcdQrMWcNv7Zau8OEpT7S16MfDg/G7TA8+N5ucsG+E=; 
b=W4VtIZpBEnhKO1cWpvWPuL+IiVDf/Du2n6IuHsdxJ71gFgHLJPZ+nA+ASXT6KDlNyl HmtDw78jPxtmP20PlDou9YkwllM4ARaIKMgOfWYitR1QboWPcku+Meg+X4hGn5cfT7If jgvQWgXpkMnmwc3CpVW5XJOHK8/RyaObjYFDi3HchjAQ659YClaWFlVeHdxrHkd2Edim RtiJzgiipkN+KsbCFiVuvAWGyYGNzfD+ATWayqNwJW3z5+E40VKl55MelqUk0AxatOvt S9Ob1EW410BYYcfWARDZJZCzsp1LnzQ1hKT2ODQOeO6JHGCzCca5+Osmp+ffLDpoMc6K f/eQ== X-Forwarded-Encrypted: i=1; AJvYcCU1JcqDl9hNJlcGGeExTfArQKMd7poLr7RyXagBo+1mRVT/nLy3buMRAPJI1vtc1Tfax3uw1upm5D3BbFg=@vger.kernel.org X-Gm-Message-State: AOJu0Yz1CRculjq5gmgY2kwLJ9sD/BdcfXL2kti9e63gl1hiMavqpvkE fNEnucWJK+I6yVDA4xyLm4Lr6zZeYh/Cu/138rSxHOV7VAOwU6/LKoMRtpk2Dkw7PkBOrw== X-Google-Smtp-Source: AGHT+IEc3NQ9Go14+cTqU/Rv73CdPaJm8EFFQ8SG1wAZVrkhFUHYZ37HYVGqfmxLOr8BdLrKJjaxUlZ8 X-Received: from pjur6.prod.google.com ([2002:a17:90a:d406:b0:2fc:ccfe:368]) (user=fvdl job=prod-delivery.src-stubby-dispatcher) by 2002:a17:90b:2fc7:b0:2ef:31a9:95c6 with SMTP id 98e67ed59e1d1-2febab5bf2fmr7679905a91.14.1740767399073; Fri, 28 Feb 2025 10:29:59 -0800 (PST) Date: Fri, 28 Feb 2025 18:29:07 +0000 In-Reply-To: <20250228182928.2645936-1-fvdl@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250228182928.2645936-1-fvdl@google.com> X-Mailer: git-send-email 2.48.1.711.g2feabab25a-goog Message-ID: <20250228182928.2645936-7-fvdl@google.com> Subject: [PATCH v5 06/27] mm/hugetlb: use online nodes for bootmem allocation From: Frank van der Linden To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com, roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com, Frank van der Linden Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Later commits will move hugetlb bootmem allocation to earlier in init, when N_MEMORY has not yet been set on nodes. Use online nodes instead. At most, this wastes just a few cycles once during boot (and most likely none). 
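The pattern at work, as a standalone sketch (plain C, not kernel code; the node mask and allocator below are made up): round-robin allocation over a node mask only wastes one iteration on a node that cannot satisfy the request, so iterating the online superset instead of N_MEMORY is harmless.

#include <stdio.h>
#include <stdbool.h>

#define NR_NODES 4

static bool node_has_memory[NR_NODES] = { true, false, true, true };

static int alloc_from_node(int node)
{
	return node_has_memory[node] ? node : -1;	/* fake memblock alloc */
}

int main(void)
{
	int next = 0, i, got = -1;

	for (i = 0; i < NR_NODES; i++) {
		int node = (next + i) % NR_NODES;

		got = alloc_from_node(node);
		if (got >= 0) {
			next = (node + 1) % NR_NODES;	/* keep distributing */
			break;
		}
	}
	printf("allocated on node %d, next start node %d\n", got, next);
	return 0;
}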
Signed-off-by: Frank van der Linden --- mm/hugetlb.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index d6d7ebc75b86..0592c076cd36 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -3152,7 +3152,7 @@ int __alloc_bootmem_huge_page(struct hstate *h, int n= id) goto found; } /* allocate from next node when distributing huge pages */ - for_each_node_mask_to_alloc(&h->next_nid_to_alloc, nr_nodes, node, &node_= states[N_MEMORY]) { + for_each_node_mask_to_alloc(&h->next_nid_to_alloc, nr_nodes, node, &node_= states[N_ONLINE]) { m =3D memblock_alloc_try_nid_raw( huge_page_size(h), huge_page_size(h), 0, MEMBLOCK_ALLOC_ACCESSIBLE, node); @@ -4546,8 +4546,8 @@ void __init hugetlb_add_hstate(unsigned int order) for (i =3D 0; i < MAX_NUMNODES; ++i) INIT_LIST_HEAD(&h->hugepage_freelists[i]); INIT_LIST_HEAD(&h->hugepage_activelist); - h->next_nid_to_alloc =3D first_memory_node; - h->next_nid_to_free =3D first_memory_node; + h->next_nid_to_alloc =3D first_online_node; + h->next_nid_to_free =3D first_online_node; snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB", huge_page_size(h)/SZ_1K); =20 --=20 2.48.1.711.g2feabab25a-goog From nobody Fri Dec 19 16:07:21 2025 Received: from mail-pl1-f201.google.com (mail-pl1-f201.google.com [209.85.214.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7A886280A32 for ; Fri, 28 Feb 2025 18:30:01 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740767403; cv=none; b=sLWN61pZtTfQxaqw6N/vHqKnQFWNXopDFsVCpOAbvPWRjzRfbVTtaQVUN9poOXgKsescsdXzNvg2pRGb6eteByA7ioJqqIg/1yNtD5xy3bLIaAtIpRJ9jgk/Qmvf/c7BeQbopo4EG7AlBLoMpYajuLs964o7mv+oNgVw7B4a14U= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740767403; c=relaxed/simple; bh=N1ZEZ5EWBTufz6p6TrAGf/BGfWFVXcbKgteQC44BnMY=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=MDBDQ6OcmaYImruvpogHmbqDI0msDZQND6qhMWZKZkEqomIU3VHqDsi9AnQZmPgAc/jCY09OntsP2BBYm97ciOo91usoWaXr/S+ttBIh2oHf5YwAH7qvCOf27iOM4ABnztjGJS/5ZDmIxBdw35rWYPdGNceL9ESYTnXAm/wLOYg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--fvdl.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=zFtv06DN; arc=none smtp.client-ip=209.85.214.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--fvdl.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="zFtv06DN" Received: by mail-pl1-f201.google.com with SMTP id d9443c01a7336-22326da4c8eso46079235ad.3 for ; Fri, 28 Feb 2025 10:30:01 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1740767401; x=1741372201; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=e4fT3nl83EuvMS99kJHXSYdNG4GrpqXQFI3hJuJeA/Y=; b=zFtv06DNA65JtfHIrkIP81WhpjCJb3hdGwaGhaHzlFBPdDB6aP1wI9lNybsd5rEXui JmQuEgJVqXfWrJbu416yXoAnOvCl6ImnsWFSySkj+G3TwL1spda97eMR7KRNILvsRP4G 
LlVNsxfvFss0gnqB3ZKcTbSO0BVDanS9Z5YhPj/YGunSP64cqwsQi1Yq1/8dYXdpn0Q7 2EFnrZozoLieJkaXpslqsToO0aix5SxwchyJ+lfot/uzvluZNX0Gfp6loTvLgIdPRCRL gGb5MGwD1jrpxgiGzL47rJtxoOWfSSfzyJFrLrLpi7dMZrhtOn8nYR7mzOCN7RJFbLwc oVkw==
Date: Fri, 28 Feb 2025 18:29:08 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id: List-Subscribe: List-Unsubscribe:
Mime-Version: 1.0
References: <20250228182928.2645936-1-fvdl@google.com>
X-Mailer: git-send-email 2.48.1.711.g2feabab25a-goog
Message-ID: <20250228182928.2645936-8-fvdl@google.com>
Subject: [PATCH v5 07/27] mm/hugetlb: convert cmdline parameters from setup to early
From: Frank van der Linden
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com, roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com, Frank van der Linden
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Convert the cmdline parameters (hugepagesz, hugepages, default_hugepagesz and hugetlb_free_vmemmap) to early parameters.

Since parse_early_param might run before MMU setup on some platforms (powerpc), validating the huge page sizes specified on the command line would fail there. So instead, for the hstate-related values, just record them and parse them on demand, from hugetlb_bootmem_alloc.

The allocation of hugetlb bootmem pages is now done in hugetlb_bootmem_alloc, which is called explicitly at the start of mm_core_init(). core_initcall would be too late, as memblock has already been torn down by then.

This change will allow earlier allocation and initialization of bootmem hugetlb pages later on.

No functional change intended.
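The record-now, parse-later pattern this patch introduces, as a standalone sketch (plain C, not kernel code; MAX_ARGS, BUF_SIZE and add_param are stand-ins for HUGE_MAX_CMDLINE_ARGS, hstate_cmdline_buf and hugetlb_add_param):

#include <stdio.h>
#include <string.h>

#define MAX_ARGS 8
#define BUF_SIZE 256

static char buf[BUF_SIZE];
static int buf_used;
static struct { char *val; int (*setup)(char *); } params[MAX_ARGS];
static int nparams;

static int add_param(char *s, int (*setup)(char *))
{
	size_t len = strlen(s) + 1;

	if (nparams >= MAX_ARGS || buf_used + len > BUF_SIZE)
		return -1;
	params[nparams].val = memcpy(buf + buf_used, s, len);
	params[nparams].setup = setup;
	buf_used += len;
	nparams++;
	return 0;
}

static int hugepagesz_setup(char *s)	/* stand-in for the real handler */
{
	printf("now validating and applying hugepagesz=%s\n", s);
	return 0;
}

int main(void)
{
	/* at early_param time: only record the raw strings */
	add_param("1G", hugepagesz_setup);
	add_param("2M", hugepagesz_setup);

	/* later, from hugetlb_parse_params(): replay them in order */
	for (int i = 0; i < nparams; i++)
		params[i].setup(params[i].val);
	return 0;
}

Replaying in recorded order matters, since a hugepages= value pairs up with the hugepagesz= that precedes it.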
Signed-off-by: Frank van der Linden --- .../admin-guide/kernel-parameters.txt | 14 +- include/linux/hugetlb.h | 6 + mm/hugetlb.c | 133 ++++++++++++++---- mm/hugetlb_vmemmap.c | 6 +- mm/mm_init.c | 3 + 5 files changed, 126 insertions(+), 36 deletions(-) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentatio= n/admin-guide/kernel-parameters.txt index fb8752b42ec8..ae21d911d1c7 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -1861,7 +1861,7 @@ hpet_mmap=3D [X86, HPET_MMAP] Allow userspace to mmap HPET registers. Default set by CONFIG_HPET_MMAP_DEFAULT. =20 - hugepages=3D [HW] Number of HugeTLB pages to allocate at boot. + hugepages=3D [HW,EARLY] Number of HugeTLB pages to allocate at boot. If this follows hugepagesz (below), it specifies the number of pages of hugepagesz to be allocated. If this is the first HugeTLB parameter on the command @@ -1873,12 +1873,12 @@ :[,:] =20 hugepagesz=3D - [HW] The size of the HugeTLB pages. This is used in - conjunction with hugepages (above) to allocate huge - pages of a specific size at boot. The pair - hugepagesz=3DX hugepages=3DY can be specified once for - each supported huge page size. Huge page sizes are - architecture dependent. See also + [HW,EARLY] The size of the HugeTLB pages. This is + used in conjunction with hugepages (above) to + allocate huge pages of a specific size at boot. The + pair hugepagesz=3DX hugepages=3DY can be specified once + for each supported huge page size. Huge page sizes + are architecture dependent. See also Documentation/admin-guide/mm/hugetlbpage.rst. Format: size[KMG] =20 diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index ec8c0ccc8f95..9cd7c9dacb88 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -174,6 +174,8 @@ struct address_space *hugetlb_folio_mapping_lock_write(= struct folio *folio); extern int sysctl_hugetlb_shm_group; extern struct list_head huge_boot_pages[MAX_NUMNODES]; =20 +void hugetlb_bootmem_alloc(void); + /* arch callbacks */ =20 #ifndef CONFIG_HIGHPTE @@ -1250,6 +1252,10 @@ static inline bool hugetlbfs_pagecache_present( { return false; } + +static inline void hugetlb_bootmem_alloc(void) +{ +} #endif /* CONFIG_HUGETLB_PAGE */ =20 static inline spinlock_t *huge_pte_lock(struct hstate *h, diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 0592c076cd36..1a200f89e21a 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -40,6 +40,7 @@ #include #include #include +#include =20 #include #include @@ -62,6 +63,24 @@ static unsigned long hugetlb_cma_size __initdata; =20 __initdata struct list_head huge_boot_pages[MAX_NUMNODES]; =20 +/* + * Due to ordering constraints across the init code for various + * architectures, hugetlb hstate cmdline parameters can't simply + * be early_param. early_param might call the setup function + * before valid hugetlb page sizes are determined, leading to + * incorrect rejection of valid hugepagesz=3D options. + * + * So, record the parameters early and consume them whenever the + * init code is ready for them, by calling hugetlb_parse_params(). 
+ */ + +/* one (hugepagesz=3D,hugepages=3D) pair per hstate, one default_hugepages= z */ +#define HUGE_MAX_CMDLINE_ARGS (2 * HUGE_MAX_HSTATE + 1) +struct hugetlb_cmdline { + char *val; + int (*setup)(char *val); +}; + /* for command line parsing */ static struct hstate * __initdata parsed_hstate; static unsigned long __initdata default_hstate_max_huge_pages; @@ -69,6 +88,20 @@ static bool __initdata parsed_valid_hugepagesz =3D true; static bool __initdata parsed_default_hugepagesz; static unsigned int default_hugepages_in_node[MAX_NUMNODES] __initdata; =20 +static char hstate_cmdline_buf[COMMAND_LINE_SIZE] __initdata; +static int hstate_cmdline_index __initdata; +static struct hugetlb_cmdline hugetlb_params[HUGE_MAX_CMDLINE_ARGS] __init= data; +static int hugetlb_param_index __initdata; +static __init int hugetlb_add_param(char *s, int (*setup)(char *val)); +static __init void hugetlb_parse_params(void); + +#define hugetlb_early_param(str, func) \ +static __init int func##args(char *s) \ +{ \ + return hugetlb_add_param(s, func); \ +} \ +early_param(str, func##args) + /* * Protects updates to hugepage_freelists, hugepage_activelist, nr_huge_pa= ges, * free_huge_pages, and surplus_huge_pages. @@ -3484,6 +3517,8 @@ static void __init hugetlb_hstate_alloc_pages(struct = hstate *h) =20 for (i =3D 0; i < MAX_NUMNODES; i++) INIT_LIST_HEAD(&huge_boot_pages[i]); + h->next_nid_to_alloc =3D first_online_node; + h->next_nid_to_free =3D first_online_node; initialized =3D true; } =20 @@ -4546,8 +4581,6 @@ void __init hugetlb_add_hstate(unsigned int order) for (i =3D 0; i < MAX_NUMNODES; ++i) INIT_LIST_HEAD(&h->hugepage_freelists[i]); INIT_LIST_HEAD(&h->hugepage_activelist); - h->next_nid_to_alloc =3D first_online_node; - h->next_nid_to_free =3D first_online_node; snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB", huge_page_size(h)/SZ_1K); =20 @@ -4572,6 +4605,42 @@ static void __init hugepages_clear_pages_in_node(voi= d) } } =20 +static __init int hugetlb_add_param(char *s, int (*setup)(char *)) +{ + size_t len; + char *p; + + if (hugetlb_param_index >=3D HUGE_MAX_CMDLINE_ARGS) + return -EINVAL; + + len =3D strlen(s) + 1; + if (len + hstate_cmdline_index > sizeof(hstate_cmdline_buf)) + return -EINVAL; + + p =3D &hstate_cmdline_buf[hstate_cmdline_index]; + memcpy(p, s, len); + hstate_cmdline_index +=3D len; + + hugetlb_params[hugetlb_param_index].val =3D p; + hugetlb_params[hugetlb_param_index].setup =3D setup; + + hugetlb_param_index++; + + return 0; +} + +static __init void hugetlb_parse_params(void) +{ + int i; + struct hugetlb_cmdline *hcp; + + for (i =3D 0; i < hugetlb_param_index; i++) { + hcp =3D &hugetlb_params[i]; + + hcp->setup(hcp->val); + } +} + /* * hugepages command line processing * hugepages normally follows a valid hugepagsz or default_hugepagsz @@ -4591,7 +4660,7 @@ static int __init hugepages_setup(char *s) if (!parsed_valid_hugepagesz) { pr_warn("HugeTLB: hugepages=3D%s does not follow a valid hugepagesz, ign= oring\n", s); parsed_valid_hugepagesz =3D true; - return 1; + return -EINVAL; } =20 /* @@ -4645,24 +4714,16 @@ static int __init hugepages_setup(char *s) } } =20 - /* - * Global state is always initialized later in hugetlb_init. - * But we need to allocate gigantic hstates here early to still - * use the bootmem allocator. 
- */ - if (hugetlb_max_hstate && hstate_is_gigantic(parsed_hstate)) - hugetlb_hstate_alloc_pages(parsed_hstate); - last_mhp =3D mhp; =20 - return 1; + return 0; =20 invalid: pr_warn("HugeTLB: Invalid hugepages parameter %s\n", p); hugepages_clear_pages_in_node(); - return 1; + return -EINVAL; } -__setup("hugepages=3D", hugepages_setup); +hugetlb_early_param("hugepages", hugepages_setup); =20 /* * hugepagesz command line processing @@ -4681,7 +4742,7 @@ static int __init hugepagesz_setup(char *s) =20 if (!arch_hugetlb_valid_size(size)) { pr_err("HugeTLB: unsupported hugepagesz=3D%s\n", s); - return 1; + return -EINVAL; } =20 h =3D size_to_hstate(size); @@ -4696,7 +4757,7 @@ static int __init hugepagesz_setup(char *s) if (!parsed_default_hugepagesz || h !=3D &default_hstate || default_hstate.max_huge_pages) { pr_warn("HugeTLB: hugepagesz=3D%s specified twice, ignoring\n", s); - return 1; + return -EINVAL; } =20 /* @@ -4706,14 +4767,14 @@ static int __init hugepagesz_setup(char *s) */ parsed_hstate =3D h; parsed_valid_hugepagesz =3D true; - return 1; + return 0; } =20 hugetlb_add_hstate(ilog2(size) - PAGE_SHIFT); parsed_valid_hugepagesz =3D true; - return 1; + return 0; } -__setup("hugepagesz=3D", hugepagesz_setup); +hugetlb_early_param("hugepagesz", hugepagesz_setup); =20 /* * default_hugepagesz command line input @@ -4727,14 +4788,14 @@ static int __init default_hugepagesz_setup(char *s) parsed_valid_hugepagesz =3D false; if (parsed_default_hugepagesz) { pr_err("HugeTLB: default_hugepagesz previously specified, ignoring %s\n"= , s); - return 1; + return -EINVAL; } =20 size =3D (unsigned long)memparse(s, NULL); =20 if (!arch_hugetlb_valid_size(size)) { pr_err("HugeTLB: unsupported default_hugepagesz=3D%s\n", s); - return 1; + return -EINVAL; } =20 hugetlb_add_hstate(ilog2(size) - PAGE_SHIFT); @@ -4751,17 +4812,33 @@ static int __init default_hugepagesz_setup(char *s) */ if (default_hstate_max_huge_pages) { default_hstate.max_huge_pages =3D default_hstate_max_huge_pages; - for_each_online_node(i) - default_hstate.max_huge_pages_node[i] =3D - default_hugepages_in_node[i]; - if (hstate_is_gigantic(&default_hstate)) - hugetlb_hstate_alloc_pages(&default_hstate); + /* + * Since this is an early parameter, we can't check + * NUMA node state yet, so loop through MAX_NUMNODES. 
+ */ + for (i =3D 0; i < MAX_NUMNODES; i++) { + if (default_hugepages_in_node[i] !=3D 0) + default_hstate.max_huge_pages_node[i] =3D + default_hugepages_in_node[i]; + } default_hstate_max_huge_pages =3D 0; } =20 - return 1; + return 0; +} +hugetlb_early_param("default_hugepagesz", default_hugepagesz_setup); + +void __init hugetlb_bootmem_alloc(void) +{ + struct hstate *h; + + hugetlb_parse_params(); + + for_each_hstate(h) { + if (hstate_is_gigantic(h)) + hugetlb_hstate_alloc_pages(h); + } } -__setup("default_hugepagesz=3D", default_hugepagesz_setup); =20 static unsigned int allowed_mems_nr(struct hstate *h) { diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c index 7735972add01..5b484758f813 100644 --- a/mm/hugetlb_vmemmap.c +++ b/mm/hugetlb_vmemmap.c @@ -444,7 +444,11 @@ DEFINE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key); EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key); =20 static bool vmemmap_optimize_enabled =3D IS_ENABLED(CONFIG_HUGETLB_PAGE_OP= TIMIZE_VMEMMAP_DEFAULT_ON); -core_param(hugetlb_free_vmemmap, vmemmap_optimize_enabled, bool, 0); +static int __init hugetlb_vmemmap_optimize_param(char *buf) +{ + return kstrtobool(buf, &vmemmap_optimize_enabled); +} +early_param("hugetlb_free_vmemmap", hugetlb_vmemmap_optimize_param); =20 static int __hugetlb_vmemmap_restore_folio(const struct hstate *h, struct folio *folio, unsigned long flags) diff --git a/mm/mm_init.c b/mm/mm_init.c index 2630cc30147e..d2dee53e95dd 100644 --- a/mm/mm_init.c +++ b/mm/mm_init.c @@ -30,6 +30,7 @@ #include #include #include +#include #include "internal.h" #include "slab.h" #include "shuffle.h" @@ -2641,6 +2642,8 @@ static void __init mem_init_print_info(void) */ void __init mm_core_init(void) { + hugetlb_bootmem_alloc(); + /* Initializations relying on SMP setup */ BUILD_BUG_ON(MAX_ZONELISTS > 2); build_all_zonelists(NULL); --=20 2.48.1.711.g2feabab25a-goog From nobody Fri Dec 19 16:07:21 2025 Received: from mail-pj1-f74.google.com (mail-pj1-f74.google.com [209.85.216.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 30002281346 for ; Fri, 28 Feb 2025 18:30:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740767404; cv=none; b=eWZBRvI2Nx+OyP8GALcAP3f9Ew7RPZpLlBcs31TLzdFXLuXy/uv1TEn/EnFuCBcl9DBmKdO3Xq3YVXa/Rf7FwFhBwvVh3Or6RQuEmFVMJ8zM1yDX9kVuoLqUa+9e/pXYEL5SPp9/Jkrwum50SVW9x/gPMdltmUY5mo+3pebH+90= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740767404; c=relaxed/simple; bh=yt9jreh7p8FgaxmwISqk7Ueg8wiW2YUV6rWrFsx8v5k=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=QsST7YXpPVSBBD2HLwbt00kJ0WI45C/9f4Ek8Y+DIagODpkOIJ8jnJwvDCxIHn54NLKM3Am5RsA1556apy5R7QrLLh1DgVC+OkwANh4coaFd/IiFI138zzK/qsQhG6f8RHXAsQmjtLu8rsDrWCfs1G8caE680KimWrFvIogeCQs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--fvdl.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=k552tydf; arc=none smtp.client-ip=209.85.216.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--fvdl.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit 
key) header.d=google.com header.i=@google.com header.b="k552tydf" Received: by mail-pj1-f74.google.com with SMTP id 98e67ed59e1d1-2fe86c01e2bso5034775a91.2 for ; Fri, 28 Feb 2025 10:30:02 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1740767402; x=1741372202; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=vT9V+Abq/i2ri7sXnmoXq0qIAuHJN34FJxNuQkKtmyk=; b=k552tydfKm6b4JP9FkohVsAPAKJBfgN83rCMBPhFycCoVbw8Pg4nwljZbbYoHyqdoK DWnfcEE5OWSdC2TD0lufrYfj+ZGUn97f7CTrkaSIBK3cduh+k/FTVJRg0+8v6A+sqi9k yr0vVpSd2I19vmZxZMud8Yb3YvqFShtHXhso0jbyqaP0dpB8C76WF4r9ZHGls0N8XSEV NHdHiYeNDD+hRBeolWOk1eJKEMGn2IkpaBPidTBki0kc6zdCt3KFmCBdcBHWRvEbCGMj OUUzmktFbhbO+90xfy372LBH+yNoPm9bjswq4Fu4RwKsYOPJpe5vwbRWg2Pn0BKWm2Iu rmow== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1740767402; x=1741372202; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=vT9V+Abq/i2ri7sXnmoXq0qIAuHJN34FJxNuQkKtmyk=; b=uDLEs9kbRYgg3EAvMjOmmTjlB++qLOlvKXV6+yShLMgXBNEfz3zjKZQb5KnC9y+wSu k8nEvq5xYK35f3vVu718dVwF6WQlBNxLsibRqSrfnXpfOCS7Pwmqo/O8C1F8WY7BoIar Mr0NibDg6v6ebUeQtneg/rFJW6EphGiLGIsuS6K4nJN/sNZ7mmYnHHOIAVsTnJUxMHcJ pe9vN9VdPRAevT6DPfrXPkZFVcA4xRZi3SEv6A+81M4qoV2RG5y8JzXwMD0PCDyNapbM +XyDtRUIoF/GXa7j3djlrLTIXV/reeh+gnGW53HmHcvrMiy3em+trlQkQbp1tW5ZP5fY 2i6Q== X-Forwarded-Encrypted: i=1; AJvYcCVT74eUSoFleGTkJ8/s4o4fFilFR8OD+dgrbLDmehiDTldv2mYt10uS6KN8sqUVz+s0Izps4gHRgVA7IMU=@vger.kernel.org X-Gm-Message-State: AOJu0YzdYQQ006d2LZ3gcPbK19lV7Lg/qfcVddGKILUe9qxMvdRY4Dt0 /PE02DT+kfz8fpRVc4VR9BwxD74uLp/o/Rm5u1mGUqsi1NADc/fC00/i0v9VX0SFexe20w== X-Google-Smtp-Source: AGHT+IEJZtJlg1nf2iF5sruFhlqBO6bDCuDixMIcx2WJKx58Jky48PZHl6y7MGnhwLHWadCWonhMeC6V X-Received: from pjboh15.prod.google.com ([2002:a17:90b:3a4f:b0:2ea:5be5:da6]) (user=fvdl job=prod-delivery.src-stubby-dispatcher) by 2002:a17:90b:2247:b0:2fe:84d6:cdf9 with SMTP id 98e67ed59e1d1-2febabf1a73mr6371733a91.26.1740767402413; Fri, 28 Feb 2025 10:30:02 -0800 (PST) Date: Fri, 28 Feb 2025 18:29:09 +0000 In-Reply-To: <20250228182928.2645936-1-fvdl@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250228182928.2645936-1-fvdl@google.com> X-Mailer: git-send-email 2.48.1.711.g2feabab25a-goog Message-ID: <20250228182928.2645936-9-fvdl@google.com> Subject: [PATCH v5 08/27] x86/mm: make register_page_bootmem_memmap handle PTE mappings From: Frank van der Linden To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com, roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com, Frank van der Linden , Dave Hansen , Andy Lutomirski , Peter Zijlstra , Dan Carpenter Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" register_page_bootmem_memmap expects that vmemmap pages handed to it are PMD-mapped, and that the number of pages to call get_page_bootmem on is PMD-aligned. This is currently a correct assumption, but will no longer be true once pre-HVO of hugetlb pages is implemented. Make it handle PTE-mapped vmemmap pages and a nr_pages argument that is not necessarily PAGES_PER_SECTION. 
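The address arithmetic, as a standalone sketch (plain C, not kernel code; the addresses are made up): the page count is derived from the actual [addr, next) range instead of assuming a fully populated PMD.

#include <stdio.h>

#define PAGE_SHIFT	12
#define PMD_SIZE	(1UL << 21)
#define PMD_MASK	(~(PMD_SIZE - 1))

static unsigned long pmd_addr_end(unsigned long addr, unsigned long end)
{
	unsigned long boundary = (addr + PMD_SIZE) & PMD_MASK;

	return boundary < end ? boundary : end;
}

int main(void)
{
	unsigned long addr = 0xffffea0000001000UL;
	unsigned long end = addr + (5UL << PAGE_SHIFT);
	unsigned long next = pmd_addr_end(addr, end);

	/* 5 pages here, not 1 << get_order(PMD_SIZE) == 512 */
	printf("nr_pages = %lu\n", (next - addr) >> PAGE_SHIFT);
	return 0;
}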
Cc: Dave Hansen Cc: Andy Lutomirski Cc: Peter Zijlstra Cc: Dan Carpenter Signed-off-by: Frank van der Linden --- arch/x86/mm/init_64.c | 18 ++++++++---------- 1 file changed, 8 insertions(+), 10 deletions(-) diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index 01ea7c6df303..6e8e4ef5312a 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -1599,11 +1599,14 @@ void register_page_bootmem_memmap(unsigned long sec= tion_nr, } get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO); =20 - if (!boot_cpu_has(X86_FEATURE_PSE)) { + pmd =3D pmd_offset(pud, addr); + if (pmd_none(*pmd)) { + next =3D (addr + PAGE_SIZE) & PAGE_MASK; + continue; + } + + if (!boot_cpu_has(X86_FEATURE_PSE) || !pmd_leaf(*pmd)) { next =3D (addr + PAGE_SIZE) & PAGE_MASK; - pmd =3D pmd_offset(pud, addr); - if (pmd_none(*pmd)) - continue; get_page_bootmem(section_nr, pmd_page(*pmd), MIX_SECTION_INFO); =20 @@ -1614,12 +1617,7 @@ void register_page_bootmem_memmap(unsigned long sect= ion_nr, SECTION_INFO); } else { next =3D pmd_addr_end(addr, end); - - pmd =3D pmd_offset(pud, addr); - if (pmd_none(*pmd)) - continue; - - nr_pmd_pages =3D 1 << get_order(PMD_SIZE); + nr_pmd_pages =3D (next - addr) >> PAGE_SHIFT; page =3D pmd_page(*pmd); while (nr_pmd_pages--) get_page_bootmem(section_nr, page++, --=20 2.48.1.711.g2feabab25a-goog From nobody Fri Dec 19 16:07:21 2025 Received: from mail-pj1-f73.google.com (mail-pj1-f73.google.com [209.85.216.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B9D5827603F for ; Fri, 28 Feb 2025 18:30:04 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740767406; cv=none; b=GoLc0HS6duPy9/KJ68+rzka9OYj0o1WmIpTnfSoneEgSI01+MvB9ttorJ8NExhM6RZBUWvUTTlXqFvFmaASax/2LaXUzI94vEgeQ7egA4MhH1S19uZRnuhGNNIo9PfgVxp7ShtlOLtDGZ0d7yip7jjwskzC3pkjHjzEovlPL35k= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740767406; c=relaxed/simple; bh=UUAz05wEFnezIXvRTahNblvNZ7OUEt19we6LGHXdN2k=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=KB7ynh3Vs/K3Th6IoO+3BqktUBmtkXKggUmrmxKF+CGiTgMiq9m5j3JsPP8nippjH0ASBkWAgLJECjWsCadboi5a/MHL0wVFpE2X+mMvt746yu+rdyuYpw5f5u3vkYYS0JwmzrquftffR83AhbXTizV1DHl68fZdre85byiQesk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--fvdl.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=TL1Jw8uk; arc=none smtp.client-ip=209.85.216.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--fvdl.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="TL1Jw8uk" Received: by mail-pj1-f73.google.com with SMTP id 98e67ed59e1d1-2fed20dd70cso427076a91.1 for ; Fri, 28 Feb 2025 10:30:04 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1740767404; x=1741372204; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=hj109oq4/9DdjGz6HODDxStLMga0lyJyfx71i/zQohU=; 
b=TL1Jw8ukHtw5GPd/ypR7bb1Ysp6vZqwQEazJoMFbT/7vq+eJ9JIYZ5ixTsjUwYMSXW XorFmq6nFgbIiqye4lBHfm5vOT+Vzswx5Roqi64Qlnw9vp0UGXuMhF2aPppkV1zqs6+N 8QCjtB9rLIC221sPCa5DQM9Sqz5wCe6BDjSwXl/+waFLkDf87vBlDyOIlfmptBL5bsG7 8n2KRD6wHR4IWHMLZHT1qs1xmWKwNtn1WnXEW32K7kCEO6x9J9w+FZ+gKcNlYYHU7s7y Iw0QEQJJA27raqmhint2jPVXbxKd7tHGOvz5dIX9/RCtR328qJiOWn5Dz9vL5JiYG3vD yAFA==
Date: Fri, 28 Feb 2025 18:29:10 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id: List-Subscribe: List-Unsubscribe:
Mime-Version: 1.0
References: <20250228182928.2645936-1-fvdl@google.com>
X-Mailer: git-send-email 2.48.1.711.g2feabab25a-goog
Message-ID: <20250228182928.2645936-10-fvdl@google.com>
Subject: [PATCH v5 10/27] mm/bootmem_info: export register_page_bootmem_memmap
From: Frank van der Linden
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com, roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com, Frank van der Linden
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

If other mm code wants to use this function for early memmap initialization (on the platforms that have it), it should be made available properly, not just unconditionally in mm.h.

Make this function available for such cases.
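The header pattern being applied, as a compilable standalone sketch (FEATURE_X and feature_hook are hypothetical stand-ins for CONFIG_HAVE_BOOTMEM_INFO_NODE and register_page_bootmem_memmap):

#include <stdio.h>

#define FEATURE_X 1	/* flip to 0 to get the stub */

#if FEATURE_X
static void feature_hook(int arg)
{
	printf("feature hook: %d\n", arg);	/* real implementation */
}
#else
static inline void feature_hook(int arg)
{
	(void)arg;	/* empty stub, compiles away when the feature is off */
}
#endif

int main(void)
{
	feature_hook(42);	/* call sites stay free of #ifdefs either way */
	return 0;
}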
Signed-off-by: Frank van der Linden --- arch/powerpc/mm/init_64.c | 4 ++++ include/linux/bootmem_info.h | 7 +++++++ include/linux/mm.h | 3 --- 3 files changed, 11 insertions(+), 3 deletions(-) diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c index d96bbc001e73..b6f3ae03ca9e 100644 --- a/arch/powerpc/mm/init_64.c +++ b/arch/powerpc/mm/init_64.c @@ -41,6 +41,7 @@ #include #include #include +#include =20 #include #include @@ -386,10 +387,13 @@ void __ref vmemmap_free(unsigned long start, unsigned= long end, } =20 #endif + +#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE void register_page_bootmem_memmap(unsigned long section_nr, struct page *start_page, unsigned long size) { } +#endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */ =20 #endif /* CONFIG_SPARSEMEM_VMEMMAP */ =20 diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h index d8a8d245824a..4c506e76a808 100644 --- a/include/linux/bootmem_info.h +++ b/include/linux/bootmem_info.h @@ -18,6 +18,8 @@ enum bootmem_type { =20 #ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE void __init register_page_bootmem_info_node(struct pglist_data *pgdat); +void register_page_bootmem_memmap(unsigned long section_nr, struct page *m= ap, + unsigned long nr_pages); =20 void get_page_bootmem(unsigned long info, struct page *page, enum bootmem_type type); @@ -58,6 +60,11 @@ static inline void register_page_bootmem_info_node(struc= t pglist_data *pgdat) { } =20 +static inline void register_page_bootmem_memmap(unsigned long section_nr, + struct page *map, unsigned long nr_pages) +{ +} + static inline void put_page_bootmem(struct page *page) { } diff --git a/include/linux/mm.h b/include/linux/mm.h index 7b1068ddcbb7..6dfc41b461af 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -3918,9 +3918,6 @@ static inline bool vmemmap_can_optimize(struct vmem_a= ltmap *altmap, } #endif =20 -void register_page_bootmem_memmap(unsigned long section_nr, struct page *m= ap, - unsigned long nr_pages); - enum mf_flags { MF_COUNT_INCREASED =3D 1 << 0, MF_ACTION_REQUIRED =3D 1 << 1, --=20 2.48.1.711.g2feabab25a-goog From nobody Fri Dec 19 16:07:21 2025 Received: from mail-pl1-f201.google.com (mail-pl1-f201.google.com [209.85.214.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 81848286281 for ; Fri, 28 Feb 2025 18:30:06 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740767408; cv=none; b=cykEfiqMbRGoS7CXGnuRpBYVUNx3IjhNM8STjE9cdU/Os8I7HZfcun0KiPRNQ0cjQE5ib+1MeikJc1cmnnzXetuHGyIDlUJDR49RqaxL4Xv2MX3KUf5tE5XNWaxXAIgKElM/AWHHcKaZE/zfMmkkzRgoISmwxREMfCH4XJ7iapI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740767408; c=relaxed/simple; bh=z2YJ8XPL0CunM2YDzmDunUEMZK4goqbkwAqueIhRbFw=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=MklSBykCvKEyJSLaSlh3pUkVuk2h0Bx+LTIR8y7kU5FHYAnEsw1cECD2uyhRRhxZXR8OJ4J6m2vpGAzj6Bn57K7md/JCBWSA+68qa+rFFIfq4PjrpR7AM+wyWEpgkGIjxUf+sXXACGggvhe1iy+LJYh/ZBohMhlv6R4DB8SCON8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--fvdl.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=LqKC8vig; arc=none smtp.client-ip=209.85.214.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject 
dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--fvdl.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="LqKC8vig" Received: by mail-pl1-f201.google.com with SMTP id d9443c01a7336-2217a4bfcc7so42740025ad.3 for ; Fri, 28 Feb 2025 10:30:06 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1740767406; x=1741372206; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=ra4AUllv/zBr2Fkht24FlEHhA3ebueh3mxdRv0OhSAk=; b=LqKC8vigdZ++yZkqamJYR0YiaQd8yHnUPueJ/dwAy1KE0t9Y38wdUULVbT4JijUqvg K1x5J8LBx/RXr/i/zaRqBazGMx52UYgHx6XQdf0c/MTTUA7mrPh5NwmWXAC18Zc6xjAn SXQk30MkL0oCxJWbebttYgxPMItIySPu6tvMLWKPqCpDwRzGgTKO7OAP2iQgZSs4mdMt evVYW3RlKpsmCIvK7J0ndduyrOY/2nR6SSB4GMCGobIOkwBGlIL5ut11V4f/0D5JjKab uyQaxYq6uISY1vhrsPGomUU5aIwoJTfTUz5pQCCK+m7ASS4sitsyXwq4Kqhxb+U7rI+V dV1w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1740767406; x=1741372206; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=ra4AUllv/zBr2Fkht24FlEHhA3ebueh3mxdRv0OhSAk=; b=oPaPO6whv+yQbuR2fXvUpNfHbdTB08V1PgNvAVg7F4W/F9IS4l/eGx0zMu6PJSdrg6 E+lUlTyg4u1Tc30w7H4g1TfRi9uzrxjUnukykiuoAXKswi/e6KaQRf8RurSuQiAFwVhD 2xIa6RRa9r5v2SVKXQb3SNi9v1D3kyhi+rBtxYZrqwLXyH3E0+F5gfo7bbVmk7cOlJiC /DmlEpcQcAf+gMLU+Htq4tg8LuikS2ekwThmcMWVhH3Fg7owS6AMb4vmTSIlTjPKn6Ym Ax1Rk9/7QcimNiWrBBvjbV0sNTBSmzoJ44vGsKJy2vkbBHXzv5gRYTYaeSqaTSmk1sTE Ltww== X-Forwarded-Encrypted: i=1; AJvYcCXb3QhVLDGptjiHHfD1Olnf9qw74YVRfItCGoQUdhDBnOQm2o8RN6EdqLD8Zv878HUQR50BEfyAw4B1pUY=@vger.kernel.org X-Gm-Message-State: AOJu0YzqdWA3TtLwd5KWPB54ZN2yK1xDpm4d8m6Cf3g4D82dsgiEQTOX JkzK5RjnzdP2kPNOBcPKwjCeQ0r9jXjVq0Ebg5JHohupebpwwf/UKn/TAUXa4nSWOlw0PA== X-Google-Smtp-Source: AGHT+IEwo5zR6/BSpe744ofWatyxcchtZST/W5Hz30OslVcHVriswK0uaLJEww6tSZ4q1o2zKNWmWI7r X-Received: from pljs12.prod.google.com ([2002:a17:903:3bac:b0:223:52c5:17f6]) (user=fvdl job=prod-delivery.src-stubby-dispatcher) by 2002:a17:902:e751:b0:216:393b:23d4 with SMTP id d9443c01a7336-22368f6a3bcmr73705915ad.11.1740767405647; Fri, 28 Feb 2025 10:30:05 -0800 (PST) Date: Fri, 28 Feb 2025 18:29:11 +0000 In-Reply-To: <20250228182928.2645936-1-fvdl@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250228182928.2645936-1-fvdl@google.com> X-Mailer: git-send-email 2.48.1.711.g2feabab25a-goog Message-ID: <20250228182928.2645936-11-fvdl@google.com> Subject: [PATCH v5 10/27] mm/sparse: allow for alternate vmemmap section init at boot From: Frank van der Linden To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com, roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com, Frank van der Linden , Johannes Weiner Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Add functions that are called just before the per-section memmap is initialized and just before the memmap page structures are initialized. They are called sparse_vmemmap_init_nid_early and sparse_vmemmap_init_nid_late, respectively. 
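The intended call ordering, as a standalone sketch (plain C, not kernel code; the print statements stand in for the real work):

#include <stdio.h>

static void sparse_vmemmap_init_nid_early(int nid)
{
	printf("node %d: pre-init selected sections\n", nid);
}

static void sparse_vmemmap_init_nid_late(int nid)
{
	printf("node %d: zones are up, finish special memmaps\n", nid);
}

int main(void)
{
	int nid = 0;

	sparse_vmemmap_init_nid_early(nid);	/* from sparse_init_nid() */
	printf("node %d: generic section init, skipping pre-inited sections\n", nid);
	sparse_vmemmap_init_nid_late(nid);	/* from free_area_init() */
	printf("node %d: memmap_init()\n", nid);
	return 0;
}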
This allows for mm subsystems to add calls to initialize memmap and page structures in a specific way, if using SPARSEMEM_VMEMMAP. Specifically, hugetlb can pre-HVO bootmem allocated pages that way, so that no time and resources are wasted on allocating vmemmap pages, only to free them later (and possibly unnecessarily running the system out of memory in the process). Refactor some code and export a few convenience functions for external use. In sparse_init_nid, skip any sections that are already initialized, e.g. they have been initialized by sparse_vmemmap_init_nid_early already. The hugetlb code to use these functions will be added in a later commit. Export section_map_size, as any alternate memmap init code will want to use it. The internal config option to enable this is SPARSEMEM_VMEMMAP_PREINIT, which is selected if an architecture-specific option, ARCH_WANT_HUGETLB_VMEMMAP_PREINIT, is set. In the future, if other subsystems want to do preinit too, they can do it in a similar fashion. The internal config option is there because a section flag is used, and the number of flags available is architecture-dependent (see mmzone.h). Architecures can decide if there is room for the flag when enabling options that select SPARSEMEM_VMEMMAP_PREINIT. Fortunately, as of right now, all sparse vmemmap using architectures do have room. Cc: Johannes Weiner Signed-off-by: Frank van der Linden --- fs/Kconfig | 1 + include/linux/mm.h | 1 + include/linux/mmzone.h | 35 +++++++++++++++++ mm/Kconfig | 6 +++ mm/bootmem_info.c | 4 +- mm/mm_init.c | 3 ++ mm/sparse-vmemmap.c | 23 +++++++++++ mm/sparse.c | 87 ++++++++++++++++++++++++++++++++---------- 8 files changed, 138 insertions(+), 22 deletions(-) diff --git a/fs/Kconfig b/fs/Kconfig index 64d420e3c475..8bcd3a6f80ab 100644 --- a/fs/Kconfig +++ b/fs/Kconfig @@ -286,6 +286,7 @@ config HUGETLB_PAGE_OPTIMIZE_VMEMMAP def_bool HUGETLB_PAGE depends on ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP depends on SPARSEMEM_VMEMMAP + select SPARSEMEM_VMEMMAP_PREINIT if ARCH_WANT_HUGETLB_VMEMMAP_PREINIT =20 config HUGETLB_PMD_PAGE_TABLE_SHARING def_bool HUGETLB_PAGE diff --git a/include/linux/mm.h b/include/linux/mm.h index 6dfc41b461af..df83653ed6e3 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -3828,6 +3828,7 @@ static inline void print_vma_addr(char *prefix, unsig= ned long rip) #endif =20 void *sparse_buffer_alloc(unsigned long size); +unsigned long section_map_size(void); struct page * __populate_section_memmap(unsigned long pfn, unsigned long nr_pages, int nid, struct vmem_altmap *altmap, struct dev_pagemap *pgmap); diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 9540b41894da..44ecb2f90db4 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -1933,6 +1933,9 @@ enum { SECTION_IS_EARLY_BIT, #ifdef CONFIG_ZONE_DEVICE SECTION_TAINT_ZONE_DEVICE_BIT, +#endif +#ifdef CONFIG_SPARSEMEM_VMEMMAP_PREINIT + SECTION_IS_VMEMMAP_PREINIT_BIT, #endif SECTION_MAP_LAST_BIT, }; @@ -1944,6 +1947,9 @@ enum { #ifdef CONFIG_ZONE_DEVICE #define SECTION_TAINT_ZONE_DEVICE BIT(SECTION_TAINT_ZONE_DEVICE_BIT) #endif +#ifdef CONFIG_SPARSEMEM_VMEMMAP_PREINIT +#define SECTION_IS_VMEMMAP_PREINIT BIT(SECTION_IS_VMEMMAP_PREINIT_BIT) +#endif #define SECTION_MAP_MASK (~(BIT(SECTION_MAP_LAST_BIT) - 1)) #define SECTION_NID_SHIFT SECTION_MAP_LAST_BIT =20 @@ -1998,6 +2004,30 @@ static inline int online_device_section(struct mem_s= ection *section) } #endif =20 +#ifdef CONFIG_SPARSEMEM_VMEMMAP_PREINIT +static inline int preinited_vmemmap_section(struct mem_section 
*section) +{ + return (section && + (section->section_mem_map & SECTION_IS_VMEMMAP_PREINIT)); +} + +void sparse_vmemmap_init_nid_early(int nid); +void sparse_vmemmap_init_nid_late(int nid); + +#else +static inline int preinited_vmemmap_section(struct mem_section *section) +{ + return 0; +} +static inline void sparse_vmemmap_init_nid_early(int nid) +{ +} + +static inline void sparse_vmemmap_init_nid_late(int nid) +{ +} +#endif + static inline int online_section_nr(unsigned long nr) { return online_section(__nr_to_section(nr)); @@ -2035,6 +2065,9 @@ static inline int pfn_section_valid(struct mem_sectio= n *ms, unsigned long pfn) } #endif =20 +void sparse_init_early_section(int nid, struct page *map, unsigned long pn= um, + unsigned long flags); + #ifndef CONFIG_HAVE_ARCH_PFN_VALID /** * pfn_valid - check if there is a valid memory map entry for a PFN @@ -2116,6 +2149,8 @@ void sparse_init(void); #else #define sparse_init() do {} while (0) #define sparse_index_init(_sec, _nid) do {} while (0) +#define sparse_vmemmap_init_nid_early(_nid, _use) do {} while (0) +#define sparse_vmemmap_init_nid_late(_nid) do {} while (0) #define pfn_in_present_section pfn_valid #define subsection_map_init(_pfn, _nr_pages) do {} while (0) #endif /* CONFIG_SPARSEMEM */ diff --git a/mm/Kconfig b/mm/Kconfig index 1b501db06417..0837f989a2dc 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -489,6 +489,9 @@ config SPARSEMEM_VMEMMAP SPARSEMEM_VMEMMAP uses a virtually mapped memmap to optimise pfn_to_page and page_to_pfn operations. This is the most efficient option when sufficient kernel resources are available. + +config SPARSEMEM_VMEMMAP_PREINIT + bool # # Select this config option from the architecture Kconfig, if it is prefer= red # to enable the feature of HugeTLB/dev_dax vmemmap optimization. @@ -499,6 +502,9 @@ config ARCH_WANT_OPTIMIZE_DAX_VMEMMAP config ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP bool =20 +config ARCH_WANT_HUGETLB_VMEMMAP_PREINIT + bool + config HAVE_MEMBLOCK_PHYS_MAP bool =20 diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c index 95f288169a38..b0e2a9fa641f 100644 --- a/mm/bootmem_info.c +++ b/mm/bootmem_info.c @@ -88,7 +88,9 @@ static void __init register_page_bootmem_info_section(uns= igned long start_pfn) =20 memmap =3D sparse_decode_mem_map(ms->section_mem_map, section_nr); =20 - register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION); + if (!preinited_vmemmap_section(ms)) + register_page_bootmem_memmap(section_nr, memmap, + PAGES_PER_SECTION); =20 usage =3D ms->usage; page =3D virt_to_page(usage); diff --git a/mm/mm_init.c b/mm/mm_init.c index d2dee53e95dd..9f1e41c3dde6 100644 --- a/mm/mm_init.c +++ b/mm/mm_init.c @@ -1862,6 +1862,9 @@ void __init free_area_init(unsigned long *max_zone_pf= n) } } =20 + for_each_node_state(nid, N_MEMORY) + sparse_vmemmap_init_nid_late(nid); + calc_nr_kernel_pages(); memmap_init(); =20 diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c index 3287ebadd167..8751c46c35e4 100644 --- a/mm/sparse-vmemmap.c +++ b/mm/sparse-vmemmap.c @@ -470,3 +470,26 @@ struct page * __meminit __populate_section_memmap(unsi= gned long pfn, =20 return pfn_to_page(pfn); } + +#ifdef CONFIG_SPARSEMEM_VMEMMAP_PREINIT +/* + * This is called just before initializing sections for a NUMA node. + * Any special initialization that needs to be done before the + * generic initialization can be done from here. Sections that + * are initialized in hooks called from here will be skipped by + * the generic initialization. 
+ */ +void __init sparse_vmemmap_init_nid_early(int nid) +{ +} + +/* + * This is called just before the initialization of page structures + * through memmap_init. Zones are now initialized, so any work that + * needs to be done that needs zone information can be done from + * here. + */ +void __init sparse_vmemmap_init_nid_late(int nid) +{ +} +#endif diff --git a/mm/sparse.c b/mm/sparse.c index 133b033d0cba..ee0234a77c7f 100644 --- a/mm/sparse.c +++ b/mm/sparse.c @@ -408,13 +408,13 @@ static void __init check_usemap_section_nr(int nid, #endif /* CONFIG_MEMORY_HOTREMOVE */ =20 #ifdef CONFIG_SPARSEMEM_VMEMMAP -static unsigned long __init section_map_size(void) +unsigned long __init section_map_size(void) { return ALIGN(sizeof(struct page) * PAGES_PER_SECTION, PMD_SIZE); } =20 #else -static unsigned long __init section_map_size(void) +unsigned long __init section_map_size(void) { return PAGE_ALIGN(sizeof(struct page) * PAGES_PER_SECTION); } @@ -495,6 +495,44 @@ void __weak __meminit vmemmap_populate_print_last(void) { } =20 +static void *sparse_usagebuf __meminitdata; +static void *sparse_usagebuf_end __meminitdata; + +/* + * Helper function that is used for generic section initialization, and + * can also be used by any hooks added above. + */ +void __init sparse_init_early_section(int nid, struct page *map, + unsigned long pnum, unsigned long flags) +{ + BUG_ON(!sparse_usagebuf || sparse_usagebuf >=3D sparse_usagebuf_end); + check_usemap_section_nr(nid, sparse_usagebuf); + sparse_init_one_section(__nr_to_section(pnum), pnum, map, + sparse_usagebuf, SECTION_IS_EARLY | flags); + sparse_usagebuf =3D (void *)sparse_usagebuf + mem_section_usage_size(); +} + +static int __init sparse_usage_init(int nid, unsigned long map_count) +{ + unsigned long size; + + size =3D mem_section_usage_size() * map_count; + sparse_usagebuf =3D sparse_early_usemaps_alloc_pgdat_section( + NODE_DATA(nid), size); + if (!sparse_usagebuf) { + sparse_usagebuf_end =3D NULL; + return -ENOMEM; + } + + sparse_usagebuf_end =3D sparse_usagebuf + size; + return 0; +} + +static void __init sparse_usage_fini(void) +{ + sparse_usagebuf =3D sparse_usagebuf_end =3D NULL; +} + /* * Initialize sparse on a specific node. The node spans [pnum_begin, pnum_= end) * And number of present sections in this node is map_count. @@ -503,47 +541,54 @@ static void __init sparse_init_nid(int nid, unsigned = long pnum_begin, unsigned long pnum_end, unsigned long map_count) { - struct mem_section_usage *usage; unsigned long pnum; struct page *map; + struct mem_section *ms; =20 - usage =3D sparse_early_usemaps_alloc_pgdat_section(NODE_DATA(nid), - mem_section_usage_size() * map_count); - if (!usage) { + if (sparse_usage_init(nid, map_count)) { pr_err("%s: node[%d] usemap allocation failed", __func__, nid); goto failed; } + sparse_buffer_init(map_count * section_map_size(), nid); + + sparse_vmemmap_init_nid_early(nid); + for_each_present_section_nr(pnum_begin, pnum) { unsigned long pfn =3D section_nr_to_pfn(pnum); =20 if (pnum >=3D pnum_end) break; =20 - map =3D __populate_section_memmap(pfn, PAGES_PER_SECTION, - nid, NULL, NULL); - if (!map) { - pr_err("%s: node[%d] memory map backing failed. Some memory will not be= available.", - __func__, nid); - pnum_begin =3D pnum; - sparse_buffer_fini(); - goto failed; + ms =3D __nr_to_section(pnum); + if (!preinited_vmemmap_section(ms)) { + map =3D __populate_section_memmap(pfn, PAGES_PER_SECTION, + nid, NULL, NULL); + if (!map) { + pr_err("%s: node[%d] memory map backing failed. 
+				pr_err("%s: node[%d] memory map backing failed. Some memory will not be available.",
+				       __func__, nid);
+				pnum_begin = pnum;
+				sparse_usage_fini();
+				sparse_buffer_fini();
+				goto failed;
+			}
+			sparse_init_early_section(nid, map, pnum, 0);
 		}
-		check_usemap_section_nr(nid, usage);
-		sparse_init_one_section(__nr_to_section(pnum), pnum, map, usage,
-				SECTION_IS_EARLY);
-		usage = (void *) usage + mem_section_usage_size();
 	}
+	sparse_usage_fini();
 	sparse_buffer_fini();
 	return;
 failed:
-	/* We failed to allocate, mark all the following pnums as not present */
+	/*
+	 * We failed to allocate, mark all the following pnums as not present,
+	 * except the ones already initialized earlier.
+	 */
 	for_each_present_section_nr(pnum_begin, pnum) {
-		struct mem_section *ms;
-
 		if (pnum >= pnum_end)
 			break;
 		ms = __nr_to_section(pnum);
-		ms->section_mem_map = 0;
+		if (!preinited_vmemmap_section(ms))
+			ms->section_mem_map = 0;
 	}
 }
-- 
2.48.1.711.g2feabab25a-goog

From nobody Fri Dec 19 16:07:21 2025
Date: Fri, 28 Feb 2025 18:29:12 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
References: <20250228182928.2645936-1-fvdl@google.com>
Message-ID: <20250228182928.2645936-12-fvdl@google.com>
Subject: [PATCH v5 11/27] mm/hugetlb: set migratetype for bootmem folios
From: Frank van der Linden <fvdl@google.com>
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com,
	roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com,
	Frank van der Linden <fvdl@google.com>
In the CONFIG_DEFERRED_STRUCT_PAGE_INIT case, the pageblocks that back
memblock-allocated hugetlb folios might not have their migrate type
set. Since memblock-allocated hugetlb folios may eventually be given
to the buddy allocator (if nr_hugepages is lowered), make sure that
the migrate type of the pageblocks contained in them is set when
initializing them. Set it to the default that memmap init also uses
(MIGRATE_MOVABLE).
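
To make the pageblock arithmetic concrete, here is a minimal
standalone sketch (not part of the change itself; it assumes the
common x86-64 values of 4 KiB base pages and 2 MiB pageblocks, with
NR_PAGES and PAGEBLOCK_NR_PAGES standing in for pages_per_huge_page()
and pageblock_nr_pages):

	/*
	 * Sketch of the per-pageblock walk done for a bootmem folio:
	 * a 1 GiB gigantic page is 262144 base pages, i.e. 512
	 * pageblocks, and each one gets its migratetype set.
	 */
	#include <stdio.h>

	#define NR_PAGES		(1UL << 18)	/* pages_per_huge_page(), 1 GiB */
	#define PAGEBLOCK_NR_PAGES	(1UL << 9)	/* pageblock_nr_pages, 2 MiB */

	int main(void)
	{
		unsigned long i, blocks = 0;

		for (i = 0; i < NR_PAGES; i += PAGEBLOCK_NR_PAGES)
			blocks++;	/* one set_pageblock_migratetype() call each */

		printf("%lu pageblocks tagged MIGRATE_MOVABLE\n", blocks); /* 512 */
		return 0;
	}
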
Signed-off-by: Frank van der Linden <fvdl@google.com>
---
 mm/hugetlb.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1a200f89e21a..19a7a795a388 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3254,6 +3254,26 @@ static void __init hugetlb_folio_init_vmemmap(struct folio *folio,
 	prep_compound_head((struct page *)folio, huge_page_order(h));
 }
 
+/*
+ * memblock-allocated pageblocks might not have the migrate type set
+ * if marked with the 'noinit' flag. Set it to the default (MIGRATE_MOVABLE)
+ * here.
+ *
+ * Note that this will not write the page struct, it is ok (and necessary)
+ * to do this on vmemmap optimized folios.
+ */
+static void __init hugetlb_bootmem_init_migratetype(struct folio *folio,
+					   struct hstate *h)
+{
+	unsigned long nr_pages = pages_per_huge_page(h), i;
+
+	WARN_ON_ONCE(!pageblock_aligned(folio_pfn(folio)));
+
+	for (i = 0; i < nr_pages; i += pageblock_nr_pages)
+		set_pageblock_migratetype(folio_page(folio, i),
+					  MIGRATE_MOVABLE);
+}
+
 static void __init prep_and_add_bootmem_folios(struct hstate *h,
 		struct list_head *folio_list)
 {
@@ -3275,6 +3295,7 @@ static void __init prep_and_add_bootmem_folios(struct hstate *h,
 					HUGETLB_VMEMMAP_RESERVE_PAGES,
 					pages_per_huge_page(h));
 		}
+		hugetlb_bootmem_init_migratetype(folio, h);
 		/* Subdivide locks to achieve better parallel performance */
 		spin_lock_irqsave(&hugetlb_lock, flags);
 		__prep_account_new_huge_page(h, folio_nid(folio));
-- 
2.48.1.711.g2feabab25a-goog

From nobody Fri Dec 19 16:07:21 2025
Date: Fri, 28 Feb 2025 18:29:13 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
References: <20250228182928.2645936-1-fvdl@google.com>
Message-ID: <20250228182928.2645936-13-fvdl@google.com>
Subject: [PATCH v5 12/27] mm: define __init_reserved_page_zone function
From: Frank van der Linden <fvdl@google.com>
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com,
	roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com,
	Frank van der Linden <fvdl@google.com>
Sometimes page structs must be unconditionally initialized as
reserved, regardless of DEFERRED_STRUCT_PAGE_INIT.

Define a function, __init_reserved_page_zone, containing the code that
already did all of the work in init_reserved_page, and make it
available for use.
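
For illustration, the zone lookup this helper performs boils down to
the following standalone sketch (not the kernel implementation; the
zone bounds are invented for the example):

	/*
	 * Sketch of the zone-spanning check: walk a node's zones and
	 * pick the first one whose PFN range contains the page.
	 */
	#include <stdio.h>

	struct zone { const char *name; unsigned long start_pfn, end_pfn; };

	static int zone_spans_pfn(const struct zone *z, unsigned long pfn)
	{
		return pfn >= z->start_pfn && pfn < z->end_pfn;
	}

	int main(void)
	{
		struct zone zones[] = {
			{ "DMA32",  0x000000, 0x100000 },	/* below 4 GiB */
			{ "Normal", 0x100000, 0x400000 },
		};
		unsigned long pfn = 0x180000;	/* chosen to land in Normal */
		unsigned int zid;

		for (zid = 0; zid < 2; zid++)
			if (zone_spans_pfn(&zones[zid], pfn))
				break;

		printf("pfn %#lx -> zone %s\n", pfn, zones[zid].name);
		return 0;
	}
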
Signed-off-by: Frank van der Linden <fvdl@google.com>
---
 mm/internal.h |  1 +
 mm/mm_init.c  | 38 +++++++++++++++++++++-----------------
 2 files changed, 24 insertions(+), 15 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 109ef30fee11..57662141930e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1448,6 +1448,7 @@ static inline bool pte_needs_soft_dirty_wp(struct vm_area_struct *vma, pte_t pte
 
 void __meminit __init_single_page(struct page *page, unsigned long pfn,
 				unsigned long zone, int nid);
+void __meminit __init_reserved_page_zone(unsigned long pfn, int nid);
 
 /* shrinker related functions */
 unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 9f1e41c3dde6..925ed6564572 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -650,6 +650,28 @@ static inline void fixup_hashdist(void)
 static inline void fixup_hashdist(void) {}
 #endif /* CONFIG_NUMA */
 
+/*
+ * Initialize a reserved page unconditionally, finding its zone first.
+ */
+void __meminit __init_reserved_page_zone(unsigned long pfn, int nid)
+{
+	pg_data_t *pgdat;
+	int zid;
+
+	pgdat = NODE_DATA(nid);
+
+	for (zid = 0; zid < MAX_NR_ZONES; zid++) {
+		struct zone *zone = &pgdat->node_zones[zid];
+
+		if (zone_spans_pfn(zone, pfn))
+			break;
+	}
+	__init_single_page(pfn_to_page(pfn), pfn, zid, nid);
+
+	if (pageblock_aligned(pfn))
+		set_pageblock_migratetype(pfn_to_page(pfn), MIGRATE_MOVABLE);
+}
+
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
 static inline void pgdat_set_deferred_range(pg_data_t *pgdat)
 {
@@ -708,24 +730,10 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 
 static void __meminit init_reserved_page(unsigned long pfn, int nid)
 {
-	pg_data_t *pgdat;
-	int zid;
-
 	if (early_page_initialised(pfn, nid))
 		return;
 
-	pgdat = NODE_DATA(nid);
-
-	for (zid = 0; zid < MAX_NR_ZONES; zid++) {
-		struct zone *zone = &pgdat->node_zones[zid];
-
-		if (zone_spans_pfn(zone, pfn))
-			break;
-	}
-	__init_single_page(pfn_to_page(pfn), pfn, zid, nid);
-
-	if (pageblock_aligned(pfn))
-		set_pageblock_migratetype(pfn_to_page(pfn), MIGRATE_MOVABLE);
+	__init_reserved_page_zone(pfn, nid);
 }
 #else
 static inline void pgdat_set_deferred_range(pg_data_t *pgdat) {}
-- 
2.48.1.711.g2feabab25a-goog

From nobody Fri Dec 19 16:07:21 2025
Date: Fri, 28 Feb 2025 18:29:14 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
References: <20250228182928.2645936-1-fvdl@google.com>
Message-ID: <20250228182928.2645936-14-fvdl@google.com>
Subject: [PATCH v5 13/27] mm/hugetlb: check bootmem pages for zone intersections
From: Frank van der Linden <fvdl@google.com>
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com,
	roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com,
	Frank van der Linden <fvdl@google.com>
Bootmem hugetlb pages are allocated using memblock, which isn't (and
mostly can't be) aware of zones. So, they may end up crossing zone
boundaries. This would create confusion: a hugetlb page that is part
of multiple zones is bad. Worse, HVO might then end up stealthily
re-assigning pages to a different zone when a hugetlb page is freed,
since the tail page structures beyond the first vmemmap page would
inherit the zone of the first page structures.

While the chance of this happening is low, you can definitely create
a configuration where this happens (especially when using
ZONE_MOVABLE).

To avoid this issue, check if bootmem hugetlb pages intersect with
multiple zones during the gather phase, and discard them, handing them
to the page allocator, if they do. Record the number of invalid bootmem
pages per node and subtract them from the number of available pages
at the end, making it easier to do these checks in multiple places
later on.
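
As a rough illustration of the intersection test (a standalone sketch,
not the kernel code; the zone layout and PFNs are invented, roughly
matching a ZONE_MOVABLE boundary that is not 1 GiB aligned):

	/* Sketch: a range is invalid if it overlaps more than one zone. */
	#include <stdbool.h>
	#include <stdio.h>

	struct zone { unsigned long start_pfn, end_pfn; };

	static bool zone_intersects(const struct zone *z, unsigned long spfn,
				    unsigned long nr)
	{
		return spfn < z->end_pfn && spfn + nr > z->start_pfn;
	}

	int main(void)
	{
		/* Invented layout: ZONE_NORMAL, then ZONE_MOVABLE. */
		struct zone zones[] = { { 0, 0x1f0000 }, { 0x1f0000, 0x400000 } };
		unsigned long spfn = 0x1c0000, nr = 0x40000;	/* a 1 GiB page */
		int hits = 0;

		for (int i = 0; i < 2; i++)
			hits += zone_intersects(&zones[i], spfn, nr);

		printf("intersects %d zone(s)%s\n", hits,
		       hits > 1 ? " -> discard bootmem page" : "");
		return 0;
	}
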
Signed-off-by: Frank van der Linden <fvdl@google.com>
---
 mm/hugetlb.c  | 61 +++++++++++++++++++++++++++++++++++++++++++++++++--
 mm/internal.h |  2 ++
 mm/mm_init.c  | 25 +++++++++++++++++++++
 3 files changed, 86 insertions(+), 2 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 19a7a795a388..f9704a0e62de 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -62,6 +62,7 @@ static unsigned long hugetlb_cma_size_in_node[MAX_NUMNODES] __initdata;
 static unsigned long hugetlb_cma_size __initdata;
 
 __initdata struct list_head huge_boot_pages[MAX_NUMNODES];
+static unsigned long hstate_boot_nrinvalid[HUGE_MAX_HSTATE] __initdata;
 
 /*
  * Due to ordering constraints across the init code for various
@@ -3304,6 +3305,44 @@ static void __init prep_and_add_bootmem_folios(struct hstate *h,
 	}
 }
 
+static bool __init hugetlb_bootmem_page_zones_valid(int nid,
+					struct huge_bootmem_page *m)
+{
+	unsigned long start_pfn;
+	bool valid;
+
+	start_pfn = virt_to_phys(m) >> PAGE_SHIFT;
+
+	valid = !pfn_range_intersects_zones(nid, start_pfn,
+			pages_per_huge_page(m->hstate));
+	if (!valid)
+		hstate_boot_nrinvalid[hstate_index(m->hstate)]++;
+
+	return valid;
+}
+
+/*
+ * Free a bootmem page that was found to be invalid (intersecting with
+ * multiple zones).
+ *
+ * Since it intersects with multiple zones, we can't just do a free
+ * operation on all pages at once, but instead have to walk all
+ * pages, freeing them one by one.
+ */
+static void __init hugetlb_bootmem_free_invalid_page(int nid, struct page *page,
+					struct hstate *h)
+{
+	unsigned long npages = pages_per_huge_page(h);
+	unsigned long pfn;
+
+	while (npages--) {
+		pfn = page_to_pfn(page);
+		__init_reserved_page_zone(pfn, nid);
+		free_reserved_page(page);
+		page++;
+	}
+}
+
 /*
  * Put bootmem huge pages into the standard lists after mem_map is up.
  * Note: This only applies to gigantic (order > MAX_PAGE_ORDER) pages.
@@ -3311,14 +3350,25 @@ static void __init prep_and_add_bootmem_folios(struct hstate *h,
 static void __init gather_bootmem_prealloc_node(unsigned long nid)
 {
 	LIST_HEAD(folio_list);
-	struct huge_bootmem_page *m;
+	struct huge_bootmem_page *m, *tm;
 	struct hstate *h = NULL, *prev_h = NULL;
 
-	list_for_each_entry(m, &huge_boot_pages[nid], list) {
+	list_for_each_entry_safe(m, tm, &huge_boot_pages[nid], list) {
 		struct page *page = virt_to_page(m);
 		struct folio *folio = (void *)page;
 
 		h = m->hstate;
+		if (!hugetlb_bootmem_page_zones_valid(nid, m)) {
+			/*
+			 * Can't use this page. Initialize the
+			 * page structures if that hasn't already
+			 * been done, and give them to the page
+			 * allocator.
+			 */
+			hugetlb_bootmem_free_invalid_page(nid, page, h);
+			continue;
+		}
+
 		/*
 		 * It is possible to have multiple huge page sizes (hstates)
 		 * in this list.  If so, process each size separately.
@@ -3590,13 +3640,20 @@ static void __init hugetlb_init_hstates(void)
 static void __init report_hugepages(void)
 {
 	struct hstate *h;
+	unsigned long nrinvalid;
 
 	for_each_hstate(h) {
 		char buf[32];
 
+		nrinvalid = hstate_boot_nrinvalid[hstate_index(h)];
+		h->max_huge_pages -= nrinvalid;
+
 		string_get_size(huge_page_size(h), 1, STRING_UNITS_2, buf, 32);
 		pr_info("HugeTLB: registered %s page size, pre-allocated %ld pages\n",
 			buf, h->free_huge_pages);
+		if (nrinvalid)
+			pr_info("HugeTLB: %s page size: %lu invalid page%s discarded\n",
+					buf, nrinvalid, nrinvalid > 1 ? "s" : "");
 		pr_info("HugeTLB: %d KiB vmemmap can be freed for a %s page\n",
 			hugetlb_vmemmap_optimizable_size(h) / SZ_1K, buf);
 	}
diff --git a/mm/internal.h b/mm/internal.h
index 57662141930e..63fda9bb9426 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -658,6 +658,8 @@ static inline struct page *pageblock_pfn_to_page(unsigned long start_pfn,
 }
 
 void set_zone_contiguous(struct zone *zone);
+bool pfn_range_intersects_zones(int nid, unsigned long start_pfn,
+			   unsigned long nr_pages);
 
 static inline void clear_zone_contiguous(struct zone *zone)
 {
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 925ed6564572..f7d5b4fe1ae9 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2287,6 +2287,31 @@ void set_zone_contiguous(struct zone *zone)
 	zone->contiguous = true;
 }
 
+/*
+ * Check if a PFN range intersects multiple zones on one or more
+ * NUMA nodes. Specify the @nid argument if it is known that this
+ * PFN range is on one node, NUMA_NO_NODE otherwise.
+ */
+bool pfn_range_intersects_zones(int nid, unsigned long start_pfn,
+			unsigned long nr_pages)
+{
+	struct zone *zone, *izone = NULL;
+
+	for_each_zone(zone) {
+		if (nid != NUMA_NO_NODE && zone_to_nid(zone) != nid)
+			continue;
+
+		if (zone_intersects(zone, start_pfn, nr_pages)) {
+			if (izone != NULL)
+				return true;
+			izone = zone;
+		}
+	}
+
+	return false;
+}
+
 static void __init mem_init_print_info(void);
 void __init page_alloc_init_late(void)
 {
-- 
2.48.1.711.g2feabab25a-goog

From nobody Fri Dec 19 16:07:21 2025
Date: Fri, 28 Feb 2025 18:29:15 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
References: <20250228182928.2645936-1-fvdl@google.com>
Message-ID: <20250228182928.2645936-15-fvdl@google.com>
Subject: [PATCH v5 14/27] mm/sparse: add vmemmap_*_hvo functions
From: Frank van der Linden <fvdl@google.com>
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com,
	roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com,
	Frank van der Linden <fvdl@google.com>
Add a few functions to enable early HVO:

	vmemmap_populate_hvo
	vmemmap_undo_hvo
	vmemmap_wrprotect_hvo

The populate and undo functions are expected to be used in early init,
from the sparse_init_nid_early() function. The wrprotect function is
to be used, potentially, later.

To implement these functions, mostly re-use the existing compound
pages vmemmap logic used by DAX. vmemmap_populate_address has its
argument changed a bit in this commit: the page structure passed in
to be reused in the mapping is replaced by a PFN and a flag. The flag
indicates whether an extra ref should be taken on the vmemmap page
containing the head page structure. Taking the ref is appropriate for
DAX / ZONE_DEVICE, but not for HugeTLB HVO.

The HugeTLB vmemmap optimization maps tail page structure pages
read-only. The vmemmap_wrprotect_hvo function that does this is
implemented separately, because it cannot be guaranteed that reserved
page structures will not be write accessed during memory
initialization. Even with CONFIG_DEFERRED_STRUCT_PAGE_INIT, they might
still be written to (if they are at the bottom of a zone). So,
vmemmap_populate_hvo leaves the tail page structure pages RW
initially, and then later during initialization, after memmap init is
fully done, vmemmap_wrprotect_hvo must be called to finish the job.

Subsequent commits will use these functions for early HugeTLB HVO.
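
To make the head/mirror layout concrete, here is the arithmetic for
the common case (a standalone sketch, not part of the change; it
assumes 4 KiB base pages, a 64-byte struct page, and a one-page head
area, i.e. HUGETLB_VMEMMAP_RESERVE_SIZE == PAGE_SIZE):

	/* Back-of-the-envelope for the vmemmap_populate_hvo layout. */
	#include <stdio.h>

	int main(void)
	{
		unsigned long page_size = 4096, struct_page = 64;
		unsigned long hpage = 2UL << 20;	/* 2 MiB hugetlb page */
		unsigned long vmemmap_bytes = hpage / page_size * struct_page;
		unsigned long total = vmemmap_bytes / page_size;
		unsigned long head = 1;			/* headsize / page_size */

		printf("%lu vmemmap pages: %lu backed, %lu RW-then-RO mirrors\n",
		       total, head, total - head);	/* 8: 1 backed, 7 mirrors */
		return 0;
	}
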
Signed-off-by: Frank van der Linden <fvdl@google.com>
---
 include/linux/mm.h  |   9 ++-
 mm/sparse-vmemmap.c | 141 +++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 135 insertions(+), 15 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index df83653ed6e3..0463c062fd7a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3837,7 +3837,8 @@ p4d_t *vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node);
 pud_t *vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node);
 pmd_t *vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node);
 pte_t *vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
-			    struct vmem_altmap *altmap, struct page *reuse);
+			    struct vmem_altmap *altmap, unsigned long ptpfn,
+			    unsigned long flags);
 void *vmemmap_alloc_block(unsigned long size, int node);
 struct vmem_altmap;
 void *vmemmap_alloc_block_buf(unsigned long size, int node,
@@ -3853,6 +3854,12 @@ int vmemmap_populate_hugepages(unsigned long start, unsigned long end,
 			       int node, struct vmem_altmap *altmap);
 int vmemmap_populate(unsigned long start, unsigned long end, int node,
 		     struct vmem_altmap *altmap);
+int vmemmap_populate_hvo(unsigned long start, unsigned long end, int node,
+			 unsigned long headsize);
+int vmemmap_undo_hvo(unsigned long start, unsigned long end, int node,
+		     unsigned long headsize);
+void vmemmap_wrprotect_hvo(unsigned long start, unsigned long end, int node,
+			   unsigned long headsize);
 void vmemmap_populate_print_last(void);
 #ifdef CONFIG_MEMORY_HOTPLUG
 void vmemmap_free(unsigned long start, unsigned long end,
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 8751c46c35e4..8cc848c4b17c 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -30,6 +30,13 @@
 
 #include <asm/dma.h>
 #include <asm/pgalloc.h>
+#include <asm/tlbflush.h>
+
+/*
+ * Flags for vmemmap_populate_range and friends.
+ */
+/* Get a ref on the head page struct page, for ZONE_DEVICE compound pages */
+#define VMEMMAP_POPULATE_PAGEREF	0x0001
 
 #include "internal.h"
 
@@ -144,17 +151,18 @@ void __meminit vmemmap_verify(pte_t *pte, int node,
 
 pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
 				       struct vmem_altmap *altmap,
-				       struct page *reuse)
+				       unsigned long ptpfn, unsigned long flags)
 {
 	pte_t *pte = pte_offset_kernel(pmd, addr);
 	if (pte_none(ptep_get(pte))) {
 		pte_t entry;
 		void *p;
 
-		if (!reuse) {
+		if (ptpfn == (unsigned long)-1) {
 			p = vmemmap_alloc_block_buf(PAGE_SIZE, node, altmap);
 			if (!p)
 				return NULL;
+			ptpfn = PHYS_PFN(__pa(p));
 		} else {
 			/*
 			 * When a PTE/PMD entry is freed from the init_mm
@@ -165,10 +173,10 @@ pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
 			 * and through vmemmap_populate_compound_pages() when
 			 * slab is available.
 			 */
-			get_page(reuse);
-			p = page_to_virt(reuse);
+			if (flags & VMEMMAP_POPULATE_PAGEREF)
+				get_page(pfn_to_page(ptpfn));
 		}
-		entry = pfn_pte(__pa(p) >> PAGE_SHIFT, PAGE_KERNEL);
+		entry = pfn_pte(ptpfn, PAGE_KERNEL);
 		set_pte_at(&init_mm, addr, pte, entry);
 	}
 	return pte;
@@ -238,7 +246,8 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
 
 static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,
 						  struct vmem_altmap *altmap,
-						  struct page *reuse)
+						  unsigned long ptpfn,
+						  unsigned long flags)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
@@ -258,7 +267,7 @@ static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,
 	pmd = vmemmap_pmd_populate(pud, addr, node);
 	if (!pmd)
 		return NULL;
-	pte = vmemmap_pte_populate(pmd, addr, node, altmap, reuse);
+	pte = vmemmap_pte_populate(pmd, addr, node, altmap, ptpfn, flags);
 	if (!pte)
 		return NULL;
 	vmemmap_verify(pte, node, addr, addr + PAGE_SIZE);
@@ -269,13 +278,15 @@ static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,
 static int __meminit vmemmap_populate_range(unsigned long start,
 					    unsigned long end, int node,
 					    struct vmem_altmap *altmap,
-					    struct page *reuse)
+					    unsigned long ptpfn,
+					    unsigned long flags)
 {
 	unsigned long addr = start;
 	pte_t *pte;
 
 	for (; addr < end; addr += PAGE_SIZE) {
-		pte = vmemmap_populate_address(addr, node, altmap, reuse);
+		pte = vmemmap_populate_address(addr, node, altmap,
+					       ptpfn, flags);
 		if (!pte)
 			return -ENOMEM;
 	}
@@ -286,7 +297,107 @@ static int __meminit vmemmap_populate_range(unsigned long start,
 int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
 					 int node, struct vmem_altmap *altmap)
 {
-	return vmemmap_populate_range(start, end, node, altmap, NULL);
+	return vmemmap_populate_range(start, end, node, altmap, -1, 0);
+}
+
+/*
+ * Undo populate_hvo, and replace it with a normal base page mapping.
+ * Used in memory init in case a HVO mapping needs to be undone.
+ *
+ * This can happen when it is discovered that a memblock allocated
+ * hugetlb page spans multiple zones, which can only be verified
+ * after zones have been initialized.
+ *
+ * We know that:
+ * 1) The first @headsize / PAGE_SIZE vmemmap pages were individually
+ *    allocated through memblock, and mapped.
+ *
+ * 2) The rest of the vmemmap pages are mirrors of the last head page.
+ */
+int __meminit vmemmap_undo_hvo(unsigned long addr, unsigned long end,
+			       int node, unsigned long headsize)
+{
+	unsigned long maddr, pfn;
+	pte_t *pte;
+	int headpages;
+
+	/*
+	 * Should only be called early in boot, so nothing will
+	 * be accessing these page structures.
+	 */
+	WARN_ON(!early_boot_irqs_disabled);
+
+	headpages = headsize >> PAGE_SHIFT;
+
+	/*
+	 * Clear mirrored mappings for tail page structs.
+	 */
+	for (maddr = addr + headsize; maddr < end; maddr += PAGE_SIZE) {
+		pte = virt_to_kpte(maddr);
+		pte_clear(&init_mm, maddr, pte);
+	}
+
+	/*
+	 * Clear and free mappings for head page and first tail page
+	 * structs.
+	 */
+	for (maddr = addr; headpages-- > 0; maddr += PAGE_SIZE) {
+		pte = virt_to_kpte(maddr);
+		pfn = pte_pfn(ptep_get(pte));
+		pte_clear(&init_mm, maddr, pte);
+		memblock_phys_free(PFN_PHYS(pfn), PAGE_SIZE);
+	}
+
+	flush_tlb_kernel_range(addr, end);
+
+	return vmemmap_populate(addr, end, node, NULL);
+}
+
+/*
+ * Write protect the mirrored tail page structs for HVO. This will be
+ * called from the hugetlb code when gathering and initializing the
+ * memblock allocated gigantic pages. The write protect can't be
+ * done earlier, since it can't be guaranteed that the reserved
+ * page structures will not be written to during initialization,
+ * even if CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled.
+ *
+ * The PTEs are known to exist, and nothing else should be touching
+ * these pages. The caller is responsible for any TLB flushing.
+ */
+void vmemmap_wrprotect_hvo(unsigned long addr, unsigned long end,
+			   int node, unsigned long headsize)
+{
+	unsigned long maddr;
+	pte_t *pte;
+
+	for (maddr = addr + headsize; maddr < end; maddr += PAGE_SIZE) {
+		pte = virt_to_kpte(maddr);
+		ptep_set_wrprotect(&init_mm, maddr, pte);
+	}
+}
+
+/*
+ * Populate vmemmap pages HVO-style. The first page contains the head
+ * page and needed tail pages, the other ones are mirrors of the first
+ * page.
+ */
+int __meminit vmemmap_populate_hvo(unsigned long addr, unsigned long end,
+				   int node, unsigned long headsize)
+{
+	pte_t *pte;
+	unsigned long maddr;
+
+	for (maddr = addr; maddr < addr + headsize; maddr += PAGE_SIZE) {
+		pte = vmemmap_populate_address(maddr, node, NULL, -1, 0);
+		if (!pte)
+			return -ENOMEM;
+	}
+
+	/*
+	 * Reuse the last page struct page mapped above for the rest.
+	 */
+	return vmemmap_populate_range(maddr, end, node, NULL,
+				      pte_pfn(ptep_get(pte)), 0);
 }
 
 void __weak __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
@@ -409,7 +520,8 @@ static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
 		 * with just tail struct pages.
 		 */
 		return vmemmap_populate_range(start, end, node, NULL,
-					      pte_page(ptep_get(pte)));
+					      pte_pfn(ptep_get(pte)),
+					      VMEMMAP_POPULATE_PAGEREF);
 	}
 
 	size = min(end - start, pgmap_vmemmap_nr(pgmap) * sizeof(struct page));
@@ -417,13 +529,13 @@ static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
 		unsigned long next, last = addr + size;
 
 		/* Populate the head page vmemmap page */
-		pte = vmemmap_populate_address(addr, node, NULL, NULL);
+		pte = vmemmap_populate_address(addr, node, NULL, -1, 0);
 		if (!pte)
 			return -ENOMEM;
 
 		/* Populate the tail pages vmemmap page */
 		next = addr + PAGE_SIZE;
-		pte = vmemmap_populate_address(next, node, NULL, NULL);
+		pte = vmemmap_populate_address(next, node, NULL, -1, 0);
 		if (!pte)
 			return -ENOMEM;
 
@@ -433,7 +545,8 @@ static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
 		 */
 		next += PAGE_SIZE;
 		rc = vmemmap_populate_range(next, last, node, NULL,
-					    pte_page(ptep_get(pte)));
+					    pte_pfn(ptep_get(pte)),
+					    VMEMMAP_POPULATE_PAGEREF);
 		if (rc)
 			return -ENOMEM;
 	}
-- 
2.48.1.711.g2feabab25a-goog

From nobody Fri Dec 19 16:07:21 2025
Date: Fri, 28 Feb 2025 18:29:16 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
References: <20250228182928.2645936-1-fvdl@google.com>
Message-ID: <20250228182928.2645936-16-fvdl@google.com>
Subject: [PATCH v5 15/27] mm/hugetlb: deal with multiple calls to hugetlb_bootmem_alloc
From: Frank van der Linden <fvdl@google.com>
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com,
	roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com,
	Frank van der Linden <fvdl@google.com>
Architectures that want pre-HVO of hugetlb vmemmap pages will need to
call hugetlb_bootmem_alloc from an earlier spot in boot (before
sparse_init). To facilitate some architectures doing this, protect
hugetlb_bootmem_alloc against multiple calls.

Also provide a helper function to check if it's been called, so that
the early HVO code, to be added later, can see if there is anything
to do.
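
The once-only pattern being applied is the usual boot-time guard; as a
rough standalone illustration (names are made up, this is not the
kernel implementation):

	#include <stdbool.h>
	#include <stdio.h>

	static bool done;

	static void bootmem_alloc(void)
	{
		if (done)
			return;
		puts("allocating gigantic pages");
		done = true;
	}

	int main(void)
	{
		bootmem_alloc();	/* early arch call, does the work */
		bootmem_alloc();	/* later generic call, a no-op */
		printf("allocated: %s\n", done ? "yes" : "no");
		return 0;
	}
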
Signed-off-by: Frank van der Linden <fvdl@google.com>
---
 include/linux/hugetlb.h |  6 ++++++
 mm/hugetlb.c            | 12 ++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 9cd7c9dacb88..5061279e5f73 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -175,6 +175,7 @@ extern int sysctl_hugetlb_shm_group;
 extern struct list_head huge_boot_pages[MAX_NUMNODES];
 
 void hugetlb_bootmem_alloc(void);
+bool hugetlb_bootmem_allocated(void);
 
 /* arch callbacks */
 
@@ -1256,6 +1257,11 @@ static inline bool hugetlbfs_pagecache_present(
 static inline void hugetlb_bootmem_alloc(void)
 {
 }
+
+static inline bool hugetlb_bootmem_allocated(void)
+{
+	return false;
+}
 #endif /* CONFIG_HUGETLB_PAGE */
 
 static inline spinlock_t *huge_pte_lock(struct hstate *h,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f9704a0e62de..ea5f22182c6e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4906,16 +4906,28 @@ static int __init default_hugepagesz_setup(char *s)
 }
 hugetlb_early_param("default_hugepagesz", default_hugepagesz_setup);
 
+static bool __hugetlb_bootmem_allocated __initdata;
+
+bool __init hugetlb_bootmem_allocated(void)
+{
+	return __hugetlb_bootmem_allocated;
+}
+
 void __init hugetlb_bootmem_alloc(void)
 {
 	struct hstate *h;
 
+	if (__hugetlb_bootmem_allocated)
+		return;
+
 	hugetlb_parse_params();
 
 	for_each_hstate(h) {
 		if (hstate_is_gigantic(h))
 			hugetlb_hstate_alloc_pages(h);
 	}
+
+	__hugetlb_bootmem_allocated = true;
 }
 
 static unsigned int allowed_mems_nr(struct hstate *h)
-- 
2.48.1.711.g2feabab25a-goog

From nobody Fri Dec 19 16:07:21 2025
Date: Fri, 28 Feb 2025 18:29:17 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
References: <20250228182928.2645936-1-fvdl@google.com>
Message-ID: <20250228182928.2645936-17-fvdl@google.com>
Subject: [PATCH v5 16/27] mm/hugetlb: move huge_boot_pages list init to hugetlb_bootmem_alloc
From: Frank van der Linden <fvdl@google.com>
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com,
	roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com,
	Frank van der Linden <fvdl@google.com>

Instead of initializing the per-node hugetlb bootmem pages list from
the alloc function, we can now do it in a somewhat cleaner way, since
there is an explicit hugetlb_bootmem_alloc function.
Initialize the lists there.

Signed-off-by: Frank van der Linden <fvdl@google.com>
---
 mm/hugetlb.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ea5f22182c6e..0f14a7736875 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3574,7 +3574,6 @@ static unsigned long __init hugetlb_pages_alloc_boot(struct hstate *h)
 static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 {
 	unsigned long allocated;
-	static bool initialized __initdata;
 
 	/* skip gigantic hugepages allocation if hugetlb_cma enabled */
 	if (hstate_is_gigantic(h) && hugetlb_cma_size) {
@@ -3582,17 +3581,6 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 		return;
 	}
 
-	/* hugetlb_hstate_alloc_pages will be called many times, initialize huge_boot_pages once */
-	if (!initialized) {
-		int i = 0;
-
-		for (i = 0; i < MAX_NUMNODES; i++)
-			INIT_LIST_HEAD(&huge_boot_pages[i]);
-		h->next_nid_to_alloc = first_online_node;
-		h->next_nid_to_free = first_online_node;
-		initialized = true;
-	}
-
 	/* do node specific alloc */
 	if (hugetlb_hstate_alloc_pages_specific_nodes(h))
 		return;
@@ -4916,13 +4904,20 @@ bool __init hugetlb_bootmem_allocated(void)
 void __init hugetlb_bootmem_alloc(void)
 {
 	struct hstate *h;
+	int i;
 
 	if (__hugetlb_bootmem_allocated)
 		return;
 
+	for (i = 0; i < MAX_NUMNODES; i++)
+		INIT_LIST_HEAD(&huge_boot_pages[i]);
+
 	hugetlb_parse_params();
 
 	for_each_hstate(h) {
+		h->next_nid_to_alloc = first_online_node;
+		h->next_nid_to_free = first_online_node;
+
 		if (hstate_is_gigantic(h))
 			hugetlb_hstate_alloc_pages(h);
 	}
-- 
2.48.1.711.g2feabab25a-goog

From nobody Fri Dec 19 16:07:21 2025
Date: Fri, 28 Feb 2025 18:29:18 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
References: <20250228182928.2645936-1-fvdl@google.com>
Message-ID: <20250228182928.2645936-18-fvdl@google.com>
Subject: [PATCH v5 17/27] mm/hugetlb: add pre-HVO framework
From: Frank van der Linden <fvdl@google.com>
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com,
	roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com,
	Frank van der Linden <fvdl@google.com>
Define flags for pre-HVOed bootmem hugetlb pages, and act on them.

The most important flag is the HVO flag, signalling that a bootmem
allocated gigantic page has already been HVO-ed. If this flag is seen
by the hugetlb bootmem gather code, the page is marked as HVO
optimized. The HVO code will then not try to optimize it again.
Instead, it will just map the tail page mirror pages read-only,
completing the HVO steps.

No functional change, as nothing sets the flags yet.
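
The intended consumption of the flags can be sketched roughly as
follows (a standalone illustration, not part of the change; the flag
values match the patch, the struct and checks are trimmed):

	#include <stdio.h>

	#define HUGE_BOOTMEM_HVO		0x0001
	#define HUGE_BOOTMEM_ZONES_VALID	0x0002

	struct huge_bootmem_page { unsigned long flags; };

	int main(void)
	{
		/* An arch that did early HVO would mark its pages like this. */
		struct huge_bootmem_page m = {
			.flags = HUGE_BOOTMEM_HVO | HUGE_BOOTMEM_ZONES_VALID,
		};

		if (m.flags & HUGE_BOOTMEM_ZONES_VALID)
			puts("gather phase: skip the zone-intersection check");
		if (m.flags & HUGE_BOOTMEM_HVO)
			puts("HVO phase: only wrprotect the tail mirrors");
		return 0;
	}
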
Signed-off-by: Frank van der Linden
---
 arch/powerpc/mm/hugetlbpage.c |  1 +
 include/linux/hugetlb.h       |  4 +++
 mm/hugetlb.c                  | 24 ++++++++++++++++-
 mm/hugetlb_vmemmap.c          | 50 +++++++++++++++++++++++++++++++++--
 mm/hugetlb_vmemmap.h          |  7 +++++
 5 files changed, 83 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 6b043180220a..d3c1b749dcfc 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -113,6 +113,7 @@ static int __init pseries_alloc_bootmem_huge_page(struct hstate *hstate)
 	gpage_freearray[nr_gpages] = 0;
 	list_add(&m->list, &huge_boot_pages[0]);
 	m->hstate = hstate;
+	m->flags = 0;
 	return 1;
 }
 
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 5061279e5f73..10a7ce2b95e1 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -681,8 +681,12 @@ struct hstate {
 struct huge_bootmem_page {
 	struct list_head list;
 	struct hstate *hstate;
+	unsigned long flags;
 };
 
+#define HUGE_BOOTMEM_HVO		0x0001
+#define HUGE_BOOTMEM_ZONES_VALID	0x0002
+
 int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list);
 int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn);
 struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0f14a7736875..40c88c46b34f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3215,6 +3215,7 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
 	INIT_LIST_HEAD(&m->list);
 	list_add(&m->list, &huge_boot_pages[node]);
 	m->hstate = h;
+	m->flags = 0;
 	return 1;
 }
 
@@ -3282,7 +3283,7 @@ static void __init prep_and_add_bootmem_folios(struct hstate *h,
 	struct folio *folio, *tmp_f;
 
 	/* Send list for bulk vmemmap optimization processing */
-	hugetlb_vmemmap_optimize_folios(h, folio_list);
+	hugetlb_vmemmap_optimize_bootmem_folios(h, folio_list);
 
 	list_for_each_entry_safe(folio, tmp_f, folio_list, lru) {
 		if (!folio_test_hugetlb_vmemmap_optimized(folio)) {
@@ -3311,6 +3312,13 @@ static bool __init hugetlb_bootmem_page_zones_valid(int nid,
 	unsigned long start_pfn;
 	bool valid;
 
+	if (m->flags & HUGE_BOOTMEM_ZONES_VALID) {
+		/*
+		 * Already validated, skip check.
+		 */
+		return true;
+	}
+
 	start_pfn = virt_to_phys(m) >> PAGE_SHIFT;
 
 	valid = !pfn_range_intersects_zones(nid, start_pfn,
@@ -3343,6 +3351,11 @@ static void __init hugetlb_bootmem_free_invalid_page(int nid, struct page *page,
 	}
 }
 
+static bool __init hugetlb_bootmem_page_prehvo(struct huge_bootmem_page *m)
+{
+	return (m->flags & HUGE_BOOTMEM_HVO);
+}
+
 /*
  * Put bootmem huge pages into the standard lists after mem_map is up.
  * Note: This only applies to gigantic (order > MAX_PAGE_ORDER) pages.
@@ -3383,6 +3396,15 @@ static void __init gather_bootmem_prealloc_node(unsigned long nid)
 		hugetlb_folio_init_vmemmap(folio, h,
 					   HUGETLB_VMEMMAP_RESERVE_PAGES);
 		init_new_hugetlb_folio(h, folio);
+
+		if (hugetlb_bootmem_page_prehvo(m))
+			/*
+			 * If pre-HVO was done, just set the
+			 * flag, the HVO code will then skip
+			 * this folio.
+			 */
+			folio_set_hugetlb_vmemmap_optimized(folio);
+
 		list_add(&folio->lru, &folio_list);
 
 		/*
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 5b484758f813..be6b33ecbc8e 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -649,14 +649,39 @@ static int hugetlb_vmemmap_split_folio(const struct hstate *h, struct folio *fol
 	return vmemmap_remap_split(vmemmap_start, vmemmap_end, vmemmap_reuse);
 }
 
-void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list)
+static void __hugetlb_vmemmap_optimize_folios(struct hstate *h,
+					      struct list_head *folio_list,
+					      bool boot)
 {
 	struct folio *folio;
+	int nr_to_optimize;
 	LIST_HEAD(vmemmap_pages);
 	unsigned long flags = VMEMMAP_REMAP_NO_TLB_FLUSH | VMEMMAP_SYNCHRONIZE_RCU;
 
+	nr_to_optimize = 0;
 	list_for_each_entry(folio, folio_list, lru) {
-		int ret = hugetlb_vmemmap_split_folio(h, folio);
+		int ret;
+		unsigned long spfn, epfn;
+
+		if (boot && folio_test_hugetlb_vmemmap_optimized(folio)) {
+			/*
+			 * Already optimized by pre-HVO, just map the
+			 * mirrored tail page structs RO.
+			 */
+			spfn = (unsigned long)&folio->page;
+			epfn = spfn + pages_per_huge_page(h);
+			vmemmap_wrprotect_hvo(spfn, epfn, folio_nid(folio),
+					      HUGETLB_VMEMMAP_RESERVE_SIZE);
+			register_page_bootmem_memmap(pfn_to_section_nr(spfn),
+						     &folio->page,
+						     HUGETLB_VMEMMAP_RESERVE_SIZE);
+			static_branch_inc(&hugetlb_optimize_vmemmap_key);
+			continue;
+		}
+
+		nr_to_optimize++;
+
+		ret = hugetlb_vmemmap_split_folio(h, folio);
 
 		/*
 		 * Spliting the PMD requires allocating a page, thus lets fail
@@ -668,6 +693,16 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
 			break;
 	}
 
+	if (!nr_to_optimize)
+		/*
+		 * All pre-HVO folios, nothing left to do. It's ok if
+		 * there is a mix of pre-HVO and not yet HVO-ed folios
+		 * here, as __hugetlb_vmemmap_optimize_folio() will
+		 * skip any folios that already have the optimized flag
+		 * set, see vmemmap_should_optimize_folio().
+		 */
+		goto out;
+
 	flush_tlb_all();
 
 	list_for_each_entry(folio, folio_list, lru) {
@@ -693,10 +728,21 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
 		}
 	}
 
+out:
 	flush_tlb_all();
 	free_vmemmap_page_list(&vmemmap_pages);
 }
 
+void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list)
+{
+	__hugetlb_vmemmap_optimize_folios(h, folio_list, false);
+}
+
+void hugetlb_vmemmap_optimize_bootmem_folios(struct hstate *h, struct list_head *folio_list)
+{
+	__hugetlb_vmemmap_optimize_folios(h, folio_list, true);
+}
+
 static const struct ctl_table hugetlb_vmemmap_sysctls[] = {
 	{
 		.procname = "hugetlb_optimize_vmemmap",
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 2fcae92d3359..71110a90275f 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -24,6 +24,8 @@ long hugetlb_vmemmap_restore_folios(const struct hstate *h,
 					struct list_head *non_hvo_folios);
 void hugetlb_vmemmap_optimize_folio(const struct hstate *h, struct folio *folio);
 void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list);
+void hugetlb_vmemmap_optimize_bootmem_folios(struct hstate *h, struct list_head *folio_list);
+
 
 static inline unsigned int hugetlb_vmemmap_size(const struct hstate *h)
 {
@@ -64,6 +66,11 @@ static inline void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list
 {
 }
 
+static inline void hugetlb_vmemmap_optimize_bootmem_folios(struct hstate *h,
+					struct list_head *folio_list)
+{
+}
+
 static inline unsigned int hugetlb_vmemmap_optimizable_size(const struct hstate *h)
 {
 	return 0;
-- 
2.48.1.711.g2feabab25a-goog

From nobody Fri Dec 19 16:07:21 2025
Date: Fri, 28 Feb 2025 18:29:19 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
Message-ID: <20250228182928.2645936-19-fvdl@google.com>
Subject: [PATCH v5 18/27] mm/hugetlb_vmemmap: fix hugetlb_vmemmap_restore_folios definition
From: Frank van der Linden
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com, roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com, Frank van der Linden

Make the hugetlb_vmemmap_restore_folios definition inline for the
!CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP case, so that including this
file in files other than hugetlb_vmemmap.c will work.
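For illustration (a minimal sketch with a hypothetical header, not from
this patch): a plain "static" function defined in a header is copied into
every .c file that includes it, and any includer that never calls it gets
a -Wunused-function warning; "static inline" avoids both problems.

	/* some_header.h (hypothetical) */
	static long helper(void)         { return 0; } /* warns when unused */
	static inline long helper2(void) { return 0; } /* fine everywhere */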
Fixes: cfb8c75099db ("hugetlb: perform vmemmap restoration on a list of pages")
Signed-off-by: Frank van der Linden
---
 mm/hugetlb_vmemmap.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 71110a90275f..62d3d645a793 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -50,7 +50,7 @@ static inline int hugetlb_vmemmap_restore_folio(const struct hstate *h, struct f
 	return 0;
 }
 
-static long hugetlb_vmemmap_restore_folios(const struct hstate *h,
+static inline long hugetlb_vmemmap_restore_folios(const struct hstate *h,
 					struct list_head *folio_list,
 					struct list_head *non_hvo_folios)
 {
-- 
2.48.1.711.g2feabab25a-goog

From nobody Fri Dec 19 16:07:21 2025
Date: Fri, 28 Feb 2025 18:29:20 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
Message-ID: <20250228182928.2645936-20-fvdl@google.com>
Subject: [PATCH v5 19/27] mm/hugetlb: do pre-HVO for bootmem allocated pages
From: Frank van der Linden
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com, roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com, Frank van der Linden

For large systems, the overhead of vmemmap pages for hugetlb is
substantial. It's about 1.5% of memory, which is about 45G for a 3T
system. If you want to configure most of that system for hugetlb (e.g.
to use as backing memory for VMs), there is a chance of running out of
memory on boot, even though you know that the 45G will become available
later.

To avoid this scenario, and since it's a waste to first allocate and
then free that 45G during boot, do pre-HVO for hugetlb bootmem allocated
pages ('gigantic' pages).

pre-HVO is done by adding functions that are called from
sparse_vmemmap_init_nid_early and sparse_vmemmap_init_nid_late. The
first is called before memmap allocation, so it takes care of allocating
memmap HVO-style. The second verifies that all bootmem pages look good;
specifically, it checks that they do not intersect with multiple zones.
This can only be done from the sparse_vmemmap_init_nid_late path, when
zones have been initialized.

The hugetlb page size must be aligned to the section size, and aligned
to the size of memory described by the number of page structures
contained in one PMD (since pre-HVO is not prepared to split PMDs).
This should be true for most 'gigantic' pages, it is for 1G pages on
x86, where both of these alignment requirements are 128M.

This will only have an effect if hugetlb_bootmem_alloc was called early
in boot. If not, it won't do anything, and HVO for bootmem hugetlb pages
works as before.
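To make those numbers concrete (a back-of-the-envelope sketch, assuming
4k base pages and a 64-byte struct page, the usual x86_64 values):

	memmap overhead = sizeof(struct page) / PAGE_SIZE
	                = 64 / 4096 = 1/64 (~1.56%)
	3T of memory   -> 3T / 64 = 48G of page structs (the ~45G above)
	one vmemmap PMD maps PMD_SIZE = 2M of page structs, which
	describe (2M / 64) * 4k = 128M of memory
	section size on x86_64 = 2^27 bytes = 128M

so both alignment requirements work out to 128M for 1G pages on x86.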
Signed-off-by: Frank van der Linden
---
 include/linux/hugetlb.h |   2 +
 mm/hugetlb.c            |  17 ++++-
 mm/hugetlb_vmemmap.c    | 143 ++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h    |  14 ++++
 mm/sparse-vmemmap.c     |   4 ++
 5 files changed, 177 insertions(+), 3 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 10a7ce2b95e1..2512463bca49 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -687,6 +687,8 @@ struct huge_bootmem_page {
 #define HUGE_BOOTMEM_HVO		0x0001
 #define HUGE_BOOTMEM_ZONES_VALID	0x0002
 
+bool hugetlb_bootmem_page_zones_valid(int nid, struct huge_bootmem_page *m);
+
 int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list);
 int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn);
 struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 40c88c46b34f..634dc53f1e3e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3211,7 +3211,18 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
 	 */
 	memblock_reserved_mark_noinit(virt_to_phys((void *)m + PAGE_SIZE),
 				      huge_page_size(h) - PAGE_SIZE);
-	/* Put them into a private list first because mem_map is not up yet */
+
+	/*
+	 * Put them into a private list first because mem_map is not up yet.
+	 *
+	 * For pre-HVO to work correctly, pages need to be on the list for
+	 * the node they were actually allocated from. That node may be
+	 * different in the case of fallback by memblock_alloc_try_nid_raw.
+	 * So, extract the actual node first.
+	 */
+	if (nid == NUMA_NO_NODE)
+		node = early_pfn_to_nid(PHYS_PFN(virt_to_phys(m)));
+
 	INIT_LIST_HEAD(&m->list);
 	list_add(&m->list, &huge_boot_pages[node]);
 	m->hstate = h;
@@ -3306,8 +3317,8 @@ static void __init prep_and_add_bootmem_folios(struct hstate *h,
 	}
 }
 
-static bool __init hugetlb_bootmem_page_zones_valid(int nid,
-					struct huge_bootmem_page *m)
+bool __init hugetlb_bootmem_page_zones_valid(int nid,
+					     struct huge_bootmem_page *m)
 {
 	unsigned long start_pfn;
 	bool valid;
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index be6b33ecbc8e..9a99dfa3c495 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -743,6 +743,149 @@ void hugetlb_vmemmap_optimize_bootmem_folios(struct hstate *h, struct list_head
 	__hugetlb_vmemmap_optimize_folios(h, folio_list, true);
 }
 
+#ifdef CONFIG_SPARSEMEM_VMEMMAP_PREINIT
+
+/* Return true if a bootmem allocated HugeTLB page should be pre-HVO-ed */
+static bool vmemmap_should_optimize_bootmem_page(struct huge_bootmem_page *m)
+{
+	unsigned long section_size, psize, pmd_vmemmap_size;
+	phys_addr_t paddr;
+
+	if (!READ_ONCE(vmemmap_optimize_enabled))
+		return false;
+
+	if (!hugetlb_vmemmap_optimizable(m->hstate))
+		return false;
+
+	psize = huge_page_size(m->hstate);
+	paddr = virt_to_phys(m);
+
+	/*
+	 * Pre-HVO only works if the bootmem huge page
+	 * is aligned to the section size.
+	 */
+	section_size = (1UL << PA_SECTION_SHIFT);
+	if (!IS_ALIGNED(paddr, section_size) ||
+	    !IS_ALIGNED(psize, section_size))
+		return false;
+
+	/*
+	 * The pre-HVO code does not deal with splitting PMDs,
+	 * so the bootmem page must be aligned to the number
+	 * of base pages that can be mapped with one vmemmap PMD.
+	 */
+	pmd_vmemmap_size = (PMD_SIZE / (sizeof(struct page))) << PAGE_SHIFT;
+	if (!IS_ALIGNED(paddr, pmd_vmemmap_size) ||
+	    !IS_ALIGNED(psize, pmd_vmemmap_size))
+		return false;
+
+	return true;
+}
+
+/*
+ * Initialize memmap section for a gigantic page, HVO-style.
+ */
+void __init hugetlb_vmemmap_init_early(int nid)
+{
+	unsigned long psize, paddr, section_size;
+	unsigned long ns, i, pnum, pfn, nr_pages;
+	unsigned long start, end;
+	struct huge_bootmem_page *m = NULL;
+	void *map;
+
+	/*
+	 * Nothing to do if bootmem pages were not allocated
+	 * early in boot, or if HVO wasn't enabled in the
+	 * first place.
+	 */
+	if (!hugetlb_bootmem_allocated())
+		return;
+
+	if (!READ_ONCE(vmemmap_optimize_enabled))
+		return;
+
+	section_size = (1UL << PA_SECTION_SHIFT);
+
+	list_for_each_entry(m, &huge_boot_pages[nid], list) {
+		if (!vmemmap_should_optimize_bootmem_page(m))
+			continue;
+
+		nr_pages = pages_per_huge_page(m->hstate);
+		psize = nr_pages << PAGE_SHIFT;
+		paddr = virt_to_phys(m);
+		pfn = PHYS_PFN(paddr);
+		map = pfn_to_page(pfn);
+		start = (unsigned long)map;
+		end = start + nr_pages * sizeof(struct page);
+
+		if (vmemmap_populate_hvo(start, end, nid,
+					 HUGETLB_VMEMMAP_RESERVE_SIZE) < 0)
+			continue;
+
+		memmap_boot_pages_add(HUGETLB_VMEMMAP_RESERVE_SIZE / PAGE_SIZE);
+
+		pnum = pfn_to_section_nr(pfn);
+		ns = psize / section_size;
+
+		for (i = 0; i < ns; i++) {
+			sparse_init_early_section(nid, map, pnum,
+						  SECTION_IS_VMEMMAP_PREINIT);
+			map += section_map_size();
+			pnum++;
+		}
+
+		m->flags |= HUGE_BOOTMEM_HVO;
+	}
+}
+
+void __init hugetlb_vmemmap_init_late(int nid)
+{
+	struct huge_bootmem_page *m, *tm;
+	unsigned long phys, nr_pages, start, end;
+	unsigned long pfn, nr_mmap;
+	struct hstate *h;
+	void *map;
+
+	if (!hugetlb_bootmem_allocated())
+		return;
+
+	if (!READ_ONCE(vmemmap_optimize_enabled))
+		return;
+
+	list_for_each_entry_safe(m, tm, &huge_boot_pages[nid], list) {
+		if (!(m->flags & HUGE_BOOTMEM_HVO))
+			continue;
+
+		phys = virt_to_phys(m);
+		h = m->hstate;
+		pfn = PHYS_PFN(phys);
+		nr_pages = pages_per_huge_page(h);
+
+		if (!hugetlb_bootmem_page_zones_valid(nid, m)) {
+			/*
+			 * Oops, the hugetlb page spans multiple zones.
+			 * Remove it from the list, and undo HVO.
+			 */
+			list_del(&m->list);
+
+			map = pfn_to_page(pfn);
+
+			start = (unsigned long)map;
+			end = start + nr_pages * sizeof(struct page);
+
+			vmemmap_undo_hvo(start, end, nid,
+					 HUGETLB_VMEMMAP_RESERVE_SIZE);
+			nr_mmap = end - start - HUGETLB_VMEMMAP_RESERVE_SIZE;
+			memmap_boot_pages_add(DIV_ROUND_UP(nr_mmap, PAGE_SIZE));
+
+			memblock_phys_free(phys, huge_page_size(h));
+			continue;
+		} else
+			m->flags |= HUGE_BOOTMEM_ZONES_VALID;
+	}
+}
+#endif
+
 static const struct ctl_table hugetlb_vmemmap_sysctls[] = {
 	{
 		.procname = "hugetlb_optimize_vmemmap",
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 62d3d645a793..18b490825215 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -9,6 +9,8 @@
 #ifndef _LINUX_HUGETLB_VMEMMAP_H
 #define _LINUX_HUGETLB_VMEMMAP_H
 #include
+#include
+#include
 
 /*
  * Reserve one vmemmap page, all vmemmap addresses are mapped to it. See
@@ -25,6 +27,10 @@ long hugetlb_vmemmap_restore_folios(const struct hstate *h,
 void hugetlb_vmemmap_optimize_folio(const struct hstate *h, struct folio *folio);
 void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list);
 void hugetlb_vmemmap_optimize_bootmem_folios(struct hstate *h, struct list_head *folio_list);
+#ifdef CONFIG_SPARSEMEM_VMEMMAP_PREINIT
+void hugetlb_vmemmap_init_early(int nid);
+void hugetlb_vmemmap_init_late(int nid);
+#endif
 
 
 static inline unsigned int hugetlb_vmemmap_size(const struct hstate *h)
@@ -71,6 +77,14 @@ static inline void hugetlb_vmemmap_optimize_bootmem_folios(struct hstate *h,
 {
 }
 
+static inline void hugetlb_vmemmap_init_early(int nid)
+{
+}
+
+static inline void hugetlb_vmemmap_init_late(int nid)
+{
+}
+
 static inline unsigned int hugetlb_vmemmap_optimizable_size(const struct hstate *h)
 {
 	return 0;
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 8cc848c4b17c..fd2ab5118e13 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -32,6 +32,8 @@
 #include
 #include
 
+#include "hugetlb_vmemmap.h"
+
 /*
  * Flags for vmemmap_populate_range and friends.
 */
@@ -594,6 +596,7 @@ struct page * __meminit __populate_section_memmap(unsigned long pfn,
 */
 void __init sparse_vmemmap_init_nid_early(int nid)
 {
+	hugetlb_vmemmap_init_early(nid);
 }
 
 /*
@@ -604,5 +607,6 @@ void __init sparse_vmemmap_init_nid_early(int nid)
 */
 void __init sparse_vmemmap_init_nid_late(int nid)
 {
+	hugetlb_vmemmap_init_late(nid);
 }
 #endif
-- 
2.48.1.711.g2feabab25a-goog
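For orientation, the per-node boot sequence that results from the hooks
above (a summary sketch, not literal code):

	hugetlb_bootmem_alloc()                /* gigantic pages from memblock */
	sparse_vmemmap_init_nid_early(nid)
	  -> hugetlb_vmemmap_init_early(nid)   /* populate memmap HVO-style */
	... zone and memmap initialization ...
	sparse_vmemmap_init_nid_late(nid)
	  -> hugetlb_vmemmap_init_late(nid)    /* check zones, undo HVO if invalid */
	gather_bootmem_prealloc()              /* skips already-HVO-ed folios */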
From nobody Fri Dec 19 16:07:21 2025
Date: Fri, 28 Feb 2025 18:29:21 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
Message-ID: <20250228182928.2645936-21-fvdl@google.com>
Subject: [PATCH v5 20/27] x86/setup: call hugetlb_bootmem_alloc early
From: Frank van der Linden
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com, roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com, Frank van der Linden, Dave Hansen, Andy Lutomirski, Peter Zijlstra

Call hugetlb_bootmem_alloc in an earlier spot in setup, after
hugetlb_cma_reserve. This will make vmemmap preinit of the sections
covered by the allocated hugetlb pages possible.
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Peter Zijlstra
Signed-off-by: Frank van der Linden
---
 arch/x86/kernel/setup.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index cebee310e200..ff8604007b08 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1108,8 +1108,10 @@ void __init setup_arch(char **cmdline_p)
 	initmem_init();
 	dma_contiguous_reserve(max_pfn_mapped << PAGE_SHIFT);
 
-	if (boot_cpu_has(X86_FEATURE_GBPAGES))
+	if (boot_cpu_has(X86_FEATURE_GBPAGES)) {
 		hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
+		hugetlb_bootmem_alloc();
+	}
 
 	/*
 	 * Reserve memory for crash kernel after SRAT is parsed so that it
-- 
2.48.1.711.g2feabab25a-goog

From nobody Fri Dec 19 16:07:21 2025
Date: Fri, 28 Feb 2025 18:29:22 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
Message-ID: <20250228182928.2645936-22-fvdl@google.com>
Subject: [PATCH v5 21/27] x86/mm: set ARCH_WANT_HUGETLB_VMEMMAP_PREINIT
From: Frank van der Linden
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com, roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com, Frank van der Linden, Johannes Weiner

Now that hugetlb bootmem pages are allocated earlier, and available
for section preinit (HVO-style), set ARCH_WANT_HUGETLB_VMEMMAP_PREINIT
for x86_64, so that it can be done. This enables pre-HVO on x86_64.
Cc: Johannes Weiner
Signed-off-by: Frank van der Linden
---
 arch/x86/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index be2c311f5118..384e54b23d50 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -146,6 +146,7 @@ config X86
 	select ARCH_WANT_LD_ORPHAN_WARN
 	select ARCH_WANT_OPTIMIZE_DAX_VMEMMAP	if X86_64
 	select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP	if X86_64
+	select ARCH_WANT_HUGETLB_VMEMMAP_PREINIT if X86_64
 	select ARCH_WANTS_THP_SWAP	if X86_64
 	select ARCH_HAS_PARANOID_L1D_FLUSH
 	select BUILDTIME_TABLE_SORT
-- 
2.48.1.711.g2feabab25a-goog

From nobody Fri Dec 19 16:07:21 2025
Date: Fri, 28 Feb 2025 18:29:23 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
Message-ID: <20250228182928.2645936-23-fvdl@google.com>
Subject: [PATCH v5 22/27] mm/cma: simplify zone intersection check
From: Frank van der Linden
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com, roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com, Frank van der Linden

cma_activate_area walks all pages in the area, checking their zone
individually to see if the area resides in more than one zone.

Make this a little more efficient by using the recently introduced
pfn_range_intersects_zones() function. Store the NUMA node id (if any)
in the cma structure to facilitate this.

Signed-off-by: Frank van der Linden
---
 mm/cma.c | 13 ++++++-------
 mm/cma.h |  2 ++
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index 8dc46bfa3819..61ad4fd2f62d 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -103,7 +103,6 @@ static void __init cma_activate_area(struct cma *cma)
 {
 	unsigned long pfn, base_pfn;
 	int allocrange, r;
-	struct zone *zone;
 	struct cma_memrange *cmr;
 
 	for (allocrange = 0; allocrange < cma->nranges; allocrange++) {
@@ -124,12 +123,8 @@ static void __init cma_activate_area(struct cma *cma)
 		 * CMA resv range to be in the same zone.
 		 */
 		WARN_ON_ONCE(!pfn_valid(base_pfn));
-		zone = page_zone(pfn_to_page(base_pfn));
-		for (pfn = base_pfn + 1; pfn < base_pfn + cmr->count; pfn++) {
-			WARN_ON_ONCE(!pfn_valid(pfn));
-			if (page_zone(pfn_to_page(pfn)) != zone)
-				goto cleanup;
-		}
+		if (pfn_range_intersects_zones(cma->nid, base_pfn, cmr->count))
+			goto cleanup;
 
 		for (pfn = base_pfn; pfn < base_pfn + cmr->count;
 		     pfn += pageblock_nr_pages)
@@ -261,6 +256,7 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 	cma->ranges[0].base_pfn = PFN_DOWN(base);
 	cma->ranges[0].count = cma->count;
 	cma->nranges = 1;
+	cma->nid = NUMA_NO_NODE;
 
 	*res_cma = cma;
 
@@ -497,6 +493,7 @@ int __init cma_declare_contiguous_multi(phys_addr_t total_size,
 	}
 
 	cma->nranges = nr;
+	cma->nid = nid;
 	*res_cma = cma;
 
 out:
@@ -684,6 +681,8 @@ static int __init __cma_declare_contiguous_nid(phys_addr_t base,
 	if (ret)
 		memblock_phys_free(base, size);
 
+	(*res_cma)->nid = nid;
+
 	return ret;
 }
 
diff --git a/mm/cma.h b/mm/cma.h
index 5f39dd1aac91..ff79dba5508c 100644
--- a/mm/cma.h
+++ b/mm/cma.h
@@ -50,6 +50,8 @@ struct cma {
 	struct cma_kobject *cma_kobj;
 #endif
 	bool reserve_pages_on_error;
+	/* NUMA node (NUMA_NO_NODE if unspecified) */
+	int nid;
 };
 
 extern struct cma cma_areas[MAX_CMA_AREAS];
-- 
2.48.1.711.g2feabab25a-goog
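Conceptually, and only as a sketch (this is not the kernel's
implementation of pfn_range_intersects_zones()), the predicate being
relied on here is: a pfn range "intersects zones" unless a single zone
on the given node spans all of it.

	/* Conceptual sketch only, not the actual implementation. */
	static bool sketch_intersects_zones(int nid, unsigned long start_pfn,
					    unsigned long nr_pages)
	{
		unsigned long last_pfn = start_pfn + nr_pages - 1;
		struct zone *zone;

		for_each_zone(zone) {
			if (nid != NUMA_NO_NODE && zone_to_nid(zone) != nid)
				continue;
			if (zone_spans_pfn(zone, start_pfn) &&
			    zone_spans_pfn(zone, last_pfn))
				return false;	/* wholly inside one zone */
		}
		return true;
	}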
From nobody Fri Dec 19 16:07:21 2025
Date: Fri, 28 Feb 2025 18:29:24 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
Message-ID: <20250228182928.2645936-24-fvdl@google.com>
Subject: [PATCH v5 23/27] mm/cma: introduce a cma validate function
From: Frank van der Linden
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com, roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com, Frank van der Linden

Define a function to check if a CMA area is valid, meaning that none
of its ranges cross zone boundaries. Store the result in the newly
created flags for each CMA area, so that repeated calls only do the
work once.

This allows for checking the validity of a CMA area early, which is
needed later in order to be able to allocate hugetlb bootmem pages
from it with pre-HVO.
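Two flag bits are used rather than one because three states must be
told apart; a minimal sketch of the pattern (expensive_zone_walk() is a
hypothetical stand-in for the per-range check):

	static bool cached_zones_ok(unsigned long *flags)
	{
		/* neither bit set: not checked yet */
		if (test_bit(CMA_ZONES_VALID, flags))
			return true;
		if (test_bit(CMA_ZONES_INVALID, flags))
			return false;

		if (expensive_zone_walk())
			set_bit(CMA_ZONES_VALID, flags);
		else
			set_bit(CMA_ZONES_INVALID, flags);

		return test_bit(CMA_ZONES_VALID, flags);
	}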
Signed-off-by: Frank van der Linden
---
 include/linux/cma.h |  5 ++++
 mm/cma.c            | 60 ++++++++++++++++++++++++++++++++++++---------
 mm/cma.h            |  8 +++++-
 3 files changed, 60 insertions(+), 13 deletions(-)

diff --git a/include/linux/cma.h b/include/linux/cma.h
index 03d85c100dcc..62d9c1cf6326 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -60,6 +60,7 @@ extern void cma_reserve_pages_on_error(struct cma *cma);
 #ifdef CONFIG_CMA
 struct folio *cma_alloc_folio(struct cma *cma, int order, gfp_t gfp);
 bool cma_free_folio(struct cma *cma, const struct folio *folio);
+bool cma_validate_zones(struct cma *cma);
 #else
 static inline struct folio *cma_alloc_folio(struct cma *cma, int order, gfp_t gfp)
 {
@@ -70,6 +71,10 @@ static inline bool cma_free_folio(struct cma *cma, const struct folio *folio)
 {
 	return false;
 }
+static inline bool cma_validate_zones(struct cma *cma)
+{
+	return false;
+}
 #endif
 
 #endif
diff --git a/mm/cma.c b/mm/cma.c
index 61ad4fd2f62d..5e1d169e24fa 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -99,6 +99,49 @@ static void cma_clear_bitmap(struct cma *cma, const struct cma_memrange *cmr,
 	spin_unlock_irqrestore(&cma->lock, flags);
 }
 
+/*
+ * Check if a CMA area contains no ranges that intersect with
+ * multiple zones. Store the result in the flags in case
+ * this gets called more than once.
+ */
+bool cma_validate_zones(struct cma *cma)
+{
+	int r;
+	unsigned long base_pfn;
+	struct cma_memrange *cmr;
+	bool valid_bit_set;
+
+	/*
+	 * If already validated, return result of previous check.
+	 * Either the valid or invalid bit will be set if this
+	 * check has already been done. If neither is set, the
+	 * check has not been performed yet.
+	 */
+	valid_bit_set = test_bit(CMA_ZONES_VALID, &cma->flags);
+	if (valid_bit_set || test_bit(CMA_ZONES_INVALID, &cma->flags))
+		return valid_bit_set;
+
+	for (r = 0; r < cma->nranges; r++) {
+		cmr = &cma->ranges[r];
+		base_pfn = cmr->base_pfn;
+
+		/*
+		 * alloc_contig_range() requires the pfn range specified
+		 * to be in the same zone. Simplify by forcing the entire
+		 * CMA resv range to be in the same zone.
+		 */
+		WARN_ON_ONCE(!pfn_valid(base_pfn));
+		if (pfn_range_intersects_zones(cma->nid, base_pfn, cmr->count)) {
+			set_bit(CMA_ZONES_INVALID, &cma->flags);
+			return false;
+		}
+	}
+
+	set_bit(CMA_ZONES_VALID, &cma->flags);
+
+	return true;
+}
+
 static void __init cma_activate_area(struct cma *cma)
 {
 	unsigned long pfn, base_pfn;
@@ -113,19 +156,12 @@ static void __init cma_activate_area(struct cma *cma)
 			goto cleanup;
 	}
 
+	if (!cma_validate_zones(cma))
+		goto cleanup;
+
 	for (r = 0; r < cma->nranges; r++) {
 		cmr = &cma->ranges[r];
 		base_pfn = cmr->base_pfn;
-
-		/*
-		 * alloc_contig_range() requires the pfn range specified
-		 * to be in the same zone. Simplify by forcing the entire
-		 * CMA resv range to be in the same zone.
-		 */
-		WARN_ON_ONCE(!pfn_valid(base_pfn));
-		if (pfn_range_intersects_zones(cma->nid, base_pfn, cmr->count))
-			goto cleanup;
-
 		for (pfn = base_pfn; pfn < base_pfn + cmr->count;
 		     pfn += pageblock_nr_pages)
 			init_cma_reserved_pageblock(pfn_to_page(pfn));
@@ -145,7 +181,7 @@ static void __init cma_activate_area(struct cma *cma)
 		bitmap_free(cma->ranges[r].bitmap);
 
 	/* Expose all pages to the buddy, they are useless for CMA.
 	 */
-	if (!cma->reserve_pages_on_error) {
+	if (!test_bit(CMA_RESERVE_PAGES_ON_ERROR, &cma->flags)) {
 		for (r = 0; r < allocrange; r++) {
 			cmr = &cma->ranges[r];
 			for (pfn = cmr->base_pfn;
@@ -172,7 +208,7 @@ core_initcall(cma_init_reserved_areas);
 
 void __init cma_reserve_pages_on_error(struct cma *cma)
 {
-	cma->reserve_pages_on_error = true;
+	set_bit(CMA_RESERVE_PAGES_ON_ERROR, &cma->flags);
 }
 
 static int __init cma_new_area(const char *name, phys_addr_t size,
diff --git a/mm/cma.h b/mm/cma.h
index ff79dba5508c..bddc84b3cd96 100644
--- a/mm/cma.h
+++ b/mm/cma.h
@@ -49,11 +49,17 @@ struct cma {
 	/* kobject requires dynamic object */
 	struct cma_kobject *cma_kobj;
 #endif
-	bool reserve_pages_on_error;
+	unsigned long flags;
 	/* NUMA node (NUMA_NO_NODE if unspecified) */
 	int nid;
 };
 
+enum cma_flags {
+	CMA_RESERVE_PAGES_ON_ERROR,
+	CMA_ZONES_VALID,
+	CMA_ZONES_INVALID,
+};
+
 extern struct cma cma_areas[MAX_CMA_AREAS];
 extern unsigned int cma_area_count;
 
-- 
2.48.1.711.g2feabab25a-goog

From nobody Fri Dec 19 16:07:21 2025
Date: Fri, 28 Feb 2025 18:29:25 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
Message-ID: <20250228182928.2645936-25-fvdl@google.com>
Subject: [PATCH v5 24/27] mm/cma: introduce interface for early reservations
From: Frank van der Linden
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com, roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com, Frank van der Linden

It can be desirable to reserve memory in a CMA area before it is
activated, early in boot. Such reservations would effectively be
memblock allocations, but they can be returned to the CMA area later.
This functionality can be used to allow hugetlb bootmem allocations
from a hugetlb CMA area.

A new interface, cma_reserve_early(), is introduced. This allows for
pageblock-aligned reservations. These reservations are skipped during
the initial handoff of pages in a CMA area to the buddy allocator. The
caller is responsible for making sure that the page structures are set
up, and that the migrate type is set correctly, as with other memblock
allocations that stick around. If the CMA area fails to activate
(because it intersects with multiple zones), the reserved memory is not
given to the buddy allocator; the caller needs to take care of that.
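A hypothetical caller sketch (hugetlb, the intended user later in this
series, follows this shape; the function name here is illustrative):

	void __init reserve_gigantic_from_cma(struct cma *cma, struct hstate *h)
	{
		void *m;

		/* Only works before the area is activated. */
		m = cma_reserve_early(cma, huge_page_size(h));
		if (!m)
			return;	/* fall back to a plain memblock allocation */

		/*
		 * The caller now owns the range: set up the page structures
		 * and, once the memmap is up, call init_cma_pageblock() on
		 * each pageblock so migrate type and CMA stats are correct.
		 */
	}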
Signed-off-by: Frank van der Linden --- mm/cma.c | 83 ++++++++++++++++++++++++++++++++++++++++++++++----- mm/cma.h | 8 +++++ mm/internal.h | 16 ++++++++++ mm/mm_init.c | 9 ++++++ 4 files changed, 109 insertions(+), 7 deletions(-) diff --git a/mm/cma.c b/mm/cma.c index 5e1d169e24fa..09322b8284bd 100644 --- a/mm/cma.c +++ b/mm/cma.c @@ -144,9 +144,10 @@ bool cma_validate_zones(struct cma *cma) =20 static void __init cma_activate_area(struct cma *cma) { - unsigned long pfn, base_pfn; + unsigned long pfn, end_pfn; int allocrange, r; struct cma_memrange *cmr; + unsigned long bitmap_count, count; =20 for (allocrange =3D 0; allocrange < cma->nranges; allocrange++) { cmr =3D &cma->ranges[allocrange]; @@ -161,8 +162,13 @@ static void __init cma_activate_area(struct cma *cma) =20 for (r =3D 0; r < cma->nranges; r++) { cmr =3D &cma->ranges[r]; - base_pfn =3D cmr->base_pfn; - for (pfn =3D base_pfn; pfn < base_pfn + cmr->count; + if (cmr->early_pfn !=3D cmr->base_pfn) { + count =3D cmr->early_pfn - cmr->base_pfn; + bitmap_count =3D cma_bitmap_pages_to_bits(cma, count); + bitmap_set(cmr->bitmap, 0, bitmap_count); + } + + for (pfn =3D cmr->early_pfn; pfn < cmr->base_pfn + cmr->count; pfn +=3D pageblock_nr_pages) init_cma_reserved_pageblock(pfn_to_page(pfn)); } @@ -173,6 +179,7 @@ static void __init cma_activate_area(struct cma *cma) INIT_HLIST_HEAD(&cma->mem_head); spin_lock_init(&cma->mem_head_lock); #endif + set_bit(CMA_ACTIVATED, &cma->flags); =20 return; =20 @@ -184,9 +191,8 @@ static void __init cma_activate_area(struct cma *cma) if (!test_bit(CMA_RESERVE_PAGES_ON_ERROR, &cma->flags)) { for (r =3D 0; r < allocrange; r++) { cmr =3D &cma->ranges[r]; - for (pfn =3D cmr->base_pfn; - pfn < cmr->base_pfn + cmr->count; - pfn++) + end_pfn =3D cmr->base_pfn + cmr->count; + for (pfn =3D cmr->early_pfn; pfn < end_pfn; pfn++) free_reserved_page(pfn_to_page(pfn)); } } @@ -290,6 +296,7 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys= _addr_t size, return ret; =20 cma->ranges[0].base_pfn =3D PFN_DOWN(base); + cma->ranges[0].early_pfn =3D PFN_DOWN(base); cma->ranges[0].count =3D cma->count; cma->nranges =3D 1; cma->nid =3D NUMA_NO_NODE; @@ -509,6 +516,7 @@ int __init cma_declare_contiguous_multi(phys_addr_t tot= al_size, nr, (u64)mlp->base, (u64)mlp->base + size); cmrp =3D &cma->ranges[nr++]; cmrp->base_pfn =3D PHYS_PFN(mlp->base); + cmrp->early_pfn =3D cmrp->base_pfn; cmrp->count =3D size >> PAGE_SHIFT; =20 sizeleft -=3D size; @@ -540,7 +548,6 @@ int __init cma_declare_contiguous_multi(phys_addr_t tot= al_size, pr_info("Reserved %lu MiB in %d range%s\n", (unsigned long)total_size / SZ_1M, nr, nr > 1 ? "s" : ""); - return ret; } =20 @@ -1034,3 +1041,65 @@ bool cma_intersects(struct cma *cma, unsigned long s= tart, unsigned long end) =20 return false; } + +/* + * Very basic function to reserve memory from a CMA area that has not + * yet been activated. This is expected to be called early, when the + * system is single-threaded, so there is no locking. The alignment + * checking is restrictive - only pageblock-aligned areas + * (CMA_MIN_ALIGNMENT_BYTES) may be reserved through this function. + * This keeps things simple, and is enough for the current use case. + * + * The CMA bitmaps have not yet been allocated, so just start + * reserving from the bottom up, using a PFN to keep track + * of what has been reserved. Unreserving is not possible. + * + * The caller is responsible for initializing the page structures + * in the area properly, since this just points to memblock-allocated + * memory. 
The caller should subsequently use init_cma_pageblock to
+ * set the migrate type and CMA stats for the pageblocks that were
+ * reserved.
+ *
+ * If the CMA area fails to activate later, memory obtained through
+ * this interface is not handed to the page allocator; this is
+ * the responsibility of the caller (e.g. like normal memblock-allocated
+ * memory).
+ */
+void __init *cma_reserve_early(struct cma *cma, unsigned long size)
+{
+	int r;
+	struct cma_memrange *cmr;
+	unsigned long available;
+	void *ret = NULL;
+
+	if (!cma || !cma->count)
+		return NULL;
+	/*
+	 * Can only be called early in init.
+	 */
+	if (test_bit(CMA_ACTIVATED, &cma->flags))
+		return NULL;
+
+	if (!IS_ALIGNED(size, CMA_MIN_ALIGNMENT_BYTES))
+		return NULL;
+
+	if (!IS_ALIGNED(size, (PAGE_SIZE << cma->order_per_bit)))
+		return NULL;
+
+	size >>= PAGE_SHIFT;
+
+	if (size > cma->available_count)
+		return NULL;
+
+	for (r = 0; r < cma->nranges; r++) {
+		cmr = &cma->ranges[r];
+		available = cmr->count - (cmr->early_pfn - cmr->base_pfn);
+		if (size <= available) {
+			ret = phys_to_virt(PFN_PHYS(cmr->early_pfn));
+			cmr->early_pfn += size;
+			cma->available_count -= size;
+			return ret;
+		}
+	}
+
+	return ret;
+}
diff --git a/mm/cma.h b/mm/cma.h
index bddc84b3cd96..df7fc623b7a6 100644
--- a/mm/cma.h
+++ b/mm/cma.h
@@ -16,9 +16,16 @@ struct cma_kobject {
  * and the total amount of memory requested, while smaller than the total
  * amount of memory available, is large enough that it doesn't fit in a
  * single physical memory range because of memory holes.
+ *
+ * Fields:
+ * @base_pfn:  first PFN of the range
+ * @early_pfn: first PFN not reserved through cma_reserve_early
+ * @count:     size of the range, in pages
+ * @bitmap:    bitmap of allocated (1 << order_per_bit)-sized chunks.
  */
 struct cma_memrange {
 	unsigned long base_pfn;
+	unsigned long early_pfn;
 	unsigned long count;
 	unsigned long *bitmap;
 #ifdef CONFIG_CMA_DEBUGFS
@@ -58,6 +65,7 @@ enum cma_flags {
 	CMA_RESERVE_PAGES_ON_ERROR,
 	CMA_ZONES_VALID,
 	CMA_ZONES_INVALID,
+	CMA_ACTIVATED,
 };
 
 extern struct cma cma_areas[MAX_CMA_AREAS];
diff --git a/mm/internal.h b/mm/internal.h
index 63fda9bb9426..8318c8e6e589 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -848,6 +848,22 @@ void init_cma_reserved_pageblock(struct page *page);
 
 #endif /* CONFIG_COMPACTION || CONFIG_CMA */
 
+struct cma;
+
+#ifdef CONFIG_CMA
+void *cma_reserve_early(struct cma *cma, unsigned long size);
+void init_cma_pageblock(struct page *page);
+#else
+static inline void *cma_reserve_early(struct cma *cma, unsigned long size)
+{
+	return NULL;
+}
+static inline void init_cma_pageblock(struct page *page)
+{
+}
+#endif
+
+
 int find_suitable_fallback(struct free_area *area, unsigned int order,
 			int migratetype, bool only_stealable, bool *can_steal);
 
diff --git a/mm/mm_init.c b/mm/mm_init.c
index f7d5b4fe1ae9..f31260fd393e 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2263,6 +2263,15 @@ void __init init_cma_reserved_pageblock(struct page *page)
 	adjust_managed_page_count(page, pageblock_nr_pages);
 	page_zone(page)->cma_pages += pageblock_nr_pages;
 }
+/*
+ * Similar to above, but only set the migrate type and stats.
+ */
+void __init init_cma_pageblock(struct page *page)
+{
+	set_pageblock_migratetype(page, MIGRATE_CMA);
+	adjust_managed_page_count(page, pageblock_nr_pages);
+	page_zone(page)->cma_pages += pageblock_nr_pages;
+}
 #endif
 
 void set_zone_contiguous(struct zone *zone)
-- 
2.48.1.711.g2feabab25a-goog
AJvYcCV3VIhGxET0NURHPpOXjSeR1BAqi5Bf23B9kZiTaZnBvDaC67rkbfpIesKOLfxzkZCaR3Uhx3XrT73ld7I=@vger.kernel.org X-Gm-Message-State: AOJu0YwRU495WswhwSVbnSuiwwvw8lbbHaejup79VB6HtOTq7jX/Xlxj IWc0huD+gY6Xo+jx4pttdtKlBnaB2TOjgFbI5/Gw1RRbnGElu59ngXrWm+gFdwjsEX6VJQ== X-Google-Smtp-Source: AGHT+IGyWQ5eAZFcGmvk4D9+09d3FgZW3zWgk6sQSpIGwMbzM1TQ8agrO5NPaNt1FY233n8g7xNvD2p7 X-Received: from pjyp12.prod.google.com ([2002:a17:90a:e70c:b0:2d8:8340:8e46]) (user=fvdl job=prod-delivery.src-stubby-dispatcher) by 2002:a17:90b:3c88:b0:2ee:ee77:2263 with SMTP id 98e67ed59e1d1-2febab2ecd6mr7718548a91.7.1740767429126; Fri, 28 Feb 2025 10:30:29 -0800 (PST) Date: Fri, 28 Feb 2025 18:29:26 +0000 In-Reply-To: <20250228182928.2645936-1-fvdl@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250228182928.2645936-1-fvdl@google.com> X-Mailer: git-send-email 2.48.1.711.g2feabab25a-goog Message-ID: <20250228182928.2645936-26-fvdl@google.com> Subject: [PATCH v5 25/27] mm/hugetlb: add hugetlb_cma_only cmdline option From: Frank van der Linden To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com, roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com, Frank van der Linden Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Add an option to force hugetlb gigantic pages to be allocated using CMA only (if hugetlb_cma is enabled). This avoids a fallback to allocation from the rest of system memory if the CMA allocation fails. This makes the size of hugetlb_cma a hard upper boundary for gigantic hugetlb page allocations. This is useful because, with a large CMA area, the kernel's unmovable allocations will have less room to work with and it is undesirable for new hugetlb gigantic page allocations to be done from that remaining area. It will eat in to the space available for unmovable allocations, leading to unwanted system behavior (OOMs because the kernel fails to do unmovable allocations). So, with this enabled, an administrator can force a hard upper bound for runtime gigantic page allocations, and have more predictable system behavior. Signed-off-by: Frank van der Linden --- Documentation/admin-guide/kernel-parameters.txt | 7 +++++++ mm/hugetlb.c | 14 ++++++++++++++ 2 files changed, 21 insertions(+) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentatio= n/admin-guide/kernel-parameters.txt index ae21d911d1c7..491628ac071a 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -1892,6 +1892,13 @@ hugepages using the CMA allocator. If enabled, the boot-time allocation of gigantic hugepages is skipped. =20 + hugetlb_cma_only=3D + [HW,CMA,EARLY] When allocating new HugeTLB pages, only + try to allocate from the CMA areas. + + This option does nothing if hugetlb_cma=3D is not also + specified. + hugetlb_free_vmemmap=3D [KNL] Requires CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP enabled. 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 634dc53f1e3e..0b483c466656 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -59,6 +59,7 @@ struct hstate hstates[HUGE_MAX_HSTATE];
 static struct cma *hugetlb_cma[MAX_NUMNODES];
 static unsigned long hugetlb_cma_size_in_node[MAX_NUMNODES] __initdata;
 #endif
+static bool hugetlb_cma_only;
 static unsigned long hugetlb_cma_size __initdata;
 
 __initdata struct list_head huge_boot_pages[MAX_NUMNODES];
@@ -1510,6 +1511,9 @@ static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask,
 		}
 	}
 #endif
 	if (!folio) {
+		if (hugetlb_cma_only)
+			return NULL;
+
 		folio = folio_alloc_gigantic(order, gfp_mask, nid, nodemask);
 		if (!folio)
 			return NULL;
@@ -4738,6 +4742,9 @@ static __init void hugetlb_parse_params(void)
 
 		hcp->setup(hcp->val);
 	}
+
+	if (!hugetlb_cma_size)
+		hugetlb_cma_only = false;
 }
 
 /*
@@ -7850,6 +7857,13 @@ static int __init cmdline_parse_hugetlb_cma(char *p)
 
 early_param("hugetlb_cma", cmdline_parse_hugetlb_cma);
 
+static int __init cmdline_parse_hugetlb_cma_only(char *p)
+{
+	return kstrtobool(p, &hugetlb_cma_only);
+}
+
+early_param("hugetlb_cma_only", cmdline_parse_hugetlb_cma_only);
+
 void __init hugetlb_cma_reserve(int order)
 {
 	unsigned long size, reserved, per_node;
-- 
2.48.1.711.g2feabab25a-goog
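For illustration, a boot command line that makes the CMA area a hard
cap for 1G pages could look like this (sizes are arbitrary examples;
kstrtobool() also accepts y/on):

	hugetlb_cma=2G hugetlb_cma_only=1 hugepagesz=1G

Runtime allocations via
/sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages would then
only succeed as long as the CMA areas have room.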
From nobody Fri Dec 19 16:07:21 2025
Date: Fri, 28 Feb 2025 18:29:27 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
References: <20250228182928.2645936-1-fvdl@google.com>
Message-ID: <20250228182928.2645936-27-fvdl@google.com>
Subject: [PATCH v5 26/27] mm/hugetlb: enable bootmem allocation from CMA areas
From: Frank van der Linden
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com,
 roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com,
 Frank van der Linden, Madhavan Srinivasan, Michael Ellerman,
 linuxppc-dev@lists.ozlabs.org

If hugetlb_cma_only is enabled, we know that hugetlb pages can only be
allocated from CMA. Now that there is an interface to do early
reservations from a CMA area (returning memblock memory), it can be
used to allocate hugetlb pages from CMA. This also allows for doing
pre-HVO on these pages (if enabled).

Make sure to initialize the page structures and associated data
correctly. Create a flag to signal that a hugetlb page has been
allocated from CMA, to make things a little easier.

Some configurations of powerpc have a special hugetlb bootmem
allocator, so introduce a boolean helper, arch_has_huge_bootmem_alloc(),
that returns true if such an allocator is present. In that case, CMA
bootmem allocations can't be used, so check that function before
trying.
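Other architectures would opt out the same way; a hypothetical sketch
for some arch header (not part of this series, and
foo_firmware_owns_hugetlb() is a made-up predicate):

	/* arch/foo/include/asm/hugetlb.h (hypothetical) */
	#define arch_has_huge_bootmem_alloc arch_has_huge_bootmem_alloc

	static inline bool arch_has_huge_bootmem_alloc(void)
	{
		/* This arch allocates hugetlb bootmem itself; skip early CMA. */
		return foo_firmware_owns_hugetlb();
	}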
Cc: Madhavan Srinivasan Cc: Michael Ellerman Cc: linuxppc-dev@lists.ozlabs.org Signed-off-by: Frank van der Linden --- arch/powerpc/include/asm/book3s/64/hugetlb.h | 6 + include/linux/hugetlb.h | 17 ++ mm/hugetlb.c | 168 ++++++++++++++----- 3 files changed, 152 insertions(+), 39 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/hugetlb.h b/arch/powerpc/in= clude/asm/book3s/64/hugetlb.h index f0bba9c5f9c3..bb786694dd26 100644 --- a/arch/powerpc/include/asm/book3s/64/hugetlb.h +++ b/arch/powerpc/include/asm/book3s/64/hugetlb.h @@ -94,4 +94,10 @@ static inline int check_and_get_huge_psize(int shift) return mmu_psize; } =20 +#define arch_has_huge_bootmem_alloc arch_has_huge_bootmem_alloc + +static inline bool arch_has_huge_bootmem_alloc(void) +{ + return (firmware_has_feature(FW_FEATURE_LPAR) && !radix_enabled()); +} #endif diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index 2512463bca49..6c6546b54934 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -591,6 +591,7 @@ enum hugetlb_page_flags { HPG_freed, HPG_vmemmap_optimized, HPG_raw_hwp_unreliable, + HPG_cma, __NR_HPAGEFLAGS, }; =20 @@ -650,6 +651,7 @@ HPAGEFLAG(Temporary, temporary) HPAGEFLAG(Freed, freed) HPAGEFLAG(VmemmapOptimized, vmemmap_optimized) HPAGEFLAG(RawHwpUnreliable, raw_hwp_unreliable) +HPAGEFLAG(Cma, cma) =20 #ifdef CONFIG_HUGETLB_PAGE =20 @@ -678,14 +680,18 @@ struct hstate { char name[HSTATE_NAME_LEN]; }; =20 +struct cma; + struct huge_bootmem_page { struct list_head list; struct hstate *hstate; unsigned long flags; + struct cma *cma; }; =20 #define HUGE_BOOTMEM_HVO 0x0001 #define HUGE_BOOTMEM_ZONES_VALID 0x0002 +#define HUGE_BOOTMEM_CMA 0x0004 =20 bool hugetlb_bootmem_page_zones_valid(int nid, struct huge_bootmem_page *m= ); =20 @@ -823,6 +829,17 @@ static inline pte_t arch_make_huge_pte(pte_t entry, un= signed int shift, } #endif =20 +#ifndef arch_has_huge_bootmem_alloc +/* + * Some architectures do their own bootmem allocation, so they can't use + * early CMA allocation. 
+ */ +static inline bool arch_has_huge_bootmem_alloc(void) +{ + return false; +} +#endif + static inline struct hstate *folio_hstate(struct folio *folio) { VM_BUG_ON_FOLIO(!folio_test_hugetlb(folio), folio); diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 0b483c466656..664ccaaa717a 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -131,8 +131,10 @@ static void hugetlb_free_folio(struct folio *folio) #ifdef CONFIG_CMA int nid =3D folio_nid(folio); =20 - if (cma_free_folio(hugetlb_cma[nid], folio)) + if (folio_test_hugetlb_cma(folio)) { + WARN_ON_ONCE(!cma_free_folio(hugetlb_cma[nid], folio)); return; + } #endif folio_put(folio); } @@ -1508,6 +1510,9 @@ static struct folio *alloc_gigantic_folio(struct hsta= te *h, gfp_t gfp_mask, break; } } + + if (folio) + folio_set_hugetlb_cma(folio); } #endif if (!folio) { @@ -3174,6 +3179,86 @@ struct folio *alloc_hugetlb_folio(struct vm_area_str= uct *vma, return ERR_PTR(-ENOSPC); } =20 +static bool __init hugetlb_early_cma(struct hstate *h) +{ + if (arch_has_huge_bootmem_alloc()) + return false; + + return (hstate_is_gigantic(h) && hugetlb_cma_only); +} + +static __init void *alloc_bootmem(struct hstate *h, int nid, bool node_exa= ct) +{ + struct huge_bootmem_page *m; + unsigned long flags; + struct cma *cma; + int listnode =3D nid; + +#ifdef CONFIG_CMA + if (hugetlb_early_cma(h)) { + flags =3D HUGE_BOOTMEM_CMA; + cma =3D hugetlb_cma[nid]; + m =3D cma_reserve_early(cma, huge_page_size(h)); + if (!m) { + int node; + + if (node_exact) + return NULL; + for_each_online_node(node) { + cma =3D hugetlb_cma[node]; + if (!cma || node =3D=3D nid) + continue; + m =3D cma_reserve_early(cma, huge_page_size(h)); + if (m) { + listnode =3D node; + break; + } + } + } + } else +#endif + { + flags =3D 0; + cma =3D NULL; + if (node_exact) + m =3D memblock_alloc_exact_nid_raw(huge_page_size(h), + huge_page_size(h), 0, + MEMBLOCK_ALLOC_ACCESSIBLE, nid); + else { + m =3D memblock_alloc_try_nid_raw(huge_page_size(h), + huge_page_size(h), 0, + MEMBLOCK_ALLOC_ACCESSIBLE, nid); + /* + * For pre-HVO to work correctly, pages need to be on + * the list for the node they were actually allocated + * from. That node may be different in the case of + * fallback by memblock_alloc_try_nid_raw. So, + * extract the actual node first. + */ + if (m) + listnode =3D early_pfn_to_nid(PHYS_PFN(virt_to_phys(m))); + } + } + + if (m) { + /* + * Use the beginning of the huge page to store the + * huge_bootmem_page struct (until gather_bootmem + * puts them into the mem_map). + * + * Put them into a private list first because mem_map + * is not up yet. 
+ */ + INIT_LIST_HEAD(&m->list); + list_add(&m->list, &huge_boot_pages[listnode]); + m->hstate =3D h; + m->flags =3D flags; + m->cma =3D cma; + } + + return m; +} + int alloc_bootmem_huge_page(struct hstate *h, int nid) __attribute__ ((weak, alias("__alloc_bootmem_huge_page"))); int __alloc_bootmem_huge_page(struct hstate *h, int nid) @@ -3183,22 +3268,15 @@ int __alloc_bootmem_huge_page(struct hstate *h, int= nid) =20 /* do node specific alloc */ if (nid !=3D NUMA_NO_NODE) { - m =3D memblock_alloc_exact_nid_raw(huge_page_size(h), huge_page_size(h), - 0, MEMBLOCK_ALLOC_ACCESSIBLE, nid); + m =3D alloc_bootmem(h, node, true); if (!m) return 0; goto found; } + /* allocate from next node when distributing huge pages */ for_each_node_mask_to_alloc(&h->next_nid_to_alloc, nr_nodes, node, &node_= states[N_ONLINE]) { - m =3D memblock_alloc_try_nid_raw( - huge_page_size(h), huge_page_size(h), - 0, MEMBLOCK_ALLOC_ACCESSIBLE, node); - /* - * Use the beginning of the huge page to store the - * huge_bootmem_page struct (until gather_bootmem - * puts them into the mem_map). - */ + m =3D alloc_bootmem(h, node, false); if (!m) return 0; goto found; @@ -3216,21 +3294,6 @@ int __alloc_bootmem_huge_page(struct hstate *h, int = nid) memblock_reserved_mark_noinit(virt_to_phys((void *)m + PAGE_SIZE), huge_page_size(h) - PAGE_SIZE); =20 - /* - * Put them into a private list first because mem_map is not up yet. - * - * For pre-HVO to work correctly, pages need to be on the list for - * the node they were actually allocated from. That node may be - * different in the case of fallback by memblock_alloc_try_nid_raw. - * So, extract the actual node first. - */ - if (nid =3D=3D NUMA_NO_NODE) - node =3D early_pfn_to_nid(PHYS_PFN(virt_to_phys(m))); - - INIT_LIST_HEAD(&m->list); - list_add(&m->list, &huge_boot_pages[node]); - m->hstate =3D h; - m->flags =3D 0; return 1; } =20 @@ -3271,13 +3334,25 @@ static void __init hugetlb_folio_init_vmemmap(struc= t folio *folio, prep_compound_head((struct page *)folio, huge_page_order(h)); } =20 +static bool __init hugetlb_bootmem_page_prehvo(struct huge_bootmem_page *m) +{ + return m->flags & HUGE_BOOTMEM_HVO; +} + +static bool __init hugetlb_bootmem_page_earlycma(struct huge_bootmem_page = *m) +{ + return m->flags & HUGE_BOOTMEM_CMA; +} + /* * memblock-allocated pageblocks might not have the migrate type set * if marked with the 'noinit' flag. Set it to the default (MIGRATE_MOVABL= E) - * here. + * here, or MIGRATE_CMA if this was a page allocated through an early CMA + * reservation. * - * Note that this will not write the page struct, it is ok (and necessary) - * to do this on vmemmap optimized folios. + * In case of vmemmap optimized folios, the tail vmemmap pages are mapped + * read-only, but that's ok - for sparse vmemmap this does not write to + * the page structure. 
*/ static void __init hugetlb_bootmem_init_migratetype(struct folio *folio, struct hstate *h) @@ -3286,9 +3361,13 @@ static void __init hugetlb_bootmem_init_migratetype(= struct folio *folio, =20 WARN_ON_ONCE(!pageblock_aligned(folio_pfn(folio))); =20 - for (i =3D 0; i < nr_pages; i +=3D pageblock_nr_pages) - set_pageblock_migratetype(folio_page(folio, i), + for (i =3D 0; i < nr_pages; i +=3D pageblock_nr_pages) { + if (folio_test_hugetlb_cma(folio)) + init_cma_pageblock(folio_page(folio, i)); + else + set_pageblock_migratetype(folio_page(folio, i), MIGRATE_MOVABLE); + } } =20 static void __init prep_and_add_bootmem_folios(struct hstate *h, @@ -3334,10 +3413,16 @@ bool __init hugetlb_bootmem_page_zones_valid(int ni= d, return true; } =20 + if (hugetlb_bootmem_page_earlycma(m)) { + valid =3D cma_validate_zones(m->cma); + goto out; + } + start_pfn =3D virt_to_phys(m) >> PAGE_SHIFT; =20 valid =3D !pfn_range_intersects_zones(nid, start_pfn, pages_per_huge_page(m->hstate)); +out: if (!valid) hstate_boot_nrinvalid[hstate_index(m->hstate)]++; =20 @@ -3366,11 +3451,6 @@ static void __init hugetlb_bootmem_free_invalid_page= (int nid, struct page *page, } } =20 -static bool __init hugetlb_bootmem_page_prehvo(struct huge_bootmem_page *m) -{ - return (m->flags & HUGE_BOOTMEM_HVO); -} - /* * Put bootmem huge pages into the standard lists after mem_map is up. * Note: This only applies to gigantic (order > MAX_PAGE_ORDER) pages. @@ -3420,14 +3500,21 @@ static void __init gather_bootmem_prealloc_node(uns= igned long nid) */ folio_set_hugetlb_vmemmap_optimized(folio); =20 + if (hugetlb_bootmem_page_earlycma(m)) + folio_set_hugetlb_cma(folio); + list_add(&folio->lru, &folio_list); =20 /* * We need to restore the 'stolen' pages to totalram_pages * in order to fix confusing memory reports from free(1) and * other side-effects, like CommitLimit going negative. + * + * For CMA pages, this is done in init_cma_pageblock + * (via hugetlb_bootmem_init_migratetype), so skip it here. */ - adjust_managed_page_count(page, pages_per_huge_page(h)); + if (!folio_test_hugetlb_cma(folio)) + adjust_managed_page_count(page, pages_per_huge_page(h)); cond_resched(); } =20 @@ -3612,8 +3699,11 @@ static void __init hugetlb_hstate_alloc_pages(struct= hstate *h) { unsigned long allocated; =20 - /* skip gigantic hugepages allocation if hugetlb_cma enabled */ - if (hstate_is_gigantic(h) && hugetlb_cma_size) { + /* + * Skip gigantic hugepages allocation if early CMA + * reservations are not available. 
+	 */
+	if (hstate_is_gigantic(h) && hugetlb_cma_size && !hugetlb_early_cma(h)) {
 		pr_warn_once("HugeTLB: hugetlb_cma is enabled, skip boot time allocation\n");
 		return;
 	}
-- 
2.48.1.711.g2feabab25a-goog
From nobody Fri Dec 19 16:07:21 2025
Date: Fri, 28 Feb 2025 18:29:28 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
References: <20250228182928.2645936-1-fvdl@google.com>
Message-ID: <20250228182928.2645936-28-fvdl@google.com>
Subject: [PATCH v5 27/27] mm/hugetlb: move hugetlb CMA code into its own file
From: Frank van der Linden
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com,
 roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com,
 Frank van der Linden

hugetlb.c contained a number of CONFIG_CMA ifdefs, and the code inside
them was large enough to merit being in its own file, so move it there,
cleaning things up a bit. Hide some direct variable access behind
functions to accommodate the move.

No functional change intended.

Signed-off-by: Frank van der Linden
---
 MAINTAINERS      |   2 +
 mm/Makefile      |   3 +
 mm/hugetlb.c     | 269 +++------------------------------------------
 mm/hugetlb_cma.c | 275 +++++++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_cma.h |  57 ++++++++++
 5 files changed, 354 insertions(+), 252 deletions(-)
 create mode 100644 mm/hugetlb_cma.c
 create mode 100644 mm/hugetlb_cma.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 8e0736dc2ee0..7d083b653b69 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -10710,6 +10710,8 @@ F:	fs/hugetlbfs/
 F:	include/linux/hugetlb.h
 F:	include/trace/events/hugetlbfs.h
 F:	mm/hugetlb.c
+F:	mm/hugetlb_cma.c
+F:	mm/hugetlb_cma.h
 F:	mm/hugetlb_vmemmap.c
 F:	mm/hugetlb_vmemmap.h
 F:	tools/testing/selftests/cgroup/test_hugetlb_memcg.c
diff --git a/mm/Makefile b/mm/Makefile
index 850386a67b3e..810ccd45d270 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -79,6 +79,9 @@ obj-$(CONFIG_SWAP) += page_io.o swap_state.o swapfile.o swap_slots.o
 obj-$(CONFIG_ZSWAP) += zswap.o
 obj-$(CONFIG_HAS_DMA) += dmapool.o
 obj-$(CONFIG_HUGETLBFS) += hugetlb.o
+ifdef CONFIG_CMA
+obj-$(CONFIG_HUGETLBFS) += hugetlb_cma.o
+endif
 obj-$(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP) += hugetlb_vmemmap.o
 obj-$(CONFIG_NUMA) += mempolicy.o
 obj-$(CONFIG_SPARSEMEM) += sparse.o
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 664ccaaa717a..3ee98f612137 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -49,19 +49,13 @@
 #include
 #include "internal.h"
 #include "hugetlb_vmemmap.h"
+#include "hugetlb_cma.h"
 #include
 
 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
 struct hstate hstates[HUGE_MAX_HSTATE];
 
-#ifdef CONFIG_CMA
-static struct cma *hugetlb_cma[MAX_NUMNODES];
-static unsigned long hugetlb_cma_size_in_node[MAX_NUMNODES] __initdata;
-#endif
-static bool hugetlb_cma_only;
-static unsigned long hugetlb_cma_size __initdata;
-
 __initdata struct list_head huge_boot_pages[MAX_NUMNODES];
 static unsigned long
hstate_boot_nrinvalid[HUGE_MAX_HSTATE] __initdata; =20 @@ -128,14 +122,11 @@ static struct resv_map *vma_resv_map(struct vm_area_s= truct *vma); =20 static void hugetlb_free_folio(struct folio *folio) { -#ifdef CONFIG_CMA - int nid =3D folio_nid(folio); - if (folio_test_hugetlb_cma(folio)) { - WARN_ON_ONCE(!cma_free_folio(hugetlb_cma[nid], folio)); + hugetlb_cma_free_folio(folio); return; } -#endif + folio_put(folio); } =20 @@ -1492,31 +1483,9 @@ static struct folio *alloc_gigantic_folio(struct hst= ate *h, gfp_t gfp_mask, if (nid =3D=3D NUMA_NO_NODE) nid =3D numa_mem_id(); retry: - folio =3D NULL; -#ifdef CONFIG_CMA - { - int node; - - if (hugetlb_cma[nid]) - folio =3D cma_alloc_folio(hugetlb_cma[nid], order, gfp_mask); - - if (!folio && !(gfp_mask & __GFP_THISNODE)) { - for_each_node_mask(node, *nodemask) { - if (node =3D=3D nid || !hugetlb_cma[node]) - continue; - - folio =3D cma_alloc_folio(hugetlb_cma[node], order, gfp_mask); - if (folio) - break; - } - } - - if (folio) - folio_set_hugetlb_cma(folio); - } -#endif + folio =3D hugetlb_cma_alloc_folio(h, gfp_mask, nid, nodemask); if (!folio) { - if (hugetlb_cma_only) + if (hugetlb_cma_exclusive_alloc()) return NULL; =20 folio =3D folio_alloc_gigantic(order, gfp_mask, nid, nodemask); @@ -3179,47 +3148,14 @@ struct folio *alloc_hugetlb_folio(struct vm_area_st= ruct *vma, return ERR_PTR(-ENOSPC); } =20 -static bool __init hugetlb_early_cma(struct hstate *h) -{ - if (arch_has_huge_bootmem_alloc()) - return false; - - return (hstate_is_gigantic(h) && hugetlb_cma_only); -} - static __init void *alloc_bootmem(struct hstate *h, int nid, bool node_exa= ct) { struct huge_bootmem_page *m; - unsigned long flags; - struct cma *cma; int listnode =3D nid; =20 -#ifdef CONFIG_CMA - if (hugetlb_early_cma(h)) { - flags =3D HUGE_BOOTMEM_CMA; - cma =3D hugetlb_cma[nid]; - m =3D cma_reserve_early(cma, huge_page_size(h)); - if (!m) { - int node; - - if (node_exact) - return NULL; - for_each_online_node(node) { - cma =3D hugetlb_cma[node]; - if (!cma || node =3D=3D nid) - continue; - m =3D cma_reserve_early(cma, huge_page_size(h)); - if (m) { - listnode =3D node; - break; - } - } - } - } else -#endif - { - flags =3D 0; - cma =3D NULL; + if (hugetlb_early_cma(h)) + m =3D hugetlb_cma_alloc_bootmem(h, &listnode, node_exact); + else { if (node_exact) m =3D memblock_alloc_exact_nid_raw(huge_page_size(h), huge_page_size(h), 0, @@ -3238,6 +3174,11 @@ static __init void *alloc_bootmem(struct hstate *h, = int nid, bool node_exact) if (m) listnode =3D early_pfn_to_nid(PHYS_PFN(virt_to_phys(m))); } + + if (m) { + m->flags =3D 0; + m->cma =3D NULL; + } } =20 if (m) { @@ -3252,8 +3193,6 @@ static __init void *alloc_bootmem(struct hstate *h, i= nt nid, bool node_exact) INIT_LIST_HEAD(&m->list); list_add(&m->list, &huge_boot_pages[listnode]); m->hstate =3D h; - m->flags =3D flags; - m->cma =3D cma; } =20 return m; @@ -3703,7 +3642,8 @@ static void __init hugetlb_hstate_alloc_pages(struct = hstate *h) * Skip gigantic hugepages allocation if early CMA * reservations are not available. 
*/ - if (hstate_is_gigantic(h) && hugetlb_cma_size && !hugetlb_early_cma(h)) { + if (hstate_is_gigantic(h) && hugetlb_cma_total_size() && + !hugetlb_early_cma(h)) { pr_warn_once("HugeTLB: hugetlb_cma is enabled, skip boot time allocation= \n"); return; } @@ -3740,7 +3680,7 @@ static void __init hugetlb_init_hstates(void) */ if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported()) continue; - if (hugetlb_cma_size && h->order <=3D HUGETLB_PAGE_ORDER) + if (hugetlb_cma_total_size() && h->order <=3D HUGETLB_PAGE_ORDER) continue; for_each_hstate(h2) { if (h2 =3D=3D h) @@ -4642,14 +4582,6 @@ static void hugetlb_register_all_nodes(void) { } =20 #endif =20 -#ifdef CONFIG_CMA -static void __init hugetlb_cma_check(void); -#else -static inline __init void hugetlb_cma_check(void) -{ -} -#endif - static void __init hugetlb_sysfs_init(void) { struct hstate *h; @@ -4833,8 +4765,7 @@ static __init void hugetlb_parse_params(void) hcp->setup(hcp->val); } =20 - if (!hugetlb_cma_size) - hugetlb_cma_only =3D false; + hugetlb_cma_validate_params(); } =20 /* @@ -7904,169 +7835,3 @@ void hugetlb_unshare_all_pmds(struct vm_area_struct= *vma) hugetlb_unshare_pmds(vma, ALIGN(vma->vm_start, PUD_SIZE), ALIGN_DOWN(vma->vm_end, PUD_SIZE)); } - -#ifdef CONFIG_CMA -static bool cma_reserve_called __initdata; - -static int __init cmdline_parse_hugetlb_cma(char *p) -{ - int nid, count =3D 0; - unsigned long tmp; - char *s =3D p; - - while (*s) { - if (sscanf(s, "%lu%n", &tmp, &count) !=3D 1) - break; - - if (s[count] =3D=3D ':') { - if (tmp >=3D MAX_NUMNODES) - break; - nid =3D array_index_nospec(tmp, MAX_NUMNODES); - - s +=3D count + 1; - tmp =3D memparse(s, &s); - hugetlb_cma_size_in_node[nid] =3D tmp; - hugetlb_cma_size +=3D tmp; - - /* - * Skip the separator if have one, otherwise - * break the parsing. - */ - if (*s =3D=3D ',') - s++; - else - break; - } else { - hugetlb_cma_size =3D memparse(p, &p); - break; - } - } - - return 0; -} - -early_param("hugetlb_cma", cmdline_parse_hugetlb_cma); - -static int __init cmdline_parse_hugetlb_cma_only(char *p) -{ - return kstrtobool(p, &hugetlb_cma_only); -} - -early_param("hugetlb_cma_only", cmdline_parse_hugetlb_cma_only); - -void __init hugetlb_cma_reserve(int order) -{ - unsigned long size, reserved, per_node; - bool node_specific_cma_alloc =3D false; - int nid; - - /* - * HugeTLB CMA reservation is required for gigantic - * huge pages which could not be allocated via the - * page allocator. Just warn if there is any change - * breaking this assumption. - */ - VM_WARN_ON(order <=3D MAX_PAGE_ORDER); - cma_reserve_called =3D true; - - if (!hugetlb_cma_size) - return; - - for (nid =3D 0; nid < MAX_NUMNODES; nid++) { - if (hugetlb_cma_size_in_node[nid] =3D=3D 0) - continue; - - if (!node_online(nid)) { - pr_warn("hugetlb_cma: invalid node %d specified\n", nid); - hugetlb_cma_size -=3D hugetlb_cma_size_in_node[nid]; - hugetlb_cma_size_in_node[nid] =3D 0; - continue; - } - - if (hugetlb_cma_size_in_node[nid] < (PAGE_SIZE << order)) { - pr_warn("hugetlb_cma: cma area of node %d should be at least %lu MiB\n", - nid, (PAGE_SIZE << order) / SZ_1M); - hugetlb_cma_size -=3D hugetlb_cma_size_in_node[nid]; - hugetlb_cma_size_in_node[nid] =3D 0; - } else { - node_specific_cma_alloc =3D true; - } - } - - /* Validate the CMA size again in case some invalid nodes specified. 
*/ - if (!hugetlb_cma_size) - return; - - if (hugetlb_cma_size < (PAGE_SIZE << order)) { - pr_warn("hugetlb_cma: cma area should be at least %lu MiB\n", - (PAGE_SIZE << order) / SZ_1M); - hugetlb_cma_size =3D 0; - return; - } - - if (!node_specific_cma_alloc) { - /* - * If 3 GB area is requested on a machine with 4 numa nodes, - * let's allocate 1 GB on first three nodes and ignore the last one. - */ - per_node =3D DIV_ROUND_UP(hugetlb_cma_size, nr_online_nodes); - pr_info("hugetlb_cma: reserve %lu MiB, up to %lu MiB per node\n", - hugetlb_cma_size / SZ_1M, per_node / SZ_1M); - } - - reserved =3D 0; - for_each_online_node(nid) { - int res; - char name[CMA_MAX_NAME]; - - if (node_specific_cma_alloc) { - if (hugetlb_cma_size_in_node[nid] =3D=3D 0) - continue; - - size =3D hugetlb_cma_size_in_node[nid]; - } else { - size =3D min(per_node, hugetlb_cma_size - reserved); - } - - size =3D round_up(size, PAGE_SIZE << order); - - snprintf(name, sizeof(name), "hugetlb%d", nid); - /* - * Note that 'order per bit' is based on smallest size that - * may be returned to CMA allocator in the case of - * huge page demotion. - */ - res =3D cma_declare_contiguous_multi(size, PAGE_SIZE << order, - HUGETLB_PAGE_ORDER, name, - &hugetlb_cma[nid], nid); - if (res) { - pr_warn("hugetlb_cma: reservation failed: err %d, node %d", - res, nid); - continue; - } - - reserved +=3D size; - pr_info("hugetlb_cma: reserved %lu MiB on node %d\n", - size / SZ_1M, nid); - - if (reserved >=3D hugetlb_cma_size) - break; - } - - if (!reserved) - /* - * hugetlb_cma_size is used to determine if allocations from - * cma are possible. Set to zero if no cma regions are set up. - */ - hugetlb_cma_size =3D 0; -} - -static void __init hugetlb_cma_check(void) -{ - if (!hugetlb_cma_size || cma_reserve_called) - return; - - pr_warn("hugetlb_cma: the option isn't supported by current arch\n"); -} - -#endif /* CONFIG_CMA */ diff --git a/mm/hugetlb_cma.c b/mm/hugetlb_cma.c new file mode 100644 index 000000000000..e0f2d5c3a84c --- /dev/null +++ b/mm/hugetlb_cma.c @@ -0,0 +1,275 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include +#include +#include +#include + +#include +#include + +#include +#include "internal.h" +#include "hugetlb_cma.h" + + +static struct cma *hugetlb_cma[MAX_NUMNODES]; +static unsigned long hugetlb_cma_size_in_node[MAX_NUMNODES] __initdata; +static bool hugetlb_cma_only; +static unsigned long hugetlb_cma_size __initdata; + +void hugetlb_cma_free_folio(struct folio *folio) +{ + int nid =3D folio_nid(folio); + + WARN_ON_ONCE(!cma_free_folio(hugetlb_cma[nid], folio)); +} + + +struct folio *hugetlb_cma_alloc_folio(struct hstate *h, gfp_t gfp_mask, + int nid, nodemask_t *nodemask) +{ + int node; + int order =3D huge_page_order(h); + struct folio *folio =3D NULL; + + if (hugetlb_cma[nid]) + folio =3D cma_alloc_folio(hugetlb_cma[nid], order, gfp_mask); + + if (!folio && !(gfp_mask & __GFP_THISNODE)) { + for_each_node_mask(node, *nodemask) { + if (node =3D=3D nid || !hugetlb_cma[node]) + continue; + + folio =3D cma_alloc_folio(hugetlb_cma[node], order, gfp_mask); + if (folio) + break; + } + } + + if (folio) + folio_set_hugetlb_cma(folio); + + return folio; +} + +struct huge_bootmem_page * __init +hugetlb_cma_alloc_bootmem(struct hstate *h, int *nid, bool node_exact) +{ + struct cma *cma; + struct huge_bootmem_page *m; + int node =3D *nid; + + cma =3D hugetlb_cma[*nid]; + m =3D cma_reserve_early(cma, huge_page_size(h)); + if (!m) { + if (node_exact) + return NULL; + + for_each_online_node(node) { + cma =3D hugetlb_cma[node]; + 
if (!cma || node =3D=3D *nid) + continue; + m =3D cma_reserve_early(cma, huge_page_size(h)); + if (m) { + *nid =3D node; + break; + } + } + } + + if (m) { + m->flags =3D HUGE_BOOTMEM_CMA; + m->cma =3D cma; + } + + return m; +} + + +static bool cma_reserve_called __initdata; + +static int __init cmdline_parse_hugetlb_cma(char *p) +{ + int nid, count =3D 0; + unsigned long tmp; + char *s =3D p; + + while (*s) { + if (sscanf(s, "%lu%n", &tmp, &count) !=3D 1) + break; + + if (s[count] =3D=3D ':') { + if (tmp >=3D MAX_NUMNODES) + break; + nid =3D array_index_nospec(tmp, MAX_NUMNODES); + + s +=3D count + 1; + tmp =3D memparse(s, &s); + hugetlb_cma_size_in_node[nid] =3D tmp; + hugetlb_cma_size +=3D tmp; + + /* + * Skip the separator if have one, otherwise + * break the parsing. + */ + if (*s =3D=3D ',') + s++; + else + break; + } else { + hugetlb_cma_size =3D memparse(p, &p); + break; + } + } + + return 0; +} + +early_param("hugetlb_cma", cmdline_parse_hugetlb_cma); + +static int __init cmdline_parse_hugetlb_cma_only(char *p) +{ + return kstrtobool(p, &hugetlb_cma_only); +} + +early_param("hugetlb_cma_only", cmdline_parse_hugetlb_cma_only); + +void __init hugetlb_cma_reserve(int order) +{ + unsigned long size, reserved, per_node; + bool node_specific_cma_alloc =3D false; + int nid; + + /* + * HugeTLB CMA reservation is required for gigantic + * huge pages which could not be allocated via the + * page allocator. Just warn if there is any change + * breaking this assumption. + */ + VM_WARN_ON(order <=3D MAX_PAGE_ORDER); + cma_reserve_called =3D true; + + if (!hugetlb_cma_size) + return; + + for (nid =3D 0; nid < MAX_NUMNODES; nid++) { + if (hugetlb_cma_size_in_node[nid] =3D=3D 0) + continue; + + if (!node_online(nid)) { + pr_warn("hugetlb_cma: invalid node %d specified\n", nid); + hugetlb_cma_size -=3D hugetlb_cma_size_in_node[nid]; + hugetlb_cma_size_in_node[nid] =3D 0; + continue; + } + + if (hugetlb_cma_size_in_node[nid] < (PAGE_SIZE << order)) { + pr_warn("hugetlb_cma: cma area of node %d should be at least %lu MiB\n", + nid, (PAGE_SIZE << order) / SZ_1M); + hugetlb_cma_size -=3D hugetlb_cma_size_in_node[nid]; + hugetlb_cma_size_in_node[nid] =3D 0; + } else { + node_specific_cma_alloc =3D true; + } + } + + /* Validate the CMA size again in case some invalid nodes specified. */ + if (!hugetlb_cma_size) + return; + + if (hugetlb_cma_size < (PAGE_SIZE << order)) { + pr_warn("hugetlb_cma: cma area should be at least %lu MiB\n", + (PAGE_SIZE << order) / SZ_1M); + hugetlb_cma_size =3D 0; + return; + } + + if (!node_specific_cma_alloc) { + /* + * If 3 GB area is requested on a machine with 4 numa nodes, + * let's allocate 1 GB on first three nodes and ignore the last one. + */ + per_node =3D DIV_ROUND_UP(hugetlb_cma_size, nr_online_nodes); + pr_info("hugetlb_cma: reserve %lu MiB, up to %lu MiB per node\n", + hugetlb_cma_size / SZ_1M, per_node / SZ_1M); + } + + reserved =3D 0; + for_each_online_node(nid) { + int res; + char name[CMA_MAX_NAME]; + + if (node_specific_cma_alloc) { + if (hugetlb_cma_size_in_node[nid] =3D=3D 0) + continue; + + size =3D hugetlb_cma_size_in_node[nid]; + } else { + size =3D min(per_node, hugetlb_cma_size - reserved); + } + + size =3D round_up(size, PAGE_SIZE << order); + + snprintf(name, sizeof(name), "hugetlb%d", nid); + /* + * Note that 'order per bit' is based on smallest size that + * may be returned to CMA allocator in the case of + * huge page demotion. 
+ */ + res =3D cma_declare_contiguous_multi(size, PAGE_SIZE << order, + HUGETLB_PAGE_ORDER, name, + &hugetlb_cma[nid], nid); + if (res) { + pr_warn("hugetlb_cma: reservation failed: err %d, node %d", + res, nid); + continue; + } + + reserved +=3D size; + pr_info("hugetlb_cma: reserved %lu MiB on node %d\n", + size / SZ_1M, nid); + + if (reserved >=3D hugetlb_cma_size) + break; + } + + if (!reserved) + /* + * hugetlb_cma_size is used to determine if allocations from + * cma are possible. Set to zero if no cma regions are set up. + */ + hugetlb_cma_size =3D 0; +} + +void __init hugetlb_cma_check(void) +{ + if (!hugetlb_cma_size || cma_reserve_called) + return; + + pr_warn("hugetlb_cma: the option isn't supported by current arch\n"); +} + +bool hugetlb_cma_exclusive_alloc(void) +{ + return hugetlb_cma_only; +} + +unsigned long __init hugetlb_cma_total_size(void) +{ + return hugetlb_cma_size; +} + +void __init hugetlb_cma_validate_params(void) +{ + if (!hugetlb_cma_size) + hugetlb_cma_only =3D false; +} + +bool __init hugetlb_early_cma(struct hstate *h) +{ + if (arch_has_huge_bootmem_alloc()) + return false; + + return hstate_is_gigantic(h) && hugetlb_cma_only; +} diff --git a/mm/hugetlb_cma.h b/mm/hugetlb_cma.h new file mode 100644 index 000000000000..f7d7fb9880a2 --- /dev/null +++ b/mm/hugetlb_cma.h @@ -0,0 +1,57 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _LINUX_HUGETLB_CMA_H +#define _LINUX_HUGETLB_CMA_H + +#ifdef CONFIG_CMA +void hugetlb_cma_free_folio(struct folio *folio); +struct folio *hugetlb_cma_alloc_folio(struct hstate *h, gfp_t gfp_mask, + int nid, nodemask_t *nodemask); +struct huge_bootmem_page *hugetlb_cma_alloc_bootmem(struct hstate *h, int = *nid, + bool node_exact); +void hugetlb_cma_check(void); +bool hugetlb_cma_exclusive_alloc(void); +unsigned long hugetlb_cma_total_size(void); +void hugetlb_cma_validate_params(void); +bool hugetlb_early_cma(struct hstate *h); +#else +static inline void hugetlb_cma_free_folio(struct folio *folio) +{ +} + +static inline struct folio *hugetlb_cma_alloc_folio(struct hstate *h, + gfp_t gfp_mask, int nid, nodemask_t *nodemask) +{ + return NULL; +} + +static inline +struct huge_bootmem_page *hugetlb_cma_alloc_bootmem(struct hstate *h, int = *nid, + bool node_exact) +{ + return NULL; +} + +static inline void hugetlb_cma_check(void) +{ +} + +static inline bool hugetlb_cma_exclusive_alloc(void) +{ + return false; +} + +static inline unsigned long hugetlb_cma_total_size(void) +{ + return 0; +} + +static inline void hugetlb_cma_validate_params(void) +{ +} + +static inline bool hugetlb_early_cma(struct hstate *h) +{ + return false; +} +#endif +#endif --=20 2.48.1.711.g2feabab25a-goog
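As a closing illustration of the moved parser:
cmdline_parse_hugetlb_cma() accepts either one global size or
comma-separated node:size pairs, so both of the following forms are
valid (node numbers and sizes are examples only):

	hugetlb_cma=4G              # one pool, spread over online nodes
	hugetlb_cma=0:2G,1:2G       # explicit per-node reservations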