From: Muchun Song
To: Andrew Morton, David Hildenbrand
Cc: Lorenzo Stoakes, "Liam R. Howlett", Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Petr Tesarik, linux-mm@kvack.org, linux-kernel@vger.kernel.org, muchun.song@linux.dev, Muchun Song
Subject: [PATCH] mm/sparse: fix BUILD_BUG_ON check for section map alignment
Date: Tue, 31 Mar 2026 19:30:23 +0800
Message-Id: <20260331113023.2068075-1-songmuchun@bytedance.com>

The comment in mmzone.h states that the alignment requirement is the
minimum of PAGE_SHIFT and PFN_SECTION_SHIFT. However, the pointer
arithmetic (mem_map - section_nr_to_pfn()) results in a byte offset
scaled by sizeof(struct page). Thus, the actual alignment provided by
the second term is PFN_SECTION_SHIFT + __ffs(sizeof(struct page)).

Update the compile-time check and the mmzone.h comment to accurately
reflect this mathematically guaranteed alignment by taking the minimum
of PAGE_SHIFT and PFN_SECTION_SHIFT + __ffs(sizeof(struct page)). This
avoids the check being overly restrictive on architectures such as
powerpc, where PFN_SECTION_SHIFT alone is very small (e.g., 6).
Also, remove the exhaustive per-architecture bit-width list from the
comment; such details risk falling out of date over time, while the
BUILD_BUG_ON provides sufficient compile-time verification of the
constraint.

No runtime impact so far: SECTION_MAP_LAST_BIT happens to fit within
the smaller limit on all existing architectures.

Fixes: def9b71ee651 ("include/linux/mmzone.h: fix explanation of lower bits in the SPARSEMEM mem_map pointer")
Signed-off-by: Muchun Song
---
 include/linux/mmzone.h | 24 +++++++++---------------
 mm/sparse.c            |  3 ++-
 2 files changed, 11 insertions(+), 16 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 7bd0134c241c..584fa598ad75 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -2073,21 +2073,15 @@ static inline struct mem_section *__nr_to_section(unsigned long nr)
 extern size_t mem_section_usage_size(void);
 
 /*
- * We use the lower bits of the mem_map pointer to store
- * a little bit of information. The pointer is calculated
- * as mem_map - section_nr_to_pfn(pnum). The result is
- * aligned to the minimum alignment of the two values:
- * 1. All mem_map arrays are page-aligned.
- * 2. section_nr_to_pfn() always clears PFN_SECTION_SHIFT
- *    lowest bits. PFN_SECTION_SHIFT is arch-specific
- *    (equal SECTION_SIZE_BITS - PAGE_SHIFT), and the
- *    worst combination is powerpc with 256k pages,
- *    which results in PFN_SECTION_SHIFT equal 6.
- * To sum it up, at least 6 bits are available on all architectures.
- * However, we can exceed 6 bits on some other architectures except
- * powerpc (e.g. 15 bits are available on x86_64, 13 bits are available
- * with the worst case of 64K pages on arm64) if we make sure the
- * exceeded bit is not applicable to powerpc.
+ * We use the lower bits of the mem_map pointer to store a little bit of
+ * information. The pointer is calculated as mem_map - section_nr_to_pfn().
+ * The result is aligned to the minimum alignment of the two values:
+ *
+ * 1. All mem_map arrays are page-aligned.
+ * 2. section_nr_to_pfn() always clears PFN_SECTION_SHIFT lowest bits. Because
+ *    it is subtracted from a struct page pointer, the offset is scaled by
+ *    sizeof(struct page). This provides an alignment of PFN_SECTION_SHIFT +
+ *    __ffs(sizeof(struct page)).
  */
 enum {
 	SECTION_MARKED_PRESENT_BIT,
diff --git a/mm/sparse.c b/mm/sparse.c
index dfabe554adf8..c2eb36bfb86d 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -269,7 +269,8 @@ static unsigned long sparse_encode_mem_map(struct page *mem_map, unsigned long p
 {
 	unsigned long coded_mem_map = (unsigned long)(mem_map -
 		(section_nr_to_pfn(pnum)));
-	BUILD_BUG_ON(SECTION_MAP_LAST_BIT > PFN_SECTION_SHIFT);
+	BUILD_BUG_ON(SECTION_MAP_LAST_BIT > min(PFN_SECTION_SHIFT + __ffs(sizeof(struct page)),
+						PAGE_SHIFT));
 	BUG_ON(coded_mem_map & ~SECTION_MAP_MASK);
 	return coded_mem_map;
 }
-- 
2.20.1