From: WANG Rui
To: Alexander Viro, Christian Brauner, David Hildenbrand, Jan Kara,
    Kees Cook, Matthew Wilcox
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, WANG Rui
Subject: [PATCH v5] binfmt_elf: Align eligible read-only PT_LOAD segments to PMD_SIZE for THP
Date: Fri, 13 Mar 2026 08:52:11 +0800
Message-ID: <20260313005211.882831-1-r@hev.cc>

File-backed mappings can only be collapsed into PMD-sized THP when the
virtual address and file offset are both hugepage-aligned and the
mapping is large enough to cover a huge page.

For ELF executables loaded by the kernel ELF binary loader, PT_LOAD
segments are aligned according to p_align, which is often just the
normal page size. As a result, large read-only segments that would
otherwise be eligible may fail to get PMD-sized mappings.

Even when a PT_LOAD segment itself is not PMD-aligned, it may still
contain a PMD-aligned subrange. In that case only that subrange can be
mapped with huge pages, while the unaligned head of the segment remains
mapped with normal pages.

In practice, many executables already have PMD-aligned file offsets for
their text segments, but the virtual address is not aligned due to the
small p_align value. Aligning the segment to PMD_SIZE in such cases
increases the chance of getting PMD-sized THP mappings.
This matters especially for 2MB huge pages, where many programs have
text segments only slightly larger than a single huge page. If the start
address is not aligned, the leading unaligned region can prevent the
mapping from forming a huge page. For larger huge pages (e.g. 32MB), the
unaligned head region may be close to the huge page size itself, making
the potential performance impact even more significant.

A segment is considered eligible if:

 * it is not writable,
 * both p_vaddr and p_offset are PMD-aligned,
 * its size is at least PMD_SIZE, and
 * its existing p_align is smaller than PMD_SIZE.

To avoid excessive virtual address space padding on systems with very
large PMD_SIZE values, this is only applied when PMD_SIZE <= 32MB.

This mainly benefits large text segments of executables by reducing
iTLB pressure. This only affects ELF executables loaded directly by the
kernel ELF binary loader. Shared libraries loaded from user space (e.g.
by the dynamic linker) are not affected.

Benchmark

Machine:   AMD Ryzen 9 7950X (x86_64)
Binutils:  2.46
GCC:       15.2.1 (built with -z,noseparate-code + --enable-host-pie)
Workload:  building Linux v7.0-rc1 vmlinux with x86_64_defconfig

                 Without patch          With patch
  instructions   8,246,133,611,932      8,246,025,137,750
  cpu-cycles     8,001,028,142,928      7,565,925,107,502
  itlb-misses    3,672,158,331          26,821,242
  time elapsed   64.66 s                61.97 s

Instructions are basically unchanged. iTLB misses drop from ~3.67B to
~26.8M (~99.27% reduction), which results in about a ~5.44% reduction in
cycles and ~4.18% shorter wall time for this workload.

Signed-off-by: WANG Rui
---
Changes since [v4]:
* Drop runtime THP mode check, only gate on CONFIG_TRANSPARENT_HUGEPAGE.

Changes since [v3]:
* Fix compilation failure under !CONFIG_TRANSPARENT_HUGEPAGE.
* No functional changes otherwise.

Changes since [v2]:
* Rename align_to_pmd() to should_align_to_pmd().
* Add benchmark results to the commit message.
Changes since [v1]:
* Drop the Kconfig option CONFIG_ELF_RO_LOAD_THP_ALIGNMENT.
* Move the alignment logic into a helper align_to_pmd() for clarity.
* Improve the comment explaining why we skip the optimization when
  PMD_SIZE > 32MB.

[v4]: https://lore.kernel.org/linux-fsdevel/20260310031138.509730-1-r@hev.cc
[v3]: https://lore.kernel.org/linux-fsdevel/20260310013958.103636-1-r@hev.cc
[v2]: https://lore.kernel.org/linux-fsdevel/20260304114727.384416-1-r@hev.cc
[v1]: https://lore.kernel.org/linux-fsdevel/20260302155046.286650-1-r@hev.cc
---
 fs/binfmt_elf.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index fb857faaf0d6..d5f5154079de 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -489,6 +489,32 @@ static int elf_read(struct file *file, void *buf, size_t len, loff_t pos)
 	return 0;
 }
 
+static inline bool should_align_to_pmd(const struct elf_phdr *cmd)
+{
+	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
+		return false;
+
+	/*
+	 * Avoid excessive virtual address space padding when PMD_SIZE is
+	 * very large, since this function increases PT_LOAD alignment.
+	 * This threshold roughly matches the largest commonly used hugepage
+	 * sizes on current architectures (e.g. x86 2M, arm64 32M with 16K pages).
+	 */
+	if (PMD_SIZE > SZ_32M)
+		return false;
+
+	if (!IS_ALIGNED(cmd->p_vaddr | cmd->p_offset, PMD_SIZE))
+		return false;
+
+	if (cmd->p_filesz < PMD_SIZE)
+		return false;
+
+	if (cmd->p_flags & PF_W)
+		return false;
+
+	return true;
+}
+
 static unsigned long maximum_alignment(struct elf_phdr *cmds, int nr)
 {
 	unsigned long alignment = 0;
@@ -501,6 +527,10 @@ static unsigned long maximum_alignment(struct elf_phdr *cmds, int nr)
 		/* skip non-power of two alignments as invalid */
 		if (!is_power_of_2(p_align))
 			continue;
+
+		if (p_align < PMD_SIZE && should_align_to_pmd(&cmds[i]))
+			p_align = PMD_SIZE;
+
 		alignment = max(alignment, p_align);
 	}
 }
-- 
2.53.0