From nobody Sun Oct 5 12:23:56 2025
From: Usama Arif
To: Andrew Morton, david@redhat.com, linux-mm@kvack.org
Cc: linux-fsdevel@vger.kernel.org, corbet@lwn.net, rppt@kernel.org, surenb@google.com, mhocko@suse.com, hannes@cmpxchg.org, baohua@kernel.org, shakeel.butt@linux.dev, riel@surriel.com, ziy@nvidia.com, laoar.shao@gmail.com, dev.jain@arm.com, baolin.wang@linux.alibaba.com, npache@redhat.com, lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, ryan.roberts@arm.com, vbabka@suse.cz, jannh@google.com, Arnd Bergmann, sj@kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kernel-team@meta.com, Usama Arif
Subject: [PATCH v3 1/6] prctl: extend PR_SET_THP_DISABLE to optionally exclude VM_HUGEPAGE
Date: Mon, 4 Aug 2025 16:40:44 +0100
Message-ID: <20250804154317.1648084-2-usamaarif642@gmail.com>
In-Reply-To: <20250804154317.1648084-1-usamaarif642@gmail.com>
References: <20250804154317.1648084-1-usamaarif642@gmail.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: David Hildenbrand

People want to make use of more THPs, for example, moving from the "never"
system policy to "madvise", or from "madvise" to "always".
While this is great news for every THP desperately waiting to get
allocated out there, there are some workloads that require a bit of care
during that transition: individual processes may need to opt out of this
behavior for various reasons, and this should be permitted without
needing to make all other workloads on the system similarly opt out.

The following scenarios are imaginable:

(1) Switch from the "never" system policy to "madvise"/"always", but keep
    THPs disabled for selected workloads.

(2) Stay at the "never" system policy, but enable THPs for selected
    workloads, making only these workloads use the "madvise" or "always"
    policy.

(3) Switch from the "madvise" system policy to "always", but keep the
    "madvise" policy for selected workloads: allocate THPs only when
    advised.

(4) Stay at the "madvise" system policy, but enable THPs even when not
    advised for selected workloads -- the "always" policy.

One can emulate (2) through (1), by setting the system policy to
"madvise"/"always" while disabling THPs for all processes that don't want
THPs. It requires configuring all workloads, but that is a user-space
problem to sort out. (4) can be emulated through (3) in a similar way.

Back when (1) was relevant, as people started enabling THPs, we added
PR_SET_THP_DISABLE, so relevant workloads that were not ready yet (e.g.,
Redis) were able to just disable THPs completely. Redis still implements
the option to use this interface to disable THPs completely.

With PR_SET_THP_DISABLE, we added a way to force-disable THPs for a
workload -- a process, including its fork+exec'ed process hierarchy. That
essentially made us support (1): simply disable THPs for all workloads
that are not ready for THPs yet, while still enabling THPs system-wide.

The quest for handling (3) and (4) started, but current approaches (a
completely new prctl, options to set other policies per process,
alternatives to prctl -- mctrl, cgroup handling) don't look particularly
promising.
Likely, the future will use BPF or something similar to implement better
policies, in particular to also make better decisions about the THP sizes
to use, but this will certainly take a while as that work has just
started.

Long story short: a simple enable/disable is not really suitable for the
future, so we're not willing to add completely new toggles.

While we could emulate (3)+(4) through (1)+(2) by simply disabling THPs
completely for these processes, this is a step backwards, because these
processes can no longer allocate THPs in regions where THPs were
explicitly advised: regions flagged as VM_HUGEPAGE. Apparently, that
imposes a problem for relevant workloads, because "no THPs" is certainly
worse than "THPs only when advised".

Could we simply relax PR_SET_THP_DISABLE to mean "disable THPs unless
explicitly advised by the app through MADV_HUGEPAGE"? *Maybe*, but this
would change the documented semantics quite a bit, as well as the
versatility of using it for debugging purposes, so I am not 100% sure
that is what we want -- although it would certainly be much easier.

So instead, as an easy way forward for (3) and (4), add an option to make
PR_SET_THP_DISABLE disable *fewer* THPs for a process.

In essence, this patch:

(A) Adds PR_THP_DISABLE_EXCEPT_ADVISED, to be used as a flag in arg3 of
    prctl(PR_SET_THP_DISABLE) when disabling THPs (arg2 != 0):

        prctl(PR_SET_THP_DISABLE, 1, PR_THP_DISABLE_EXCEPT_ADVISED);

(B) Makes prctl(PR_GET_THP_DISABLE) return 3 if
    PR_THP_DISABLE_EXCEPT_ADVISED was set while disabling. Previously, it
    would return 1 if THPs were disabled completely. Now it returns the
    set flags as well: 3 if PR_THP_DISABLE_EXCEPT_ADVISED was set.

(C) Renames MMF_DISABLE_THP to MMF_DISABLE_THP_COMPLETELY, to express the
    semantics clearly. Fortunately, there are only two instances outside
    of prctl() code.
(D) Adds MMF_DISABLE_THP_EXCEPT_ADVISED to express "no THP except for
    VMAs with VM_HUGEPAGE" -- essentially "thp=madvise" behavior.
    Fortunately, we only have to extend vma_thp_disabled().

(E) Indicates "THP_enabled: 0" in /proc/pid/status only if THPs are
    disabled completely. We only indicate that THPs are disabled when
    they are really disabled completely, not just partially.

For now, we don't add another interface to query whether THPs are
disabled only partially (PR_THP_DISABLE_EXCEPT_ADVISED was set). If ever
required, we could add a new entry.

The documented semantics in the man page for PR_SET_THP_DISABLE -- "is
inherited by a child created via fork(2) and is preserved across
execve(2)" -- are maintained. This behavior, for example, allows for
disabling THPs for a workload through the launching process (e.g.,
systemd, where we fork() a helper process to then exec()).

For now, MADV_COLLAPSE will *fail* in regions without VM_HUGEPAGE and
VM_NOHUGEPAGE. As MADV_COLLAPSE is clear advice that user space thinks a
THP is a good idea, we'll enable that separately next (requiring a bit of
cleanup first).

There is currently no way to prevent a process from issuing
PR_SET_THP_DISABLE itself to re-enable THPs. There are no really known
users for re-enabling, and it is against the purpose of the original
interface. So if ever required, we could investigate simply forbidding
re-enabling, or making this somehow configurable.
Acked-by: Usama Arif
Tested-by: Usama Arif
Signed-off-by: David Hildenbrand
Reviewed-by: Lorenzo Stoakes
Signed-off-by: Usama Arif
Acked-by: Zi Yan
---
 Documentation/filesystems/proc.rst |  5 ++-
 fs/proc/array.c                    |  2 +-
 include/linux/huge_mm.h            | 20 +++++++---
 include/linux/mm_types.h           | 13 +++----
 include/uapi/linux/prctl.h         | 10 +++++
 kernel/sys.c                       | 59 ++++++++++++++++++++++++------
 mm/khugepaged.c                    |  2 +-
 7 files changed, 82 insertions(+), 29 deletions(-)

diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
index 2971551b7235..915a3e44bc12 100644
--- a/Documentation/filesystems/proc.rst
+++ b/Documentation/filesystems/proc.rst
@@ -291,8 +291,9 @@ It's slow but very precise.
 HugetlbPages                size of hugetlb memory portions
 CoreDumping                 process's memory is currently being dumped
                             (killing the process may lead to a corrupted core)
- THP_enabled                process is allowed to use THP (returns 0 when
-                            PR_SET_THP_DISABLE is set on the process
+ THP_enabled                process is allowed to use THP (returns 0 when
+                            PR_SET_THP_DISABLE is set on the process to disable
+                            THP completely, not just partially)
 Threads                     number of threads
 SigQ                        number of signals queued/max. number for queue
 SigPnd                      bitmap of pending signals for the thread
diff --git a/fs/proc/array.c b/fs/proc/array.c
index d6a0369caa93..c4f91a784104 100644
--- a/fs/proc/array.c
+++ b/fs/proc/array.c
@@ -422,7 +422,7 @@ static inline void task_thp_status(struct seq_file *m, struct mm_struct *mm)
        bool thp_enabled = IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE);
 
        if (thp_enabled)
-               thp_enabled = !test_bit(MMF_DISABLE_THP, &mm->flags);
+               thp_enabled = !test_bit(MMF_DISABLE_THP_COMPLETELY, &mm->flags);
        seq_printf(m, "THP_enabled:\t%d\n", thp_enabled);
 }
 
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 7748489fde1b..71db243a002e 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -318,16 +318,26 @@ struct thpsize {
        (transparent_hugepage_flags &                                   \
         (1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG))
 
 static inline bool vma_thp_disabled(struct vm_area_struct *vma,
                vm_flags_t vm_flags)
 {
+       /* Are THPs disabled for this VMA? */
+       if (vm_flags & VM_NOHUGEPAGE)
+               return true;
+       /* Are THPs disabled for all VMAs in the whole process? */
+       if (test_bit(MMF_DISABLE_THP_COMPLETELY, &vma->vm_mm->flags))
+               return true;
        /*
-        * Explicitly disabled through madvise or prctl, or some
-        * architectures may disable THP for some mappings, for
-        * example, s390 kvm.
+        * Are THPs disabled only for VMAs where we didn't get an explicit
+        * advise to use them?
        */
-       return (vm_flags & VM_NOHUGEPAGE) ||
-              test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags);
+       if (vm_flags & VM_HUGEPAGE)
+               return false;
+       return test_bit(MMF_DISABLE_THP_EXCEPT_ADVISED, &vma->vm_mm->flags);
 }
 
 static inline bool thp_disabled_by_hw(void)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 1ec273b06691..123fefaa4b98 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1743,19 +1743,16 @@ enum {
 #define MMF_VM_MERGEABLE       16      /* KSM may merge identical pages */
 #define MMF_VM_HUGEPAGE        17      /* set when mm is available for khugepaged */
 
-/*
- * This one-shot flag is dropped due to necessity of changing exe once again
- * on NFS restore
- */
-//#define MMF_EXE_FILE_CHANGED 18      /* see prctl_set_mm_exe_file() */
+#define MMF_HUGE_ZERO_PAGE     18      /* mm has ever used the global huge zero page */
 
 #define MMF_HAS_UPROBES        19      /* has uprobes */
 #define MMF_RECALC_UPROBES     20      /* MMF_HAS_UPROBES can be wrong */
 #define MMF_OOM_SKIP           21      /* mm is of no interest for the OOM killer */
 #define MMF_UNSTABLE           22      /* mm is unstable for copy_from_user */
-#define MMF_HUGE_ZERO_PAGE     23      /* mm has ever used the global huge zero page */
-#define MMF_DISABLE_THP        24      /* disable THP for all VMAs */
-#define MMF_DISABLE_THP_MASK   (1 << MMF_DISABLE_THP)
+#define MMF_DISABLE_THP_EXCEPT_ADVISED 23      /* no THP except when advised (e.g., VM_HUGEPAGE) */
+#define MMF_DISABLE_THP_COMPLETELY     24      /* no THP for all VMAs */
+#define MMF_DISABLE_THP_MASK   ((1 << MMF_DISABLE_THP_COMPLETELY) |\
+                                (1 << MMF_DISABLE_THP_EXCEPT_ADVISED))
 #define MMF_OOM_REAP_QUEUED    25      /* mm was queued for oom_reaper */
 #define MMF_MULTIPROCESS       26      /* mm is shared between processes */
 /*
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index 43dec6eed559..9c1d6e49b8a9 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -177,7 +177,17 @@ struct prctl_mm_map {
 
 #define PR_GET_TID_ADDRESS     40
 
+/*
+ * Flags for PR_SET_THP_DISABLE are only applicable when disabling. Bit 0
+ * is reserved, so PR_GET_THP_DISABLE can return "1 | flags", to effectively
+ * return "1" when no flags were specified for PR_SET_THP_DISABLE.
+ */
 #define PR_SET_THP_DISABLE     41
+/*
+ * Don't disable THPs when explicitly advised (e.g., MADV_HUGEPAGE /
+ * VM_HUGEPAGE).
+ */
+# define PR_THP_DISABLE_EXCEPT_ADVISED (1 << 1)
 #define PR_GET_THP_DISABLE     42
 
 /*
diff --git a/kernel/sys.c b/kernel/sys.c
index b153fb345ada..5b6c80eafff9 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -2423,6 +2423,51 @@ static int prctl_get_auxv(void __user *addr, unsigned long len)
        return sizeof(mm->saved_auxv);
 }
 
+static int prctl_get_thp_disable(unsigned long arg2, unsigned long arg3,
+                                unsigned long arg4, unsigned long arg5)
+{
+       unsigned long *mm_flags = &current->mm->flags;
+
+       if (arg2 || arg3 || arg4 || arg5)
+               return -EINVAL;
+
+       /* If disabled, we return "1 | flags", otherwise 0. */
+       if (test_bit(MMF_DISABLE_THP_COMPLETELY, mm_flags))
+               return 1;
+       else if (test_bit(MMF_DISABLE_THP_EXCEPT_ADVISED, mm_flags))
+               return 1 | PR_THP_DISABLE_EXCEPT_ADVISED;
+       return 0;
+}
+
+static int prctl_set_thp_disable(bool thp_disable, unsigned long flags,
+                                unsigned long arg4, unsigned long arg5)
+{
+       unsigned long *mm_flags = &current->mm->flags;
+
+       if (arg4 || arg5)
+               return -EINVAL;
+
+       /* Flags are only allowed when disabling. */
+       if ((!thp_disable && flags) || (flags & ~PR_THP_DISABLE_EXCEPT_ADVISED))
+               return -EINVAL;
+       if (mmap_write_lock_killable(current->mm))
+               return -EINTR;
+       if (thp_disable) {
+               if (flags & PR_THP_DISABLE_EXCEPT_ADVISED) {
+                       clear_bit(MMF_DISABLE_THP_COMPLETELY, mm_flags);
+                       set_bit(MMF_DISABLE_THP_EXCEPT_ADVISED, mm_flags);
+               } else {
+                       set_bit(MMF_DISABLE_THP_COMPLETELY, mm_flags);
+                       clear_bit(MMF_DISABLE_THP_EXCEPT_ADVISED, mm_flags);
+               }
+       } else {
+               clear_bit(MMF_DISABLE_THP_COMPLETELY, mm_flags);
+               clear_bit(MMF_DISABLE_THP_EXCEPT_ADVISED, mm_flags);
+       }
+       mmap_write_unlock(current->mm);
+       return 0;
+}
+
 SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
                unsigned long, arg4, unsigned long, arg5)
 {
@@ -2596,20 +2641,10 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
                        return -EINVAL;
                return task_no_new_privs(current) ? 1 : 0;
        case PR_GET_THP_DISABLE:
-               if (arg2 || arg3 || arg4 || arg5)
-                       return -EINVAL;
-               error = !!test_bit(MMF_DISABLE_THP, &me->mm->flags);
+               error = prctl_get_thp_disable(arg2, arg3, arg4, arg5);
                break;
        case PR_SET_THP_DISABLE:
-               if (arg3 || arg4 || arg5)
-                       return -EINVAL;
-               if (mmap_write_lock_killable(me->mm))
-                       return -EINTR;
-               if (arg2)
-                       set_bit(MMF_DISABLE_THP, &me->mm->flags);
-               else
-                       clear_bit(MMF_DISABLE_THP, &me->mm->flags);
-               mmap_write_unlock(me->mm);
+               error = prctl_set_thp_disable(arg2, arg3, arg4, arg5);
                break;
        case PR_MPX_ENABLE_MANAGEMENT:
        case PR_MPX_DISABLE_MANAGEMENT:
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 1ff0c7dd2be4..2c9008246785 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -410,7 +410,7 @@ static inline int hpage_collapse_test_exit(struct mm_struct *mm)
 static inline int hpage_collapse_test_exit_or_disable(struct mm_struct *mm)
 {
        return hpage_collapse_test_exit(mm) ||
-              test_bit(MMF_DISABLE_THP, &mm->flags);
+              test_bit(MMF_DISABLE_THP_COMPLETELY, &mm->flags);
 }
 
 static bool hugepage_pmd_enabled(void)
-- 
2.47.3

From nobody Sun Oct 5 12:23:56 2025
From: Usama Arif
To: Andrew Morton, david@redhat.com, linux-mm@kvack.org
Cc: linux-fsdevel@vger.kernel.org, corbet@lwn.net, rppt@kernel.org, surenb@google.com, mhocko@suse.com, hannes@cmpxchg.org, baohua@kernel.org, shakeel.butt@linux.dev, riel@surriel.com, ziy@nvidia.com, laoar.shao@gmail.com, dev.jain@arm.com, baolin.wang@linux.alibaba.com, npache@redhat.com, lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, ryan.roberts@arm.com, vbabka@suse.cz, jannh@google.com, Arnd Bergmann, sj@kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kernel-team@meta.com, Usama Arif
Subject: [PATCH v3 2/6] mm/huge_memory: convert "tva_flags" to "enum tva_type"
Date: Mon, 4 Aug 2025 16:40:45 +0100
Message-ID: <20250804154317.1648084-3-usamaarif642@gmail.com>
In-Reply-To: <20250804154317.1648084-1-usamaarif642@gmail.com>
References: <20250804154317.1648084-1-usamaarif642@gmail.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: David Hildenbrand

When determining which THP orders are eligible for a VMA mapping, we
have previously specified tva_flags; however, it turns out it is really
not necessary to treat these as flags. Rather, we can distinguish
between distinct modes.
The only case where we previously combined flags was with
TVA_ENFORCE_SYSFS, but we can avoid this by observing that enforcing
sysfs settings is the default, except for MADV_COLLAPSE and the edge
cases in collapse_pte_mapped_thp() and hugepage_vma_revalidate(), and by
adding a mode specifically for this case: TVA_FORCED_COLLAPSE.

We have:

* smaps handling for showing "THPeligible"
* Pagefault handling
* khugepaged handling
* Forced collapse handling: primarily MADV_COLLAPSE, but also for an
  edge case in collapse_pte_mapped_thp()

Disregarding the edge cases, we want to ignore sysfs settings only when
we are forcing a collapse through MADV_COLLAPSE; otherwise we want to
enforce them. Hence this patch makes the following flag-to-enum
conversions:

* TVA_SMAPS | TVA_ENFORCE_SYSFS -> TVA_SMAPS
* TVA_IN_PF | TVA_ENFORCE_SYSFS -> TVA_PAGEFAULT
* TVA_ENFORCE_SYSFS -> TVA_KHUGEPAGED
* 0 -> TVA_FORCED_COLLAPSE

With this change, we immediately know if we are in the forced collapse
case, which will be valuable next.

Signed-off-by: David Hildenbrand
Acked-by: Usama Arif
Signed-off-by: Usama Arif
---
 fs/proc/task_mmu.c      |  4 ++--
 include/linux/huge_mm.h | 30 ++++++++++++++++++------------
 mm/huge_memory.c        |  8 ++++----
 mm/khugepaged.c         | 17 ++++++++---------
 mm/memory.c             | 14 ++++++--------
 5 files changed, 38 insertions(+), 35 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 3d6d8a9f13fc..d440df7b3d59 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1293,8 +1293,8 @@ static int show_smap(struct seq_file *m, void *v)
        __show_smap(m, &mss, false);
 
        seq_printf(m, "THPeligible:    %8u\n",
-                  !!thp_vma_allowable_orders(vma, vma->vm_flags,
-                          TVA_SMAPS | TVA_ENFORCE_SYSFS, THP_ORDERS_ALL));
+                  !!thp_vma_allowable_orders(vma, vma->vm_flags, TVA_SMAPS,
+                                             THP_ORDERS_ALL));
 
        if (arch_pkeys_enabled())
                seq_printf(m, "ProtectionKey:  %8u\n", vma_pkey(vma));
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 71db243a002e..bd4f9e6327e0 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -94,12 +94,15 @@ extern struct kobj_attribute thpsize_shmem_enabled_attr;
 #define THP_ORDERS_ALL \
        (THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_SPECIAL | THP_ORDERS_ALL_FILE_DEFAULT)
 
-#define TVA_SMAPS              (1 << 0)        /* Will be used for procfs */
-#define TVA_IN_PF              (1 << 1)        /* Page fault handler */
-#define TVA_ENFORCE_SYSFS      (1 << 2)        /* Obey sysfs configuration */
+enum tva_type {
+       TVA_SMAPS,              /* Exposing "THPeligible:" in smaps. */
+       TVA_PAGEFAULT,          /* Serving a page fault. */
+       TVA_KHUGEPAGED,         /* Khugepaged collapse. */
+       TVA_FORCED_COLLAPSE,    /* Forced collapse (e.g. MADV_COLLAPSE). */
+};
 
-#define thp_vma_allowable_order(vma, vm_flags, tva_flags, order) \
-       (!!thp_vma_allowable_orders(vma, vm_flags, tva_flags, BIT(order)))
+#define thp_vma_allowable_order(vma, vm_flags, type, order) \
+       (!!thp_vma_allowable_orders(vma, vm_flags, type, BIT(order)))
 
 #define split_folio(f) split_folio_to_list(f, NULL)
 
@@ -264,14 +267,14 @@ static inline unsigned long thp_vma_suitable_orders(struct vm_area_struct *vma,
 
 unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
                                         vm_flags_t vm_flags,
-                                        unsigned long tva_flags,
+                                        enum tva_type type,
                                         unsigned long orders);
 
 /**
  * thp_vma_allowable_orders - determine hugepage orders that are allowed for vma
  * @vma:  the vm area to check
  * @vm_flags: use these vm_flags instead of vma->vm_flags
- * @tva_flags: Which TVA flags to honour
+ * @type: TVA type
  * @orders: bitfield of all orders to consider
  *
  * Calculates the intersection of the requested hugepage orders and the allowed
@@ -285,11 +288,14 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
 static inline
 unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
                                       vm_flags_t vm_flags,
-                                      unsigned long tva_flags,
+                                      enum tva_type type,
                                       unsigned long orders)
 {
-       /* Optimization to check if required orders are enabled early. */
-       if ((tva_flags & TVA_ENFORCE_SYSFS) && vma_is_anonymous(vma)) {
+       /*
+        * Optimization to check if required orders are enabled early. Only
+        * forced collapse ignores sysfs configs.
+        */
+       if (type != TVA_FORCED_COLLAPSE && vma_is_anonymous(vma)) {
                unsigned long mask = READ_ONCE(huge_anon_orders_always);
 
                if (vm_flags & VM_HUGEPAGE)
@@ -303,7 +309,7 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
                return 0;
        }
 
-       return __thp_vma_allowable_orders(vma, vm_flags, tva_flags, orders);
+       return __thp_vma_allowable_orders(vma, vm_flags, type, orders);
 }
 
 struct thpsize {
@@ -536,7 +542,7 @@ static inline unsigned long thp_vma_suitable_orders(struct vm_area_struct *vma,
 
 static inline unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
                                                     vm_flags_t vm_flags,
-                                                    unsigned long tva_flags,
+                                                    enum tva_type type,
                                                     unsigned long orders)
 {
        return 0;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2b4ea5a2ce7d..85252b468f80 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -99,12 +99,12 @@ static inline bool file_thp_enabled(struct vm_area_struct *vma)
 
 unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
                                         vm_flags_t vm_flags,
-                                        unsigned long tva_flags,
+                                        enum tva_type type,
                                         unsigned long orders)
 {
-       bool smaps = tva_flags & TVA_SMAPS;
-       bool in_pf = tva_flags & TVA_IN_PF;
-       bool enforce_sysfs = tva_flags & TVA_ENFORCE_SYSFS;
+       const bool smaps = type == TVA_SMAPS;
+       const bool in_pf = type == TVA_PAGEFAULT;
+       const bool enforce_sysfs = type != TVA_FORCED_COLLAPSE;
        unsigned long supported_orders;
 
        /* Check the intersection of requested and supported orders. */
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2c9008246785..88cb6339e910 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -474,8 +474,7 @@ void khugepaged_enter_vma(struct vm_area_struct *vma,
 {
        if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
            hugepage_pmd_enabled()) {
-               if (thp_vma_allowable_order(vma, vm_flags, TVA_ENFORCE_SYSFS,
-                                           PMD_ORDER))
+               if (thp_vma_allowable_order(vma, vm_flags, TVA_KHUGEPAGED, PMD_ORDER))
                        __khugepaged_enter(vma->vm_mm);
        }
 }
@@ -921,7 +920,8 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
                                   struct collapse_control *cc)
 {
        struct vm_area_struct *vma;
-       unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
+       enum tva_type type = cc->is_khugepaged ? TVA_KHUGEPAGED :
+                               TVA_FORCED_COLLAPSE;
 
        if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
                return SCAN_ANY_PROCESS;
@@ -932,7 +932,7 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
 
        if (!thp_vma_suitable_order(vma, address, PMD_ORDER))
                return SCAN_ADDRESS_RANGE;
-       if (!thp_vma_allowable_order(vma, vma->vm_flags, tva_flags, PMD_ORDER))
+       if (!thp_vma_allowable_order(vma, vma->vm_flags, type, PMD_ORDER))
                return SCAN_VMA_CHECK;
        /*
         * Anon VMA expected, the address may be unmapped then
@@ -1532,9 +1532,9 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
         * in the page cache with a single hugepage. If a mm were to fault-in
         * this memory (mapped by a suitably aligned VMA), we'd get the hugepage
         * and map it by a PMD, regardless of sysfs THP settings. As such, let's
-        * analogously elide sysfs THP settings here.
+        * analogously elide sysfs THP settings here and force collapse.
         */
-       if (!thp_vma_allowable_order(vma, vma->vm_flags, 0, PMD_ORDER))
+       if (!thp_vma_allowable_order(vma, vma->vm_flags, TVA_FORCED_COLLAPSE, PMD_ORDER))
                return SCAN_VMA_CHECK;
 
        /* Keep pmd pgtable for uffd-wp; see comment in retract_page_tables() */
@@ -2431,8 +2431,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
                        progress++;
                        break;
                }
-               if (!thp_vma_allowable_order(vma, vma->vm_flags,
-                                            TVA_ENFORCE_SYSFS, PMD_ORDER)) {
+               if (!thp_vma_allowable_order(vma, vma->vm_flags, TVA_KHUGEPAGED, PMD_ORDER)) {
 skip:
                        progress++;
                        continue;
@@ -2766,7 +2765,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
        BUG_ON(vma->vm_start > start);
        BUG_ON(vma->vm_end < end);
 
-       if (!thp_vma_allowable_order(vma, vma->vm_flags, 0, PMD_ORDER))
+       if (!thp_vma_allowable_order(vma, vma->vm_flags, TVA_FORCED_COLLAPSE, PMD_ORDER))
                return -EINVAL;
 
        cc = kmalloc(sizeof(*cc), GFP_KERNEL);
diff --git a/mm/memory.c b/mm/memory.c
index 92fd18a5d8d1..be761753f240 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4369,8 +4369,8 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
         * Get a list of all the (large) orders below PMD_ORDER that are enabled
         * and suitable for swapping THP.
         */
-       orders = thp_vma_allowable_orders(vma, vma->vm_flags,
-                       TVA_IN_PF | TVA_ENFORCE_SYSFS, BIT(PMD_ORDER) - 1);
+       orders = thp_vma_allowable_orders(vma, vma->vm_flags, TVA_PAGEFAULT,
+                                         BIT(PMD_ORDER) - 1);
        orders = thp_vma_suitable_orders(vma, vmf->address, orders);
        orders = thp_swap_suitable_orders(swp_offset(entry), vmf->address,
                                          orders);
@@ -4917,8 +4917,8 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
         * for this vma. Then filter out the orders that can't be allocated over
         * the faulting address and still be fully contained in the vma.
         */
-       orders = thp_vma_allowable_orders(vma, vma->vm_flags,
-                       TVA_IN_PF | TVA_ENFORCE_SYSFS, BIT(PMD_ORDER) - 1);
+       orders = thp_vma_allowable_orders(vma, vma->vm_flags, TVA_PAGEFAULT,
+                                         BIT(PMD_ORDER) - 1);
        orders = thp_vma_suitable_orders(vma, vmf->address, orders);
 
        if (!orders)
@@ -6108,8 +6108,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
                return VM_FAULT_OOM;
 retry_pud:
        if (pud_none(*vmf.pud) &&
-           thp_vma_allowable_order(vma, vm_flags,
-                                   TVA_IN_PF | TVA_ENFORCE_SYSFS, PUD_ORDER)) {
+           thp_vma_allowable_order(vma, vm_flags, TVA_PAGEFAULT, PUD_ORDER)) {
                ret = create_huge_pud(&vmf);
                if (!(ret & VM_FAULT_FALLBACK))
                        return ret;
@@ -6143,8 +6142,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
                goto retry_pud;
 
        if (pmd_none(*vmf.pmd) &&
-           thp_vma_allowable_order(vma, vm_flags,
-                                   TVA_IN_PF | TVA_ENFORCE_SYSFS, PMD_ORDER)) {
+           thp_vma_allowable_order(vma, vm_flags, TVA_PAGEFAULT, PMD_ORDER)) {
                ret = create_huge_pmd(&vmf);
                if (!(ret & VM_FAULT_FALLBACK))
                        return ret;
-- 
2.47.3
From: Usama Arif
To: Andrew Morton, david@redhat.com, linux-mm@kvack.org
Cc: linux-fsdevel@vger.kernel.org, corbet@lwn.net, rppt@kernel.org, surenb@google.com, mhocko@suse.com, hannes@cmpxchg.org, baohua@kernel.org, shakeel.butt@linux.dev, riel@surriel.com, ziy@nvidia.com, laoar.shao@gmail.com, dev.jain@arm.com, baolin.wang@linux.alibaba.com, npache@redhat.com, lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, ryan.roberts@arm.com, vbabka@suse.cz, jannh@google.com, Arnd Bergmann, sj@kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kernel-team@meta.com, Usama Arif
Subject: [PATCH v3 3/6] mm/huge_memory: respect MADV_COLLAPSE with PR_THP_DISABLE_EXCEPT_ADVISED
Date: Mon, 4 Aug 2025 16:40:46 +0100
Message-ID: <20250804154317.1648084-4-usamaarif642@gmail.com>
In-Reply-To: <20250804154317.1648084-1-usamaarif642@gmail.com>
References: <20250804154317.1648084-1-usamaarif642@gmail.com>

From: David Hildenbrand

Let's allow MADV_COLLAPSE to succeed on areas that have neither
VM_HUGEPAGE nor VM_NOHUGEPAGE when THP is disabled unless explicitly
advised (PR_THP_DISABLE_EXCEPT_ADVISED): MADV_COLLAPSE is clear advice
that we want to collapse.

Note that we still respect the VM_NOHUGEPAGE flag, just like
MADV_COLLAPSE always does. Consequently, with
PR_THP_DISABLE_EXCEPT_ADVISED, MADV_COLLAPSE is now refused only on
VM_NOHUGEPAGE areas, including for shmem.

Co-developed-by: Usama Arif
Signed-off-by: Usama Arif
Signed-off-by: David Hildenbrand
---
 include/linux/huge_mm.h    | 8 +++++++-
 include/uapi/linux/prctl.h | 2 +-
 mm/huge_memory.c           | 5 +++--
 mm/memory.c                | 6 ++++--
 mm/shmem.c                 | 2 +-
 5 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index bd4f9e6327e0..1fd06ecbde72 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -329,7 +329,7 @@ struct thpsize {
  * through madvise or prctl.
  */
 static inline bool vma_thp_disabled(struct vm_area_struct *vma,
-		vm_flags_t vm_flags)
+		vm_flags_t vm_flags, bool forced_collapse)
 {
 	/* Are THPs disabled for this VMA? */
 	if (vm_flags & VM_NOHUGEPAGE)
@@ -343,6 +343,12 @@ static inline bool vma_thp_disabled(struct vm_area_struct *vma,
 	 */
 	if (vm_flags & VM_HUGEPAGE)
 		return false;
+	/*
+	 * Forcing a collapse (e.g., madv_collapse) is a clear advice to
+	 * use THPs.
+	 */
+	if (forced_collapse)
+		return false;
 	return test_bit(MMF_DISABLE_THP_EXCEPT_ADVISED, &vma->vm_mm->flags);
 }

diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index 9c1d6e49b8a9..cdda963a039a 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -185,7 +185,7 @@ struct prctl_mm_map {
 #define PR_SET_THP_DISABLE	41
 /*
  * Don't disable THPs when explicitly advised (e.g., MADV_HUGEPAGE /
- * VM_HUGEPAGE).
+ * VM_HUGEPAGE, MADV_COLLAPSE).
  */
 # define PR_THP_DISABLE_EXCEPT_ADVISED	(1 << 1)
 #define PR_GET_THP_DISABLE	42
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 85252b468f80..ef5ccb0ec5d5 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -104,7 +104,8 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
 {
 	const bool smaps = type == TVA_SMAPS;
 	const bool in_pf = type == TVA_PAGEFAULT;
-	const bool enforce_sysfs = type != TVA_FORCED_COLLAPSE;
+	const bool forced_collapse = type == TVA_FORCED_COLLAPSE;
+	const bool enforce_sysfs = !forced_collapse;
 	unsigned long supported_orders;

 	/* Check the intersection of requested and supported orders. */
@@ -122,7 +123,7 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
 	if (!vma->vm_mm)		/* vdso */
 		return 0;

-	if (thp_disabled_by_hw() || vma_thp_disabled(vma, vm_flags))
+	if (thp_disabled_by_hw() || vma_thp_disabled(vma, vm_flags, forced_collapse))
 		return 0;

 	/* khugepaged doesn't collapse DAX vma, but page fault is fine. */
diff --git a/mm/memory.c b/mm/memory.c
index be761753f240..bd04212d6f79 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5186,9 +5186,11 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct folio *folio, struct page *page)
 	 * It is too late to allocate a small folio, we already have a large
 	 * folio in the pagecache: especially s390 KVM cannot tolerate any
 	 * PMD mappings, but PTE-mapped THP are fine. So let's simply refuse any
-	 * PMD mappings if THPs are disabled.
+	 * PMD mappings if THPs are disabled. As we already have a THP ...
+	 * behave as if we are forcing a collapse.
 	 */
-	if (thp_disabled_by_hw() || vma_thp_disabled(vma, vma->vm_flags))
+	if (thp_disabled_by_hw() || vma_thp_disabled(vma, vma->vm_flags,
+						     /* forced_collapse= */ true))
 		return ret;

 	if (!thp_vma_suitable_order(vma, haddr, PMD_ORDER))
diff --git a/mm/shmem.c b/mm/shmem.c
index e6cdfda08aed..30609197a266 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1816,7 +1816,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
 	vm_flags_t vm_flags = vma ? vma->vm_flags : 0;
 	unsigned int global_orders;

-	if (thp_disabled_by_hw() || (vma && vma_thp_disabled(vma, vm_flags)))
+	if (thp_disabled_by_hw() || (vma && vma_thp_disabled(vma, vm_flags, shmem_huge_force)))
 		return 0;

 	global_orders = shmem_huge_global_enabled(inode, index, write_end,
-- 
2.47.3
From: Usama Arif
To: Andrew Morton, david@redhat.com, linux-mm@kvack.org
Cc: linux-fsdevel@vger.kernel.org, corbet@lwn.net, rppt@kernel.org, surenb@google.com, mhocko@suse.com, hannes@cmpxchg.org, baohua@kernel.org, shakeel.butt@linux.dev, riel@surriel.com, ziy@nvidia.com, laoar.shao@gmail.com, dev.jain@arm.com, baolin.wang@linux.alibaba.com, npache@redhat.com, lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, ryan.roberts@arm.com, vbabka@suse.cz, jannh@google.com, Arnd Bergmann, sj@kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kernel-team@meta.com, Usama Arif
Subject: [PATCH v3 4/6] docs: transhuge: document process level THP controls
Date: Mon, 4 Aug 2025 16:40:47 +0100
Message-ID: <20250804154317.1648084-5-usamaarif642@gmail.com>
In-Reply-To: <20250804154317.1648084-1-usamaarif642@gmail.com>
References: <20250804154317.1648084-1-usamaarif642@gmail.com>

This documents the PR_SET_THP_DISABLE/PR_GET_THP_DISABLE pair of prctl
calls, as well as the newly introduced PR_THP_DISABLE_EXCEPT_ADVISED
flag for the PR_SET_THP_DISABLE prctl call.

Signed-off-by: Usama Arif
---
 Documentation/admin-guide/mm/transhuge.rst | 38 ++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 370fba113460..a36a04394ff5 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -225,6 +225,44 @@ to "always" or "madvise"), and it'll be automatically shutdown when PMD-sized
 THP is disabled (when both the per-size anon control and the top-level control
 are "never")

+process THP controls
+--------------------
+
+A process can control its own THP behaviour using the ``PR_SET_THP_DISABLE``
+and ``PR_GET_THP_DISABLE`` pair of prctl(2) calls. These calls support the
+following arguments::
+
+  prctl(PR_SET_THP_DISABLE, 1, 0, 0, 0):
+      This will set the MMF_DISABLE_THP_COMPLETELY mm flag, which will
+      result in no THPs being faulted in or collapsed, irrespective
+      of global THP controls. This flag, and hence the behaviour, is
+      inherited across fork(2) and execve(2).
+
+  prctl(PR_SET_THP_DISABLE, 1, PR_THP_DISABLE_EXCEPT_ADVISED, 0, 0):
+      This will set the MMF_DISABLE_THP_EXCEPT_ADVISED mm flag, which
+      will result in THPs being faulted in or collapsed only in
+      the following cases:
+      - Global THP controls are set to "always" or "madvise" and
+        the process has madvised the region with either MADV_HUGEPAGE
+        or MADV_COLLAPSE.
+      - Global THP controls are set to "never" and the process has
+        madvised the region with MADV_COLLAPSE.
+      This flag, and hence the behaviour, is inherited across fork(2)
+      and execve(2).
+
+  prctl(PR_SET_THP_DISABLE, 0, 0, 0, 0):
+      This will clear the MMF_DISABLE_THP_COMPLETELY and
+      MMF_DISABLE_THP_EXCEPT_ADVISED mm flags. The process will
+      behave according to the global THP controls. This behaviour
+      is inherited across fork(2) and execve(2).
+
+  prctl(PR_GET_THP_DISABLE, 0, 0, 0, 0):
+      This will return the THP disable mm flag status of the process
+      that was set by prctl(PR_SET_THP_DISABLE, ...), i.e.:
+      - 1 if the MMF_DISABLE_THP_COMPLETELY flag is set,
+      - 3 if the MMF_DISABLE_THP_EXCEPT_ADVISED flag is set,
+      - 0 otherwise.
+
 Khugepaged controls
 -------------------

-- 
2.47.3
From: Usama Arif
To: Andrew Morton, david@redhat.com, linux-mm@kvack.org
Cc: linux-fsdevel@vger.kernel.org, corbet@lwn.net, rppt@kernel.org, surenb@google.com, mhocko@suse.com, hannes@cmpxchg.org, baohua@kernel.org, shakeel.butt@linux.dev, riel@surriel.com, ziy@nvidia.com, laoar.shao@gmail.com, dev.jain@arm.com, baolin.wang@linux.alibaba.com, npache@redhat.com, lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, ryan.roberts@arm.com, vbabka@suse.cz, jannh@google.com, Arnd Bergmann, sj@kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kernel-team@meta.com, Usama Arif
Subject: [PATCH v3 5/6] selftests: prctl: introduce tests for disabling THPs completely
Date: Mon, 4 Aug 2025 16:40:48 +0100
Message-ID: <20250804154317.1648084-6-usamaarif642@gmail.com>
In-Reply-To: <20250804154317.1648084-1-usamaarif642@gmail.com>
References: <20250804154317.1648084-1-usamaarif642@gmail.com>

The test will set the global system THP setting to never, madvise or
always depending on the fixture variant, and the 2M setting to inherit,
before it starts (and reset both to the original values at teardown).

This tests whether the process can:
- successfully set and get the policy to disable THPs completely.
- never get a hugepage when THPs are completely disabled with the
  prctl, including with MADV_HUGEPAGE and MADV_COLLAPSE.
- successfully reset the policy of the process.
- after reset, only get hugepages with:
  - MADV_COLLAPSE when the policy is set to "never".
  - MADV_HUGEPAGE and MADV_COLLAPSE when the policy is set to "madvise".
  - always when the policy is set to "always".
- repeat the above tests in a forked process to make sure the policy is
  carried across forks.

Signed-off-by: Usama Arif
---
 tools/testing/selftests/mm/.gitignore        |   1 +
 tools/testing/selftests/mm/Makefile          |   1 +
 .../testing/selftests/mm/prctl_thp_disable.c | 173 ++++++++++++++++++
 tools/testing/selftests/mm/thp_settings.c    |   9 +-
 tools/testing/selftests/mm/thp_settings.h    |   1 +
 5 files changed, 184 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/mm/prctl_thp_disable.c

diff --git a/tools/testing/selftests/mm/.gitignore b/tools/testing/selftests/mm/.gitignore
index e7b23a8a05fe..eb023ea857b3 100644
--- a/tools/testing/selftests/mm/.gitignore
+++ b/tools/testing/selftests/mm/.gitignore
@@ -58,3 +58,4 @@ pkey_sighandler_tests_32
 pkey_sighandler_tests_64
 guard-regions
 merge
+prctl_thp_disable
diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
index d13b3cef2a2b..2bb8d3ebc17c 100644
--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ -86,6 +86,7 @@ TEST_GEN_FILES += on-fault-limit
 TEST_GEN_FILES += pagemap_ioctl
 TEST_GEN_FILES += pfnmap
 TEST_GEN_FILES += process_madv
+TEST_GEN_FILES += prctl_thp_disable
 TEST_GEN_FILES += thuge-gen
 TEST_GEN_FILES += transhuge-stress
 TEST_GEN_FILES += uffd-stress
diff --git a/tools/testing/selftests/mm/prctl_thp_disable.c b/tools/testing/selftests/mm/prctl_thp_disable.c
new file mode 100644
index 000000000000..ef150180daf4
--- /dev/null
+++ b/tools/testing/selftests/mm/prctl_thp_disable.c
@@ -0,0 +1,173 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Basic tests for PR_GET/SET_THP_DISABLE prctl calls
+ *
+ * Author(s): Usama Arif
+ */
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "../kselftest_harness.h"
+#include "thp_settings.h"
+#include "vm_util.h"
+
+static int sz2ord(size_t size, size_t pagesize)
+{
+	return __builtin_ctzll(size / pagesize);
+}
+
+enum thp_collapse_type {
+	THP_COLLAPSE_NONE,
+	THP_COLLAPSE_MADV_HUGEPAGE,	/* MADV_HUGEPAGE before access */
+	THP_COLLAPSE_MADV_COLLAPSE,	/* MADV_COLLAPSE after access */
+};
+
+/*
+ * Function to mmap a buffer, fault it in, madvise it appropriately (before
+ * page fault for MADV_HUGE, and after for MADV_COLLAPSE), and check if the
+ * mmap region is huge.
+ * Returns:
+ * 0 if test doesn't give hugepage
+ * 1 if test gives a hugepage
+ * -errno if mmap fails
+ */
+static int test_mmap_thp(enum thp_collapse_type madvise_buf, size_t pmdsize)
+{
+	char *mem, *mmap_mem;
+	size_t mmap_size;
+	int ret;
+
+	/* For alignment purposes, we need twice the THP size. */
+	mmap_size = 2 * pmdsize;
+	mmap_mem = (char *)mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
+				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+	if (mmap_mem == MAP_FAILED)
+		return -errno;
+
+	/* We need a THP-aligned memory area. */
+	mem = (char *)(((uintptr_t)mmap_mem + pmdsize) & ~(pmdsize - 1));
+
+	if (madvise_buf == THP_COLLAPSE_MADV_HUGEPAGE)
+		madvise(mem, pmdsize, MADV_HUGEPAGE);
+
+	/* Ensure memory is allocated */
+	memset(mem, 1, pmdsize);
+
+	if (madvise_buf == THP_COLLAPSE_MADV_COLLAPSE)
+		madvise(mem, pmdsize, MADV_COLLAPSE);
+
+	/* HACK: make sure we have a separate VMA that we can check reliably. */
+	mprotect(mem, pmdsize, PROT_READ);
+
+	ret = check_huge_anon(mem, 1, pmdsize);
+	munmap(mmap_mem, mmap_size);
+	return ret;
+}
+
+static void prctl_thp_disable_completely_test(struct __test_metadata *const _metadata,
+					      size_t pmdsize,
+					      enum thp_enabled thp_policy)
+{
+	ASSERT_EQ(prctl(PR_GET_THP_DISABLE, NULL, NULL, NULL, NULL), 1);
+
+	/* tests after prctl overrides global policy */
+	ASSERT_EQ(test_mmap_thp(THP_COLLAPSE_NONE, pmdsize), 0);
+
+	ASSERT_EQ(test_mmap_thp(THP_COLLAPSE_MADV_HUGEPAGE, pmdsize), 0);
+
+	ASSERT_EQ(test_mmap_thp(THP_COLLAPSE_MADV_COLLAPSE, pmdsize), 0);
+
+	/* Reset to global policy */
+	ASSERT_EQ(prctl(PR_SET_THP_DISABLE, 0, NULL, NULL, NULL), 0);
+
+	/* tests after prctl is cleared, and only global policy is effective */
+	ASSERT_EQ(test_mmap_thp(THP_COLLAPSE_NONE, pmdsize),
+		  thp_policy == THP_ALWAYS ? 1 : 0);
+
+	ASSERT_EQ(test_mmap_thp(THP_COLLAPSE_MADV_HUGEPAGE, pmdsize),
+		  thp_policy == THP_NEVER ? 0 : 1);
+
+	ASSERT_EQ(test_mmap_thp(THP_COLLAPSE_MADV_COLLAPSE, pmdsize), 1);
+}
+
+FIXTURE(prctl_thp_disable_completely)
+{
+	struct thp_settings settings;
+	size_t pmdsize;
+};
+
+FIXTURE_VARIANT(prctl_thp_disable_completely)
+{
+	enum thp_enabled thp_policy;
+};
+
+FIXTURE_VARIANT_ADD(prctl_thp_disable_completely, never)
+{
+	.thp_policy = THP_NEVER,
+};
+
+FIXTURE_VARIANT_ADD(prctl_thp_disable_completely, madvise)
+{
+	.thp_policy = THP_MADVISE,
+};
+
+FIXTURE_VARIANT_ADD(prctl_thp_disable_completely, always)
+{
+	.thp_policy = THP_ALWAYS,
+};
+
+FIXTURE_SETUP(prctl_thp_disable_completely)
+{
+	if (!thp_available())
+		SKIP(return, "Transparent Hugepages not available\n");
+
+	self->pmdsize = read_pmd_pagesize();
+	if (!self->pmdsize)
+		SKIP(return, "Unable to read PMD size\n");
+
+	thp_save_settings();
+	thp_read_settings(&self->settings);
+	self->settings.thp_enabled = variant->thp_policy;
+	self->settings.hugepages[sz2ord(self->pmdsize, getpagesize())].enabled = THP_INHERIT;
+	thp_write_settings(&self->settings);
+}
+
+FIXTURE_TEARDOWN(prctl_thp_disable_completely)
+{
+	thp_restore_settings();
+}
+
+TEST_F(prctl_thp_disable_completely, nofork)
+{
+	ASSERT_EQ(prctl(PR_SET_THP_DISABLE, 1, NULL, NULL, NULL), 0);
+	prctl_thp_disable_completely_test(_metadata, self->pmdsize, variant->thp_policy);
+}
+
+TEST_F(prctl_thp_disable_completely, fork)
+{
+	int ret = 0;
+	pid_t pid;
+
+	ASSERT_EQ(prctl(PR_SET_THP_DISABLE, 1, NULL, NULL, NULL), 0);
+
+	/* Make sure prctl changes are carried across fork */
+	pid = fork();
+	ASSERT_GE(pid, 0);
+
+	if (!pid)
+		prctl_thp_disable_completely_test(_metadata, self->pmdsize, variant->thp_policy);
+
+	wait(&ret);
+	if (WIFEXITED(ret))
+		ret = WEXITSTATUS(ret);
+	else
+		ret = -EINVAL;
+	ASSERT_EQ(ret, 0);
+}
+
+TEST_HARNESS_MAIN
diff --git a/tools/testing/selftests/mm/thp_settings.c b/tools/testing/selftests/mm/thp_settings.c
index bad60ac52874..574bd0f8ae48 100644
--- a/tools/testing/selftests/mm/thp_settings.c
+++ b/tools/testing/selftests/mm/thp_settings.c
@@ -382,10 +382,17 @@ unsigned long thp_shmem_supported_orders(void)
 	return __thp_supported_orders(true);
 }

-bool thp_is_enabled(void)
+bool thp_available(void)
 {
 	if (access(THP_SYSFS, F_OK) != 0)
 		return false;
+	return true;
+}
+
+bool thp_is_enabled(void)
+{
+	if (!thp_available())
+		return false;

 	int mode = thp_read_string("enabled", thp_enabled_strings);

diff --git a/tools/testing/selftests/mm/thp_settings.h b/tools/testing/selftests/mm/thp_settings.h
index 6c07f70beee9..76eeb712e5f1 100644
--- a/tools/testing/selftests/mm/thp_settings.h
+++ b/tools/testing/selftests/mm/thp_settings.h
@@ -84,6 +84,7 @@ void thp_set_read_ahead_path(char *path);
 unsigned long thp_supported_orders(void);
 unsigned long thp_shmem_supported_orders(void);

+bool thp_available(void);
 bool thp_is_enabled(void);

 #endif /* __THP_SETTINGS_H__ */
-- 
2.47.3
From: Usama Arif
To: Andrew Morton, david@redhat.com, linux-mm@kvack.org
Cc: linux-fsdevel@vger.kernel.org, corbet@lwn.net, rppt@kernel.org,
    surenb@google.com, mhocko@suse.com, hannes@cmpxchg.org, baohua@kernel.org,
    shakeel.butt@linux.dev, riel@surriel.com, ziy@nvidia.com,
    laoar.shao@gmail.com, dev.jain@arm.com, baolin.wang@linux.alibaba.com,
    npache@redhat.com, lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com,
    ryan.roberts@arm.com, vbabka@suse.cz, jannh@google.com, Arnd Bergmann,
    sj@kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    kernel-team@meta.com, Usama Arif
Subject: [PATCH v3 6/6] selftests: prctl: introduce tests for disabling THPs except for madvise
Date: Mon, 4 Aug 2025 16:40:49 +0100
Message-ID: <20250804154317.1648084-7-usamaarif642@gmail.com>
In-Reply-To: <20250804154317.1648084-1-usamaarif642@gmail.com>
References: <20250804154317.1648084-1-usamaarif642@gmail.com>

The test sets the global system THP setting to never, madvise or always
depending on the fixture variant, and the 2M setting to inherit, before it
starts (and resets them to the original values at teardown).

This tests whether the process can:
- successfully set and get the policy that disables THPs except for madvise.
- get hugepages only with MADV_HUGEPAGE and MADV_COLLAPSE if the global
  policy is madvise/always, and only with MADV_COLLAPSE if the global
  policy is never.
- successfully reset the policy of the process.
- after the reset, only get hugepages with:
  - MADV_COLLAPSE when the policy is set to "never".
  - MADV_HUGEPAGE and MADV_COLLAPSE when the policy is set to "madvise".
  - any eligible mapping when the policy is set to "always".
- repeat the above tests in a forked process to make sure the policy is
  carried across forks.

Signed-off-by: Usama Arif
---
 .../testing/selftests/mm/prctl_thp_disable.c  | 107 ++++++++++++++++++
 1 file changed, 107 insertions(+)

diff --git a/tools/testing/selftests/mm/prctl_thp_disable.c b/tools/testing/selftests/mm/prctl_thp_disable.c
index ef150180daf4..93cedaa59854 100644
--- a/tools/testing/selftests/mm/prctl_thp_disable.c
+++ b/tools/testing/selftests/mm/prctl_thp_disable.c
@@ -16,6 +16,10 @@
 #include "thp_settings.h"
 #include "vm_util.h"
 
+#ifndef PR_THP_DISABLE_EXCEPT_ADVISED
+#define PR_THP_DISABLE_EXCEPT_ADVISED (1 << 1)
+#endif
+
 static int sz2ord(size_t size, size_t pagesize)
 {
 	return __builtin_ctzll(size / pagesize);
@@ -170,4 +174,107 @@ TEST_F(prctl_thp_disable_completely, fork)
 	ASSERT_EQ(ret, 0);
 }
 
+static void prctl_thp_disable_except_madvise_test(struct __test_metadata *const _metadata,
+						  size_t pmdsize,
+						  enum thp_enabled thp_policy)
+{
+	ASSERT_EQ(prctl(PR_GET_THP_DISABLE, NULL, NULL, NULL, NULL), 3);
+
+	/* tests after prctl overrides global policy */
+	ASSERT_EQ(test_mmap_thp(THP_COLLAPSE_NONE, pmdsize), 0);
+
+	ASSERT_EQ(test_mmap_thp(THP_COLLAPSE_MADV_HUGEPAGE, pmdsize),
+		  thp_policy == THP_NEVER ? 0 : 1);
+
+	ASSERT_EQ(test_mmap_thp(THP_COLLAPSE_MADV_COLLAPSE, pmdsize), 1);
+
+	/* Reset to global policy */
+	ASSERT_EQ(prctl(PR_SET_THP_DISABLE, 0, NULL, NULL, NULL), 0);
+
+	/* tests after prctl is cleared, and only global policy is effective */
+	ASSERT_EQ(test_mmap_thp(THP_COLLAPSE_NONE, pmdsize),
+		  thp_policy == THP_ALWAYS ? 1 : 0);
+
+	ASSERT_EQ(test_mmap_thp(THP_COLLAPSE_MADV_HUGEPAGE, pmdsize),
+		  thp_policy == THP_NEVER ? 0 : 1);
+
+	ASSERT_EQ(test_mmap_thp(THP_COLLAPSE_MADV_COLLAPSE, pmdsize), 1);
+}
+
+FIXTURE(prctl_thp_disable_except_madvise)
+{
+	struct thp_settings settings;
+	size_t pmdsize;
+};
+
+FIXTURE_VARIANT(prctl_thp_disable_except_madvise)
+{
+	enum thp_enabled thp_policy;
+};
+
+FIXTURE_VARIANT_ADD(prctl_thp_disable_except_madvise, never)
+{
+	.thp_policy = THP_NEVER,
+};
+
+FIXTURE_VARIANT_ADD(prctl_thp_disable_except_madvise, madvise)
+{
+	.thp_policy = THP_MADVISE,
+};
+
+FIXTURE_VARIANT_ADD(prctl_thp_disable_except_madvise, always)
+{
+	.thp_policy = THP_ALWAYS,
+};
+
+FIXTURE_SETUP(prctl_thp_disable_except_madvise)
+{
+	if (!thp_available())
+		SKIP(return, "Transparent Hugepages not available\n");
+
+	self->pmdsize = read_pmd_pagesize();
+	if (!self->pmdsize)
+		SKIP(return, "Unable to read PMD size\n");
+
+	thp_save_settings();
+	thp_read_settings(&self->settings);
+	self->settings.thp_enabled = variant->thp_policy;
+	self->settings.hugepages[sz2ord(self->pmdsize, getpagesize())].enabled = THP_INHERIT;
+	thp_write_settings(&self->settings);
+}
+
+FIXTURE_TEARDOWN(prctl_thp_disable_except_madvise)
+{
+	thp_restore_settings();
+}
+
+TEST_F(prctl_thp_disable_except_madvise, nofork)
+{
+	ASSERT_EQ(prctl(PR_SET_THP_DISABLE, 1, PR_THP_DISABLE_EXCEPT_ADVISED, NULL, NULL), 0);
+	prctl_thp_disable_except_madvise_test(_metadata, self->pmdsize, variant->thp_policy);
+}
+
+TEST_F(prctl_thp_disable_except_madvise, fork)
+{
+	int ret = 0;
+	pid_t pid;
+
+	ASSERT_EQ(prctl(PR_SET_THP_DISABLE, 1, PR_THP_DISABLE_EXCEPT_ADVISED, NULL, NULL), 0);
+
+	/* Make sure prctl changes are carried across fork */
+	pid = fork();
+	ASSERT_GE(pid, 0);
+
+	if (!pid)
+		prctl_thp_disable_except_madvise_test(_metadata, self->pmdsize,
+						      variant->thp_policy);
+
+	wait(&ret);
+	if (WIFEXITED(ret))
+		ret = WEXITSTATUS(ret);
+	else
+		ret = -EINVAL;
+	ASSERT_EQ(ret, 0);
+}
+
 TEST_HARNESS_MAIN
-- 
2.47.3