On systems with multiple memory-tiers consisting of DRAM and CXL memory,
the OOM killer is not invoked properly.
Here's the command to reproduce:
$ sudo swapoff -a
$ stress-ng --oomable -v --memrate 20 --memrate-bytes 10G \
--memrate-rd-mbs 1 --memrate-wr-mbs 1
The total memory usage is the number of workers given with the --memrate
option multiplied by the buffer size given with the --memrate-bytes option
(20 workers x 10G = 200G in the command above), so adjust these values so
that the total exceeds the combined size of the installed DRAM and CXL
memory.
If swap is disabled, you can usually expect the OOM killer to terminate
the stress-ng process when memory usage approaches the installed memory
size.
However, if multiple memory-tiers exist (multiple
/sys/devices/virtual/memory_tiering/memory_tier<N> directories exist) and
/sys/kernel/mm/numa/demotion_enabled is true, the OOM killer will not be
invoked and the system will become inoperable, regardless of whether MGLRU
is enabled or not.
This issue can be reproduced using NUMA emulation even on systems with
only DRAM. Two fake memory tiers can be created by booting a single-node
system with the "numa=fake=2 numa_emulation.adistance=576,704" kernel
parameters.
The reason for this issue is that memory allocations never directly
trigger the oom-killer: reclaim assumes that if the target node has a
lower memory tier, its pages can always be reclaimed by demoting them.
This change avoids the issue by not attempting demotion when every
allowed demotion target has less free memory than its minimum watermark,
so the oom-killer is triggered directly from memory allocations.
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
---
v3:
- rebase to linux-next (next-20260108), where demotion target has changed
from node id to node mask.
v2:
- describe reproducibility with !mglru in the commit log
- removed unnecessary consideration for scan control when checking demotion_nid watermarks
mm/vmscan.c | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index a34cf784e131..9a4b12ef6b53 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -358,7 +358,21 @@ static bool can_demote(int nid, struct scan_control *sc,
/* Filter out nodes that are not in cgroup's mems_allowed. */
mem_cgroup_node_filter_allowed(memcg, &allowed_mask);
- return !nodes_empty(allowed_mask);
+ if (nodes_empty(allowed_mask))
+ return false;
+
+ for_each_node_mask(nid, allowed_mask) {
+ int z;
+ struct zone *zone;
+ struct pglist_data *pgdat = NODE_DATA(nid);
+
+ for_each_managed_zone_pgdat(zone, pgdat, z, MAX_NR_ZONES - 1) {
+ if (zone_watermark_ok(zone, 0, min_wmark_pages(zone),
+ ZONE_MOVABLE, 0))
+ return true;
+ }
+ }
+ return false;
}
static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
--
2.43.0
> + for_each_node_mask(nid, allowed_mask) {
> + int z;
> + struct zone *zone;
> + struct pglist_data *pgdat = NODE_DATA(nid);
> +
> + for_each_managed_zone_pgdat(zone, pgdat, z, MAX_NR_ZONES - 1) {
> + if (zone_watermark_ok(zone, 0, min_wmark_pages(zone),
> + ZONE_MOVABLE, 0))
Why does this only check zone movable?
Also, would this also limit pressure-signal to invoke reclaim when
there is still swap space available? Should demotion not be a pressure
source for triggering harder reclaim?
~Gregory
On Sat, Jan 10, 2026 at 1:08 Gregory Price <gourry@gourry.net> wrote:
>
> > + for_each_node_mask(nid, allowed_mask) {
> > + int z;
> > + struct zone *zone;
> > + struct pglist_data *pgdat = NODE_DATA(nid);
> > +
> > + for_each_managed_zone_pgdat(zone, pgdat, z, MAX_NR_ZONES - 1) {
> > + if (zone_watermark_ok(zone, 0, min_wmark_pages(zone),
> > + ZONE_MOVABLE, 0))
>
> Why does this only check zone movable?
Here, zone_watermark_ok() checks the free memory for all zones from 0 to
MAX_NR_ZONES - 1.
There is no strong reason to pass ZONE_MOVABLE as the highest_zoneidx
argument every time zone_watermark_ok() is called; I can change it if an
appropriate value is found.
In v1, highest_zoneidx was "sc ? sc->reclaim_idx : MAX_NR_ZONES - 1"
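For reference, the v1 check looked roughly like this (reconstructed from
memory as a sketch, not the exact v1 hunk):

	int highest_zoneidx = sc ? sc->reclaim_idx : MAX_NR_ZONES - 1;

	for_each_managed_zone_pgdat(zone, pgdat, z, highest_zoneidx) {
		if (zone_watermark_ok(zone, 0, min_wmark_pages(zone),
				      highest_zoneidx, 0))
			return true;
	}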
> Also, would this also limit pressure-signal to invoke reclaim when
> there is still swap space available? Should demotion not be a pressure
> source for triggering harder reclaim?
Since can_reclaim_anon_pages() checks whether there is free space on the swap
device before checking with can_demote(), I think the negative impact of this
change will be small. However, since I have not been able to confirm the
behavior when a swap device is available, I would like to correctly understand
the impact.
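To make that ordering explicit, can_reclaim_anon_pages() is roughly the
following (paraphrased; the exact code in linux-next may differ slightly):

static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
					  int nid, struct scan_control *sc)
{
	if (memcg == NULL) {
		/* Global reclaim: is there space in any swap device? */
		if (get_nr_swap_pages() > 0)
			return true;
	} else {
		/* Memcg reclaim: is the memcg below its swap limit? */
		if (mem_cgroup_get_nr_swap_pages(memcg) > 0)
			return true;
	}

	/* The pages cannot be swapped; can they be demoted instead? */
	return can_demote(nid, sc, memcg);
}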
On Sat, Jan 10, 2026 at 10:55:02PM +0900, Akinobu Mita wrote:
> On Sat, Jan 10, 2026 at 1:08 Gregory Price <gourry@gourry.net> wrote:
> >
> > > + for_each_node_mask(nid, allowed_mask) {
> > > + int z;
> > > + struct zone *zone;
> > > + struct pglist_data *pgdat = NODE_DATA(nid);
> > > +
> > > + for_each_managed_zone_pgdat(zone, pgdat, z, MAX_NR_ZONES - 1) {
> > > + if (zone_watermark_ok(zone, 0, min_wmark_pages(zone),
> > > + ZONE_MOVABLE, 0))
> >
> > Why does this only check zone movable?
>
> Here, zone_watermark_ok() checks the free memory for all zones from 0 to
> MAX_NR_ZONES - 1.
> There is no strong reason to pass ZONE_MOVABLE as the highest_zoneidx
> argument every time zone_watermark_ok() is called; I can change it if an
> appropriate value is found.
> In v1, highest_zoneidx was "sc ? sc->reclaim_idx : MAX_NR_ZONES - 1"
>
> > Also, would this also limit pressure-signal to invoke reclaim when
> > there is still swap space available? Should demotion not be a pressure
> > source for triggering harder reclaim?
>
> Since can_reclaim_anon_pages() checks whether there is free space on the swap
> device before checking with can_demote(), I think the negative impact of this
> change will be small. However, since I have not been able to confirm the
> behavior when a swap device is available, I would like to correctly understand
> the impact.
Something else is going on here
See demote_folio_list and alloc_demote_folio
static unsigned int demote_folio_list(struct list_head *demote_folios,
struct pglist_data *pgdat,
struct mem_cgroup *memcg)
{
struct migration_target_control mtc = {
*/
.gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
__GFP_NOMEMALLOC | GFP_NOWAIT,
};
}
static struct folio *alloc_demote_folio(struct folio *src,
unsigned long private)
{
/* Only attempt to demote to the preferred node */
mtc->nmask = NULL;
mtc->gfp_mask |= __GFP_THISNODE;
dst = alloc_migration_target(src, (unsigned long)mtc);
if (dst)
return dst;
/* Now attempt to demote to any node in the lower tier */
mtc->gfp_mask &= ~__GFP_THISNODE;
mtc->nmask = allowed_mask;
return alloc_migration_target(src, (unsigned long)mtc);
}
/*
* %__GFP_RECLAIM is shorthand to allow/forbid both direct and kswapd reclaim.
*/
You basically shouldn't be hitting any reclaim behavior at all, and if
the target nodes are actually under various watermarks, you should be
getting allocation failures and quick-outs from the demotion logic.
i.e. you should be seeing OOM happen
When I dug in far enough I found this:
static struct folio *alloc_demote_folio(struct folio *src,
unsigned long private)
{
...
dst = alloc_migration_target(src, (unsigned long)mtc);
}
struct folio *alloc_migration_target(struct folio *src, unsigned long private)
{
...
if (folio_test_hugetlb(src)) {
struct hstate *h = folio_hstate(src);
gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
return alloc_hugetlb_folio_nodemask(h, nid, ...)
}
}
static inline gfp_t htlb_modify_alloc_mask(struct hstate *h, gfp_t gfp_mask)
{
gfp_t modified_mask = htlb_alloc_mask(h);
/* Some callers might want to enforce node */
modified_mask |= (gfp_mask & __GFP_THISNODE);
modified_mask |= (gfp_mask & __GFP_NOWARN);
return modified_mask;
}
/* Movability of hugepages depends on migration support. */
static inline gfp_t htlb_alloc_mask(struct hstate *h)
{
gfp_t gfp = __GFP_COMP | __GFP_NOWARN;
gfp |= hugepage_movable_supported(h) ? GFP_HIGHUSER_MOVABLE : GFP_HIGHUSER;
return gfp;
}
#define GFP_USER (__GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_HARDWALL)
#define GFP_HIGHUSER (GFP_USER | __GFP_HIGHMEM)
#define GFP_HIGHUSER_MOVABLE (GFP_HIGHUSER | __GFP_MOVABLE | __GFP_SKIP_KASAN)
If we try to move a hugepage, we start including __GFP_RECLAIM again -
regardless of whether HIGHUSER_MOVABLE or HIGHUSER is used.
Any chance you are using hugetlb on this system? This looks like a
clear bug, but it may not be what you're experiencing.
~Gregory
On Wed, Jan 28, 2026 at 5:24 Gregory Price <gourry@gourry.net> wrote:
>
> On Sat, Jan 10, 2026 at 10:55:02PM +0900, Akinobu Mita wrote:
> > On Sat, Jan 10, 2026 at 1:08 Gregory Price <gourry@gourry.net> wrote:
> > >
> > > > + for_each_node_mask(nid, allowed_mask) {
> > > > + int z;
> > > > + struct zone *zone;
> > > > + struct pglist_data *pgdat = NODE_DATA(nid);
> > > > +
> > > > + for_each_managed_zone_pgdat(zone, pgdat, z, MAX_NR_ZONES - 1) {
> > > > + if (zone_watermark_ok(zone, 0, min_wmark_pages(zone),
> > > > + ZONE_MOVABLE, 0))
> > >
> > > Why does this only check zone movable?
> >
> > Here, zone_watermark_ok() checks the free memory for all zones from 0 to
> > MAX_NR_ZONES - 1.
> > There is no strong reason to pass ZONE_MOVABLE as the highest_zoneidx
> > argument every time zone_watermark_ok() is called; I can change it if an
> > appropriate value is found.
> > In v1, highest_zoneidx was "sc ? sc->reclaim_idx : MAX_NR_ZONES - 1"
> >
> > > Also, would this also limit pressure-signal to invoke reclaim when
> > > there is still swap space available? Should demotion not be a pressure
> > > source for triggering harder reclaim?
> >
> > Since can_reclaim_anon_pages() checks whether there is free space on the swap
> > device before checking with can_demote(), I think the negative impact of this
> > change will be small. However, since I have not been able to confirm the
> > behavior when a swap device is available, I would like to correctly understand
> > the impact.
>
> Something else is going on here
>
> See demote_folio_list and alloc_demote_folio
>
> static unsigned int demote_folio_list(struct list_head *demote_folios,
> struct pglist_data *pgdat,
> struct mem_cgroup *memcg)
> {
> struct migration_target_control mtc = {
> */
> .gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
> __GFP_NOMEMALLOC | GFP_NOWAIT,
> };
> }
>
> static struct folio *alloc_demote_folio(struct folio *src,
> unsigned long private)
> {
> /* Only attempt to demote to the preferred node */
> mtc->nmask = NULL;
> mtc->gfp_mask |= __GFP_THISNODE;
> dst = alloc_migration_target(src, (unsigned long)mtc);
> if (dst)
> return dst;
>
> /* Now attempt to demote to any node in the lower tier */
> mtc->gfp_mask &= ~__GFP_THISNODE;
> mtc->nmask = allowed_mask;
> return alloc_migration_target(src, (unsigned long)mtc);
> }
>
>
> /*
> * %__GFP_RECLAIM is shorthand to allow/forbid both direct and kswapd reclaim.
> */
>
>
> You basically shouldn't be hitting any reclaim behavior at all, and if
> the target nodes are actually under various watermarks, you should be
> getting allocation failures and quick-outs from the demotion logic.
>
> i.e. you should be seeing OOM happen
>
> When I dug in far enough I found this:
>
> static struct folio *alloc_demote_folio(struct folio *src,
> unsigned long private)
> {
> ...
> dst = alloc_migration_target(src, (unsigned long)mtc);
> }
>
> struct folio *alloc_migration_target(struct folio *src, unsigned long private)
> {
>
> ...
> if (folio_test_hugetlb(src)) {
> struct hstate *h = folio_hstate(src);
>
> gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
> return alloc_hugetlb_folio_nodemask(h, nid, ...)
> }
> }
>
> static inline gfp_t htlb_modify_alloc_mask(struct hstate *h, gfp_t gfp_mask)
> {
> gfp_t modified_mask = htlb_alloc_mask(h);
>
> /* Some callers might want to enforce node */
> modified_mask |= (gfp_mask & __GFP_THISNODE);
>
> modified_mask |= (gfp_mask & __GFP_NOWARN);
>
> return modified_mask;
> }
>
> /* Movability of hugepages depends on migration support. */
> static inline gfp_t htlb_alloc_mask(struct hstate *h)
> {
> gfp_t gfp = __GFP_COMP | __GFP_NOWARN;
>
> gfp |= hugepage_movable_supported(h) ? GFP_HIGHUSER_MOVABLE : GFP_HIGHUSER;
>
> return gfp;
> }
>
> #define GFP_USER (__GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_HARDWALL)
> #define GFP_HIGHUSER (GFP_USER | __GFP_HIGHMEM)
> #define GFP_HIGHUSER_MOVABLE (GFP_HIGHUSER | __GFP_MOVABLE | __GFP_SKIP_KASAN)
>
>
> If we try to move a hugepage, we start including __GFP_RECLAIM again -
> regardless of whether HIGHUSER_MOVABLE or HIGHUSER is used.
>
>
> Any chance you are using hugetlb on this system? This looks like a
> clear bug, but it may not be what you're experiencing.
In my reproduction of the issue, alloc_demote_folio() failed almost every
time, but folio_test_hugetlb() and folio_test_large() were always false for
the folio passed to alloc_migration_target().
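For anyone who wants to confirm the same thing, an ad-hoc debug print along
these lines is enough (a hypothetical sketch, not the exact instrumentation
I used):

	/* e.g. around the final allocation attempt in alloc_demote_folio() */
	dst = alloc_migration_target(src, (unsigned long)mtc);
	if (!dst)
		pr_warn_ratelimited("demotion alloc failed: hugetlb=%d large=%d order=%u\n",
				    folio_test_hugetlb(src),
				    folio_test_large(src),
				    folio_order(src));
	return dst;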
On Tue 27-01-26 15:24:36, Gregory Price wrote:
> On Sat, Jan 10, 2026 at 10:55:02PM +0900, Akinobu Mita wrote:
> > On Sat, Jan 10, 2026 at 1:08 Gregory Price <gourry@gourry.net> wrote:
> > >
> > > > + for_each_node_mask(nid, allowed_mask) {
> > > > + int z;
> > > > + struct zone *zone;
> > > > + struct pglist_data *pgdat = NODE_DATA(nid);
> > > > +
> > > > + for_each_managed_zone_pgdat(zone, pgdat, z, MAX_NR_ZONES - 1) {
> > > > + if (zone_watermark_ok(zone, 0, min_wmark_pages(zone),
> > > > + ZONE_MOVABLE, 0))
> > >
> > > Why does this only check zone movable?
> >
> > Here, zone_watermark_ok() checks the free memory for all zones from 0 to
> > MAX_NR_ZONES - 1.
> > There is no strong reason to pass ZONE_MOVABLE as the highest_zoneidx
> > argument every time zone_watermark_ok() is called; I can change it if an
> > appropriate value is found.
> > In v1, highest_zoneidx was "sc ? sc->reclaim_idx : MAX_NR_ZONES - 1"
> >
> > > Also, would this also limit pressure-signal to invoke reclaim when
> > > there is still swap space available? Should demotion not be a pressure
> > > source for triggering harder reclaim?
> >
> > Since can_reclaim_anon_pages() checks whether there is free space on the swap
> > device before checking with can_demote(), I think the negative impact of this
> > change will be small. However, since I have not been able to confirm the
> > behavior when a swap device is available, I would like to correctly understand
> > the impact.
>
> Something else is going on here
>
> See demote_folio_list and alloc_demote_folio
>
> static unsigned int demote_folio_list(struct list_head *demote_folios,
> struct pglist_data *pgdat,
> struct mem_cgroup *memcg)
> {
> struct migration_target_control mtc = {
> */
> .gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
> __GFP_NOMEMALLOC | GFP_NOWAIT,
> };
> }
>
> static struct folio *alloc_demote_folio(struct folio *src,
> unsigned long private)
> {
> /* Only attempt to demote to the preferred node */
> mtc->nmask = NULL;
> mtc->gfp_mask |= __GFP_THISNODE;
> dst = alloc_migration_target(src, (unsigned long)mtc);
> if (dst)
> return dst;
>
> /* Now attempt to demote to any node in the lower tier */
> mtc->gfp_mask &= ~__GFP_THISNODE;
> mtc->nmask = allowed_mask;
> return alloc_migration_target(src, (unsigned long)mtc);
> }
>
>
> /*
> * %__GFP_RECLAIM is shorthand to allow/forbid both direct and kswapd reclaim.
> */
>
>
> You basically shouldn't be hitting any reclaim behavior at all, and if
This will trigger kswapd so there will be background reclaim demoting
from those lower tiers.
> the target nodes are actually under various watermarks, you should be
> getting allocation failures and quick-outs from the demotion logic.
>
> i.e. you should be seeing OOM happen
>
> When I dug in far enough I found this:
>
> static struct folio *alloc_demote_folio(struct folio *src,
> unsigned long private)
> {
> ...
> dst = alloc_migration_target(src, (unsigned long)mtc);
> }
>
> struct folio *alloc_migration_target(struct folio *src, unsigned long private)
> {
>
> ...
> if (folio_test_hugetlb(src)) {
> struct hstate *h = folio_hstate(src);
>
> gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
> return alloc_hugetlb_folio_nodemask(h, nid, ...)
> }
> }
>
> static inline gfp_t htlb_modify_alloc_mask(struct hstate *h, gfp_t gfp_mask)
> {
> gfp_t modified_mask = htlb_alloc_mask(h);
>
> /* Some callers might want to enforce node */
> modified_mask |= (gfp_mask & __GFP_THISNODE);
>
> modified_mask |= (gfp_mask & __GFP_NOWARN);
>
> return modified_mask;
> }
>
> /* Movability of hugepages depends on migration support. */
> static inline gfp_t htlb_alloc_mask(struct hstate *h)
> {
> gfp_t gfp = __GFP_COMP | __GFP_NOWARN;
>
> gfp |= hugepage_movable_supported(h) ? GFP_HIGHUSER_MOVABLE : GFP_HIGHUSER;
>
> return gfp;
> }
>
> #define GFP_USER (__GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_HARDWALL)
> #define GFP_HIGHUSER (GFP_USER | __GFP_HIGHMEM)
> #define GFP_HIGHUSER_MOVABLE (GFP_HIGHUSER | __GFP_MOVABLE | __GFP_SKIP_KASAN)
>
>
> If we try to move a hugepage, we start including __GFP_RECLAIM again -
> regardless of whether HIGHUSER_MOVABLE or HIGHUSER is used.
>
>
> Any chance you are using hugetlb on this system? This looks like a
> clear bug, but it may not be what you're experiencing.
Hugetlb pages are not sitting on LRU lists so they are not participating
in the demotion.
Or maybe I missed your point.
--
Michal Hocko
SUSE Labs
On Wed, Jan 28, 2026 at 10:56:44AM +0100, Michal Hocko wrote:
> > .gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
> > __GFP_NOMEMALLOC | GFP_NOWAIT,
> > };
>
> This will trigger kswapd so there will be background reclaim demoting
> from those lower tiers.
>
given the node is full kswapd will be running, but the above line masks
~__GFP_RECLAIM so it's not supposed to trigger either reclaim path.
> > Any chance you are using hugetlb on this system? This looks like a
> > clear bug, but it may not be what you're experiencing.
>
> Hugetlb pages are not sitting on LRU lists so they are not participating
> in the demotion.
>
I noted in the v4 thread (responded there too) this was the case.
https://lore.kernel.org/linux-mm/aXksUiwYGwad5JvC@gourry-fedora-PF4VCD3F/
But since then we found another path through this code that adds
reclaim back on as well - and i wouldn't be surprised to find more.
the bigger issue is that this fix can cause inversions in transient
pressure situations - and in fact the current code will cause inversions
instead of waiting for reclaim to clear out lower nodes.
The reality is this code probably needs a proper look and detangling.
This has been on my back-burner for a while - i've wanted to sink the
actual demotion code into memory-tiers.c and provide something like:
... mt_demote_folios(src_nid, folio_list)
{
/* apply some demotion policy here */
}
~Gregory
On Wed 28-01-26 09:21:45, Gregory Price wrote:
> On Wed, Jan 28, 2026 at 10:56:44AM +0100, Michal Hocko wrote:
> > > .gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
> > > __GFP_NOMEMALLOC | GFP_NOWAIT,
> > > };
> >
> > This will trigger kswapd so there will be background reclaim demoting
> > from those lower tiers.
> >
>
> given the node is full kswapd will be running, but the above line masks
> ~__GFP_RECLAIM so it's not supposed to trigger either reclaim path.
Yeah, my bad, I haven't looked carefully enough.
> > > Any chance you are using hugetlb on this system? This looks like a
> > > clear bug, but it may not be what you're experiencing.
> >
> > Hugetlb pages are not sitting on LRU lists so they are not participating
> > in the demotion.
> >
>
> I noted in the v4 thread (responded there too) this was the case.
> https://lore.kernel.org/linux-mm/aXksUiwYGwad5JvC@gourry-fedora-PF4VCD3F/
>
> But since then we found another path through this code that adds
> reclaim back on as well - and i wouldn't be surprised to find more.
>
> the bigger issue is that this fix can cause inversions in transient
> pressure situations - and in fact the current code will cause inversions
> instead of waiting for reclaim to clear out lower nodes.
>
> The reality is this code probably needs a proper look and detangling.
Agreed!
> This has been on my back-burner for a while - i've wanted to sink the
> actual demotion code into memory-tiers.c and provide something like:
>
> ... mt_demote_folios(src_nid, folio_list)
> {
> /* apply some demotion policy here */
> }
>
> ~Gregory
--
Michal Hocko
SUSE Labs
On Tue, Jan 27, 2026 at 03:24:36PM -0500, Gregory Price wrote:
> On Sat, Jan 10, 2026 at 10:55:02PM +0900, Akinobu Mita wrote:
> > Since can_reclaim_anon_pages() checks whether there is free space on the swap
> > device before checking with can_demote(), I think the negative impact of this
> > change will be small. However, since I have not been able to confirm the
> > behavior when a swap device is available, I would like to correctly understand
> > the impact.
>
> Something else is going on here
>
> See demote_folio_list and alloc_demote_folio
>
> static unsigned int demote_folio_list(struct list_head *demote_folios,
> struct pglist_data *pgdat,
> struct mem_cgroup *memcg)
> {
> struct migration_target_control mtc = {
> */
> .gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
> __GFP_NOMEMALLOC | GFP_NOWAIT,
> };
> }
>
> static struct folio *alloc_demote_folio(struct folio *src,
> unsigned long private)
> {
> /* Only attempt to demote to the preferred node */
> mtc->nmask = NULL;
> mtc->gfp_mask |= __GFP_THISNODE;
> dst = alloc_migration_target(src, (unsigned long)mtc);
> if (dst)
> return dst;
>
> /* Now attempt to demote to any node in the lower tier */
> mtc->gfp_mask &= ~__GFP_THISNODE;
> mtc->nmask = allowed_mask;
> return alloc_migration_target(src, (unsigned long)mtc);
> }
>
>
> /*
> * %__GFP_RECLAIM is shorthand to allow/forbid both direct and kswapd reclaim.
> */
>
>
> You basically shouldn't be hitting any reclaim behavior at all, and if
> the target nodes are actually under various watermarks, you should be
> getting allocation failures and quick-outs from the demotion logic.
Hi, Gregory, hope you are doing well.
I observed that during the allocation of a large folio,
alloc_migration_target() clears __GFP_RECLAIM but subsequently applies
GFP_TRANSHUGE. Given that GFP_TRANSHUGE includes __GFP_DIRECT_RECLAIM,
I am wondering if this triggers a form of reclamation that should be
avoided during demotion.
struct folio *alloc_migration_target(struct folio *src, unsigned long private)
...
if (folio_test_large(src)) {
/*
* clear __GFP_RECLAIM to make the migration callback
* consistent with regular THP allocations.
*/
gfp_mask &= ~__GFP_RECLAIM;
gfp_mask |= GFP_TRANSHUGE;
order = folio_order(src);
}
#define GFP_TRANSHUGE (GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM)
Best,
Bing
On Tue, Jan 27, 2026 at 11:28:59PM +0000, Bing Jiao wrote:
> Hi, Gregory, hope you are doing well.
>
> I observed that during the allocation of a large folio,
> alloc_migration_target() clears __GFP_RECLAIM but subsequently applies
> GFP_TRANSHUGE. Given that GFP_TRANSHUGE includes __GFP_DIRECT_RECLAIM,
> I am wondering if this triggers a form of reclamation that should be
> avoided during demotion.
>
> struct folio *alloc_migration_target(struct folio *src, unsigned long private)
> ...
> if (folio_test_large(src)) {
> /*
> * clear __GFP_RECLAIM to make the migration callback
> * consistent with regular THP allocations.
> */
> gfp_mask &= ~__GFP_RECLAIM;
> gfp_mask |= GFP_TRANSHUGE;
> order = folio_order(src);
> }
>
> #define GFP_TRANSHUGE (GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM)
>
I think the answer is that the demotion code is a mess and no one
actually knows what it's doing. We probably need to rework this
entirely because we've now found at least 2 paths which clear and then
add reclaim.
~Gregory
On Thu, 8 Jan 2026 19:15:35 +0900 Akinobu Mita <akinobu.mita@gmail.com> wrote:
> On systems with multiple memory-tiers consisting of DRAM and CXL memory,
> the OOM killer is not invoked properly.
>
> Here's the command to reproduce:
>
> $ sudo swapoff -a
> $ stress-ng --oomable -v --memrate 20 --memrate-bytes 10G \
> --memrate-rd-mbs 1 --memrate-wr-mbs 1
>
> The total memory usage is the number of workers given with the --memrate
> option multiplied by the buffer size given with the --memrate-bytes option
> (20 workers x 10G = 200G in the command above), so adjust these values so
> that the total exceeds the combined size of the installed DRAM and CXL
> memory.
>
> If swap is disabled, you can usually expect the OOM killer to terminate
> the stress-ng process when memory usage approaches the installed memory
> size.
>
> However, if multiple memory-tiers exist (multiple
> /sys/devices/virtual/memory_tiering/memory_tier<N> directories exist) and
> /sys/kernel/mm/numa/demotion_enabled is true, the OOM killer will not be
> invoked and the system will become inoperable, regardless of whether MGLRU
> is enabled or not.
>
> This issue can be reproduced using NUMA emulation even on systems with
> only DRAM. Two fake memory tiers can be created by booting a single-node
> system with the "numa=fake=2 numa_emulation.adistance=576,704" kernel
> parameters.
>
> The reason for this issue is that memory allocations never directly
> trigger the oom-killer: reclaim assumes that if the target node has a
> lower memory tier, its pages can always be reclaimed by demoting them.
>
> This change avoids the issue by not attempting demotion when every
> allowed demotion target has less free memory than its minimum watermark,
> so the oom-killer is triggered directly from memory allocations.
>
Thanks.
An oom-killer fix which doesn't touch mm/oom_kill.c! Hopefully
David/Shakeel/Michal can take a look.
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -358,7 +358,21 @@ static bool can_demote(int nid, struct scan_control *sc,
>
> /* Filter out nodes that are not in cgroup's mems_allowed. */
> mem_cgroup_node_filter_allowed(memcg, &allowed_mask);
> - return !nodes_empty(allowed_mask);
> + if (nodes_empty(allowed_mask))
> + return false;
> +
> + for_each_node_mask(nid, allowed_mask) {
> + int z;
> + struct zone *zone;
> + struct pglist_data *pgdat = NODE_DATA(nid);
> +
> + for_each_managed_zone_pgdat(zone, pgdat, z, MAX_NR_ZONES - 1) {
> + if (zone_watermark_ok(zone, 0, min_wmark_pages(zone),
> + ZONE_MOVABLE, 0))
> + return true;
> + }
> + }
> + return false;
> }
It would be nice to have a code comment in here to explain to readers
why we're doing this.
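Something along these lines, perhaps (suggested wording only, please adjust):

	/*
	 * Demotion only helps if at least one allowed demotion target can
	 * still accept pages.  If every target node is below its min
	 * watermark, report that demotion is not possible so that reclaim
	 * can fail and the allocation path can fall back to the OOM killer
	 * instead of retrying futile demotion attempts forever.
	 */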