[PATCH v3 6/7] mm: list_lru: introduce folio_memcg_list_lru_alloc()

Johannes Weiner posted 7 patches 2 weeks, 4 days ago
Posted by Johannes Weiner 2 weeks, 4 days ago
memcg_list_lru_alloc() is called every time an object that may end up
on the list_lru is created. It needs to quickly check if the list_lru
heads for the memcg already exist, and allocate them when they don't.

Doing this with folio objects is tricky: folio_memcg() is not stable
and requires either RCU protection or pinning the cgroup. But it's
desirable to make the existence check lightweight under RCU, and only
pin the memcg when we need to allocate list_lru heads and may block.

In preparation for switching the THP shrinker to list_lru, add a
helper function for allocating list_lru heads coming from a folio.

Reviewed-by: David Hildenbrand (Arm) <david@kernel.org>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 include/linux/list_lru.h | 12 ++++++++++++
 mm/list_lru.c            | 39 ++++++++++++++++++++++++++++++++++-----
 2 files changed, 46 insertions(+), 5 deletions(-)

diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index 4afc02deb44d..4bd29b61c59a 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -81,6 +81,18 @@ static inline int list_lru_init_memcg_key(struct list_lru *lru, struct shrinker
 
 int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
 			 gfp_t gfp);
+
+#ifdef CONFIG_MEMCG
+int folio_memcg_list_lru_alloc(struct folio *folio, struct list_lru *lru,
+			       gfp_t gfp);
+#else
+static inline int folio_memcg_list_lru_alloc(struct folio *folio,
+					     struct list_lru *lru, gfp_t gfp)
+{
+	return 0;
+}
+#endif
+
 void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *parent);
 
 /**
diff --git a/mm/list_lru.c b/mm/list_lru.c
index b817c0f48f73..1ccdd45b1d14 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -537,17 +537,14 @@ static inline bool memcg_list_lru_allocated(struct mem_cgroup *memcg,
 	return idx < 0 || xa_load(&lru->xa, idx);
 }
 
-int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
-			 gfp_t gfp)
+static int __memcg_list_lru_alloc(struct mem_cgroup *memcg,
+				  struct list_lru *lru, gfp_t gfp)
 {
 	unsigned long flags;
 	struct list_lru_memcg *mlru = NULL;
 	struct mem_cgroup *pos, *parent;
 	XA_STATE(xas, &lru->xa, 0);
 
-	if (!list_lru_memcg_aware(lru) || memcg_list_lru_allocated(memcg, lru))
-		return 0;
-
 	gfp &= GFP_RECLAIM_MASK;
 	/*
 	 * Because the list_lru can be reparented to the parent cgroup's
@@ -588,6 +585,38 @@ int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
 
 	return xas_error(&xas);
 }
+
+int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
+			 gfp_t gfp)
+{
+	if (!list_lru_memcg_aware(lru) || memcg_list_lru_allocated(memcg, lru))
+		return 0;
+	return __memcg_list_lru_alloc(memcg, lru, gfp);
+}
+
+int folio_memcg_list_lru_alloc(struct folio *folio, struct list_lru *lru,
+			       gfp_t gfp)
+{
+	struct mem_cgroup *memcg;
+	int res;
+
+	if (!list_lru_memcg_aware(lru))
+		return 0;
+
+	/* Fast path when list_lru heads already exist */
+	rcu_read_lock();
+	memcg = folio_memcg(folio);
+	res = memcg_list_lru_allocated(memcg, lru);
+	rcu_read_unlock();
+	if (likely(res))
+		return 0;
+
+	/* Allocation may block, pin the memcg */
+	memcg = get_mem_cgroup_from_folio(folio);
+	res = __memcg_list_lru_alloc(memcg, lru, gfp);
+	mem_cgroup_put(memcg);
+	return res;
+}
 #else
 static inline void memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
 {
-- 
2.53.0
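The two-phase scheme above — a cheap, lock-free existence check first, and pinning plus allocation only on the slow path — is a general double-checked pattern. A standalone userspace sketch of the same idea (plain C with pthreads; all names are illustrative stand-ins, not kernel APIs — the mutex plays the role of the memcg pin, the atomic load the role of the RCU-protected lookup):

```c
#include <pthread.h>
#include <stdlib.h>

/* Illustrative stand-in for a memcg's list_lru heads. */
struct heads {
	int allocated;
};

static struct heads *table[16];	/* stand-in for lru->xa */
static pthread_mutex_t alloc_lock = PTHREAD_MUTEX_INITIALIZER;

/* Fast path: lock-free existence check, analogous to
 * memcg_list_lru_allocated() under rcu_read_lock(). */
static int heads_exist(int idx)
{
	return __atomic_load_n(&table[idx], __ATOMIC_ACQUIRE) != NULL;
}

/* Slow path: take the lock ("pin"), re-check, then allocate,
 * analogous to __memcg_list_lru_alloc(). Returns 0 on success. */
static int heads_alloc(int idx)
{
	int ret = 0;

	if (heads_exist(idx))		/* fast path: nothing to do */
		return 0;

	pthread_mutex_lock(&alloc_lock);
	if (!table[idx]) {		/* re-check under the lock */
		struct heads *h = calloc(1, sizeof(*h));

		if (h) {
			h->allocated = 1;
			__atomic_store_n(&table[idx], h, __ATOMIC_RELEASE);
		} else {
			ret = -1;
		}
	}
	pthread_mutex_unlock(&alloc_lock);
	return ret;
}
```

The point of the split is the same as in the patch: the common case (heads already allocated) never takes the lock, and the blocking work only happens on the rare first call for a given index.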
Re: [PATCH v3 6/7] mm: list_lru: introduce folio_memcg_list_lru_alloc()
Posted by Lorenzo Stoakes (Oracle) 1 week, 6 days ago
On Wed, Mar 18, 2026 at 03:53:24PM -0400, Johannes Weiner wrote:
> memcg_list_lru_alloc() is called every time an object that may end up
> on the list_lru is created. It needs to quickly check if the list_lru
> heads for the memcg already exist, and allocate them when they don't.
>
> Doing this with folio objects is tricky: folio_memcg() is not stable
> and requires either RCU protection or pinning the cgroup. But it's
> desirable to make the existence check lightweight under RCU, and only
> pin the memcg when we need to allocate list_lru heads and may block.
>
> In preparation for switching the THP shrinker to list_lru, add a
> helper function for allocating list_lru heads coming from a folio.
>
> Reviewed-by: David Hildenbrand (Arm) <david@kernel.org>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>

Logic LGTM, but would be nice to have some kdoc. With that addressed, feel free
to add:

Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>

> ---
>  include/linux/list_lru.h | 12 ++++++++++++
>  mm/list_lru.c            | 39 ++++++++++++++++++++++++++++++++++-----
>  2 files changed, 46 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
> index 4afc02deb44d..4bd29b61c59a 100644
> --- a/include/linux/list_lru.h
> +++ b/include/linux/list_lru.h
> @@ -81,6 +81,18 @@ static inline int list_lru_init_memcg_key(struct list_lru *lru, struct shrinker
>
>  int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
>  			 gfp_t gfp);
> +
> +#ifdef CONFIG_MEMCG
> +int folio_memcg_list_lru_alloc(struct folio *folio, struct list_lru *lru,
> +			       gfp_t gfp);

Could we have a kdoc comment for this? Thanks!

> +#else
> +static inline int folio_memcg_list_lru_alloc(struct folio *folio,
> +					     struct list_lru *lru, gfp_t gfp)
> +{
> +	return 0;
> +}
> +#endif
> +
>  void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *parent);
>
>  /**
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index b817c0f48f73..1ccdd45b1d14 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -537,17 +537,14 @@ static inline bool memcg_list_lru_allocated(struct mem_cgroup *memcg,
>  	return idx < 0 || xa_load(&lru->xa, idx);
>  }
>
> -int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
> -			 gfp_t gfp)
> +static int __memcg_list_lru_alloc(struct mem_cgroup *memcg,
> +				  struct list_lru *lru, gfp_t gfp)
>  {
>  	unsigned long flags;
>  	struct list_lru_memcg *mlru = NULL;
>  	struct mem_cgroup *pos, *parent;
>  	XA_STATE(xas, &lru->xa, 0);
>
> -	if (!list_lru_memcg_aware(lru) || memcg_list_lru_allocated(memcg, lru))
> -		return 0;
> -
>  	gfp &= GFP_RECLAIM_MASK;
>  	/*
>  	 * Because the list_lru can be reparented to the parent cgroup's
> @@ -588,6 +585,38 @@ int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
>
>  	return xas_error(&xas);
>  }
> +
> +int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
> +			 gfp_t gfp)
> +{
> +	if (!list_lru_memcg_aware(lru) || memcg_list_lru_allocated(memcg, lru))
> +		return 0;
> +	return __memcg_list_lru_alloc(memcg, lru, gfp);
> +}
> +
> +int folio_memcg_list_lru_alloc(struct folio *folio, struct list_lru *lru,
> +			       gfp_t gfp)
> +{
> +	struct mem_cgroup *memcg;
> +	int res;
> +
> +	if (!list_lru_memcg_aware(lru))
> +		return 0;
> +
> +	/* Fast path when list_lru heads already exist */
> +	rcu_read_lock();

OK nice I see folio_memcg() explicitly states an RCU lock suffices....

> +	memcg = folio_memcg(folio);
> +	res = memcg_list_lru_allocated(memcg, lru);

...And an xa_load() should also be RCU safe :)

> +	rcu_read_unlock();
> +	if (likely(res))
> +		return 0;

So that's nice!

> +
> +	/* Allocation may block, pin the memcg */
> +	memcg = get_mem_cgroup_from_folio(folio);
> +	res = __memcg_list_lru_alloc(memcg, lru, gfp);
> +	mem_cgroup_put(memcg);
> +	return res;
> +}
>  #else
>  static inline void memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
>  {
> --
> 2.53.0
>

Cheers, Lorenzo
Re: [PATCH v3 6/7] mm: list_lru: introduce folio_memcg_list_lru_alloc()
Posted by Johannes Weiner 6 days, 23 hours ago
On Tue, Mar 24, 2026 at 12:01:55PM +0000, Lorenzo Stoakes (Oracle) wrote:
> On Wed, Mar 18, 2026 at 03:53:24PM -0400, Johannes Weiner wrote:
> > memcg_list_lru_alloc() is called every time an object that may end up
> > on the list_lru is created. It needs to quickly check if the list_lru
> > heads for the memcg already exist, and allocate them when they don't.
> >
> > Doing this with folio objects is tricky: folio_memcg() is not stable
> > and requires either RCU protection or pinning the cgroup. But it's
> > desirable to make the existence check lightweight under RCU, and only
> > pin the memcg when we need to allocate list_lru heads and may block.
> >
> > In preparation for switching the THP shrinker to list_lru, add a
> > helper function for allocating list_lru heads coming from a folio.
> >
> > Reviewed-by: David Hildenbrand (Arm) <david@kernel.org>
> > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> 
> Logic LGTM, but would be nice to have some kdoc. With that addressed, feel free
> to add:
> 
> Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>

Thanks!

> >  include/linux/list_lru.h | 12 ++++++++++++
> >  mm/list_lru.c            | 39 ++++++++++++++++++++++++++++++++++-----
> >  2 files changed, 46 insertions(+), 5 deletions(-)
> >
> > diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
> > index 4afc02deb44d..4bd29b61c59a 100644
> > --- a/include/linux/list_lru.h
> > +++ b/include/linux/list_lru.h
> > @@ -81,6 +81,18 @@ static inline int list_lru_init_memcg_key(struct list_lru *lru, struct shrinker
> >
> >  int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
> >  			 gfp_t gfp);
> > +
> > +#ifdef CONFIG_MEMCG
> > +int folio_memcg_list_lru_alloc(struct folio *folio, struct list_lru *lru,
> > +			       gfp_t gfp);
> 
> Could we have a kdoc comment for this? Thanks!

And one kdoc comment.

Your total will be 8.75, you can pull up to the next window, sir.
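For reference, a kdoc block along these lines would satisfy the request (wording is a sketch, not necessarily the merged version):

```
/**
 * folio_memcg_list_lru_alloc - allocate list_lru heads for a folio's memcg
 * @folio: the folio whose memcg needs list_lru heads
 * @lru: the list_lru the folio may be added to
 * @gfp: allocation flags
 *
 * Ensure the list_lru heads for @folio's memcg exist before the folio
 * is added to @lru. The existence check is done locklessly under RCU;
 * the memcg is only pinned when heads need to be allocated, which may
 * block depending on @gfp.
 *
 * Returns 0 on success, -errno on allocation failure.
 */
```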
Re: [PATCH v3 6/7] mm: list_lru: introduce folio_memcg_list_lru_alloc()
Posted by Lorenzo Stoakes (Oracle) 5 days, 2 hours ago
On Mon, Mar 30, 2026 at 12:54:28PM -0400, Johannes Weiner wrote:
> On Tue, Mar 24, 2026 at 12:01:55PM +0000, Lorenzo Stoakes (Oracle) wrote:
> > On Wed, Mar 18, 2026 at 03:53:24PM -0400, Johannes Weiner wrote:
> > > memcg_list_lru_alloc() is called every time an object that may end up
> > > on the list_lru is created. It needs to quickly check if the list_lru
> > > heads for the memcg already exist, and allocate them when they don't.
> > >
> > > Doing this with folio objects is tricky: folio_memcg() is not stable
> > > and requires either RCU protection or pinning the cgroup. But it's
> > > desirable to make the existence check lightweight under RCU, and only
> > > pin the memcg when we need to allocate list_lru heads and may block.
> > >
> > > In preparation for switching the THP shrinker to list_lru, add a
> > > helper function for allocating list_lru heads coming from a folio.
> > >
> > > Reviewed-by: David Hildenbrand (Arm) <david@kernel.org>
> > > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> >
> > Logic LGTM, but would be nice to have some kdoc. With that addressed, feel free
> > to add:
> >
> > Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
>
> Thanks!
>
> > >  include/linux/list_lru.h | 12 ++++++++++++
> > >  mm/list_lru.c            | 39 ++++++++++++++++++++++++++++++++++-----
> > >  2 files changed, 46 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
> > > index 4afc02deb44d..4bd29b61c59a 100644
> > > --- a/include/linux/list_lru.h
> > > +++ b/include/linux/list_lru.h
> > > @@ -81,6 +81,18 @@ static inline int list_lru_init_memcg_key(struct list_lru *lru, struct shrinker
> > >
> > >  int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
> > >  			 gfp_t gfp);
> > > +
> > > +#ifdef CONFIG_MEMCG
> > > +int folio_memcg_list_lru_alloc(struct folio *folio, struct list_lru *lru,
> > > +			       gfp_t gfp);
> >
> > Could we have a kdoc comment for this? Thanks!
>
> And one kdoc comment.
>
> Your total will be 8.75, you can pull up to the next window, sir.

:>) Thanks!
Re: [PATCH v3 6/7] mm: list_lru: introduce folio_memcg_list_lru_alloc()
Posted by Shakeel Butt 2 weeks, 4 days ago
On Wed, Mar 18, 2026 at 03:53:24PM -0400, Johannes Weiner wrote:
> memcg_list_lru_alloc() is called every time an object that may end up
> on the list_lru is created. It needs to quickly check if the list_lru
> heads for the memcg already exist, and allocate them when they don't.
> 
> Doing this with folio objects is tricky: folio_memcg() is not stable
> and requires either RCU protection or pinning the cgroup. But it's
> desirable to make the existence check lightweight under RCU, and only
> pin the memcg when we need to allocate list_lru heads and may block.
> 
> In preparation for switching the THP shrinker to list_lru, add a
> helper function for allocating list_lru heads coming from a folio.
> 
> Reviewed-by: David Hildenbrand (Arm) <david@kernel.org>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>

Acked-by: Shakeel Butt <shakeel.butt@linux.dev>