[PATCH] slab: replace cache_from_obj() with inline checks

Vlastimil Babka posted 1 patch 2 weeks, 5 days ago
Eric Dumazet noticed that cache_from_obj() is not inlined with clang and
suggested splitting it into two functions, where the smaller inlined one
assumes !CONFIG_SLAB_FREELIST_HARDENED as the fastpath. However, most
distros enable that option these days, so the split would likely add a
function call to the object free fastpaths.

Instead take a step back and consider that cache_from_obj() is a relic
from when memcgs created their separate kmem_cache copies, as the
outdated comment in build_detached_freelist() reminds us.

Meanwhile hardening/debugging had reused cache_from_obj() to validate
that the freed object really belongs to a slab from the cache we think
we are freeing from.

In build_detached_freelist(), simply remove the check: it did not handle
a NULL result from a cache_from_obj() failure properly, nor did it
validate objects (against a NULL slab->slab_cache pointer) when called
via kfree_bulk(). If anyone is motivated to implement it properly, it
should be possible in a way similar to kmem_cache_free().

In kmem_cache_free(), perform the hardening/debugging checks directly so
they are inlined by definition and virt_to_slab(obj) is evaluated just
once. When they fail, call the newly introduced warn_free_bad_obj(),
which issues the warnings outside of the fastpath.

Note this also changes what happens on a cache mismatch: previously
cache_from_obj() would free the object into the cache it actually
belongs to, whereas now the object is not freed at all, which is the
safer response to an already-detected bug.

As a result the fastpath should be inlined in all configs and the
warnings are moved out of line.

Reported-by: Eric Dumazet <edumazet@google.com>
Closes: https://lore.kernel.org/all/20260115130642.3419324-1-edumazet@google.com/
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/slub.c | 52 +++++++++++++++++++++++++++++-----------------------
 1 file changed, 29 insertions(+), 23 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 861592ac5425..1bdb4f73d61b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -6738,30 +6738,26 @@ void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
 }
 #endif
 
-static inline struct kmem_cache *virt_to_cache(const void *obj)
+static noinline void warn_free_bad_obj(struct kmem_cache *s, void *obj)
 {
+	struct kmem_cache *cachep;
 	struct slab *slab;
 
 	slab = virt_to_slab(obj);
-	if (WARN_ONCE(!slab, "%s: Object is not a Slab page!\n", __func__))
-		return NULL;
-	return slab->slab_cache;
-}
-
-static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
-{
-	struct kmem_cache *cachep;
+	if (WARN_ONCE(!slab,
+			"kmem_cache_free(%s, %p): object is not in a slab page\n",
+			s->name, obj))
+		return;
 
-	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
-	    !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
-		return s;
+	cachep = slab->slab_cache;
 
-	cachep = virt_to_cache(x);
-	if (WARN(cachep && cachep != s,
-		 "%s: Wrong slab cache. %s but object is from %s\n",
-		 __func__, s->name, cachep->name))
-		print_tracking(cachep, x);
-	return cachep;
+	if (WARN_ONCE(cachep != s,
+			"kmem_cache_free(%s, %p): object belongs to different cache %s\n",
+			s->name, obj, cachep ? cachep->name : "(NULL)")) {
+		if (cachep)
+			print_tracking(cachep, obj);
+		return;
+	}
 }
 
 /**
@@ -6774,11 +6770,21 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
  */
 void kmem_cache_free(struct kmem_cache *s, void *x)
 {
-	s = cache_from_obj(s, x);
-	if (!s)
-		return;
+	struct slab *slab;
+
+	slab = virt_to_slab(x);
+
+	if (IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) ||
+	    kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS)) {
+
+		if (unlikely(!slab || (slab->slab_cache != s))) {
+			warn_free_bad_obj(s, x);
+			return;
+		}
+	}
+
 	trace_kmem_cache_free(_RET_IP_, x, s);
-	slab_free(s, virt_to_slab(x), x, _RET_IP_);
+	slab_free(s, slab, x, _RET_IP_);
 }
 EXPORT_SYMBOL(kmem_cache_free);
 
@@ -7305,7 +7311,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
 		df->s = slab->slab_cache;
 	} else {
 		df->slab = slab;
-		df->s = cache_from_obj(s, object); /* Support for memcg */
+		df->s = s;
 	}
 
 	/* Start new detached freelist */

---
base-commit: 0f61b1860cc3f52aef9036d7235ed1f017632193
change-id: 20260120-b4-remove_cache_from_obj-190fcaf16789

Best regards,
-- 
Vlastimil Babka <vbabka@suse.cz>
Re: [PATCH] slab: replace cache_from_obj() with inline checks
Posted by Hao Li 2 weeks, 5 days ago
On Tue, Jan 20, 2026 at 10:35:42AM +0100, Vlastimil Babka wrote:
> Eric Dumazet has noticed cache_from_obj() is not inlined with clang and
> suggested splitting it into two functions, where the smaller inlined one
> assumes the fastpath is !CONFIG_SLAB_FREELIST_HARDENED. However most
> distros enable it these days and so this would likely add a function
> call to the object free fastpaths.
> 
> Instead take a step back and consider that cache_from_obj() is a relict
> from when memcgs created their separate kmem_cache copies, as the
> outdated comment in build_detached_freelist() reminds us.
> 
> Meanwhile hardening/debugging had reused cache_from_obj() to validate
> that the freed object really belongs to a slab from the cache we think
> we are freeing from.
> 
> In build_detached_freelist() simply remove this, because it did not
> handle the NULL result from cache_from_obj() failure properly, nor
> validate objects (for the NULL slab->slab_cache pointer) when called via
> kfree_bulk(). If anyone is motivated to implement it properly, it should
> be possible in a similar way to kmem_cache_free().
> 
> In kmem_cache_free(), do the hardening/debugging checks directly so they
> are inlined by definition and virt_to_slab(obj) is performed just once.
> In case they failed, call a newly introduced warn_free_bad_obj() that
> performs the warnings outside of the fastpath.
> 
> As a result the fastpath should be inlined in all configs and the
> warnings are moved away.
> 
> Reported-by: Eric Dumazet <edumazet@google.com>
> Closes: https://lore.kernel.org/all/20260115130642.3419324-1-edumazet@google.com/
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> ---
>  mm/slub.c | 52 +++++++++++++++++++++++++++++-----------------------
>  1 file changed, 29 insertions(+), 23 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 861592ac5425..1bdb4f73d61b 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -6738,30 +6738,26 @@ void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
>  }
>  #endif
>  
> -static inline struct kmem_cache *virt_to_cache(const void *obj)
> +static noinline void warn_free_bad_obj(struct kmem_cache *s, void *obj)
>  {
> +	struct kmem_cache *cachep;
>  	struct slab *slab;
>  
>  	slab = virt_to_slab(obj);
> -	if (WARN_ONCE(!slab, "%s: Object is not a Slab page!\n", __func__))
> -		return NULL;
> -	return slab->slab_cache;
> -}
> -
> -static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
> -{
> -	struct kmem_cache *cachep;
> +	if (WARN_ONCE(!slab,
> +			"kmem_cache_free(%s, %p): object is not in a slab page\n",
> +			s->name, obj))
> +		return;
>  
> -	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
> -	    !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
> -		return s;
> +	cachep = slab->slab_cache;
>  
> -	cachep = virt_to_cache(x);
> -	if (WARN(cachep && cachep != s,
> -		 "%s: Wrong slab cache. %s but object is from %s\n",
> -		 __func__, s->name, cachep->name))
> -		print_tracking(cachep, x);
> -	return cachep;
> +	if (WARN_ONCE(cachep != s,
> +			"kmem_cache_free(%s, %p): object belongs to different cache %s\n",
> +			s->name, obj, cachep ? cachep->name : "(NULL)")) {
> +		if (cachep)
> +			print_tracking(cachep, obj);
> +		return;
> +	}
>  }
>  
>  /**
> @@ -6774,11 +6770,21 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
>   */
>  void kmem_cache_free(struct kmem_cache *s, void *x)
>  {
> -	s = cache_from_obj(s, x);
> -	if (!s)
> -		return;
> +	struct slab *slab;
> +
> +	slab = virt_to_slab(x);
> +
> +	if (IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) ||
> +	    kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS)) {
> +
> +		if (unlikely(!slab || (slab->slab_cache != s))) {
> +			warn_free_bad_obj(s, x);

Just to make sure I'm understanding correctly, are we intentionally not
releasing the object x in this case? Thanks.

> +			return;
> +		}
> +	}
> +
>  	trace_kmem_cache_free(_RET_IP_, x, s);
> -	slab_free(s, virt_to_slab(x), x, _RET_IP_);
> +	slab_free(s, slab, x, _RET_IP_);
>  }
>  EXPORT_SYMBOL(kmem_cache_free);
>  
> @@ -7305,7 +7311,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
>  		df->s = slab->slab_cache;
>  	} else {
>  		df->slab = slab;
> -		df->s = cache_from_obj(s, object); /* Support for memcg */
> +		df->s = s;
>  	}
>  
>  	/* Start new detached freelist */
> 
> ---
> base-commit: 0f61b1860cc3f52aef9036d7235ed1f017632193
> change-id: 20260120-b4-remove_cache_from_obj-190fcaf16789
> 
> Best regards,
> -- 
> Vlastimil Babka <vbabka@suse.cz>
>
Re: [PATCH] slab: replace cache_from_obj() with inline checks
Posted by Vlastimil Babka 2 weeks, 5 days ago
On 1/20/26 12:57, Hao Li wrote:

>> @@ -6774,11 +6770,21 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
>>   */
>>  void kmem_cache_free(struct kmem_cache *s, void *x)
>>  {
>> -	s = cache_from_obj(s, x);
>> -	if (!s)
>> -		return;
>> +	struct slab *slab;
>> +
>> +	slab = virt_to_slab(x);
>> +
>> +	if (IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) ||
>> +	    kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS)) {
>> +
>> +		if (unlikely(!slab || (slab->slab_cache != s))) {
>> +			warn_free_bad_obj(s, x);
> 
> Just to make sure I'm understanding correctly, are we intentionally not
> releasing the object x in this case? Thanks.

Yes, it means something went wrong so it's better not to do anything. That
was true before this patch as well.

>> +			return;
>> +		}
>> +	}
>> +
>>  	trace_kmem_cache_free(_RET_IP_, x, s);
>> -	slab_free(s, virt_to_slab(x), x, _RET_IP_);
>> +	slab_free(s, slab, x, _RET_IP_);
>>  }
>>  EXPORT_SYMBOL(kmem_cache_free);
>>  
>> @@ -7305,7 +7311,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
>>  		df->s = slab->slab_cache;
>>  	} else {
>>  		df->slab = slab;
>> -		df->s = cache_from_obj(s, object); /* Support for memcg */
>> +		df->s = s;
>>  	}
>>  
>>  	/* Start new detached freelist */
>> 
>> ---
>> base-commit: 0f61b1860cc3f52aef9036d7235ed1f017632193
>> change-id: 20260120-b4-remove_cache_from_obj-190fcaf16789
>> 
>> Best regards,
>> -- 
>> Vlastimil Babka <vbabka@suse.cz>
>>
Re: [PATCH] slab: replace cache_from_obj() with inline checks
Posted by Hao Li 2 weeks, 4 days ago
On Tue, Jan 20, 2026 at 01:55:09PM +0100, Vlastimil Babka wrote:
> On 1/20/26 12:57, Hao Li wrote:
> 
> >> @@ -6774,11 +6770,21 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
> >>   */
> >>  void kmem_cache_free(struct kmem_cache *s, void *x)
> >>  {
> >> -	s = cache_from_obj(s, x);
> >> -	if (!s)
> >> -		return;
> >> +	struct slab *slab;
> >> +
> >> +	slab = virt_to_slab(x);
> >> +
> >> +	if (IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) ||
> >> +	    kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS)) {
> >> +
> >> +		if (unlikely(!slab || (slab->slab_cache != s))) {
> >> +			warn_free_bad_obj(s, x);
> > 
> > Just to make sure I'm understanding correctly, are we intentionally not
> > releasing the object x in this case? Thanks.
> 
> Yes, it means something went wrong so it's better not to do anything.

Got it, that makes sense - thanks!

> That was true before this patch as well.

I'm still not entirely sure I follow. I read the original code, and it seems
like it retrieved the real cache from the object and released the object into
that correct cache. Did I misunderstand something?

-- 
Thanks,
Hao

> 
> >> +			return;
> >> +		}
> >> +	}
> >> +
> >>  	trace_kmem_cache_free(_RET_IP_, x, s);
> >> -	slab_free(s, virt_to_slab(x), x, _RET_IP_);
> >> +	slab_free(s, slab, x, _RET_IP_);
> >>  }
> >>  EXPORT_SYMBOL(kmem_cache_free);
> >>  
> >> @@ -7305,7 +7311,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
> >>  		df->s = slab->slab_cache;
> >>  	} else {
> >>  		df->slab = slab;
> >> -		df->s = cache_from_obj(s, object); /* Support for memcg */
> >> +		df->s = s;
> >>  	}
> >>  
> >>  	/* Start new detached freelist */
> >> 
> >> ---
> >> base-commit: 0f61b1860cc3f52aef9036d7235ed1f017632193
> >> change-id: 20260120-b4-remove_cache_from_obj-190fcaf16789
> >> 
> >> Best regards,
> >> -- 
> >> Vlastimil Babka <vbabka@suse.cz>
> >> 
>
Re: [PATCH] slab: replace cache_from_obj() with inline checks
Posted by Vlastimil Babka 2 weeks, 4 days ago
On 1/20/26 14:56, Hao Li wrote:
> On Tue, Jan 20, 2026 at 01:55:09PM +0100, Vlastimil Babka wrote:
>> On 1/20/26 12:57, Hao Li wrote:
>> 
>> >> @@ -6774,11 +6770,21 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
>> >>   */
>> >>  void kmem_cache_free(struct kmem_cache *s, void *x)
>> >>  {
>> >> -	s = cache_from_obj(s, x);
>> >> -	if (!s)
>> >> -		return;
>> >> +	struct slab *slab;
>> >> +
>> >> +	slab = virt_to_slab(x);
>> >> +
>> >> +	if (IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) ||
>> >> +	    kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS)) {
>> >> +
>> >> +		if (unlikely(!slab || (slab->slab_cache != s))) {
>> >> +			warn_free_bad_obj(s, x);
>> > 
>> > Just to make sure I'm understanding correctly, are we intentionally not
>> > releasing the object x in this case? Thanks.
>> 
>> Yes, it means something went wrong so it's better not to do anything.
> 
> Got it, that makes sense - thanks!
> 
>> That was true before this patch as well.
> 
> I'm still not entirely sure I follow. I read the original code, and it seems
> like it retrieved the real cache from the object and released the object into
> that correct cache. Did I misunderstand something?

You're right, I misread it as returning NULL, but that was only on
virt_to_slab() failure. So that's a change, but I believe a proper one.
I'll mention it in the changelog and the comment. Thanks!
Re: [PATCH] slab: replace cache_from_obj() with inline checks
Posted by Hao Li 2 weeks, 4 days ago
On Tue, Jan 20, 2026 at 03:01:23PM +0100, Vlastimil Babka wrote:
> On 1/20/26 14:56, Hao Li wrote:
> > On Tue, Jan 20, 2026 at 01:55:09PM +0100, Vlastimil Babka wrote:
> >> On 1/20/26 12:57, Hao Li wrote:
> >> 
> >> >> @@ -6774,11 +6770,21 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
> >> >>   */
> >> >>  void kmem_cache_free(struct kmem_cache *s, void *x)
> >> >>  {
> >> >> -	s = cache_from_obj(s, x);
> >> >> -	if (!s)
> >> >> -		return;
> >> >> +	struct slab *slab;
> >> >> +
> >> >> +	slab = virt_to_slab(x);
> >> >> +
> >> >> +	if (IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) ||
> >> >> +	    kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS)) {
> >> >> +
> >> >> +		if (unlikely(!slab || (slab->slab_cache != s))) {
> >> >> +			warn_free_bad_obj(s, x);
> >> > 
> >> > Just to make sure I'm understanding correctly, are we intentionally not
> >> > releasing the object x in this case? Thanks.
> >> 
> >> Yes, it means something went wrong so it's better not to do anything.
> > 
> > Got it, that makes sense - thanks!
> > 
> >> That was true before this patch as well.
> > 
> > I'm still not entirely sure I follow. I read the original code, and it seems
> > like it retrieved the real cache from the object and released the object into
> > that correct cache. Did I misunderstand something?
> 
> You're right, I misread it as returning NULL, but that was only on
> virt_to_slab() failure. So that's a change, but I believe a proper one.

Yes, I agree - the current handling looks good to me.

> But I'll mention it in the changelog and the comment. Thanks!

Sounds great, thanks for taking care of that.

Reviewed-by: Hao Li <hao.li@linux.dev>

-- 
Thanks,
Hao
Re: [PATCH] slab: replace cache_from_obj() with inline checks
Posted by Harry Yoo 2 weeks, 5 days ago
On Tue, Jan 20, 2026 at 10:35:42AM +0100, Vlastimil Babka wrote:
> Eric Dumazet has noticed cache_from_obj() is not inlined with clang and
> suggested splitting it into two functions, where the smaller inlined one
> assumes the fastpath is !CONFIG_SLAB_FREELIST_HARDENED. However most
> distros enable it these days and so this would likely add a function
> call to the object free fastpaths.
> 
> Instead take a step back and consider that cache_from_obj() is a relict
> from when memcgs created their separate kmem_cache copies, as the
> outdated comment in build_detached_freelist() reminds us.
> 
> Meanwhile hardening/debugging had reused cache_from_obj() to validate
> that the freed object really belongs to a slab from the cache we think
> we are freeing from.
> 
> In build_detached_freelist() simply remove this, because it did not
> handle the NULL result from cache_from_obj() failure properly, nor
> validate objects (for the NULL slab->slab_cache pointer) when called via
> kfree_bulk(). If anyone is motivated to implement it properly, it should
> be possible in a similar way to kmem_cache_free().
> 
> In kmem_cache_free(), do the hardening/debugging checks directly so they
> are inlined by definition and virt_to_slab(obj) is performed just once.
> In case they failed, call a newly introduced warn_free_bad_obj() that
> performs the warnings outside of the fastpath.
> 
> As a result the fastpath should be inlined in all configs and the
> warnings are moved away.
> 
> Reported-by: Eric Dumazet <edumazet@google.com>
> Closes: https://lore.kernel.org/all/20260115130642.3419324-1-edumazet@google.com/
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> ---

Looks good to me,
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>

-- 
Cheers,
Harry / Hyeonggon