[PATCH] slab: Update stale comment for sheaf_capacity.

Kuniyuki Iwashima posted 1 patch 1 month, 2 weeks ago
The comment for sheaf_capacity says it does not enforce NUMA
placement, but it's not true since commit 4ec1a08d2031 ("slab:
allow NUMA restricted allocations to use percpu sheaves").

Let's update the comment.

Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com>
---
 include/linux/slab.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 15a60b501b95..7477109eb315 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -359,9 +359,8 @@ struct kmem_cache_args {
 	 * may replace it with an empty sheaf, unless it's over capacity. In
 	 * that case a sheaf is bulk freed to slab pages.
 	 *
-	 * The sheaves do not enforce NUMA placement of objects, so allocations
-	 * via kmem_cache_alloc_node() with a node specified other than
-	 * NUMA_NO_NODE will bypass them.
+	 * The sheaves try to enforce NUMA placement of objects, but the
+	 * allocation may fall back to the normal operation.
 	 *
 	 * Bulk allocation and free operations also try to use the cpu sheaves
 	 * and barn, but fallback to using slab pages directly.
-- 
2.53.0.473.g4a7958ca14-goog
Re: [PATCH] slab: Update stale comment for sheaf_capacity.
Posted by Vlastimil Babka (SUSE) 1 month, 1 week ago
On 2/28/26 9:15 PM, Kuniyuki Iwashima wrote:
> The comment for sheaf_capacity says it does not enforce NUMA
> placement, but it's not true since commit 4ec1a08d2031 ("slab:
> allow NUMA restricted allocations to use percpu sheaves").
> 
> Let's update the comment.
> 
> Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com>

Hm, the comment is now stale in more respects than just the NUMA aspect. With
7.0-rc1, sheaves exist for all (non-debug) caches, so we probably don't need
to explain the implementation details there anymore, including the NUMA
aspect. The sheaf_capacity argument can partially override (make it larger,
but not smaller) the automatic sheaf size calculation.

Would you like to rewrite the comment as per above then?

Thanks,
Vlastimil

> ---
>  include/linux/slab.h | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 15a60b501b95..7477109eb315 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -359,9 +359,8 @@ struct kmem_cache_args {
>  	 * may replace it with an empty sheaf, unless it's over capacity. In
>  	 * that case a sheaf is bulk freed to slab pages.
>  	 *
> -	 * The sheaves do not enforce NUMA placement of objects, so allocations
> -	 * via kmem_cache_alloc_node() with a node specified other than
> -	 * NUMA_NO_NODE will bypass them.
> +	 * The sheaves try to enforce NUMA placement of objects, but the
> +	 * allocation may fall back to the normal operation.
>  	 *
>  	 * Bulk allocation and free operations also try to use the cpu sheaves
>  	 * and barn, but fallback to using slab pages directly.
Re: [PATCH] slab: Update stale comment for sheaf_capacity.
Posted by Harry Yoo 1 month, 1 week ago
On Sat, Feb 28, 2026 at 08:15:07PM +0000, Kuniyuki Iwashima wrote:
> The comment for sheaf_capacity says it does not enforce NUMA
> placement, but it's not true since commit 4ec1a08d2031 ("slab:
> allow NUMA restricted allocations to use percpu sheaves").
> 
> Let's update the comment.
> 
> Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com>
> ---

Acked-by: Harry Yoo <harry.yoo@oracle.com>

-- 
Cheers,
Harry / Hyeonggon

>  include/linux/slab.h | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 15a60b501b95..7477109eb315 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -359,9 +359,8 @@ struct kmem_cache_args {
>  	 * may replace it with an empty sheaf, unless it's over capacity. In
>  	 * that case a sheaf is bulk freed to slab pages.
>  	 *
> -	 * The sheaves do not enforce NUMA placement of objects, so allocations
> -	 * via kmem_cache_alloc_node() with a node specified other than
> -	 * NUMA_NO_NODE will bypass them.
> +	 * The sheaves try to enforce NUMA placement of objects, but the
> +	 * allocation may fall back to the normal operation.
>  	 *
>  	 * Bulk allocation and free operations also try to use the cpu sheaves
>  	 * and barn, but fallback to using slab pages directly.
> -- 
> 2.53.0.473.g4a7958ca14-goog