Happy new year!
V4: https://lore.kernel.org/linux-mm/20251027122847.320924-1-harry.yoo@oracle.com
V4 -> V5:
- Patch 4: Fixed returning false when the return type is unsigned long
- Patch 7: Fixed incorrect calculation of slabobj_ext offset (Thanks Hao!)
When CONFIG_MEMCG and CONFIG_MEM_ALLOC_PROFILING are enabled,
the kernel allocates two pointers per object: one for the memory cgroup
(actually, obj_cgroup) to which it belongs, and another for the code
location that requested the allocation.
In two special cases, this overhead can be eliminated by allocating
slabobj_ext metadata from unused space within a slab:
Case 1. The "leftover" space after the last slab object is larger than
the size of an array of slabobj_ext.
Case 2. The per-object alignment padding is larger than
sizeof(struct slabobj_ext).
For these two cases, one or two pointers can be saved per slab object.
Examples: ext4 inode cache (case 1) and xfs inode cache (case 2).
That's approximately 0.7-0.8% (memcg) or 1.5-1.6% (memcg + mem profiling)
of the total inode cache size.
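As a rough sanity check of those percentages (assuming an inode object
of about 1 KB; the actual object size depends on the kernel config, so
the numbers below are illustrative only):

	/*
	 * one pointer saved per object (memcg only):
	 *	8 / 1024  ~= 0.78%
	 * two pointers saved per object (memcg + mem profiling):
	 *	16 / 1024 ~= 1.56%
	 */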
Implementing case 2 is not straightforward, because the existing code
assumes that slab->obj_exts is an array of slabobj_ext, while case 2
breaks the assumption.
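To make the case 2 layout concrete, here is a rough per-object sketch
(illustrative only; the exact placement of the metadata within the
padding is determined by the series):

	/*
	 * Case 2, one object occupying s->size bytes (sketch only):
	 *
	 *   |<----------------------- s->size ----------------------->|
	 *   | object payload | free ptr etc. | slabobj_ext | padding  |
	 *
	 * Consecutive slabobj_ext entries are s->size bytes apart, so
	 * slab->obj_exts can no longer be indexed as a plain
	 * struct slabobj_ext[] array.
	 */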
As suggested by Vlastimil, abstract access to individual slabobj_ext
metadata via a new helper named slab_obj_ext():
static inline struct slabobj_ext *slab_obj_ext(struct slab *slab,
					       unsigned long obj_exts,
					       unsigned int index)
{
	return (struct slabobj_ext *)(obj_exts + slab_get_stride(slab) * index);
}
In the normal case (including case 1), slab->obj_exts points to an array
of slabobj_ext, and the stride is sizeof(struct slabobj_ext).
In case 2, the stride is s->size and
slab->obj_exts = slab_address(slab) + s->red_left_pad + (offset of slabobj_ext)
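For illustration, a caller would walk the metadata like this (a minimal
sketch: slab_obj_exts() is the existing accessor that masks the flag
bits off slab->obj_exts; the function and loop around it are
hypothetical):

	static void example_clear_objcgs(struct slab *slab, unsigned int objects)
	{
		unsigned long obj_exts = slab_obj_exts(slab);
		unsigned int i;

		for (i = 0; i < objects; i++) {
			struct slabobj_ext *ext = slab_obj_ext(slab, obj_exts, i);

			/* same code for both layouts, array or in-object */
			ext->objcg = NULL;
		}
	}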
With this approach, the memcg charging fastpath doesn't need to care
about the storage method of slabobj_ext.
Harry Yoo (8):
mm/slab: use unsigned long for orig_size to ensure proper metadata
align
mm/slab: allow specifying free pointer offset when using constructor
ext4: specify the free pointer offset for ext4_inode_cache
mm/slab: abstract slabobj_ext access via new slab_obj_ext() helper
mm/slab: use stride to access slabobj_ext
mm/memcontrol,alloc_tag: handle slabobj_ext access under KASAN poison
mm/slab: save memory by allocating slabobj_ext array from leftover
mm/slab: place slabobj_ext metadata in unused space within s->size
fs/ext4/super.c | 20 ++-
include/linux/slab.h | 39 +++--
mm/memcontrol.c | 31 +++-
mm/slab.h | 120 ++++++++++++++-
mm/slab_common.c | 8 +-
mm/slub.c | 345 +++++++++++++++++++++++++++++++++++--------
6 files changed, 466 insertions(+), 97 deletions(-)
--
2.43.0
On 1/5/26 09:02, Harry Yoo wrote:
> Happy new year!
>
> V4: https://lore.kernel.org/linux-mm/20251027122847.320924-1-harry.yoo@oracle.com
> V4 -> V5:
> - Patch 4: Fixed returning false when the return type is unsigned long
> - Patch 7: Fixed incorrect calculation of slabobj_ext offset (Thanks Hao!)
Besides the stuff pointed out, the rest seemed ok to me. Can you resend with
those addressed, and rebased on slab.git slab/for-7.0/obj_metadata to avoid
a conflict in patch 8/8 with Hao's comment update patch there? I will add
the series on top there then. Thanks!
> When CONFIG_MEMCG and CONFIG_MEM_ALLOC_PROFILING are enabled,
> the kernel allocates two pointers per object: one for the memory cgroup
> (actually, obj_cgroup) to which it belongs, and another for the code
> location that requested the allocation.
>
> In two special cases, this overhead can be eliminated by allocating
> slabobj_ext metadata from unused space within a slab:
>
> Case 1. The "leftover" space after the last slab object is larger than
> the size of an array of slabobj_ext.
>
> Case 2. The per-object alignment padding is larger than
> sizeof(struct slabobj_ext).
>
> For these two cases, one or two pointers can be saved per slab object.
> Examples: ext4 inode cache (case 1) and xfs inode cache (case 2).
> That's approximately 0.7-0.8% (memcg) or 1.5-1.6% (memcg + mem profiling)
> of the total inode cache size.
>
> Implementing case 2 is not straightforward, because the existing code
> assumes that slab->obj_exts is an array of slabobj_ext, while case 2
> breaks the assumption.
>
> As suggested by Vlastimil, abstract access to individual slabobj_ext
> metadata via a new helper named slab_obj_ext():
>
> static inline struct slabobj_ext *slab_obj_ext(struct slab *slab,
> 					       unsigned long obj_exts,
> 					       unsigned int index)
> {
> 	return (struct slabobj_ext *)(obj_exts + slab_get_stride(slab) * index);
> }
>
> In the normal case (including case 1), slab->obj_exts points to an array
> of slabobj_ext, and the stride is sizeof(struct slabobj_ext).
>
> In case 2, the stride is s->size and
> slab->obj_exts = slab_address(slab) + s->red_left_pad + (offset of slabobj_ext)
>
> With this approach, the memcg charging fastpath doesn't need to care
> about the storage method of slabobj_ext.
>
> Harry Yoo (8):
> mm/slab: use unsigned long for orig_size to ensure proper metadata
> align
> mm/slab: allow specifying free pointer offset when using constructor
> ext4: specify the free pointer offset for ext4_inode_cache
> mm/slab: abstract slabobj_ext access via new slab_obj_ext() helper
> mm/slab: use stride to access slabobj_ext
> mm/memcontrol,alloc_tag: handle slabobj_ext access under KASAN poison
> mm/slab: save memory by allocating slabobj_ext array from leftover
> mm/slab: place slabobj_ext metadata in unused space within s->size
>
> fs/ext4/super.c | 20 ++-
> include/linux/slab.h | 39 +++--
> mm/memcontrol.c | 31 +++-
> mm/slab.h | 120 ++++++++++++++-
> mm/slab_common.c | 8 +-
> mm/slub.c | 345 +++++++++++++++++++++++++++++++++++--------
> 6 files changed, 466 insertions(+), 97 deletions(-)
>
On Wed, Jan 07, 2026 at 06:43:30PM +0100, Vlastimil Babka wrote:
> On 1/5/26 09:02, Harry Yoo wrote:
> > Happy new year!
> >
> > V4: https://lore.kernel.org/linux-mm/20251027122847.320924-1-harry.yoo@oracle.com
> > V4 -> V5:
> > - Patch 4: Fixed returning false when the return type is unsigned long
> > - Patch 7: Fixed incorrect calculation of slabobj_ext offset (Thanks Hao!)
>
> Besides the stuff pointed out, the rest seemed ok to me.

Thanks for reviewing, Vlastimil!

> Can you resend with those addressed,

Will do.

> and rebased on slab.git slab/for-7.0/obj_metadata to avoid a conflict
> in patch 8/8 with Hao's comment update patch there?

Will do.

> I will add the series on top there then. Thanks!

Thanks a lot!

--
Cheers,
Harry / Hyeonggon
On Mon, Jan 05, 2026 at 05:02:22PM +0900, Harry Yoo wrote:
> Happy new year!
>
> V4: https://lore.kernel.org/linux-mm/20251027122847.320924-1-harry.yoo@oracle.com
Actually, that's RFC V3.
V4: https://lore.kernel.org/linux-mm/20251222110843.980347-1-harry.yoo@oracle.com/
--
Cheers,
Harry / Hyeonggon
> V4 -> V5:
> - Patch 4: Fixed returning false when the return type is unsigned long
> - Patch 7: Fixed incorrect calculation of slabobj_ext offset (Thanks Hao!)
>
> When CONFIG_MEMCG and CONFIG_MEM_ALLOC_PROFILING are enabled,
> the kernel allocates two pointers per object: one for the memory cgroup
> (actually, obj_cgroup) to which it belongs, and another for the code
> location that requested the allocation.
>
> In two special cases, this overhead can be eliminated by allocating
> slabobj_ext metadata from unused space within a slab:
>
> Case 1. The "leftover" space after the last slab object is larger than
> the size of an array of slabobj_ext.
>
> Case 2. The per-object alignment padding is larger than
> sizeof(struct slabobj_ext).
>
> For these two cases, one or two pointers can be saved per slab object.
> Examples: ext4 inode cache (case 1) and xfs inode cache (case 2).
> That's approximately 0.7-0.8% (memcg) or 1.5-1.6% (memcg + mem profiling)
> of the total inode cache size.
>
> Implementing case 2 is not straightforward, because the existing code
> assumes that slab->obj_exts is an array of slabobj_ext, while case 2
> breaks the assumption.
>
> As suggested by Vlastimil, abstract access to individual slabobj_ext
> metadata via a new helper named slab_obj_ext():
>
> static inline struct slabobj_ext *slab_obj_ext(struct slab *slab,
> 					       unsigned long obj_exts,
> 					       unsigned int index)
> {
> 	return (struct slabobj_ext *)(obj_exts + slab_get_stride(slab) * index);
> }
>
> In the normal case (including case 1), slab->obj_exts points to an array
> of slabobj_ext, and the stride is sizeof(struct slabobj_ext).
>
> In case 2, the stride is s->size and
> slab->obj_exts = slab_address(slab) + s->red_left_pad + (offset of slabobj_ext)
>
> With this approach, the memcg charging fastpath doesn't need to care
> about the storage method of slabobj_ext.
>
> Harry Yoo (8):
> mm/slab: use unsigned long for orig_size to ensure proper metadata
> align
> mm/slab: allow specifying free pointer offset when using constructor
> ext4: specify the free pointer offset for ext4_inode_cache
> mm/slab: abstract slabobj_ext access via new slab_obj_ext() helper
> mm/slab: use stride to access slabobj_ext
> mm/memcontrol,alloc_tag: handle slabobj_ext access under KASAN poison
> mm/slab: save memory by allocating slabobj_ext array from leftover
> mm/slab: place slabobj_ext metadata in unused space within s->size
>
> fs/ext4/super.c | 20 ++-
> include/linux/slab.h | 39 +++--
> mm/memcontrol.c | 31 +++-
> mm/slab.h | 120 ++++++++++++++-
> mm/slab_common.c | 8 +-
> mm/slub.c | 345 +++++++++++++++++++++++++++++++++++--------
> 6 files changed, 466 insertions(+), 97 deletions(-)
>
> --
> 2.43.0
>