The comments above check_pad_bytes() document the field layout of a
single object. Rewrite them to improve clarity and precision.
Also update an outdated comment in calculate_sizes().
Suggested-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: Hao Li <hao.li@linux.dev>
---
Hi Harry, this patch adds more detailed object layout documentation. Let
me know if you have any comments.
mm/slub.c | 92 ++++++++++++++++++++++++++++++++-----------------------
1 file changed, 53 insertions(+), 39 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index a94c64f56504..138e9d13540d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1211,44 +1211,58 @@ check_bytes_and_report(struct kmem_cache *s, struct slab *slab,
}
/*
- * Object layout:
- *
- * object address
- * Bytes of the object to be managed.
- * If the freepointer may overlay the object then the free
- * pointer is at the middle of the object.
- *
- * Poisoning uses 0x6b (POISON_FREE) and the last byte is
- * 0xa5 (POISON_END)
- *
- * object + s->object_size
- * Padding to reach word boundary. This is also used for Redzoning.
- * Padding is extended by another word if Redzoning is enabled and
- * object_size == inuse.
- *
- * We fill with 0xbb (SLUB_RED_INACTIVE) for inactive objects and with
- * 0xcc (SLUB_RED_ACTIVE) for objects in use.
- *
- * object + s->inuse
- * Meta data starts here.
- *
- * A. Free pointer (if we cannot overwrite object on free)
- * B. Tracking data for SLAB_STORE_USER
- * C. Original request size for kmalloc object (SLAB_STORE_USER enabled)
- * D. Padding to reach required alignment boundary or at minimum
- * one word if debugging is on to be able to detect writes
- * before the word boundary.
- *
- * Padding is done using 0x5a (POISON_INUSE)
- *
- * object + s->size
- * Nothing is used beyond s->size.
- *
- * If slabcaches are merged then the object_size and inuse boundaries are mostly
- * ignored. And therefore no slab options that rely on these boundaries
+ * Object field layout:
+ *
+ * [Left redzone padding] (if SLAB_RED_ZONE)
+ * - Field size: s->red_left_pad
+ * - Filled with 0xbb (SLUB_RED_INACTIVE) for inactive objects and
+ * 0xcc (SLUB_RED_ACTIVE) for objects in use when SLAB_RED_ZONE.
+ *
+ * [Object bytes]
+ * - Field size: s->object_size
+ * - Object payload bytes.
+ * - If the freepointer may overlap the object, it is stored inside
+ * the object (typically near the middle).
+ * - Poisoning uses 0x6b (POISON_FREE) and the last byte is
+ * 0xa5 (POISON_END) when __OBJECT_POISON is enabled.
+ *
+ * [Word-align padding] (right redzone when SLAB_RED_ZONE is set)
+ * - Field size: s->inuse - s->object_size
+ * - If redzoning is enabled and ALIGN(size, sizeof(void *)) adds no
+ * padding, explicitly extend by one word so the right redzone is
+ * non-empty.
+ * - Filled with 0xbb (SLUB_RED_INACTIVE) for inactive objects and
+ * 0xcc (SLUB_RED_ACTIVE) for objects in use when SLAB_RED_ZONE.
+ *
+ * [Metadata starts at object + s->inuse]
+ * - A. freelist pointer (if freeptr_outside_object)
+ * - B. alloc tracking (SLAB_STORE_USER)
+ * - C. free tracking (SLAB_STORE_USER)
+ * - D. original request size (SLAB_KMALLOC && SLAB_STORE_USER)
+ * - E. KASAN metadata (if enabled)
+ *
+ * [Mandatory padding] (if CONFIG_SLUB_DEBUG && SLAB_RED_ZONE)
+ * - One mandatory debug word to guarantee a minimum poisoned gap
+ * between metadata and the next object, independent of alignment.
+ * - Filled with 0x5a (POISON_INUSE) when SLAB_POISON is set.
+ * [Final alignment padding]
+ * - Any bytes added by ALIGN(size, s->align) to reach s->size.
+ * - Filled with 0x5a (POISON_INUSE) when SLAB_POISON is set.
+ *
+ * Notes:
+ * - Redzones are filled by init_object() with SLUB_RED_ACTIVE/INACTIVE.
+ * - Object contents are poisoned with POISON_FREE/END when __OBJECT_POISON.
+ * - The trailing padding is pre-filled with POISON_INUSE by
+ * setup_slab_debug() when SLAB_POISON is set, and is validated by
+ * check_pad_bytes().
+ * - The first object pointer is slab_address(slab) +
+ * (s->red_left_pad if redzoning); subsequent objects are reached by
+ * adding s->size each time.
+ *
+ * If slabcaches are merged then the object_size and inuse boundaries are
+ * mostly ignored. Therefore no slab options that rely on these boundaries
* may be used with merged slabcaches.
*/
-
static int check_pad_bytes(struct kmem_cache *s, struct slab *slab, u8 *p)
{
unsigned long off = get_info_end(s); /* The end of info */
@@ -7103,9 +7117,9 @@ static int calculate_sizes(struct kmem_cache_args *args, struct kmem_cache *s)
/*
- * If we are Redzoning then check if there is some space between the
- * end of the object and the free pointer. If not then add an
- * additional word to have some bytes to store Redzone information.
+ * If we are Redzoning and there is no space between the end of the
+ * object and the following fields, add one word so the right Redzone
+ * is non-empty.
*/
if ((flags & SLAB_RED_ZONE) && size == s->object_size)
size += sizeof(void *);
--
2.50.1
On Wed, Dec 24, 2025 at 08:51:14PM +0800, Hao Li wrote:
> The comments above check_pad_bytes() document the field layout of a
> single object. Rewrite them to improve clarity and precision.
>
> Also update an outdated comment in calculate_sizes().
>
> Suggested-by: Harry Yoo <harry.yoo@oracle.com>
> Signed-off-by: Hao Li <hao.li@linux.dev>
> ---
> Hi Harry, this patch adds more detailed object layout documentation. Let
> me know if you have any comments.

Hi Hao, thanks for improving it!
It looks much clearer now.

few nits below.

> + * Object field layout:
> + *
> + * [Left redzone padding] (if SLAB_RED_ZONE)
> + *   - Field size: s->red_left_pad
> + *   - Filled with 0xbb (SLUB_RED_INACTIVE) for inactive objects and
> + *     0xcc (SLUB_RED_ACTIVE) for objects in use when SLAB_RED_ZONE.

nit: although it becomes clear after reading the Notes: section,
I would like to make it clear that object address starts here (after
the left redzone) and the left redzone is right before each object.

> + * [Object bytes]
> + *   - Field size: s->object_size
> + *   - Object payload bytes.
> + *   - If the freepointer may overlap the object, it is stored inside
> + *     the object (typically near the middle).
> + *   - Poisoning uses 0x6b (POISON_FREE) and the last byte is
> + *     0xa5 (POISON_END) when __OBJECT_POISON is enabled.
> + *
> + * [Word-align padding] (right redzone when SLAB_RED_ZONE is set)
> + *   - Field size: s->inuse - s->object_size
> + *   - If redzoning is enabled and ALIGN(size, sizeof(void *)) adds no
> + *     padding, explicitly extend by one word so the right redzone is
> + *     non-empty.
> + *   - Filled with 0xbb (SLUB_RED_INACTIVE) for inactive objects and
> + *     0xcc (SLUB_RED_ACTIVE) for objects in use when SLAB_RED_ZONE.
> + *
> + * [Metadata starts at object + s->inuse]
> + *   - A. freelist pointer (if freeptr_outside_object)
> + *   - B. alloc tracking (SLAB_STORE_USER)
> + *   - C. free tracking (SLAB_STORE_USER)
> + *   - D. original request size (SLAB_KMALLOC && SLAB_STORE_USER)
> + *   - E. KASAN metadata (if enabled)
> + *
> + * [Mandatory padding] (if CONFIG_SLUB_DEBUG && SLAB_RED_ZONE)
> + *   - One mandatory debug word to guarantee a minimum poisoned gap
> + *     between metadata and the next object, independent of alignment.
> + *   - Filled with 0x5a (POISON_INUSE) when SLAB_POISON is set.
>
> + * [Final alignment padding]
> + *   - Any bytes added by ALIGN(size, s->align) to reach s->size.
> + *   - Filled with 0x5a (POISON_INUSE) when SLAB_POISON is set.
> + *
> + * Notes:
> + * - Redzones are filled by init_object() with SLUB_RED_ACTIVE/INACTIVE.
> + * - Object contents are poisoned with POISON_FREE/END when __OBJECT_POISON.
> + * - The trailing padding is pre-filled with POISON_INUSE by
> + *   setup_slab_debug() when SLAB_POISON is set, and is validated by
> + *   check_pad_bytes().
> + * - The first object pointer is slab_address(slab) +
> + *   (s->red_left_pad if redzoning); subsequent objects are reached by
> + *   adding s->size each time.
> + *
> + * If slabcaches are merged then the object_size and inuse boundaries are
> + * mostly ignored. Therefore no slab options that rely on these boundaries
>  * may be used with merged slabcaches.

For the last paragraph, perhaps it'll be clearer to say:

"If a slab cache flag relies on specific metadata to exist at a fixed
offset, the flag must be included in SLAB_NEVER_MERGE to prevent
merging. Otherwise, the cache would misbehave as s->object_size and
s->inuse are adjusted during cache merging"

Otherwise looks great to me, so please feel free to add:
Acked-by: Harry Yoo <harry.yoo@oracle.com>

--
Cheers,
Harry / Hyeonggon
On Mon, Dec 29, 2025 at 04:07:54PM +0900, Harry Yoo wrote:
> On Wed, Dec 24, 2025 at 08:51:14PM +0800, Hao Li wrote:
> > The comments above check_pad_bytes() document the field layout of a
> > single object. Rewrite them to improve clarity and precision.
> >
> > Also update an outdated comment in calculate_sizes().
>
> Hi Hao, thanks for improving it!
> It looks much clearer now.

Hi Harry,

Thanks for the review and the Acked-by!

> few nits below.
>
> > + * [Left redzone padding] (if SLAB_RED_ZONE)
> > + *   - Field size: s->red_left_pad
> > + *   - Filled with 0xbb (SLUB_RED_INACTIVE) for inactive objects and
> > + *     0xcc (SLUB_RED_ACTIVE) for objects in use when SLAB_RED_ZONE.
>
> nit: although it becomes clear after reading the Notes: section,
> I would like to make it clear that object address starts here (after
> the left redzone) and the left redzone is right before each object.

Good point. I’ll make this explicit in v2.

> > + * If slabcaches are merged then the object_size and inuse boundaries are
> > + * mostly ignored. Therefore no slab options that rely on these boundaries
> >  * may be used with merged slabcaches.
>
> For the last paragraph, perhaps it'll be clearer to say:
>
> "If a slab cache flag relies on specific metadata to exist at a fixed
> offset, the flag must be included in SLAB_NEVER_MERGE to prevent
> merging. Otherwise, the cache would misbehave as s->object_size and
> s->inuse are adjusted during cache merging"

Agreed. I’ll reword that paragraph along your suggestion to emphasize
the fixed-offset metadata requirement.

> Otherwise looks great to me, so please feel free to add:
> Acked-by: Harry Yoo <harry.yoo@oracle.com>

I'll include this Acked-by in v2. Thanks!

--
Thanks
Hao