On 2024/10/8 0:50, Thomas Gleixner wrote:
> Right now the per CPU pools are only refilled when they become
> empty. That's suboptimal especially when there are still non-freed objects
> in the to free list.
>
> Check whether an allocation from the per CPU pool emptied a batch and try
> to allocate from the free pool if that still has objects available.
>
>             kmem_cache_alloc()   kmem_cache_free()
> Baseline:   295k                 245k
> Refill:     225k                 173k
Reviewed-by: Zhen Lei <thunder.leizhen@huawei.com>
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
> lib/debugobjects.c | 18 ++++++++++++++++++
> 1 file changed, 18 insertions(+)
>
> --- a/lib/debugobjects.c
> +++ b/lib/debugobjects.c
> @@ -255,6 +255,24 @@ static struct debug_obj *pcpu_alloc(void
>
> if (likely(obj)) {
> pcp->cnt--;
> + /*
> + * If this emptied a batch try to refill from the
> + * free pool. Don't do that if this was the top-most
> + * batch as pcpu_free() expects the per CPU pool
> + * to be less than ODEBUG_POOL_PERCPU_SIZE.
> + */
> + if (unlikely(pcp->cnt < (ODEBUG_POOL_PERCPU_SIZE - ODEBUG_BATCH_SIZE) &&
> + !(pcp->cnt % ODEBUG_BATCH_SIZE))) {
It might be better to swap the two operands of the &&: the right-hand
condition is true far less often, so testing it first lets the && short-circuit
in the common case. A sketch of the suggested order follows below.
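Something like this, only swapping the order of the two tests, with no
change in behaviour (untested, just to illustrate the idea):

	/*
	 * Test the rarer batch-boundary condition first so the &&
	 * short-circuits in the common case.
	 */
	if (unlikely(!(pcp->cnt % ODEBUG_BATCH_SIZE) &&
		     pcp->cnt < (ODEBUG_POOL_PERCPU_SIZE - ODEBUG_BATCH_SIZE))) {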
> + /*
> + * Don't try to allocate from the regular pool here
> + * to not exhaust it prematurely.
> + */
> + if (pool_count(&pool_to_free)) {
> + guard(raw_spinlock)(&pool_lock);
> + pool_move_batch(pcp, &pool_to_free);
> + pcpu_refill_stats();
> + }
> + }
> return obj;
> }
>
--
Regards,
Zhen Lei