[PATCH v3 3/3] debugobjects: Reduce contention on pool lock in fill_pool()

Posted by Zhen Lei 2 months, 2 weeks ago
Once the conditions for starting a refill are met, every core that calls
fill_pool() afterwards is blocked on the pool lock until the first core
has completed the refill.

Moving a batch of free nodes from the obj_to_free list into obj_pool is
cheap, so attempt that move whenever a refill is needed, regardless of
whether the context is preemptible. To reduce contention on the pool
lock, test the state with an atomic operation: only the first comer is
allowed to try; a later comer that finds a refill already in progress
gives up.
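
For illustration only, a minimal userspace sketch of this gate using
plain C11 atomics (the names are made up; the actual kernel code is in
the diff below):

  #include <stdatomic.h>
  #include <stdbool.h>

  static atomic_bool refill_in_progress;

  static void try_refill_from_freelist(void)
  {
          /* Plain read first: contenders do not pull the cache line exclusive. */
          if (atomic_load_explicit(&refill_in_progress, memory_order_relaxed))
                  return;

          /* Atomic RMW only when we might actually win the race. */
          if (atomic_exchange_explicit(&refill_in_progress, true,
                                       memory_order_acquire))
                  return; /* someone else got in first, give up */

          /* ... move nodes from the free list into the pool ... */

          atomic_store_explicit(&refill_in_progress, false,
                                memory_order_release);
  }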

The refill path that allocates new nodes can use a similar lockless
mechanism, with one difference: the global obj_to_free list must be
operated on exclusively by a single core, whereas kmem_cache_zalloc()
can be invoked by multiple cores simultaneously. Use an atomic counter
to record how many cores are currently filling, so the check is a plain
read and atomic write conflicts are reduced. In principle only the
first comer fills, but with a very low probability several comers may
fill at the same time.
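
Again for illustration only, a minimal userspace sketch of the
allocation-side gate (invented names, C11 atomics); a counter rather
than a bit keeps the common-case check a plain read and tolerates the
rare multi-filler window:

  #include <stdatomic.h>

  static atomic_int cores_allocating;

  static void try_refill_by_alloc(void)
  {
          /* Back off if any other core is already allocating. */
          if (atomic_load_explicit(&cores_allocating, memory_order_relaxed))
                  return;

          /*
           * Small window: more than one core may pass the check above and
           * allocate concurrently, which is harmless (the pool just gets a
           * little fuller than strictly needed).
           */
          atomic_fetch_add_explicit(&cores_allocating, 1, memory_order_acquire);

          /* ... allocate objects and splice them into the pool ... */

          atomic_fetch_sub_explicit(&cores_allocating, 1, memory_order_release);
  }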

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
 lib/debugobjects.c | 79 ++++++++++++++++++++++++++++++++--------------
 1 file changed, 56 insertions(+), 23 deletions(-)

diff --git a/lib/debugobjects.c b/lib/debugobjects.c
index 19a91c6bc67eb9c..568aae9cd9c3c4f 100644
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -125,14 +125,10 @@ static const char *obj_states[ODEBUG_STATE_MAX] = {
 	[ODEBUG_STATE_NOTAVAILABLE]	= "not available",
 };
 
-static void fill_pool(void)
+static void fill_pool_from_freelist(void)
 {
-	gfp_t gfp = __GFP_HIGH | __GFP_NOWARN;
+	static unsigned long state;
 	struct debug_obj *obj;
-	unsigned long flags;
-
-	if (likely(READ_ONCE(obj_pool_free) >= debug_objects_pool_min_level))
-		return;
 
 	/*
 	 * Reuse objs from the global obj_to_free list; they will be
@@ -141,25 +137,53 @@ static void fill_pool(void)
 	 * obj_nr_tofree is checked locklessly; the READ_ONCE() pairs with
 	 * the WRITE_ONCE() in pool_lock critical sections.
 	 */
-	if (READ_ONCE(obj_nr_tofree)) {
-		raw_spin_lock_irqsave(&pool_lock, flags);
-		/*
-		 * Recheck with the lock held as the worker thread might have
-		 * won the race and freed the global free list already.
-		 */
-		while (obj_nr_tofree && (obj_pool_free < debug_objects_pool_min_level)) {
-			obj = hlist_entry(obj_to_free.first, typeof(*obj), node);
-			hlist_del(&obj->node);
-			WRITE_ONCE(obj_nr_tofree, obj_nr_tofree - 1);
-			hlist_add_head(&obj->node, &obj_pool);
-			WRITE_ONCE(obj_pool_free, obj_pool_free + 1);
-		}
-		raw_spin_unlock_irqrestore(&pool_lock, flags);
+	if (!READ_ONCE(obj_nr_tofree))
+		return;
+
+	/*
+	 * Prevent the context from being scheduled or interrupted after
+	 * setting the state flag.
+	 */
+	guard(irqsave)();
+
+	/*
+	 * Avoid lock contention on &pool_lock and avoid making the cache
+	 * line exclusive by testing the bit before attempting to set it.
+	 */
+	if (test_bit(0, &state) || test_and_set_bit(0, &state))
+		return;
+
+	guard(raw_spinlock)(&pool_lock);
+	/*
+	 * Recheck with the lock held as the worker thread might have
+	 * won the race and freed the global free list already.
+	 */
+	while (obj_nr_tofree && (obj_pool_free < debug_objects_pool_min_level)) {
+		obj = hlist_entry(obj_to_free.first, typeof(*obj), node);
+		hlist_del(&obj->node);
+		WRITE_ONCE(obj_nr_tofree, obj_nr_tofree - 1);
+		hlist_add_head(&obj->node, &obj_pool);
+		WRITE_ONCE(obj_pool_free, obj_pool_free + 1);
 	}
+	clear_bit(0, &state);
+}
+
+static void fill_pool(void)
+{
+	gfp_t gfp = __GFP_HIGH | __GFP_NOWARN;
+	static atomic_t cpus_allocating;
 
 	if (unlikely(!obj_cache))
 		return;
 
+	/*
+	 * Avoid allocation and lock contention when another CPU is already
+	 * in the allocation path.
+	 */
+	if (atomic_read(&cpus_allocating))
+		return;
+
+	atomic_inc(&cpus_allocating);
 	while (READ_ONCE(obj_pool_free) < debug_objects_pool_min_level) {
 		struct debug_obj *new, *last = NULL;
 		HLIST_HEAD(freelist);
@@ -174,14 +198,14 @@ static void fill_pool(void)
 				last = new;
 		}
 		if (!cnt)
-			return;
+			break;
 
-		raw_spin_lock_irqsave(&pool_lock, flags);
+		guard(raw_spinlock_irqsave)(&pool_lock);
 		hlist_splice_init(&freelist, &last->node, &obj_pool);
 		debug_objects_allocated += cnt;
 		WRITE_ONCE(obj_pool_free, obj_pool_free + cnt);
-		raw_spin_unlock_irqrestore(&pool_lock, flags);
 	}
+	atomic_dec(&cpus_allocating);
 }
 
 /*
@@ -600,6 +624,15 @@ static struct debug_obj *lookup_object_or_alloc(void *addr, struct debug_bucket
 
 static void debug_objects_fill_pool(void)
 {
+	if (likely(READ_ONCE(obj_pool_free) >= debug_objects_pool_min_level))
+		return;
+
+	/* Try reusing objects from obj_to_free_list */
+	fill_pool_from_freelist();
+
+	if (likely(READ_ONCE(obj_pool_free) >= debug_objects_pool_min_level))
+		return;
+
 	/*
 	 * On RT enabled kernels the pool refill must happen in preemptible
 	 * context -- for !RT kernels we rely on the fact that spinlock_t and
-- 
2.34.1
Re: [PATCH v3 3/3] debugobjects: Reduce contention on pool lock in fill_pool()
Posted by Thomas Gleixner 1 month, 3 weeks ago
On Wed, Sep 11 2024 at 16:35, Zhen Lei wrote:
> +	/*
> +	 * Avoid allocation and lock contention when another CPU is already
> +	 * in the allocation path.
> +	 */
> +	if (atomic_read(&cpus_allocating))
> +		return;

Hmm. I really don't want to rely on a single CPU doing the allocations
when the pool level has reached a critical state. That CPU might be
scheduled out while all the others keep consuming objects, up to the
point where the pool becomes empty.

Let me integrate this into the series I'm going to post soon.

Thanks,

        tglx
Re: [PATCH v3 3/3] debugobjects: Reduce contention on pool lock in fill_pool()
Posted by Leizhen (ThunderTown) 1 month, 3 weeks ago

On 2024/10/7 22:04, Thomas Gleixner wrote:
> On Wed, Sep 11 2024 at 16:35, Zhen Lei wrote:
>> +	/*
>> +	 * Avoid allocation and lock contention when another CPU is already
>> +	 * in the allocation path.
>> +	 */
>> +	if (atomic_read(&cpus_allocating))
>> +		return;
> 
> Hmm. I really don't want to rely on a single CPU doing the allocations
> when the pool level has reached a critical state. That CPU might be
> scheduled out while all the others keep consuming objects, up to the
> point where the pool becomes empty.

That makes sense, you're thoughtful.

> 
> Let me integrate this into the series I'm going to post soon.
> 
> Thanks,
> 
>         tglx
> .
> 

-- 
Regards,
  Zhen Lei
Re: [PATCH v3 3/3] debugobjects: Reduce contention on pool lock in fill_pool()
Posted by Leizhen (ThunderTown) 2 months, 2 weeks ago

On 2024/9/11 16:35, Zhen Lei wrote:
> Once the conditions for starting a refill are met, every core that calls
> fill_pool() afterwards is blocked on the pool lock until the first core
> has completed the refill.
> 
> Moving a batch of free nodes from the obj_to_free list into obj_pool is
> cheap, so attempt that move whenever a refill is needed, regardless of
> whether the context is preemptible. To reduce contention on the pool
> lock, test the state with an atomic operation: only the first comer is
> allowed to try; a later comer that finds a refill already in progress
> gives up.
> 
> The refill path that allocates new nodes can use a similar lockless
> mechanism, with one difference: the global obj_to_free list must be
> operated on exclusively by a single core, whereas kmem_cache_zalloc()
> can be invoked by multiple cores simultaneously. Use an atomic counter
> to record how many cores are currently filling, so the check is a plain
> read and atomic write conflicts are reduced. In principle only the
> first comer fills, but with a very low probability several comers may
> fill at the same time.
> 
> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>

Hi, Thomas:
  I was going to put you in the Signed-off-by, because except for the
following one-line change you wrote everything. But you're the
maintainer, and it doesn't seem right for me to post a patch with your
Signed-off-by. Please feel free to change it, but don't forget to add a
"Reported-by" or "Tested-by" for me.

@@ -174,14 +198,14 @@ static void fill_pool(void)
 				last = new;
 		}
 		if (!cnt)
-			return;
+			break;

-- 
Regards,
  Zhen Lei
Re: [PATCH v3 3/3] debugobjects: Reduce contention on pool lock in fill_pool()
Posted by Thomas Gleixner 2 months, 1 week ago
On Wed, Sep 11 2024 at 17:04, Leizhen wrote:
> On 2024/9/11 16:35, Zhen Lei wrote:
>> The refill path that allocates new nodes can use a similar lockless
>> mechanism, with one difference: the global obj_to_free list must be
>> operated on exclusively by a single core, whereas kmem_cache_zalloc()
>> can be invoked by multiple cores simultaneously. Use an atomic counter
>> to record how many cores are currently filling, so the check is a plain
>> read and atomic write conflicts are reduced. In principle only the
>> first comer fills, but with a very low probability several comers may
>> fill at the same time.
>> 
>> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
>> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
>
> Hi, Thomas:
>   I was going to put you in the Signed-off-by, because except for the
> following one-line change you wrote everything. But you're the
> maintainer, and it doesn't seem right for me to post a patch with your
> Signed-off-by. Please feel free to change it, but don't forget to add a
> "Reported-by" or "Tested-by" for me.

Suggested-by is fine. I'll look at it once I'm back from travel and
conferencing.

Thanks,

        tglx
Re: [PATCH v3 3/3] debugobjects: Reduce contention on pool lock in fill_pool()
Posted by Leizhen (ThunderTown) 2 months ago

On 2024/9/17 20:19, Thomas Gleixner wrote:
> On Wed, Sep 11 2024 at 17:04, Leizhen wrote:
>> On 2024/9/11 16:35, Zhen Lei wrote:
>>> The refill path that allocates new nodes can use a similar lockless
>>> mechanism, with one difference: the global obj_to_free list must be
>>> operated on exclusively by a single core, whereas kmem_cache_zalloc()
>>> can be invoked by multiple cores simultaneously. Use an atomic counter
>>> to record how many cores are currently filling, so the check is a plain
>>> read and atomic write conflicts are reduced. In principle only the
>>> first comer fills, but with a very low probability several comers may
>>> fill at the same time.
>>>
>>> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
>>> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
>>
>> Hi, Thomas:
>>   I was going to put you in the Signed-off-by, because except for the
>> following one-line change you wrote everything. But you're the
>> maintainer, and it doesn't seem right for me to post a patch with your
>> Signed-off-by. Please feel free to change it, but don't forget to add a
>> "Reported-by" or "Tested-by" for me.
> 
> Suggested-by is fine. I'll look at it once I'm back from travel and
> conferencing.

Thank you very much. You're such a gentleman.

> 
> Thanks,
> 
>         tglx
> .
> 

-- 
Regards,
  Zhen Lei