Once the conditions for starting a fill are met, every core that calls
fill_pool() afterwards is blocked until the first core completes the fill
operation. However, a core that still has free objects in its local
per-CPU pool does not need to be blocked. This helps in stress situations.
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
lib/debugobjects.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/lib/debugobjects.c b/lib/debugobjects.c
index aba3e62a4315f51..fc8224f9f0eda8f 100644
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -130,10 +130,15 @@ static void fill_pool(void)
 	gfp_t gfp = __GFP_HIGH | __GFP_NOWARN;
 	struct debug_obj *obj;
 	unsigned long flags;
+	struct debug_percpu_free *percpu_pool;
 
 	if (likely(READ_ONCE(obj_pool_free) >= debug_objects_pool_min_level))
 		return;
 
+	percpu_pool = this_cpu_ptr(&percpu_obj_pool);
+	if (likely(obj_cache) && percpu_pool->obj_free > 0)
+		return;
+
 	/*
 	 * Reuse objs from the global free list; they will be reinitialized
 	 * when allocating.
--
2.34.1
On Mon, Sep 02 2024 at 22:05, Zhen Lei wrote:
> Once the conditions for starting a fill are met, every core that calls
> fill_pool() afterwards is blocked until the first core completes the fill
> operation. However, a core that still has free objects in its local
> per-CPU pool does not need to be blocked. This helps in stress situations.
Sure it's good, but is it correct? You need to explain why this can't
cause a pool depletion. The pool is filled opportunistically.
Aside of that the lock contention in fill_pool() is minimal. The heavy
lifting is the allocation of objects.
> diff --git a/lib/debugobjects.c b/lib/debugobjects.c
> index aba3e62a4315f51..fc8224f9f0eda8f 100644
> --- a/lib/debugobjects.c
> +++ b/lib/debugobjects.c
> @@ -130,10 +130,15 @@ static void fill_pool(void)
>  	gfp_t gfp = __GFP_HIGH | __GFP_NOWARN;
>  	struct debug_obj *obj;
>  	unsigned long flags;
> +	struct debug_percpu_free *percpu_pool;
Please keep variables in reverse fir tree order.
https://www.kernel.org/doc/html/latest/process/maintainer-tip.html#variable-declarations
>  	if (likely(READ_ONCE(obj_pool_free) >= debug_objects_pool_min_level))
>  		return;
>
> +	percpu_pool = this_cpu_ptr(&percpu_obj_pool);
You don't need the pointer
> +	if (likely(obj_cache) && percpu_pool->obj_free > 0)
if (likely(obj_cache) && this_cpu_read(percpu_pool.obj_free) > 0)
This lacks a comment explaining the rationale of this check.
Thanks,
tglx
On 2024/9/3 17:52, Thomas Gleixner wrote:
> On Mon, Sep 02 2024 at 22:05, Zhen Lei wrote:
>
>> Once the conditions for starting a fill are met, every core that calls
>> fill_pool() afterwards is blocked until the first core completes the fill
>> operation. However, a core that still has free objects in its local
>> per-CPU pool does not need to be blocked. This helps in stress situations.
>
> Sure it's good, but is it correct? You need to explain why this can't
> cause a pool depletion. The pool is filled opportunistically.

In the case of no nesting, a core uses only one node at a time. Even if
nesting occurs and a core has only one local node, 256 / (16 + 1) = 15,
so the current parameter definitions tolerate 15 cores, which should be
sufficient. In fact, many cores may see obj_pool_free just >= 256 at the
same time without filling. Therefore, to eliminate the probability
problem completely, an additional mechanism would be needed.

#define ODEBUG_POOL_MIN_LEVEL	256
#define ODEBUG_BATCH_SIZE	16

> Aside of that the lock contention in fill_pool() is minimal. The heavy
> lifting is the allocation of objects.

I'm optimizing this, too. However, a new hlist helper function needs to
be added. Now that you've mentioned it, I'll send it in v2 too!

>> diff --git a/lib/debugobjects.c b/lib/debugobjects.c
>> index aba3e62a4315f51..fc8224f9f0eda8f 100644
>> --- a/lib/debugobjects.c
>> +++ b/lib/debugobjects.c
>> @@ -130,10 +130,15 @@ static void fill_pool(void)
>>  	gfp_t gfp = __GFP_HIGH | __GFP_NOWARN;
>>  	struct debug_obj *obj;
>>  	unsigned long flags;
>> +	struct debug_percpu_free *percpu_pool;
>
> Please keep variables in reverse fir tree order.
>
> https://www.kernel.org/doc/html/latest/process/maintainer-tip.html#variable-declarations

OK, I will correct it.

>>  	if (likely(READ_ONCE(obj_pool_free) >= debug_objects_pool_min_level))
>>  		return;
>>
>> +	percpu_pool = this_cpu_ptr(&percpu_obj_pool);
>
> You don't need the pointer
>
>> +	if (likely(obj_cache) && percpu_pool->obj_free > 0)
>
> if (likely(obj_cache) && this_cpu_read(percpu_pool.obj_free) > 0)

Nice, thanks.

> This lacks a comment explaining the rationale of this check.

OK, I'll add one.

> Thanks,
>
>         tglx

-- 
Regards,
Zhen Lei