From nobody Sat Nov 30 05:50:18 2024
From: Zhen Lei
To: Andrew Morton, Thomas Gleixner
CC: Zhen Lei
Subject: [PATCH v3 1/3] debugobjects: Delete a piece of redundant code
Date: Wed, 11 Sep 2024 16:35:19 +0800
Message-ID: <20240911083521.2257-2-thunder.leizhen@huawei.com>
In-Reply-To: <20240911083521.2257-1-thunder.leizhen@huawei.com>
References: <20240911083521.2257-1-thunder.leizhen@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The statically allocated objects all live in obj_static_pool[], and the
whole memory of obj_static_pool[] is reclaimed later. Therefore there is
no need to split the remaining statically allocated nodes in list
obj_pool into isolated ones; no one will use them anymore. Writing
INIT_HLIST_HEAD(&obj_pool) would be enough, and since hlist_move_list()
simply discards the old destination list, even that can be omitted.
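For illustration, here is a minimal userspace sketch of the pattern (a
simplified singly linked hlist without the kernel's pprev back-pointer;
the names are illustrative, not the kernel API):

#include <stdio.h>

struct hlist_node { struct hlist_node *next; };
struct hlist_head { struct hlist_node *first; };

/*
 * Simplified analogue of the kernel's hlist_move_list(): transplant the
 * whole list from 'old' to 'new' and reinitialize 'old'. Whatever 'new'
 * pointed to before is dropped without being walked or unlinked.
 */
static void move_list(struct hlist_head *old, struct hlist_head *new)
{
	new->first = old->first;
	old->first = NULL;
}

int main(void)
{
	struct hlist_node s1 = { 0 }, s2 = { &s1 };	/* static nodes   */
	struct hlist_node d1 = { 0 };			/* allocated node */
	struct hlist_head pool = { &s2 };		/* obj_pool       */
	struct hlist_head objects = { &d1 };

	/* No hlist_del() loop over 'pool' is needed beforehand: */
	move_list(&objects, &pool);
	printf("pool head replaced: %d\n", pool.first == &d1);
	return 0;
}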
Signed-off-by: Zhen Lei
---
 lib/debugobjects.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/debugobjects.c b/lib/debugobjects.c
index 5ce473ad499bad3..df48acc5d4b34fc 100644
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -1325,10 +1325,10 @@ static int __init debug_objects_replace_static_objects(void)
 	 * active object references.
 	 */
 
-	/* Remove the statically allocated objects from the pool */
-	hlist_for_each_entry_safe(obj, tmp, &obj_pool, node)
-		hlist_del(&obj->node);
-	/* Move the allocated objects to the pool */
+	/*
+	 * Replace the statically allocated objects list with the allocated
+	 * objects list.
+	 */
 	hlist_move_list(&objects, &obj_pool);
 
 	/* Replace the active object references */
-- 
2.34.1

From nobody Sat Nov 30 05:50:18 2024
From: Zhen Lei
To: Andrew Morton, Thomas Gleixner
CC: Zhen Lei
Subject: [PATCH v3 2/3] debugobjects: Use hlist_splice_init() to reduce lock conflicts
Date: Wed, 11 Sep 2024 16:35:20 +0800
Message-ID: <20240911083521.2257-3-thunder.leizhen@huawei.com>
In-Reply-To: <20240911083521.2257-1-thunder.leizhen@huawei.com>
References: <20240911083521.2257-1-thunder.leizhen@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The newly allocated debug_obj control blocks can be chained into a local
sub-list in advance, outside the lock, so that the time spent inside the
lock, and with it the likelihood of lock contention, is reduced.
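As a sketch of this batching pattern (userspace C, with a simplified
hlist and a pthread mutex standing in for pool_lock; splice_init() is a
simplified analogue of the kernel's hlist_splice_init(), and all names
are illustrative):

#include <pthread.h>
#include <stdlib.h>

struct hlist_node { struct hlist_node *next; };
struct hlist_head { struct hlist_node *first; };
struct obj { struct hlist_node node; };

static struct hlist_head pool;	/* shared pool, protected by pool_lock */
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Simplified analogue of hlist_splice_init(): prepend the whole list
 * 'from' (whose final node is 'last') onto 'to' and empty 'from'.
 */
static void splice_init(struct hlist_head *from, struct hlist_node *last,
			struct hlist_head *to)
{
	last->next = to->first;
	to->first = from->first;
	from->first = NULL;
}

static void fill_batch(int batch)
{
	struct hlist_head freelist = { NULL };
	struct hlist_node *last = NULL;
	int cnt;

	/* Allocation and list assembly run without holding pool_lock. */
	for (cnt = 0; cnt < batch; cnt++) {
		struct obj *new = calloc(1, sizeof(*new));
		if (!new)
			break;
		new->node.next = freelist.first;
		freelist.first = &new->node;
		if (!last)
			last = &new->node; /* first insertion ends up last */
	}
	if (!cnt)
		return;

	/* One short critical section, independent of the batch size. */
	pthread_mutex_lock(&pool_lock);
	splice_init(&freelist, last, &pool);
	pthread_mutex_unlock(&pool_lock);
}

int main(void)
{
	fill_batch(4);	/* pool now holds one spliced batch of 4 objects */
	return 0;
}

The lock hold time becomes a constant-time pointer splice instead of
ODEBUG_BATCH_SIZE individual hlist_add_head() calls.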
Signed-off-by: Zhen Lei
---
 lib/debugobjects.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/lib/debugobjects.c b/lib/debugobjects.c
index df48acc5d4b34fc..19a91c6bc67eb9c 100644
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -161,23 +161,25 @@ static void fill_pool(void)
 		return;
 
 	while (READ_ONCE(obj_pool_free) < debug_objects_pool_min_level) {
-		struct debug_obj *new[ODEBUG_BATCH_SIZE];
+		struct debug_obj *new, *last = NULL;
+		HLIST_HEAD(freelist);
 		int cnt;
 
 		for (cnt = 0; cnt < ODEBUG_BATCH_SIZE; cnt++) {
-			new[cnt] = kmem_cache_zalloc(obj_cache, gfp);
-			if (!new[cnt])
+			new = kmem_cache_zalloc(obj_cache, gfp);
+			if (!new)
 				break;
+			hlist_add_head(&new->node, &freelist);
+			if (!last)
+				last = new;
 		}
 		if (!cnt)
 			return;
 
 		raw_spin_lock_irqsave(&pool_lock, flags);
-		while (cnt) {
-			hlist_add_head(&new[--cnt]->node, &obj_pool);
-			debug_objects_allocated++;
-			WRITE_ONCE(obj_pool_free, obj_pool_free + 1);
-		}
+		hlist_splice_init(&freelist, &last->node, &obj_pool);
+		debug_objects_allocated += cnt;
+		WRITE_ONCE(obj_pool_free, obj_pool_free + cnt);
 		raw_spin_unlock_irqrestore(&pool_lock, flags);
 	}
 }
-- 
2.34.1

From nobody Sat Nov 30 05:50:18 2024
From: Zhen Lei
To: Andrew Morton, Thomas Gleixner
CC: Zhen Lei
Subject: [PATCH v3 3/3] debugobjects: Reduce contention on pool lock in fill_pool()
Date: Wed, 11 Sep 2024 16:35:21 +0800
Message-ID: <20240911083521.2257-4-thunder.leizhen@huawei.com>
In-Reply-To: <20240911083521.2257-1-thunder.leizhen@huawei.com>
References: <20240911083521.2257-1-thunder.leizhen@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Once the conditions for starting a fill are met, every CPU that calls
fill_pool() afterwards blocks until the first CPU completes the fill
operation. Moving a set of free nodes from the obj_to_free list into
obj_pool is cheap, so whenever a fill is required this move is attempted
regardless of whether the context is preemptible. To reduce contention
on the pool lock, use an atomic operation to test the state: only the
first comer is allowed to try, and a latecomer that finds someone
already trying gives up.

The fill path that allocates new nodes can use a similar lockless
mechanism, with one difference: the global list obj_to_free can only be
operated on exclusively by one CPU, while kmem_cache_zalloc() can be
invoked by multiple CPUs simultaneously. Use an atomic count of how many
CPUs are filling to reduce atomic write conflicts during the check. In
principle only the first comer is allowed to fill, but with a very low
probability several comers may fill at the same time.
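A compact userspace sketch of the two admission gates described above
(C11 atomics standing in for the kernel's test_and_set_bit() and
atomic_t; the function names and the elided refill bodies are
illustrative):

#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool refill_busy;		/* stands in for bit 0 of 'state' */
static atomic_int cpus_allocating;

static void fill_pool_from_freelist_sketch(void)
{
	/*
	 * Plain load first so contended callers do not pull the cache
	 * line exclusive; only the first comer passes the exchange.
	 */
	if (atomic_load(&refill_busy) || atomic_exchange(&refill_busy, true))
		return;

	/* ... move objects from the free list into the pool ... */

	atomic_store(&refill_busy, false);
}

static void fill_pool_sketch(void)
{
	/*
	 * A counter rather than a bit: parallel allocators are harmless,
	 * so the rare race where two CPUs pass the check is tolerated.
	 */
	if (atomic_load(&cpus_allocating))
		return;

	atomic_fetch_add(&cpus_allocating, 1);
	/* ... allocate a batch and splice it into the pool ... */
	atomic_fetch_sub(&cpus_allocating, 1);
}

int main(void)
{
	fill_pool_from_freelist_sketch();
	fill_pool_sketch();
	return 0;
}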
Suggested-by: Thomas Gleixner
Signed-off-by: Zhen Lei
---
 lib/debugobjects.c | 79 ++++++++++++++++++++++++++++++++--------------
 1 file changed, 56 insertions(+), 23 deletions(-)

diff --git a/lib/debugobjects.c b/lib/debugobjects.c
index 19a91c6bc67eb9c..568aae9cd9c3c4f 100644
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -125,14 +125,10 @@ static const char *obj_states[ODEBUG_STATE_MAX] = {
 	[ODEBUG_STATE_NOTAVAILABLE]	= "not available",
 };
 
-static void fill_pool(void)
+static void fill_pool_from_freelist(void)
 {
-	gfp_t gfp = __GFP_HIGH | __GFP_NOWARN;
+	static unsigned long state;
 	struct debug_obj *obj;
-	unsigned long flags;
-
-	if (likely(READ_ONCE(obj_pool_free) >= debug_objects_pool_min_level))
-		return;
 
 	/*
 	 * Reuse objs from the global obj_to_free list; they will be
@@ -141,25 +137,53 @@ static void fill_pool(void)
 	 * obj_nr_tofree is checked locklessly; the READ_ONCE() pairs with
 	 * the WRITE_ONCE() in pool_lock critical sections.
 	 */
-	if (READ_ONCE(obj_nr_tofree)) {
-		raw_spin_lock_irqsave(&pool_lock, flags);
-		/*
-		 * Recheck with the lock held as the worker thread might have
-		 * won the race and freed the global free list already.
-		 */
-		while (obj_nr_tofree && (obj_pool_free < debug_objects_pool_min_level)) {
-			obj = hlist_entry(obj_to_free.first, typeof(*obj), node);
-			hlist_del(&obj->node);
-			WRITE_ONCE(obj_nr_tofree, obj_nr_tofree - 1);
-			hlist_add_head(&obj->node, &obj_pool);
-			WRITE_ONCE(obj_pool_free, obj_pool_free + 1);
-		}
-		raw_spin_unlock_irqrestore(&pool_lock, flags);
+	if (!READ_ONCE(obj_nr_tofree))
+		return;
+
+	/*
+	 * Prevent the context from being scheduled or interrupted after
+	 * setting the state flag;
+	 */
+	guard(irqsave)();
+
+	/*
+	 * Avoid lock contention on &pool_lock and avoid making the cache
+	 * line exclusive by testing the bit before attempting to set it.
+	 */
+	if (test_bit(0, &state) || test_and_set_bit(0, &state))
+		return;
+
+	guard(raw_spinlock)(&pool_lock);
+	/*
+	 * Recheck with the lock held as the worker thread might have
+	 * won the race and freed the global free list already.
+	 */
+	while (obj_nr_tofree && (obj_pool_free < debug_objects_pool_min_level)) {
+		obj = hlist_entry(obj_to_free.first, typeof(*obj), node);
+		hlist_del(&obj->node);
+		WRITE_ONCE(obj_nr_tofree, obj_nr_tofree - 1);
+		hlist_add_head(&obj->node, &obj_pool);
+		WRITE_ONCE(obj_pool_free, obj_pool_free + 1);
 	}
+	clear_bit(0, &state);
+}
+
+static void fill_pool(void)
+{
+	gfp_t gfp = __GFP_HIGH | __GFP_NOWARN;
+	static atomic_t cpus_allocating;
 
 	if (unlikely(!obj_cache))
 		return;
 
+	/*
+	 * Avoid allocation and lock contention when another CPU is already
+	 * in the allocation path.
+	 */
+	if (atomic_read(&cpus_allocating))
+		return;
+
+	atomic_inc(&cpus_allocating);
 	while (READ_ONCE(obj_pool_free) < debug_objects_pool_min_level) {
 		struct debug_obj *new, *last = NULL;
 		HLIST_HEAD(freelist);
@@ -174,14 +198,14 @@ static void fill_pool(void)
 			last = new;
 		}
 		if (!cnt)
-			return;
+			break;
 
-		raw_spin_lock_irqsave(&pool_lock, flags);
+		guard(raw_spinlock_irqsave)(&pool_lock);
 		hlist_splice_init(&freelist, &last->node, &obj_pool);
 		debug_objects_allocated += cnt;
 		WRITE_ONCE(obj_pool_free, obj_pool_free + cnt);
-		raw_spin_unlock_irqrestore(&pool_lock, flags);
 	}
+	atomic_dec(&cpus_allocating);
 }
 
 /*
@@ -600,6 +624,15 @@ static struct debug_obj *lookup_object_or_alloc(void *addr, struct debug_bucket
 
 static void debug_objects_fill_pool(void)
 {
+	if (likely(READ_ONCE(obj_pool_free) >= debug_objects_pool_min_level))
+		return;
+
+	/* Try reusing objects from obj_to_free_list */
+	fill_pool_from_freelist();
+
+	if (likely(READ_ONCE(obj_pool_free) >= debug_objects_pool_min_level))
+		return;
+
 	/*
 	 * On RT enabled kernels the pool refill must happen in preemptible
 	 * context -- for !RT kernels we rely on the fact that spinlock_t and
-- 
2.34.1