From: andrey.konovalov@linux.dev
To: Marco Elver, Alexander Potapenko
Cc: Andrey Konovalov, Dmitry Vyukov, Vlastimil Babka,
 kasan-dev@googlegroups.com, Evgenii Stepanov, Andrew Morton,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Konovalov
Subject: [PATCH 06/15] stackdepot: fix and clean-up atomic annotations
Date: Tue, 29 Aug 2023 19:11:16 +0200
Message-Id: <8ad8f778b43dab49e4e6214b8d90bed31b75436f.1693328501.git.andreyknvl@google.com>

From: Andrey Konovalov

Simplify the comments accompanying the use of atomic accesses in the
stack depot code.

Also turn the smp_load_acquire of next_pool_required in depot_init_pool
into READ_ONCE, as both depot_init_pool and all the smp_store_release's
to this variable are executed under the stack depot lock.

Signed-off-by: Andrey Konovalov

---

This patch is not strictly required, as the atomic accesses are fully
removed in one of the later patches. However, I decided to keep it in
case we end up needing these atomics in the following iterations of
this series.
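As background for the READ_ONCE change, here is a minimal illustrative
sketch (not part of the patch; demo_lock, demo_flag and the demo_*
functions are made-up names) of the pattern relied on above: a flag
that is only ever written under a lock can be read with a plain
READ_ONCE by code that also holds that lock, while a lockless
fast-path reader still needs smp_load_acquire to pair with the
writer's smp_store_release:

#include <linux/spinlock.h>
#include <linux/atomic.h>
#include <linux/types.h>

static DEFINE_SPINLOCK(demo_lock);
static int demo_flag;	/* only ever written under demo_lock */

static void demo_set_flag(void)
{
	spin_lock(&demo_lock);
	/* Release is still needed: a lockless reader may observe the flag. */
	smp_store_release(&demo_flag, 1);
	spin_unlock(&demo_lock);
}

static bool demo_check_locked(void)
{
	bool set;

	spin_lock(&demo_lock);
	/* The lock orders this read against all writes, so READ_ONCE suffices. */
	set = READ_ONCE(demo_flag);
	spin_unlock(&demo_lock);
	return set;
}

static bool demo_check_lockless(void)
{
	/* No lock held: acquire pairs with the release store in demo_set_flag. */
	return smp_load_acquire(&demo_flag);
}

In the stack depot code, depot_init_pool plays the role of
demo_check_locked (it runs under the stack depot lock, hence the
switch to READ_ONCE), while the fast-path check in __stack_depot_save
plays the role of demo_check_lockless, which is why that
smp_load_acquire is kept.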
---
 lib/stackdepot.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 93191ee70fc3..9ae71e1ef1a7 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -226,10 +226,10 @@ static void depot_init_pool(void **prealloc)
 	/*
 	 * If the next pool is already initialized or the maximum number of
 	 * pools is reached, do not use the preallocated memory.
-	 * smp_load_acquire() here pairs with smp_store_release() below and
-	 * in depot_alloc_stack().
+	 * READ_ONCE is only used to mark the variable as atomic,
+	 * there are no concurrent writes.
 	 */
-	if (!smp_load_acquire(&next_pool_required))
+	if (!READ_ONCE(next_pool_required))
 		return;
 
 	/* Check if the current pool is not yet allocated. */
@@ -250,8 +250,8 @@ static void depot_init_pool(void **prealloc)
 		 * At this point, either the next pool is initialized or the
 		 * maximum number of pools is reached. In either case, take
 		 * note that initializing another pool is not required.
-		 * This smp_store_release pairs with smp_load_acquire() above
-		 * and in stack_depot_save().
+		 * smp_store_release pairs with smp_load_acquire in
+		 * stack_depot_save.
 		 */
 		smp_store_release(&next_pool_required, 0);
 	}
@@ -275,15 +275,15 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 		/*
 		 * Move on to the next pool.
 		 * WRITE_ONCE pairs with potential concurrent read in
-		 * stack_depot_fetch().
+		 * stack_depot_fetch.
 		 */
 		WRITE_ONCE(pool_index, pool_index + 1);
 		pool_offset = 0;
 		/*
 		 * If the maximum number of pools is not reached, take note
 		 * that the next pool needs to initialized.
-		 * smp_store_release() here pairs with smp_load_acquire() in
-		 * stack_depot_save() and depot_init_pool().
+		 * smp_store_release pairs with smp_load_acquire in
+		 * stack_depot_save.
 		 */
 		if (pool_index + 1 < DEPOT_MAX_POOLS)
 			smp_store_release(&next_pool_required, 1);
@@ -414,8 +414,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 
 	/*
 	 * Fast path: look the stack trace up without locking.
-	 * The smp_load_acquire() here pairs with smp_store_release() to
-	 * |bucket| below.
+	 * smp_load_acquire pairs with smp_store_release to |bucket| below.
 	 */
 	found = find_stack(smp_load_acquire(bucket), entries, nr_entries, hash);
 	if (found)
@@ -425,8 +424,8 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 	 * Check if another stack pool needs to be initialized. If so, allocate
 	 * the memory now - we won't be able to do that under the lock.
 	 *
-	 * The smp_load_acquire() here pairs with smp_store_release() to
-	 * |next_pool_inited| in depot_alloc_stack() and depot_init_pool().
+	 * smp_load_acquire pairs with smp_store_release
+	 * in depot_alloc_stack and depot_init_pool.
 	 */
 	if (unlikely(can_alloc && smp_load_acquire(&next_pool_required))) {
 		/*
@@ -452,8 +451,8 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 	if (new) {
 		new->next = *bucket;
 		/*
-		 * This smp_store_release() pairs with
-		 * smp_load_acquire() from |bucket| above.
+		 * smp_store_release pairs with smp_load_acquire
+		 * from |bucket| above.
 		 */
 		smp_store_release(bucket, new);
 		found = new;
-- 
2.25.1