From nobody Sat Dec 13 22:50:42 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 769FFC83F18 for ; Tue, 29 Aug 2023 17:12:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237639AbjH2RMJ (ORCPT ); Tue, 29 Aug 2023 13:12:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56986 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237637AbjH2RLm (ORCPT ); Tue, 29 Aug 2023 13:11:42 -0400 Received: from out-242.mta1.migadu.com (out-242.mta1.migadu.com [95.215.58.242]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4876711B for ; Tue, 29 Aug 2023 10:11:37 -0700 (PDT) X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1693329095; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Gxqstvn+9KrGAurRzEA5+xZd4FpeRy2hmiRTg5iKOfw=; b=DeG8gVsPVA6eHp6GYAtPbcN1W4Kc4hW54VJN2cPa1oAVV0Vtv+XvIoxvZ1he5+lPDyKPzU jVtmhpgvmpCpTClSH2nhy/NQ4GtPpio2EnuYBWWAcI0xhaei4Zp0zBBpeO5Q7FjGGTEUej XwtRIWiCy9Atd2ddYEKCng61HMvXEZ8= From: andrey.konovalov@linux.dev To: Marco Elver , Alexander Potapenko Cc: Andrey Konovalov , Dmitry Vyukov , Vlastimil Babka , kasan-dev@googlegroups.com, Evgenii Stepanov , Andrew Morton , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Konovalov Subject: [PATCH 01/15] stackdepot: check disabled flag when fetching Date: Tue, 29 Aug 2023 19:11:11 +0200 Message-Id: <43b26d397d4f0d76246f95a74a8a38cfd7297bbc.1693328501.git.andreyknvl@google.com> In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Andrey Konovalov Do not try fetching a stack trace from the stack depot if the stack_depot_disabled flag is enabled. 
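For readers skimming the series, a minimal user-space sketch of the guard this patch adds (the types and the pool lookup below are simplified stand-ins, not the kernel implementation): fetching bails out early when the depot is disabled, so callers get "no trace" instead of touching pools that were never set up.

#include <stdio.h>

typedef unsigned int depot_stack_handle_t;

static int stack_depot_disabled;    /* set from a boot parameter in the real code */
static unsigned long dummy_entries[4] = { 0x1, 0x2, 0x3, 0x4 };

static unsigned int fetch_sketch(depot_stack_handle_t handle,
                                 unsigned long **entries)
{
    *entries = NULL;

    /* The added check: a disabled depot never hands out a trace. */
    if (!handle || stack_depot_disabled)
        return 0;

    *entries = dummy_entries;       /* stands in for the real pool lookup */
    return 4;
}

int main(void)
{
    unsigned long *entries;

    stack_depot_disabled = 1;
    printf("frames fetched: %u\n", fetch_sketch(1, &entries));
    return 0;
}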
Signed-off-by: Andrey Konovalov Reviewed-by: Alexander Potapenko --- lib/stackdepot.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/stackdepot.c b/lib/stackdepot.c index 2f5aa851834e..3a945c7206f3 100644 --- a/lib/stackdepot.c +++ b/lib/stackdepot.c @@ -477,7 +477,7 @@ unsigned int stack_depot_fetch(depot_stack_handle_t han= dle, */ kmsan_unpoison_memory(entries, sizeof(*entries)); =20 - if (!handle) + if (!handle || stack_depot_disabled) return 0; =20 if (parts.pool_index > pool_index_cached) { --=20 2.25.1 From nobody Sat Dec 13 22:50:42 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D26D1C83F1D for ; Tue, 29 Aug 2023 17:12:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237663AbjH2RMM (ORCPT ); Tue, 29 Aug 2023 13:12:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48458 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237640AbjH2RLm (ORCPT ); Tue, 29 Aug 2023 13:11:42 -0400 Received: from out-245.mta1.migadu.com (out-245.mta1.migadu.com [95.215.58.245]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D31581B9 for ; Tue, 29 Aug 2023 10:11:37 -0700 (PDT) X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1693329096; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Opt9t2Sh3BPQLxjav3raWw4j3L9ASrpPJpFprr4WIwg=; b=FMOUlvJxgMB5WpKRQ7t3KX2aGQnwsNKzQ2ccI31MOspuXDov1WZjg8f3zBxS9aiip16KLg QsPLydt7gf0VxZWrz0MHedBxpNNjUHO1AeoE3D3p5RwVNU7SOLfTAhOveTf+35NDZNtujl ResgGbrFAGlj6khMSxwabRc53OkvSPc= From: andrey.konovalov@linux.dev To: Marco Elver , Alexander Potapenko Cc: Andrey Konovalov , Dmitry Vyukov , Vlastimil Babka , kasan-dev@googlegroups.com, Evgenii Stepanov , Andrew Morton , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Konovalov Subject: [PATCH 02/15] stackdepot: simplify __stack_depot_save Date: Tue, 29 Aug 2023 19:11:12 +0200 Message-Id: <20dbc3376fccf2e7824482f56a75d6670bccd8ff.1693328501.git.andreyknvl@google.com> In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Andrey Konovalov The retval local variable in __stack_depot_save has the union type handle_parts, but the function never uses anything but the union's handle field. Define retval simply as depot_stack_handle_t to simplify the code. 
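To illustrate why the union buys nothing here, a standalone sketch (the bit-field split below is an assumption made for the example; the real widths are computed in lib/stackdepot.c): only the .handle member of the local is ever read or written, so a plain depot_stack_handle_t carries the same information.

#include <stdint.h>
#include <stdio.h>

typedef uint32_t depot_stack_handle_t;

union handle_parts {
    depot_stack_handle_t handle;
    struct {
        uint32_t pool_index : 16;   /* assumed split, for illustration only */
        uint32_t offset     : 11;
        uint32_t extra      : 5;
    };
};

int main(void)
{
    /* Before: a union local that is only ever accessed through .handle. */
    union handle_parts retval = { .handle = 0 };
    retval.handle = 0x1234u;

    /* After: the same value held in a plain handle, no union needed. */
    depot_stack_handle_t handle = 0x1234u;

    printf("%u %u\n", retval.handle, handle);
    return 0;
}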
Signed-off-by: Andrey Konovalov Reviewed-by: Alexander Potapenko --- lib/stackdepot.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/lib/stackdepot.c b/lib/stackdepot.c index 3a945c7206f3..0772125efe8a 100644 --- a/lib/stackdepot.c +++ b/lib/stackdepot.c @@ -360,7 +360,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *= entries, gfp_t alloc_flags, bool can_alloc) { struct stack_record *found =3D NULL, **bucket; - union handle_parts retval =3D { .handle =3D 0 }; + depot_stack_handle_t handle =3D 0; struct page *page =3D NULL; void *prealloc =3D NULL; unsigned long flags; @@ -377,7 +377,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *= entries, nr_entries =3D filter_irq_stacks(entries, nr_entries); =20 if (unlikely(nr_entries =3D=3D 0) || stack_depot_disabled) - goto fast_exit; + return 0; =20 hash =3D hash_stack(entries, nr_entries); bucket =3D &stack_table[hash & stack_hash_mask]; @@ -443,9 +443,8 @@ depot_stack_handle_t __stack_depot_save(unsigned long *= entries, free_pages((unsigned long)prealloc, DEPOT_POOL_ORDER); } if (found) - retval.handle =3D found->handle.handle; -fast_exit: - return retval.handle; + handle =3D found->handle.handle; + return handle; } EXPORT_SYMBOL_GPL(__stack_depot_save); =20 --=20 2.25.1 From nobody Sat Dec 13 22:50:42 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AAA14C83F1B for ; Tue, 29 Aug 2023 17:12:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237656AbjH2RMM (ORCPT ); Tue, 29 Aug 2023 13:12:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48466 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237641AbjH2RLm (ORCPT ); Tue, 29 Aug 2023 13:11:42 -0400 Received: from out-244.mta1.migadu.com (out-244.mta1.migadu.com [95.215.58.244]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EFA8B1B1 for ; Tue, 29 Aug 2023 10:11:38 -0700 (PDT) X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1693329097; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=J9x0A1WNFQDshCpK/emmSalBuss8B7zIgs4IPlFr17A=; b=FdmqrQoRG911hkZBrs9+qOuzijl9EyrM9zkCfxSRiSKdthhj9DynENOHTygZicJGL6ttQd b8bicv/bN9d7rHnZGH7XvWbnvE0ExxkcPNoHVf4CMtYnM6Xf5U4ohouTeDLn5GYyzrjaNE Ef5USp/vQmHcZo4oq0IjadJbI/g7Dw4= From: andrey.konovalov@linux.dev To: Marco Elver , Alexander Potapenko Cc: Andrey Konovalov , Dmitry Vyukov , Vlastimil Babka , kasan-dev@googlegroups.com, Evgenii Stepanov , Andrew Morton , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Konovalov Subject: [PATCH 03/15] stackdepot: drop valid bit from handles Date: Tue, 29 Aug 2023 19:11:13 +0200 Message-Id: In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Andrey Konovalov Stack depot doesn't use the valid bit in handles in any way, so drop it. 
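The bit budget behind this change can be checked with a small standalone sketch (PAGE_SHIFT and the extra-bits count are assumed values for the example; the kernel derives them from the configuration and additionally caps the pool count at DEPOT_POOLS_CAP): the bit freed by dropping the valid flag goes back to pool_index, doubling the number of pools a handle can address.

#include <stdio.h>

#define HANDLE_BITS  32
#define PAGE_SHIFT   12     /* assumption: 4 KB pages */
#define POOL_ORDER   2      /* a pool is 4 pages */
#define STACK_ALIGN  4
#define EXTRA_BITS   5      /* assumption for STACK_DEPOT_EXTRA_BITS */
#define OFFSET_BITS  (POOL_ORDER + PAGE_SHIFT - STACK_ALIGN)

int main(void)
{
    int with_valid    = HANDLE_BITS - 1 - OFFSET_BITS - EXTRA_BITS;
    int without_valid = HANDLE_BITS - OFFSET_BITS - EXTRA_BITS;

    printf("pool_index bits: %d -> %d (addressable pools: %ld -> %ld)\n",
           with_valid, without_valid,
           1L << with_valid, 1L << without_valid);
    return 0;
}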
Signed-off-by: Andrey Konovalov Reviewed-by: Alexander Potapenko --- lib/stackdepot.c | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-) diff --git a/lib/stackdepot.c b/lib/stackdepot.c index 0772125efe8a..482eac40791e 100644 --- a/lib/stackdepot.c +++ b/lib/stackdepot.c @@ -32,13 +32,12 @@ =20 #define DEPOT_HANDLE_BITS (sizeof(depot_stack_handle_t) * 8) =20 -#define DEPOT_VALID_BITS 1 #define DEPOT_POOL_ORDER 2 /* Pool size order, 4 pages */ #define DEPOT_POOL_SIZE (1LL << (PAGE_SHIFT + DEPOT_POOL_ORDER)) #define DEPOT_STACK_ALIGN 4 #define DEPOT_OFFSET_BITS (DEPOT_POOL_ORDER + PAGE_SHIFT - DEPOT_STACK_ALI= GN) -#define DEPOT_POOL_INDEX_BITS (DEPOT_HANDLE_BITS - DEPOT_VALID_BITS - \ - DEPOT_OFFSET_BITS - STACK_DEPOT_EXTRA_BITS) +#define DEPOT_POOL_INDEX_BITS (DEPOT_HANDLE_BITS - DEPOT_OFFSET_BITS - \ + STACK_DEPOT_EXTRA_BITS) #define DEPOT_POOLS_CAP 8192 #define DEPOT_MAX_POOLS \ (((1LL << (DEPOT_POOL_INDEX_BITS)) < DEPOT_POOLS_CAP) ? \ @@ -50,7 +49,6 @@ union handle_parts { struct { u32 pool_index : DEPOT_POOL_INDEX_BITS; u32 offset : DEPOT_OFFSET_BITS; - u32 valid : DEPOT_VALID_BITS; u32 extra : STACK_DEPOT_EXTRA_BITS; }; }; @@ -303,7 +301,6 @@ depot_alloc_stack(unsigned long *entries, int size, u32= hash, void **prealloc) stack->size =3D size; stack->handle.pool_index =3D pool_index; stack->handle.offset =3D pool_offset >> DEPOT_STACK_ALIGN; - stack->handle.valid =3D 1; stack->handle.extra =3D 0; memcpy(stack->entries, entries, flex_array_size(stack, entries, size)); pool_offset +=3D required_size; --=20 2.25.1 From nobody Sat Dec 13 22:50:42 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1F9D8C83F1F for ; Tue, 29 Aug 2023 17:12:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237674AbjH2RMP (ORCPT ); Tue, 29 Aug 2023 13:12:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48498 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237655AbjH2RLo (ORCPT ); Tue, 29 Aug 2023 13:11:44 -0400 Received: from out-246.mta1.migadu.com (out-246.mta1.migadu.com [IPv6:2001:41d0:203:375::f6]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8469CCD6 for ; Tue, 29 Aug 2023 10:11:40 -0700 (PDT) X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1693329097; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=iO6rOKx0E5Y80aTOb2gdl835Yi8CPfP2ICeqYsm38a8=; b=LIEnc9qf0bdzjRuVfvd2JdO2h3KjGuPt1jpYgEgLcKhd+uMTNri0079VJZ7BuNqzfKhWpH nChiTXEn/GCsQoyhhHm4/OnyjGw6tvxpMZ/P/+tL4MFSJeV+Aw2nZXmyYGKoQ8vfzv/ONv 7MlOomIeIzvZ5a8LNRxAPzvNoJy8ViI= From: andrey.konovalov@linux.dev To: Marco Elver , Alexander Potapenko Cc: Andrey Konovalov , Dmitry Vyukov , Vlastimil Babka , kasan-dev@googlegroups.com, Evgenii Stepanov , Andrew Morton , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Konovalov Subject: [PATCH 04/15] stackdepot: add depot_fetch_stack helper Date: Tue, 29 Aug 2023 19:11:14 +0200 Message-Id: <757ff72866010146fafda3049cb3749611cd7dd3.1693328501.git.andreyknvl@google.com> In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Andrey Konovalov Add a helper depot_fetch_stack function that fetches the pointer to a stack record. With this change, all static depot_* functions now operate on stack pools and the exported stack_depot_* functions operate on the hash table. Signed-off-by: Andrey Konovalov Reviewed-by: Alexander Potapenko --- lib/stackdepot.c | 45 ++++++++++++++++++++++++++++----------------- 1 file changed, 28 insertions(+), 17 deletions(-) diff --git a/lib/stackdepot.c b/lib/stackdepot.c index 482eac40791e..2128108f2acb 100644 --- a/lib/stackdepot.c +++ b/lib/stackdepot.c @@ -304,6 +304,7 @@ depot_alloc_stack(unsigned long *entries, int size, u32= hash, void **prealloc) stack->handle.extra =3D 0; memcpy(stack->entries, entries, flex_array_size(stack, entries, size)); pool_offset +=3D required_size; + /* * Let KMSAN know the stored stack record is initialized. This shall * prevent false positive reports if instrumented code accesses it. @@ -313,6 +314,32 @@ depot_alloc_stack(unsigned long *entries, int size, u3= 2 hash, void **prealloc) return stack; } =20 +static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle) +{ + union handle_parts parts =3D { .handle =3D handle }; + /* + * READ_ONCE pairs with potential concurrent write in + * depot_alloc_stack. + */ + int pool_index_cached =3D READ_ONCE(pool_index); + void *pool; + size_t offset =3D parts.offset << DEPOT_STACK_ALIGN; + struct stack_record *stack; + + if (parts.pool_index > pool_index_cached) { + WARN(1, "pool index %d out of bounds (%d) for stack id %08x\n", + parts.pool_index, pool_index_cached, handle); + return NULL; + } + + pool =3D stack_pools[parts.pool_index]; + if (!pool) + return NULL; + + stack =3D pool + offset; + return stack; +} + /* Calculates the hash for a stack. */ static inline u32 hash_stack(unsigned long *entries, unsigned int size) { @@ -456,14 +483,6 @@ EXPORT_SYMBOL_GPL(stack_depot_save); unsigned int stack_depot_fetch(depot_stack_handle_t handle, unsigned long **entries) { - union handle_parts parts =3D { .handle =3D handle }; - /* - * READ_ONCE pairs with potential concurrent write in - * depot_alloc_stack. 
- */ - int pool_index_cached =3D READ_ONCE(pool_index); - void *pool; - size_t offset =3D parts.offset << DEPOT_STACK_ALIGN; struct stack_record *stack; =20 *entries =3D NULL; @@ -476,15 +495,7 @@ unsigned int stack_depot_fetch(depot_stack_handle_t ha= ndle, if (!handle || stack_depot_disabled) return 0; =20 - if (parts.pool_index > pool_index_cached) { - WARN(1, "pool index %d out of bounds (%d) for stack id %08x\n", - parts.pool_index, pool_index_cached, handle); - return 0; - } - pool =3D stack_pools[parts.pool_index]; - if (!pool) - return 0; - stack =3D pool + offset; + stack =3D depot_fetch_stack(handle); =20 *entries =3D stack->entries; return stack->size; --=20 2.25.1 From nobody Sat Dec 13 22:50:42 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 862C8C83F1A for ; Tue, 29 Aug 2023 17:12:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237649AbjH2RMK (ORCPT ); Tue, 29 Aug 2023 13:12:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48458 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237650AbjH2RLn (ORCPT ); Tue, 29 Aug 2023 13:11:43 -0400 Received: from out-253.mta1.migadu.com (out-253.mta1.migadu.com [95.215.58.253]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 38954CCE for ; Tue, 29 Aug 2023 10:11:40 -0700 (PDT) X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1693329098; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=etRrGOIqY063bnkRK0UomaeBxyJpZf9uP5poWvb3gqc=; b=vWRwTzQB5vMYolMaxpPDvyneGGFmDZgws8+tFLq59cWPXaVn3GV0VaYcUF5RzitZxoe+T3 ZMExK1/RpceRrBL2M3n8y25ttAi3ULTCCP+y0Sbt1jt2hAnxV/2IzWEEpOegOi8gP9Lc6d hE8F8O5yYsTdFlsVFcDlLuY0x9gpyyU= From: andrey.konovalov@linux.dev To: Marco Elver , Alexander Potapenko Cc: Andrey Konovalov , Dmitry Vyukov , Vlastimil Babka , kasan-dev@googlegroups.com, Evgenii Stepanov , Andrew Morton , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Konovalov Subject: [PATCH 05/15] stackdepot: use fixed-sized slots for stack records Date: Tue, 29 Aug 2023 19:11:15 +0200 Message-Id: <89c2f64120a7dd6b2255a9a281603359a50cf6f7.1693328501.git.andreyknvl@google.com> In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Andrey Konovalov Instead of storing stack records in stack depot pools one right after another, use 32-frame-sized slots. This is preparatory patch for implementing the eviction of stack records from the stack depot. Signed-off-by: Andrey Konovalov --- lib/stackdepot.c | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/lib/stackdepot.c b/lib/stackdepot.c index 2128108f2acb..93191ee70fc3 100644 --- a/lib/stackdepot.c +++ b/lib/stackdepot.c @@ -42,6 +42,7 @@ #define DEPOT_MAX_POOLS \ (((1LL << (DEPOT_POOL_INDEX_BITS)) < DEPOT_POOLS_CAP) ? 
\ (1LL << (DEPOT_POOL_INDEX_BITS)) : DEPOT_POOLS_CAP) +#define DEPOT_STACK_MAX_FRAMES 32 =20 /* Compact structure that stores a reference to a stack. */ union handle_parts { @@ -58,9 +59,12 @@ struct stack_record { u32 hash; /* Hash in the hash table */ u32 size; /* Number of stored frames */ union handle_parts handle; - unsigned long entries[]; /* Variable-sized array of frames */ + unsigned long entries[DEPOT_STACK_MAX_FRAMES]; /* Frames */ }; =20 +#define DEPOT_STACK_RECORD_SIZE \ + ALIGN(sizeof(struct stack_record), 1 << DEPOT_STACK_ALIGN) + static bool stack_depot_disabled; static bool __stack_depot_early_init_requested __initdata =3D IS_ENABLED(C= ONFIG_STACKDEPOT_ALWAYS_INIT); static bool __stack_depot_early_init_passed __initdata; @@ -258,9 +262,7 @@ static struct stack_record * depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **preal= loc) { struct stack_record *stack; - size_t required_size =3D struct_size(stack, entries, size); - - required_size =3D ALIGN(required_size, 1 << DEPOT_STACK_ALIGN); + size_t required_size =3D DEPOT_STACK_RECORD_SIZE; =20 /* Check if there is not enough space in the current pool. */ if (unlikely(pool_offset + required_size > DEPOT_POOL_SIZE)) { @@ -295,6 +297,10 @@ depot_alloc_stack(unsigned long *entries, int size, u3= 2 hash, void **prealloc) if (stack_pools[pool_index] =3D=3D NULL) return NULL; =20 + /* Limit number of saved frames to DEPOT_STACK_MAX_FRAMES. */ + if (size > DEPOT_STACK_MAX_FRAMES) + size =3D DEPOT_STACK_MAX_FRAMES; + /* Save the stack trace. */ stack =3D stack_pools[pool_index] + pool_offset; stack->hash =3D hash; --=20 2.25.1 From nobody Sat Dec 13 22:50:42 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2D92CC83F12 for ; Tue, 29 Aug 2023 17:13:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237693AbjH2RNQ (ORCPT ); Tue, 29 Aug 2023 13:13:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47842 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237715AbjH2RMw (ORCPT ); Tue, 29 Aug 2023 13:12:52 -0400 Received: from out-252.mta1.migadu.com (out-252.mta1.migadu.com [IPv6:2001:41d0:203:375::fc]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2010ECCA for ; Tue, 29 Aug 2023 10:12:41 -0700 (PDT) X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1693329159; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=+57664S6bw4mOO/qPj8h7l/H8qh2Ip2xAdoPp4fmyag=; b=xUAvBItl08LNbG+js05Euj/uDwk7mnKgQkEClKLARFeIBi8pvWrOlUBmEdpKyMjEBpe08t MZxZZK8u8HPHUPrktgetKtdD93OVx+rnbApYTVTui9WIUJLSE/Xb3wolF7HpROmuiVuy/N gklA8fWRmILe9ypho6CUw2WGHHwPn0E= From: andrey.konovalov@linux.dev To: Marco Elver , Alexander Potapenko Cc: Andrey Konovalov , Dmitry Vyukov , Vlastimil Babka , kasan-dev@googlegroups.com, Evgenii Stepanov , Andrew Morton , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Konovalov Subject: [PATCH 06/15] stackdepot: fix and clean-up atomic annotations Date: Tue, 29 Aug 2023 19:11:16 +0200 Message-Id: <8ad8f778b43dab49e4e6214b8d90bed31b75436f.1693328501.git.andreyknvl@google.com> In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Andrey Konovalov Simplify comments accompanying the use of atomic accesses in the stack depot code. Also turn smp_load_acquire from next_pool_required in depot_init_pool into READ_ONCE, as both depot_init_pool and the all smp_store_release's to this variable are executed under the stack depot lock. Signed-off-by: Andrey Konovalov --- This patch is not strictly required, as the atomic accesses are fully removed in one of the latter patches. However, I decided to keep the patch just in case we end up needing these atomics in the following iterations of this series. --- lib/stackdepot.c | 27 +++++++++++++-------------- 1 file changed, 13 insertions(+), 14 deletions(-) diff --git a/lib/stackdepot.c b/lib/stackdepot.c index 93191ee70fc3..9ae71e1ef1a7 100644 --- a/lib/stackdepot.c +++ b/lib/stackdepot.c @@ -226,10 +226,10 @@ static void depot_init_pool(void **prealloc) /* * If the next pool is already initialized or the maximum number of * pools is reached, do not use the preallocated memory. - * smp_load_acquire() here pairs with smp_store_release() below and - * in depot_alloc_stack(). + * READ_ONCE is only used to mark the variable as atomic, + * there are no concurrent writes. */ - if (!smp_load_acquire(&next_pool_required)) + if (!READ_ONCE(next_pool_required)) return; =20 /* Check if the current pool is not yet allocated. */ @@ -250,8 +250,8 @@ static void depot_init_pool(void **prealloc) * At this point, either the next pool is initialized or the * maximum number of pools is reached. In either case, take * note that initializing another pool is not required. - * This smp_store_release pairs with smp_load_acquire() above - * and in stack_depot_save(). + * smp_store_release pairs with smp_load_acquire in + * stack_depot_save. */ smp_store_release(&next_pool_required, 0); } @@ -275,15 +275,15 @@ depot_alloc_stack(unsigned long *entries, int size, u= 32 hash, void **prealloc) /* * Move on to the next pool. * WRITE_ONCE pairs with potential concurrent read in - * stack_depot_fetch(). + * stack_depot_fetch. */ WRITE_ONCE(pool_index, pool_index + 1); pool_offset =3D 0; /* * If the maximum number of pools is not reached, take note * that the next pool needs to initialized. - * smp_store_release() here pairs with smp_load_acquire() in - * stack_depot_save() and depot_init_pool(). 
+ * smp_store_release pairs with smp_load_acquire in + * stack_depot_save. */ if (pool_index + 1 < DEPOT_MAX_POOLS) smp_store_release(&next_pool_required, 1); @@ -414,8 +414,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *= entries, =20 /* * Fast path: look the stack trace up without locking. - * The smp_load_acquire() here pairs with smp_store_release() to - * |bucket| below. + * smp_load_acquire pairs with smp_store_release to |bucket| below. */ found =3D find_stack(smp_load_acquire(bucket), entries, nr_entries, hash); if (found) @@ -425,8 +424,8 @@ depot_stack_handle_t __stack_depot_save(unsigned long *= entries, * Check if another stack pool needs to be initialized. If so, allocate * the memory now - we won't be able to do that under the lock. * - * The smp_load_acquire() here pairs with smp_store_release() to - * |next_pool_inited| in depot_alloc_stack() and depot_init_pool(). + * smp_load_acquire pairs with smp_store_release + * in depot_alloc_stack and depot_init_pool. */ if (unlikely(can_alloc && smp_load_acquire(&next_pool_required))) { /* @@ -452,8 +451,8 @@ depot_stack_handle_t __stack_depot_save(unsigned long *= entries, if (new) { new->next =3D *bucket; /* - * This smp_store_release() pairs with - * smp_load_acquire() from |bucket| above. + * smp_store_release pairs with smp_load_acquire + * from |bucket| above. */ smp_store_release(bucket, new); found =3D new; --=20 2.25.1 From nobody Sat Dec 13 22:50:42 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8DD20C83F18 for ; Tue, 29 Aug 2023 17:13:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237701AbjH2RNR (ORCPT ); Tue, 29 Aug 2023 13:13:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35782 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237717AbjH2RMw (ORCPT ); Tue, 29 Aug 2023 13:12:52 -0400 Received: from out-244.mta1.migadu.com (out-244.mta1.migadu.com [95.215.58.244]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8FF40CD1 for ; Tue, 29 Aug 2023 10:12:41 -0700 (PDT) X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1693329160; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=TBLA9eVhfpPnUCaW6dvoHE+HvIWz4FYhXI+BxxI+EVo=; b=rTza+MTDtXsApdY3UtmvCOQ7kVEgxoogWmU+0/N+/sEo8Jl8pyma+tVj6hbC217kRw1J6E W2W+iu1c4gZpxYAIOpkN1sNFMFS3da9ylVLiov/nqPc0JE4H1dH8MM7fvRWFW9ucQ0eS+L WhjGfi5dKdpIeZyWgv2oUZtl5JJ4KEI= From: andrey.konovalov@linux.dev To: Marco Elver , Alexander Potapenko Cc: Andrey Konovalov , Dmitry Vyukov , Vlastimil Babka , kasan-dev@googlegroups.com, Evgenii Stepanov , Andrew Morton , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Konovalov Subject: [PATCH 07/15] stackdepot: rework helpers for depot_alloc_stack Date: Tue, 29 Aug 2023 19:11:17 +0200 Message-Id: In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Andrey Konovalov Split code in depot_alloc_stack and depot_init_pool into 3 functions: 1. depot_keep_next_pool that keeps preallocated memory for the next pool if required. 2. depot_update_pools that moves on to the next pool if there's no space left in the current pool, uses preallocated memory for the new current pool if required, and calls depot_keep_next_pool otherwise. 3. depot_alloc_stack that calls depot_update_pools and then allocates a stack record as before. This makes it somewhat easier to follow the logic of depot_alloc_stack and also serves as a preparation for implementing the eviction of stack records from the stack depot. Signed-off-by: Andrey Konovalov --- lib/stackdepot.c | 85 +++++++++++++++++++++++++++--------------------- 1 file changed, 48 insertions(+), 37 deletions(-) diff --git a/lib/stackdepot.c b/lib/stackdepot.c index 9ae71e1ef1a7..869d520bc690 100644 --- a/lib/stackdepot.c +++ b/lib/stackdepot.c @@ -220,11 +220,11 @@ int stack_depot_init(void) } EXPORT_SYMBOL_GPL(stack_depot_init); =20 -/* Uses preallocated memory to initialize a new stack depot pool. */ -static void depot_init_pool(void **prealloc) +/* Keeps the preallocated memory to be used for the next stack depot pool.= */ +static void depot_keep_next_pool(void **prealloc) { /* - * If the next pool is already initialized or the maximum number of + * If the next pool is already saved or the maximum number of * pools is reached, do not use the preallocated memory. * READ_ONCE is only used to mark the variable as atomic, * there are no concurrent writes. @@ -232,44 +232,33 @@ static void depot_init_pool(void **prealloc) if (!READ_ONCE(next_pool_required)) return; =20 - /* Check if the current pool is not yet allocated. */ - if (stack_pools[pool_index] =3D=3D NULL) { - /* Use the preallocated memory for the current pool. */ - stack_pools[pool_index] =3D *prealloc; + /* + * Use the preallocated memory for the next pool + * as long as we do not exceed the maximum number of pools. + */ + if (pool_index + 1 < DEPOT_MAX_POOLS) { + stack_pools[pool_index + 1] =3D *prealloc; *prealloc =3D NULL; - } else { - /* - * Otherwise, use the preallocated memory for the next pool - * as long as we do not exceed the maximum number of pools. 
- */ - if (pool_index + 1 < DEPOT_MAX_POOLS) { - stack_pools[pool_index + 1] =3D *prealloc; - *prealloc =3D NULL; - } - /* - * At this point, either the next pool is initialized or the - * maximum number of pools is reached. In either case, take - * note that initializing another pool is not required. - * smp_store_release pairs with smp_load_acquire in - * stack_depot_save. - */ - smp_store_release(&next_pool_required, 0); } + + /* + * At this point, either the next pool is kept or the maximum + * number of pools is reached. In either case, take note that + * keeping another pool is not required. + * smp_store_release pairs with smp_load_acquire in stack_depot_save. + */ + smp_store_release(&next_pool_required, 0); } =20 -/* Allocates a new stack in a stack depot pool. */ -static struct stack_record * -depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **preal= loc) +/* Updates refences to the current and the next stack depot pools. */ +static bool depot_update_pools(size_t required_size, void **prealloc) { - struct stack_record *stack; - size_t required_size =3D DEPOT_STACK_RECORD_SIZE; - /* Check if there is not enough space in the current pool. */ if (unlikely(pool_offset + required_size > DEPOT_POOL_SIZE)) { /* Bail out if we reached the pool limit. */ if (unlikely(pool_index + 1 >=3D DEPOT_MAX_POOLS)) { WARN_ONCE(1, "Stack depot reached limit capacity"); - return NULL; + return false; } =20 /* @@ -279,9 +268,10 @@ depot_alloc_stack(unsigned long *entries, int size, u3= 2 hash, void **prealloc) */ WRITE_ONCE(pool_index, pool_index + 1); pool_offset =3D 0; + /* * If the maximum number of pools is not reached, take note - * that the next pool needs to initialized. + * that the next pool needs to be initialized. * smp_store_release pairs with smp_load_acquire in * stack_depot_save. */ @@ -289,9 +279,30 @@ depot_alloc_stack(unsigned long *entries, int size, u3= 2 hash, void **prealloc) smp_store_release(&next_pool_required, 1); } =20 - /* Assign the preallocated memory to a pool if required. */ + /* Check if the current pool is not yet allocated. */ + if (*prealloc && stack_pools[pool_index] =3D=3D NULL) { + /* Use the preallocated memory for the current pool. */ + stack_pools[pool_index] =3D *prealloc; + *prealloc =3D NULL; + return true; + } + + /* Otherwise, try using the preallocated memory for the next pool. */ if (*prealloc) - depot_init_pool(prealloc); + depot_keep_next_pool(prealloc); + return true; +} + +/* Allocates a new stack in a stack depot pool. */ +static struct stack_record * +depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **preal= loc) +{ + struct stack_record *stack; + size_t required_size =3D DEPOT_STACK_RECORD_SIZE; + + /* Update current and next pools if required and possible. */ + if (!depot_update_pools(required_size, prealloc)) + return NULL; =20 /* Check if we have a pool to save the stack trace. */ if (stack_pools[pool_index] =3D=3D NULL) @@ -325,7 +336,7 @@ static struct stack_record *depot_fetch_stack(depot_sta= ck_handle_t handle) union handle_parts parts =3D { .handle =3D handle }; /* * READ_ONCE pairs with potential concurrent write in - * depot_alloc_stack. + * depot_update_pools. */ int pool_index_cached =3D READ_ONCE(pool_index); void *pool; @@ -425,7 +436,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *= entries, * the memory now - we won't be able to do that under the lock. * * smp_load_acquire pairs with smp_store_release - * in depot_alloc_stack and depot_init_pool. 
+ * in depot_update_pools and depot_keep_next_pool. */ if (unlikely(can_alloc && smp_load_acquire(&next_pool_required))) { /* @@ -462,7 +473,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *= entries, * Stack depot already contains this stack trace, but let's * keep the preallocated memory for the future. */ - depot_init_pool(&prealloc); + depot_keep_next_pool(&prealloc); } =20 raw_spin_unlock_irqrestore(&pool_lock, flags); --=20 2.25.1 From nobody Sat Dec 13 22:50:42 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B3281C83F1A for ; Tue, 29 Aug 2023 17:13:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237737AbjH2RNU (ORCPT ); Tue, 29 Aug 2023 13:13:20 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35756 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237722AbjH2RM4 (ORCPT ); Tue, 29 Aug 2023 13:12:56 -0400 Received: from out-248.mta1.migadu.com (out-248.mta1.migadu.com [IPv6:2001:41d0:203:375::f8]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2E3F2CD8 for ; Tue, 29 Aug 2023 10:12:42 -0700 (PDT) X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1693329160; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Z7PnYrb1S+8x+6HixwHlp8BbvsuCogeXMw7DcmkZ+jU=; b=dEoDb6yzJM9iX6McraduAfQqnJZCVuZ61muNam0W1Q8u/Mj2MWXslfimq4jnFNa8v8+Yj+ /Lw1sjBeAZ25rXrIel+1C6rNGA5rk9DVJDQRgkWLe5Nd7RsIlPk+jzgO9LM5Ew++A1tMf5 NgbpsmDhYFFdEXDI5r4VFPiqImx4blM= From: andrey.konovalov@linux.dev To: Marco Elver , Alexander Potapenko Cc: Andrey Konovalov , Dmitry Vyukov , Vlastimil Babka , kasan-dev@googlegroups.com, Evgenii Stepanov , Andrew Morton , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Konovalov Subject: [PATCH 08/15] stackdepot: rename next_pool_required to new_pool_required Date: Tue, 29 Aug 2023 19:11:18 +0200 Message-Id: In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Andrey Konovalov Rename next_pool_required to new_pool_required. This a purely code readability change: the following patch will change stack depot to store the pointer to the new pool in a separate variable, and "new" seems like a more logical name. Signed-off-by: Andrey Konovalov Reviewed-by: Alexander Potapenko --- lib/stackdepot.c | 47 +++++++++++++++++++++++------------------------ 1 file changed, 23 insertions(+), 24 deletions(-) diff --git a/lib/stackdepot.c b/lib/stackdepot.c index 869d520bc690..11934ea3b1c2 100644 --- a/lib/stackdepot.c +++ b/lib/stackdepot.c @@ -94,12 +94,11 @@ static size_t pool_offset; static DEFINE_RAW_SPINLOCK(pool_lock); /* * Stack depot tries to keep an extra pool allocated even before it runs o= ut - * of space in the currently used pool. - * This flag marks that this next extra pool needs to be allocated and - * initialized. It has the value 0 when either the next pool is not yet - * initialized or the limit on the number of pools is reached. 
+ * of space in the currently used pool. This flag marks whether this extra= pool + * needs to be allocated. It has the value 0 when either an extra pool is = not + * yet allocated or if the limit on the number of pools is reached. */ -static int next_pool_required =3D 1; +static int new_pool_required =3D 1; =20 static int __init disable_stack_depot(char *str) { @@ -220,20 +219,20 @@ int stack_depot_init(void) } EXPORT_SYMBOL_GPL(stack_depot_init); =20 -/* Keeps the preallocated memory to be used for the next stack depot pool.= */ -static void depot_keep_next_pool(void **prealloc) +/* Keeps the preallocated memory to be used for a new stack depot pool. */ +static void depot_keep_new_pool(void **prealloc) { /* - * If the next pool is already saved or the maximum number of + * If a new pool is already saved or the maximum number of * pools is reached, do not use the preallocated memory. * READ_ONCE is only used to mark the variable as atomic, * there are no concurrent writes. */ - if (!READ_ONCE(next_pool_required)) + if (!READ_ONCE(new_pool_required)) return; =20 /* - * Use the preallocated memory for the next pool + * Use the preallocated memory for the new pool * as long as we do not exceed the maximum number of pools. */ if (pool_index + 1 < DEPOT_MAX_POOLS) { @@ -242,12 +241,12 @@ static void depot_keep_next_pool(void **prealloc) } =20 /* - * At this point, either the next pool is kept or the maximum + * At this point, either a new pool is kept or the maximum * number of pools is reached. In either case, take note that * keeping another pool is not required. * smp_store_release pairs with smp_load_acquire in stack_depot_save. */ - smp_store_release(&next_pool_required, 0); + smp_store_release(&new_pool_required, 0); } =20 /* Updates refences to the current and the next stack depot pools. */ @@ -262,7 +261,7 @@ static bool depot_update_pools(size_t required_size, vo= id **prealloc) } =20 /* - * Move on to the next pool. + * Move on to the new pool. * WRITE_ONCE pairs with potential concurrent read in * stack_depot_fetch. */ @@ -271,12 +270,12 @@ static bool depot_update_pools(size_t required_size, = void **prealloc) =20 /* * If the maximum number of pools is not reached, take note - * that the next pool needs to be initialized. + * that yet another new pool needs to be allocated. * smp_store_release pairs with smp_load_acquire in * stack_depot_save. */ if (pool_index + 1 < DEPOT_MAX_POOLS) - smp_store_release(&next_pool_required, 1); + smp_store_release(&new_pool_required, 1); } =20 /* Check if the current pool is not yet allocated. */ @@ -287,9 +286,9 @@ static bool depot_update_pools(size_t required_size, vo= id **prealloc) return true; } =20 - /* Otherwise, try using the preallocated memory for the next pool. */ + /* Otherwise, try using the preallocated memory for a new pool. */ if (*prealloc) - depot_keep_next_pool(prealloc); + depot_keep_new_pool(prealloc); return true; } =20 @@ -300,7 +299,7 @@ depot_alloc_stack(unsigned long *entries, int size, u32= hash, void **prealloc) struct stack_record *stack; size_t required_size =3D DEPOT_STACK_RECORD_SIZE; =20 - /* Update current and next pools if required and possible. */ + /* Update current and new pools if required and possible. */ if (!depot_update_pools(required_size, prealloc)) return NULL; =20 @@ -432,13 +431,13 @@ depot_stack_handle_t __stack_depot_save(unsigned long= *entries, goto exit; =20 /* - * Check if another stack pool needs to be initialized. 
If so, allocate - * the memory now - we won't be able to do that under the lock. + * Check if another stack pool needs to be allocated. If so, allocate + * the memory now: we won't be able to do that under the lock. * * smp_load_acquire pairs with smp_store_release - * in depot_update_pools and depot_keep_next_pool. + * in depot_update_pools and depot_keep_new_pool. */ - if (unlikely(can_alloc && smp_load_acquire(&next_pool_required))) { + if (unlikely(can_alloc && smp_load_acquire(&new_pool_required))) { /* * Zero out zone modifiers, as we don't have specific zone * requirements. Keep the flags related to allocation in atomic @@ -471,9 +470,9 @@ depot_stack_handle_t __stack_depot_save(unsigned long *= entries, } else if (prealloc) { /* * Stack depot already contains this stack trace, but let's - * keep the preallocated memory for the future. + * keep the preallocated memory for future. */ - depot_keep_next_pool(&prealloc); + depot_keep_new_pool(&prealloc); } =20 raw_spin_unlock_irqrestore(&pool_lock, flags); --=20 2.25.1 From nobody Sat Dec 13 22:50:42 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C307EC83F19 for ; Tue, 29 Aug 2023 17:13:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237758AbjH2RNW (ORCPT ); Tue, 29 Aug 2023 13:13:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48670 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237723AbjH2RM4 (ORCPT ); Tue, 29 Aug 2023 13:12:56 -0400 Received: from out-243.mta1.migadu.com (out-243.mta1.migadu.com [IPv6:2001:41d0:203:375::f3]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DD261CDF for ; Tue, 29 Aug 2023 10:12:42 -0700 (PDT) X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1693329161; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=MjxW8cEdQiVOI1Uv8f/KLmoitLM15XxpvrzjQCh/NXc=; b=tY6t+d+vBtGNax4kT5p4Y7yBgtMWSnu/QUCRoJmsSpGYGi+JfvAHktxIuYT65MqPmBkvxp 8RKOWRZXFpn759z55PlIk/UrGpleF+qwN8UTuRs6xh3eea3xHUbqZVYdzJbVmPrc/bwhgG rJ4DhJcJpXlAotO4gn30Fc2AmkPTb4c= From: andrey.konovalov@linux.dev To: Marco Elver , Alexander Potapenko Cc: Andrey Konovalov , Dmitry Vyukov , Vlastimil Babka , kasan-dev@googlegroups.com, Evgenii Stepanov , Andrew Morton , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Konovalov Subject: [PATCH 09/15] stackdepot: store next pool pointer in new_pool Date: Tue, 29 Aug 2023 19:11:19 +0200 Message-Id: In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Andrey Konovalov Instead of using the last pointer in stack_pools for storing the pointer to a new pool (which does not yet store any stack records), use a new new_pool variable. This a purely code readability change: it seems more logical to store the pointer to a pool with a special meaning in a dedicated variable. 
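A minimal user-space sketch of the bookkeeping after this change (array size and names are simplified and locking is omitted; this illustrates the idea, not the kernel code): the preallocated memory waits in a dedicated new_pool pointer and is only linked into stack_pools[] once the current pool runs out of space.

#include <stdio.h>
#include <stdlib.h>

#define MAX_POOLS 8

static void *stack_pools[MAX_POOLS];
static void *new_pool;          /* preallocated, not yet holding records */
static int pool_index = -1;     /* index of the pool currently being filled */

static void keep_new_pool(void **prealloc)
{
    if (!new_pool && pool_index + 1 < MAX_POOLS) {
        new_pool = *prealloc;   /* park the memory for later use */
        *prealloc = NULL;
    }
}

static int switch_to_new_pool(void)
{
    if (!new_pool || pool_index + 1 >= MAX_POOLS)
        return -1;
    stack_pools[++pool_index] = new_pool;   /* becomes visible only now */
    new_pool = NULL;
    return pool_index;
}

int main(void)
{
    void *prealloc = malloc(4096);
    int idx;

    keep_new_pool(&prealloc);       /* prealloc is parked in new_pool */
    idx = switch_to_new_pool();     /* ...and installed when space is needed */
    printf("installed pool at index %d\n", idx);

    if (idx >= 0)
        free(stack_pools[idx]);
    return 0;
}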
Signed-off-by: Andrey Konovalov --- lib/stackdepot.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/lib/stackdepot.c b/lib/stackdepot.c index 11934ea3b1c2..5982ea79939d 100644 --- a/lib/stackdepot.c +++ b/lib/stackdepot.c @@ -86,6 +86,8 @@ static unsigned int stack_hash_mask; =20 /* Array of memory regions that store stack traces. */ static void *stack_pools[DEPOT_MAX_POOLS]; +/* Newly allocated pool that is not yet added to stack_pools. */ +static void *new_pool; /* Currently used pool in stack_pools. */ static int pool_index; /* Offset to the unused space in the currently used pool. */ @@ -236,7 +238,7 @@ static void depot_keep_new_pool(void **prealloc) * as long as we do not exceed the maximum number of pools. */ if (pool_index + 1 < DEPOT_MAX_POOLS) { - stack_pools[pool_index + 1] =3D *prealloc; + new_pool =3D *prealloc; *prealloc =3D NULL; } =20 @@ -266,6 +268,8 @@ static bool depot_update_pools(size_t required_size, vo= id **prealloc) * stack_depot_fetch. */ WRITE_ONCE(pool_index, pool_index + 1); + stack_pools[pool_index] =3D new_pool; + new_pool =3D NULL; pool_offset =3D 0; =20 /* --=20 2.25.1 From nobody Sat Dec 13 22:50:42 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E3613C83F1B for ; Tue, 29 Aug 2023 17:13:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237751AbjH2RNV (ORCPT ); Tue, 29 Aug 2023 13:13:21 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35802 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237736AbjH2RM5 (ORCPT ); Tue, 29 Aug 2023 13:12:57 -0400 Received: from out-248.mta1.migadu.com (out-248.mta1.migadu.com [IPv6:2001:41d0:203:375::f8]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5F3A3CE7 for ; Tue, 29 Aug 2023 10:12:43 -0700 (PDT) X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1693329161; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=78m2o0At4EqoxYvPO8GgUXqZmV/z9BTR2roI196o/ig=; b=aVdkqRkgoIFSKNeh42TLnbqZmPMoSkLTIn6T1JEhVD3Qqk6MuAku7bMKEsC8wSyrSX5enC FPGjSthHCdAdFwFBUcruYbc6A6xXWGBHMtC2szN93Qqo9L4dPl6NYeHve9SBanQLnxvT1U U8zvI/sIeAjU3YhEOjSFZSV+LFNUxSw= From: andrey.konovalov@linux.dev To: Marco Elver , Alexander Potapenko Cc: Andrey Konovalov , Dmitry Vyukov , Vlastimil Babka , kasan-dev@googlegroups.com, Evgenii Stepanov , Andrew Morton , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Konovalov Subject: [PATCH 10/15] stackdepot: store free stack records in a freelist Date: Tue, 29 Aug 2023 19:11:20 +0200 Message-Id: <0853a38f849f75a428a76fe9bcd093c0502d26f4.1693328501.git.andreyknvl@google.com> In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Andrey Konovalov Instead of using the global pool_offset variable to find a free slot when storing a new stack record, mainlain a freelist of free slots within the allocated stack pools. 
A global next_stack variable is used as the head of the freelist, and the next field in the stack_record struct is reused as freelist link (when the record is not in the freelist, this field is used as a link in the hash table). This is preparatory patch for implementing the eviction of stack records from the stack depot. Signed-off-by: Andrey Konovalov --- lib/stackdepot.c | 130 +++++++++++++++++++++++++++++------------------ 1 file changed, 81 insertions(+), 49 deletions(-) diff --git a/lib/stackdepot.c b/lib/stackdepot.c index 5982ea79939d..9011f4adcf20 100644 --- a/lib/stackdepot.c +++ b/lib/stackdepot.c @@ -55,8 +55,8 @@ union handle_parts { }; =20 struct stack_record { - struct stack_record *next; /* Link in the hash table */ - u32 hash; /* Hash in the hash table */ + struct stack_record *next; /* Link in hash table or freelist */ + u32 hash; /* Hash in hash table */ u32 size; /* Number of stored frames */ union handle_parts handle; unsigned long entries[DEPOT_STACK_MAX_FRAMES]; /* Frames */ @@ -88,10 +88,10 @@ static unsigned int stack_hash_mask; static void *stack_pools[DEPOT_MAX_POOLS]; /* Newly allocated pool that is not yet added to stack_pools. */ static void *new_pool; -/* Currently used pool in stack_pools. */ -static int pool_index; -/* Offset to the unused space in the currently used pool. */ -static size_t pool_offset; +/* Number of pools in stack_pools. */ +static int pools_num; +/* Next stack in the freelist of stack records within stack_pools. */ +static struct stack_record *next_stack; /* Lock that protects the variables above. */ static DEFINE_RAW_SPINLOCK(pool_lock); /* @@ -221,6 +221,41 @@ int stack_depot_init(void) } EXPORT_SYMBOL_GPL(stack_depot_init); =20 +/* Initializes a stack depol pool. */ +static void depot_init_pool(void *pool) +{ + const int records_in_pool =3D DEPOT_POOL_SIZE / DEPOT_STACK_RECORD_SIZE; + int i, offset; + + /* Initialize handles and link stack records to each other. */ + for (i =3D 0, offset =3D 0; offset < DEPOT_POOL_SIZE; + i++, offset +=3D DEPOT_STACK_RECORD_SIZE) { + struct stack_record *stack =3D pool + offset; + + stack->handle.pool_index =3D pools_num; + stack->handle.offset =3D offset >> DEPOT_STACK_ALIGN; + stack->handle.extra =3D 0; + + if (i < records_in_pool - 1) + stack->next =3D (void *)stack + DEPOT_STACK_RECORD_SIZE; + else + stack->next =3D NULL; + } + + /* Link stack records into the freelist. */ + WARN_ON(next_stack); + next_stack =3D pool; + + /* Save reference to the pool to be used by depot_fetch_stack. */ + stack_pools[pools_num] =3D pool; + + /* + * WRITE_ONCE pairs with potential concurrent read in + * depot_fetch_stack. + */ + WRITE_ONCE(pools_num, pools_num + 1); +} + /* Keeps the preallocated memory to be used for a new stack depot pool. */ static void depot_keep_new_pool(void **prealloc) { @@ -237,7 +272,7 @@ static void depot_keep_new_pool(void **prealloc) * Use the preallocated memory for the new pool * as long as we do not exceed the maximum number of pools. */ - if (pool_index + 1 < DEPOT_MAX_POOLS) { + if (pools_num < DEPOT_MAX_POOLS) { new_pool =3D *prealloc; *prealloc =3D NULL; } @@ -252,45 +287,42 @@ static void depot_keep_new_pool(void **prealloc) } =20 /* Updates refences to the current and the next stack depot pools. */ -static bool depot_update_pools(size_t required_size, void **prealloc) +static bool depot_update_pools(void **prealloc) { - /* Check if there is not enough space in the current pool. 
*/ - if (unlikely(pool_offset + required_size > DEPOT_POOL_SIZE)) { - /* Bail out if we reached the pool limit. */ - if (unlikely(pool_index + 1 >=3D DEPOT_MAX_POOLS)) { - WARN_ONCE(1, "Stack depot reached limit capacity"); - return false; - } + /* Check if we still have objects in the freelist. */ + if (next_stack) + goto out_keep_prealloc; =20 - /* - * Move on to the new pool. - * WRITE_ONCE pairs with potential concurrent read in - * stack_depot_fetch. - */ - WRITE_ONCE(pool_index, pool_index + 1); - stack_pools[pool_index] =3D new_pool; + /* Check if we have a new pool saved and use it. */ + if (new_pool) { + depot_init_pool(new_pool); new_pool =3D NULL; - pool_offset =3D 0; =20 - /* - * If the maximum number of pools is not reached, take note - * that yet another new pool needs to be allocated. - * smp_store_release pairs with smp_load_acquire in - * stack_depot_save. - */ - if (pool_index + 1 < DEPOT_MAX_POOLS) + /* Take note that we might need a new new_pool. */ + if (pools_num < DEPOT_MAX_POOLS) smp_store_release(&new_pool_required, 1); + + /* Try keeping the preallocated memory for new_pool. */ + goto out_keep_prealloc; + } + + /* Bail out if we reached the pool limit. */ + if (unlikely(pools_num >=3D DEPOT_MAX_POOLS)) { + WARN_ONCE(1, "Stack depot reached limit capacity"); + return false; } =20 - /* Check if the current pool is not yet allocated. */ - if (*prealloc && stack_pools[pool_index] =3D=3D NULL) { - /* Use the preallocated memory for the current pool. */ - stack_pools[pool_index] =3D *prealloc; + /* Check if we have preallocated memory and use it. */ + if (*prealloc) { + depot_init_pool(*prealloc); *prealloc =3D NULL; return true; } =20 - /* Otherwise, try using the preallocated memory for a new pool. */ + return false; + +out_keep_prealloc: + /* Keep the preallocated memory for a new pool if required. */ if (*prealloc) depot_keep_new_pool(prealloc); return true; @@ -301,35 +333,35 @@ static struct stack_record * depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **preal= loc) { struct stack_record *stack; - size_t required_size =3D DEPOT_STACK_RECORD_SIZE; =20 /* Update current and new pools if required and possible. */ - if (!depot_update_pools(required_size, prealloc)) + if (!depot_update_pools(prealloc)) return NULL; =20 - /* Check if we have a pool to save the stack trace. */ - if (stack_pools[pool_index] =3D=3D NULL) + /* Check if we have a stack record to save the stack trace. */ + stack =3D next_stack; + if (!stack) return NULL; =20 + /* Advance the freelist. */ + next_stack =3D stack->next; + /* Limit number of saved frames to DEPOT_STACK_MAX_FRAMES. */ if (size > DEPOT_STACK_MAX_FRAMES) size =3D DEPOT_STACK_MAX_FRAMES; =20 /* Save the stack trace. */ - stack =3D stack_pools[pool_index] + pool_offset; + stack->next =3D NULL; stack->hash =3D hash; stack->size =3D size; - stack->handle.pool_index =3D pool_index; - stack->handle.offset =3D pool_offset >> DEPOT_STACK_ALIGN; - stack->handle.extra =3D 0; + /* stack->handle is already filled in by depot_init_pool. */ memcpy(stack->entries, entries, flex_array_size(stack, entries, size)); - pool_offset +=3D required_size; =20 /* * Let KMSAN know the stored stack record is initialized. This shall * prevent false positive reports if instrumented code accesses it. 
*/ - kmsan_unpoison_memory(stack, required_size); + kmsan_unpoison_memory(stack, DEPOT_STACK_RECORD_SIZE); =20 return stack; } @@ -339,16 +371,16 @@ static struct stack_record *depot_fetch_stack(depot_s= tack_handle_t handle) union handle_parts parts =3D { .handle =3D handle }; /* * READ_ONCE pairs with potential concurrent write in - * depot_update_pools. + * depot_init_pool. */ - int pool_index_cached =3D READ_ONCE(pool_index); + int pools_num_cached =3D READ_ONCE(pools_num); void *pool; size_t offset =3D parts.offset << DEPOT_STACK_ALIGN; struct stack_record *stack; =20 - if (parts.pool_index > pool_index_cached) { + if (parts.pool_index > pools_num_cached) { WARN(1, "pool index %d out of bounds (%d) for stack id %08x\n", - parts.pool_index, pool_index_cached, handle); + parts.pool_index, pools_num_cached, handle); return NULL; } =20 --=20 2.25.1 From nobody Sat Dec 13 22:50:42 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A1B18C83F14 for ; Tue, 29 Aug 2023 17:13:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237721AbjH2RNT (ORCPT ); Tue, 29 Aug 2023 13:13:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47842 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237740AbjH2RM5 (ORCPT ); Tue, 29 Aug 2023 13:12:57 -0400 Received: from out-243.mta1.migadu.com (out-243.mta1.migadu.com [IPv6:2001:41d0:203:375::f3]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 27221CEB for ; Tue, 29 Aug 2023 10:12:43 -0700 (PDT) X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1693329162; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=VaH0nAjv38F7wcUNXUY6R+pPhb94jTIKfgVe1zCE0Y8=; b=W3s6IJQiOTNsLFWQX22lPhTUoCW/6QB06+fYLBi4rvyMjQlY8pZ19z1qHG5iTI4lSOXxc8 JzhL/UMWrVr1qyOR/ywQgLHEg1XcWVwHXHXc51/PGHp/Ttkp9aeef95fXkmWHkZlAw9eZI YwpKJwq1b7mI2IEYsDbLMikoFrHR7d4= From: andrey.konovalov@linux.dev To: Marco Elver , Alexander Potapenko Cc: Andrey Konovalov , Dmitry Vyukov , Vlastimil Babka , kasan-dev@googlegroups.com, Evgenii Stepanov , Andrew Morton , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Konovalov Subject: [PATCH 11/15] stackdepot: use read/write lock Date: Tue, 29 Aug 2023 19:11:21 +0200 Message-Id: <6db160185d3bd9b3312da4ccc073adcdac58709e.1693328501.git.andreyknvl@google.com> In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Andrey Konovalov Currently, stack depot uses the following locking scheme: 1. Lock-free accesses when looking up a stack record, which allows to have multiple users to look up records in parallel; 2. Spinlock for protecting the stack depot pools and the hash table when adding a new record. For implementing the eviction of stack traces from stack depot, the lock-free approach is not going to work anymore, as we will need to be able to also remove records from the hash table. 
Convert the spinlock into a read/write lock, and drop the atomic accesses,
as they are no longer required.

Looking up stack traces is now protected by the read lock, and adding new
records by the write lock. One of the following patches will add a new
function for evicting stack records, which will be protected by the write
lock as well.

With this change, multiple users can still look up records in parallel.

This is a preparatory patch for implementing the eviction of stack records
from the stack depot.

Signed-off-by: Andrey Konovalov
---
 lib/stackdepot.c | 76 ++++++++++++++++++++++--------------------------
 1 file changed, 35 insertions(+), 41 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 9011f4adcf20..5ad454367379 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -92,15 +93,15 @@ static void *new_pool;
 static int pools_num;
 /* Next stack in the freelist of stack records within stack_pools. */
 static struct stack_record *next_stack;
-/* Lock that protects the variables above. */
-static DEFINE_RAW_SPINLOCK(pool_lock);
 /*
  * Stack depot tries to keep an extra pool allocated even before it runs out
  * of space in the currently used pool. This flag marks whether this extra pool
  * needs to be allocated. It has the value 0 when either an extra pool is not
  * yet allocated or if the limit on the number of pools is reached.
  */
-static int new_pool_required = 1;
+static bool new_pool_required = true;
+/* Lock that protects the variables above. */
+static DEFINE_RWLOCK(pool_rwlock);

 static int __init disable_stack_depot(char *str)
 {
@@ -248,12 +249,7 @@ static void depot_init_pool(void *pool)

 	/* Save reference to the pool to be used by depot_fetch_stack. */
 	stack_pools[pools_num] = pool;
-
-	/*
-	 * WRITE_ONCE pairs with potential concurrent read in
-	 * depot_fetch_stack.
-	 */
-	WRITE_ONCE(pools_num, pools_num + 1);
+	pools_num++;
 }

 /* Keeps the preallocated memory to be used for a new stack depot pool. */
@@ -262,10 +258,8 @@ static void depot_keep_new_pool(void **prealloc)
 	/*
 	 * If a new pool is already saved or the maximum number of
 	 * pools is reached, do not use the preallocated memory.
-	 * READ_ONCE is only used to mark the variable as atomic,
-	 * there are no concurrent writes.
 	 */
-	if (!READ_ONCE(new_pool_required))
+	if (!new_pool_required)
 		return;

 	/*
@@ -281,9 +275,8 @@
	 * At this point, either a new pool is kept or the maximum
	 * number of pools is reached. In either case, take note that
	 * keeping another pool is not required.
-	 * smp_store_release pairs with smp_load_acquire in stack_depot_save.
 	 */
-	smp_store_release(&new_pool_required, 0);
+	new_pool_required = false;
 }

 /* Updates refences to the current and the next stack depot pools. */
@@ -300,7 +293,7 @@ static bool depot_update_pools(void **prealloc)

 	/* Take note that we might need a new new_pool. */
 	if (pools_num < DEPOT_MAX_POOLS)
-		smp_store_release(&new_pool_required, 1);
+		new_pool_required = true;

 	/* Try keeping the preallocated memory for new_pool. */
 	goto out_keep_prealloc;
@@ -369,18 +362,13 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
 {
 	union handle_parts parts = { .handle = handle };
-	/*
-	 * READ_ONCE pairs with potential concurrent write in
-	 * depot_init_pool.
-	 */
-	int pools_num_cached = READ_ONCE(pools_num);
 	void *pool;
 	size_t offset = parts.offset << DEPOT_STACK_ALIGN;
 	struct stack_record *stack;

-	if (parts.pool_index > pools_num_cached) {
+	if (parts.pool_index > pools_num) {
 		WARN(1, "pool index %d out of bounds (%d) for stack id %08x\n",
-		     parts.pool_index, pools_num_cached, handle);
+		     parts.pool_index, pools_num, handle);
 		return NULL;
 	}

@@ -439,6 +427,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 	depot_stack_handle_t handle = 0;
 	struct page *page = NULL;
 	void *prealloc = NULL;
+	bool need_alloc = false;
 	unsigned long flags;
 	u32 hash;

@@ -458,22 +447,26 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 	hash = hash_stack(entries, nr_entries);
 	bucket = &stack_table[hash & stack_hash_mask];

-	/*
-	 * Fast path: look the stack trace up without locking.
-	 * smp_load_acquire pairs with smp_store_release to |bucket| below.
-	 */
-	found = find_stack(smp_load_acquire(bucket), entries, nr_entries, hash);
-	if (found)
+	read_lock_irqsave(&pool_rwlock, flags);
+
+	/* Fast path: look the stack trace up without full locking. */
+	found = find_stack(*bucket, entries, nr_entries, hash);
+	if (found) {
+		read_unlock_irqrestore(&pool_rwlock, flags);
 		goto exit;
+	}
+
+	/* Take note if another stack pool needs to be allocated. */
+	if (new_pool_required)
+		need_alloc = true;
+
+	read_unlock_irqrestore(&pool_rwlock, flags);

 	/*
-	 * Check if another stack pool needs to be allocated. If so, allocate
-	 * the memory now: we won't be able to do that under the lock.
-	 *
-	 * smp_load_acquire pairs with smp_store_release
-	 * in depot_update_pools and depot_keep_new_pool.
+	 * Allocate memory for a new pool if required now:
+	 * we won't be able to do that under the lock.
 	 */
-	if (unlikely(can_alloc && smp_load_acquire(&new_pool_required))) {
+	if (unlikely(can_alloc && need_alloc)) {
 		/*
 		 * Zero out zone modifiers, as we don't have specific zone
 		 * requirements. Keep the flags related to allocation in atomic
@@ -487,7 +480,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 		prealloc = page_address(page);
 	}

-	raw_spin_lock_irqsave(&pool_lock, flags);
+	write_lock_irqsave(&pool_rwlock, flags);

 	found = find_stack(*bucket, entries, nr_entries, hash);
 	if (!found) {
@@ -496,11 +489,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,

 		if (new) {
 			new->next = *bucket;
-			/*
-			 * smp_store_release pairs with smp_load_acquire
-			 * from |bucket| above.
-			 */
-			smp_store_release(bucket, new);
+			*bucket = new;
 			found = new;
 		}
 	} else if (prealloc) {
@@ -511,7 +500,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 		depot_keep_new_pool(&prealloc);
 	}

-	raw_spin_unlock_irqrestore(&pool_lock, flags);
+	write_unlock_irqrestore(&pool_rwlock, flags);
 exit:
 	if (prealloc) {
 		/* Stack depot didn't use this memory, free it.
		 */
@@ -535,6 +524,7 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 			       unsigned long **entries)
 {
 	struct stack_record *stack;
+	unsigned long flags;

 	*entries = NULL;
 	/*
@@ -546,8 +536,12 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 	if (!handle || stack_depot_disabled)
 		return 0;

+	read_lock_irqsave(&pool_rwlock, flags);
+
 	stack = depot_fetch_stack(handle);

+	read_unlock_irqrestore(&pool_rwlock, flags);
+
 	*entries = stack->entries;
 	return stack->size;
 }
-- 
2.25.1

From nobody Sat Dec 13 22:50:43 2025
From: andrey.konovalov@linux.dev
To: Marco Elver , Alexander Potapenko
Cc: Andrey Konovalov , Dmitry Vyukov , Vlastimil Babka , kasan-dev@googlegroups.com, Evgenii Stepanov , Andrew Morton , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Konovalov
Subject: [PATCH 12/15] stackdepot: add refcount for records
Date: Tue, 29 Aug 2023 19:11:22 +0200
Message-Id: <306aeddcd3c01f432d308043c382669e5f63b395.1693328501.git.andreyknvl@google.com>

From: Andrey Konovalov

Add a reference counter for how many times a stack record has been added
to stack depot.

Do not yet decrement the refcount; this is implemented in one of the
following patches.

This is a preparatory patch for implementing the eviction of stack records
from the stack depot.
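
For illustration only (a sketch, not part of this patch): with the counter
in place, saving the same stack trace twice is expected to return the same
handle and bump the record's count; the matching decrement only arrives
with a later patch in this series. The helper below is hypothetical, only
stack_trace_save() and stack_depot_save() are existing APIs.

/* Sketch only: expected refcount behavior for duplicate saves. */
#include <linux/gfp.h>
#include <linux/stackdepot.h>
#include <linux/stacktrace.h>

static void stack_depot_refcount_sketch(void)
{
	unsigned long entries[16];
	unsigned int nr = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
	depot_stack_handle_t h1, h2;

	h1 = stack_depot_save(entries, nr, GFP_KERNEL);	/* record created, count == 1 */
	h2 = stack_depot_save(entries, nr, GFP_KERNEL);	/* same record found, count == 2 */
	WARN_ON(h1 != h2);
}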
Signed-off-by: Andrey Konovalov
---
 lib/stackdepot.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 5ad454367379..a84c0debbb9e 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -60,6 +61,7 @@ struct stack_record {
 	u32 hash;			/* Hash in hash table */
 	u32 size;			/* Number of stored frames */
 	union handle_parts handle;
+	refcount_t count;
 	unsigned long entries[DEPOT_STACK_MAX_FRAMES];	/* Frames */
 };

@@ -348,6 +350,7 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 	stack->hash = hash;
 	stack->size = size;
 	/* stack->handle is already filled in by depot_init_pool. */
+	refcount_set(&stack->count, 1);
 	memcpy(stack->entries, entries, flex_array_size(stack, entries, size));

 	/*
@@ -452,6 +455,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 	/* Fast path: look the stack trace up without full locking. */
 	found = find_stack(*bucket, entries, nr_entries, hash);
 	if (found) {
+		refcount_inc(&found->count);
 		read_unlock_irqrestore(&pool_rwlock, flags);
 		goto exit;
 	}
-- 
2.25.1

From nobody Sat Dec 13 22:50:43 2025
From: andrey.konovalov@linux.dev
To: Marco Elver , Alexander Potapenko
Cc: Andrey Konovalov , Dmitry Vyukov , Vlastimil Babka , kasan-dev@googlegroups.com, Evgenii Stepanov , Andrew Morton , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Konovalov
Subject: [PATCH 13/15] stackdepot: add backwards links to hash table buckets
Date: Tue, 29 Aug 2023 19:11:23 +0200
Message-Id:

From: Andrey Konovalov

Maintain links in the stack records to previous entries within the hash
table buckets.

This is a preparatory patch for implementing the eviction of stack records
from the stack depot.
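
For illustration only (a sketch of what the prev link enables, mirroring
the unlink added by the next patch): with both next and prev pointers, a
record can be dropped from its hash bucket in constant time, without
walking the chain.

/* Sketch only: constant-time unlink from a doubly-linked bucket. */
static void bucket_unlink_sketch(struct stack_record **bucket,
				 struct stack_record *stack)
{
	if (stack->next)
		stack->next->prev = stack->prev;
	if (stack->prev)
		stack->prev->next = stack->next;
	else
		*bucket = stack->next;	/* stack was the bucket head */
	stack->next = NULL;
	stack->prev = NULL;
}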
Signed-off-by: Andrey Konovalov
---
 lib/stackdepot.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index a84c0debbb9e..641db97d8c7c 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -58,6 +58,7 @@ union handle_parts {

 struct stack_record {
 	struct stack_record *next;	/* Link in hash table or freelist */
+	struct stack_record *prev;	/* Link in hash table */
 	u32 hash;			/* Hash in hash table */
 	u32 size;			/* Number of stored frames */
 	union handle_parts handle;
@@ -493,6 +494,9 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,

 		if (new) {
 			new->next = *bucket;
+			new->prev = NULL;
+			if (*bucket)
+				(*bucket)->prev = new;
 			*bucket = new;
 			found = new;
 		}
-- 
2.25.1

From nobody Sat Dec 13 22:50:43 2025
From: andrey.konovalov@linux.dev
To: Marco Elver , Alexander Potapenko
Cc: Andrey Konovalov , Dmitry Vyukov , Vlastimil Babka , kasan-dev@googlegroups.com, Evgenii Stepanov , Andrew Morton , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Konovalov
Subject: [PATCH 14/15] stackdepot: allow users to evict stack traces
Date: Tue, 29 Aug 2023 19:11:24 +0200
Message-Id: <99cd7ac4a312e86c768b933332364272b9e3fb40.1693328501.git.andreyknvl@google.com>

From: Andrey Konovalov

Add stack_depot_evict, a function that decrements a reference counter on a
stack record and removes it from the stack depot once the counter reaches 0.

Internally, when removing a stack record, the function unlinks it from the
hash table bucket and returns it to the freelist.

With this change, the users of stack depot can call stack_depot_evict when
keeping a stack trace in the stack depot is not needed anymore. This avoids
polluting the stack depot with irrelevant stack traces and thus leaves more
space to store the relevant ones before the stack depot reaches its
capacity.
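
As a usage sketch (hypothetical caller, not part of this patch): a user
keeps the handle returned by stack_depot_save alongside its object and
drops the reference once the object goes away. The struct and helpers
below are made up for illustration.

/* Usage sketch only: pair each save with an evict when done. */
#include <linux/gfp.h>
#include <linux/stackdepot.h>
#include <linux/stacktrace.h>

struct tracked_object {
	depot_stack_handle_t alloc_stack;
	/* ... */
};

static void track_alloc(struct tracked_object *obj)
{
	unsigned long entries[16];
	unsigned int nr = stack_trace_save(entries, ARRAY_SIZE(entries), 0);

	obj->alloc_stack = stack_depot_save(entries, nr, GFP_KERNEL);
}

static void track_free(struct tracked_object *obj)
{
	if (obj->alloc_stack) {
		stack_depot_evict(obj->alloc_stack);	/* drop one reference */
		obj->alloc_stack = 0;
	}
}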
Signed-off-by: Andrey Konovalov
---
 include/linux/stackdepot.h | 11 ++++++++++
 lib/stackdepot.c           | 43 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 54 insertions(+)

diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
index e58306783d8e..b14da6797714 100644
--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -121,6 +121,17 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
 unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 			       unsigned long **entries);

+/**
+ * stack_depot_evict - Drop a reference to a stack trace from stack depot
+ *
+ * @handle: Stack depot handle returned from stack_depot_save()
+ *
+ * The stack trace gets fully removed from stack depot once all references
+ * to it have been dropped (once the number of stack_depot_evict calls matches
+ * the number of stack_depot_save calls for this stack trace).
+ */
+void stack_depot_evict(depot_stack_handle_t handle);
+
 /**
  * stack_depot_print - Print a stack trace from stack depot
  *
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 641db97d8c7c..cf28720b842d 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -384,6 +384,13 @@ static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
 	return stack;
 }

+/* Frees stack into the freelist. */
+static void depot_free_stack(struct stack_record *stack)
+{
+	stack->next = next_stack;
+	next_stack = stack;
+}
+
 /* Calculates the hash for a stack. */
 static inline u32 hash_stack(unsigned long *entries, unsigned int size)
 {
@@ -555,6 +562,42 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 }
 EXPORT_SYMBOL_GPL(stack_depot_fetch);

+void stack_depot_evict(depot_stack_handle_t handle)
+{
+	struct stack_record *stack, **bucket;
+	unsigned long flags;
+
+	if (!handle || stack_depot_disabled)
+		return;
+
+	write_lock_irqsave(&pool_rwlock, flags);
+
+	stack = depot_fetch_stack(handle);
+	if (WARN_ON(!stack))
+		goto out;
+
+	if (refcount_dec_and_test(&stack->count)) {
+		/* Drop stack from the hash table. */
+		if (stack->next)
+			stack->next->prev = stack->prev;
+		if (stack->prev)
+			stack->prev->next = stack->next;
+		else {
+			bucket = &stack_table[stack->hash & stack_hash_mask];
+			*bucket = stack->next;
+		}
+		stack->next = NULL;
+		stack->prev = NULL;
+
+		/* Free stack. */
+		depot_free_stack(stack);
+	}
+
+out:
+	write_unlock_irqrestore(&pool_rwlock, flags);
+}
+EXPORT_SYMBOL_GPL(stack_depot_evict);
+
 void stack_depot_print(depot_stack_handle_t stack)
 {
 	unsigned long *entries;
-- 
2.25.1

From nobody Sat Dec 13 22:50:43 2025
From: andrey.konovalov@linux.dev
To: Marco Elver , Alexander Potapenko
Cc: Andrey Konovalov , Dmitry Vyukov , Vlastimil Babka , kasan-dev@googlegroups.com, Evgenii Stepanov , Andrew Morton , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Konovalov
Subject: [PATCH 15/15] kasan: use stack_depot_evict for tag-based modes
Date: Tue, 29 Aug 2023 19:11:25 +0200
Message-Id:

From: Andrey Konovalov

Evict stack traces from the stack depot for the tag-based KASAN modes once
they are evicted from the stack ring.

Signed-off-by: Andrey Konovalov
---
 mm/kasan/tags.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index 7dcfe341d48e..fa6b0f77a7dd 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -96,7 +96,7 @@ static void save_stack_info(struct kmem_cache *cache, void *object,
 			gfp_t gfp_flags, bool is_free)
 {
 	unsigned long flags;
-	depot_stack_handle_t stack;
+	depot_stack_handle_t stack, old_stack;
 	u64 pos;
 	struct kasan_stack_ring_entry *entry;
 	void *old_ptr;
@@ -120,6 +120,8 @@ static void save_stack_info(struct kmem_cache *cache, void *object,
 	if (!try_cmpxchg(&entry->ptr, &old_ptr, STACK_RING_BUSY_PTR))
 		goto next; /* Busy slot. */

+	old_stack = READ_ONCE(entry->stack);
+
 	WRITE_ONCE(entry->size, cache->object_size);
 	WRITE_ONCE(entry->pid, current->pid);
 	WRITE_ONCE(entry->stack, stack);
@@ -131,6 +133,9 @@ static void save_stack_info(struct kmem_cache *cache, void *object,
 	smp_store_release(&entry->ptr, (s64)object);

 	read_unlock_irqrestore(&stack_ring.lock, flags);
+
+	if (old_stack)
+		stack_depot_evict(old_stack);
 }

 void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
-- 
2.25.1