Date: Mon, 1 May 2023 09:54:28 -0700
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
References: <20230501165450.15352-1-surenb@google.com>
Message-ID: <20230501165450.15352-19-surenb@google.com>
Subject: [PATCH 18/40] lib: introduce support for page allocation tagging
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz,
    hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de,
    dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com,
    corbet@lwn.net, void@manifault.com, peterz@infradead.org,
    juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com,
    will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com,
    dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com,
    david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org,
    masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org,
    tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org,
    paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com,
    yuzhao@google.com, dhowells@redhat.com, hughd@google.com,
    andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com,
    gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com,
    vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
    rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com,
    vschneid@redhat.com, cl@linux.com, penberg@kernel.org,
    iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com,
    elver@google.com, dvyukov@google.com, shakeelb@google.com,
    songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com,
    minchan@google.com, kaleshsingh@google.com, surenb@google.com,
    kernel-team@android.com, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, iommu@lists.linux.dev,
    linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-modules@vger.kernel.org,
    kasan-dev@googlegroups.com, cgroups@vger.kernel.org

Introduce helper functions to easily instrument page allocators by
storing a pointer to the allocation tag associated with the code that
allocated the page in a page_ext field.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Co-developed-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
---
 include/linux/pgalloc_tag.h | 33 +++++++++++++++++++++++++++++++++
 lib/Kconfig.debug           |  1 +
 lib/alloc_tag.c             | 17 +++++++++++++++++
 mm/page_ext.c               | 12 +++++++++---
 4 files changed, 60 insertions(+), 3 deletions(-)
 create mode 100644 include/linux/pgalloc_tag.h

diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h
new file mode 100644
index 000000000000..f8c7b6ef9c75
--- /dev/null
+++ b/include/linux/pgalloc_tag.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * page allocation tagging
+ */
+#ifndef _LINUX_PGALLOC_TAG_H
+#define _LINUX_PGALLOC_TAG_H
+
+#include <linux/alloc_tag.h>
+#include <linux/page_ext.h>
+
+extern struct page_ext_operations page_alloc_tagging_ops;
+struct page_ext *lookup_page_ext(const struct page *page);
+
+static inline union codetag_ref *get_page_tag_ref(struct page *page)
+{
+	if (page && mem_alloc_profiling_enabled()) {
+		struct page_ext *page_ext = lookup_page_ext(page);
+
+		if (page_ext)
+			return (void *)page_ext + page_alloc_tagging_ops.offset;
+	}
+	return NULL;
+}
+
+static inline void pgalloc_tag_dec(struct page *page, unsigned int order)
+{
+	union codetag_ref *ref = get_page_tag_ref(page);
+
+	if (ref)
+		alloc_tag_sub(ref, PAGE_SIZE << order);
+}
+
+#endif /* _LINUX_PGALLOC_TAG_H */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index da0a91ea6042..d3aa5ee0bf0d 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -967,6 +967,7 @@ config MEM_ALLOC_PROFILING
 	depends on DEBUG_FS
 	select CODE_TAGGING
 	select LAZY_PERCPU_COUNTER
+	select PAGE_EXTENSION
 	help
 	  Track allocation source code and record total allocation size
 	  initiated at that code location. The mechanism can be used to track
diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
index 3c4cfeb79862..4a0b95a46b2e 100644
--- a/lib/alloc_tag.c
+++ b/lib/alloc_tag.c
@@ -4,6 +4,7 @@
 #include
 #include
 #include
+#include <linux/page_ext.h>
 #include
 #include

@@ -159,6 +160,22 @@ static bool alloc_tag_module_unload(struct codetag_type *cttype, struct codetag_
 	return module_unused;
 }

+static __init bool need_page_alloc_tagging(void)
+{
+	return true;
+}
+
+static __init void init_page_alloc_tagging(void)
+{
+}
+
+struct page_ext_operations page_alloc_tagging_ops = {
+	.size = sizeof(union codetag_ref),
+	.need = need_page_alloc_tagging,
+	.init = init_page_alloc_tagging,
+};
+EXPORT_SYMBOL(page_alloc_tagging_ops);
+
 static int __init alloc_tag_init(void)
 {
 	struct codetag_type *cttype;
diff --git a/mm/page_ext.c b/mm/page_ext.c
index dc1626be458b..eaf054ec276c 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include <linux/pgalloc_tag.h>

 /*
  * struct page extension
@@ -82,6 +83,9 @@ static struct page_ext_operations *page_ext_ops[] __initdata = {
 #if defined(CONFIG_PAGE_IDLE_FLAG) && !defined(CONFIG_64BIT)
 	&page_idle_ops,
 #endif
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+	&page_alloc_tagging_ops,
+#endif
 #ifdef CONFIG_PAGE_TABLE_CHECK
 	&page_table_check_ops,
 #endif
@@ -90,7 +94,7 @@ static struct page_ext_operations *page_ext_ops[] __initdata = {
 unsigned long page_ext_size;

 static unsigned long total_usage;
-static struct page_ext *lookup_page_ext(const struct page *page);
+struct page_ext *lookup_page_ext(const struct page *page);

 bool early_page_ext __meminitdata;
 static int __init setup_early_page_ext(char *str)
@@ -199,7 +203,7 @@ void __meminit pgdat_page_ext_init(struct pglist_data *pgdat)
 	pgdat->node_page_ext = NULL;
 }

-static struct page_ext *lookup_page_ext(const struct page *page)
+struct page_ext *lookup_page_ext(const struct page *page)
 {
 	unsigned long pfn = page_to_pfn(page);
 	unsigned long index;
@@ -219,6 +223,7 @@ static struct page_ext *lookup_page_ext(const struct page *page)
 					MAX_ORDER_NR_PAGES);
 	return get_entry(base, index);
 }
+EXPORT_SYMBOL(lookup_page_ext);

 static int __init alloc_node_page_ext(int nid)
 {
@@ -278,7 +283,7 @@ static bool page_ext_invalid(struct page_ext *page_ext)
 	return !page_ext || (((unsigned long)page_ext & PAGE_EXT_INVALID) == PAGE_EXT_INVALID);
 }

-static struct page_ext *lookup_page_ext(const struct page *page)
+struct page_ext *lookup_page_ext(const struct page *page)
 {
 	unsigned long pfn = page_to_pfn(page);
 	struct mem_section *section = __pfn_to_section(pfn);
@@ -295,6 +300,7 @@ static struct page_ext *lookup_page_ext(const struct page *page)
 		return NULL;
 	return get_entry(page_ext, pfn);
 }
+EXPORT_SYMBOL(lookup_page_ext);

 static void *__meminit alloc_page_ext(size_t size, int nid)
 {
-- 
2.40.1.495.gc816e09b53d-goog