From nobody Thu Jan 1 07:45:32 2026
Date: Tue, 24 Oct 2023 06:45:58 -0700
In-Reply-To: <20231024134637.3120277-1-surenb@google.com>
References: <20231024134637.3120277-1-surenb@google.com>
Message-ID: <20231024134637.3120277-2-surenb@google.com>
Subject: [PATCH v2 01/39] lib/string_helpers: Add flags param to string_get_size()
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org

From: Kent Overstreet <kent.overstreet@linux.dev>

The new flags parameter allows controlling:

 - whether or not the units suffix is separated by a space, for
   compatibility with sort -h
 - whether or not to append a B suffix - we're not always printing
   bytes.
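For illustration (not part of the patch; the STRING_SIZE_* flags are the
ones introduced below, and the exact output strings are approximate):

	char buf[16];

	/* base-2 units, space before the suffix, B appended: "1.00 GiB" */
	string_get_size(SZ_1G, 1, STRING_SIZE_BASE2, buf, sizeof(buf));

	/* base-10 units, no space, no B, so sort -h can parse it: "1.07G" */
	string_get_size(SZ_1G, 1, STRING_SIZE_NOSPACE | STRING_SIZE_NOBYTES,
			buf, sizeof(buf));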
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Cc: Andy Shevchenko
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: "Michael S. Tsirkin"
Cc: Jason Wang
Cc: "Noralf Trønnes"
Cc: Jens Axboe
---
 arch/powerpc/mm/book3s64/radix_pgtable.c   |  2 +-
 drivers/block/virtio_blk.c                 |  4 ++--
 drivers/gpu/drm/gud/gud_drv.c              |  2 +-
 drivers/mmc/core/block.c                   |  4 ++--
 drivers/mtd/spi-nor/debugfs.c              |  6 ++---
 .../ethernet/chelsio/cxgb4/cxgb4_debugfs.c |  4 ++--
 drivers/scsi/sd.c                          |  8 +++----
 include/linux/string_helpers.h             | 13 +++++-----
 lib/string_helpers.c                       | 24 +++++++++++++------
 lib/test-string_helpers.c                  |  4 ++--
 mm/hugetlb.c                               |  8 +++----
 11 files changed, 44 insertions(+), 35 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index c6a4ac766b2b..27aa5a083ff0 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -260,7 +260,7 @@ print_mapping(unsigned long start, unsigned long end, unsigned long size, bool e
 	if (end <= start)
 		return;
 
-	string_get_size(size, 1, STRING_UNITS_2, buf, sizeof(buf));
+	string_get_size(size, 1, STRING_SIZE_BASE2, buf, sizeof(buf));
 
 	pr_info("Mapped 0x%016lx-0x%016lx with %s pages%s\n", start, end, buf,
 		exec ? " (exec)" : "");
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 1fe011676d07..59140424d755 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -986,9 +986,9 @@ static void virtblk_update_capacity(struct virtio_blk *vblk, bool resize)
 	nblocks = DIV_ROUND_UP_ULL(capacity, queue_logical_block_size(q) >> 9);
 
 	string_get_size(nblocks, queue_logical_block_size(q),
-			STRING_UNITS_2, cap_str_2, sizeof(cap_str_2));
+			STRING_SIZE_BASE2, cap_str_2, sizeof(cap_str_2));
 	string_get_size(nblocks, queue_logical_block_size(q),
-			STRING_UNITS_10, cap_str_10, sizeof(cap_str_10));
+			0, cap_str_10, sizeof(cap_str_10));
 
 	dev_notice(&vdev->dev,
 		   "[%s] %s%llu %d-byte logical blocks (%s/%s)\n",
diff --git a/drivers/gpu/drm/gud/gud_drv.c b/drivers/gpu/drm/gud/gud_drv.c
index 9d7bf8ee45f1..6b1748e1f666 100644
--- a/drivers/gpu/drm/gud/gud_drv.c
+++ b/drivers/gpu/drm/gud/gud_drv.c
@@ -329,7 +329,7 @@ static int gud_stats_debugfs(struct seq_file *m, void *data)
 	struct gud_device *gdrm = to_gud_device(entry->dev);
 	char buf[10];
 
-	string_get_size(gdrm->bulk_len, 1, STRING_UNITS_2, buf, sizeof(buf));
+	string_get_size(gdrm->bulk_len, 1, STRING_SIZE_BASE2, buf, sizeof(buf));
 	seq_printf(m, "Max buffer size: %s\n", buf);
 	seq_printf(m, "Number of errors: %u\n", gdrm->stats_num_errors);
 
diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 3a8f27c3e310..411dc8137f7c 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -2511,7 +2511,7 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
 
 	blk_queue_write_cache(md->queue.queue, cache_enabled, fua_enabled);
 
-	string_get_size((u64)size, 512, STRING_UNITS_2,
+	string_get_size((u64)size, 512, STRING_SIZE_BASE2,
 			cap_str, sizeof(cap_str));
 	pr_info("%s: %s %s %s%s\n",
 		md->disk->disk_name, mmc_card_id(card), mmc_card_name(card),
@@ -2707,7 +2707,7 @@ static int mmc_blk_alloc_rpmb_part(struct mmc_card *card,
 
 	list_add(&rpmb->node, &md->rpmbs);
 
-	string_get_size((u64)size, 512, STRING_UNITS_2,
+	string_get_size((u64)size, 512, STRING_SIZE_BASE2,
 			cap_str, sizeof(cap_str));
 
 	pr_info("%s: %s %s %s, chardev (%d:%d)\n",
diff --git a/drivers/mtd/spi-nor/debugfs.c b/drivers/mtd/spi-nor/debugfs.c
index 6e163cb5b478..a1b61938fee2 100644
--- a/drivers/mtd/spi-nor/debugfs.c
+++ b/drivers/mtd/spi-nor/debugfs.c
@@ -85,7 +85,7 @@ static int spi_nor_params_show(struct seq_file *s, void *data)
 
 	seq_printf(s, "name\t\t%s\n", info->name);
 	seq_printf(s, "id\t\t%*ph\n", SPI_NOR_MAX_ID_LEN, nor->id);
-	string_get_size(params->size, 1, STRING_UNITS_2, buf, sizeof(buf));
+	string_get_size(params->size, 1, STRING_SIZE_BASE2, buf, sizeof(buf));
 	seq_printf(s, "size\t\t%s\n", buf);
 	seq_printf(s, "write size\t%u\n", params->writesize);
 	seq_printf(s, "page size\t%u\n", params->page_size);
@@ -130,14 +130,14 @@ static int spi_nor_params_show(struct seq_file *s, void *data)
 		struct spi_nor_erase_type *et = &erase_map->erase_type[i];
 
 		if (et->size) {
-			string_get_size(et->size, 1, STRING_UNITS_2, buf,
+			string_get_size(et->size, 1, STRING_SIZE_BASE2, buf,
 					sizeof(buf));
 			seq_printf(s, " %02x (%s) [%d]\n", et->opcode, buf, i);
 		}
 	}
 
 	if (!(nor->flags & SNOR_F_NO_OP_CHIP_ERASE)) {
-		string_get_size(params->size, 1, STRING_UNITS_2, buf, sizeof(buf));
+		string_get_size(params->size, 1, STRING_SIZE_BASE2, buf, sizeof(buf));
 		seq_printf(s, " %02x (%s)\n", SPINOR_OP_CHIP_ERASE, buf);
 	}
 
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
index 14e0d989c3ba..7d5fbebd36fc 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
@@ -3457,8 +3457,8 @@ static void mem_region_show(struct seq_file *seq, const char *name,
 {
 	char buf[40];
 
-	string_get_size((u64)to - from + 1, 1, STRING_UNITS_2, buf,
-			sizeof(buf));
+	string_get_size((u64)to - from + 1, 1, STRING_SIZE_BASE2,
+			buf, sizeof(buf));
 	seq_printf(seq, "%-15s %#x-%#x [%s]\n", name, from, to, buf);
 }
 
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 83b6a3f3863b..c37593f76b65 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -2689,10 +2689,10 @@ sd_print_capacity(struct scsi_disk *sdkp,
 	if (!sdkp->first_scan && old_capacity == sdkp->capacity)
 		return;
 
-	string_get_size(sdkp->capacity, sector_size,
-			STRING_UNITS_2, cap_str_2, sizeof(cap_str_2));
-	string_get_size(sdkp->capacity, sector_size,
-			STRING_UNITS_10, cap_str_10, sizeof(cap_str_10));
+	string_get_size(sdkp->capacity, sector_size, STRING_SIZE_BASE2,
+			cap_str_2, sizeof(cap_str_2));
+	string_get_size(sdkp->capacity, sector_size, 0,
+			cap_str_10, sizeof(cap_str_10));
 
 	sd_printk(KERN_NOTICE, sdkp,
 		  "%llu %d-byte logical blocks: (%s/%s)\n",
diff --git a/include/linux/string_helpers.h b/include/linux/string_helpers.h
index 9d1f5bb74dd5..a54467d891db 100644
--- a/include/linux/string_helpers.h
+++ b/include/linux/string_helpers.h
@@ -17,15 +17,14 @@ static inline bool string_is_terminated(const char *s, int len)
 	return memchr(s, '\0', len) ? true : false;
 }
 
-/* Descriptions of the types of units to
- * print in */
-enum string_size_units {
-	STRING_UNITS_10,	/* use powers of 10^3 (standard SI) */
-	STRING_UNITS_2,		/* use binary powers of 2^10 */
+enum string_size_flags {
+	STRING_SIZE_BASE2	= (1 << 0),
+	STRING_SIZE_NOSPACE	= (1 << 1),
+	STRING_SIZE_NOBYTES	= (1 << 2),
 };
 
-void string_get_size(u64 size, u64 blk_size, enum string_size_units units,
-		     char *buf, int len);
+int string_get_size(u64 size, u64 blk_size, enum string_size_flags flags,
+		    char *buf, int len);
 
 int parse_int_array_user(const char __user *from, size_t count, int **array);
 
diff --git a/lib/string_helpers.c b/lib/string_helpers.c
index 9982344cca34..b1496499b113 100644
--- a/lib/string_helpers.c
+++ b/lib/string_helpers.c
@@ -19,11 +19,17 @@
 #include <linux/string.h>
 #include <linux/string_helpers.h>
 
+enum string_size_units {
+	STRING_UNITS_10,	/* use powers of 10^3 (standard SI) */
+	STRING_UNITS_2,		/* use binary powers of 2^10 */
+};
+
 /**
  * string_get_size - get the size in the specified units
  * @size:	The size to be converted in blocks
  * @blk_size:	Size of the block (use 1 for size in bytes)
- * @units:	units to use (powers of 1000 or 1024)
+ * @flags:	units to use (powers of 1000 or 1024), whether to include space
+ *		separator
  * @buf:	buffer to format to
  * @len:	length of buffer
  *
@@ -32,14 +38,16 @@
  * at least 9 bytes and will always be zero terminated.
  *
  */
-void string_get_size(u64 size, u64 blk_size, const enum string_size_units units,
-		     char *buf, int len)
+int string_get_size(u64 size, u64 blk_size, enum string_size_flags flags,
+		    char *buf, int len)
 {
+	enum string_size_units units = flags & STRING_SIZE_BASE2
+		? STRING_UNITS_2 : STRING_UNITS_10;
 	static const char *const units_10[] = {
-		"B", "kB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"
+		"", "k", "M", "G", "T", "P", "E", "Z", "Y"
 	};
 	static const char *const units_2[] = {
-		"B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB"
+		"", "Ki", "Mi", "Gi", "Ti", "Pi", "Ei", "Zi", "Yi"
 	};
 	static const char *const *const units_str[] = {
 		[STRING_UNITS_10] = units_10,
@@ -126,8 +134,10 @@ void string_get_size(u64 size, u64 blk_size, const enum string_size_units units,
 	else
 		unit = units_str[units][i];
 
-	snprintf(buf, len, "%u%s %s", (u32)size,
-		 tmp, unit);
+	return snprintf(buf, len, "%u%s%s%s%s", (u32)size, tmp,
+			(flags & STRING_SIZE_NOSPACE) ? "" : " ",
+			unit,
+			(flags & STRING_SIZE_NOBYTES) ? "" : "B");
 }
 EXPORT_SYMBOL(string_get_size);
 
diff --git a/lib/test-string_helpers.c b/lib/test-string_helpers.c
index 9a68849a5d55..0b01ffca96fb 100644
--- a/lib/test-string_helpers.c
+++ b/lib/test-string_helpers.c
@@ -507,8 +507,8 @@ static __init void __test_string_get_size(const u64 size, const u64 blk_size,
 	char buf10[string_get_size_maxbuf];
 	char buf2[string_get_size_maxbuf];
 
-	string_get_size(size, blk_size, STRING_UNITS_10, buf10, sizeof(buf10));
-	string_get_size(size, blk_size, STRING_UNITS_2, buf2, sizeof(buf2));
+	string_get_size(size, blk_size, 0, buf10, sizeof(buf10));
+	string_get_size(size, blk_size, STRING_SIZE_BASE2, buf2, sizeof(buf2));
 
 	test_string_get_size_check("STRING_UNITS_10", exp_result10, buf10,
 				   size, blk_size);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 52d26072dfda..37f2148d3b9c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3228,7 +3228,7 @@ static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid)
 	if (i == h->max_huge_pages_node[nid])
 		return;
 
-	string_get_size(huge_page_size(h), 1, STRING_UNITS_2, buf, 32);
+	string_get_size(huge_page_size(h), 1, STRING_SIZE_BASE2, buf, 32);
 	pr_warn("HugeTLB: allocating %u of page size %s failed node%d. Only allocated %lu hugepages.\n",
 		h->max_huge_pages_node[nid], buf, nid, i);
 	h->max_huge_pages -= (h->max_huge_pages_node[nid] - i);
@@ -3290,7 +3290,7 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 	if (i < h->max_huge_pages) {
 		char buf[32];
 
-		string_get_size(huge_page_size(h), 1, STRING_UNITS_2, buf, 32);
+		string_get_size(huge_page_size(h), 1, STRING_SIZE_BASE2, buf, 32);
 		pr_warn("HugeTLB: allocating %lu of page size %s failed. Only allocated %lu hugepages.\n",
 			h->max_huge_pages, buf, i);
 		h->max_huge_pages = i;
@@ -3336,7 +3336,7 @@ static void __init report_hugepages(void)
 	for_each_hstate(h) {
 		char buf[32];
 
-		string_get_size(huge_page_size(h), 1, STRING_UNITS_2, buf, 32);
+		string_get_size(huge_page_size(h), 1, STRING_SIZE_BASE2, buf, 32);
 		pr_info("HugeTLB: registered %s page size, pre-allocated %ld pages\n",
 			buf, h->free_huge_pages);
 		pr_info("HugeTLB: %d KiB vmemmap can be freed for a %s page\n",
@@ -4227,7 +4227,7 @@ static int __init hugetlb_init(void)
 		char buf[32];
 
 		string_get_size(huge_page_size(&default_hstate),
-				1, STRING_UNITS_2, buf, 32);
+				1, STRING_SIZE_BASE2, buf, 32);
 		pr_warn("HugeTLB: Ignoring hugepages=%lu associated with %s page size\n",
 			default_hstate.max_huge_pages, buf);
 		pr_warn("HugeTLB: Using hugepages=%lu for number of default huge pages\n",
-- 
2.42.0.758.gaed0368e0e-goog
From nobody Thu Jan 1 07:45:32 2026
Date: Tue, 24 Oct 2023 06:45:59 -0700
In-Reply-To: <20231024134637.3120277-1-surenb@google.com>
References: <20231024134637.3120277-1-surenb@google.com>
Message-ID: <20231024134637.3120277-3-surenb@google.com>
Subject: [PATCH v2 02/39] scripts/kallsyms: Always include __start and __stop symbols
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org

From: Kent Overstreet <kent.overstreet@linux.dev>

These symbols are used to denote section boundaries: by always including
them we can unify loading sections from modules with loading built-in
sections, which leads to some significant cleanup.
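For illustration (not part of the patch; "foo_tbl" and struct foo are
hypothetical): for any section whose name is a valid C identifier, the
linker emits __start_<name>/__stop_<name> symbols, which code can use
to walk every object placed in that section - the unification this
commit refers to:

	struct foo { int val; };

	extern struct foo __start_foo_tbl[];	/* provided by the linker */
	extern struct foo __stop_foo_tbl[];

	static void for_each_foo(void (*fn)(struct foo *))
	{
		struct foo *p;

		for (p = __start_foo_tbl; p < __stop_foo_tbl; p++)
			fn(p);
	}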
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 scripts/kallsyms.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/scripts/kallsyms.c b/scripts/kallsyms.c
index 653b92f6d4c8..47978efe4797 100644
--- a/scripts/kallsyms.c
+++ b/scripts/kallsyms.c
@@ -204,6 +204,11 @@ static int symbol_in_range(const struct sym_entry *s,
 	return 0;
 }
 
+static bool string_starts_with(const char *s, const char *prefix)
+{
+	return strncmp(s, prefix, strlen(prefix)) == 0;
+}
+
 static int symbol_valid(const struct sym_entry *s)
 {
 	const char *name = sym_name(s);
@@ -211,6 +216,14 @@ static int symbol_valid(const struct sym_entry *s)
 	/* if --all-symbols is not specified, then symbols outside the text
 	 * and inittext sections are discarded */
 	if (!all_symbols) {
+		/*
+		 * Symbols starting with __start and __stop are used to denote
+		 * section boundaries, and should always be included:
+		 */
+		if (string_starts_with(name, "__start_") ||
+		    string_starts_with(name, "__stop_"))
+			return 1;
+
 		if (symbol_in_range(s, text_ranges,
 				    ARRAY_SIZE(text_ranges)) == 0)
 			return 0;
-- 
2.42.0.758.gaed0368e0e-goog
From nobody Thu Jan 1 07:45:32 2026
Date: Tue, 24 Oct 2023 06:46:00 -0700
In-Reply-To: <20231024134637.3120277-1-surenb@google.com>
References: <20231024134637.3120277-1-surenb@google.com>
Message-ID: <20231024134637.3120277-4-surenb@google.com>
Subject: [PATCH v2 03/39] fs: Convert alloc_inode_sb() to a macro
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org

From: Kent Overstreet <kent.overstreet@linux.dev>

We're introducing alloc tagging, which tracks memory allocations by
callsite. Converting alloc_inode_sb() to a macro means allocations will
be tracked by its caller, which is a bit more useful.
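For illustration (not part of the patch; tracked_alloc() is a made-up
stand-in for an allocator instrumented by alloc tagging): a static
inline function is itself the recorded callsite, so every filesystem's
inode allocations would be lumped together under the wrapper, while a
macro expands at - and is attributed to - each individual caller:

	extern void *tracked_alloc(void);	/* hypothetical */

	static inline void *alloc_fn(void)
	{
		return tracked_alloc();	/* callsite: always alloc_fn() */
	}

	#define alloc_m()	tracked_alloc()	/* callsite: each expansion site */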
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Cc: Alexander Viro
---
 include/linux/fs.h | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index 4a40823c3c67..c545b1839e96 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -2862,11 +2862,7 @@ int setattr_should_drop_sgid(struct mnt_idmap *idmap,
  * This must be used for allocating filesystems specific inodes to set
  * up the inode reclaim context correctly.
  */
-static inline void *
-alloc_inode_sb(struct super_block *sb, struct kmem_cache *cache, gfp_t gfp)
-{
-	return kmem_cache_alloc_lru(cache, &sb->s_inode_lru, gfp);
-}
+#define alloc_inode_sb(_sb, _cache, _gfp) kmem_cache_alloc_lru(_cache, &_sb->s_inode_lru, _gfp)
 
 extern void __insert_inode_hash(struct inode *, unsigned long hashval);
 static inline void insert_inode_hash(struct inode *inode)
-- 
2.42.0.758.gaed0368e0e-goog
From nobody Thu Jan 1 07:45:32 2026
Date: Tue, 24 Oct 2023 06:46:01 -0700
In-Reply-To: <20231024134637.3120277-1-surenb@google.com>
References: <20231024134637.3120277-1-surenb@google.com>
Message-ID: <20231024134637.3120277-5-surenb@google.com>
Subject: [PATCH v2 04/39] nodemask: Split out include/linux/nodemask_types.h
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org

From: Kent Overstreet <kent.overstreet@linux.dev>

sched.h, which defines task_struct, needs nodemask_t - but sched.h is a
frequently used header and ideally shouldn't be pulling in any more code
than it needs to.

This splits out nodemask_types.h, which has the definition sched.h
needs; that avoids a circular header dependency in the alloc tagging
patch series and, as a bonus, should speed up kernel build times.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Cc: Ingo Molnar
Cc: Peter Zijlstra
---
 include/linux/nodemask.h       | 2 +-
 include/linux/nodemask_types.h | 9 +++++++++
 include/linux/sched.h          | 2 +-
 3 files changed, 11 insertions(+), 2 deletions(-)
 create mode 100644 include/linux/nodemask_types.h

diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
index 8d07116caaf1..b61438313a73 100644
--- a/include/linux/nodemask.h
+++ b/include/linux/nodemask.h
@@ -93,10 +93,10 @@
 #include <linux/threads.h>
 #include <linux/bitmap.h>
 #include <linux/minmax.h>
+#include <linux/nodemask_types.h>
 #include <linux/numa.h>
 #include <linux/random.h>
 
-typedef struct { DECLARE_BITMAP(bits, MAX_NUMNODES); } nodemask_t;
 extern nodemask_t _unused_nodemask_arg_;
 
 /**
diff --git a/include/linux/nodemask_types.h b/include/linux/nodemask_types.h
new file mode 100644
index 000000000000..84c2f47c4237
--- /dev/null
+++ b/include/linux/nodemask_types.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_NODEMASK_TYPES_H
+#define __LINUX_NODEMASK_TYPES_H
+
+#include <linux/numa.h>
+
+typedef struct { DECLARE_BITMAP(bits, MAX_NUMNODES); } nodemask_t;
+
+#endif /* __LINUX_NODEMASK_TYPES_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 77f01ac385f7..12a2554a3164 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -20,7 +20,7 @@
 #include <linux/hrtimer.h>
 #include <linux/irqflags.h>
 #include <linux/seccomp.h>
-#include <linux/nodemask.h>
+#include <linux/nodemask_types.h>
 #include <linux/rcupdate.h>
 #include <linux/refcount.h>
 #include <linux/resource.h>
-- 
2.42.0.758.gaed0368e0e-goog
From nobody Thu Jan 1 07:45:32 2026
Date: Tue, 24 Oct 2023 06:46:02 -0700
In-Reply-To: <20231024134637.3120277-1-surenb@google.com>
References: <20231024134637.3120277-1-surenb@google.com>
Message-ID: <20231024134637.3120277-6-surenb@google.com>
Subject: [PATCH v2 05/39] prandom: Remove unused include
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org

From: Kent Overstreet <kent.overstreet@linux.dev>

prandom.h doesn't use percpu.h - this fixes some circular header issues.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/prandom.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/include/linux/prandom.h b/include/linux/prandom.h
index f2ed5b72b3d6..f7f1e5251c67 100644
--- a/include/linux/prandom.h
+++ b/include/linux/prandom.h
@@ -10,7 +10,6 @@
 
 #include <linux/types.h>
 #include <linux/once.h>
-#include <linux/percpu.h>
 #include <linux/random.h>
 
 struct rnd_state {
-- 
2.42.0.758.gaed0368e0e-goog
From nobody Thu Jan 1 07:45:32 2026
Date: Tue, 24 Oct 2023 06:46:03 -0700
In-Reply-To: <20231024134637.3120277-1-surenb@google.com>
References: <20231024134637.3120277-1-surenb@google.com>
Message-ID: <20231024134637.3120277-7-surenb@google.com>
Subject: [PATCH v2 06/39] mm: enumerate all gfp flags
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org

Introduce GFP bits enumeration to let the compiler track the number of
used bits (which depends on the config options) instead of hardcoding
them. That simplifies the __GFP_BITS_SHIFT calculation.
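For illustration (not part of the patch; CONFIG_EXAMPLE and the
EXAMPLE_* names are hypothetical), the pattern in miniature - the enum
is the single source of truth for bit positions, so a config-dependent
flag can no longer leave the shift count stale:

	enum {
		EXAMPLE_A_BIT,
	#ifdef CONFIG_EXAMPLE
		EXAMPLE_B_BIT,
	#endif
		EXAMPLE_LAST_BIT
	};

	#define EXAMPLE_A	BIT(EXAMPLE_A_BIT)
	#define EXAMPLE_SHIFT	EXAMPLE_LAST_BIT /* 1 or 2, tracked by the compiler */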
Suggested-by: Petr Tesařík
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/gfp_types.h | 90 +++++++++++++++++++++++++++------------
 1 file changed, 62 insertions(+), 28 deletions(-)

diff --git a/include/linux/gfp_types.h b/include/linux/gfp_types.h
index 6583a58670c5..3fbe624763d9 100644
--- a/include/linux/gfp_types.h
+++ b/include/linux/gfp_types.h
@@ -21,44 +21,78 @@ typedef unsigned int __bitwise gfp_t;
  * include/trace/events/mmflags.h and tools/perf/builtin-kmem.c
  */
 
+enum {
+	___GFP_DMA_BIT,
+	___GFP_HIGHMEM_BIT,
+	___GFP_DMA32_BIT,
+	___GFP_MOVABLE_BIT,
+	___GFP_RECLAIMABLE_BIT,
+	___GFP_HIGH_BIT,
+	___GFP_IO_BIT,
+	___GFP_FS_BIT,
+	___GFP_ZERO_BIT,
+	___GFP_UNUSED_BIT,	/* 0x200u unused */
+	___GFP_DIRECT_RECLAIM_BIT,
+	___GFP_KSWAPD_RECLAIM_BIT,
+	___GFP_WRITE_BIT,
+	___GFP_NOWARN_BIT,
+	___GFP_RETRY_MAYFAIL_BIT,
+	___GFP_NOFAIL_BIT,
+	___GFP_NORETRY_BIT,
+	___GFP_MEMALLOC_BIT,
+	___GFP_COMP_BIT,
+	___GFP_NOMEMALLOC_BIT,
+	___GFP_HARDWALL_BIT,
+	___GFP_THISNODE_BIT,
+	___GFP_ACCOUNT_BIT,
+	___GFP_ZEROTAGS_BIT,
+#ifdef CONFIG_KASAN_HW_TAGS
+	___GFP_SKIP_ZERO_BIT,
+	___GFP_SKIP_KASAN_BIT,
+#endif
+#ifdef CONFIG_LOCKDEP
+	___GFP_NOLOCKDEP_BIT,
+#endif
+	___GFP_LAST_BIT
+};
+
 /* Plain integer GFP bitmasks. Do not use this directly. */
-#define ___GFP_DMA		0x01u
-#define ___GFP_HIGHMEM		0x02u
-#define ___GFP_DMA32		0x04u
-#define ___GFP_MOVABLE		0x08u
-#define ___GFP_RECLAIMABLE	0x10u
-#define ___GFP_HIGH		0x20u
-#define ___GFP_IO		0x40u
-#define ___GFP_FS		0x80u
-#define ___GFP_ZERO		0x100u
+#define ___GFP_DMA		BIT(___GFP_DMA_BIT)
+#define ___GFP_HIGHMEM		BIT(___GFP_HIGHMEM_BIT)
+#define ___GFP_DMA32		BIT(___GFP_DMA32_BIT)
+#define ___GFP_MOVABLE		BIT(___GFP_MOVABLE_BIT)
+#define ___GFP_RECLAIMABLE	BIT(___GFP_RECLAIMABLE_BIT)
+#define ___GFP_HIGH		BIT(___GFP_HIGH_BIT)
+#define ___GFP_IO		BIT(___GFP_IO_BIT)
+#define ___GFP_FS		BIT(___GFP_FS_BIT)
+#define ___GFP_ZERO		BIT(___GFP_ZERO_BIT)
 /* 0x200u unused */
-#define ___GFP_DIRECT_RECLAIM	0x400u
-#define ___GFP_KSWAPD_RECLAIM	0x800u
-#define ___GFP_WRITE		0x1000u
-#define ___GFP_NOWARN		0x2000u
-#define ___GFP_RETRY_MAYFAIL	0x4000u
-#define ___GFP_NOFAIL		0x8000u
-#define ___GFP_NORETRY		0x10000u
-#define ___GFP_MEMALLOC		0x20000u
-#define ___GFP_COMP		0x40000u
-#define ___GFP_NOMEMALLOC	0x80000u
-#define ___GFP_HARDWALL		0x100000u
-#define ___GFP_THISNODE		0x200000u
-#define ___GFP_ACCOUNT		0x400000u
-#define ___GFP_ZEROTAGS		0x800000u
+#define ___GFP_DIRECT_RECLAIM	BIT(___GFP_DIRECT_RECLAIM_BIT)
+#define ___GFP_KSWAPD_RECLAIM	BIT(___GFP_KSWAPD_RECLAIM_BIT)
+#define ___GFP_WRITE		BIT(___GFP_WRITE_BIT)
+#define ___GFP_NOWARN		BIT(___GFP_NOWARN_BIT)
+#define ___GFP_RETRY_MAYFAIL	BIT(___GFP_RETRY_MAYFAIL_BIT)
+#define ___GFP_NOFAIL		BIT(___GFP_NOFAIL_BIT)
+#define ___GFP_NORETRY		BIT(___GFP_NORETRY_BIT)
+#define ___GFP_MEMALLOC		BIT(___GFP_MEMALLOC_BIT)
+#define ___GFP_COMP		BIT(___GFP_COMP_BIT)
+#define ___GFP_NOMEMALLOC	BIT(___GFP_NOMEMALLOC_BIT)
+#define ___GFP_HARDWALL		BIT(___GFP_HARDWALL_BIT)
+#define ___GFP_THISNODE		BIT(___GFP_THISNODE_BIT)
+#define ___GFP_ACCOUNT		BIT(___GFP_ACCOUNT_BIT)
+#define ___GFP_ZEROTAGS	BIT(___GFP_ZEROTAGS_BIT)
 #ifdef CONFIG_KASAN_HW_TAGS
-#define ___GFP_SKIP_ZERO	0x1000000u
-#define ___GFP_SKIP_KASAN	0x2000000u
+#define ___GFP_SKIP_ZERO	BIT(___GFP_SKIP_ZERO_BIT)
+#define ___GFP_SKIP_KASAN	BIT(___GFP_SKIP_KASAN_BIT)
 #else
 #define ___GFP_SKIP_ZERO	0
 #define ___GFP_SKIP_KASAN	0
 #endif
 #ifdef CONFIG_LOCKDEP
-#define ___GFP_NOLOCKDEP	0x4000000u
+#define ___GFP_NOLOCKDEP	BIT(___GFP_NOLOCKDEP_BIT)
 #else
 #define ___GFP_NOLOCKDEP	0
 #endif
-/* If the above are modified, __GFP_BITS_SHIFT may need updating */
 
 /*
  * Physical address zone modifiers (see linux/mmzone.h - low four bits)
@@ -249,7 +283,7 @@ typedef unsigned int __bitwise gfp_t;
 #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)
 
 /* Room for N __GFP_FOO bits */
-#define __GFP_BITS_SHIFT (26 + IS_ENABLED(CONFIG_LOCKDEP))
+#define __GFP_BITS_SHIFT ___GFP_LAST_BIT
 #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
 
 /**
-- 
2.42.0.758.gaed0368e0e-goog
From nobody Thu Jan 1 07:45:32 2026
Date: Tue, 24 Oct 2023 06:46:04 -0700
In-Reply-To: <20231024134637.3120277-1-surenb@google.com>
References: <20231024134637.3120277-1-surenb@google.com>
Message-ID: <20231024134637.3120277-8-surenb@google.com>
Subject: [PATCH v2 07/39] mm: introduce slabobj_ext to support slab object extensions
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org

Currently slab pages can store only vectors of obj_cgroup pointers in
page->memcg_data. Introduce slabobj_ext structure to allow more data to
be stored for each slab object. Wrap obj_cgroup into slabobj_ext to
support current functionality while allowing slabobj_ext to be extended
in the future.
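For illustration (not part of the patch): with the per-slab vector in
place, per-object metadata is reached by indexing with the object's
slot, mirroring what mem_cgroup_from_obj_folio() does in the diff below
(slab_obj_exts() is the helper this series adds in mm/slab.h):

	static struct obj_cgroup *objcg_of(struct slab *slab, void *p)
	{
		struct slabobj_ext *obj_exts = slab_obj_exts(slab);

		if (!obj_exts)
			return NULL;

		return obj_exts[obj_to_index(slab->slab_cache, slab, p)].objcg;
	}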
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/memcontrol.h |  20 +++--
 include/linux/mm_types.h   |   4 +-
 init/Kconfig               |   4 +
 mm/kfence/core.c           |  14 ++--
 mm/kfence/kfence.h         |   4 +-
 mm/memcontrol.c            |  56 ++------------
 mm/page_owner.c            |   2 +-
 mm/slab.h                  | 148 +++++++++++++++++++++++++------------
 mm/slab_common.c           |  47 ++++++++++++
 9 files changed, 185 insertions(+), 114 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index e4e24da16d2c..4b17ebb7e723 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -346,8 +346,8 @@ struct mem_cgroup {
 extern struct mem_cgroup *root_mem_cgroup;
 
 enum page_memcg_data_flags {
-	/* page->memcg_data is a pointer to an objcgs vector */
-	MEMCG_DATA_OBJCGS = (1UL << 0),
+	/* page->memcg_data is a pointer to a slabobj_ext vector */
+	MEMCG_DATA_OBJEXTS = (1UL << 0),
 	/* page has been accounted as a non-slab kernel page */
 	MEMCG_DATA_KMEM = (1UL << 1),
 	/* the next bit after the last actual flag */
@@ -385,7 +385,7 @@ static inline struct mem_cgroup *__folio_memcg(struct folio *folio)
 	unsigned long memcg_data = folio->memcg_data;
 
 	VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
-	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio);
+	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJEXTS, folio);
 	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_KMEM, folio);
 
 	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
@@ -406,7 +406,7 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
 	unsigned long memcg_data = folio->memcg_data;
 
 	VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
-	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio);
+	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJEXTS, folio);
 	VM_BUG_ON_FOLIO(!(memcg_data & MEMCG_DATA_KMEM), folio);
 
 	return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
@@ -503,7 +503,7 @@ static inline struct mem_cgroup *folio_memcg_check(struct folio *folio)
 	 */
 	unsigned long memcg_data = READ_ONCE(folio->memcg_data);
 
-	if (memcg_data & MEMCG_DATA_OBJCGS)
+	if (memcg_data & MEMCG_DATA_OBJEXTS)
 		return NULL;
 
 	if (memcg_data & MEMCG_DATA_KMEM) {
@@ -549,7 +549,7 @@ static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *ob
 static inline bool folio_memcg_kmem(struct folio *folio)
 {
 	VM_BUG_ON_PGFLAGS(PageTail(&folio->page), &folio->page);
-	VM_BUG_ON_FOLIO(folio->memcg_data & MEMCG_DATA_OBJCGS, folio);
+	VM_BUG_ON_FOLIO(folio->memcg_data & MEMCG_DATA_OBJEXTS, folio);
 	return folio->memcg_data & MEMCG_DATA_KMEM;
 }
 
@@ -1593,6 +1593,14 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
 }
 #endif /* CONFIG_MEMCG */
 
+/*
+ * Extended information for slab objects stored as an array in page->memcg_data
+ * if MEMCG_DATA_OBJEXTS is set.
+ */
+struct slabobj_ext {
+	struct obj_cgroup *objcg;
+} __aligned(8);
+
 static inline void __inc_lruvec_kmem_state(void *p, enum node_stat_item idx)
 {
 	__mod_lruvec_kmem_state(p, idx, 1);
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 36c5b43999e6..5b55c4752c23 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -180,7 +180,7 @@ struct page {
 	/* Usage count. *DO NOT USE DIRECTLY*. See page_ref.h */
 	atomic_t _refcount;
 
-#ifdef CONFIG_MEMCG
+#ifdef CONFIG_SLAB_OBJ_EXT
 	unsigned long memcg_data;
 #endif
 
@@ -315,7 +315,7 @@ struct folio {
 	};
 	atomic_t _mapcount;
 	atomic_t _refcount;
-#ifdef CONFIG_MEMCG
+#ifdef CONFIG_SLAB_OBJ_EXT
 	unsigned long memcg_data;
 #endif
 	/* private: the union with struct page is transitional */
diff --git a/init/Kconfig b/init/Kconfig
index 6d35728b94b2..78a7abe36037 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -937,10 +937,14 @@ config CGROUP_FAVOR_DYNMODS
 
 	  Say N if unsure.
 
+config SLAB_OBJ_EXT
+	bool
+
 config MEMCG
 	bool "Memory controller"
 	select PAGE_COUNTER
 	select EVENTFD
+	select SLAB_OBJ_EXT
 	help
 	  Provides control over the memory footprint of tasks in a cgroup.
 
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 3872528d0963..02b744d2e07d 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -599,9 +599,9 @@ static unsigned long kfence_init_pool(void)
 			continue;
 
 		__folio_set_slab(slab_folio(slab));
-#ifdef CONFIG_MEMCG
-		slab->memcg_data = (unsigned long)&kfence_metadata_init[i / 2 - 1].objcg |
-				   MEMCG_DATA_OBJCGS;
+#ifdef CONFIG_MEMCG_KMEM
+		slab->obj_exts = (unsigned long)&kfence_metadata_init[i / 2 - 1].obj_exts |
+				 MEMCG_DATA_OBJEXTS;
 #endif
 	}
 
@@ -649,8 +649,8 @@ static unsigned long kfence_init_pool(void)
 
 		if (!i || (i % 2))
 			continue;
-#ifdef CONFIG_MEMCG
-		slab->memcg_data = 0;
+#ifdef CONFIG_MEMCG_KMEM
+		slab->obj_exts = 0;
 #endif
 		__folio_clear_slab(slab_folio(slab));
 	}
@@ -1143,8 +1143,8 @@ void __kfence_free(void *addr)
 {
 	struct kfence_metadata *meta = addr_to_metadata((unsigned long)addr);
 
-#ifdef CONFIG_MEMCG
-	KFENCE_WARN_ON(meta->objcg);
+#ifdef CONFIG_MEMCG_KMEM
+	KFENCE_WARN_ON(meta->obj_exts.objcg);
 #endif
 	/*
 	 * If the objects of the cache are SLAB_TYPESAFE_BY_RCU, defer freeing
diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
index f46fbb03062b..084f5f36e8e7 100644
--- a/mm/kfence/kfence.h
+++ b/mm/kfence/kfence.h
@@ -97,8 +97,8 @@ struct kfence_metadata {
 	struct kfence_track free_track;
 	/* For updating alloc_covered on frees. */
 	u32 alloc_stack_hash;
-#ifdef CONFIG_MEMCG
-	struct obj_cgroup *objcg;
+#ifdef CONFIG_MEMCG_KMEM
+	struct slabobj_ext obj_exts;
 #endif
 };
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 5b009b233ab8..aca777f45d34 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2859,13 +2859,6 @@ static void commit_charge(struct folio *folio, struct mem_cgroup *memcg)
 }
 
 #ifdef CONFIG_MEMCG_KMEM
-/*
- * The allocated objcg pointers array is not accounted directly.
- * Moreover, it should not come from DMA buffer and is not readily
- * reclaimable. So those GFP bits should be masked off.
- */
-#define OBJCGS_CLEAR_MASK	(__GFP_DMA | __GFP_RECLAIMABLE | __GFP_ACCOUNT)
-
 /*
  * mod_objcg_mlstate() may be called with irq enabled, so
  * mod_memcg_lruvec_state() should be used.
@@ -2884,62 +2877,27 @@ static inline void mod_objcg_mlstate(struct obj_cgroup *objcg,
 	rcu_read_unlock();
 }
 
-int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s,
-			     gfp_t gfp, bool new_slab)
-{
-	unsigned int objects = objs_per_slab(s, slab);
-	unsigned long memcg_data;
-	void *vec;
-
-	gfp &= ~OBJCGS_CLEAR_MASK;
-	vec = kcalloc_node(objects, sizeof(struct obj_cgroup *), gfp,
-			   slab_nid(slab));
-	if (!vec)
-		return -ENOMEM;
-
-	memcg_data = (unsigned long) vec | MEMCG_DATA_OBJCGS;
-	if (new_slab) {
-		/*
-		 * If the slab is brand new and nobody can yet access its
-		 * memcg_data, no synchronization is required and memcg_data can
-		 * be simply assigned.
-		 */
-		slab->memcg_data = memcg_data;
-	} else if (cmpxchg(&slab->memcg_data, 0, memcg_data)) {
-		/*
-		 * If the slab is already in use, somebody can allocate and
-		 * assign obj_cgroups in parallel. In this case the existing
-		 * objcg vector should be reused.
-		 */
-		kfree(vec);
-		return 0;
-	}
-
-	kmemleak_not_leak(vec);
-	return 0;
-}
-
 static __always_inline
 struct mem_cgroup *mem_cgroup_from_obj_folio(struct folio *folio, void *p)
 {
 	/*
 	 * Slab objects are accounted individually, not per-page.
 	 * Memcg membership data for each individual object is saved in
-	 * slab->memcg_data.
+	 * slab->obj_exts.
 	 */
 	if (folio_test_slab(folio)) {
-		struct obj_cgroup **objcgs;
+		struct slabobj_ext *obj_exts;
 		struct slab *slab;
 		unsigned int off;
 
 		slab = folio_slab(folio);
-		objcgs = slab_objcgs(slab);
-		if (!objcgs)
+		obj_exts = slab_obj_exts(slab);
+		if (!obj_exts)
 			return NULL;
 
 		off = obj_to_index(slab->slab_cache, slab, p);
-		if (objcgs[off])
-			return obj_cgroup_memcg(objcgs[off]);
+		if (obj_exts[off].objcg)
+			return obj_cgroup_memcg(obj_exts[off].objcg);
 
 		return NULL;
 	}
@@ -2947,7 +2905,7 @@ struct mem_cgroup *mem_cgroup_from_obj_folio(struct folio *folio, void *p)
 	/*
 	 * folio_memcg_check() is used here, because in theory we can encounter
 	 * a folio where the slab flag has been cleared already, but
-	 * slab->memcg_data has not been freed yet
+	 * slab->obj_exts has not been freed yet
 	 * folio_memcg_check() will guarantee that a proper memory
 	 * cgroup pointer or NULL will be returned.
*/ diff --git a/mm/page_owner.c b/mm/page_owner.c index 4e2723e1b300..de6ea5746acd 100644 --- a/mm/page_owner.c +++ b/mm/page_owner.c @@ -372,7 +372,7 @@ static inline int print_page_owner_memcg(char *kbuf, si= ze_t count, int ret, if (!memcg_data) goto out_unlock; =20 - if (memcg_data & MEMCG_DATA_OBJCGS) + if (memcg_data & MEMCG_DATA_OBJEXTS) ret +=3D scnprintf(kbuf + ret, count - ret, "Slab cache page\n"); =20 diff --git a/mm/slab.h b/mm/slab.h index 799a315695c6..5a47125469f1 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -96,8 +96,8 @@ struct slab { #endif =20 atomic_t __page_refcount; -#ifdef CONFIG_MEMCG - unsigned long memcg_data; +#ifdef CONFIG_SLAB_OBJ_EXT + unsigned long obj_exts; #endif }; =20 @@ -106,8 +106,8 @@ struct slab { SLAB_MATCH(flags, __page_flags); SLAB_MATCH(compound_head, slab_cache); /* Ensure bit 0 is clear */ SLAB_MATCH(_refcount, __page_refcount); -#ifdef CONFIG_MEMCG -SLAB_MATCH(memcg_data, memcg_data); +#ifdef CONFIG_SLAB_OBJ_EXT +SLAB_MATCH(memcg_data, obj_exts); #endif #undef SLAB_MATCH static_assert(sizeof(struct slab) <=3D sizeof(struct page)); @@ -429,36 +429,106 @@ static inline bool kmem_cache_debug_flags(struct kme= m_cache *s, slab_flags_t fla return false; } =20 -#ifdef CONFIG_MEMCG_KMEM +#ifdef CONFIG_SLAB_OBJ_EXT + /* - * slab_objcgs - get the object cgroups vector associated with a slab + * slab_obj_exts - get the pointer to the slab object extension vector + * associated with a slab. * @slab: a pointer to the slab struct * - * Returns a pointer to the object cgroups vector associated with the slab, + * Returns a pointer to the object extension vector associated with the sl= ab, * or NULL if no such vector has been associated yet. */ -static inline struct obj_cgroup **slab_objcgs(struct slab *slab) +static inline struct slabobj_ext *slab_obj_exts(struct slab *slab) { - unsigned long memcg_data =3D READ_ONCE(slab->memcg_data); + unsigned long obj_exts =3D READ_ONCE(slab->obj_exts); =20 - VM_BUG_ON_PAGE(memcg_data && !(memcg_data & MEMCG_DATA_OBJCGS), +#ifdef CONFIG_MEMCG + VM_BUG_ON_PAGE(obj_exts && !(obj_exts & MEMCG_DATA_OBJEXTS), slab_page(slab)); - VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, slab_page(slab)); + VM_BUG_ON_PAGE(obj_exts & MEMCG_DATA_KMEM, slab_page(slab)); =20 - return (struct obj_cgroup **)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return (struct slabobj_ext *)(obj_exts & ~MEMCG_DATA_FLAGS_MASK); +#else + return (struct slabobj_ext *)obj_exts; +#endif } =20 -int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s, - gfp_t gfp, bool new_slab); -void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat, - enum node_stat_item idx, int nr); +int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s, + gfp_t gfp, bool new_slab); =20 -static inline void memcg_free_slab_cgroups(struct slab *slab) +static inline bool need_slab_obj_ext(void) { - kfree(slab_objcgs(slab)); - slab->memcg_data =3D 0; + /* + * CONFIG_MEMCG_KMEM creates vector of obj_cgroup objects conditionally + * inside memcg_slab_post_alloc_hook. No other users for now. 
+ */ + return false; } =20 +static inline void free_slab_obj_exts(struct slab *slab) +{ + struct slabobj_ext *obj_exts; + + obj_exts =3D slab_obj_exts(slab); + if (!obj_exts) + return; + + kfree(obj_exts); + slab->obj_exts =3D 0; +} + +static inline struct slabobj_ext * +prepare_slab_obj_exts_hook(struct kmem_cache *s, gfp_t flags, void *p) +{ + struct slab *slab; + + if (!p) + return NULL; + + if (!need_slab_obj_ext()) + return NULL; + + slab =3D virt_to_slab(p); + if (!slab_obj_exts(slab) && + WARN(alloc_slab_obj_exts(slab, s, flags, false), + "%s, %s: Failed to create slab extension vector!\n", + __func__, s->name)) + return NULL; + + return slab_obj_exts(slab) + obj_to_index(s, slab, p); +} + +#else /* CONFIG_SLAB_OBJ_EXT */ + +static inline struct slabobj_ext *slab_obj_exts(struct slab *slab) +{ + return NULL; +} + +static inline int alloc_slab_obj_exts(struct slab *slab, + struct kmem_cache *s, gfp_t gfp, + bool new_slab) +{ + return 0; +} + +static inline void free_slab_obj_exts(struct slab *slab) +{ +} + +static inline struct slabobj_ext * +prepare_slab_obj_exts_hook(struct kmem_cache *s, gfp_t flags, void *p) +{ + return NULL; +} + +#endif /* CONFIG_SLAB_OBJ_EXT */ + +#ifdef CONFIG_MEMCG_KMEM +void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat, + enum node_stat_item idx, int nr); + static inline size_t obj_full_size(struct kmem_cache *s) { /* @@ -526,16 +596,15 @@ static inline void memcg_slab_post_alloc_hook(struct = kmem_cache *s, if (likely(p[i])) { slab =3D virt_to_slab(p[i]); =20 - if (!slab_objcgs(slab) && - memcg_alloc_slab_cgroups(slab, s, flags, - false)) { + if (!slab_obj_exts(slab) && + alloc_slab_obj_exts(slab, s, flags, false)) { obj_cgroup_uncharge(objcg, obj_full_size(s)); continue; } =20 off =3D obj_to_index(s, slab, p[i]); obj_cgroup_get(objcg); - slab_objcgs(slab)[off] =3D objcg; + slab_obj_exts(slab)[off].objcg =3D objcg; mod_objcg_state(objcg, slab_pgdat(slab), cache_vmstat_idx(s), obj_full_size(s)); } else { @@ -548,14 +617,14 @@ static inline void memcg_slab_post_alloc_hook(struct = kmem_cache *s, static inline void memcg_slab_free_hook(struct kmem_cache *s, struct slab = *slab, void **p, int objects) { - struct obj_cgroup **objcgs; + struct slabobj_ext *obj_exts; int i; =20 if (!memcg_kmem_online()) return; =20 - objcgs =3D slab_objcgs(slab); - if (!objcgs) + obj_exts =3D slab_obj_exts(slab); + if (!obj_exts) return; =20 for (i =3D 0; i < objects; i++) { @@ -563,11 +632,11 @@ static inline void memcg_slab_free_hook(struct kmem_c= ache *s, struct slab *slab, unsigned int off; =20 off =3D obj_to_index(s, slab, p[i]); - objcg =3D objcgs[off]; + objcg =3D obj_exts[off].objcg; if (!objcg) continue; =20 - objcgs[off] =3D NULL; + obj_exts[off].objcg =3D NULL; obj_cgroup_uncharge(objcg, obj_full_size(s)); mod_objcg_state(objcg, slab_pgdat(slab), cache_vmstat_idx(s), -obj_full_size(s)); @@ -576,27 +645,11 @@ static inline void memcg_slab_free_hook(struct kmem_c= ache *s, struct slab *slab, } =20 #else /* CONFIG_MEMCG_KMEM */ -static inline struct obj_cgroup **slab_objcgs(struct slab *slab) -{ - return NULL; -} - static inline struct mem_cgroup *memcg_from_slab_obj(void *ptr) { return NULL; } =20 -static inline int memcg_alloc_slab_cgroups(struct slab *slab, - struct kmem_cache *s, gfp_t gfp, - bool new_slab) -{ - return 0; -} - -static inline void memcg_free_slab_cgroups(struct slab *slab) -{ -} - static inline bool memcg_slab_pre_alloc_hook(struct kmem_cache *s, struct list_lru *lru, struct obj_cgroup **objcgp, @@ -633,7 +686,7 @@ static 
__always_inline void account_slab(struct slab *s= lab, int order, struct kmem_cache *s, gfp_t gfp) { if (memcg_kmem_online() && (s->flags & SLAB_ACCOUNT)) - memcg_alloc_slab_cgroups(slab, s, gfp, true); + alloc_slab_obj_exts(slab, s, gfp, true); =20 mod_node_page_state(slab_pgdat(slab), cache_vmstat_idx(s), PAGE_SIZE << order); @@ -642,8 +695,7 @@ static __always_inline void account_slab(struct slab *s= lab, int order, static __always_inline void unaccount_slab(struct slab *slab, int order, struct kmem_cache *s) { - if (memcg_kmem_online()) - memcg_free_slab_cgroups(slab); + free_slab_obj_exts(slab); =20 mod_node_page_state(slab_pgdat(slab), cache_vmstat_idx(s), -(PAGE_SIZE << order)); @@ -723,6 +775,7 @@ static inline void slab_post_alloc_hook(struct kmem_cac= he *s, unsigned int orig_size) { unsigned int zero_size =3D s->object_size; + struct slabobj_ext *obj_exts; bool kasan_init =3D init; size_t i; =20 @@ -765,6 +818,7 @@ static inline void slab_post_alloc_hook(struct kmem_cac= he *s, kmemleak_alloc_recursive(p[i], s->object_size, 1, s->flags, flags); kmsan_slab_alloc(s, p[i], flags); + obj_exts =3D prepare_slab_obj_exts_hook(s, flags, p[i]); } =20 memcg_slab_post_alloc_hook(s, objcg, flags, size, p); diff --git a/mm/slab_common.c b/mm/slab_common.c index 9bbffe82d65a..2b42a9d2c11c 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -206,6 +206,53 @@ struct kmem_cache *find_mergeable(unsigned int size, u= nsigned int align, return NULL; } =20 +#ifdef CONFIG_SLAB_OBJ_EXT +/* + * The allocated objcg pointers array is not accounted directly. + * Moreover, it should not come from DMA buffer and is not readily + * reclaimable. So those GFP bits should be masked off. + */ +#define OBJCGS_CLEAR_MASK (__GFP_DMA | __GFP_RECLAIMABLE | __GFP_ACCOUNT) + +int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s, + gfp_t gfp, bool new_slab) +{ + unsigned int objects =3D objs_per_slab(s, slab); + unsigned long obj_exts; + void *vec; + + gfp &=3D ~OBJCGS_CLEAR_MASK; + vec =3D kcalloc_node(objects, sizeof(struct slabobj_ext), gfp, + slab_nid(slab)); + if (!vec) + return -ENOMEM; + + obj_exts =3D (unsigned long)vec; +#ifdef CONFIG_MEMCG + obj_exts |=3D MEMCG_DATA_OBJEXTS; +#endif + if (new_slab) { + /* + * If the slab is brand new and nobody can yet access its + * obj_exts, no synchronization is required and obj_exts can + * be simply assigned. + */ + slab->obj_exts =3D obj_exts; + } else if (cmpxchg(&slab->obj_exts, 0, obj_exts)) { + /* + * If the slab is already in use, somebody can allocate and + * assign slabobj_exts in parallel. In this case the existing + * objcg vector should be reused. 
+ */ + kfree(vec); + return 0; + } + + kmemleak_not_leak(vec); + return 0; +} +#endif /* CONFIG_SLAB_OBJ_EXT */ + static struct kmem_cache *create_cache(const char *name, unsigned int object_size, unsigned int align, slab_flags_t flags, unsigned int useroffset, --=20 2.42.0.758.gaed0368e0e-goog From nobody Thu Jan 1 07:45:32 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CC8A5C25B6D for ; Tue, 24 Oct 2023 13:47:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343538AbjJXNrr (ORCPT ); Tue, 24 Oct 2023 09:47:47 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50122 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234706AbjJXNrD (ORCPT ); Tue, 24 Oct 2023 09:47:03 -0400 Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 34F55128 for ; Tue, 24 Oct 2023 06:47:00 -0700 (PDT) Received: by mail-yb1-xb49.google.com with SMTP id 3f1490d57ef6-d86766bba9fso2561099276.1 for ; Tue, 24 Oct 2023 06:47:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1698155219; x=1698760019; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=re9qyJoQbXM1lB3hu8WMyX+cG50piCMxG0UfNqXuCfM=; b=PFMlCZyOq8+KpB/XmnTZ0DpDxuSjhLqcwFl+VBHN9Ed5054PT7eVKG3WYjEca4c/zW QWBiaAZxzb/uK2c5gtOlF3x5y6HTqqa1Kv4b6P/bhkg7rWfWTjZxP09RGlo1sM+6Q77m ed/Lr+tkAnz1LzGsD27+9aEpGE7Uc1BGe/dlcAxXSbdenGElv/uziAiZPbllJFARvhGA jE/6kTKtS+KZY9kaGehb5wj4ImlVjrYmQ4DACnv6JIF/5POUS3JJ9t8r8qwwt/R4JX0B ZWNiwCIaW/LEQ+snxPyRr43dTADnL/urTryizeGebnwDf0s1e3TdjOFbF9ZUurdklRVP bzNg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1698155219; x=1698760019; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=re9qyJoQbXM1lB3hu8WMyX+cG50piCMxG0UfNqXuCfM=; b=jXoo+/kG8Y83TuL0QUaWhEwdOQ0srDvHVi4t0CLXLeGaVg7Cib0C2IECgHQ9lWj7ky aqFALXkA8Y7MOSOTcpAnxG/o+oeDg9tYacRK8UVu+4A4jHSAgchD3DYPeYlG70D3uxnS dRS1zYbpCIFlGpeoTFysAk/jU2Ievi1CBZEKrd8sZuBgKbQuNhZxmvUPSTRhVoctlcCD i6IdYTwbiMjjxmvtMUYl2i4pj8FbU1hMfc7swmLyKUsissnZ0ZUsWYHZGQj16M3Ti9hd 5HCL0wzJTYEUSA3De67OerZgSCML9nHn+z5mi0iVxBqcoG6JGPVunIiR6jey3N+e9JNn EN+Q== X-Gm-Message-State: AOJu0YwXd7Cp6GPFaMUrjgp0jEgjoMYWpLO+0SzBqvAZcS1iD2a3oaLb viKKXMzdDUELNfCchVqMpPBAcfTnTZY= X-Google-Smtp-Source: AGHT+IGJ25fW4BnP1NJYhrzH0X+zrhr1/eHne8PBOna8hmvrezOwpbGVvOl7JHFMC4rGhoY/sRPJncXHlfA= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:45ba:3318:d7a5:336a]) (user=surenb job=sendgmr) by 2002:a05:6902:1083:b0:d9a:c946:c18c with SMTP id v3-20020a056902108300b00d9ac946c18cmr311395ybu.6.1698155219188; Tue, 24 Oct 2023 06:46:59 -0700 (PDT) Date: Tue, 24 Oct 2023 06:46:05 -0700 In-Reply-To: <20231024134637.3120277-1-surenb@google.com> Mime-Version: 1.0 References: <20231024134637.3120277-1-surenb@google.com> X-Mailer: git-send-email 2.42.0.758.gaed0368e0e-goog Message-ID: <20231024134637.3120277-9-surenb@google.com> Subject: [PATCH v2 08/39] mm: introduce __GFP_NO_OBJ_EXT flag to selectively prevent slabobj_ext creation From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: 
kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, vvvvvv@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Introduce __GFP_NO_OBJ_EXT flag in order to prevent recursive allocations when allocating slabobj_ext on a slab. Signed-off-by: Suren Baghdasaryan --- include/linux/gfp_types.h | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/include/linux/gfp_types.h b/include/linux/gfp_types.h index 3fbe624763d9..1c6573d69347 100644 --- a/include/linux/gfp_types.h +++ b/include/linux/gfp_types.h @@ -52,6 +52,9 @@ enum { #endif #ifdef CONFIG_LOCKDEP ___GFP_NOLOCKDEP_BIT, +#endif +#ifdef CONFIG_SLAB_OBJ_EXT + ___GFP_NO_OBJ_EXT_BIT, #endif ___GFP_LAST_BIT }; @@ -93,6 +96,11 @@ enum { #else #define ___GFP_NOLOCKDEP 0 #endif +#ifdef CONFIG_SLAB_OBJ_EXT +#define ___GFP_NO_OBJ_EXT BIT(___GFP_NO_OBJ_EXT_BIT) +#else +#define ___GFP_NO_OBJ_EXT 0 +#endif =20 /* * Physical address zone modifiers (see linux/mmzone.h - low four bits) @@ -133,12 +141,15 @@ enum { * node with no fallbacks or placement policy enforcements. * * %__GFP_ACCOUNT causes the allocation to be accounted to kmemcg. + * + * %__GFP_NO_OBJ_EXT causes slab allocation to have no object extension. 
*/ #define __GFP_RECLAIMABLE ((__force gfp_t)___GFP_RECLAIMABLE) #define __GFP_WRITE ((__force gfp_t)___GFP_WRITE) #define __GFP_HARDWALL ((__force gfp_t)___GFP_HARDWALL) #define __GFP_THISNODE ((__force gfp_t)___GFP_THISNODE) #define __GFP_ACCOUNT ((__force gfp_t)___GFP_ACCOUNT) +#define __GFP_NO_OBJ_EXT ((__force gfp_t)___GFP_NO_OBJ_EXT) =20 /** * DOC: Watermark modifiers --=20 2.42.0.758.gaed0368e0e-goog From nobody Thu Jan 1 07:45:32 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6FCADC25B48 for ; Tue, 24 Oct 2023 13:47:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343616AbjJXNr4 (ORCPT ); Tue, 24 Oct 2023 09:47:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50260 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1343544AbjJXNrG (ORCPT ); Tue, 24 Oct 2023 09:47:06 -0400 Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com [IPv6:2607:f8b0:4864:20::114a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5B91710E2 for ; Tue, 24 Oct 2023 06:47:03 -0700 (PDT) Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-5a7fb3f311bso59159277b3.2 for ; Tue, 24 Oct 2023 06:47:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1698155221; x=1698760021; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=BlkdsRQXiDN3rg6CmmMqMbDupjvE+Qo4H/i7Q99VP7E=; b=uxaCAWtyIJT6QvIegl8QCczQrkHie5BVYJuqhMS+k5AFm6IyO61Gp74KG1GLSSet+N JVmeRny/hD8P2gA/CBe6rvFAM0vCSu9Kq/EtwTUJHp3UgrzLurZ6sO/lutfjVZ9u3pi4 Cwkfh1GCvT+zGqaDj+SlLcQGS5bNbb0kszgGzY58vVxaTUoJvy1FkV70jx8v2H8z539B 5rSqX2vCM1UbewVAL58EikUcI8fLRr2Wpv3mohu3gmB0aQ4SMd0OfuqqbLTgZBQQZHt3 /FghF91vUsZRzwNe1Ht1nwKQvcy5znGVX2e1DAKFjBKVHUGfiPwWMLU7Ol8qA0TAyWJK GcVg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1698155221; x=1698760021; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=BlkdsRQXiDN3rg6CmmMqMbDupjvE+Qo4H/i7Q99VP7E=; b=nXfis1DIPTs5t/yaLpBccKbVHAN+hiCB0dvbapN2UkOyjKe4FOOYN8hpMi0UtNmveA 5WMpwVvRpCx9N3sSRP/QRLZ9o7tA30bB6+Menqz+E9n3AX4wYq9zCM3RflHL/uHDufht Pr1cx3GJo2MfTc/2eGwIroV2eSF5NWb1dzgJP9vVs2URvGpEyaavCSqZPp5nUpzx9hHc hHqTFWPgSwhhES1NePgKdKr/EpvG+YJAW1DE/PYc3JdU9mQzveYyWFXE1m3hMrPZ3aUG bwB7Mig6D9vGY+NbJhtvAR6dhOybe729JZxSDbK0/zT2e+voQc68d1LOq+u1oSccgX0u VPxw== X-Gm-Message-State: AOJu0Yw2W0Upho0Fqysr7I1Eh0MYESNkxJnzf9Fq082d/qdk2Q3PooPK cChaMH0rKn0HTs0w6Lm2nkLw3Wd0UZ0= X-Google-Smtp-Source: AGHT+IH4vkQmrXlidCVXKzYYty2ay4HPBInW5lCi3Eqpl3vZeKIQVqiJUvatNMDtWsMNHeDdaowbWxA4kC0= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:45ba:3318:d7a5:336a]) (user=surenb job=sendgmr) by 2002:a0d:d50f:0:b0:5a7:be3f:159f with SMTP id x15-20020a0dd50f000000b005a7be3f159fmr287305ywd.5.1698155221495; Tue, 24 Oct 2023 06:47:01 -0700 (PDT) Date: Tue, 24 Oct 2023 06:46:06 -0700 In-Reply-To: <20231024134637.3120277-1-surenb@google.com> Mime-Version: 1.0 References: <20231024134637.3120277-1-surenb@google.com> X-Mailer: git-send-email 2.42.0.758.gaed0368e0e-goog Message-ID: <20231024134637.3120277-10-surenb@google.com> Subject: [PATCH v2 09/39] mm/slab: introduce 
SLAB_NO_OBJ_EXT to avoid obj_ext creation From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, vvvvvv@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Slab extension objects can't be allocated before slab infrastructure is initialized. Some caches, like kmem_cache and kmem_cache_node, are created before slab infrastructure is initialized. Objects from these caches can't have extension objects. Introduce SLAB_NO_OBJ_EXT slab flag to mark these caches and avoid creating extensions for objects allocated from these slabs. Signed-off-by: Suren Baghdasaryan --- include/linux/slab.h | 7 +++++++ mm/slab.c | 2 +- mm/slub.c | 5 +++-- 3 files changed, 11 insertions(+), 3 deletions(-) diff --git a/include/linux/slab.h b/include/linux/slab.h index 8228d1276a2f..11ef3d364b2b 100644 --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -164,6 +164,13 @@ #endif #define SLAB_TEMPORARY SLAB_RECLAIM_ACCOUNT /* Objects are short-lived */ =20 +#ifdef CONFIG_SLAB_OBJ_EXT +/* Slab created using create_boot_cache */ +#define SLAB_NO_OBJ_EXT ((slab_flags_t __force)0x20000000U) +#else +#define SLAB_NO_OBJ_EXT 0 +#endif + /* * ZERO_SIZE_PTR will be returned for zero sized kmalloc requests. 
* diff --git a/mm/slab.c b/mm/slab.c index 9ad3d0f2d1a5..cefcb7499b6c 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -1232,7 +1232,7 @@ void __init kmem_cache_init(void) create_boot_cache(kmem_cache, "kmem_cache", offsetof(struct kmem_cache, node) + nr_node_ids * sizeof(struct kmem_cache_node *), - SLAB_HWCACHE_ALIGN, 0, 0); + SLAB_HWCACHE_ALIGN | SLAB_NO_OBJ_EXT, 0, 0); list_add(&kmem_cache->list, &slab_caches); slab_state =3D PARTIAL; =20 diff --git a/mm/slub.c b/mm/slub.c index f7940048138c..d16643492320 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -5043,7 +5043,8 @@ void __init kmem_cache_init(void) node_set(node, slab_nodes); =20 create_boot_cache(kmem_cache_node, "kmem_cache_node", - sizeof(struct kmem_cache_node), SLAB_HWCACHE_ALIGN, 0, 0); + sizeof(struct kmem_cache_node), + SLAB_HWCACHE_ALIGN | SLAB_NO_OBJ_EXT, 0, 0); =20 hotplug_memory_notifier(slab_memory_callback, SLAB_CALLBACK_PRI); =20 @@ -5053,7 +5054,7 @@ void __init kmem_cache_init(void) create_boot_cache(kmem_cache, "kmem_cache", offsetof(struct kmem_cache, node) + nr_node_ids * sizeof(struct kmem_cache_node *), - SLAB_HWCACHE_ALIGN, 0, 0); + SLAB_HWCACHE_ALIGN | SLAB_NO_OBJ_EXT, 0, 0); =20 kmem_cache =3D bootstrap(&boot_kmem_cache); kmem_cache_node =3D bootstrap(&boot_kmem_cache_node); --=20 2.42.0.758.gaed0368e0e-goog From nobody Thu Jan 1 07:45:32 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 95FDCC25B48 for ; Tue, 24 Oct 2023 13:48:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343583AbjJXNsI (ORCPT ); Tue, 24 Oct 2023 09:48:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50186 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1343560AbjJXNrH (ORCPT ); Tue, 24 Oct 2023 09:47:07 -0400 Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 625D810FA for ; Tue, 24 Oct 2023 06:47:05 -0700 (PDT) Received: by mail-yb1-xb49.google.com with SMTP id 3f1490d57ef6-d86dac81f8fso5389043276.1 for ; Tue, 24 Oct 2023 06:47:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1698155224; x=1698760024; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=YHgfLVMScT5H/FxRnxJh+3xP9v/l17ghYQJMJklbWxc=; b=ogBFPLi8fvrnXU6vKCvucgyBHD5/3evNz0uRdyIFs1oqcaBX8qH5RZIeDsGH5V53QY JLd7XMnbQQW4W6PcPFwunDg79vf1GF+pP0ZszfguoLxlNiM6dNl9bhmgYu9aYHXqe5MG y16phkyjP8bmXcYvLbVM7YJLoOIG6SS4BfVuqAGapmdfBrGOrgcKjgu7u44cD8cOm9kE qpUSR4ZpYt1oFZNTpSldByfENNAof0Yda8pAhuvtUGtgQiJ0Lwc5DkLRBOPLUEGZpma7 qyjusp0eIOv43YbpV2B02o5Bmw7aBZdYPAM0vGYC3iHwM7a+tawV6Rat4RnOz5U+04yt JClQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1698155224; x=1698760024; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=YHgfLVMScT5H/FxRnxJh+3xP9v/l17ghYQJMJklbWxc=; b=dS0mwhtjzBZcoq0PeOUTruliPCbyfQ8j4MF3QrUhdP6Z+BlRVhkiFM6Dxona4+7oMq Us9qs1eRVzfkz3r2fwm6l4mwomAoU/zR2GNdNVfEKxDxO3b31WO0rw3uLTxmGSCGRISV coFuDezFC0fJyXZhM+DP0IukaYtAmWnwvm7Ou1eGsL+DeYeC0nKBVjnyAyP7ZOw2GxPl 3HzB+c0i0CCZW/4p6xS13PDDGH9A6Nlz5egF6XeMEZm2obo6XU5m0cfzvxNzi0bfWBjw 
kISXumKxFX9+DNy8SImnLvsURaFrotoO6wAzI7F9sCxHHfhdIl6KB1VvVkaZD5sXld7N FgwQ== X-Gm-Message-State: AOJu0YwweXBGrvvPzYYbmMMB2VRalfQBl8Okw4h7oS4445ow81aTjTM+ n4GsuSab4qDbRMPggwh9ChXwYhfyPVo= X-Google-Smtp-Source: AGHT+IHeBKvEZxCc8z4s40S7ha5hgPZX8U0NGngnhsFR9kbksVyPLDfdSNwpUqF/X+vAqjdciG/7nvEnCYM= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:45ba:3318:d7a5:336a]) (user=surenb job=sendgmr) by 2002:a25:297:0:b0:d9a:4c45:cfd0 with SMTP id 145-20020a250297000000b00d9a4c45cfd0mr213074ybc.2.1698155223878; Tue, 24 Oct 2023 06:47:03 -0700 (PDT) Date: Tue, 24 Oct 2023 06:46:07 -0700 In-Reply-To: <20231024134637.3120277-1-surenb@google.com> Mime-Version: 1.0 References: <20231024134637.3120277-1-surenb@google.com> X-Mailer: git-send-email 2.42.0.758.gaed0368e0e-goog Message-ID: <20231024134637.3120277-11-surenb@google.com> Subject: [PATCH v2 10/39] mm: prevent slabobj_ext allocations for slabobj_ext and kmem_cache objects From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, vvvvvv@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Use __GFP_NO_OBJ_EXT to prevent recursions when allocating slabobj_ext objects. Also prevent slabobj_ext allocations for kmem_cache objects. 
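For illustration, a minimal sketch of the recursion this guards against (not part of the patch; alloc_obj_ext_vector() is a hypothetical helper whose flag handling mirrors alloc_slab_obj_exts() below):

/*
 * Sketch only: the extension vector is itself allocated from the slab
 * allocator, so without __GFP_NO_OBJ_EXT the post-alloc hook could try
 * to attach an extension vector to the slab backing this very
 * allocation and re-enter the same path.
 */
static void *alloc_obj_ext_vector(unsigned int objects, gfp_t gfp)
{
	/* Mark the allocation so no extension vector is created for it */
	gfp |= __GFP_NO_OBJ_EXT;
	return kcalloc(objects, sizeof(struct slabobj_ext), gfp);
}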
Signed-off-by: Suren Baghdasaryan --- mm/slab.h | 6 ++++++ mm/slab_common.c | 2 ++ 2 files changed, 8 insertions(+) diff --git a/mm/slab.h b/mm/slab.h index 5a47125469f1..187acc593397 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -489,6 +489,12 @@ prepare_slab_obj_exts_hook(struct kmem_cache *s, gfp_t= flags, void *p) if (!need_slab_obj_ext()) return NULL; =20 + if (s->flags & SLAB_NO_OBJ_EXT) + return NULL; + + if (flags & __GFP_NO_OBJ_EXT) + return NULL; + slab =3D virt_to_slab(p); if (!slab_obj_exts(slab) && WARN(alloc_slab_obj_exts(slab, s, flags, false), diff --git a/mm/slab_common.c b/mm/slab_common.c index 2b42a9d2c11c..446f406d2703 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -222,6 +222,8 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_= cache *s, void *vec; =20 gfp &=3D ~OBJCGS_CLEAR_MASK; + /* Prevent recursive extension vector allocation */ + gfp |=3D __GFP_NO_OBJ_EXT; vec =3D kcalloc_node(objects, sizeof(struct slabobj_ext), gfp, slab_nid(slab)); if (!vec) --=20 2.42.0.758.gaed0368e0e-goog From nobody Thu Jan 1 07:45:32 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 86350C25B6C for ; Tue, 24 Oct 2023 13:48:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343636AbjJXNsY (ORCPT ); Tue, 24 Oct 2023 09:48:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44528 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234753AbjJXNrj (ORCPT ); Tue, 24 Oct 2023 09:47:39 -0400 Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com [IPv6:2607:f8b0:4864:20::114a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C5F5010C2 for ; Tue, 24 Oct 2023 06:47:06 -0700 (PDT) Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-5a7af69a4baso59610487b3.0 for ; Tue, 24 Oct 2023 06:47:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1698155226; x=1698760026; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=Qg2lup2zwYWRtm8BAuIZWN5zY45A/qiqWGd2wrDGPpE=; b=f5vfhoe6tNttLZZU7sBETFQ2wPoFBee06WEc9qda46PCdbbO8nH5kvBdj1R4VudGrF bFNGNvYjynSXrwkxVexRIdPtJWHPtZ3RRqi5Yhqp7Spv1+DTW14i8UaVMuTcf0gXHppa u/ZsdQMUL5tIE5p2si9wFKEjHfbGANZ6XyiGCmGsoRd1MM0riwIuyRKvj9hWCigdQh3j od8wrfv0mGzTsv1LZdaWxHRDuHr2Hx9ExwL3u854tpEsDNGfT771LfiNaNqdcVFel1bC JA+6yAAg8kqJ9PqhtUlgXy/BTgR5IHfJK6iBzjV+BcSSrVsDDAYTbSOtXeRHLaaaaeu0 ZIfw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1698155226; x=1698760026; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=Qg2lup2zwYWRtm8BAuIZWN5zY45A/qiqWGd2wrDGPpE=; b=ADHADQww695eUIfnLQ7ecULoZnuNPdZZaGCNzxU/OAljqrIbg3JZC75pdwyzyQZeqr HpTilBe1/644TqpjQZBDLYVQI25JneALncFlImViiWrBnasxUtLF84Nr4WJnOdHeMbDy vhZgKV26oDfAIa3Y8jKnYlm1l72BxRdtnBhvBdHjQSytl84d2aDYmYaRT73K6FGTz6Pu TbJHtoKKBfutuiYDyeU6MyrHxKjn89E1Q9JxYnakFPq0NZVvqM3LRDhPSiFlsmn5dgZY Zd+/tBjIT7dwyuxvr4Cr0nq43y4mhEx0DcAvvjWvfD0CFsHURUORYDrWJypi5RfixWCd EQqQ== X-Gm-Message-State: AOJu0YwWu5tuBEOdxByHDtof6EtGdOIxSvXy5lLbtvX5GlS8NTMUJMYV KbmXMJgIVTxgEmBQ0DVOOwzVd5eI5Lk= X-Google-Smtp-Source: 
AGHT+IFbSeYSocyqc9oNK2jC1GEj7y9HsYgl5xw/vFv8t55K0y/nslFyvoCz7IdF8OEyu7ekDQ1W6y46+34= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:45ba:3318:d7a5:336a]) (user=surenb job=sendgmr) by 2002:a0d:dd02:0:b0:579:f832:74b with SMTP id g2-20020a0ddd02000000b00579f832074bmr286214ywe.10.1698155225916; Tue, 24 Oct 2023 06:47:05 -0700 (PDT) Date: Tue, 24 Oct 2023 06:46:08 -0700 In-Reply-To: <20231024134637.3120277-1-surenb@google.com> Mime-Version: 1.0 References: <20231024134637.3120277-1-surenb@google.com> X-Mailer: git-send-email 2.42.0.758.gaed0368e0e-goog Message-ID: <20231024134637.3120277-12-surenb@google.com> Subject: [PATCH v2 11/39] slab: objext: introduce objext_flags as extension to page_memcg_data_flags From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, vvvvvv@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Introduce objext_flags to store additional objext flags unrelated to memcg. 
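For illustration, a simplified sketch of the resulting layout (pack_obj_exts()/unpack_obj_exts() are hypothetical names; the real accessor is slab_obj_exts()). With CONFIG_MEMCG the low bits of slab->obj_exts carry the MEMCG_DATA_* flags and objext flags start above them at __FIRST_OBJEXT_FLAG; without CONFIG_MEMCG the mask is currently empty and the pointer is stored unmodified:

/*
 * slab->obj_exts packs the slabobj_ext vector pointer together with
 * flag bits; OBJEXTS_FLAGS_MASK covers every flag bit below the
 * pointer bits (the MEMCG_DATA_* bits when CONFIG_MEMCG is enabled,
 * nothing yet otherwise).
 */
static inline struct slabobj_ext *unpack_obj_exts(unsigned long obj_exts)
{
	return (struct slabobj_ext *)(obj_exts & ~OBJEXTS_FLAGS_MASK);
}

static inline unsigned long pack_obj_exts(struct slabobj_ext *vec,
					  unsigned long flags)
{
	return (unsigned long)vec | (flags & OBJEXTS_FLAGS_MASK);
}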
Signed-off-by: Suren Baghdasaryan --- include/linux/memcontrol.h | 29 ++++++++++++++++++++++------- mm/slab.h | 4 +--- 2 files changed, 23 insertions(+), 10 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 4b17ebb7e723..f3ede28b6fa6 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -354,7 +354,22 @@ enum page_memcg_data_flags { __NR_MEMCG_DATA_FLAGS =3D (1UL << 2), }; =20 -#define MEMCG_DATA_FLAGS_MASK (__NR_MEMCG_DATA_FLAGS - 1) +#define __FIRST_OBJEXT_FLAG __NR_MEMCG_DATA_FLAGS + +#else /* CONFIG_MEMCG */ + +#define __FIRST_OBJEXT_FLAG (1UL << 0) + +#endif /* CONFIG_MEMCG */ + +enum objext_flags { + /* the next bit after the last actual flag */ + __NR_OBJEXTS_FLAGS =3D __FIRST_OBJEXT_FLAG, +}; + +#define OBJEXTS_FLAGS_MASK (__NR_OBJEXTS_FLAGS - 1) + +#ifdef CONFIG_MEMCG =20 static inline bool folio_memcg_kmem(struct folio *folio); =20 @@ -388,7 +403,7 @@ static inline struct mem_cgroup *__folio_memcg(struct f= olio *folio) VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJEXTS, folio); VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_KMEM, folio); =20 - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return (struct mem_cgroup *)(memcg_data & ~OBJEXTS_FLAGS_MASK); } =20 /* @@ -409,7 +424,7 @@ static inline struct obj_cgroup *__folio_objcg(struct f= olio *folio) VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJEXTS, folio); VM_BUG_ON_FOLIO(!(memcg_data & MEMCG_DATA_KMEM), folio); =20 - return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return (struct obj_cgroup *)(memcg_data & ~OBJEXTS_FLAGS_MASK); } =20 /* @@ -466,11 +481,11 @@ static inline struct mem_cgroup *folio_memcg_rcu(stru= ct folio *folio) if (memcg_data & MEMCG_DATA_KMEM) { struct obj_cgroup *objcg; =20 - objcg =3D (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + objcg =3D (void *)(memcg_data & ~OBJEXTS_FLAGS_MASK); return obj_cgroup_memcg(objcg); } =20 - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return (struct mem_cgroup *)(memcg_data & ~OBJEXTS_FLAGS_MASK); } =20 /* @@ -509,11 +524,11 @@ static inline struct mem_cgroup *folio_memcg_check(st= ruct folio *folio) if (memcg_data & MEMCG_DATA_KMEM) { struct obj_cgroup *objcg; =20 - objcg =3D (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + objcg =3D (void *)(memcg_data & ~OBJEXTS_FLAGS_MASK); return obj_cgroup_memcg(objcg); } =20 - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return (struct mem_cgroup *)(memcg_data & ~OBJEXTS_FLAGS_MASK); } =20 static inline struct mem_cgroup *page_memcg_check(struct page *page) diff --git a/mm/slab.h b/mm/slab.h index 187acc593397..60417fd262ea 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -448,10 +448,8 @@ static inline struct slabobj_ext *slab_obj_exts(struct= slab *slab) slab_page(slab)); VM_BUG_ON_PAGE(obj_exts & MEMCG_DATA_KMEM, slab_page(slab)); =20 - return (struct slabobj_ext *)(obj_exts & ~MEMCG_DATA_FLAGS_MASK); -#else - return (struct slabobj_ext *)obj_exts; #endif + return (struct slabobj_ext *)(obj_exts & ~OBJEXTS_FLAGS_MASK); } =20 int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s, --=20 2.42.0.758.gaed0368e0e-goog From nobody Thu Jan 1 07:45:32 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7850FC00A8F for ; Tue, 24 Oct 2023 13:48:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via 
listexpand id S1343656AbjJXNsc (ORCPT ); Tue, 24 Oct 2023 09:48:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50206 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234666AbjJXNrp (ORCPT ); Tue, 24 Oct 2023 09:47:45 -0400 Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4A31C170D for ; Tue, 24 Oct 2023 06:47:09 -0700 (PDT) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-5a909b4e079so52558657b3.2 for ; Tue, 24 Oct 2023 06:47:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1698155228; x=1698760028; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=9NoZOXA8j9UYH0qbl3Heb4+Wi1EaC598WUGqABNW4pA=; b=jbEojL0XZTmKOwfTOL1JrB3j2DoioPlUddoBoQWJbgvOwZobLdb1pOoe4yVgr6lOe7 egEUFAZ8AXKqHEizuzXI3b9C3aJmIIThNHU2IrVSUZ5rEl05sDgGnXAw8Hqppf+Uad7z Zx092r39CZJ1BDlqhvzpdXnqDLw2uXwhtEa8uBOvlK2TpbFzqAtN6EDYUZCE3LyvXtCH 6lYpHRBfKbd0mYrGAsJ0IrXvQvTacok4XeYp/NsQmbls8cqdb4M3JY6g6loRYl50JeBF zE/DL7gj/WBnpcV+TqiluDHjfBFvwVEgXNl4CULPHYaI+p3SQaW4K1zM3OUbrQvpYKj9 YiwQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1698155228; x=1698760028; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=9NoZOXA8j9UYH0qbl3Heb4+Wi1EaC598WUGqABNW4pA=; b=doX1sABDmL2KEVbV/KW2uMFVspChmEqrtjkiiYl+dgBFQACXi3uozwxusjZpolGhpg WWcmFqa85PwQjwXWcfI2xG4USx16hApmUBjOmiQTRieX70I9KMFvbqPrhTct1wjqgbKz z/Cop+qCT1jJFcn5uhoxzN6OMHXDfzR8ERRTrGeB8ye79irNYCrroooaIj0EEopfWFBS rui+hkcxAiORXROXUP64ZfUKQ9mv0ZhcX0Ah2eDAJHtY2NtvJ0nctvYp2duyoIZzHWTZ yWb6M4yA+q49WLLfD06wMQfeYBpULWW7TSq9FRaoX7SrxXfJH5nZO/aUufRK+s0loJBa pnug== X-Gm-Message-State: AOJu0Yy2oLu9o95rxi/lA0t7Rvs90nJmN91Hfb3vvojY79yfJFYp/iCL YsW7Wm/BPDk2m1hM4T1mbujim7o5Ark= X-Google-Smtp-Source: AGHT+IEp6i7OIYLqR0zo2UBT4fCb6INeX9PpoJn2QVEvfaV9Q3dpXadwEMR0perH7aOFZkHGn0YJqemKbwE= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:45ba:3318:d7a5:336a]) (user=surenb job=sendgmr) by 2002:a81:a08c:0:b0:57a:e0b:f63 with SMTP id x134-20020a81a08c000000b0057a0e0b0f63mr272098ywg.7.1698155228147; Tue, 24 Oct 2023 06:47:08 -0700 (PDT) Date: Tue, 24 Oct 2023 06:46:09 -0700 In-Reply-To: <20231024134637.3120277-1-surenb@google.com> Mime-Version: 1.0 References: <20231024134637.3120277-1-surenb@google.com> X-Mailer: git-send-email 2.42.0.758.gaed0368e0e-goog Message-ID: <20231024134637.3120277-13-surenb@google.com> Subject: [PATCH v2 12/39] lib: code tagging framework From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, 
hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, vvvvvv@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Add basic infrastructure to support code tagging, which stores common tag information: the module name, function name, file name and line number. Provide functions to register a new code tag type and to navigate between code tags. Co-developed-by: Kent Overstreet Signed-off-by: Kent Overstreet Signed-off-by: Suren Baghdasaryan --- include/linux/codetag.h | 71 ++++++++++++ lib/Kconfig.debug | 4 + lib/Makefile | 1 + lib/codetag.c | 199 ++++++++++++++++++++++++++++++++++++++++ 4 files changed, 275 insertions(+) create mode 100644 include/linux/codetag.h create mode 100644 lib/codetag.c diff --git a/include/linux/codetag.h b/include/linux/codetag.h new file mode 100644 index 000000000000..a9d7adecc2a5 --- /dev/null +++ b/include/linux/codetag.h @@ -0,0 +1,71 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * code tagging framework + */ +#ifndef _LINUX_CODETAG_H +#define _LINUX_CODETAG_H + +#include + +struct codetag_iterator; struct codetag_type; struct seq_buf; struct module; + +/* + * An instance of this structure is created in a special ELF section at ev= ery + * code location being tagged. At runtime, the special section is treated= as + * an array of these.
+ */ +struct codetag { + unsigned int flags; /* used in later patches */ + unsigned int lineno; + const char *modname; + const char *function; + const char *filename; +} __aligned(8); + +union codetag_ref { + struct codetag *ct; +}; + +struct codetag_range { + struct codetag *start; + struct codetag *stop; +}; + +struct codetag_module { + struct module *mod; + struct codetag_range range; +}; + +struct codetag_type_desc { + const char *section; + size_t tag_size; +}; + +struct codetag_iterator { + struct codetag_type *cttype; + struct codetag_module *cmod; + unsigned long mod_id; + struct codetag *ct; +}; + +#define CODE_TAG_INIT { \ + .modname =3D KBUILD_MODNAME, \ + .function =3D __func__, \ + .filename =3D __FILE__, \ + .lineno =3D __LINE__, \ + .flags =3D 0, \ +} + +void codetag_lock_module_list(struct codetag_type *cttype, bool lock); +struct codetag_iterator codetag_get_ct_iter(struct codetag_type *cttype); +struct codetag *codetag_next_ct(struct codetag_iterator *iter); + +void codetag_to_text(struct seq_buf *out, struct codetag *ct); + +struct codetag_type * +codetag_register_type(const struct codetag_type_desc *desc); + +#endif /* _LINUX_CODETAG_H */ diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index fa307f93fa2e..2acbef24e93e 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -962,6 +962,10 @@ config DEBUG_STACKOVERFLOW =20 If in doubt, say "N". =20 +config CODE_TAGGING + bool + select KALLSYMS + source "lib/Kconfig.kasan" source "lib/Kconfig.kfence" source "lib/Kconfig.kmsan" diff --git a/lib/Makefile b/lib/Makefile index 740109b6e2c8..b50212b5b999 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -233,6 +233,7 @@ obj-$(CONFIG_OF_RECONFIG_NOTIFIER_ERROR_INJECT) +=3D \ of-reconfig-notifier-error-inject.o obj-$(CONFIG_FUNCTION_ERROR_INJECTION) +=3D error-inject.o =20 +obj-$(CONFIG_CODE_TAGGING) +=3D codetag.o lib-$(CONFIG_GENERIC_BUG) +=3D bug.o =20 obj-$(CONFIG_HAVE_ARCH_TRACEHOOK) +=3D syscall.o diff --git a/lib/codetag.c b/lib/codetag.c new file mode 100644 index 000000000000..7708f8388e55 --- /dev/null +++ b/lib/codetag.c @@ -0,0 +1,199 @@ +// SPDX-License-Identifier: GPL-2.0-only +#include +#include +#include +#include +#include +#include + +struct codetag_type { + struct list_head link; + unsigned int count; + struct idr mod_idr; + struct rw_semaphore mod_lock; /* protects mod_idr */ + struct codetag_type_desc desc; +}; + +static DEFINE_MUTEX(codetag_lock); +static LIST_HEAD(codetag_types); + +void codetag_lock_module_list(struct codetag_type *cttype, bool lock) +{ + if (lock) + down_read(&cttype->mod_lock); + else + up_read(&cttype->mod_lock); +} + +struct codetag_iterator codetag_get_ct_iter(struct codetag_type *cttype) +{ + struct codetag_iterator iter =3D { + .cttype =3D cttype, + .cmod =3D NULL, + .mod_id =3D 0, + .ct =3D NULL, + }; + + return iter; +} + +static inline struct codetag *get_first_module_ct(struct codetag_module *c= mod) +{ + return cmod->range.start < cmod->range.stop ? cmod->range.start : NULL; +} + +static inline +struct codetag *get_next_module_ct(struct codetag_iterator *iter) +{ + struct codetag *res =3D (struct codetag *) + ((char *)iter->ct + iter->cttype->desc.tag_size); + + return res < iter->cmod->range.stop ? 
res : NULL; +} + +struct codetag *codetag_next_ct(struct codetag_iterator *iter) +{ + struct codetag_type *cttype =3D iter->cttype; + struct codetag_module *cmod; + struct codetag *ct; + + lockdep_assert_held(&cttype->mod_lock); + + if (unlikely(idr_is_empty(&cttype->mod_idr))) + return NULL; + + ct =3D NULL; + while (true) { + cmod =3D idr_find(&cttype->mod_idr, iter->mod_id); + + /* If module was removed move to the next one */ + if (!cmod) + cmod =3D idr_get_next_ul(&cttype->mod_idr, + &iter->mod_id); + + /* Exit if no more modules */ + if (!cmod) + break; + + if (cmod !=3D iter->cmod) { + iter->cmod =3D cmod; + ct =3D get_first_module_ct(cmod); + } else + ct =3D get_next_module_ct(iter); + + if (ct) + break; + + iter->mod_id++; + } + + iter->ct =3D ct; + return ct; +} + +void codetag_to_text(struct seq_buf *out, struct codetag *ct) +{ + seq_buf_printf(out, "%s:%u module:%s func:%s", + ct->filename, ct->lineno, + ct->modname, ct->function); +} + +static inline size_t range_size(const struct codetag_type *cttype, + const struct codetag_range *range) +{ + return ((char *)range->stop - (char *)range->start) / + cttype->desc.tag_size; +} + +static void *get_symbol(struct module *mod, const char *prefix, const char= *name) +{ + char buf[64]; + int res; + + res =3D snprintf(buf, sizeof(buf), "%s%s", prefix, name); + if (WARN_ON(res < 1 || res > sizeof(buf))) + return NULL; + + return mod ? + (void *)find_kallsyms_symbol_value(mod, buf) : + (void *)kallsyms_lookup_name(buf); +} + +static struct codetag_range get_section_range(struct module *mod, + const char *section) +{ + return (struct codetag_range) { + get_symbol(mod, "__start_", section), + get_symbol(mod, "__stop_", section), + }; +} + +static int codetag_module_init(struct codetag_type *cttype, struct module = *mod) +{ + struct codetag_range range; + struct codetag_module *cmod; + int err; + + range =3D get_section_range(mod, cttype->desc.section); + if (!range.start || !range.stop) { + pr_warn("Failed to load code tags of type %s from the module %s\n", + cttype->desc.section, + mod ? 
mod->name : "(built-in)"); + return -EINVAL; + } + + /* Ignore empty ranges */ + if (range.start =3D=3D range.stop) + return 0; + + BUG_ON(range.start > range.stop); + + cmod =3D kmalloc(sizeof(*cmod), GFP_KERNEL); + if (unlikely(!cmod)) + return -ENOMEM; + + cmod->mod =3D mod; + cmod->range =3D range; + + down_write(&cttype->mod_lock); + err =3D idr_alloc(&cttype->mod_idr, cmod, 0, 0, GFP_KERNEL); + if (err >=3D 0) + cttype->count +=3D range_size(cttype, &range); + up_write(&cttype->mod_lock); + + if (err < 0) { + kfree(cmod); + return err; + } + + return 0; +} + +struct codetag_type * +codetag_register_type(const struct codetag_type_desc *desc) +{ + struct codetag_type *cttype; + int err; + + BUG_ON(desc->tag_size <=3D 0); + + cttype =3D kzalloc(sizeof(*cttype), GFP_KERNEL); + if (unlikely(!cttype)) + return ERR_PTR(-ENOMEM); + + cttype->desc =3D *desc; + idr_init(&cttype->mod_idr); + init_rwsem(&cttype->mod_lock); + + err =3D codetag_module_init(cttype, NULL); + if (unlikely(err)) { + kfree(cttype); + return ERR_PTR(err); + } + + mutex_lock(&codetag_lock); + list_add_tail(&cttype->link, &codetag_types); + mutex_unlock(&codetag_lock); + + return cttype; +} --=20 2.42.0.758.gaed0368e0e-goog From nobody Thu Jan 1 07:45:32 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 28349C07545 for ; Tue, 24 Oct 2023 13:48:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343615AbjJXNs2 (ORCPT ); Tue, 24 Oct 2023 09:48:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50248 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1343592AbjJXNrs (ORCPT ); Tue, 24 Oct 2023 09:47:48 -0400 Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 88F02171D for ; Tue, 24 Oct 2023 06:47:11 -0700 (PDT) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-5a7d1816bccso59393267b3.1 for ; Tue, 24 Oct 2023 06:47:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1698155230; x=1698760030; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=YavwLOEBRwC8H4iLOoVDT3e7zkyr1je9EHR8XAi9ck4=; b=GDlWE/lPFZ1I/Iml9+vKviq93RaiS/dCczcN9/wW7vo+ofVqQZmH5h7fFJcAnE4byy PchdHm+UtawP0U9Sck1J0VuOWsnHwoGQ+nA/XUZ8p540Mw1yHFV7W3A0YhYgbSTVluIS Z3mDUcMqpNaa60+WPNHzboP6s3wRK8MKKfu9I85uiwykEebJ6NgmcBWsGWtPw9mX0fIU Av8vgAvPWuXNHRy5aAP0J3h25gLzM8BkokPdI17uEuuRQsgO8tg95MGlXTLrPlvRh283 yEiHCkl9ifphN9JWE3OoySnNTJMEC4xr/H/d8yyWc1ko7iRAsCK/Nwi/QA5BP10DY9Lp KipA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1698155230; x=1698760030; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=YavwLOEBRwC8H4iLOoVDT3e7zkyr1je9EHR8XAi9ck4=; b=AObolBRGZ8X+1TsOouuoj8Iot1ZRvLesgc7zuxry/LZYjHjsvDIutNCR7yVygB+4AI Xx3GoAhHerplXSh2x2JdDdjg6FnTfylMqfL+aS35hMZLgjT8oeFcscYJfETMcKRwC8go 5T1EQmhXrkLIUiO+u4T/psu9qaMnm8PvKwsbz459TcweRx9Um5yiizWmjTSFnqxOqmda UuZTVOHTPNpmjfF4gH+15S3ZNULXIJevQ5rqX7l0bHoq9fprpaybM1Q3LsnbgYJJPLyp 9Qa64Ryk4dliXXtFI00sUlSTRErqXwIQnM0+6CFrE6mzkj3L1FAf6NCS2WcxWooxFzqy /bTg== X-Gm-Message-State: 
Date: Tue, 24 Oct 2023 06:46:10 -0700
Message-ID: <20231024134637.3120277-14-surenb@google.com>
Subject: [PATCH v2 13/39] lib: code tagging module support
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Add support for code tagging from dynamically loaded modules.
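To make the new hooks concrete, a codetag type can supply module_load/module_unload
callbacks in its codetag_type_desc and they will be invoked as tagged modules come
and go. The sketch below is illustrative only; the "my_tags" section name, the
callbacks and the message are hypothetical and not part of this patch:

  /* Hypothetical sketch of a codetag type opting into module notifications. */
  static void my_module_load(struct codetag_type *cttype,
			     struct codetag_module *cmod)
  {
  	/* Invoked under cttype->mod_lock while the module's tags are added. */
  	pr_info("code tags loaded from %s\n",
  		cmod->mod ? cmod->mod->name : "(built-in)");
  }

  static void my_module_unload(struct codetag_type *cttype,
			       struct codetag_module *cmod)
  {
  	/* Last chance to drop per-module state before the tags disappear. */
  }

  static const struct codetag_type_desc my_desc = {
  	.section	= "my_tags",	/* hypothetical ELF section name */
  	.tag_size	= sizeof(struct codetag),
  	.module_load	= my_module_load,
  	.module_unload	= my_module_unload,
  };

The type would then be registered as usual with codetag_register_type(&my_desc).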
Signed-off-by: Suren Baghdasaryan
Co-developed-by: Kent Overstreet
Signed-off-by: Kent Overstreet
---
 include/linux/codetag.h | 12 +++++++++
 kernel/module/main.c    |  4 +++
 lib/codetag.c           | 58 +++++++++++++++++++++++++++++++++++++++--
 3 files changed, 72 insertions(+), 2 deletions(-)

diff --git a/include/linux/codetag.h b/include/linux/codetag.h
index a9d7adecc2a5..386733e89b31 100644
--- a/include/linux/codetag.h
+++ b/include/linux/codetag.h
@@ -42,6 +42,10 @@ struct codetag_module {
 struct codetag_type_desc {
 	const char *section;
 	size_t tag_size;
+	void (*module_load)(struct codetag_type *cttype,
+			    struct codetag_module *cmod);
+	void (*module_unload)(struct codetag_type *cttype,
+			      struct codetag_module *cmod);
 };
 
 struct codetag_iterator {
@@ -68,4 +72,12 @@ void codetag_to_text(struct seq_buf *out, struct codetag *ct);
 struct codetag_type *
 codetag_register_type(const struct codetag_type_desc *desc);
 
+#ifdef CONFIG_CODE_TAGGING
+void codetag_load_module(struct module *mod);
+void codetag_unload_module(struct module *mod);
+#else
+static inline void codetag_load_module(struct module *mod) {}
+static inline void codetag_unload_module(struct module *mod) {}
+#endif
+
 #endif /* _LINUX_CODETAG_H */
diff --git a/kernel/module/main.c b/kernel/module/main.c
index 98fedfdb8db5..c0d3f562c7ab 100644
--- a/kernel/module/main.c
+++ b/kernel/module/main.c
@@ -56,6 +56,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include "internal.h"
@@ -1242,6 +1243,7 @@ static void free_module(struct module *mod)
 {
 	trace_module_free(mod);
 
+	codetag_unload_module(mod);
 	mod_sysfs_teardown(mod);
 
 	/*
@@ -2975,6 +2977,8 @@ static int load_module(struct load_info *info, const char __user *uargs,
 	/* Get rid of temporary copy. */
 	free_copy(info, flags);
 
+	codetag_load_module(mod);
+
 	/* Done! */
 	trace_module_load(mod);
 
diff --git a/lib/codetag.c b/lib/codetag.c
index 7708f8388e55..4ea57fb37346 100644
--- a/lib/codetag.c
+++ b/lib/codetag.c
@@ -108,15 +108,20 @@ static inline size_t range_size(const struct codetag_type *cttype,
 static void *get_symbol(struct module *mod, const char *prefix, const char *name)
 {
 	char buf[64];
+	void *ret;
 	int res;
 
 	res = snprintf(buf, sizeof(buf), "%s%s", prefix, name);
 	if (WARN_ON(res < 1 || res > sizeof(buf)))
 		return NULL;
 
-	return mod ?
+	preempt_disable();
+	ret = mod ?
 		(void *)find_kallsyms_symbol_value(mod, buf) :
 		(void *)kallsyms_lookup_name(buf);
+	preempt_enable();
+
+	return ret;
 }
 
 static struct codetag_range get_section_range(struct module *mod,
@@ -157,8 +162,11 @@ static int codetag_module_init(struct codetag_type *cttype, struct module *mod)
 
 	down_write(&cttype->mod_lock);
 	err = idr_alloc(&cttype->mod_idr, cmod, 0, 0, GFP_KERNEL);
-	if (err >= 0)
+	if (err >= 0) {
 		cttype->count += range_size(cttype, &range);
+		if (cttype->desc.module_load)
+			cttype->desc.module_load(cttype, cmod);
+	}
 	up_write(&cttype->mod_lock);
 
 	if (err < 0) {
@@ -197,3 +205,49 @@ codetag_register_type(const struct codetag_type_desc *desc)
 
 	return cttype;
 }
+
+void codetag_load_module(struct module *mod)
+{
+	struct codetag_type *cttype;
+
+	if (!mod)
+		return;
+
+	mutex_lock(&codetag_lock);
+	list_for_each_entry(cttype, &codetag_types, link)
+		codetag_module_init(cttype, mod);
+	mutex_unlock(&codetag_lock);
+}
+
+void codetag_unload_module(struct module *mod)
+{
+	struct codetag_type *cttype;
+
+	if (!mod)
+		return;
+
+	mutex_lock(&codetag_lock);
+	list_for_each_entry(cttype, &codetag_types, link) {
+		struct codetag_module *found = NULL;
+		struct codetag_module *cmod;
+		unsigned long mod_id, tmp;
+
+		down_write(&cttype->mod_lock);
+		idr_for_each_entry_ul(&cttype->mod_idr, cmod, tmp, mod_id) {
+			if (cmod->mod && cmod->mod == mod) {
+				found = cmod;
+				break;
+			}
+		}
+		if (found) {
+			if (cttype->desc.module_unload)
+				cttype->desc.module_unload(cttype, cmod);
+
+			cttype->count -= range_size(cttype, &cmod->range);
+			idr_remove(&cttype->mod_idr, mod_id);
+			kfree(cmod);
+		}
+		up_write(&cttype->mod_lock);
+	}
+	mutex_unlock(&codetag_lock);
+}
-- 
2.42.0.758.gaed0368e0e-goog

From nobody Thu Jan 1 07:45:32 2026
Date: Tue, 24 Oct 2023 06:46:11 -0700
Message-ID: <20231024134637.3120277-15-surenb@google.com>
Subject: [PATCH v2 14/39] lib: prevent module unloading if memory is not freed
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Skip freeing the module's data section if there are non-zero allocation
tags, because otherwise, once these allocations are freed, accessing their
code tags would cause a use-after-free.
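To illustrate the new contract (the callback and helper below are hypothetical,
not part of this patch): a codetag type can now veto freeing of the module's
core data sections by returning false from its module_unload hook, which keeps
the tags in those sections readable even after the module is gone:

  /* Hypothetical sketch: vetoing data-section freeing at unload. */
  static bool my_module_unload(struct codetag_type *cttype,
			       struct codetag_module *cmod)
  {
  	/*
  	 * Returning false makes codetag_unload_module() report the module
  	 * as still referenced; free_module() then skips freeing the
  	 * module's core data sections, so the tags stay valid.
  	 */
  	return !my_tags_still_referenced(cmod);	/* hypothetical helper */
  }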
Signed-off-by: Suren Baghdasaryan
---
 include/linux/codetag.h |  6 +++---
 kernel/module/main.c    | 23 +++++++++++++--------
 lib/codetag.c           | 11 ++++++++---
 3 files changed, 26 insertions(+), 14 deletions(-)

diff --git a/include/linux/codetag.h b/include/linux/codetag.h
index 386733e89b31..d98e4c8e86f0 100644
--- a/include/linux/codetag.h
+++ b/include/linux/codetag.h
@@ -44,7 +44,7 @@ struct codetag_type_desc {
 	size_t tag_size;
 	void (*module_load)(struct codetag_type *cttype,
 			    struct codetag_module *cmod);
-	void (*module_unload)(struct codetag_type *cttype,
+	bool (*module_unload)(struct codetag_type *cttype,
 			      struct codetag_module *cmod);
 };
 
@@ -74,10 +74,10 @@ codetag_register_type(const struct codetag_type_desc *desc);
 
 #ifdef CONFIG_CODE_TAGGING
 void codetag_load_module(struct module *mod);
-void codetag_unload_module(struct module *mod);
+bool codetag_unload_module(struct module *mod);
 #else
 static inline void codetag_load_module(struct module *mod) {}
-static inline void codetag_unload_module(struct module *mod) {}
+static inline bool codetag_unload_module(struct module *mod) { return true; }
 #endif
 
 #endif /* _LINUX_CODETAG_H */
diff --git a/kernel/module/main.c b/kernel/module/main.c
index c0d3f562c7ab..079f40792ce8 100644
--- a/kernel/module/main.c
+++ b/kernel/module/main.c
@@ -1211,15 +1211,19 @@ static void *module_memory_alloc(unsigned int size, enum mod_mem_type type)
 	return module_alloc(size);
 }
 
-static void module_memory_free(void *ptr, enum mod_mem_type type)
+static void module_memory_free(void *ptr, enum mod_mem_type type,
+			       bool unload_codetags)
 {
+	if (!unload_codetags && mod_mem_type_is_core_data(type))
+		return;
+
 	if (mod_mem_use_vmalloc(type))
 		vfree(ptr);
 	else
 		module_memfree(ptr);
 }
 
-static void free_mod_mem(struct module *mod)
+static void free_mod_mem(struct module *mod, bool unload_codetags)
 {
 	for_each_mod_mem_type(type) {
 		struct module_memory *mod_mem = &mod->mem[type];
@@ -1230,20 +1234,23 @@ static void free_mod_mem(struct module *mod)
 		/* Free lock-classes; relies on the preceding sync_rcu(). */
 		lockdep_free_key_range(mod_mem->base, mod_mem->size);
 		if (mod_mem->size)
-			module_memory_free(mod_mem->base, type);
+			module_memory_free(mod_mem->base, type,
+					   unload_codetags);
 	}
 
 	/* MOD_DATA hosts mod, so free it at last */
 	lockdep_free_key_range(mod->mem[MOD_DATA].base, mod->mem[MOD_DATA].size);
-	module_memory_free(mod->mem[MOD_DATA].base, MOD_DATA);
+	module_memory_free(mod->mem[MOD_DATA].base, MOD_DATA, unload_codetags);
 }
 
 /* Free a module, remove from lists, etc.
 */
 static void free_module(struct module *mod)
 {
+	bool unload_codetags;
+
 	trace_module_free(mod);
 
-	codetag_unload_module(mod);
+	unload_codetags = codetag_unload_module(mod);
 	mod_sysfs_teardown(mod);
 
 	/*
@@ -1285,7 +1292,7 @@ static void free_module(struct module *mod)
 	kfree(mod->args);
 	percpu_modfree(mod);
 
-	free_mod_mem(mod);
+	free_mod_mem(mod, unload_codetags);
 }
 
 void *__symbol_get(const char *symbol)
@@ -2295,7 +2302,7 @@ static int move_module(struct module *mod, struct load_info *info)
 	return 0;
 out_enomem:
 	for (t--; t >= 0; t--)
-		module_memory_free(mod->mem[t].base, t);
+		module_memory_free(mod->mem[t].base, t, true);
 	return ret;
 }
 
@@ -2425,7 +2432,7 @@ static void module_deallocate(struct module *mod, struct load_info *info)
 	percpu_modfree(mod);
 	module_arch_freeing_init(mod);
 
-	free_mod_mem(mod);
+	free_mod_mem(mod, true);
 }
 
 int __weak module_finalize(const Elf_Ehdr *hdr,
diff --git a/lib/codetag.c b/lib/codetag.c
index 4ea57fb37346..0ad4ea66c769 100644
--- a/lib/codetag.c
+++ b/lib/codetag.c
@@ -5,6 +5,7 @@
 #include
 #include
 #include
+#include
 
 struct codetag_type {
 	struct list_head link;
@@ -219,12 +220,13 @@ void codetag_load_module(struct module *mod)
 	mutex_unlock(&codetag_lock);
 }
 
-void codetag_unload_module(struct module *mod)
+bool codetag_unload_module(struct module *mod)
 {
 	struct codetag_type *cttype;
+	bool unload_ok = true;
 
 	if (!mod)
-		return;
+		return true;
 
 	mutex_lock(&codetag_lock);
 	list_for_each_entry(cttype, &codetag_types, link) {
@@ -241,7 +243,8 @@ void codetag_unload_module(struct module *mod)
 		}
 		if (found) {
 			if (cttype->desc.module_unload)
-				cttype->desc.module_unload(cttype, cmod);
+				if (!cttype->desc.module_unload(cttype, cmod))
+					unload_ok = false;
 
 			cttype->count -= range_size(cttype, &cmod->range);
 			idr_remove(&cttype->mod_idr, mod_id);
@@ -250,4 +253,6 @@ void codetag_unload_module(struct module *mod)
 		up_write(&cttype->mod_lock);
 	}
 	mutex_unlock(&codetag_lock);
+
+	return unload_ok;
 }
-- 
2.42.0.758.gaed0368e0e-goog

From nobody Thu Jan 1 07:45:32 2026
Date: Tue, 24 Oct 2023 06:46:12 -0700
Message-ID: <20231024134637.3120277-16-surenb@google.com>
Subject: [PATCH v2 15/39] lib: add allocation tagging support for memory allocation profiling
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Introduce CONFIG_MEM_ALLOC_PROFILING which provides definitions to easily
instrument memory allocators. It registers an "alloc_tags" codetag type
with /proc/allocinfo interface to output allocation tag information when
the feature is enabled.
CONFIG_MEM_ALLOC_PROFILING_DEBUG is provided for debugging the memory
allocation profiling instrumentation.
Memory allocation profiling can be enabled or disabled at runtime using
/proc/sys/vm/mem_profiling sysctl when CONFIG_MEM_ALLOC_PROFILING_DEBUG=n.
CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT enables memory allocation
profiling by default.
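For a sense of the intended instrumentation pattern (a sketch only; my_alloc
and __my_alloc are hypothetical, and the kernel's real allocator hooks are
wired up later in this series), a wrapper can pin the callsite's tag in
current->alloc_tag for the duration of the allocation:

  /* Hypothetical allocator wrapper built on DEFINE_ALLOC_TAG. */
  #define my_alloc(size)						\
  ({									\
  	DEFINE_ALLOC_TAG(_alloc_tag, _old);				\
  	void *_res = __my_alloc(size);	/* hypothetical backend */	\
  	alloc_tag_restore(&_alloc_tag, _old);				\
  	_res;								\
  })

The instrumented backend then accounts bytes and calls against the saved tag
via alloc_tag_add()/alloc_tag_sub(), which is what surfaces per-callsite
totals in /proc/allocinfo.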
Co-developed-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
Signed-off-by: Kent Overstreet
---
 Documentation/admin-guide/sysctl/vm.rst |  16 +++
 Documentation/filesystems/proc.rst      |  28 +++++
 include/asm-generic/codetag.lds.h       |  14 +++
 include/asm-generic/vmlinux.lds.h       |   3 +
 include/linux/alloc_tag.h               | 133 ++++++++++++++++++++
 include/linux/sched.h                   |  24 ++++
 lib/Kconfig.debug                       |  25 ++++
 lib/Makefile                            |   2 +
 lib/alloc_tag.c                         | 158 ++++++++++++++++++++++
 scripts/module.lds.S                    |   7 ++
 10 files changed, 410 insertions(+)
 create mode 100644 include/asm-generic/codetag.lds.h
 create mode 100644 include/linux/alloc_tag.h
 create mode 100644 lib/alloc_tag.c

diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index 45ba1f4dc004..0a012ac13a38 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -43,6 +43,7 @@ Currently, these files are in /proc/sys/vm:
 - legacy_va_layout
 - lowmem_reserve_ratio
 - max_map_count
+- mem_profiling (only if CONFIG_MEM_ALLOC_PROFILING=y)
 - memory_failure_early_kill
 - memory_failure_recovery
 - min_free_kbytes
@@ -425,6 +426,21 @@
 e.g., up to one or two maps per allocation.
 
 The default value is 65530.
 
 
+mem_profiling
+=============
+
+Enable memory profiling (when CONFIG_MEM_ALLOC_PROFILING=y)
+
+1: Enable memory profiling.
+
+0: Disable memory profiling.
+
+Enabling memory profiling introduces a small performance overhead for all
+memory allocations.
+
+The default value depends on CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT.
+
+
 memory_failure_early_kill:
 ==========================
 
diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
index 2b59cff8be17..41a7e6d95fe8 100644
--- a/Documentation/filesystems/proc.rst
+++ b/Documentation/filesystems/proc.rst
@@ -688,6 +688,7 @@ files are there, and which are missing.
 ============ ===============================================================
 File         Content
 ============ ===============================================================
+ allocinfo   Memory allocations profiling information
 apm          Advanced power management info
 buddyinfo    Kernel memory allocator information (see text)  (2.5)
 bus          Directory containing bus specific information
@@ -947,6 +948,33 @@ also be allocatable although a lot of filesystem metadata may have to be
 reclaimed to achieve this.
 
 
+allocinfo
+~~~~~~~~~
+
+Provides information about memory allocations at all locations in the code
+base. Each allocation in the code is identified by its source file, line
+number, module and the function calling the allocation. The number of bytes
+allocated at each location is reported.
+
+Example output.
+
+::
+
+  > cat /proc/allocinfo
+
+   153MiB mm/slub.c:1826 module:slub func:alloc_slab_page
+  6.08MiB mm/slab_common.c:950 module:slab_common func:_kmalloc_order
+  5.09MiB mm/memcontrol.c:2814 module:memcontrol func:alloc_slab_obj_exts
+  4.54MiB mm/page_alloc.c:5777 module:page_alloc func:alloc_pages_exact
+  1.32MiB include/asm-generic/pgalloc.h:63 module:pgtable func:__pte_alloc_one
+  1.16MiB fs/xfs/xfs_log_priv.h:700 module:xfs func:xlog_kvmalloc
+  1.00MiB mm/swap_cgroup.c:48 module:swap_cgroup func:swap_cgroup_prepare
+   734KiB fs/xfs/kmem.c:20 module:xfs func:kmem_alloc
+   640KiB kernel/rcu/tree.c:3184 module:tree func:fill_page_cache_func
+   640KiB drivers/char/virtio_console.c:452 module:virtio_console func:alloc_buf
+  ...
+
+
 meminfo
 ~~~~~~~
 
diff --git a/include/asm-generic/codetag.lds.h b/include/asm-generic/codetag.lds.h
new file mode 100644
index 000000000000..64f536b80380
--- /dev/null
+++ b/include/asm-generic/codetag.lds.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __ASM_GENERIC_CODETAG_LDS_H
+#define __ASM_GENERIC_CODETAG_LDS_H
+
+#define SECTION_WITH_BOUNDARIES(_name)	\
+	. = ALIGN(8);			\
+	__start_##_name = .;		\
+	KEEP(*(_name))			\
+	__stop_##_name = .;
+
+#define CODETAG_SECTIONS()		\
+	SECTION_WITH_BOUNDARIES(alloc_tags)
+
+#endif /* __ASM_GENERIC_CODETAG_LDS_H */
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index 67d8dd2f1bde..f04661824939 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -50,6 +50,8 @@
  *               [__nosave_begin, __nosave_end] for the nosave data
  */
 
+#include
+
 #ifndef LOAD_OFFSET
 #define LOAD_OFFSET 0
 #endif
@@ -367,6 +369,7 @@
 	. = ALIGN(8);							\
 	BOUNDED_SECTION_BY(__dyndbg_classes, ___dyndbg_classes)		\
 	BOUNDED_SECTION_BY(__dyndbg, ___dyndbg)				\
+	CODETAG_SECTIONS()						\
 	LIKELY_PROFILE()						\
 	BRANCH_PROFILE()						\
 	TRACE_PRINTKS()							\
diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h
new file mode 100644
index 000000000000..cf55a149fa84
--- /dev/null
+++ b/include/linux/alloc_tag.h
@@ -0,0 +1,133 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * allocation tagging
+ */
+#ifndef _LINUX_ALLOC_TAG_H
+#define _LINUX_ALLOC_TAG_H
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+struct alloc_tag_counters {
+	u64 bytes;
+	u64 calls;
+};
+
+/*
+ * An instance of this structure is created in a special ELF section at every
+ * allocation callsite. At runtime, the special section is treated as
+ * an array of these. Embedded codetag utilizes codetag framework.
+ */
+struct alloc_tag {
+	struct codetag ct;
+	struct alloc_tag_counters __percpu *counters;
+} __aligned(8);
+
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+
+static inline struct alloc_tag *ct_to_alloc_tag(struct codetag *ct)
+{
+	return container_of(ct, struct alloc_tag, ct);
+}
+
+#ifdef ARCH_NEEDS_WEAK_PER_CPU
+/*
+ * When percpu variables are required to be defined as weak, static percpu
+ * variables can't be used inside a function (see comments for DECLARE_PER_CPU_SECTION).
+ */
+#error "Memory allocation profiling is incompatible with ARCH_NEEDS_WEAK_PER_CPU"
+#endif
+
+#define DEFINE_ALLOC_TAG(_alloc_tag, _old)				\
+	static DEFINE_PER_CPU(struct alloc_tag_counters, _alloc_tag_cntr); \
+	static struct alloc_tag _alloc_tag __used __aligned(8)		\
+	__section("alloc_tags") = {					\
+		.ct = CODE_TAG_INIT,					\
+		.counters = &_alloc_tag_cntr };				\
+	struct alloc_tag * __maybe_unused _old = alloc_tag_save(&_alloc_tag)
+
+DECLARE_STATIC_KEY_MAYBE(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
+			mem_alloc_profiling_key);
+
+static inline bool mem_alloc_profiling_enabled(void)
+{
+	return static_branch_maybe(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
+				   &mem_alloc_profiling_key);
+}
+
+static inline struct alloc_tag_counters alloc_tag_read(struct alloc_tag *tag)
+{
+	struct alloc_tag_counters v = { 0, 0 };
+	struct alloc_tag_counters *counter;
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		counter = per_cpu_ptr(tag->counters, cpu);
+		v.bytes += counter->bytes;
+		v.calls += counter->calls;
+	}
+
+	return v;
+}
+
+static inline void __alloc_tag_sub(union codetag_ref *ref, size_t bytes)
+{
+	struct alloc_tag *tag;
+
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+	WARN_ONCE(ref && !ref->ct, "alloc_tag was not set\n");
+#endif
+	if (!ref || !ref->ct)
+		return;
+
+	tag = ct_to_alloc_tag(ref->ct);
+
+	this_cpu_sub(tag->counters->bytes, bytes);
+	this_cpu_dec(tag->counters->calls);
+
+	ref->ct = NULL;
+}
+
+static inline void alloc_tag_sub(union codetag_ref *ref, size_t bytes)
+{
+	__alloc_tag_sub(ref, bytes);
+}
+
+static inline void alloc_tag_sub_noalloc(union codetag_ref *ref, size_t bytes)
+{
+	__alloc_tag_sub(ref, bytes);
+}
+
+static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag, size_t bytes)
+{
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+	WARN_ONCE(ref && ref->ct,
+		  "alloc_tag was not cleared (got tag for %s:%u)\n",\
+		  ref->ct->filename, ref->ct->lineno);
+
+	WARN_ONCE(!tag, "current->alloc_tag not set");
+#endif
+	if (!ref || !tag)
+		return;
+
+	ref->ct = &tag->ct;
+	this_cpu_add(tag->counters->bytes, bytes);
+	this_cpu_inc(tag->counters->calls);
+}
+
+#else
+
+#define DEFINE_ALLOC_TAG(_alloc_tag, _old)
+static inline void alloc_tag_sub(union codetag_ref *ref, size_t bytes) {}
+static inline void alloc_tag_sub_noalloc(union codetag_ref *ref, size_t bytes) {}
+static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag,
+				 size_t bytes) {}
+
+#endif
+
+#endif /* _LINUX_ALLOC_TAG_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 12a2554a3164..d35901d8de06 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -767,6 +767,10 @@ struct task_struct {
 	unsigned int			flags;
 	unsigned int			ptrace;
 
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+	struct alloc_tag		*alloc_tag;
+#endif
+
 #ifdef CONFIG_SMP
 	int				on_cpu;
 	struct __call_single_node	wake_entry;
@@ -806,6 +810,7 @@ struct task_struct {
 	struct task_group		*sched_task_group;
 #endif
 
+
 #ifdef CONFIG_UCLAMP_TASK
 	/*
 	 * Clamp values requested for a scheduling entity.
@@ -2457,4 +2462,23 @@ static inline int sched_core_idle_cpu(int cpu) { return idle_cpu(cpu); }
 
 extern void sched_set_stop_task(int cpu, struct task_struct *stop);
 
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+static inline struct alloc_tag *alloc_tag_save(struct alloc_tag *tag)
+{
+	swap(current->alloc_tag, tag);
+	return tag;
+}
+
+static inline void alloc_tag_restore(struct alloc_tag *tag, struct alloc_tag *old)
+{
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+	WARN(current->alloc_tag != tag, "current->alloc_tag was changed:\n");
+#endif
+	current->alloc_tag = old;
+}
+#else
+static inline struct alloc_tag *alloc_tag_save(struct alloc_tag *tag) { return NULL; }
+#define alloc_tag_restore(_tag, _old)
+#endif
+
 #endif
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 2acbef24e93e..475a14e70566 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -966,6 +966,31 @@ config CODE_TAGGING
 	bool
 	select KALLSYMS
 
+config MEM_ALLOC_PROFILING
+	bool "Enable memory allocation profiling"
+	default n
+	depends on PROC_FS
+	depends on !DEBUG_FORCE_WEAK_PER_CPU
+	select CODE_TAGGING
+	help
+	  Track allocation source code and record total allocation size
+	  initiated at that code location. The mechanism can be used to track
+	  memory leaks with a low performance and memory impact.
+
+config MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
+	bool "Enable memory allocation profiling by default"
+	default y
+	depends on MEM_ALLOC_PROFILING
+
+config MEM_ALLOC_PROFILING_DEBUG
+	bool "Memory allocation profiler debugging"
+	default n
+	depends on MEM_ALLOC_PROFILING
+	select MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
+	help
+	  Adds warnings with helpful error messages for memory allocation
+	  profiling.
+
 source "lib/Kconfig.kasan"
 source "lib/Kconfig.kfence"
 source "lib/Kconfig.kmsan"
diff --git a/lib/Makefile b/lib/Makefile
index b50212b5b999..4525469637fe 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -234,6 +234,8 @@ obj-$(CONFIG_OF_RECONFIG_NOTIFIER_ERROR_INJECT) += \
 obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
 
 obj-$(CONFIG_CODE_TAGGING) += codetag.o
+obj-$(CONFIG_MEM_ALLOC_PROFILING) += alloc_tag.o
+
 lib-$(CONFIG_GENERIC_BUG) += bug.o
 
 obj-$(CONFIG_HAVE_ARCH_TRACEHOOK) += syscall.o
diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
new file mode 100644
index 000000000000..4fc031f9cefd
--- /dev/null
+++ b/lib/alloc_tag.c
@@ -0,0 +1,158 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+static struct codetag_type *alloc_tag_cttype;
+
+DEFINE_STATIC_KEY_MAYBE(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
+			mem_alloc_profiling_key);
+
+static void *allocinfo_start(struct seq_file *m, loff_t *pos)
+{
+	struct codetag_iterator *iter;
+	struct codetag *ct;
+	loff_t node = *pos;
+
+	iter = kzalloc(sizeof(*iter), GFP_KERNEL);
+	m->private = iter;
+	if (!iter)
+		return NULL;
+
+	codetag_lock_module_list(alloc_tag_cttype, true);
+	*iter = codetag_get_ct_iter(alloc_tag_cttype);
+	while ((ct = codetag_next_ct(iter)) != NULL && node)
+		node--;
+
+	return ct ? iter : NULL;
+}
+
+static void *allocinfo_next(struct seq_file *m, void *arg, loff_t *pos)
+{
+	struct codetag_iterator *iter = (struct codetag_iterator *)arg;
+	struct codetag *ct = codetag_next_ct(iter);
+
+	(*pos)++;
+	if (!ct)
+		return NULL;
+
+	return iter;
+}
+
+static void allocinfo_stop(struct seq_file *m, void *arg)
+{
+	struct codetag_iterator *iter = (struct codetag_iterator *)m->private;
+
+	if (iter) {
+		codetag_lock_module_list(alloc_tag_cttype, false);
+		kfree(iter);
+	}
+}
+
+static void alloc_tag_to_text(struct seq_buf *out, struct codetag *ct)
+{
+	struct alloc_tag *tag = ct_to_alloc_tag(ct);
+	struct alloc_tag_counters counter = alloc_tag_read(tag);
+	s64 bytes = counter.bytes;
+	char val[10], *p = val;
+
+	if (bytes < 0) {
+		*p++ = '-';
+		bytes = -bytes;
+	}
+
+	string_get_size(bytes, 1,
+			STRING_SIZE_BASE2|STRING_SIZE_NOSPACE,
+			p, val + ARRAY_SIZE(val) - p);
+
+	seq_buf_printf(out, "%8s %8llu ", val, counter.calls);
+	codetag_to_text(out, ct);
+	seq_buf_putc(out, ' ');
+	seq_buf_putc(out, '\n');
+}
+
+static int allocinfo_show(struct seq_file *m, void *arg)
+{
+	struct codetag_iterator *iter = (struct codetag_iterator *)arg;
+	char *bufp;
+	size_t n = seq_get_buf(m, &bufp);
+	struct seq_buf buf;
+
+	seq_buf_init(&buf, bufp, n);
+	alloc_tag_to_text(&buf, iter->ct);
+	seq_commit(m, seq_buf_used(&buf));
+	return 0;
+}
+
+static const struct seq_operations allocinfo_seq_op = {
+	.start	= allocinfo_start,
+	.next	= allocinfo_next,
+	.stop	= allocinfo_stop,
+	.show	= allocinfo_show,
+};
+
+static void __init procfs_init(void)
+{
+	proc_create_seq("allocinfo", 0444, NULL, &allocinfo_seq_op);
+}
+
+static bool alloc_tag_module_unload(struct codetag_type *cttype,
+				    struct codetag_module *cmod)
+{
+	struct codetag_iterator iter = codetag_get_ct_iter(cttype);
+	struct alloc_tag_counters counter;
+	bool module_unused = true;
+	struct alloc_tag *tag;
+	struct codetag *ct;
+
+	for (ct = codetag_next_ct(&iter); ct; ct = codetag_next_ct(&iter)) {
+		if (iter.cmod != cmod)
+			continue;
+
+		tag = ct_to_alloc_tag(ct);
+		counter = alloc_tag_read(tag);
+
+		if (WARN(counter.bytes, "%s:%u module %s func:%s has %llu allocated at module unload",
+			 ct->filename, ct->lineno, ct->modname, ct->function, counter.bytes))
+			module_unused = false;
+	}
+
+	return module_unused;
+}
+
+static struct ctl_table memory_allocation_profiling_sysctls[] = {
+	{
+		.procname	= "mem_profiling",
+		.data		= &mem_alloc_profiling_key,
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+		.mode		= 0444,
+#else
+		.mode		= 0644,
+#endif
+		.proc_handler	= proc_do_static_key,
+	},
+	{ }
+};
+
+static int __init alloc_tag_init(void)
+{
+	const struct codetag_type_desc desc = {
+		.section	= "alloc_tags",
+		.tag_size	= sizeof(struct alloc_tag),
+		.module_unload	= alloc_tag_module_unload,
+	};
+
+	alloc_tag_cttype = codetag_register_type(&desc);
+	if (IS_ERR_OR_NULL(alloc_tag_cttype))
+		return PTR_ERR(alloc_tag_cttype);
+
+	register_sysctl_init("vm", memory_allocation_profiling_sysctls);
+	procfs_init();
+
+	return 0;
+}
+module_init(alloc_tag_init);
diff --git a/scripts/module.lds.S b/scripts/module.lds.S
index bf5bcf2836d8..45c67a0994f3 100644
--- a/scripts/module.lds.S
+++ b/scripts/module.lds.S
@@ -9,6 +9,8 @@
 #define DISCARD_EH_FRAME	*(.eh_frame)
 #endif
 
+#include
+
 SECTIONS {
 	/DISCARD/ : {
 		*(.discard)
@@ -47,12 +49,17 @@ SECTIONS {
 	.data : {
 		*(.data .data.[0-9a-zA-Z_]*)
 		*(.data..L*)
+		CODETAG_SECTIONS()
 	}
 
 	.rodata : {
 		*(.rodata .rodata.[0-9a-zA-Z_]*)
 		*(.rodata..L*)
 	}
+#else
+	.data : {
+		CODETAG_SECTIONS()
+	}
 #endif
 }
 
-- 
2.42.0.758.gaed0368e0e-goog

From nobody Thu Jan 1 07:45:32 2026
Date: Tue, 24 Oct 2023 06:46:13 -0700
Message-ID: <20231024134637.3120277-17-surenb@google.com>
Subject: [PATCH v2 16/39] lib: introduce support for page allocation tagging
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Introduce helper functions to easily instrument page allocators by
storing a pointer to the allocation tag associated with the code that
allocated the page in a page_ext field.

Signed-off-by: Suren Baghdasaryan
Co-developed-by: Kent Overstreet
Signed-off-by: Kent Overstreet
---
 include/linux/page_ext.h    |  1 -
 include/linux/pgalloc_tag.h | 73 +++++++++++++++++++++++++++++++++++++
 lib/Kconfig.debug           |  1 +
 lib/alloc_tag.c             | 17 +++++++++
 mm/mm_init.c                |  1 +
 mm/page_alloc.c             |  4 ++
 mm/page_ext.c               |  4 ++
 7 files changed, 100 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/pgalloc_tag.h

diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
index be98564191e6..07e0656898f9 100644
--- a/include/linux/page_ext.h
+++ b/include/linux/page_ext.h
@@ -4,7 +4,6 @@
 
 #include
 #include
-#include
 
 struct pglist_data;
 
diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h
new file mode 100644
index 000000000000..a060c26eb449
--- /dev/null
+++ b/include/linux/pgalloc_tag.h
@@ -0,0 +1,73 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * page allocation tagging
+ */
+#ifndef _LINUX_PGALLOC_TAG_H
+#define _LINUX_PGALLOC_TAG_H
+
+#include
+
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+
+#include
+
+extern struct page_ext_operations page_alloc_tagging_ops;
+extern struct page_ext *page_ext_get(struct page *page);
+extern void page_ext_put(struct page_ext *page_ext);
+
+static inline union codetag_ref *codetag_ref_from_page_ext(struct page_ext *page_ext)
+{
+	return (void *)page_ext + page_alloc_tagging_ops.offset;
+}
+
+static inline struct page_ext *page_ext_from_codetag_ref(union codetag_ref *ref)
+{
+	return (void *)ref - page_alloc_tagging_ops.offset;
+}
+
+static inline union codetag_ref *get_page_tag_ref(struct page *page)
+{
+	if (page && mem_alloc_profiling_enabled()) {
+		struct page_ext *page_ext = page_ext_get(page);
+
+		if (page_ext)
+			return codetag_ref_from_page_ext(page_ext);
+	}
+	return NULL;
+}
+
+static inline void put_page_tag_ref(union codetag_ref *ref)
+{
+	page_ext_put(page_ext_from_codetag_ref(ref));
+}
+
+static inline void pgalloc_tag_add(struct page *page, struct task_struct *task,
+				   unsigned int order)
+{
+	union codetag_ref *ref = get_page_tag_ref(page);
+
+	if (ref) {
+		alloc_tag_add(ref, task->alloc_tag, PAGE_SIZE << order);
+		put_page_tag_ref(ref);
+	}
+}
+
+static inline void pgalloc_tag_sub(struct page *page, unsigned int order)
+{
+	union codetag_ref *ref = get_page_tag_ref(page);
+
+	if (ref) {
+		alloc_tag_sub(ref, PAGE_SIZE << order);
+		put_page_tag_ref(ref);
+	}
+}
+
+#else /* CONFIG_MEM_ALLOC_PROFILING */
+
+static inline void pgalloc_tag_add(struct page *page, struct task_struct *task,
+				   unsigned int order) {}
+static inline void pgalloc_tag_sub(struct page *page, unsigned int order) {}
+
+#endif /* CONFIG_MEM_ALLOC_PROFILING */
+
+#endif /* _LINUX_PGALLOC_TAG_H */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 475a14e70566..e1eda1450d68 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -972,6 +972,7 @@ config MEM_ALLOC_PROFILING
 	depends on PROC_FS
 	depends on !DEBUG_FORCE_WEAK_PER_CPU
 	select CODE_TAGGING
+	select PAGE_EXTENSION
 	help
 	  Track allocation source code and record total allocation size
 	  initiated at that code location. The mechanism can be used to track
diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
index 4fc031f9cefd..2d5226d9262d 100644
--- a/lib/alloc_tag.c
+++ b/lib/alloc_tag.c
@@ -3,6 +3,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -124,6 +125,22 @@ static bool alloc_tag_module_unload(struct codetag_type *cttype,
 	return module_unused;
 }
 
+static __init bool need_page_alloc_tagging(void)
+{
+	return true;
+}
+
+static __init void init_page_alloc_tagging(void)
+{
+}
+
+struct page_ext_operations page_alloc_tagging_ops = {
+	.size	= sizeof(union codetag_ref),
+	.need	= need_page_alloc_tagging,
+	.init	= init_page_alloc_tagging,
+};
+EXPORT_SYMBOL(page_alloc_tagging_ops);
+
 static struct ctl_table memory_allocation_profiling_sysctls[] = {
 	{
 		.procname	= "mem_profiling",
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 50f2f34745af..8e72e431dc35 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include "internal.h"
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 95546f376302..d490d0f73e72 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -52,6 +52,7 @@
 #include
 #include
 #include
+#include
 #include
 #include "internal.h"
 #include "shuffle.h"
@@ -1093,6 +1094,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 		__memcg_kmem_uncharge_page(page, order);
 		reset_page_owner(page, order);
 		page_table_check_free(page, order);
+		pgalloc_tag_sub(page, order);
 		return false;
 	}
 
@@ -1135,6 +1137,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
 	reset_page_owner(page, order);
 	page_table_check_free(page, order);
+	pgalloc_tag_sub(page, order);
 
 	if (!PageHighMem(page)) {
 		debug_check_no_locks_freed(page_address(page),
@@ -1535,6 +1538,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 
 	set_page_owner(page, order, gfp_flags);
 	page_table_check_alloc(page, order);
+	pgalloc_tag_add(page, current, order);
 }
 
 static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
diff --git a/mm/page_ext.c b/mm/page_ext.c
index 4548fcc66d74..3c58fe8a24df 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 
 /*
  * struct page extension
@@ -82,6 +83,9 @@ static struct page_ext_operations *page_ext_ops[] __initdata = {
 #if defined(CONFIG_PAGE_IDLE_FLAG) && !defined(CONFIG_64BIT)
 	&page_idle_ops,
 #endif
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+	&page_alloc_tagging_ops,
+#endif
 #ifdef CONFIG_PAGE_TABLE_CHECK
 	&page_table_check_ops,
 #endif
-- 
2.42.0.758.gaed0368e0e-goog

From nobody Thu Jan 1 07:45:32 2026
Date: Tue, 24 Oct 2023 06:46:14 -0700
Message-ID: <20231024134637.3120277-18-surenb@google.com>
Subject: [PATCH v2 17/39] change alloc_pages name in dma_map_ops to avoid name conflicts
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

After redefining alloc_pages, all uses of that name are being replaced.
Change the conflicting names to prevent the preprocessor from replacing
them when it's not intended.
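To see why the rename is needed, here is a simplified model of the collision;
all names below are illustrative, and the real alloc_pages macro is introduced
later in this series:

  /* Illustrative only: once alloc_pages becomes a function-like macro... */
  #define alloc_pages(order) profiled_alloc_pages(order)

  struct my_ops {
  	/*
  	 * If this member were still named "alloc_pages", the declaration
  	 * itself would be safe (the identifier is not followed by an
  	 * opening parenthesis), but a call site such as
  	 * ops->alloc_pages(order) matches the macro and is rewritten by
  	 * the preprocessor into ops->profiled_alloc_pages(order), which
  	 * does not exist. Renaming the member sidesteps the expansion.
  	 */
  	struct page *(*alloc_pages_op)(int order);
  };

  static inline struct page *do_alloc(struct my_ops *ops, int order)
  {
  	return ops->alloc_pages_op(order);	/* no macro expansion here */
  }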
Signed-off-by: Suren Baghdasaryan
---
 arch/x86/kernel/amd_gart_64.c | 2 +-
 drivers/iommu/dma-iommu.c     | 2 +-
 drivers/xen/grant-dma-ops.c   | 2 +-
 drivers/xen/swiotlb-xen.c     | 2 +-
 include/linux/dma-map-ops.h   | 2 +-
 kernel/dma/mapping.c          | 4 ++--
 6 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/amd_gart_64.c b/arch/x86/kernel/amd_gart_64.c
index 56a917df410d..842a0ec5eaa9 100644
--- a/arch/x86/kernel/amd_gart_64.c
+++ b/arch/x86/kernel/amd_gart_64.c
@@ -676,7 +676,7 @@ static const struct dma_map_ops gart_dma_ops = {
 	.get_sgtable			= dma_common_get_sgtable,
 	.dma_supported			= dma_direct_supported,
 	.get_required_mask		= dma_direct_get_required_mask,
-	.alloc_pages			= dma_direct_alloc_pages,
+	.alloc_pages_op			= dma_direct_alloc_pages,
 	.free_pages			= dma_direct_free_pages,
 };
 
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 4b1a88f514c9..28b7b2d10655 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1603,7 +1603,7 @@ static const struct dma_map_ops iommu_dma_ops = {
 	.flags			= DMA_F_PCI_P2PDMA_SUPPORTED,
 	.alloc			= iommu_dma_alloc,
 	.free			= iommu_dma_free,
-	.alloc_pages		= dma_common_alloc_pages,
+	.alloc_pages_op		= dma_common_alloc_pages,
 	.free_pages		= dma_common_free_pages,
 	.alloc_noncontiguous	= iommu_dma_alloc_noncontiguous,
 	.free_noncontiguous	= iommu_dma_free_noncontiguous,
diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
index 76f6f26265a3..29257d2639db 100644
--- a/drivers/xen/grant-dma-ops.c
+++ b/drivers/xen/grant-dma-ops.c
@@ -282,7 +282,7 @@ static int xen_grant_dma_supported(struct device *dev, u64 mask)
 static const struct dma_map_ops xen_grant_dma_ops = {
 	.alloc = xen_grant_dma_alloc,
 	.free = xen_grant_dma_free,
-	.alloc_pages = xen_grant_dma_alloc_pages,
+	.alloc_pages_op = xen_grant_dma_alloc_pages,
 	.free_pages = xen_grant_dma_free_pages,
 	.mmap = dma_common_mmap,
 	.get_sgtable = dma_common_get_sgtable,
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 946bd56f0ac5..4f1e3f1fc44e 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -403,6 +403,6 @@ const struct dma_map_ops xen_swiotlb_dma_ops = {
 	.dma_supported = xen_swiotlb_dma_supported,
 	.mmap = dma_common_mmap,
 	.get_sgtable = dma_common_get_sgtable,
-	.alloc_pages = dma_common_alloc_pages,
+	.alloc_pages_op = dma_common_alloc_pages,
 	.free_pages = dma_common_free_pages,
 };
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index f2fc203fb8a1..3a8a015fdd2e 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -28,7 +28,7 @@ struct dma_map_ops {
 			unsigned long attrs);
 	void (*free)(struct device *dev, size_t size, void *vaddr,
 			dma_addr_t dma_handle, unsigned long attrs);
-	struct page *(*alloc_pages)(struct device *dev, size_t size,
+	struct page *(*alloc_pages_op)(struct device *dev, size_t size,
 			dma_addr_t *dma_handle, enum dma_data_direction dir,
 			gfp_t gfp);
 	void (*free_pages)(struct device *dev, size_t size, struct page *vaddr,
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index e323ca48f7f2..58e490e2cfb4 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -570,9 +570,9 @@ static struct page *__dma_alloc_pages(struct device *dev, size_t size,
 	size = PAGE_ALIGN(size);
 	if (dma_alloc_direct(dev, ops))
 		return dma_direct_alloc_pages(dev, size, dma_handle, dir, gfp);
-	if (!ops->alloc_pages)
+	if (!ops->alloc_pages_op)
 		return NULL;
-	return ops->alloc_pages(dev, size, dma_handle, dir, gfp);
+	return ops->alloc_pages_op(dev, size, dma_handle, dir, gfp);
+	return ops->alloc_pages_op(dev, size, dma_handle, dir, gfp);
 }
 
 struct page *dma_alloc_pages(struct device *dev, size_t size,
-- 
2.42.0.758.gaed0368e0e-goog

From nobody Thu Jan 1 07:45:32 2026
Date: Tue, 24 Oct 2023 06:46:15 -0700
In-Reply-To: <20231024134637.3120277-1-surenb@google.com>
Message-ID: <20231024134637.3120277-19-surenb@google.com>
Subject: [PATCH v2 18/39] change alloc_pages name in ivpu_bo_ops to avoid conflicts
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org

From: Kent Overstreet <kent.overstreet@linux.dev>

After redefining alloc_pages, all uses of that name are being replaced.
Change the conflicting names to prevent the preprocessor from replacing
them when it is not intended.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 drivers/accel/ivpu/ivpu_gem.c | 8 ++++----
 drivers/accel/ivpu/ivpu_gem.h | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/accel/ivpu/ivpu_gem.c b/drivers/accel/ivpu/ivpu_gem.c
index d09f13b35902..d324eaf5bbe3 100644
--- a/drivers/accel/ivpu/ivpu_gem.c
+++ b/drivers/accel/ivpu/ivpu_gem.c
@@ -61,7 +61,7 @@ static void prime_unmap_pages_locked(struct ivpu_bo *bo)
 static const struct ivpu_bo_ops prime_ops = {
 	.type = IVPU_BO_TYPE_PRIME,
 	.name = "prime",
-	.alloc_pages = prime_alloc_pages_locked,
+	.alloc_pages_op = prime_alloc_pages_locked,
 	.free_pages = prime_free_pages_locked,
 	.map_pages = prime_map_pages_locked,
 	.unmap_pages = prime_unmap_pages_locked,
@@ -134,7 +134,7 @@ static void ivpu_bo_unmap_pages_locked(struct ivpu_bo *bo)
 static const struct ivpu_bo_ops shmem_ops = {
 	.type = IVPU_BO_TYPE_SHMEM,
 	.name = "shmem",
-	.alloc_pages = shmem_alloc_pages_locked,
+	.alloc_pages_op = shmem_alloc_pages_locked,
 	.free_pages = shmem_free_pages_locked,
 	.map_pages = ivpu_bo_map_pages_locked,
 	.unmap_pages = ivpu_bo_unmap_pages_locked,
@@ -186,7 +186,7 @@ static void internal_free_pages_locked(struct ivpu_bo *bo)
 static const struct ivpu_bo_ops internal_ops = {
 	.type = IVPU_BO_TYPE_INTERNAL,
 	.name = "internal",
-	.alloc_pages = internal_alloc_pages_locked,
+	.alloc_pages_op = internal_alloc_pages_locked,
 	.free_pages = internal_free_pages_locked,
 	.map_pages = ivpu_bo_map_pages_locked,
 	.unmap_pages = ivpu_bo_unmap_pages_locked,
@@ -200,7 +200,7 @@ static int __must_check ivpu_bo_alloc_and_map_pages_locked(struct ivpu_bo *bo)
 	lockdep_assert_held(&bo->lock);
 	drm_WARN_ON(&vdev->drm, bo->sgt);
 
-	ret = bo->ops->alloc_pages(bo);
+	ret = bo->ops->alloc_pages_op(bo);
 	if (ret) {
 		ivpu_err(vdev, "Failed to allocate pages for BO: %d", ret);
 		return ret;
diff --git a/drivers/accel/ivpu/ivpu_gem.h b/drivers/accel/ivpu/ivpu_gem.h
index 6b0ceda5f253..b81cf2af0b2d 100644
--- a/drivers/accel/ivpu/ivpu_gem.h
+++ b/drivers/accel/ivpu/ivpu_gem.h
@@ -42,7 +42,7 @@ enum ivpu_bo_type {
 struct ivpu_bo_ops {
 	enum ivpu_bo_type type;
 	const char *name;
-	int (*alloc_pages)(struct ivpu_bo *bo);
+	int (*alloc_pages_op)(struct ivpu_bo *bo);
 	void (*free_pages)(struct ivpu_bo *bo);
 	int (*map_pages)(struct ivpu_bo *bo);
 	void (*unmap_pages)(struct ivpu_bo *bo);
-- 
2.42.0.758.gaed0368e0e-goog

From nobody Thu Jan 1 07:45:32 2026
Date: Tue, 24 Oct 2023 06:46:16 -0700
In-Reply-To: <20231024134637.3120277-1-surenb@google.com>
Message-ID: <20231024134637.3120277-20-surenb@google.com>
Subject: [PATCH v2 19/39] mm: enable page allocation tagging
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org

Redefine page allocators to record allocation tags upon their invocation.
Instrument post_alloc_hook and free_pages_prepare to modify the current
allocation tag.

Co-developed-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
---
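The core trick, reduced to a standalone sketch (userspace C with invented
names such as my_alloc and current_tag; the kernel's DEFINE_ALLOC_TAG()
and alloc_tag_restore() carry per-callsite counters rather than a string,
and the macro relies on GCC/Clang statement expressions, as the kernel
itself does): each public allocator name becomes a macro that installs a
call-site tag around the renamed _noprof implementation, so every existing
caller is profiled without being edited.

	#include <stdio.h>
	#include <stdlib.h>

	/* Stand-in for current->alloc_tag. */
	static const char *current_tag = "(none)";

	#define STR_(x) #x
	#define STR(x) STR_(x)

	/* Same shape as the kernel macro: save the old tag, install one
	 * naming this call site, run the real allocation, restore it. */
	#define alloc_hooks(_do_alloc)					\
	({								\
		const char *_old = current_tag;				\
		current_tag = __FILE__ ":" STR(__LINE__);		\
		__typeof__(_do_alloc) _res = (_do_alloc);		\
		current_tag = _old;	/* alloc_tag_restore() */	\
		_res;							\
	})

	/* The renamed implementation; the allocator core would charge
	 * the bytes to current_tag here. */
	static void *my_alloc_noprof(size_t size)
	{
		printf("charging %zu bytes to %s\n", size, current_tag);
		return malloc(size);
	}

	/* Callers keep the old name; accounting is spliced in per call site. */
	#define my_alloc(...) alloc_hooks(my_alloc_noprof(__VA_ARGS__))

	int main(void)
	{
		void *p = my_alloc(128);	/* tagged with this file:line */

		free(p);
		return 0;
	}

This is why every allocator below gains a _noprof twin plus a same-named
macro: the call sites stay untouched while each one gets its own tag.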
 include/linux/alloc_tag.h |  10 ++++
 include/linux/gfp.h       | 111 +++++++++++++++++++++++---------------
 include/linux/pagemap.h   |   9 ++--
 mm/compaction.c           |   7 ++-
 mm/filemap.c              |   6 +--
 mm/mempolicy.c            |  42 +++++++--------
 mm/page_alloc.c           |  60 ++++++++++-----------
 7 files changed, 144 insertions(+), 101 deletions(-)

diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h
index cf55a149fa84..6fa8a94d8bc1 100644
--- a/include/linux/alloc_tag.h
+++ b/include/linux/alloc_tag.h
@@ -130,4 +130,14 @@ static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag,
 
 #endif
 
+#define alloc_hooks(_do_alloc)					\
+({								\
+	typeof(_do_alloc) _res;					\
+	DEFINE_ALLOC_TAG(_alloc_tag, _old);			\
+								\
+	_res = _do_alloc;					\
+	alloc_tag_restore(&_alloc_tag, _old);			\
+	_res;							\
+})
+
 #endif /* _LINUX_ALLOC_TAG_H */
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 665f06675c83..20686fd1f417 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -6,6 +6,8 @@
 
 #include <linux/mmzone.h>
 #include <linux/topology.h>
+#include <linux/alloc_tag.h>
+#include <linux/sched.h>
 
 struct vm_area_struct;
 
@@ -174,42 +176,43 @@ static inline void arch_free_page(struct page *page, int order) { }
 static inline void arch_alloc_page(struct page *page, int order) { }
 #endif
 
-struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
+struct page *__alloc_pages_noprof(gfp_t gfp, unsigned int order, int preferred_nid,
 		nodemask_t *nodemask);
-struct folio *__folio_alloc(gfp_t gfp, unsigned int order, int preferred_nid,
+#define __alloc_pages(...)	alloc_hooks(__alloc_pages_noprof(__VA_ARGS__))
+
+struct folio *__folio_alloc_noprof(gfp_t gfp, unsigned int order, int preferred_nid,
 		nodemask_t *nodemask);
+#define __folio_alloc(...)	alloc_hooks(__folio_alloc_noprof(__VA_ARGS__))
 
-unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
+unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 				nodemask_t *nodemask, int nr_pages,
 				struct list_head *page_list,
 				struct page **page_array);
+#define __alloc_pages_bulk(...)	alloc_hooks(alloc_pages_bulk_noprof(__VA_ARGS__))
 
-unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
+unsigned long alloc_pages_bulk_array_mempolicy_noprof(gfp_t gfp,
 				unsigned long nr_pages,
 				struct page **page_array);
+#define alloc_pages_bulk_array_mempolicy(...)	alloc_hooks(alloc_pages_bulk_array_mempolicy_noprof(__VA_ARGS__))
 
 /* Bulk allocate order-0 pages */
-static inline unsigned long
-alloc_pages_bulk_list(gfp_t gfp, unsigned long nr_pages, struct list_head *list)
-{
-	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, list, NULL);
-}
+#define alloc_pages_bulk_list(_gfp, _nr_pages, _list)			\
+	__alloc_pages_bulk(_gfp, numa_mem_id(), NULL, _nr_pages, _list, NULL)
 
-static inline unsigned long
-alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_array)
-{
-	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array);
-}
+#define alloc_pages_bulk_array(_gfp, _nr_pages, _page_array)		\
+	__alloc_pages_bulk(_gfp, numa_mem_id(), NULL, _nr_pages, NULL, _page_array)
 
 static inline unsigned long
-alloc_pages_bulk_array_node(gfp_t gfp, int nid, unsigned long nr_pages, struct page **page_array)
+alloc_pages_bulk_array_node_noprof(gfp_t gfp, int nid, unsigned long nr_pages, struct page **page_array)
 {
 	if (nid == NUMA_NO_NODE)
 		nid = numa_mem_id();
 
-	return __alloc_pages_bulk(gfp, nid, NULL, nr_pages, NULL, page_array);
+	return alloc_pages_bulk_noprof(gfp, nid, NULL, nr_pages, NULL, page_array);
 }
 
+#define alloc_pages_bulk_array_node(...)	alloc_hooks(alloc_pages_bulk_array_node_noprof(__VA_ARGS__))
+
 static inline void warn_if_node_offline(int this_node, gfp_t gfp_mask)
 {
 	gfp_t warn_gfp = gfp_mask & (__GFP_THISNODE|__GFP_NOWARN);
@@ -229,21 +232,23 @@ static inline void warn_if_node_offline(int this_node, gfp_t gfp_mask)
 * online. For more general interface, see alloc_pages_node().
 */
 static inline struct page *
-__alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
+__alloc_pages_node_noprof(int nid, gfp_t gfp_mask, unsigned int order)
 {
 	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
 	warn_if_node_offline(nid, gfp_mask);
 
-	return __alloc_pages(gfp_mask, order, nid, NULL);
+	return __alloc_pages_noprof(gfp_mask, order, nid, NULL);
 }
 
+#define __alloc_pages_node(...)	alloc_hooks(__alloc_pages_node_noprof(__VA_ARGS__))
+
 static inline
 struct folio *__folio_alloc_node(gfp_t gfp, unsigned int order, int nid)
 {
 	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
 	warn_if_node_offline(nid, gfp);
 
-	return __folio_alloc(gfp, order, nid, NULL);
+	return __folio_alloc_noprof(gfp, order, nid, NULL);
 }
 
 /*
@@ -251,53 +256,69 @@ struct folio *__folio_alloc_node(gfp_t gfp, unsigned int order, int nid)
 * prefer the current CPU's closest node. Otherwise node must be valid and
 * online.
 */
-static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
-					    unsigned int order)
+static inline struct page *alloc_pages_node_noprof(int nid, gfp_t gfp_mask,
+						   unsigned int order)
 {
 	if (nid == NUMA_NO_NODE)
 		nid = numa_mem_id();
 
-	return __alloc_pages_node(nid, gfp_mask, order);
+	return __alloc_pages_node_noprof(nid, gfp_mask, order);
 }
 
+#define alloc_pages_node(...)	alloc_hooks(alloc_pages_node_noprof(__VA_ARGS__))
+
 #ifdef CONFIG_NUMA
-struct page *alloc_pages(gfp_t gfp, unsigned int order);
-struct folio *folio_alloc(gfp_t gfp, unsigned order);
-struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
+struct page *alloc_pages_noprof(gfp_t gfp, unsigned int order);
+struct folio *folio_alloc_noprof(gfp_t gfp, unsigned int order);
+struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order, struct vm_area_struct *vma,
 		unsigned long addr, bool hugepage);
 #else
-static inline struct page *alloc_pages(gfp_t gfp_mask, unsigned int order)
+static inline struct page *alloc_pages_noprof(gfp_t gfp_mask, unsigned int order)
 {
-	return alloc_pages_node(numa_node_id(), gfp_mask, order);
+	return alloc_pages_node_noprof(numa_node_id(), gfp_mask, order);
 }
-static inline struct folio *folio_alloc(gfp_t gfp, unsigned int order)
+static inline struct folio *folio_alloc_noprof(gfp_t gfp, unsigned int order)
 {
 	return __folio_alloc_node(gfp, order, numa_node_id());
 }
-#define vma_alloc_folio(gfp, order, vma, addr, hugepage) \
-	folio_alloc(gfp, order)
+#define vma_alloc_folio_noprof(gfp, order, vma, addr, hugepage) \
+	folio_alloc_noprof(gfp, order)
 #endif
+
+#define alloc_pages(...)	alloc_hooks(alloc_pages_noprof(__VA_ARGS__))
+#define folio_alloc(...)	alloc_hooks(folio_alloc_noprof(__VA_ARGS__))
+#define vma_alloc_folio(...)	alloc_hooks(vma_alloc_folio_noprof(__VA_ARGS__))
+
 #define alloc_page(gfp_mask) alloc_pages(gfp_mask, 0)
-static inline struct page *alloc_page_vma(gfp_t gfp,
+
+static inline struct page *alloc_page_vma_noprof(gfp_t gfp,
 		struct vm_area_struct *vma, unsigned long addr)
 {
-	struct folio *folio = vma_alloc_folio(gfp, 0, vma, addr, false);
+	struct folio *folio = vma_alloc_folio_noprof(gfp, 0, vma, addr, false);
 
 	return &folio->page;
 }
+#define alloc_page_vma(...)	alloc_hooks(alloc_page_vma_noprof(__VA_ARGS__))
+
+extern unsigned long get_free_pages_noprof(gfp_t gfp_mask, unsigned int order);
+#define __get_free_pages(...)	alloc_hooks(get_free_pages_noprof(__VA_ARGS__))
 
-extern unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order);
-extern unsigned long get_zeroed_page(gfp_t gfp_mask);
+extern unsigned long get_zeroed_page_noprof(gfp_t gfp_mask);
+#define get_zeroed_page(...)	alloc_hooks(get_zeroed_page_noprof(__VA_ARGS__))
+
+void *alloc_pages_exact_noprof(size_t size, gfp_t gfp_mask) __alloc_size(1);
+#define alloc_pages_exact(...)	alloc_hooks(alloc_pages_exact_noprof(__VA_ARGS__))
 
-void *alloc_pages_exact(size_t size, gfp_t gfp_mask) __alloc_size(1);
 void free_pages_exact(void *virt, size_t size);
-__meminit void *alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask) __alloc_size(2);
 
-#define __get_free_page(gfp_mask) \
-	__get_free_pages((gfp_mask), 0)
+__meminit void *alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_mask) __alloc_size(2);
+#define alloc_pages_exact_nid(...)	alloc_hooks(alloc_pages_exact_nid_noprof(__VA_ARGS__))
+
+#define __get_free_page(gfp_mask) \
+	__get_free_pages((gfp_mask), 0)
 
-#define __get_dma_pages(gfp_mask, order) \
-	__get_free_pages((gfp_mask) | GFP_DMA, (order))
+#define __get_dma_pages(gfp_mask, order) \
+	__get_free_pages((gfp_mask) | GFP_DMA, (order))
 
 extern void __free_pages(struct page *page, unsigned int order);
 extern void free_pages(unsigned long addr, unsigned int order);
@@ -347,10 +368,14 @@ extern gfp_t vma_thp_gfp_mask(struct vm_area_struct *vma);
 
 #ifdef CONFIG_CONTIG_ALLOC
 /* The below functions must be run on a range from a single zone. */
-extern int alloc_contig_range(unsigned long start, unsigned long end,
+extern int alloc_contig_range_noprof(unsigned long start, unsigned long end,
 			      unsigned migratetype, gfp_t gfp_mask);
-extern struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask,
-				       int nid, nodemask_t *nodemask);
+#define alloc_contig_range(...)	alloc_hooks(alloc_contig_range_noprof(__VA_ARGS__))
+
+extern struct page *alloc_contig_pages_noprof(unsigned long nr_pages, gfp_t gfp_mask,
					      int nid, nodemask_t *nodemask);
+#define alloc_contig_pages(...)	alloc_hooks(alloc_contig_pages_noprof(__VA_ARGS__))
+
 #endif
 void free_contig_range(unsigned long pfn, unsigned long nr_pages);
 
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 351c3b7f93a1..cb387321f929 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -508,14 +508,17 @@ static inline void *detach_page_private(struct page *page)
 #endif
 
 #ifdef CONFIG_NUMA
-struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order);
+struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order);
 #else
-static inline struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order)
+static inline struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
 {
-	return folio_alloc(gfp, order);
+	return folio_alloc_noprof(gfp, order);
 }
 #endif
 
+#define filemap_alloc_folio(...) \
+	alloc_hooks(filemap_alloc_folio_noprof(__VA_ARGS__))
+
 static inline struct page *__page_cache_alloc(gfp_t gfp)
 {
 	return &filemap_alloc_folio(gfp, 0)->page;
diff --git a/mm/compaction.c b/mm/compaction.c
index 38c8d216c6a3..74a7ec71660d 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1760,7 +1760,7 @@ static void isolate_freepages(struct compact_control *cc)
 * This is a migrate-callback that "allocates" freepages by taking pages
 * from the isolated freelists in the block we are migrating to.
 */
-static struct folio *compaction_alloc(struct folio *src, unsigned long data)
+static struct folio *compaction_alloc_noprof(struct folio *src, unsigned long data)
 {
 	struct compact_control *cc = (struct compact_control *)data;
 	struct folio *dst;
@@ -1779,6 +1779,11 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
 	return dst;
 }
 
+static struct folio *compaction_alloc(struct folio *src, unsigned long data)
+{
+	return alloc_hooks(compaction_alloc_noprof(src, data));
+}
+
 /*
 * This is a migrate-callback that "frees" freepages back to the isolated
 * freelist.  All pages on the freelist are from the same zone, so there is no
diff --git a/mm/filemap.c b/mm/filemap.c
index f0a15ce1bd1b..fc0e4b630b51 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -958,7 +958,7 @@ int filemap_add_folio(struct address_space *mapping, struct folio *folio,
 EXPORT_SYMBOL_GPL(filemap_add_folio);
 
 #ifdef CONFIG_NUMA
-struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order)
+struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
 {
 	int n;
 	struct folio *folio;
@@ -973,9 +973,9 @@ struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order)
 
 		return folio;
 	}
-	return folio_alloc(gfp, order);
+	return folio_alloc_noprof(gfp, order);
 }
-EXPORT_SYMBOL(filemap_alloc_folio);
+EXPORT_SYMBOL(filemap_alloc_folio_noprof);
 #endif
 
 /*
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index f1b00d6ac7ee..0c3d65fbf9e8 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2127,7 +2127,7 @@ static struct page *alloc_page_interleave(gfp_t gfp, unsigned order,
 {
 	struct page *page;
 
-	page = __alloc_pages(gfp, order, nid, NULL);
+	page = __alloc_pages_noprof(gfp, order, nid, NULL);
 	/* skip NUMA_INTERLEAVE_HIT counter update if numa stats is disabled */
 	if (!static_branch_likely(&vm_numa_stat_key))
 		return page;
@@ -2153,15 +2153,15 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
 	 */
 	preferred_gfp = gfp | __GFP_NOWARN;
 	preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
-	page = __alloc_pages(preferred_gfp, order, nid, &pol->nodes);
+	page = __alloc_pages_noprof(preferred_gfp, order, nid, &pol->nodes);
 	if (!page)
-		page = __alloc_pages(gfp, order, nid, NULL);
+		page = __alloc_pages_noprof(gfp, order, nid, NULL);
 
 	return page;
 }
 
 /**
- * vma_alloc_folio - Allocate a folio for a VMA.
+ * vma_alloc_folio_noprof - Allocate a folio for a VMA.
 * @gfp: GFP flags.
 * @order: Order of the folio.
 * @vma: Pointer to VMA or NULL if not available.
@@ -2175,7 +2175,7 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
 *
 * Return: The folio on success or NULL if allocation fails.
 */
-struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
+struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order, struct vm_area_struct *vma,
 		unsigned long addr, bool hugepage)
 {
 	struct mempolicy *pol;
@@ -2246,7 +2246,7 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
 	 * memory with both reclaim and compact as well.
 	 */
 	if (!folio && (gfp & __GFP_DIRECT_RECLAIM))
-		folio = __folio_alloc(gfp, order, hpage_node,
+		folio = __folio_alloc_noprof(gfp, order, hpage_node,
 				      nmask);
 
 	goto out;
@@ -2255,15 +2255,15 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
 
 	nmask = policy_nodemask(gfp, pol);
 	preferred_nid = policy_node(gfp, pol, node);
-	folio = __folio_alloc(gfp, order, preferred_nid, nmask);
+	folio = __folio_alloc_noprof(gfp, order, preferred_nid, nmask);
 	mpol_cond_put(pol);
 out:
 	return folio;
 }
-EXPORT_SYMBOL(vma_alloc_folio);
+EXPORT_SYMBOL(vma_alloc_folio_noprof);
 
 /**
- * alloc_pages - Allocate pages.
+ * alloc_pages_noprof - Allocate pages.
 * @gfp: GFP flags.
 * @order: Power of two of number of pages to allocate.
 *
@@ -2276,7 +2276,7 @@ EXPORT_SYMBOL(vma_alloc_folio);
 * flags are used.
 * Return: The page on success or NULL if allocation fails.
 */
-struct page *alloc_pages(gfp_t gfp, unsigned order)
+struct page *alloc_pages_noprof(gfp_t gfp, unsigned int order)
 {
 	struct mempolicy *pol = &default_policy;
 	struct page *page;
@@ -2294,24 +2294,24 @@ struct page *alloc_pages(gfp_t gfp, unsigned order)
 		page = alloc_pages_preferred_many(gfp, order,
 				policy_node(gfp, pol, numa_node_id()), pol);
 	else
-		page = __alloc_pages(gfp, order,
+		page = __alloc_pages_noprof(gfp, order,
 				policy_node(gfp, pol, numa_node_id()),
 				policy_nodemask(gfp, pol));
 
 	return page;
 }
-EXPORT_SYMBOL(alloc_pages);
+EXPORT_SYMBOL(alloc_pages_noprof);
 
-struct folio *folio_alloc(gfp_t gfp, unsigned order)
+struct folio *folio_alloc_noprof(gfp_t gfp, unsigned int order)
 {
-	struct page *page = alloc_pages(gfp | __GFP_COMP, order);
+	struct page *page = alloc_pages_noprof(gfp | __GFP_COMP, order);
 	struct folio *folio = (struct folio *)page;
 
 	if (folio && order > 1)
 		folio_prep_large_rmappable(folio);
 	return folio;
 }
-EXPORT_SYMBOL(folio_alloc);
+EXPORT_SYMBOL(folio_alloc_noprof);
 
 static unsigned long alloc_pages_bulk_array_interleave(gfp_t gfp,
 		struct mempolicy *pol, unsigned long nr_pages,
@@ -2330,13 +2330,13 @@ static unsigned long alloc_pages_bulk_array_interleave(gfp_t gfp,
 
 	for (i = 0; i < nodes; i++) {
 		if (delta) {
-			nr_allocated = __alloc_pages_bulk(gfp,
+			nr_allocated = alloc_pages_bulk_noprof(gfp,
 					interleave_nodes(pol), NULL,
 					nr_pages_per_node + 1, NULL,
 					page_array);
 			delta--;
 		} else {
-			nr_allocated = __alloc_pages_bulk(gfp,
+			nr_allocated = alloc_pages_bulk_noprof(gfp,
 					interleave_nodes(pol), NULL,
 					nr_pages_per_node, NULL, page_array);
 		}
@@ -2358,11 +2358,11 @@ static unsigned long alloc_pages_bulk_array_preferred_many(gfp_t gfp, int nid,
 	preferred_gfp = gfp | __GFP_NOWARN;
 	preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
 
-	nr_allocated = __alloc_pages_bulk(preferred_gfp, nid, &pol->nodes,
+	nr_allocated = alloc_pages_bulk_noprof(preferred_gfp, nid, &pol->nodes,
 					   nr_pages, NULL, page_array);
 
 	if (nr_allocated < nr_pages)
-		nr_allocated += __alloc_pages_bulk(gfp, numa_node_id(), NULL,
+		nr_allocated += alloc_pages_bulk_noprof(gfp, numa_node_id(), NULL,
 				nr_pages - nr_allocated, NULL,
 				page_array + nr_allocated);
 	return nr_allocated;
@@ -2374,7 +2374,7 @@ static unsigned long alloc_pages_bulk_array_preferred_many(gfp_t gfp, int nid,
 * It can accelerate memory allocation especially interleaving
 * allocate memory.
 */
-unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
+unsigned long alloc_pages_bulk_array_mempolicy_noprof(gfp_t gfp,
 		unsigned long nr_pages, struct page **page_array)
 {
 	struct mempolicy *pol = &default_policy;
@@ -2390,7 +2390,7 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
 		return alloc_pages_bulk_array_preferred_many(gfp,
 				numa_node_id(), pol, nr_pages, page_array);
 
-	return __alloc_pages_bulk(gfp, policy_node(gfp, pol, numa_node_id()),
+	return alloc_pages_bulk_noprof(gfp, policy_node(gfp, pol, numa_node_id()),
 				  policy_nodemask(gfp, pol), nr_pages, NULL,
 				  page_array);
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d490d0f73e72..63dc2f8c7901 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4239,7 +4239,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 *
 * Returns the number of pages on the list or array.
 */
-unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
+unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 			nodemask_t *nodemask, int nr_pages,
 			struct list_head *page_list,
 			struct page **page_array)
@@ -4375,7 +4375,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 	pcp_trylock_finish(UP_flags);
 
 failed:
-	page = __alloc_pages(gfp, 0, preferred_nid, nodemask);
+	page = __alloc_pages_noprof(gfp, 0, preferred_nid, nodemask);
 	if (page) {
 		if (page_list)
 			list_add(&page->lru, page_list);
@@ -4386,13 +4386,13 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 
 	goto out;
 }
-EXPORT_SYMBOL_GPL(__alloc_pages_bulk);
+EXPORT_SYMBOL_GPL(alloc_pages_bulk_noprof);
 
 /*
 * This is the 'heart' of the zoned buddy allocator.
 */
-struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
-							nodemask_t *nodemask)
+struct page *__alloc_pages_noprof(gfp_t gfp, unsigned int order,
 				      int preferred_nid, nodemask_t *nodemask)
 {
 	struct page *page;
 	unsigned int alloc_flags = ALLOC_WMARK_LOW;
@@ -4454,12 +4454,12 @@ struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
 
 	return page;
 }
-EXPORT_SYMBOL(__alloc_pages);
+EXPORT_SYMBOL(__alloc_pages_noprof);
 
-struct folio *__folio_alloc(gfp_t gfp, unsigned int order, int preferred_nid,
+struct folio *__folio_alloc_noprof(gfp_t gfp, unsigned int order, int preferred_nid,
 		nodemask_t *nodemask)
 {
-	struct page *page = __alloc_pages(gfp | __GFP_COMP, order,
+	struct page *page = __alloc_pages_noprof(gfp | __GFP_COMP, order,
 			preferred_nid, nodemask);
 	struct folio *folio = (struct folio *)page;
 
@@ -4467,29 +4467,29 @@ struct folio *__folio_alloc(gfp_t gfp, unsigned int order, int preferred_nid,
 		folio_prep_large_rmappable(folio);
 	return folio;
 }
-EXPORT_SYMBOL(__folio_alloc);
+EXPORT_SYMBOL(__folio_alloc_noprof);
 
 /*
 * Common helper functions. Never use with __GFP_HIGHMEM because the returned
 * address cannot represent highmem pages. Use alloc_pages and then kmap if
 * you need to access high mem.
 */
-unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order)
+unsigned long get_free_pages_noprof(gfp_t gfp_mask, unsigned int order)
 {
 	struct page *page;
 
-	page = alloc_pages(gfp_mask & ~__GFP_HIGHMEM, order);
+	page = alloc_pages_noprof(gfp_mask & ~__GFP_HIGHMEM, order);
 	if (!page)
 		return 0;
 	return (unsigned long) page_address(page);
 }
-EXPORT_SYMBOL(__get_free_pages);
+EXPORT_SYMBOL(get_free_pages_noprof);
 
-unsigned long get_zeroed_page(gfp_t gfp_mask)
+unsigned long get_zeroed_page_noprof(gfp_t gfp_mask)
 {
-	return __get_free_page(gfp_mask | __GFP_ZERO);
+	return get_free_pages_noprof(gfp_mask | __GFP_ZERO, 0);
 }
-EXPORT_SYMBOL(get_zeroed_page);
+EXPORT_SYMBOL(get_zeroed_page_noprof);
 
 /**
 * __free_pages - Free pages allocated with alloc_pages().
@@ -4681,7 +4681,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
 }
 
 /**
- * alloc_pages_exact - allocate an exact number physically-contiguous pages.
+ * alloc_pages_exact_noprof - allocate an exact number physically-contiguous pages.
 * @size: the number of bytes to allocate
 * @gfp_mask: GFP flags for the allocation, must not contain __GFP_COMP
 *
@@ -4695,7 +4695,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
 *
 * Return: pointer to the allocated area or %NULL in case of error.
 */
-void *alloc_pages_exact(size_t size, gfp_t gfp_mask)
+void *alloc_pages_exact_noprof(size_t size, gfp_t gfp_mask)
 {
 	unsigned int order = get_order(size);
 	unsigned long addr;
@@ -4703,13 +4703,13 @@ void *alloc_pages_exact(size_t size, gfp_t gfp_mask)
 	if (WARN_ON_ONCE(gfp_mask & (__GFP_COMP | __GFP_HIGHMEM)))
 		gfp_mask &= ~(__GFP_COMP | __GFP_HIGHMEM);
 
-	addr = __get_free_pages(gfp_mask, order);
+	addr = get_free_pages_noprof(gfp_mask, order);
 	return make_alloc_exact(addr, order, size);
 }
-EXPORT_SYMBOL(alloc_pages_exact);
+EXPORT_SYMBOL(alloc_pages_exact_noprof);
 
 /**
- * alloc_pages_exact_nid - allocate an exact number of physically-contiguous
+ * alloc_pages_exact_nid_noprof - allocate an exact number of physically-contiguous
 *			   pages on a node.
 * @nid: the preferred node ID where memory should be allocated
 * @size: the number of bytes to allocate
@@ -4720,7 +4720,7 @@ EXPORT_SYMBOL(alloc_pages_exact);
 *
 * Return: pointer to the allocated area or %NULL in case of error.
 */
-void * __meminit alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask)
+void * __meminit alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_mask)
 {
 	unsigned int order = get_order(size);
 	struct page *p;
@@ -4728,7 +4728,7 @@ void * __meminit alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask)
 	if (WARN_ON_ONCE(gfp_mask & (__GFP_COMP | __GFP_HIGHMEM)))
 		gfp_mask &= ~(__GFP_COMP | __GFP_HIGHMEM);
 
-	p = alloc_pages_node(nid, gfp_mask, order);
+	p = alloc_pages_node_noprof(nid, gfp_mask, order);
 	if (!p)
 		return NULL;
 	return make_alloc_exact((unsigned long)page_address(p), order, size);
@@ -6090,7 +6090,7 @@ int __alloc_contig_migrate_range(struct compact_control *cc,
 }
 
 /**
- * alloc_contig_range() -- tries to allocate given range of pages
+ * alloc_contig_range_noprof() -- tries to allocate given range of pages
 * @start:	start PFN to allocate
 * @end:	one-past-the-last PFN to allocate
 * @migratetype:	migratetype of the underlying pageblocks (either
@@ -6110,7 +6110,7 @@ int __alloc_contig_migrate_range(struct compact_control *cc,
 * pages which PFN is in [start, end) are allocated for the caller and
 * need to be freed with free_contig_range().
 */
-int alloc_contig_range(unsigned long start, unsigned long end,
+int alloc_contig_range_noprof(unsigned long start, unsigned long end,
 		       unsigned migratetype, gfp_t gfp_mask)
 {
 	unsigned long outer_start, outer_end;
@@ -6234,15 +6234,15 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	undo_isolate_page_range(start, end, migratetype);
 	return ret;
 }
-EXPORT_SYMBOL(alloc_contig_range);
+EXPORT_SYMBOL(alloc_contig_range_noprof);
 
 static int __alloc_contig_pages(unsigned long start_pfn,
 				unsigned long nr_pages, gfp_t gfp_mask)
 {
 	unsigned long end_pfn = start_pfn + nr_pages;
 
-	return alloc_contig_range(start_pfn, end_pfn, MIGRATE_MOVABLE,
-				  gfp_mask);
+	return alloc_contig_range_noprof(start_pfn, end_pfn, MIGRATE_MOVABLE,
+					 gfp_mask);
 }
 
 static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
@@ -6277,7 +6277,7 @@ static bool zone_spans_last_pfn(const struct zone *zone,
 }
 
 /**
- * alloc_contig_pages() -- tries to find and allocate contiguous range of pages
+ * alloc_contig_pages_noprof() -- tries to find and allocate contiguous range of pages
 * @nr_pages:	Number of contiguous pages to allocate
 * @gfp_mask:	GFP mask to limit search and used during compaction
 * @nid:	Target node
@@ -6297,8 +6297,8 @@ static bool zone_spans_last_pfn(const struct zone *zone,
 *
 * Return: pointer to contiguous pages on success, or NULL if not successful.
 */
-struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask,
-				int nid, nodemask_t *nodemask)
+struct page *alloc_contig_pages_noprof(unsigned long nr_pages, gfp_t gfp_mask,
+				       int nid, nodemask_t *nodemask)
 {
 	unsigned long ret, pfn, flags;
 	struct zonelist *zonelist;
-- 
2.42.0.758.gaed0368e0e-goog

From nobody Thu Jan 1 07:45:32 2026
Date: Tue, 24 Oct 2023 06:46:17 -0700
In-Reply-To: <20231024134637.3120277-1-surenb@google.com>
Message-ID: <20231024134637.3120277-21-surenb@google.com>
Subject: [PATCH v2 20/39] mm: create new codetag references during page splitting
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org

When a high-order page is split into smaller ones, each newly split page
should get its own codetag. The original codetag is reused for these
pages, but each split page is recorded as a 0-byte allocation, because
the original codetag already accounts for the whole high-order allocation.
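As a worked illustration of the invariant being preserved (a plain C
sketch with made-up names; the kernel walks page_ext records rather than
bumping a struct directly): after splitting an order-2 page, the site's
byte count must stay what the head page was charged, while its reference
count grows by the number of tail pages.

	#include <stdio.h>

	/* per-site counters, standing in for struct alloc_tag */
	struct tag { const char *site; long bytes; long calls; };

	/* stand-in for alloc_tag_add(ref, tag, bytes) */
	static void tag_add(struct tag *t, long bytes)
	{
		t->calls++;
		t->bytes += bytes;
	}

	int main(void)
	{
		struct tag site = { "driver.c:42", 0, 0 };

		tag_add(&site, 4 * 4096);	/* one order-2 allocation: 4 pages */

		/*
		 * Split into four order-0 pages: the head page keeps the
		 * original reference, each tail page gets a new reference
		 * charged with 0 bytes, so the site still shows 16384
		 * bytes, not 4 * 16384.
		 */
		for (int i = 1; i < 4; i++)
			tag_add(&site, 0);

		printf("%s: %ld bytes across %ld references\n",
		       site.site, site.bytes, site.calls);
		return 0;
	}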
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/pgalloc_tag.h | 30 ++++++++++++++++++++++++++++++
 mm/huge_memory.c            |  2 ++
 mm/page_alloc.c             |  2 ++
 3 files changed, 34 insertions(+)

diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h
index a060c26eb449..0174aff5e871 100644
--- a/include/linux/pgalloc_tag.h
+++ b/include/linux/pgalloc_tag.h
@@ -62,11 +62,41 @@ static inline void pgalloc_tag_sub(struct page *page, unsigned int order)
 	}
 }
 
+static inline void pgalloc_tag_split(struct page *page, unsigned int nr)
+{
+	int i;
+	struct page_ext *page_ext;
+	union codetag_ref *ref;
+	struct alloc_tag *tag;
+
+	if (!mem_alloc_profiling_enabled())
+		return;
+
+	page_ext = page_ext_get(page);
+	if (unlikely(!page_ext))
+		return;
+
+	ref = codetag_ref_from_page_ext(page_ext);
+	if (!ref->ct)
+		goto out;
+
+	tag = ct_to_alloc_tag(ref->ct);
+	page_ext = page_ext_next(page_ext);
+	for (i = 1; i < nr; i++) {
+		/* New reference with 0 bytes accounted */
+		alloc_tag_add(codetag_ref_from_page_ext(page_ext), tag, 0);
+		page_ext = page_ext_next(page_ext);
+	}
+out:
+	page_ext_put(page_ext);
+}
+
 #else /* CONFIG_MEM_ALLOC_PROFILING */
 
 static inline void pgalloc_tag_add(struct page *page, struct task_struct *task,
 				   unsigned int order) {}
 static inline void pgalloc_tag_sub(struct page *page, unsigned int order) {}
+static inline void pgalloc_tag_split(struct page *page, unsigned int nr) {}
 
 #endif /* CONFIG_MEM_ALLOC_PROFILING */
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 064fbd90822b..392b6907d875 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -37,6 +37,7 @@
 #include
 #include
 #include
+#include <linux/pgalloc_tag.h>
 
 #include
 #include
@@ -2545,6 +2546,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	/* Caller disabled irqs, so they are still disabled here */
 
 	split_page_owner(head, nr);
+	pgalloc_tag_split(head, nr);
 
 	/* See comment in __split_huge_page_tail() */
 	if (PageAnon(head)) {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 63dc2f8c7901..c4f0cd127e14 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2540,6 +2540,7 @@ void split_page(struct page *page, unsigned int order)
 	for (i = 1; i < (1 << order); i++)
 		set_page_refcounted(page + i);
 	split_page_owner(page, 1 << order);
+	pgalloc_tag_split(page, 1 << order);
 	split_page_memcg(page, 1 << order);
 }
 EXPORT_SYMBOL_GPL(split_page);
@@ -4669,6 +4670,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
 		struct page *last = page + nr;
 
 		split_page_owner(page, 1 << order);
+		pgalloc_tag_split(page, 1 << order);
 		split_page_memcg(page, 1 << order);
 		while (page < --last)
 			set_page_refcounted(last);
-- 
2.42.0.758.gaed0368e0e-goog

From nobody Thu Jan 1 07:45:32 2026
Date: Tue, 24 Oct 2023 06:46:18 -0700
In-Reply-To: <20231024134637.3120277-1-surenb@google.com>
Message-ID: <20231024134637.3120277-22-surenb@google.com>
Subject: [PATCH v2 21/39] mm/page_ext: enable early_page_ext when CONFIG_MEM_ALLOC_PROFILING_DEBUG=y
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org

For all page allocations to be tagged, page_ext has to be initialized
before the first page allocation. Early tasks allocate their stacks using
the page allocator before alloc_node_page_ext() initializes the page_ext
area, unless early_page_ext is enabled. These allocations will therefore
generate a warning when CONFIG_MEM_ALLOC_PROFILING_DEBUG is enabled.
Enable early_page_ext whenever CONFIG_MEM_ALLOC_PROFILING_DEBUG=y to
ensure page_ext is initialized prior to any page allocation. This carries
all the negative effects associated with early_page_ext, such as a
possibly longer boot time, so enable it only when debugging with
CONFIG_MEM_ALLOC_PROFILING_DEBUG and not universally for
CONFIG_MEM_ALLOC_PROFILING.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
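A toy model of the ordering hazard being closed here (a userspace sketch;
boot_alloc and page_ext_ready are invented stand-ins, not kernel API):
any allocation that lands before the metadata is initialized cannot be
tagged, which is exactly what the DEBUG config would flag for early task
stacks.

	#include <stdbool.h>
	#include <stdio.h>
	#include <stdlib.h>

	static bool page_ext_ready;	/* set once alloc_node_page_ext() has run */

	static void *boot_alloc(size_t size)
	{
		if (!page_ext_ready)
			fprintf(stderr, "untagged allocation of %zu bytes\n", size);
		return malloc(size);
	}

	int main(void)
	{
		void *stack = boot_alloc(16384);	/* early task stack: flagged */

		page_ext_ready = true;			/* page_ext initialized */
		void *later = boot_alloc(4096);		/* tagged normally */

		free(later);
		free(stack);
		return 0;
	}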
 mm/page_ext.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/mm/page_ext.c b/mm/page_ext.c
index 3c58fe8a24df..e7d8f1a5589e 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -95,7 +95,16 @@ unsigned long page_ext_size;
 
 static unsigned long total_usage;
 
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+/*
+ * To ensure correct allocation tagging for pages, page_ext should be available
+ * before the first page allocation. Otherwise early task stacks will be
+ * allocated before page_ext initialization and missing tags will be flagged.
+ */
+bool early_page_ext __meminitdata = true;
+#else
 bool early_page_ext __meminitdata;
+#endif
 static int __init setup_early_page_ext(char *str)
 {
 	early_page_ext = true;
-- 
2.42.0.758.gaed0368e0e-goog

From nobody Thu Jan 1 07:45:32 2026
Date: Tue, 24 Oct 2023 06:46:19 -0700
In-Reply-To: <20231024134637.3120277-1-surenb@google.com>
Message-ID: <20231024134637.3120277-23-surenb@google.com>
Subject: [PATCH v2 22/39] lib: add codetag reference into slabobj_ext
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org

To store a code tag for every slab object, a codetag reference is
embedded into slabobj_ext when CONFIG_MEM_ALLOC_PROFILING=y.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Co-developed-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
---
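A compile-time sketch of the resulting extension record (standalone C
with the surrounding kernel types stubbed out; obj_cgroup and codetag_ref
here are placeholders): the codetag reference rides alongside the memcg
pointer, each compiled in only under its own config, so caches pay
per-object overhead only for the users that are enabled.

	#include <stdio.h>

	/* stand-ins for the kernel types, just to make the layout visible */
	struct obj_cgroup;
	union codetag_ref { void *ct; };

	#define CONFIG_MEMCG_KMEM 1
	#define CONFIG_MEM_ALLOC_PROFILING 1

	struct slabobj_ext {
	#ifdef CONFIG_MEMCG_KMEM
		struct obj_cgroup *objcg;
	#endif
	#ifdef CONFIG_MEM_ALLOC_PROFILING
		union codetag_ref ref;
	#endif
	} __attribute__((aligned(8)));

	int main(void)
	{
		/* one slabobj_ext per object; object i uses slot i of the
		 * vector, located via obj_to_index() in the real code */
		struct slabobj_ext exts[4] = {0};

		printf("per-object overhead: %zu bytes\n", sizeof(exts[0]));
		return exts[1].ref.ct == NULL ? 0 : 1;
	}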
-- 
2.42.0.758.gaed0368e0e-goog

From nobody Thu Jan 1 07:45:32 2026
Date: Tue, 24 Oct 2023 06:46:20 -0700
In-Reply-To: <20231024134637.3120277-1-surenb@google.com>
References: <20231024134637.3120277-1-surenb@google.com>
Message-ID: <20231024134637.3120277-24-surenb@google.com>
Subject: [PATCH v2 23/39] mm/slab: add allocation accounting into slab allocation and free paths
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Account slab allocations using the codetag reference embedded into
slabobj_ext.

Signed-off-by: Suren Baghdasaryan
Co-developed-by: Kent Overstreet
Signed-off-by: Kent Overstreet
---
 include/linux/slab_def.h |  2 +-
 include/linux/slub_def.h |  4 ++--
 mm/slab.c                |  4 +++-
 mm/slab.h                | 32 ++++++++++++++++++++++++++++++++
 4 files changed, 38 insertions(+), 4 deletions(-)

diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index a61e7d55d0d3..23f14dcb8d5b 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -107,7 +107,7 @@ static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *sla
  *   reciprocal_divide(offset, cache->reciprocal_buffer_size)
  */
 static inline unsigned int obj_to_index(const struct kmem_cache *cache,
-					const struct slab *slab, void *obj)
+					const struct slab *slab, const void *obj)
 {
 	u32 offset = (obj - slab->s_mem);
 	return reciprocal_divide(offset, cache->reciprocal_buffer_size);
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index deb90cf4bffb..43fda4a5f23a 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -182,14 +182,14 @@ static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *sla
 
 /* Determine object index from a given position */
 static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
-					  void *addr, void *obj)
+					  void *addr, const void *obj)
 {
 	return reciprocal_divide(kasan_reset_tag(obj) - addr,
 				 cache->reciprocal_size);
 }
 
 static inline unsigned int obj_to_index(const struct kmem_cache *cache,
-					const struct slab *slab, void *obj)
+					const struct slab *slab, const void *obj)
 {
 	if (is_kfence_address(obj))
 		return 0;
diff --git a/mm/slab.c b/mm/slab.c
index cefcb7499b6c..18923f5f05b5 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3348,9 +3348,11 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac)
 static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp,
 					 unsigned long caller)
 {
+	struct slab *slab = virt_to_slab(objp);
 	bool init;
 
-	memcg_slab_free_hook(cachep, virt_to_slab(objp), &objp, 1);
+	memcg_slab_free_hook(cachep, slab, &objp, 1);
+	alloc_tagging_slab_free_hook(cachep, slab, &objp, 1);
 
 	if (is_kfence_address(objp)) {
 		kmemleak_free_recursive(objp, cachep->flags);
diff --git a/mm/slab.h b/mm/slab.h
index 293210ed10a9..4859ce1f8808 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -533,6 +533,32 @@ prepare_slab_obj_exts_hook(struct kmem_cache *s, gfp_t flags, void *p)
 
 #endif /* CONFIG_SLAB_OBJ_EXT */
 
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+
+static inline void alloc_tagging_slab_free_hook(struct kmem_cache *s, struct slab *slab,
+						void **p, int objects)
+{
+	struct slabobj_ext *obj_exts;
+	int i;
+
+	obj_exts = slab_obj_exts(slab);
+	if (!obj_exts)
+		return;
+
+	for (i = 0; i < objects; i++) {
+		unsigned int off = obj_to_index(s, slab, p[i]);
+
+		alloc_tag_sub(&obj_exts[off].ref, s->size);
+	}
+}
+
+#else
+
+static inline void alloc_tagging_slab_free_hook(struct kmem_cache *s, struct slab *slab,
+						void **p, int objects) {}
+
+#endif /* CONFIG_MEM_ALLOC_PROFILING */
+
 #ifdef CONFIG_MEMCG_KMEM
 void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
 		     enum node_stat_item idx, int nr);
@@ -827,6 +853,12 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 					 s->flags, flags);
 		kmsan_slab_alloc(s, p[i], flags);
 		obj_exts = prepare_slab_obj_exts_hook(s, flags, p[i]);
+
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+		/* obj_exts can be allocated for other reasons */
+		if (likely(obj_exts) && mem_alloc_profiling_enabled())
+			alloc_tag_add(&obj_exts->ref, current->alloc_tag, s->size);
+#endif
 	}
 
 	memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
-- 
2.42.0.758.gaed0368e0e-goog
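
The two hooks in this patch pair up: alloc_tag_add() in
slab_post_alloc_hook() charges s->size to the caller's tag, and
alloc_tag_sub() in alloc_tagging_slab_free_hook() reverses that charge
at free time, so a tag's live counters equal what is currently
allocated from its call site. A toy model of that invariant, with
hypothetical types (the kernel's real counters are per-CPU):

/* Toy model only; names and layout are illustrative, not the kernel's. */
struct tag_counters {
	long bytes;	/* live bytes attributed to one call site */
	long calls;	/* live allocations from that call site */
};

static void toy_tag_add(struct tag_counters *t, size_t size)
{
	t->bytes += size;	/* mirrors alloc_tag_add() on allocation */
	t->calls++;
}

static void toy_tag_sub(struct tag_counters *t, size_t size)
{
	t->bytes -= size;	/* mirrors alloc_tag_sub() on free */
	t->calls--;
}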
From nobody Thu Jan 1 07:45:32 2026
Date: Tue, 24 Oct 2023 06:46:21 -0700
In-Reply-To: <20231024134637.3120277-1-surenb@google.com>
References: <20231024134637.3120277-1-surenb@google.com>
Message-ID: <20231024134637.3120277-25-surenb@google.com>
Subject: [PATCH v2 24/39] mm/slab: enable slab allocation tagging for kmalloc and friends
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Redefine kmalloc, krealloc, kzalloc, kcalloc, etc. to record allocations
and deallocations done by these functions.
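
Each symbol follows the same pattern below: the implementation is
renamed to a _noprof variant and the original name becomes a macro that
funnels through alloc_hooks() from the codetag patches earlier in the
series. Roughly, as a sketch of the idea rather than the exact
definition from those patches:

/*
 * Approximate sketch of the wrapper added earlier in the series in
 * <linux/alloc_tag.h>; details here are assumptions, not quotations.
 */
#define alloc_hooks(_do_alloc)						\
({									\
	typeof(_do_alloc) _res;						\
	DEFINE_ALLOC_TAG(_alloc_tag, _old);	/* tag this file:line */\
									\
	_res = _do_alloc;		/* e.g. kmalloc_noprof(...) */	\
	alloc_tag_restore(&_alloc_tag, _old);				\
	_res;								\
})

The tag installed in current->alloc_tag is what slab_post_alloc_hook()
picks up, which is why the macro must expand at the caller rather than
inside the allocator.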
Signed-off-by: Suren Baghdasaryan
Co-developed-by: Kent Overstreet
Signed-off-by: Kent Overstreet
---
 include/linux/fortify-string.h |   5 +-
 include/linux/slab.h           | 173 ++++++++++++++++-----------------
 include/linux/string.h         |   4 +-
 mm/slab.c                      |  18 ++--
 mm/slab_common.c               |  38 ++++----
 mm/slub.c                      |  19 ++--
 mm/util.c                      |  20 ++--
 7 files changed, 137 insertions(+), 140 deletions(-)

diff --git a/include/linux/fortify-string.h b/include/linux/fortify-string.h
index da51a83b2829..11319e7634a4 100644
--- a/include/linux/fortify-string.h
+++ b/include/linux/fortify-string.h
@@ -752,9 +752,9 @@ __FORTIFY_INLINE void *memchr_inv(const void * const POS0 p, int c, size_t size)
 	return __real_memchr_inv(p, c, size);
 }
 
-extern void *__real_kmemdup(const void *src, size_t len, gfp_t gfp) __RENAME(kmemdup)
+extern void *__real_kmemdup(const void *src, size_t len, gfp_t gfp) __RENAME(kmemdup_noprof)
 								    __realloc_size(2);
-__FORTIFY_INLINE void *kmemdup(const void * const POS0 p, size_t size, gfp_t gfp)
+__FORTIFY_INLINE void *kmemdup_noprof(const void * const POS0 p, size_t size, gfp_t gfp)
 {
 	const size_t p_size = __struct_size(p);
 
@@ -764,6 +764,7 @@ __FORTIFY_INLINE void *kmemdup(const void * const POS0 p, size_t size, gfp_t gfp)
 		fortify_panic(__func__);
 	return __real_kmemdup(p, size, gfp);
 }
+#define kmemdup(...)	alloc_hooks(kmemdup_noprof(__VA_ARGS__))
 
 /**
  * strcpy - Copy a string into another string buffer
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 11ef3d364b2b..0543e0f76c60 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -230,7 +230,9 @@ int kmem_cache_shrink(struct kmem_cache *s);
 /*
  * Common kmalloc functions provided by all allocators
  */
-void * __must_check krealloc(const void *objp, size_t new_size, gfp_t flags) __realloc_size(2);
+void * __must_check krealloc_noprof(const void *objp, size_t new_size, gfp_t flags) __realloc_size(2);
+#define krealloc(...)			alloc_hooks(krealloc_noprof(__VA_ARGS__))
+
 void kfree(const void *objp);
 void kfree_sensitive(const void *objp);
 size_t __ksize(const void *objp);
@@ -491,7 +493,10 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
 static_assert(PAGE_SHIFT <= 20);
 #define kmalloc_index(s) __kmalloc_index(s, true)
 
-void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __alloc_size(1);
+#include <linux/alloc_tag.h>
+
+void *__kmalloc_noprof(size_t size, gfp_t flags) __assume_kmalloc_alignment __alloc_size(1);
+#define __kmalloc(...)			alloc_hooks(__kmalloc_noprof(__VA_ARGS__))
 
 /**
  * kmem_cache_alloc - Allocate an object
@@ -503,9 +508,13 @@ void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __alloc_siz
  *
  * Return: pointer to the new object or %NULL in case of error
  */
-void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags) __assume_slab_alignment __malloc;
-void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
-			   gfp_t gfpflags) __assume_slab_alignment __malloc;
+void *kmem_cache_alloc_noprof(struct kmem_cache *cachep, gfp_t flags) __assume_slab_alignment __malloc;
+#define kmem_cache_alloc(...)		alloc_hooks(kmem_cache_alloc_noprof(__VA_ARGS__))
+
+void *kmem_cache_alloc_lru_noprof(struct kmem_cache *s, struct list_lru *lru,
+			   gfp_t gfpflags) __assume_slab_alignment __malloc;
+#define kmem_cache_alloc_lru(...)	alloc_hooks(kmem_cache_alloc_lru_noprof(__VA_ARGS__))
+
 void kmem_cache_free(struct kmem_cache *s, void *objp);
 
 /*
@@ -516,29 +525,40 @@ void kmem_cache_free(struct kmem_cache *s, void *objp);
  * Note that interrupts must be enabled when calling these functions.
  */
 void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p);
-int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, void **p);
+
+int kmem_cache_alloc_bulk_noprof(struct kmem_cache *s, gfp_t flags, size_t size, void **p);
+#define kmem_cache_alloc_bulk(...)	alloc_hooks(kmem_cache_alloc_bulk_noprof(__VA_ARGS__))
 
 static __always_inline void kfree_bulk(size_t size, void **p)
 {
 	kmem_cache_free_bulk(NULL, size, p);
 }
 
-void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment
-							 __alloc_size(1);
-void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment
-									  __malloc;
+void *__kmalloc_node_noprof(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment
+							 __alloc_size(1);
+#define __kmalloc_node(...)		alloc_hooks(__kmalloc_node_noprof(__VA_ARGS__))
 
-void *kmalloc_trace(struct kmem_cache *s, gfp_t flags, size_t size)
+void *kmem_cache_alloc_node_noprof(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment
+									  __malloc;
+#define kmem_cache_alloc_node(...)	alloc_hooks(kmem_cache_alloc_node_noprof(__VA_ARGS__))
+
+void *kmalloc_trace_noprof(struct kmem_cache *s, gfp_t flags, size_t size)
 		    __assume_kmalloc_alignment __alloc_size(3);
 
-void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
-			 int node, size_t size) __assume_kmalloc_alignment
+void *kmalloc_node_trace_noprof(struct kmem_cache *s, gfp_t gfpflags,
+			 int node, size_t size) __assume_kmalloc_alignment
 						__alloc_size(4);
-void *kmalloc_large(size_t size, gfp_t flags) __assume_page_alignment
+#define kmalloc_trace(...)		alloc_hooks(kmalloc_trace_noprof(__VA_ARGS__))
+
+#define kmalloc_node_trace(...)	alloc_hooks(kmalloc_node_trace_noprof(__VA_ARGS__))
+
+void *kmalloc_large_noprof(size_t size, gfp_t flags) __assume_page_alignment
 			      __alloc_size(1);
+#define kmalloc_large(...)		alloc_hooks(kmalloc_large_noprof(__VA_ARGS__))
 
-void *kmalloc_large_node(size_t size, gfp_t flags, int node) __assume_page_alignment
+void *kmalloc_large_node_noprof(size_t size, gfp_t flags, int node) __assume_page_alignment
 			     __alloc_size(1);
+#define kmalloc_large_node(...)	alloc_hooks(kmalloc_large_node_noprof(__VA_ARGS__))
 
 /**
  * kmalloc - allocate kernel memory
@@ -594,37 +614,39 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node) __assume_page_align
  *	Try really hard to succeed the allocation but fail
  *	eventually.
  */
-static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
+static __always_inline __alloc_size(1) void *kmalloc_noprof(size_t size, gfp_t flags)
 {
 	if (__builtin_constant_p(size) && size) {
 		unsigned int index;
 
 		if (size > KMALLOC_MAX_CACHE_SIZE)
-			return kmalloc_large(size, flags);
+			return kmalloc_large_noprof(size, flags);
 
 		index = kmalloc_index(size);
-		return kmalloc_trace(
+		return kmalloc_trace_noprof(
 				kmalloc_caches[kmalloc_type(flags, _RET_IP_)][index],
 				flags, size);
 	}
-	return __kmalloc(size, flags);
+	return __kmalloc_noprof(size, flags);
 }
+#define kmalloc(...)			alloc_hooks(kmalloc_noprof(__VA_ARGS__))
 
-static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
+static __always_inline __alloc_size(1) void *kmalloc_node_noprof(size_t size, gfp_t flags, int node)
 {
 	if (__builtin_constant_p(size) && size) {
 		unsigned int index;
 
 		if (size > KMALLOC_MAX_CACHE_SIZE)
-			return kmalloc_large_node(size, flags, node);
+			return kmalloc_large_node_noprof(size, flags, node);
 
 		index = kmalloc_index(size);
-		return kmalloc_node_trace(
+		return kmalloc_node_trace_noprof(
 				kmalloc_caches[kmalloc_type(flags, _RET_IP_)][index],
 				flags, node, size);
 	}
-	return __kmalloc_node(size, flags, node);
+	return __kmalloc_node_noprof(size, flags, node);
 }
+#define kmalloc_node(...)		alloc_hooks(kmalloc_node_noprof(__VA_ARGS__))
 
 /**
  * kmalloc_array - allocate memory for an array.
@@ -632,16 +654,17 @@ static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t fla
  * @size: element size.
 * @flags: the type of memory to allocate (see kmalloc).
 */
-static inline __alloc_size(1, 2) void *kmalloc_array(size_t n, size_t size, gfp_t flags)
+static inline __alloc_size(1, 2) void *kmalloc_array_noprof(size_t n, size_t size, gfp_t flags)
 {
 	size_t bytes;
 
 	if (unlikely(check_mul_overflow(n, size, &bytes)))
 		return NULL;
 	if (__builtin_constant_p(n) && __builtin_constant_p(size))
-		return kmalloc(bytes, flags);
-	return __kmalloc(bytes, flags);
+		return kmalloc_noprof(bytes, flags);
+	return kmalloc_noprof(bytes, flags);
 }
+#define kmalloc_array(...)		alloc_hooks(kmalloc_array_noprof(__VA_ARGS__))
 
 /**
  * krealloc_array - reallocate memory for an array.
@@ -650,18 +673,19 @@ static inline __alloc_size(1, 2) void *kmalloc_array(size_t n, size_t size, gfp_
 * @new_size: new size of a single member of the array
 * @flags: the type of memory to allocate (see kmalloc)
 */
-static inline __realloc_size(2, 3) void * __must_check krealloc_array(void *p,
-								       size_t new_n,
-								       size_t new_size,
-								       gfp_t flags)
+static inline __realloc_size(2, 3) void * __must_check krealloc_array_noprof(void *p,
+								       size_t new_n,
+								       size_t new_size,
+								       gfp_t flags)
 {
 	size_t bytes;
 
 	if (unlikely(check_mul_overflow(new_n, new_size, &bytes)))
 		return NULL;
 
-	return krealloc(p, bytes, flags);
+	return krealloc_noprof(p, bytes, flags);
 }
+#define krealloc_array(...)		alloc_hooks(krealloc_array_noprof(__VA_ARGS__))
 
 /**
  * kcalloc - allocate memory for an array. The memory is set to zero.
@@ -669,16 +693,11 @@ static inline __realloc_size(2, 3) void * __must_check krealloc_array(void *p,
 * @size: element size.
 * @flags: the type of memory to allocate (see kmalloc).
 */
-static inline __alloc_size(1, 2) void *kcalloc(size_t n, size_t size, gfp_t flags)
-{
-	return kmalloc_array(n, size, flags | __GFP_ZERO);
-}
+#define kcalloc(_n, _size, _flags)	kmalloc_array(_n, _size, (_flags) | __GFP_ZERO)
 
-void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
+void *kmalloc_node_track_caller_noprof(size_t size, gfp_t flags, int node,
 				  unsigned long caller) __alloc_size(1);
-#define kmalloc_node_track_caller(size, flags, node) \
-	__kmalloc_node_track_caller(size, flags, node, \
-				    _RET_IP_)
+#define kmalloc_node_track_caller(...)	alloc_hooks(kmalloc_node_track_caller_noprof(__VA_ARGS__, _RET_IP_))
 
 /*
  * kmalloc_track_caller is a special version of kmalloc that records the
@@ -688,11 +707,9 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
 * allocator where we care about the real place the memory allocation
 * request comes from.
 */
-#define kmalloc_track_caller(size, flags) \
-	__kmalloc_node_track_caller(size, flags, \
-				    NUMA_NO_NODE, _RET_IP_)
+#define kmalloc_track_caller(...)	kmalloc_node_track_caller(__VA_ARGS__, NUMA_NO_NODE)
 
-static inline __alloc_size(1, 2) void *kmalloc_array_node(size_t n, size_t size, gfp_t flags,
+static inline __alloc_size(1, 2) void *kmalloc_array_node_noprof(size_t n, size_t size, gfp_t flags,
 							   int node)
 {
 	size_t bytes;
@@ -700,75 +717,51 @@ static inline __alloc_size(1, 2) void *kmalloc_array_node(size_t n, size_t size,
 	if (unlikely(check_mul_overflow(n, size, &bytes)))
 		return NULL;
 	if (__builtin_constant_p(n) && __builtin_constant_p(size))
-		return kmalloc_node(bytes, flags, node);
-	return __kmalloc_node(bytes, flags, node);
+		return kmalloc_node_noprof(bytes, flags, node);
+	return __kmalloc_node_noprof(bytes, flags, node);
 }
+#define kmalloc_array_node(...)	alloc_hooks(kmalloc_array_node_noprof(__VA_ARGS__))
 
-static inline __alloc_size(1, 2) void *kcalloc_node(size_t n, size_t size, gfp_t flags, int node)
-{
-	return kmalloc_array_node(n, size, flags | __GFP_ZERO, node);
-}
+#define kcalloc_node(_n, _size, _flags, _node)	kmalloc_array_node(_n, _size, (_flags) | __GFP_ZERO, _node)
 
 /*
  * Shortcuts
  */
-static inline void *kmem_cache_zalloc(struct kmem_cache *k, gfp_t flags)
-{
-	return kmem_cache_alloc(k, flags | __GFP_ZERO);
-}
+#define kmem_cache_zalloc(_k, _flags)	kmem_cache_alloc(_k, (_flags)|__GFP_ZERO)
 
 /**
  * kzalloc - allocate memory. The memory is set to zero.
  * @size: how many bytes of memory are required.
 * @flags: the type of memory to allocate (see kmalloc).
 */
-static inline __alloc_size(1) void *kzalloc(size_t size, gfp_t flags)
+static inline __alloc_size(1) void *kzalloc_noprof(size_t size, gfp_t flags)
 {
-	return kmalloc(size, flags | __GFP_ZERO);
+	return kmalloc_noprof(size, flags | __GFP_ZERO);
 }
+#define kzalloc(...)			alloc_hooks(kzalloc_noprof(__VA_ARGS__))
+#define kzalloc_node(_size, _flags, _node)	kmalloc_node(_size, (_flags)|__GFP_ZERO, _node)
 
-/**
- * kzalloc_node - allocate zeroed memory from a particular memory node.
- * @size: how many bytes of memory are required.
- * @flags: the type of memory to allocate (see kmalloc).
- * @node: memory node from which to allocate
- */
-static inline __alloc_size(1) void *kzalloc_node(size_t size, gfp_t flags, int node)
-{
-	return kmalloc_node(size, flags | __GFP_ZERO, node);
-}
+extern void *kvmalloc_node_noprof(size_t size, gfp_t flags, int node) __alloc_size(1);
+#define kvmalloc_node(...)		alloc_hooks(kvmalloc_node_noprof(__VA_ARGS__))
 
-extern void *kvmalloc_node(size_t size, gfp_t flags, int node) __alloc_size(1);
-static inline __alloc_size(1) void *kvmalloc(size_t size, gfp_t flags)
-{
-	return kvmalloc_node(size, flags, NUMA_NO_NODE);
-}
-static inline __alloc_size(1) void *kvzalloc_node(size_t size, gfp_t flags, int node)
-{
-	return kvmalloc_node(size, flags | __GFP_ZERO, node);
-}
-static inline __alloc_size(1) void *kvzalloc(size_t size, gfp_t flags)
-{
-	return kvmalloc(size, flags | __GFP_ZERO);
-}
+#define kvmalloc(_size, _flags)		kvmalloc_node(_size, _flags, NUMA_NO_NODE)
+#define kvzalloc(_size, _flags)		kvmalloc(_size, _flags|__GFP_ZERO)
 
-static inline __alloc_size(1, 2) void *kvmalloc_array(size_t n, size_t size, gfp_t flags)
-{
-	size_t bytes;
-
-	if (unlikely(check_mul_overflow(n, size, &bytes)))
-		return NULL;
+#define kvzalloc_node(_size, _flags, _node)	kvmalloc_node(_size, _flags|__GFP_ZERO, _node)
 
-	return kvmalloc(bytes, flags);
-}
+#define kvmalloc_array(_n, _size, _flags)					\
+({										\
+	size_t _bytes;								\
+										\
+	!check_mul_overflow(_n, _size, &_bytes) ? kvmalloc(_bytes, _flags) : NULL; \
+})
 
-static inline __alloc_size(1, 2) void *kvcalloc(size_t n, size_t size, gfp_t flags)
-{
-	return kvmalloc_array(n, size, flags | __GFP_ZERO);
-}
+#define kvcalloc(_n, _size, _flags)	kvmalloc_array(_n, _size, _flags|__GFP_ZERO)
 
-extern void *kvrealloc(const void *p, size_t oldsize, size_t newsize, gfp_t flags)
+extern void *kvrealloc_noprof(const void *p, size_t oldsize, size_t newsize, gfp_t flags)
 		      __realloc_size(3);
+#define kvrealloc(...)			alloc_hooks(kvrealloc_noprof(__VA_ARGS__))
+
 extern void kvfree(const void *addr);
 extern void kvfree_sensitive(const void *addr, size_t len);
 
diff --git a/include/linux/string.h b/include/linux/string.h
index dbfc66400050..9516258d8117 100644
--- a/include/linux/string.h
+++ b/include/linux/string.h
@@ -176,7 +176,9 @@ extern void kfree_const(const void *x);
 extern char *kstrdup(const char *s, gfp_t gfp) __malloc;
 extern const char *kstrdup_const(const char *s, gfp_t gfp);
 extern char *kstrndup(const char *s, size_t len, gfp_t gfp);
-extern void *kmemdup(const void *src, size_t len, gfp_t gfp) __realloc_size(2);
+extern void *kmemdup_noprof(const void *src, size_t len, gfp_t gfp) __realloc_size(2);
+#define kmemdup(...)	alloc_hooks(kmemdup_noprof(__VA_ARGS__))
+
 extern void *kvmemdup(const void *src, size_t len, gfp_t gfp) __realloc_size(2);
 extern char *kmemdup_nul(const char *s, size_t len, gfp_t gfp);
 
diff --git a/mm/slab.c b/mm/slab.c
index 18923f5f05b5..f75519fa89b9 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3429,18 +3429,18 @@ void *__kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru,
 	return ret;
 }
 
-void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
+void *kmem_cache_alloc_noprof(struct kmem_cache *cachep, gfp_t flags)
 {
 	return __kmem_cache_alloc_lru(cachep, NULL, flags);
 }
-EXPORT_SYMBOL(kmem_cache_alloc);
+EXPORT_SYMBOL(kmem_cache_alloc_noprof);
 
-void *kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru,
+void *kmem_cache_alloc_lru_noprof(struct kmem_cache *cachep, struct list_lru *lru,
 			   gfp_t flags)
 {
 	return __kmem_cache_alloc_lru(cachep, lru, flags);
 }
-EXPORT_SYMBOL(kmem_cache_alloc_lru);
+EXPORT_SYMBOL(kmem_cache_alloc_lru_noprof);
 
 static __always_inline void
 cache_alloc_debugcheck_after_bulk(struct kmem_cache *s, gfp_t flags,
@@ -3452,8 +3452,8 @@ cache_alloc_debugcheck_after_bulk(struct kmem_cache *s, gfp_t flags,
 		p[i] = cache_alloc_debugcheck_after(s, flags, p[i], caller);
 }
 
-int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
-			  void **p)
+int kmem_cache_alloc_bulk_noprof(struct kmem_cache *s, gfp_t flags, size_t size,
+			  void **p)
 {
 	struct obj_cgroup *objcg = NULL;
 	unsigned long irqflags;
@@ -3491,7 +3491,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	kmem_cache_free_bulk(s, i, p);
 	return 0;
 }
-EXPORT_SYMBOL(kmem_cache_alloc_bulk);
+EXPORT_SYMBOL(kmem_cache_alloc_bulk_noprof);
 
 /**
  * kmem_cache_alloc_node - Allocate an object on the specified node
@@ -3506,7 +3506,7 @@ EXPORT_SYMBOL(kmem_cache_alloc_bulk);
 *
 * Return: pointer to the new object or %NULL in case of error
 */
-void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
+void *kmem_cache_alloc_node_noprof(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
 	void *ret = slab_alloc_node(cachep, NULL, flags, nodeid, cachep->object_size, _RET_IP_);
 
@@ -3514,7 +3514,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 
 	return ret;
 }
-EXPORT_SYMBOL(kmem_cache_alloc_node);
+EXPORT_SYMBOL(kmem_cache_alloc_node_noprof);
 
 void *__kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
 			      int nodeid, size_t orig_size,
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 446f406d2703..8ef5e47ff6a7 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1077,24 +1077,24 @@ void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller
 	return ret;
 }
 
-void *__kmalloc_node(size_t size, gfp_t flags, int node)
+void *__kmalloc_node_noprof(size_t size, gfp_t flags, int node)
 {
 	return __do_kmalloc_node(size, flags, node, _RET_IP_);
 }
-EXPORT_SYMBOL(__kmalloc_node);
+EXPORT_SYMBOL(__kmalloc_node_noprof);
 
-void *__kmalloc(size_t size, gfp_t flags)
+void *__kmalloc_noprof(size_t size, gfp_t flags)
 {
 	return __do_kmalloc_node(size, flags, NUMA_NO_NODE, _RET_IP_);
 }
-EXPORT_SYMBOL(__kmalloc);
+EXPORT_SYMBOL(__kmalloc_noprof);
 
-void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
-				  int node, unsigned long caller)
+void *kmalloc_node_track_caller_noprof(size_t size, gfp_t flags,
+				  int node, unsigned long caller)
 {
 	return __do_kmalloc_node(size, flags, node, caller);
 }
-EXPORT_SYMBOL(__kmalloc_node_track_caller);
+EXPORT_SYMBOL(kmalloc_node_track_caller_noprof);
 
 /**
  * kfree - free previously allocated memory
@@ -1161,7 +1161,7 @@ size_t __ksize(const void *object)
 	return slab_ksize(folio_slab(folio)->slab_cache);
 }
 
-void *kmalloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
+void *kmalloc_trace_noprof(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = __kmem_cache_alloc_node(s, gfpflags, NUMA_NO_NODE,
 					    size, _RET_IP_);
@@ -1171,9 +1171,9 @@ void *kmalloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 	ret = kasan_kmalloc(s, ret, size, gfpflags);
 	return ret;
 }
-EXPORT_SYMBOL(kmalloc_trace);
+EXPORT_SYMBOL(kmalloc_trace_noprof);
 
-void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
+void *kmalloc_node_trace_noprof(struct kmem_cache *s, gfp_t gfpflags,
 			 int node, size_t size)
 {
 	void *ret = __kmem_cache_alloc_node(s, gfpflags, node, size, _RET_IP_);
@@ -1183,7 +1183,7 @@ void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
 	ret = kasan_kmalloc(s, ret, size, gfpflags);
 	return ret;
 }
-EXPORT_SYMBOL(kmalloc_node_trace);
+EXPORT_SYMBOL(kmalloc_node_trace_noprof);
 
 gfp_t kmalloc_fix_flags(gfp_t flags)
 {
@@ -1213,7 +1213,7 @@ static void *__kmalloc_large_node(size_t size, gfp_t flags, int node)
 		flags = kmalloc_fix_flags(flags);
 
 	flags |= __GFP_COMP;
-	page = alloc_pages_node(node, flags, order);
+	page = alloc_pages_node_noprof(node, flags, order);
 	if (page) {
 		ptr = page_address(page);
 		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
@@ -1228,7 +1228,7 @@ static void *__kmalloc_large_node(size_t size, gfp_t flags, int node)
 	return ptr;
 }
 
-void *kmalloc_large(size_t size, gfp_t flags)
+void *kmalloc_large_noprof(size_t size, gfp_t flags)
 {
 	void *ret = __kmalloc_large_node(size, flags, NUMA_NO_NODE);
 
@@ -1236,9 +1236,9 @@ void *kmalloc_large(size_t size, gfp_t flags)
 		      flags, NUMA_NO_NODE);
 	return ret;
 }
-EXPORT_SYMBOL(kmalloc_large);
+EXPORT_SYMBOL(kmalloc_large_noprof);
 
-void *kmalloc_large_node(size_t size, gfp_t flags, int node)
+void *kmalloc_large_node_noprof(size_t size, gfp_t flags, int node)
 {
 	void *ret = __kmalloc_large_node(size, flags, node);
 
@@ -1246,7 +1246,7 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 		      flags, node);
 	return ret;
 }
-EXPORT_SYMBOL(kmalloc_large_node);
+EXPORT_SYMBOL(kmalloc_large_node_noprof);
 
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
 /* Randomize a generic freelist */
@@ -1460,7 +1460,7 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
 		return (void *)p;
 	}
 
-	ret = kmalloc_track_caller(new_size, flags);
+	ret = kmalloc_node_track_caller_noprof(new_size, flags, NUMA_NO_NODE, _RET_IP_);
 	if (ret && p) {
 		/* Disable KASAN checks as the object's redzone is accessed. */
 		kasan_disable_current();
@@ -1484,7 +1484,7 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
 *
 * Return: pointer to the allocated memory or %NULL in case of error
 */
-void *krealloc(const void *p, size_t new_size, gfp_t flags)
+void *krealloc_noprof(const void *p, size_t new_size, gfp_t flags)
 {
 	void *ret;
 
@@ -1499,7 +1499,7 @@ void *krealloc(const void *p, size_t new_size, gfp_t flags)
 
 	return ret;
 }
-EXPORT_SYMBOL(krealloc);
+EXPORT_SYMBOL(krealloc_noprof);
 
 /**
  * kfree_sensitive - Clear sensitive information in memory before freeing
diff --git a/mm/slub.c b/mm/slub.c
index d16643492320..f5e07d8802e2 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3497,18 +3497,18 @@ void *__kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
 	return ret;
 }
 
-void *kmem_cache_alloc(struct kmem_cache *s, gfp_t gfpflags)
+void *kmem_cache_alloc_noprof(struct kmem_cache *s, gfp_t gfpflags)
 {
 	return __kmem_cache_alloc_lru(s, NULL, gfpflags);
 }
-EXPORT_SYMBOL(kmem_cache_alloc);
+EXPORT_SYMBOL(kmem_cache_alloc_noprof);
 
-void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
+void *kmem_cache_alloc_lru_noprof(struct kmem_cache *s, struct list_lru *lru,
 			   gfp_t gfpflags)
 {
 	return __kmem_cache_alloc_lru(s, lru, gfpflags);
 }
-EXPORT_SYMBOL(kmem_cache_alloc_lru);
+EXPORT_SYMBOL(kmem_cache_alloc_lru_noprof);
 
 void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags,
 			      int node, size_t orig_size,
@@ -3518,7 +3518,7 @@ void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags,
 			       caller, orig_size);
 }
 
-void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
+void *kmem_cache_alloc_node_noprof(struct kmem_cache *s, gfp_t gfpflags, int node)
 {
 	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, s->object_size);
 
@@ -3526,7 +3526,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
 
 	return ret;
 }
-EXPORT_SYMBOL(kmem_cache_alloc_node);
+EXPORT_SYMBOL(kmem_cache_alloc_node_noprof);
 
 static noinline void free_to_partial_list(
 	struct kmem_cache *s, struct slab *slab,
@@ -3802,6 +3802,7 @@ static __fastpath_inline void slab_free(struct kmem_cache *s, struct slab *slab,
 				 unsigned long addr)
 {
 	memcg_slab_free_hook(s, slab, p, cnt);
+	alloc_tagging_slab_free_hook(s, slab, p, cnt);
 	/*
 	 * With KASAN enabled slab_free_freelist_hook modifies the freelist
 	 * to remove objects, whose reuse must be delayed.
@@ -4032,8 +4033,8 @@ static int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
 #endif /* CONFIG_SLUB_TINY */
 
 /* Note that interrupts must be enabled when calling this function. */
-int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
-			  void **p)
+int kmem_cache_alloc_bulk_noprof(struct kmem_cache *s, gfp_t flags, size_t size,
+			  void **p)
 {
 	int i;
 	struct obj_cgroup *objcg = NULL;
@@ -4057,7 +4058,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 			  slab_want_init_on_alloc(flags, s), s->object_size);
 	return i;
 }
-EXPORT_SYMBOL(kmem_cache_alloc_bulk);
+EXPORT_SYMBOL(kmem_cache_alloc_bulk_noprof);
 
 
 /*
diff --git a/mm/util.c b/mm/util.c
index 8cbbfd3a3d59..27ed6a5ac31a 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -115,7 +115,7 @@ char *kstrndup(const char *s, size_t max, gfp_t gfp)
 EXPORT_SYMBOL(kstrndup);
 
 /**
- * kmemdup - duplicate region of memory
+ * kmemdup_noprof - duplicate region of memory
 *
 * @src: memory region to duplicate
 * @len: memory region length
@@ -124,16 +124,16 @@ EXPORT_SYMBOL(kstrndup);
 * Return: newly allocated copy of @src or %NULL in case of error,
 * result is physically contiguous. Use kfree() to free.
 */
-void *kmemdup(const void *src, size_t len, gfp_t gfp)
+void *kmemdup_noprof(const void *src, size_t len, gfp_t gfp)
 {
 	void *p;
 
-	p = kmalloc_track_caller(len, gfp);
+	p = kmalloc_node_track_caller_noprof(len, gfp, NUMA_NO_NODE, _RET_IP_);
 	if (p)
 		memcpy(p, src, len);
 	return p;
 }
-EXPORT_SYMBOL(kmemdup);
+EXPORT_SYMBOL(kmemdup_noprof);
 
 /**
  * kvmemdup - duplicate region of memory
@@ -567,7 +567,7 @@ unsigned long vm_mmap(struct file *file, unsigned long addr,
 EXPORT_SYMBOL(vm_mmap);
 
 /**
- * kvmalloc_node - attempt to allocate physically contiguous memory, but upon
+ * kvmalloc_node_noprof - attempt to allocate physically contiguous memory, but upon
 *	failure, fall back to non-contiguous (vmalloc) allocation.
 * @size: size of the request.
 * @flags: gfp mask for the allocation - must be compatible (superset) with GFP_KERNEL.
@@ -582,7 +582,7 @@ EXPORT_SYMBOL(vm_mmap);
 *
 * Return: pointer to the allocated memory of %NULL in case of failure
 */
-void *kvmalloc_node(size_t size, gfp_t flags, int node)
+void *kvmalloc_node_noprof(size_t size, gfp_t flags, int node)
 {
 	gfp_t kmalloc_flags = flags;
 	void *ret;
@@ -604,7 +604,7 @@ void *kvmalloc_node(size_t size, gfp_t flags, int node)
 		kmalloc_flags &= ~__GFP_NOFAIL;
 	}
 
-	ret = kmalloc_node(size, kmalloc_flags, node);
+	ret = kmalloc_node_noprof(size, kmalloc_flags, node);
 
 	/*
	 * It doesn't really make sense to fallback to vmalloc for sub page
@@ -633,7 +633,7 @@ void *kvmalloc_node(size_t size, gfp_t flags, int node)
 			flags, PAGE_KERNEL, VM_ALLOW_HUGE_VMAP,
 			node, __builtin_return_address(0));
 }
-EXPORT_SYMBOL(kvmalloc_node);
+EXPORT_SYMBOL(kvmalloc_node_noprof);
 
 /**
  * kvfree() - Free memory.
@@ -672,7 +672,7 @@ void kvfree_sensitive(const void *addr, size_t len)
 }
 EXPORT_SYMBOL(kvfree_sensitive);
 
-void *kvrealloc(const void *p, size_t oldsize, size_t newsize, gfp_t flags)
+void *kvrealloc_noprof(const void *p, size_t oldsize, size_t newsize, gfp_t flags)
 {
 	void *newp;
 
@@ -685,7 +685,7 @@ void *kvrealloc(const void *p, size_t oldsize, size_t newsize, gfp_t flags)
 	kvfree(p);
 	return newp;
 }
-EXPORT_SYMBOL(kvrealloc);
+EXPORT_SYMBOL(kvrealloc_noprof);
 
 /**
  * __vmalloc_array - allocate memory for a virtually contiguous array.
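
Call sites stay untouched in the common case; only code that should not
be charged against itself, such as a wrapper whose caller ought to be
charged instead, uses the _noprof form directly. A hedged usage sketch;
make_foo(), struct foo and my_alloc() are hypothetical:

/* Ordinary caller: kmalloc is now a macro, so this allocation is
 * attributed to this file:line with no change to the source. */
static struct foo *make_foo(gfp_t gfp)
{
	return kmalloc(sizeof(struct foo), gfp);
}

/* Hypothetical wrapper that wants its *caller* accounted: call the
 * _noprof variant internally and wrap the whole thing once. */
static inline void *my_alloc_noprof(size_t size, gfp_t gfp)
{
	return kmalloc_noprof(size, gfp);
}
#define my_alloc(...)	alloc_hooks(my_alloc_noprof(__VA_ARGS__))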
-- 
2.42.0.758.gaed0368e0e-goog

From nobody Thu Jan 1 07:45:32 2026
Date: Tue, 24 Oct 2023 06:46:22 -0700
In-Reply-To: <20231024134637.3120277-1-surenb@google.com>
References: <20231024134637.3120277-1-surenb@google.com>
Message-ID: <20231024134637.3120277-26-surenb@google.com>
Subject: [PATCH v2 25/39] mm/slub: Mark slab_free_freelist_hook() __always_inline
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
From: Kent Overstreet

It seems we need to be more forceful with the compiler on this one.

Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
---
 mm/slub.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index f5e07d8802e2..222c16cef729 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1800,7 +1800,7 @@ static __always_inline bool slab_free_hook(struct kmem_cache *s,
 	return kasan_slab_free(s, x, init);
 }
 
-static inline bool slab_free_freelist_hook(struct kmem_cache *s,
+static __always_inline bool slab_free_freelist_hook(struct kmem_cache *s,
 						void **head, void **tail,
 						int *cnt)
 {
-- 
2.42.0.758.gaed0368e0e-goog
From nobody Thu Jan 1 07:45:32 2026
Date: Tue, 24 Oct 2023 06:46:23 -0700
In-Reply-To: <20231024134637.3120277-1-surenb@google.com>
References: <20231024134637.3120277-1-surenb@google.com>
Message-ID: <20231024134637.3120277-27-surenb@google.com>
Subject: [PATCH v2 26/39] mempool: Hook up to memory allocation profiling
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
From: Kent Overstreet

This adds hooks to mempools so that mempool-backed allocations are
annotated at their true source line and show up correctly in
/sys/kernel/debug/allocations. Various inline functions are converted
to wrappers so that alloc_hooks() needs to be invoked in fewer places.

Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
---
 include/linux/mempool.h | 73 ++++++++++++++++++++---------------------
 mm/mempool.c            | 34 ++++++++-----------
 2 files changed, 48 insertions(+), 59 deletions(-)

diff --git a/include/linux/mempool.h b/include/linux/mempool.h
index 4aae6c06c5f2..9fa126aa19b5 100644
--- a/include/linux/mempool.h
+++ b/include/linux/mempool.h
@@ -5,6 +5,8 @@
 #ifndef _LINUX_MEMPOOL_H
 #define _LINUX_MEMPOOL_H
 
+#include <linux/alloc_tag.h>
+#include <linux/sched.h>
 #include <linux/wait.h>
 #include <linux/compiler.h>
 
@@ -39,18 +41,32 @@ void mempool_exit(mempool_t *pool);
 int mempool_init_node(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn,
 		      mempool_free_t *free_fn, void *pool_data,
 		      gfp_t gfp_mask, int node_id);
-int mempool_init(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn,
+
+int mempool_init_noprof(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn,
 		 mempool_free_t *free_fn, void *pool_data);
+#define mempool_init(...)						\
+	alloc_hooks(mempool_init_noprof(__VA_ARGS__))
 
 extern mempool_t *mempool_create(int min_nr, mempool_alloc_t *alloc_fn,
 			mempool_free_t *free_fn, void *pool_data);
-extern mempool_t *mempool_create_node(int min_nr, mempool_alloc_t *alloc_fn,
+
+extern mempool_t *mempool_create_node_noprof(int min_nr, mempool_alloc_t *alloc_fn,
 			mempool_free_t *free_fn, void *pool_data,
 			gfp_t gfp_mask, int nid);
+#define mempool_create_node(...)					\
+	alloc_hooks(mempool_create_node_noprof(__VA_ARGS__))
+
+#define mempool_create(_min_nr, _alloc_fn, _free_fn, _pool_data)	\
+	mempool_create_node(_min_nr, _alloc_fn, _free_fn, _pool_data,	\
+			    GFP_KERNEL, NUMA_NO_NODE)
 
 extern int mempool_resize(mempool_t *pool, int new_min_nr);
 extern void mempool_destroy(mempool_t *pool);
-extern void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask) __malloc;
+
+extern void *mempool_alloc_noprof(mempool_t *pool, gfp_t gfp_mask) __malloc;
+#define mempool_alloc(...)						\
+	alloc_hooks(mempool_alloc_noprof(__VA_ARGS__))
+
 extern void mempool_free(void *element, mempool_t *pool);
 
 /*
@@ -61,19 +77,10 @@ extern void mempool_free(void *element, mempool_t *pool);
 void *mempool_alloc_slab(gfp_t gfp_mask, void *pool_data);
 void mempool_free_slab(void *element, void *pool_data);
 
-static inline int
-mempool_init_slab_pool(mempool_t *pool, int min_nr, struct kmem_cache *kc)
-{
-	return mempool_init(pool, min_nr, mempool_alloc_slab,
-			    mempool_free_slab, (void *) kc);
-}
-
-static inline mempool_t *
-mempool_create_slab_pool(int min_nr, struct kmem_cache *kc)
-{
-	return mempool_create(min_nr, mempool_alloc_slab, mempool_free_slab,
-			      (void *) kc);
-}
+#define mempool_init_slab_pool(_pool, _min_nr, _kc) \
+	mempool_init(_pool, (_min_nr), mempool_alloc_slab, mempool_free_slab, (void *)(_kc))
+#define mempool_create_slab_pool(_min_nr, _kc) \
+	mempool_create((_min_nr), mempool_alloc_slab, mempool_free_slab, (void *)(_kc))
 
 /*
  * a mempool_alloc_t and a mempool_free_t to kmalloc and kfree the
@@ -82,17 +89,12 @@ mempool_create_slab_pool(int min_nr, struct kmem_cache *kc)
 void *mempool_kmalloc(gfp_t gfp_mask, void *pool_data);
 void mempool_kfree(void *element, void *pool_data);
 
-static inline int mempool_init_kmalloc_pool(mempool_t *pool, int min_nr, size_t size)
-{
-	return mempool_init(pool, min_nr, mempool_kmalloc,
-			    mempool_kfree, (void *) size);
-}
-
-static inline mempool_t *mempool_create_kmalloc_pool(int min_nr, size_t size)
-{
-	return mempool_create(min_nr, mempool_kmalloc, mempool_kfree,
-			      (void *) size);
-}
+#define mempool_init_kmalloc_pool(_pool, _min_nr, _size) \
+	mempool_init(_pool, (_min_nr), mempool_kmalloc, mempool_kfree, \
+		     (void *)(unsigned long)(_size))
+#define mempool_create_kmalloc_pool(_min_nr, _size) \
+	mempool_create((_min_nr), mempool_kmalloc, mempool_kfree, \
+		       (void *)(unsigned long)(_size))
 
 /*
  * A mempool_alloc_t and mempool_free_t for a simple page allocator that
@@ -101,16 +103,11 @@ static inline mempool_t *mempool_create_kmalloc_pool(int min_nr, size_t size)
 void *mempool_alloc_pages(gfp_t gfp_mask, void *pool_data);
 void mempool_free_pages(void *element, void *pool_data);
 
-static inline int mempool_init_page_pool(mempool_t *pool, int min_nr, int order)
-{
-	return mempool_init(pool, min_nr, mempool_alloc_pages,
-			    mempool_free_pages, (void *)(long)order);
-}
-
-static inline mempool_t *mempool_create_page_pool(int min_nr, int order)
-{
-	return mempool_create(min_nr, mempool_alloc_pages, mempool_free_pages,
-			      (void *)(long)order);
-}
+#define mempool_init_page_pool(_pool, _min_nr, _order) \
+	mempool_init(_pool, (_min_nr), mempool_alloc_pages, \
+		     mempool_free_pages, (void *)(long)(_order))
+#define mempool_create_page_pool(_min_nr, _order) \
+	mempool_create((_min_nr), mempool_alloc_pages, \
+		       mempool_free_pages, (void *)(long)(_order))
 
 #endif /* _LINUX_MEMPOOL_H */
diff --git a/mm/mempool.c b/mm/mempool.c
index 734bcf5afbb7..4fd949178449 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -230,17 +230,17 @@ EXPORT_SYMBOL(mempool_init_node);
  *
  * Return: %0 on success, negative error code otherwise.
  */
-int mempool_init(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn,
-		 mempool_free_t *free_fn, void *pool_data)
+int mempool_init_noprof(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn,
+			mempool_free_t *free_fn, void *pool_data)
 {
 	return mempool_init_node(pool, min_nr, alloc_fn, free_fn,
 				 pool_data, GFP_KERNEL, NUMA_NO_NODE);
 
 }
-EXPORT_SYMBOL(mempool_init);
+EXPORT_SYMBOL(mempool_init_noprof);
 
 /**
- * mempool_create - create a memory pool
+ * mempool_create_node - create a memory pool
  * @min_nr:    the minimum number of elements guaranteed to be
  *             allocated for this pool.
  * @alloc_fn:  user-defined element-allocation function.
@@ -255,17 +255,9 @@ EXPORT_SYMBOL(mempool_init);
  *
  * Return: pointer to the created memory pool object or %NULL on error.
  */
-mempool_t *mempool_create(int min_nr, mempool_alloc_t *alloc_fn,
-			  mempool_free_t *free_fn, void *pool_data)
-{
-	return mempool_create_node(min_nr, alloc_fn, free_fn, pool_data,
-				   GFP_KERNEL, NUMA_NO_NODE);
-}
-EXPORT_SYMBOL(mempool_create);
-
-mempool_t *mempool_create_node(int min_nr, mempool_alloc_t *alloc_fn,
-			       mempool_free_t *free_fn, void *pool_data,
-			       gfp_t gfp_mask, int node_id)
+mempool_t *mempool_create_node_noprof(int min_nr, mempool_alloc_t *alloc_fn,
+				      mempool_free_t *free_fn, void *pool_data,
+				      gfp_t gfp_mask, int node_id)
 {
 	mempool_t *pool;
 
@@ -281,7 +273,7 @@ mempool_t *mempool_create_node(int min_nr, mempool_alloc_t *alloc_fn,
 
 	return pool;
 }
-EXPORT_SYMBOL(mempool_create_node);
+EXPORT_SYMBOL(mempool_create_node_noprof);
 
 /**
  * mempool_resize - resize an existing memory pool
@@ -377,7 +369,7 @@ EXPORT_SYMBOL(mempool_resize);
  *
  * Return: pointer to the allocated element or %NULL on error.
  */
-void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
+void *mempool_alloc_noprof(mempool_t *pool, gfp_t gfp_mask)
 {
 	void *element;
 	unsigned long flags;
@@ -444,7 +436,7 @@ void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
 	finish_wait(&pool->wait, &wait);
 	goto repeat_alloc;
 }
-EXPORT_SYMBOL(mempool_alloc);
+EXPORT_SYMBOL(mempool_alloc_noprof);
 
 /**
  * mempool_free - return an element to the pool.
@@ -515,7 +507,7 @@ void *mempool_alloc_slab(gfp_t gfp_mask, void *pool_data)
 {
 	struct kmem_cache *mem = pool_data;
 	VM_BUG_ON(mem->ctor);
-	return kmem_cache_alloc(mem, gfp_mask);
+	return kmem_cache_alloc_noprof(mem, gfp_mask);
 }
 EXPORT_SYMBOL(mempool_alloc_slab);
 
@@ -533,7 +525,7 @@ EXPORT_SYMBOL(mempool_free_slab);
 void *mempool_kmalloc(gfp_t gfp_mask, void *pool_data)
 {
 	size_t size = (size_t)pool_data;
-	return kmalloc(size, gfp_mask);
+	return kmalloc_noprof(size, gfp_mask);
 }
 EXPORT_SYMBOL(mempool_kmalloc);
 
@@ -550,7 +542,7 @@ EXPORT_SYMBOL(mempool_kfree);
 void *mempool_alloc_pages(gfp_t gfp_mask, void *pool_data)
 {
 	int order = (int)(long)pool_data;
-	return alloc_pages(gfp_mask, order);
+	return alloc_pages_noprof(gfp_mask, order);
 }
 EXPORT_SYMBOL(mempool_alloc_pages);
 
-- 
2.42.0.758.gaed0368e0e-goog
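A note on the conversion above: alloc_hooks() works by planting a static
tag at its own expansion site, so it must expand in the final caller; had
the slab/kmalloc/page pool helpers stayed inline functions, every pool's
memory would have been charged to the single expansion inside mempool.h.
The userspace sketch below models the mechanism so the macro dance is
easier to follow; demo_alloc*, the struct alloc_tag layout, and
current_tag are illustrative stand-ins for the kernel's alloc_tag
machinery, not its actual implementation:

	#include <stdio.h>
	#include <stdlib.h>

	struct alloc_tag {
		const char *file;
		int line;
	};

	/* models current->alloc_tag */
	static __thread struct alloc_tag *current_tag;

	/* models DEFINE_ALLOC_TAG() + alloc_tag_restore() */
	#define alloc_hooks(_do_alloc)					\
	({								\
		static struct alloc_tag _tag = { __FILE__, __LINE__ };	\
		struct alloc_tag *_old = current_tag;			\
		current_tag = &_tag;					\
		typeof(_do_alloc) _res = (_do_alloc);			\
		current_tag = _old;					\
		_res;							\
	})

	/* the _noprof variant does the real work, charging current_tag */
	static void *demo_alloc_noprof(size_t size)
	{
		printf("%zu bytes charged to %s:%d\n",
		       size, current_tag->file, current_tag->line);
		return malloc(size);
	}
	#define demo_alloc(...) alloc_hooks(demo_alloc_noprof(__VA_ARGS__))

	int main(void)
	{
		void *p = demo_alloc(64);	/* attributed to this line */

		free(p);
		return 0;
	}

Because __LINE__ resolves where the macro is invoked, each demo_alloc()
call site gets its own tag, which is exactly why the mempool helpers are
converted to macros rather than kept as inline functions.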
Date: Tue, 24 Oct 2023 06:46:24 -0700
Message-ID: <20231024134637.3120277-28-surenb@google.com>
Subject: [PATCH v2 27/39] xfs: Memory allocation profiling fixups
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

From: Kent Overstreet

This adds an alloc_hooks() wrapper around kmem_alloc(), so that we can
have allocations accounted to the proper callsite.
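For illustration, the fixup follows the series' standard conversion
pattern; this is the shape of the kmem.h change below, shown in
isolation:

	/* before: kmem_alloc() called kmalloc(), so every xfs allocation
	 * would have been charged to the one call site in fs/xfs/kmem.c */
	extern void *kmem_alloc(size_t, xfs_km_flags_t);

	/* after: the inner call becomes kmalloc_noprof(), and the old name
	 * becomes a macro, so alloc_hooks() expands (and the allocation is
	 * accounted) at each individual xfs call site */
	extern void *kmem_alloc_noprof(size_t, xfs_km_flags_t);
	#define kmem_alloc(...)		alloc_hooks(kmem_alloc_noprof(__VA_ARGS__))
	#define kmem_zalloc(_size, _flags)	kmem_alloc((_size), (_flags) | KM_ZERO)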
Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
---
 fs/xfs/kmem.c |  4 ++--
 fs/xfs/kmem.h | 10 ++++------
 2 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/fs/xfs/kmem.c b/fs/xfs/kmem.c
index c557a030acfe..9aa57a4e2478 100644
--- a/fs/xfs/kmem.c
+++ b/fs/xfs/kmem.c
@@ -8,7 +8,7 @@
 #include "xfs_trace.h"
 
 void *
-kmem_alloc(size_t size, xfs_km_flags_t flags)
+kmem_alloc_noprof(size_t size, xfs_km_flags_t flags)
 {
 	int	retries = 0;
 	gfp_t	lflags = kmem_flags_convert(flags);
@@ -17,7 +17,7 @@ kmem_alloc(size_t size, xfs_km_flags_t flags)
 	trace_kmem_alloc(size, flags, _RET_IP_);
 
 	do {
-		ptr = kmalloc(size, lflags);
+		ptr = kmalloc_noprof(size, lflags);
 		if (ptr || (flags & KM_MAYFAIL))
 			return ptr;
 		if (!(++retries % 100))
diff --git a/fs/xfs/kmem.h b/fs/xfs/kmem.h
index b987dc2c6851..c4cf1dc2a7af 100644
--- a/fs/xfs/kmem.h
+++ b/fs/xfs/kmem.h
@@ -6,6 +6,7 @@
 #ifndef __XFS_SUPPORT_KMEM_H__
 #define __XFS_SUPPORT_KMEM_H__
 
+#include <linux/alloc_tag.h>
 #include <linux/slab.h>
 #include <linux/sched/mm.h>
 #include <linux/mm.h>
@@ -56,18 +57,15 @@ kmem_flags_convert(xfs_km_flags_t flags)
 	return lflags;
 }
 
-extern void *kmem_alloc(size_t, xfs_km_flags_t);
 static inline void  kmem_free(const void *ptr)
 {
 	kvfree(ptr);
 }
 
+extern void *kmem_alloc_noprof(size_t, xfs_km_flags_t);
+#define kmem_alloc(...)		alloc_hooks(kmem_alloc_noprof(__VA_ARGS__))
 
-static inline void *
-kmem_zalloc(size_t size, xfs_km_flags_t flags)
-{
-	return kmem_alloc(size, flags | KM_ZERO);
-}
+#define kmem_zalloc(_size, _flags)	kmem_alloc((_size), (_flags) | KM_ZERO)
 
 /*
  * Zone interfaces
-- 
2.42.0.758.gaed0368e0e-goog
Date: Tue, 24 Oct 2023 06:46:25 -0700
Message-ID: <20231024134637.3120277-29-surenb@google.com>
Subject: [PATCH v2 28/39] timekeeping: Fix a circular include dependency
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

From: Kent Overstreet

This avoids a circular header dependency in an upcoming patch by only
making hrtimer.h depend on percpu-defs.h.

Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
Cc: Thomas Gleixner
---
 include/linux/hrtimer.h        | 2 +-
 include/linux/time_namespace.h | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
index 0ee140176f10..e67349e84364 100644
--- a/include/linux/hrtimer.h
+++ b/include/linux/hrtimer.h
@@ -16,7 +16,7 @@
 #include <linux/rbtree.h>
 #include <linux/init.h>
 #include <linux/list.h>
-#include <linux/percpu.h>
+#include <linux/percpu-defs.h>
 #include <linux/seqlock.h>
 #include <linux/timer.h>
 #include <linux/timerqueue.h>
diff --git a/include/linux/time_namespace.h b/include/linux/time_namespace.h
index 03d9c5ac01d1..a9e61120d4e3 100644
--- a/include/linux/time_namespace.h
+++ b/include/linux/time_namespace.h
@@ -11,6 +11,8 @@
 struct user_namespace;
 extern struct user_namespace init_user_ns;
 
+struct vm_area_struct;
+
 struct timens_offsets {
 	struct timespec64 monotonic;
 	struct timespec64 boottime;
-- 
2.42.0.758.gaed0368e0e-goog
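The time_namespace.h hunk above uses the standard way to break such
cycles: when a header only deals in pointers to a type, a forward
declaration can replace the #include that would otherwise complete the
loop. A minimal sketch (file and function names are hypothetical):

	/* a.h - needs struct b only as an opaque pointer type */
	#ifndef A_H
	#define A_H

	struct b;			/* forward declaration; no "b.h" include */

	int a_frob(struct b *bp);	/* fine: sizeof(*bp) is never needed here */

	#endif /* A_H */

The compiler only needs the full definition of struct b where it is
dereferenced or its size is taken, so headers that merely pass pointers
around can stay out of the include graph.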
Date: Tue, 24 Oct 2023 06:46:26 -0700
Message-ID: <20231024134637.3120277-30-surenb@google.com>
Subject: [PATCH v2 29/39] mm: percpu: Introduce pcpuobj_ext
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

From: Kent Overstreet

Upcoming alloc tagging patches require a place to stash per-allocation
metadata. We already do this when memcg is enabled, so this patch
generalizes the obj_cgroup * vector in struct pcpu_chunk by creating a
pcpuobj_ext type, which we will be adding to in an upcoming patch -
similarly to the previous slabobj_ext patch.
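The shape of the generalization, pulled out of the diff below: the raw
obj_cgroup pointer vector becomes a vector of small extension structs,
so later patches can add per-allocation fields without introducing
another parallel array:

	/* one pcpuobj_ext per minimum-sized allocation unit in a chunk */
	struct pcpuobj_ext {
	#ifdef CONFIG_MEMCG_KMEM
		struct obj_cgroup *cgroup;	/* was: chunk->obj_cgroups[i] */
	#endif
		/* the next patch adds: union codetag_ref tag; */
	};

Accesses change from chunk->obj_cgroups[i] to chunk->obj_exts[i].cgroup,
with the index i = off >> PCPU_MIN_ALLOC_SHIFT computed exactly as
before.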
Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
Cc: Andrew Morton
Cc: Dennis Zhou
Cc: Tejun Heo
Cc: Christoph Lameter
Cc: linux-mm@kvack.org
---
 mm/percpu-internal.h | 19 +++++++++++++++++--
 mm/percpu.c          | 30 +++++++++++++++---------------
 2 files changed, 32 insertions(+), 17 deletions(-)

diff --git a/mm/percpu-internal.h b/mm/percpu-internal.h
index cdd0aa597a81..e62d582f4bf3 100644
--- a/mm/percpu-internal.h
+++ b/mm/percpu-internal.h
@@ -32,6 +32,16 @@ struct pcpu_block_md {
 	int			nr_bits;	/* total bits responsible for */
 };
 
+struct pcpuobj_ext {
+#ifdef CONFIG_MEMCG_KMEM
+	struct obj_cgroup	*cgroup;
+#endif
+};
+
+#ifdef CONFIG_MEMCG_KMEM
+#define NEED_PCPUOBJ_EXT
+#endif
+
 struct pcpu_chunk {
 #ifdef CONFIG_PERCPU_STATS
 	int			nr_alloc;	/* # of allocations */
@@ -64,8 +74,8 @@ struct pcpu_chunk {
 	int			end_offset;	/* additional area required to
 						   have the region end page
 						   aligned */
-#ifdef CONFIG_MEMCG_KMEM
-	struct obj_cgroup	**obj_cgroups;	/* vector of object cgroups */
+#ifdef NEED_PCPUOBJ_EXT
+	struct pcpuobj_ext	*obj_exts;	/* vector of object cgroups */
 #endif
 
 	int			nr_pages;	/* # of pages served by this chunk */
@@ -74,6 +84,11 @@ struct pcpu_chunk {
 	unsigned long		populated[];	/* populated bitmap */
 };
 
+static inline bool need_pcpuobj_ext(void)
+{
+	return !mem_cgroup_kmem_disabled();
+}
+
 extern spinlock_t pcpu_lock;
 
 extern struct list_head *pcpu_chunk_lists;
diff --git a/mm/percpu.c b/mm/percpu.c
index a7665de8485f..5a6202acffa3 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1392,9 +1392,9 @@ static struct pcpu_chunk * __init pcpu_alloc_first_chunk(unsigned long tmp_addr,
 		panic("%s: Failed to allocate %zu bytes\n", __func__,
 		      alloc_size);
 
-#ifdef CONFIG_MEMCG_KMEM
+#ifdef NEED_PCPUOBJ_EXT
 	/* first chunk is free to use */
-	chunk->obj_cgroups = NULL;
+	chunk->obj_exts = NULL;
 #endif
 	pcpu_init_md_blocks(chunk);
 
@@ -1463,12 +1463,12 @@ static struct pcpu_chunk *pcpu_alloc_chunk(gfp_t gfp)
 	if (!chunk->md_blocks)
 		goto md_blocks_fail;
 
-#ifdef CONFIG_MEMCG_KMEM
-	if (!mem_cgroup_kmem_disabled()) {
-		chunk->obj_cgroups =
+#ifdef NEED_PCPUOBJ_EXT
+	if (need_pcpuobj_ext()) {
+		chunk->obj_exts =
 			pcpu_mem_zalloc(pcpu_chunk_map_bits(chunk) *
-					sizeof(struct obj_cgroup *), gfp);
-		if (!chunk->obj_cgroups)
+					sizeof(struct pcpuobj_ext), gfp);
+		if (!chunk->obj_exts)
 			goto objcg_fail;
 	}
 #endif
@@ -1480,7 +1480,7 @@ static struct pcpu_chunk *pcpu_alloc_chunk(gfp_t gfp)
 
 	return chunk;
 
-#ifdef CONFIG_MEMCG_KMEM
+#ifdef NEED_PCPUOBJ_EXT
 objcg_fail:
 	pcpu_mem_free(chunk->md_blocks);
 #endif
@@ -1498,8 +1498,8 @@ static void pcpu_free_chunk(struct pcpu_chunk *chunk)
 {
 	if (!chunk)
 		return;
-#ifdef CONFIG_MEMCG_KMEM
-	pcpu_mem_free(chunk->obj_cgroups);
+#ifdef NEED_PCPUOBJ_EXT
+	pcpu_mem_free(chunk->obj_exts);
 #endif
 	pcpu_mem_free(chunk->md_blocks);
 	pcpu_mem_free(chunk->bound_map);
@@ -1648,8 +1648,8 @@ static void pcpu_memcg_post_alloc_hook(struct obj_cgroup *objcg,
 	if (!objcg)
 		return;
 
-	if (likely(chunk && chunk->obj_cgroups)) {
-		chunk->obj_cgroups[off >> PCPU_MIN_ALLOC_SHIFT] = objcg;
+	if (likely(chunk && chunk->obj_exts)) {
+		chunk->obj_exts[off >> PCPU_MIN_ALLOC_SHIFT].cgroup = objcg;
 
 		rcu_read_lock();
 		mod_memcg_state(obj_cgroup_memcg(objcg), MEMCG_PERCPU_B,
@@ -1665,13 +1665,13 @@ static void pcpu_memcg_free_hook(struct pcpu_chunk *chunk, int off, size_t size)
 {
 	struct obj_cgroup *objcg;
 
-	if (unlikely(!chunk->obj_cgroups))
+	if (unlikely(!chunk->obj_exts))
 		return;
 
-	objcg = chunk->obj_cgroups[off >> PCPU_MIN_ALLOC_SHIFT];
+	objcg = chunk->obj_exts[off >> PCPU_MIN_ALLOC_SHIFT].cgroup;
 	if (!objcg)
 		return;
-	chunk->obj_cgroups[off >> PCPU_MIN_ALLOC_SHIFT] = NULL;
+	chunk->obj_exts[off >> PCPU_MIN_ALLOC_SHIFT].cgroup = NULL;
 
 	obj_cgroup_uncharge(objcg, pcpu_obj_full_size(size));
 
-- 
2.42.0.758.gaed0368e0e-goog
Date: Tue, 24 Oct 2023 06:46:27 -0700
Message-ID: <20231024134637.3120277-31-surenb@google.com>
Subject: [PATCH v2 30/39] mm: percpu: Add codetag reference into pcpuobj_ext
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

From: Kent Overstreet

To store a codetag for every per-cpu allocation, a codetag reference is
embedded into pcpuobj_ext when CONFIG_MEM_ALLOC_PROFILING=y. Hooks to
use the newly introduced codetag are added.
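Concretely, the hooks added below find an allocation's metadata slot
from its offset within the chunk; a sketch of the lookup (assuming
PCPU_MIN_ALLOC_SHIFT of 2, i.e. 4-byte minimum allocation units, as in
current kernels):

	/* index of the pcpuobj_ext slot covering chunk offset 'off' */
	int idx = off >> PCPU_MIN_ALLOC_SHIFT;	/* e.g. off == 256 -> idx == 64 */
	union codetag_ref *ref = &chunk->obj_exts[idx].tag;

	alloc_tag_add(ref, current->alloc_tag, size);	/* allocation hook */
	alloc_tag_sub_noalloc(ref, size);		/* free hook */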
Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
---
 mm/percpu-internal.h | 11 +++++++++--
 mm/percpu.c          | 26 ++++++++++++++++++++++++++
 2 files changed, 35 insertions(+), 2 deletions(-)

diff --git a/mm/percpu-internal.h b/mm/percpu-internal.h
index e62d582f4bf3..7e42f0ca3b7b 100644
--- a/mm/percpu-internal.h
+++ b/mm/percpu-internal.h
@@ -36,9 +36,12 @@ struct pcpuobj_ext {
 #ifdef CONFIG_MEMCG_KMEM
 	struct obj_cgroup	*cgroup;
 #endif
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+	union codetag_ref	tag;
+#endif
 };
 
-#ifdef CONFIG_MEMCG_KMEM
+#if defined(CONFIG_MEMCG_KMEM) || defined(CONFIG_MEM_ALLOC_PROFILING)
 #define NEED_PCPUOBJ_EXT
 #endif
 
@@ -86,7 +89,11 @@ struct pcpu_chunk {
 
 static inline bool need_pcpuobj_ext(void)
 {
-	return !mem_cgroup_kmem_disabled();
+	if (IS_ENABLED(CONFIG_MEM_ALLOC_PROFILING))
+		return true;
+	if (!mem_cgroup_kmem_disabled())
+		return true;
+	return false;
 }
 
 extern spinlock_t pcpu_lock;
diff --git a/mm/percpu.c b/mm/percpu.c
index 5a6202acffa3..002ee5d38fd5 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1701,6 +1701,32 @@ static void pcpu_memcg_free_hook(struct pcpu_chunk *chunk, int off, size_t size)
 }
 #endif /* CONFIG_MEMCG_KMEM */
 
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+static void pcpu_alloc_tag_alloc_hook(struct pcpu_chunk *chunk, int off,
+				      size_t size)
+{
+	if (mem_alloc_profiling_enabled() && likely(chunk->obj_exts)) {
+		alloc_tag_add(&chunk->obj_exts[off >> PCPU_MIN_ALLOC_SHIFT].tag,
+			      current->alloc_tag, size);
+	}
+}
+
+static void pcpu_alloc_tag_free_hook(struct pcpu_chunk *chunk, int off, size_t size)
+{
+	if (mem_alloc_profiling_enabled() && likely(chunk->obj_exts))
+		alloc_tag_sub_noalloc(&chunk->obj_exts[off >> PCPU_MIN_ALLOC_SHIFT].tag, size);
+}
+#else
+static void pcpu_alloc_tag_alloc_hook(struct pcpu_chunk *chunk, int off,
+				      size_t size)
+{
+}
+
+static void pcpu_alloc_tag_free_hook(struct pcpu_chunk *chunk, int off, size_t size)
+{
+}
+#endif
+
 /**
  * pcpu_alloc - the percpu allocator
  * @size: size of area to allocate in bytes
-- 
2.42.0.758.gaed0368e0e-goog
Date: Tue, 24 Oct 2023 06:46:28 -0700
Message-ID: <20231024134637.3120277-32-surenb@google.com>
Subject: [PATCH v2 31/39] mm: percpu: enable per-cpu allocation tagging
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Redefine __alloc_percpu, __alloc_percpu_gfp and __alloc_reserved_percpu
to record allocations and deallocations done by these functions.

Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
---
 include/linux/alloc_tag.h | 15 +++++++++
 include/linux/percpu.h    | 23 +++++++++-----
 mm/percpu.c               | 64 +++++----------------------------------
 3 files changed, 38 insertions(+), 64 deletions(-)

diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h
index 6fa8a94d8bc1..3fe51e67e231 100644
--- a/include/linux/alloc_tag.h
+++ b/include/linux/alloc_tag.h
@@ -140,4 +140,19 @@ static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag,
 	_res;								\
 })
 
+/*
+ * workaround for a sparse bug: it complains about res_type_to_err() when
+ * typeof(_do_alloc) is a __percpu pointer, but gcc won't let us add a separate
+ * __percpu case to res_type_to_err():
+ */
+#define alloc_hooks_pcpu(_do_alloc)					\
+({									\
+	typeof(_do_alloc) _res;						\
+	DEFINE_ALLOC_TAG(_alloc_tag, _old);				\
+									\
+	_res = _do_alloc;						\
+	alloc_tag_restore(&_alloc_tag, _old);				\
+	_res;								\
+})
+
 #endif /* _LINUX_ALLOC_TAG_H */
diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index 68fac2e7cbe6..338c1ef9c93d 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_PERCPU_H
 #define __LINUX_PERCPU_H
 
+#include <linux/alloc_tag.h>
 #include <linux/mmdebug.h>
 #include <linux/preempt.h>
 #include <linux/smp.h>
@@ -9,6 +10,7 @@
 #include <linux/pfn.h>
 #include <linux/init.h>
 #include <linux/cleanup.h>
+#include <linux/sched.h>
 
 #include <asm/percpu.h>
 
@@ -121,7 +123,6 @@ extern int __init pcpu_page_first_chunk(size_t reserved_size,
 				pcpu_fc_cpu_to_node_fn_t cpu_to_nd_fn);
 #endif
 
-extern void __percpu *__alloc_reserved_percpu(size_t size, size_t align) __alloc_size(1);
 extern bool __is_kernel_percpu_address(unsigned long addr, unsigned long *can_addr);
 extern bool is_kernel_percpu_address(unsigned long addr);
 
@@ -129,13 +130,15 @@ extern bool is_kernel_percpu_address(unsigned long addr);
 extern void __init setup_per_cpu_areas(void);
 #endif
 
-extern void __percpu *__alloc_percpu_gfp(size_t size, size_t align, gfp_t gfp) __alloc_size(1);
-extern void __percpu *__alloc_percpu(size_t size, size_t align) __alloc_size(1);
-extern void free_percpu(void __percpu *__pdata);
+extern void __percpu *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
+					gfp_t gfp) __alloc_size(1);
 
-DEFINE_FREE(free_percpu, void __percpu *, free_percpu(_T))
-
-extern phys_addr_t per_cpu_ptr_to_phys(void *addr);
+#define __alloc_percpu_gfp(_size, _align, _gfp)				\
+	alloc_hooks_pcpu(pcpu_alloc_noprof(_size, _align, false, _gfp))
+#define __alloc_percpu(_size, _align)					\
+	alloc_hooks_pcpu(pcpu_alloc_noprof(_size, _align, false, GFP_KERNEL))
+#define __alloc_reserved_percpu(_size, _align)				\
+	alloc_hooks_pcpu(pcpu_alloc_noprof(_size, _align, true, GFP_KERNEL))
 
 #define alloc_percpu_gfp(type, gfp)					\
 	(typeof(type) __percpu *)__alloc_percpu_gfp(sizeof(type),	\
@@ -144,6 +147,12 @@ extern phys_addr_t per_cpu_ptr_to_phys(void *addr);
 	(typeof(type) __percpu *)__alloc_percpu(sizeof(type),		\
 						__alignof__(type))
 
+extern void free_percpu(void __percpu *__pdata);
+
+DEFINE_FREE(free_percpu, void __percpu *, free_percpu(_T))
+
+extern phys_addr_t per_cpu_ptr_to_phys(void *addr);
+
 extern unsigned long pcpu_nr_pages(void);
 
 #endif /* __LINUX_PERCPU_H */
diff --git a/mm/percpu.c b/mm/percpu.c
index 002ee5d38fd5..328a5b3c943b 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1728,7 +1728,7 @@ static void pcpu_alloc_tag_free_hook(struct pcpu_chunk *chunk, int off, size_t s
 
 /**
- * pcpu_alloc - the percpu allocator
+ * pcpu_alloc_noprof - the percpu allocator
  * @size: size of area to allocate in bytes
  * @align: alignment of area (max PAGE_SIZE)
  * @reserved: allocate from the reserved chunk if available
@@ -1742,7 +1742,7 @@ static void pcpu_alloc_tag_free_hook(struct pcpu_chunk *chunk, int off, size_t s
 * RETURNS:
 * Percpu pointer to the allocated area on success, NULL on failure.
 */
-static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved,
+void __percpu *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
 				 gfp_t gfp)
 {
 	gfp_t pcpu_gfp;
@@ -1909,6 +1909,8 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved,
 
 	pcpu_memcg_post_alloc_hook(objcg, chunk, off, size);
 
+	pcpu_alloc_tag_alloc_hook(chunk, off, size);
+
 	return ptr;
 
 fail_unlock:
@@ -1937,61 +1939,7 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved,
 
 	return NULL;
 }
-
-/**
- * __alloc_percpu_gfp - allocate dynamic percpu area
- * @size: size of area to allocate in bytes
- * @align: alignment of area (max PAGE_SIZE)
- * @gfp: allocation flags
- *
- * Allocate zero-filled percpu area of @size bytes aligned at @align. If
- * @gfp doesn't contain %GFP_KERNEL, the allocation doesn't block and can
- * be called from any context but is a lot more likely to fail. If @gfp
- * has __GFP_NOWARN then no warning will be triggered on invalid or failed
- * allocation requests.
- *
- * RETURNS:
- * Percpu pointer to the allocated area on success, NULL on failure.
- */
-void __percpu *__alloc_percpu_gfp(size_t size, size_t align, gfp_t gfp)
-{
-	return pcpu_alloc(size, align, false, gfp);
-}
-EXPORT_SYMBOL_GPL(__alloc_percpu_gfp);
-
-/**
- * __alloc_percpu - allocate dynamic percpu area
- * @size: size of area to allocate in bytes
- * @align: alignment of area (max PAGE_SIZE)
- *
- * Equivalent to __alloc_percpu_gfp(size, align, %GFP_KERNEL).
- */
-void __percpu *__alloc_percpu(size_t size, size_t align)
-{
-	return pcpu_alloc(size, align, false, GFP_KERNEL);
-}
-EXPORT_SYMBOL_GPL(__alloc_percpu);
-
-/**
- * __alloc_reserved_percpu - allocate reserved percpu area
- * @size: size of area to allocate in bytes
- * @align: alignment of area (max PAGE_SIZE)
- *
- * Allocate zero-filled percpu area of @size bytes aligned at @align
- * from reserved percpu area if arch has set it up; otherwise,
- * allocation is served from the same dynamic area. Might sleep.
- * Might trigger writeouts.
- *
- * CONTEXT:
- * Does GFP_KERNEL allocation.
- *
- * RETURNS:
- * Percpu pointer to the allocated area on success, NULL on failure.
- */
-void __percpu *__alloc_reserved_percpu(size_t size, size_t align)
-{
-	return pcpu_alloc(size, align, true, GFP_KERNEL);
-}
+EXPORT_SYMBOL_GPL(pcpu_alloc_noprof);
 
 /**
  * pcpu_balance_free - manage the amount of free chunks
@@ -2301,6 +2249,8 @@ void free_percpu(void __percpu *ptr)
 
 	size = pcpu_free_area(chunk, off);
 
+	pcpu_alloc_tag_free_hook(chunk, off, size);
+
 	pcpu_memcg_free_hook(chunk, off, size);
 
 	/*
-- 
2.42.0.758.gaed0368e0e-goog
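From a caller's point of view nothing changes after the patch above: the
type-safe wrappers keep their names and now expand through
alloc_hooks_pcpu(), so each call site is tagged individually. A hedged
usage sketch; the struct and function here are made up for illustration:

	#include <linux/percpu.h>

	struct demo_counters {
		u64 events;
		u64 bytes;
	};

	static struct demo_counters __percpu *demo_counters_alloc(void)
	{
		/*
		 * Expands to alloc_hooks_pcpu(pcpu_alloc_noprof(...)), so
		 * this allocation is attributed to this file:line in the
		 * profiling output rather than to mm/percpu.c.
		 */
		return alloc_percpu(struct demo_counters);
	}

free_percpu() drops the tag reference through pcpu_alloc_tag_free_hook(),
so the accounting stays balanced across the object's lifetime.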
Date: Tue, 24 Oct 2023 06:46:29 -0700
Message-ID: <20231024134637.3120277-33-surenb@google.com>
Subject: [PATCH v2 32/39] arm64: Fix circular header dependency
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

From: Kent Overstreet

Replace the linux/percpu.h include with asm/percpu.h to avoid a
circular dependency.

Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
---
 arch/arm64/include/asm/spectre.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/spectre.h b/arch/arm64/include/asm/spectre.h
index 9cc501450486..75e837753772 100644
--- a/arch/arm64/include/asm/spectre.h
+++ b/arch/arm64/include/asm/spectre.h
@@ -13,8 +13,8 @@
 #define __BP_HARDEN_HYP_VECS_SZ ((BP_HARDEN_EL2_SLOTS - 1) * SZ_2K)
 
 #ifndef __ASSEMBLY__
-
-#include <linux/percpu.h>
+#include <linux/smp.h>
+#include <asm/percpu.h>
 
 #include <asm/cpufeature.h>
 #include <asm/virt.h>
-- 
2.42.0.758.gaed0368e0e-goog
Date: Tue, 24 Oct 2023 06:46:30 -0700
Message-ID: <20231024134637.3120277-34-surenb@google.com>
Subject: [PATCH v2 33/39] mm: vmalloc: Enable memory allocation profiling
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

From: Kent Overstreet

This wraps all external vmalloc allocation functions with the
alloc_hooks() wrapper, and switches internal allocations to _noprof
variants where appropriate, for the new memory allocation profiling
feature.
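Two details of the diff below are worth noting. First, the conversion
itself is the series' usual rename-plus-macro pattern, e.g. for
vmalloc():

	extern void *vmalloc_noprof(unsigned long size) __alloc_size(1);
	#define vmalloc(...)	alloc_hooks(vmalloc_noprof(__VA_ARGS__))

Second, because vmalloc is now a function-like macro, code that refers
to the symbol without calling it no longer compiles against the old
name; that is why the kallsyms selftest's ITEM_FUNC(vmalloc) entry and
the atomisp dev_dbg() reference are switched to vmalloc_noprof.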
Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
---
 drivers/staging/media/atomisp/pci/hmm/hmm.c |  2 +-
 include/linux/vmalloc.h                     | 60 ++++++++++----
 kernel/kallsyms_selftest.c                  |  2 +-
 mm/util.c                                   | 24 +++---
 mm/vmalloc.c                                | 88 ++++++++++-----------
 5 files changed, 103 insertions(+), 73 deletions(-)

diff --git a/drivers/staging/media/atomisp/pci/hmm/hmm.c b/drivers/staging/media/atomisp/pci/hmm/hmm.c
index bb12644fd033..3e2899ad8517 100644
--- a/drivers/staging/media/atomisp/pci/hmm/hmm.c
+++ b/drivers/staging/media/atomisp/pci/hmm/hmm.c
@@ -205,7 +205,7 @@ static ia_css_ptr __hmm_alloc(size_t bytes, enum hmm_bo_type type,
 	}
 
 	dev_dbg(atomisp_dev, "pages: 0x%08x (%zu bytes), type: %d, vmalloc %p\n",
-		bo->start, bytes, type, vmalloc);
+		bo->start, bytes, type, vmalloc_noprof);
 
 	return bo->start;
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index c720be70c8dd..106d78e75606 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -2,6 +2,8 @@
 #ifndef _LINUX_VMALLOC_H
 #define _LINUX_VMALLOC_H
 
+#include <linux/alloc_tag.h>
+#include <linux/sched.h>
 #include <linux/spinlock.h>
 #include <linux/init.h>
 #include <linux/list.h>
@@ -137,26 +139,54 @@ extern unsigned long vmalloc_nr_pages(void);
 static inline unsigned long vmalloc_nr_pages(void) { return 0; }
 #endif
 
-extern void *vmalloc(unsigned long size) __alloc_size(1);
-extern void *vzalloc(unsigned long size) __alloc_size(1);
-extern void *vmalloc_user(unsigned long size) __alloc_size(1);
-extern void *vmalloc_node(unsigned long size, int node) __alloc_size(1);
-extern void *vzalloc_node(unsigned long size, int node) __alloc_size(1);
-extern void *vmalloc_32(unsigned long size) __alloc_size(1);
-extern void *vmalloc_32_user(unsigned long size) __alloc_size(1);
-extern void *__vmalloc(unsigned long size, gfp_t gfp_mask) __alloc_size(1);
-extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
+extern void *vmalloc_noprof(unsigned long size) __alloc_size(1);
+#define vmalloc(...)		alloc_hooks(vmalloc_noprof(__VA_ARGS__))
+
+extern void *vzalloc_noprof(unsigned long size) __alloc_size(1);
+#define vzalloc(...)		alloc_hooks(vzalloc_noprof(__VA_ARGS__))
+
+extern void *vmalloc_user_noprof(unsigned long size) __alloc_size(1);
+#define vmalloc_user(...)	alloc_hooks(vmalloc_user_noprof(__VA_ARGS__))
+
+extern void *vmalloc_node_noprof(unsigned long size, int node) __alloc_size(1);
+#define vmalloc_node(...)	alloc_hooks(vmalloc_node_noprof(__VA_ARGS__))
+
+extern void *vzalloc_node_noprof(unsigned long size, int node) __alloc_size(1);
+#define vzalloc_node(...)	alloc_hooks(vzalloc_node_noprof(__VA_ARGS__))
+
+extern void *vmalloc_32_noprof(unsigned long size) __alloc_size(1);
+#define vmalloc_32(...)		alloc_hooks(vmalloc_32_noprof(__VA_ARGS__))
+
+extern void *vmalloc_32_user_noprof(unsigned long size) __alloc_size(1);
+#define vmalloc_32_user(...)	alloc_hooks(vmalloc_32_user_noprof(__VA_ARGS__))
+
+extern void *__vmalloc_noprof(unsigned long size, gfp_t gfp_mask) __alloc_size(1);
+#define __vmalloc(...)		alloc_hooks(__vmalloc_noprof(__VA_ARGS__))
+
+extern void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
 			pgprot_t prot, unsigned long vm_flags, int node,
 			const void *caller) __alloc_size(1);
-void *__vmalloc_node(unsigned long size, unsigned long align, gfp_t gfp_mask,
+#define __vmalloc_node_range(...)	alloc_hooks(__vmalloc_node_range_noprof(__VA_ARGS__))
+
+void *__vmalloc_node_noprof(unsigned long size, unsigned long align, gfp_t gfp_mask,
 		int node, const void *caller) __alloc_size(1);
-void *vmalloc_huge(unsigned long size, gfp_t gfp_mask) __alloc_size(1);
+#define __vmalloc_node(...)	alloc_hooks(__vmalloc_node_noprof(__VA_ARGS__))
+
+void *vmalloc_huge_noprof(unsigned long size, gfp_t gfp_mask) __alloc_size(1);
+#define vmalloc_huge(...)	alloc_hooks(vmalloc_huge_noprof(__VA_ARGS__))
+
+extern void *__vmalloc_array_noprof(size_t n, size_t size, gfp_t flags) __alloc_size(1, 2);
+#define __vmalloc_array(...)	alloc_hooks(__vmalloc_array_noprof(__VA_ARGS__))
+
+extern void *vmalloc_array_noprof(size_t n, size_t size) __alloc_size(1, 2);
+#define vmalloc_array(...)	alloc_hooks(vmalloc_array_noprof(__VA_ARGS__))
+
+extern void *__vcalloc_noprof(size_t n, size_t size, gfp_t flags) __alloc_size(1, 2);
+#define __vcalloc(...)		alloc_hooks(__vcalloc_noprof(__VA_ARGS__))
 
-extern void *__vmalloc_array(size_t n, size_t size, gfp_t flags) __alloc_size(1, 2);
-extern void *vmalloc_array(size_t n, size_t size) __alloc_size(1, 2);
-extern void *__vcalloc(size_t n, size_t size, gfp_t flags) __alloc_size(1, 2);
-extern void *vcalloc(size_t n, size_t size) __alloc_size(1, 2);
+extern void *vcalloc_noprof(size_t n, size_t size) __alloc_size(1, 2);
+#define vcalloc(...)		alloc_hooks(vcalloc_noprof(__VA_ARGS__))
 
 extern void vfree(const void *addr);
 extern void vfree_atomic(const void *addr);
diff --git a/kernel/kallsyms_selftest.c b/kernel/kallsyms_selftest.c
index b4cac76ea5e9..3ea9be364e32 100644
--- a/kernel/kallsyms_selftest.c
+++ b/kernel/kallsyms_selftest.c
@@ -82,7 +82,7 @@ static struct test_item test_items[] = {
 	ITEM_FUNC(kallsyms_test_func_static),
 	ITEM_FUNC(kallsyms_test_func),
 	ITEM_FUNC(kallsyms_test_func_weak),
-	ITEM_FUNC(vmalloc),
+	ITEM_FUNC(vmalloc_noprof),
 	ITEM_FUNC(vfree),
 #ifdef CONFIG_KALLSYMS_ALL
 	ITEM_DATA(kallsyms_test_var_bss_static),
diff --git a/mm/util.c b/mm/util.c
index 27ed6a5ac31a..7e6043ece040 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -629,7 +629,7 @@ void *kvmalloc_node_noprof(size_t size, gfp_t flags, int node)
	 * about the resulting pointer, and cannot play
	 * protection games.
	 */
-	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
+	return __vmalloc_node_range_noprof(size, 1, VMALLOC_START, VMALLOC_END,
			flags, PAGE_KERNEL, VM_ALLOW_HUGE_VMAP,
			node, __builtin_return_address(0));
 }
@@ -688,12 +688,12 @@ void *kvrealloc_noprof(const void *p, size_t oldsize, size_t newsize, gfp_t flag
 EXPORT_SYMBOL(kvrealloc_noprof);
 
 /**
- * __vmalloc_array - allocate memory for a virtually contiguous array.
+ * __vmalloc_array_noprof - allocate memory for a virtually contiguous array.
  * @n: number of elements.
  * @size: element size.
  * @flags: the type of memory to allocate (see kmalloc).
  */
-void *__vmalloc_array(size_t n, size_t size, gfp_t flags)
+void *__vmalloc_array_noprof(size_t n, size_t size, gfp_t flags)
 {
 	size_t bytes;
 
@@ -701,18 +701,18 @@ void *__vmalloc_array(size_t n, size_t size, gfp_t flags)
 		return NULL;
 	return __vmalloc(bytes, flags);
 }
-EXPORT_SYMBOL(__vmalloc_array);
+EXPORT_SYMBOL(__vmalloc_array_noprof);
 
 /**
- * vmalloc_array - allocate memory for a virtually contiguous array.
+ * vmalloc_array_noprof - allocate memory for a virtually contiguous array.
  * @n: number of elements.
  * @size: element size.
 */
-void *vmalloc_array(size_t n, size_t size)
+void *vmalloc_array_noprof(size_t n, size_t size)
 {
 	return __vmalloc_array(n, size, GFP_KERNEL);
 }
-EXPORT_SYMBOL(vmalloc_array);
+EXPORT_SYMBOL(vmalloc_array_noprof);
 
 /**
 * __vcalloc - allocate and zero memory for a virtually contiguous array.
@@ -720,22 +720,22 @@ EXPORT_SYMBOL(vmalloc_array);
 * @size: element size.
 * @flags: the type of memory to allocate (see kmalloc).
 */
-void *__vcalloc(size_t n, size_t size, gfp_t flags)
+void *__vcalloc_noprof(size_t n, size_t size, gfp_t flags)
 {
 	return __vmalloc_array(n, size, flags | __GFP_ZERO);
 }
-EXPORT_SYMBOL(__vcalloc);
+EXPORT_SYMBOL(__vcalloc_noprof);
 
 /**
- * vcalloc - allocate and zero memory for a virtually contiguous array.
+ * vcalloc_noprof - allocate and zero memory for a virtually contiguous array.
 * @n: number of elements.
 * @size: element size.
 */
-void *vcalloc(size_t n, size_t size)
+void *vcalloc_noprof(size_t n, size_t size)
 {
 	return __vmalloc_array(n, size, GFP_KERNEL | __GFP_ZERO);
 }
-EXPORT_SYMBOL(vcalloc);
+EXPORT_SYMBOL(vcalloc_noprof);
 
 struct anon_vma *folio_anon_vma(struct folio *folio)
 {
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index a3fedb3ee0db..62fd33094bfd 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3025,12 +3025,12 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
			 * but mempolicy wants to alloc memory by interleaving.
			 */
			if (IS_ENABLED(CONFIG_NUMA) && nid == NUMA_NO_NODE)
-				nr = alloc_pages_bulk_array_mempolicy(bulk_gfp,
+				nr = alloc_pages_bulk_array_mempolicy_noprof(bulk_gfp,
							nr_pages_request,
							pages + nr_allocated);
 
			else
-				nr = alloc_pages_bulk_array_node(bulk_gfp, nid,
+				nr = alloc_pages_bulk_array_node_noprof(bulk_gfp, nid,
							nr_pages_request,
							pages + nr_allocated);
 
@@ -3060,9 +3060,9 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
			break;
 
		if (nid == NUMA_NO_NODE)
-			page = alloc_pages(alloc_gfp, order);
+			page = alloc_pages_noprof(alloc_gfp, order);
		else
-			page = alloc_pages_node(nid, alloc_gfp, order);
+			page = alloc_pages_node_noprof(nid, alloc_gfp, order);
		if (unlikely(!page)) {
			if (!nofail)
				break;
@@ -3119,10 +3119,10 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 
	/* Please note that the recursion is strictly bounded.
*/ if (array_size > PAGE_SIZE) { - area->pages =3D __vmalloc_node(array_size, 1, nested_gfp, node, + area->pages =3D __vmalloc_node_noprof(array_size, 1, nested_gfp, node, area->caller); } else { - area->pages =3D kmalloc_node(array_size, nested_gfp, node); + area->pages =3D kmalloc_node_noprof(array_size, nested_gfp, node); } =20 if (!area->pages) { @@ -3205,7 +3205,7 @@ static void *__vmalloc_area_node(struct vm_struct *ar= ea, gfp_t gfp_mask, } =20 /** - * __vmalloc_node_range - allocate virtually contiguous memory + * __vmalloc_node_range_noprof - allocate virtually contiguous memory * @size: allocation size * @align: desired alignment * @start: vm area range start @@ -3232,7 +3232,7 @@ static void *__vmalloc_area_node(struct vm_struct *ar= ea, gfp_t gfp_mask, * * Return: the address of the area or %NULL on failure */ -void *__vmalloc_node_range(unsigned long size, unsigned long align, +void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align, unsigned long start, unsigned long end, gfp_t gfp_mask, pgprot_t prot, unsigned long vm_flags, int node, const void *caller) @@ -3361,7 +3361,7 @@ void *__vmalloc_node_range(unsigned long size, unsign= ed long align, } =20 /** - * __vmalloc_node - allocate virtually contiguous memory + * __vmalloc_node_noprof - allocate virtually contiguous memory * @size: allocation size * @align: desired alignment * @gfp_mask: flags for the page level allocator @@ -3379,10 +3379,10 @@ void *__vmalloc_node_range(unsigned long size, unsi= gned long align, * * Return: pointer to the allocated memory or %NULL on error */ -void *__vmalloc_node(unsigned long size, unsigned long align, +void *__vmalloc_node_noprof(unsigned long size, unsigned long align, gfp_t gfp_mask, int node, const void *caller) { - return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END, + return __vmalloc_node_range_noprof(size, align, VMALLOC_START, VMALLOC_EN= D, gfp_mask, PAGE_KERNEL, 0, node, caller); } /* @@ -3391,15 +3391,15 @@ void *__vmalloc_node(unsigned long size, unsigned l= ong align, * than that. 
*/ #ifdef CONFIG_TEST_VMALLOC_MODULE -EXPORT_SYMBOL_GPL(__vmalloc_node); +EXPORT_SYMBOL_GPL(__vmalloc_node_noprof); #endif =20 -void *__vmalloc(unsigned long size, gfp_t gfp_mask) +void *__vmalloc_noprof(unsigned long size, gfp_t gfp_mask) { - return __vmalloc_node(size, 1, gfp_mask, NUMA_NO_NODE, + return __vmalloc_node_noprof(size, 1, gfp_mask, NUMA_NO_NODE, __builtin_return_address(0)); } -EXPORT_SYMBOL(__vmalloc); +EXPORT_SYMBOL(__vmalloc_noprof); =20 /** * vmalloc - allocate virtually contiguous memory @@ -3413,12 +3413,12 @@ EXPORT_SYMBOL(__vmalloc); * * Return: pointer to the allocated memory or %NULL on error */ -void *vmalloc(unsigned long size) +void *vmalloc_noprof(unsigned long size) { - return __vmalloc_node(size, 1, GFP_KERNEL, NUMA_NO_NODE, + return __vmalloc_node_noprof(size, 1, GFP_KERNEL, NUMA_NO_NODE, __builtin_return_address(0)); } -EXPORT_SYMBOL(vmalloc); +EXPORT_SYMBOL(vmalloc_noprof); =20 /** * vmalloc_huge - allocate virtually contiguous memory, allow huge pages @@ -3432,16 +3432,16 @@ EXPORT_SYMBOL(vmalloc); * * Return: pointer to the allocated memory or %NULL on error */ -void *vmalloc_huge(unsigned long size, gfp_t gfp_mask) +void *vmalloc_huge_noprof(unsigned long size, gfp_t gfp_mask) { - return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END, + return __vmalloc_node_range_noprof(size, 1, VMALLOC_START, VMALLOC_END, gfp_mask, PAGE_KERNEL, VM_ALLOW_HUGE_VMAP, NUMA_NO_NODE, __builtin_return_address(0)); } -EXPORT_SYMBOL_GPL(vmalloc_huge); +EXPORT_SYMBOL_GPL(vmalloc_huge_noprof); =20 /** - * vzalloc - allocate virtually contiguous memory with zero fill + * vzalloc_noprof - allocate virtually contiguous memory with zero fill * @size: allocation size * * Allocate enough pages to cover @size from the page level @@ -3453,12 +3453,12 @@ EXPORT_SYMBOL_GPL(vmalloc_huge); * * Return: pointer to the allocated memory or %NULL on error */ -void *vzalloc(unsigned long size) +void *vzalloc_noprof(unsigned long size) { - return __vmalloc_node(size, 1, GFP_KERNEL | __GFP_ZERO, NUMA_NO_NODE, + return __vmalloc_node_noprof(size, 1, GFP_KERNEL | __GFP_ZERO, NUMA_NO_NO= DE, __builtin_return_address(0)); } -EXPORT_SYMBOL(vzalloc); +EXPORT_SYMBOL(vzalloc_noprof); =20 /** * vmalloc_user - allocate zeroed virtually contiguous memory for userspace @@ -3469,17 +3469,17 @@ EXPORT_SYMBOL(vzalloc); * * Return: pointer to the allocated memory or %NULL on error */ -void *vmalloc_user(unsigned long size) +void *vmalloc_user_noprof(unsigned long size) { - return __vmalloc_node_range(size, SHMLBA, VMALLOC_START, VMALLOC_END, + return __vmalloc_node_range_noprof(size, SHMLBA, VMALLOC_START, VMALLOC_= END, GFP_KERNEL | __GFP_ZERO, PAGE_KERNEL, VM_USERMAP, NUMA_NO_NODE, __builtin_return_address(0)); } -EXPORT_SYMBOL(vmalloc_user); +EXPORT_SYMBOL(vmalloc_user_noprof); =20 /** - * vmalloc_node - allocate memory on a specific node + * vmalloc_node_noprof - allocate memory on a specific node * @size: allocation size * @node: numa node * @@ -3491,15 +3491,15 @@ EXPORT_SYMBOL(vmalloc_user); * * Return: pointer to the allocated memory or %NULL on error */ -void *vmalloc_node(unsigned long size, int node) +void *vmalloc_node_noprof(unsigned long size, int node) { - return __vmalloc_node(size, 1, GFP_KERNEL, node, + return __vmalloc_node_noprof(size, 1, GFP_KERNEL, node, __builtin_return_address(0)); } -EXPORT_SYMBOL(vmalloc_node); +EXPORT_SYMBOL(vmalloc_node_noprof); =20 /** - * vzalloc_node - allocate memory on a specific node with zero fill + * vzalloc_node_noprof - allocate memory on a specific 
node with zero fill * @size: allocation size * @node: numa node * @@ -3509,12 +3509,12 @@ EXPORT_SYMBOL(vmalloc_node); * * Return: pointer to the allocated memory or %NULL on error */ -void *vzalloc_node(unsigned long size, int node) +void *vzalloc_node_noprof(unsigned long size, int node) { - return __vmalloc_node(size, 1, GFP_KERNEL | __GFP_ZERO, node, + return __vmalloc_node_noprof(size, 1, GFP_KERNEL | __GFP_ZERO, node, __builtin_return_address(0)); } -EXPORT_SYMBOL(vzalloc_node); +EXPORT_SYMBOL(vzalloc_node_noprof); =20 #if defined(CONFIG_64BIT) && defined(CONFIG_ZONE_DMA32) #define GFP_VMALLOC32 (GFP_DMA32 | GFP_KERNEL) @@ -3529,7 +3529,7 @@ EXPORT_SYMBOL(vzalloc_node); #endif =20 /** - * vmalloc_32 - allocate virtually contiguous memory (32bit addressable) + * vmalloc_32_noprof - allocate virtually contiguous memory (32bit address= able) * @size: allocation size * * Allocate enough 32bit PA addressable pages to cover @size from the @@ -3537,15 +3537,15 @@ EXPORT_SYMBOL(vzalloc_node); * * Return: pointer to the allocated memory or %NULL on error */ -void *vmalloc_32(unsigned long size) +void *vmalloc_32_noprof(unsigned long size) { - return __vmalloc_node(size, 1, GFP_VMALLOC32, NUMA_NO_NODE, + return __vmalloc_node_noprof(size, 1, GFP_VMALLOC32, NUMA_NO_NODE, __builtin_return_address(0)); } -EXPORT_SYMBOL(vmalloc_32); +EXPORT_SYMBOL(vmalloc_32_noprof); =20 /** - * vmalloc_32_user - allocate zeroed virtually contiguous 32bit memory + * vmalloc_32_user_noprof - allocate zeroed virtually contiguous 32bit mem= ory * @size: allocation size * * The resulting memory area is 32bit addressable and zeroed so it can be @@ -3553,14 +3553,14 @@ EXPORT_SYMBOL(vmalloc_32); * * Return: pointer to the allocated memory or %NULL on error */ -void *vmalloc_32_user(unsigned long size) +void *vmalloc_32_user_noprof(unsigned long size) { - return __vmalloc_node_range(size, SHMLBA, VMALLOC_START, VMALLOC_END, + return __vmalloc_node_range_noprof(size, SHMLBA, VMALLOC_START, VMALLOC_= END, GFP_VMALLOC32 | __GFP_ZERO, PAGE_KERNEL, VM_USERMAP, NUMA_NO_NODE, __builtin_return_address(0)); } -EXPORT_SYMBOL(vmalloc_32_user); +EXPORT_SYMBOL(vmalloc_32_user_noprof); =20 /* * Atomically zero bytes in the iterator. 
--=20 2.42.0.758.gaed0368e0e-goog From nobody Thu Jan 1 07:45:32 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 66D25C00A8F for ; Tue, 24 Oct 2023 13:51:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343528AbjJXNvQ (ORCPT ); Tue, 24 Oct 2023 09:51:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50136 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234842AbjJXNuP (ORCPT ); Tue, 24 Oct 2023 09:50:15 -0400 Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com [IPv6:2607:f8b0:4864:20::114a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3E79B2691 for ; Tue, 24 Oct 2023 06:47:58 -0700 (PDT) Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-5a7af53bde4so60190067b3.0 for ; Tue, 24 Oct 2023 06:47:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1698155277; x=1698760077; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=YqQmkpquk/zEN/BrXUtfRL7OAs/mhY9q/VQlYA3NCSg=; b=PC+YSAAB05pP8/DGW01idGH1Sx+pmaGfrQBVOAsb9RDuNajRuxrICqon1lOUz95r0Z 0bhCjYlXi8XxTKH5HBQXC5FIV6f7eAnescJlmzESVtYMSpddQZn4sdhYwxSblsTfRJ9g s2YOPi4PayceyUZiVRmRxE6eEB5w6ddHzxn9GT42kFWO5DRAy+o9EKcqFPHF8aFnuhS6 ct14v1THJgKpZ9Sf0TkLE0I2dXATAW8amhXbBD/HAbZHsAjn6IMC9HQWM+1cryjWufeV vqh6eBGMXjgFdA2W+0P9Pxq8dE3n0vZGGtolZv2DzT/NzLHbTMPPFVsIf1cOgMCj7+T5 TeiQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1698155277; x=1698760077; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=YqQmkpquk/zEN/BrXUtfRL7OAs/mhY9q/VQlYA3NCSg=; b=wM9qVx50kQ8AWZ4f1yI/hrpbZH6OZ4ZosyBZ5UTdfv8ZpyDX9tgQhP58nIBcXGT+Lv COjhwTJ0d0u+AFg+i4hyKX2+oDk5kQoVy9GAbfICA3LUZxph6CWxfQXddSmS1ux7O2Y9 BfmAMQd1JhQ0D2Ox9lz5Nptcj9o9wst/es/P0mdQmM3mmdLYf/lkwzwZhzPu6ICuF4ah aNqSmI02ityXI6clpGbZpMof20NsUaJKU5Bv/11nxWeHiSI0XAYTTF3sNVCBuSGtCHft B2CNdnmkQ0zXLkqk/VbtMpVjfPA4eq10JJCXarkpzHqj19AMI15KSW/WAx03ACzLdaIY 6w/A== X-Gm-Message-State: AOJu0YzIqVTNf2w3XXD57fF1mcUQWh9uTDVPEJgB4YU507Ti/sh9riz6 ozdgYzpCiRhLiRzAcW52dZX084I8NeA= X-Google-Smtp-Source: AGHT+IH6wvOVzrOaaGyY1/e57eYHLfTZ/5bwoybFh+Q+IMjNK5yyYqtlQtnc2beg11hZB/FoRq4QkUDgnBs= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:45ba:3318:d7a5:336a]) (user=surenb job=sendgmr) by 2002:a81:4853:0:b0:5a8:3f07:ddd6 with SMTP id v80-20020a814853000000b005a83f07ddd6mr266393ywa.6.1698155277184; Tue, 24 Oct 2023 06:47:57 -0700 (PDT) Date: Tue, 24 Oct 2023 06:46:31 -0700 In-Reply-To: <20231024134637.3120277-1-surenb@google.com> Mime-Version: 1.0 References: <20231024134637.3120277-1-surenb@google.com> X-Mailer: git-send-email 2.42.0.758.gaed0368e0e-goog Message-ID: <20231024134637.3120277-35-surenb@google.com> Subject: [PATCH v2 34/39] rhashtable: Plumb through alloc tag From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, 
catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, vvvvvv@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Kent Overstreet This gives better memory allocation profiling results; rhashtable allocations will be accounted to the code that initialized the rhashtable. Signed-off-by: Kent Overstreet Signed-off-by: Suren Baghdasaryan --- include/linux/rhashtable-types.h | 11 +++++-- lib/rhashtable.c | 52 +++++++++++++++++++++++++------- 2 files changed, 50 insertions(+), 13 deletions(-) diff --git a/include/linux/rhashtable-types.h b/include/linux/rhashtable-ty= pes.h index 57467cbf4c5b..aac2984c2ef0 100644 --- a/include/linux/rhashtable-types.h +++ b/include/linux/rhashtable-types.h @@ -9,6 +9,7 @@ #ifndef _LINUX_RHASHTABLE_TYPES_H #define _LINUX_RHASHTABLE_TYPES_H =20 +#include #include #include #include @@ -88,6 +89,9 @@ struct rhashtable { struct mutex mutex; spinlock_t lock; atomic_t nelems; +#ifdef CONFIG_MEM_ALLOC_PROFILING + struct alloc_tag *alloc_tag; +#endif }; =20 /** @@ -127,9 +131,12 @@ struct rhashtable_iter { bool end_of_table; }; =20 -int rhashtable_init(struct rhashtable *ht, +int rhashtable_init_noprof(struct rhashtable *ht, const struct rhashtable_params *params); -int rhltable_init(struct rhltable *hlt, +#define rhashtable_init(...) alloc_hooks(rhashtable_init_noprof(__VA_ARGS_= _)) + +int rhltable_init_noprof(struct rhltable *hlt, const struct rhashtable_params *params); +#define rhltable_init(...) 
alloc_hooks(rhltable_init_noprof(__VA_ARGS__)) =20 #endif /* _LINUX_RHASHTABLE_TYPES_H */ diff --git a/lib/rhashtable.c b/lib/rhashtable.c index 6ae2ba8e06a2..b62116f332b8 100644 --- a/lib/rhashtable.c +++ b/lib/rhashtable.c @@ -63,6 +63,27 @@ EXPORT_SYMBOL_GPL(lockdep_rht_bucket_is_held); #define ASSERT_RHT_MUTEX(HT) #endif =20 +#ifdef CONFIG_MEM_ALLOC_PROFILING +static inline void rhashtable_alloc_tag_init(struct rhashtable *ht) +{ + ht->alloc_tag =3D current->alloc_tag; +} + +static inline struct alloc_tag *rhashtable_alloc_tag_save(struct rhashtabl= e *ht) +{ + return alloc_tag_save(ht->alloc_tag); +} + +static inline void rhashtable_alloc_tag_restore(struct rhashtable *ht, str= uct alloc_tag *old) +{ + alloc_tag_restore(ht->alloc_tag, old); +} +#else +#define rhashtable_alloc_tag_init(ht) +static inline struct alloc_tag *rhashtable_alloc_tag_save(struct rhashtabl= e *ht) { return NULL; } +#define rhashtable_alloc_tag_restore(ht, old) +#endif + static inline union nested_table *nested_table_top( const struct bucket_table *tbl) { @@ -130,7 +151,7 @@ static union nested_table *nested_table_alloc(struct rh= ashtable *ht, if (ntbl) return ntbl; =20 - ntbl =3D kzalloc(PAGE_SIZE, GFP_ATOMIC); + ntbl =3D kmalloc_noprof(PAGE_SIZE, GFP_ATOMIC|__GFP_ZERO); =20 if (ntbl && leaf) { for (i =3D 0; i < PAGE_SIZE / sizeof(ntbl[0]); i++) @@ -157,7 +178,7 @@ static struct bucket_table *nested_bucket_table_alloc(s= truct rhashtable *ht, =20 size =3D sizeof(*tbl) + sizeof(tbl->buckets[0]); =20 - tbl =3D kzalloc(size, gfp); + tbl =3D kmalloc_noprof(size, gfp|__GFP_ZERO); if (!tbl) return NULL; =20 @@ -180,8 +201,10 @@ static struct bucket_table *bucket_table_alloc(struct = rhashtable *ht, size_t size; int i; static struct lock_class_key __key; + struct alloc_tag * __maybe_unused old =3D rhashtable_alloc_tag_save(ht); =20 - tbl =3D kvzalloc(struct_size(tbl, buckets, nbuckets), gfp); + tbl =3D kvmalloc_node_noprof(struct_size(tbl, buckets, nbuckets), + gfp|__GFP_ZERO, NUMA_NO_NODE); =20 size =3D nbuckets; =20 @@ -190,6 +213,8 @@ static struct bucket_table *bucket_table_alloc(struct r= hashtable *ht, nbuckets =3D 0; } =20 + rhashtable_alloc_tag_restore(ht, old); + if (tbl =3D=3D NULL) return NULL; =20 @@ -975,7 +1000,7 @@ static u32 rhashtable_jhash2(const void *key, u32 leng= th, u32 seed) } =20 /** - * rhashtable_init - initialize a new hash table + * rhashtable_init_noprof - initialize a new hash table * @ht: hash table to be initialized * @params: configuration parameters * @@ -1016,7 +1041,7 @@ static u32 rhashtable_jhash2(const void *key, u32 len= gth, u32 seed) * .obj_hashfn =3D my_hash_fn, * }; */ -int rhashtable_init(struct rhashtable *ht, +int rhashtable_init_noprof(struct rhashtable *ht, const struct rhashtable_params *params) { struct bucket_table *tbl; @@ -1031,6 +1056,8 @@ int rhashtable_init(struct rhashtable *ht, spin_lock_init(&ht->lock); memcpy(&ht->p, params, sizeof(*params)); =20 + rhashtable_alloc_tag_init(ht); + if (params->min_size) ht->p.min_size =3D roundup_pow_of_two(params->min_size); =20 @@ -1076,26 +1103,26 @@ int rhashtable_init(struct rhashtable *ht, =20 return 0; } -EXPORT_SYMBOL_GPL(rhashtable_init); +EXPORT_SYMBOL_GPL(rhashtable_init_noprof); =20 /** - * rhltable_init - initialize a new hash list table + * rhltable_init_noprof - initialize a new hash list table * @hlt: hash list table to be initialized * @params: configuration parameters * * Initializes a new hash list table. * - * See documentation for rhashtable_init. + * See documentation for rhashtable_init_noprof. 
*/ -int rhltable_init(struct rhltable *hlt, const struct rhashtable_params *pa= rams) +int rhltable_init_noprof(struct rhltable *hlt, const struct rhashtable_par= ams *params) { int err; =20 - err =3D rhashtable_init(&hlt->ht, params); + err =3D rhashtable_init_noprof(&hlt->ht, params); hlt->ht.rhlist =3D true; return err; } -EXPORT_SYMBOL_GPL(rhltable_init); +EXPORT_SYMBOL_GPL(rhltable_init_noprof); =20 static void rhashtable_free_one(struct rhashtable *ht, struct rhash_head *= obj, void (*free_fn)(void *ptr, void *arg), @@ -1222,6 +1249,7 @@ struct rhash_lock_head __rcu **rht_bucket_nested_inse= rt( unsigned int index =3D hash & ((1 << tbl->nest) - 1); unsigned int size =3D tbl->size >> tbl->nest; union nested_table *ntbl; + struct alloc_tag * __maybe_unused old =3D rhashtable_alloc_tag_save(ht); =20 ntbl =3D nested_table_top(tbl); hash >>=3D tbl->nest; @@ -1236,6 +1264,8 @@ struct rhash_lock_head __rcu **rht_bucket_nested_inse= rt( size <=3D (1 << shift)); } =20 + rhashtable_alloc_tag_restore(ht, old); + if (!ntbl) return NULL; =20 --=20 2.42.0.758.gaed0368e0e-goog From nobody Thu Jan 1 07:45:32 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AF55EC07545 for ; Tue, 24 Oct 2023 13:51:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343598AbjJXNv3 (ORCPT ); Tue, 24 Oct 2023 09:51:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58220 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234846AbjJXNuT (ORCPT ); Tue, 24 Oct 2023 09:50:19 -0400 Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com [IPv6:2607:f8b0:4864:20::114a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 53576269A for ; Tue, 24 Oct 2023 06:48:00 -0700 (PDT) Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-5a7af53bde4so60190527b3.0 for ; Tue, 24 Oct 2023 06:48:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1698155279; x=1698760079; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=Od9SIx6nrlV00gepzGuthWnU/KWY+8bGDr+wczV24Vo=; b=E6kpLefjgKPo0dphH2vyKJTPSa0dqO9i9a5GanO+53ZwZB/w4bJdOOr7fBG2Nirf7s pMiP8Z758l+ZTWAdCS3BqJiSYr98zy66BvgWD3xIwuhj0hcFSdJdcbh0jIlb3gIw2HwX wNKEG9IN6kGZYWRMpBu4pafPHfUdsxEXSzDI5Hh+3uR7IU2ut/aP8ItFL9bIlakdHH3F uA+1PAW1uAxmIk2U1B3jwra1vncjnkkLf09adv2/EmwFl6oCezFkYL89cYzWSk2oJU0g YeYfIXV5GtL8iVYeidLkDuCbBiB6CTtlS8nbRoa5X+w4H0+DdJqCqW8PHDjNj8SV6boi FGmw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1698155279; x=1698760079; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=Od9SIx6nrlV00gepzGuthWnU/KWY+8bGDr+wczV24Vo=; b=g7g5Oj7oUjrzaIIIeb6VQj5FoX58I5lVYquzC5kvfDZ0kc3bNyH5bYAnUsaPLhIu49 4n+3TeO/uWdveCGHmByFlJ0K0KfS4n9sR/w/kG9NCo9SPlXRFbsw/e4St2JXvsycp4E6 qDkIIvIYIg+zCprHJrbEc82ACgPmwdlhuRdaAtaqkE24xCBRF5xmPhkTuhV9BIim+lLe RSyLeO+ZHbHYMQS5PQj5yMHluUXOQBvwHlwDM81myk5VVlN3Ihv1PMrmsv3FhFEwC6CW uN+kZwtcMkCaxouIV86QfQxDN+SXjHBVQx0+bX+27MHDlk29zBFUOGaKHnDNR13qB56i mhHA== X-Gm-Message-State: AOJu0YzD3S4XCY0X6vbZt+X9t9DK55NBs5QXdze/6wtXLsP3mkO9VMRO BYtvo9NZpYzdYCYBjLTTcf/Sn8pk/qc= X-Google-Smtp-Source: 
AGHT+IHTNe96EdD9LLd7MUZqFW1q382ncfW7dWktSIhQfdb/UWURbia5QNDViYiMSeL9bXxhLhwSMXQz/Wc= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:45ba:3318:d7a5:336a]) (user=surenb job=sendgmr) by 2002:a25:76cc:0:b0:d9a:68de:16a1 with SMTP id r195-20020a2576cc000000b00d9a68de16a1mr246429ybc.0.1698155279398; Tue, 24 Oct 2023 06:47:59 -0700 (PDT) Date: Tue, 24 Oct 2023 06:46:32 -0700 In-Reply-To: <20231024134637.3120277-1-surenb@google.com> Mime-Version: 1.0 References: <20231024134637.3120277-1-surenb@google.com> X-Mailer: git-send-email 2.42.0.758.gaed0368e0e-goog Message-ID: <20231024134637.3120277-36-surenb@google.com> Subject: [PATCH v2 35/39] lib: add memory allocations report in show_mem() From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, vvvvvv@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Include allocations in show_mem reports. 
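One dense spot in the report code below, unpacked for readability: alloc_tags_show_mem_report() keeps the ten largest tags in an array sorted by bytes, descending, and the line "nr -=3D nr =3D=3D ARRAY_SIZE(tags)" (quoted-printable for "nr -= nr == ARRAY_SIZE(tags)") drops the smallest kept entry only when the array is already full. The same loop as a standalone sketch, with stand-in names but matching behavior:

	#include <stddef.h>
	#include <string.h>

	struct entry {
		const void *tag;
		size_t bytes;
	};

	void top_n_insert(struct entry *tags, unsigned int *nr,
			  unsigned int max, struct entry n)
	{
		unsigned int i;

		/* find the rank of the new entry (array is sorted descending) */
		for (i = 0; i < *nr; i++)
			if (n.bytes > tags[i].bytes)
				break;

		if (i >= max)
			return;		/* smaller than every kept entry */

		if (*nr == max)		/* full: the smallest entry falls off; */
			(*nr)--;	/* this is "nr -= nr == ARRAY_SIZE(tags)" */

		/* shift the tail down one slot and insert at rank i */
		memmove(&tags[i + 1], &tags[i], sizeof(tags[0]) * (*nr - i));
		tags[i] = n;
		(*nr)++;
	}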
Signed-off-by: Kent Overstreet Signed-off-by: Suren Baghdasaryan --- include/linux/alloc_tag.h | 2 ++ lib/alloc_tag.c | 37 +++++++++++++++++++++++++++++++++++++ mm/show_mem.c | 15 +++++++++++++++ 3 files changed, 54 insertions(+) diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h index 3fe51e67e231..0a5973c4ad77 100644 --- a/include/linux/alloc_tag.h +++ b/include/linux/alloc_tag.h @@ -30,6 +30,8 @@ struct alloc_tag { =20 #ifdef CONFIG_MEM_ALLOC_PROFILING =20 +void alloc_tags_show_mem_report(struct seq_buf *s); + static inline struct alloc_tag *ct_to_alloc_tag(struct codetag *ct) { return container_of(ct, struct alloc_tag, ct); diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c index 2d5226d9262d..2f7a2e3ddf55 100644 --- a/lib/alloc_tag.c +++ b/lib/alloc_tag.c @@ -96,6 +96,43 @@ static const struct seq_operations allocinfo_seq_op =3D { .show =3D allocinfo_show, }; =20 +void alloc_tags_show_mem_report(struct seq_buf *s) +{ + struct codetag_iterator iter; + struct codetag *ct; + struct { + struct codetag *tag; + size_t bytes; + } tags[10], n; + unsigned int i, nr =3D 0; + + codetag_lock_module_list(alloc_tag_cttype, true); + iter =3D codetag_get_ct_iter(alloc_tag_cttype); + while ((ct =3D codetag_next_ct(&iter))) { + struct alloc_tag_counters counter =3D alloc_tag_read(ct_to_alloc_tag(ct)= ); + n.tag =3D ct; + n.bytes =3D counter.bytes; + + for (i =3D 0; i < nr; i++) + if (n.bytes > tags[i].bytes) + break; + + if (i < ARRAY_SIZE(tags)) { + nr -=3D nr =3D=3D ARRAY_SIZE(tags); + memmove(&tags[i + 1], + &tags[i], + sizeof(tags[0]) * (nr - i)); + nr++; + tags[i] =3D n; + } + } + + for (i =3D 0; i < nr; i++) + alloc_tag_to_text(s, tags[i].tag); + + codetag_lock_module_list(alloc_tag_cttype, false); +} + static void __init procfs_init(void) { proc_create_seq("allocinfo", 0444, NULL, &allocinfo_seq_op); diff --git a/mm/show_mem.c b/mm/show_mem.c index 4b888b18bdde..660e9a78a34d 100644 --- a/mm/show_mem.c +++ b/mm/show_mem.c @@ -12,6 +12,7 @@ #include #include #include +#include #include #include =20 @@ -426,4 +427,18 @@ void __show_mem(unsigned int filter, nodemask_t *nodem= ask, int max_zone_idx) #ifdef CONFIG_MEMORY_FAILURE printk("%lu pages hwpoisoned\n", atomic_long_read(&num_poisoned_pages)); #endif +#ifdef CONFIG_MEM_ALLOC_PROFILING + { + struct seq_buf s; + char *buf =3D kmalloc(4096, GFP_ATOMIC); + + if (buf) { + printk("Memory allocations:\n"); + seq_buf_init(&s, buf, 4096); + alloc_tags_show_mem_report(&s); + printk("%s", buf); + kfree(buf); + } + } +#endif } --=20 2.42.0.758.gaed0368e0e-goog From nobody Thu Jan 1 07:45:32 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EB8B3C00A8F for ; Tue, 24 Oct 2023 13:51:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343778AbjJXNvV (ORCPT ); Tue, 24 Oct 2023 09:51:21 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58240 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234856AbjJXNuU (ORCPT ); Tue, 24 Oct 2023 09:50:20 -0400 Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BFE0C1FFE for ; Tue, 24 Oct 2023 06:48:02 -0700 (PDT) Received: by mail-yb1-xb49.google.com with SMTP id 3f1490d57ef6-da0631f977bso19841276.2 for ; Tue, 24 Oct 2023 06:48:02 -0700 (PDT) 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1698155282; x=1698760082; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=+3jXXsjNPLXrWv+rf69UQa/6uvXnoVOikFRei2b/KT8=; b=mj7mp9mjCnUwbHFzXtJRB8JtxCctz3Xv/EqKylemAcbGt3uoD4496yEaf5lDjabkHE tcFBvnP/9NZuVH7xxABOxsHuGMlWTjIpU4ohTuYAN9usv5NLAEoKk1/eO4+1+GknmpfM 5B6469drXXWkacLG2A428xOET1WYv/6UQDtqvDZLDRevtVGiskNMxKrZcppMWKkU7zrf lwGUctS2Eg9fLDMQdLsqgw/rbFqd7W1xNoEMo4iW/CBjLkbdwsAGfaI+hEzdTZeBZNEE HoUmlqjWNgenU05LbyAavx2Yyfu182dIM8E08RGfVax6V5DaBmuH3THAAIYfmy5Ke2Cn xxXw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1698155282; x=1698760082; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=+3jXXsjNPLXrWv+rf69UQa/6uvXnoVOikFRei2b/KT8=; b=PbcO9qgYBFEdk2sk7fmhOswB45+msNDUL7G4OV+QZrpe9zoNEfbra/4R4U3SmrdDmN +7VlYjSreAoVSiPfASwojuiYb1Yx6a/SgYUBZGkz4QSWdf/McGR5Dsi1fN9p9jnQtFDl ffMQVxrznMId42QRjJdBybPQqk9M36oEHAxHYsJIAiMCCjXH9vltADMrgES4ILsfeNLw 7e1Zpl074U5S3nQWDOgGdJyCZEuCVRHNTzzK29720s1vZigWaQIj7Gtxz+zaisOr6DmT VUfirUxzHgXOfLwZHI75NVsxwA3imtZ7y4o8+4QVLaotUzivvphRZQAa53DD618ce6JK DWSQ== X-Gm-Message-State: AOJu0YwmEq6SviN8mNIM1ToEtUrg52XnauW6JA3SzNXP2EckLbiuK+dP NQUK5J3gd8xDwff92z9MghsQOwFAYjI= X-Google-Smtp-Source: AGHT+IH8yezIA8jxKZavrTPxkBvsmi2Yxz7WtkoiuiQPT9VSDtMO7/bIUQjwMdJw5xsPHzLAEf+sR1TuKL8= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:45ba:3318:d7a5:336a]) (user=surenb job=sendgmr) by 2002:a25:d50c:0:b0:d9c:66d1:9597 with SMTP id r12-20020a25d50c000000b00d9c66d19597mr227720ybe.13.1698155281680; Tue, 24 Oct 2023 06:48:01 -0700 (PDT) Date: Tue, 24 Oct 2023 06:46:33 -0700 In-Reply-To: <20231024134637.3120277-1-surenb@google.com> Mime-Version: 1.0 References: <20231024134637.3120277-1-surenb@google.com> X-Mailer: git-send-email 2.42.0.758.gaed0368e0e-goog Message-ID: <20231024134637.3120277-37-surenb@google.com> Subject: [PATCH v2 36/39] codetag: debug: skip objext checking when it's for objext itself From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, vvvvvv@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, 
kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" objext objects are created with __GFP_NO_OBJ_EXT flag and therefore have no corresponding objext themselves (otherwise we would get an infinite recursion). When freeing these objects their codetag will be empty and when CONFIG_MEM_ALLOC_PROFILING_DEBUG is enabled this will lead to false warnings. Introduce CODETAG_EMPTY special codetag value to mark allocations which intentionally lack codetag to avoid these warnings. Set objext codetags to CODETAG_EMPTY before freeing to indicate that the codetag is expected to be empty. Signed-off-by: Suren Baghdasaryan --- include/linux/alloc_tag.h | 26 ++++++++++++++++++++++++++ mm/slab.h | 33 +++++++++++++++++++++++++++++++++ mm/slab_common.c | 1 + 3 files changed, 60 insertions(+) diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h index 0a5973c4ad77..1f3207097b03 100644 --- a/include/linux/alloc_tag.h +++ b/include/linux/alloc_tag.h @@ -77,6 +77,27 @@ static inline struct alloc_tag_counters alloc_tag_read(s= truct alloc_tag *tag) return v; } =20 +#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG + +#define CODETAG_EMPTY (void *)1 + +static inline bool is_codetag_empty(union codetag_ref *ref) +{ + return ref->ct =3D=3D CODETAG_EMPTY; +} + +static inline void set_codetag_empty(union codetag_ref *ref) +{ + if (ref) + ref->ct =3D CODETAG_EMPTY; +} + +#else /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */ + +static inline bool is_codetag_empty(union codetag_ref *ref) { return false= ; } + +#endif /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */ + static inline void __alloc_tag_sub(union codetag_ref *ref, size_t bytes) { struct alloc_tag *tag; @@ -87,6 +108,11 @@ static inline void __alloc_tag_sub(union codetag_ref *r= ef, size_t bytes) if (!ref || !ref->ct) return; =20 + if (is_codetag_empty(ref)) { + ref->ct =3D NULL; + return; + } + tag =3D ct_to_alloc_tag(ref->ct); =20 this_cpu_sub(tag->counters->bytes, bytes); diff --git a/mm/slab.h b/mm/slab.h index 4859ce1f8808..45216bad34b8 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -455,6 +455,31 @@ static inline struct slabobj_ext *slab_obj_exts(struct= slab *slab) int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s, gfp_t gfp, bool new_slab); =20 + +#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG + +static inline void mark_objexts_empty(struct slabobj_ext *obj_exts) +{ + struct slabobj_ext *slab_exts; + struct slab *obj_exts_slab; + + obj_exts_slab =3D virt_to_slab(obj_exts); + slab_exts =3D slab_obj_exts(obj_exts_slab); + if (slab_exts) { + unsigned int offs =3D obj_to_index(obj_exts_slab->slab_cache, + obj_exts_slab, obj_exts); + /* codetag should be NULL */ + WARN_ON(slab_exts[offs].ref.ct); + set_codetag_empty(&slab_exts[offs].ref); + } +} + +#else /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */ + +static inline void mark_objexts_empty(struct slabobj_ext *obj_exts) {} + +#endif /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */ + static inline bool need_slab_obj_ext(void) { #ifdef CONFIG_MEM_ALLOC_PROFILING @@ -476,6 +501,14 @@ static inline void free_slab_obj_exts(struct slab *sla= b) if (!obj_exts) return; =20 + /* + * obj_exts was created with __GFP_NO_OBJ_EXT flag, therefore its + * corresponding extension will be NULL. 
alloc_tag_sub() will throw a + * warning if slab has extensions but the extension of an object is + * NULL, therefore replace NULL with CODETAG_EMPTY to indicate that + * the extension for obj_exts is expected to be NULL. + */ + mark_objexts_empty(obj_exts); kfree(obj_exts); slab->obj_exts =3D 0; } diff --git a/mm/slab_common.c b/mm/slab_common.c index 8ef5e47ff6a7..db2cd7afc353 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -246,6 +246,7 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_= cache *s, * assign slabobj_exts in parallel. In this case the existing * objcg vector should be reused. */ + mark_objexts_empty(vec); kfree(vec); return 0; } --=20 2.42.0.758.gaed0368e0e-goog From nobody Thu Jan 1 07:45:32 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F0EA7C25B48 for ; Tue, 24 Oct 2023 13:51:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343490AbjJXNvv (ORCPT ); Tue, 24 Oct 2023 09:51:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58270 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234864AbjJXNuW (ORCPT ); Tue, 24 Oct 2023 09:50:22 -0400 Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0964210DB for ; Tue, 24 Oct 2023 06:48:05 -0700 (PDT) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-5a7c97d5d5aso61682767b3.3 for ; Tue, 24 Oct 2023 06:48:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1698155284; x=1698760084; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=nhDAtTQGKDGtY2/txJr7AfaYGvHf9XTIsotRfIG2XI4=; b=uTuQ+BeipJRSS468RV0J4N/errrKktwdtqAbygziGLINd4pYnX/Wwy/y2hB9kX0K9L Z1mPWbrzmS4w+Ctdw12m0O1wQajoVEKkDB4XdfbwlWyqODEv5prGsg0+8RX7jxlEoW6D zDhluOqX8vDIA0hTaGAy3UC6tSp7I/LMoyf2MIV5evcqrbu5FID7xA5wMlVCt0DoQvTw 7ruuskGso+/4+CiI8DoPTgiLs+8zytR60Y6thqFcPea6/oliJnso9DwsjjUCMoxUCZzQ bznQEhc5P48uoia9Xv2z+lAQGinEXYCjLj3vfiUUkpArYnCUglOSQcsW5lViaV2ni8EI OHag== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1698155284; x=1698760084; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=nhDAtTQGKDGtY2/txJr7AfaYGvHf9XTIsotRfIG2XI4=; b=QNxuAP6QyYgvdCvR2foK8YrSC/QEJUS/I+UHYSt/zdVQXWsDZ80jgQWHknrljopwmS Oor6oRJfgDDzdgHrPZ7T1HyTYOuInVm5HcF10rtfYjT+7WF8lS6pV5HQpy5ljlzFL4a+ lXulzXap6/0R1+rFi5ErVL1/iH2YhcVJrNNiKWPybyb/NGE0mKOtpaCYmElN9szn7u2C eMO1WlafTZos/LxeARlKySF31dCGApD1VOGOUzMuIhWaqalSRrORMuSW2axx28r45sHp 12Yogf1RDA+DVcr51GZspOkLfFYdkTlO72VJ0FLZgYc63dk0ye8aoJMmGaEMIAIyYrbL m+TA== X-Gm-Message-State: AOJu0YyLaKluCQi2VY7RtRn7kzORKjo1o1JFx6LxCZzyaKmV4UsKFWu5 lZgmgGXi2DNn5WKZFcphk5TALM9WgPc= X-Google-Smtp-Source: AGHT+IEqWGHLJsK7BvvDJRxELns2H70hhRTn4578toCcoqAqCoHJoeJS8HRj9/dFP0FFd7ukqdr613N1OPo= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:45ba:3318:d7a5:336a]) (user=surenb job=sendgmr) by 2002:a25:8541:0:b0:d89:b072:d06f with SMTP id f1-20020a258541000000b00d89b072d06fmr231639ybn.7.1698155283973; Tue, 24 Oct 2023 06:48:03 -0700 (PDT) Date: Tue, 24 Oct 2023 06:46:34 -0700 
In-Reply-To: <20231024134637.3120277-1-surenb@google.com> Mime-Version: 1.0 References: <20231024134637.3120277-1-surenb@google.com> X-Mailer: git-send-email 2.42.0.758.gaed0368e0e-goog Message-ID: <20231024134637.3120277-39-surenb@google.com> Subject: [PATCH v2 37/39] codetag: debug: mark codetags for reserved pages as empty From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, vvvvvv@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" To avoid debug warnings while freeing reserved pages that were not allocated with the usual allocators, mark their codetags as empty before freeing. Maybe we can annotate reserved pages correctly and avoid this?
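For context on why the marking works, here is the free-side check this patch is aiming at, condensed into a standalone sketch (the type, function name, and WARN wording are stand-ins; the control flow follows __alloc_tag_sub() from the earlier CODETAG_EMPTY patch under CONFIG_MEM_ALLOC_PROFILING_DEBUG=y):

	/* sentinel: "this reference is intentionally untagged" */
	#define CODETAG_EMPTY ((void *)1)

	union codetag_ref {
		void *ct;
	};

	void codetag_check_on_free(union codetag_ref *ref)
	{
		if (!ref || !ref->ct) {
			/* a NULL tag on a tracked object is what the DEBUG
			 * checks flag as a lost annotation (false warning) */
			return;
		}

		if (ref->ct == CODETAG_EMPTY) {
			ref->ct = NULL;	/* intentionally untagged: no warning */
			return;
		}

		/* normal path: subtract the freed bytes from the owning tag */
	}

Reserved pages never passed through the tagged allocation paths, so without the sentinel they would take the first branch at free time; setting CODETAG_EMPTY in free_reserved_page() below routes them through the second branch instead.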
Signed-off-by: Suren Baghdasaryan --- include/linux/alloc_tag.h | 2 ++ include/linux/mm.h | 8 ++++++++ include/linux/pgalloc_tag.h | 2 ++ 3 files changed, 12 insertions(+) diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h index 1f3207097b03..102caf62c2a9 100644 --- a/include/linux/alloc_tag.h +++ b/include/linux/alloc_tag.h @@ -95,6 +95,7 @@ static inline void set_codetag_empty(union codetag_ref *r= ef) #else /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */ =20 static inline bool is_codetag_empty(union codetag_ref *ref) { return false= ; } +static inline void set_codetag_empty(union codetag_ref *ref) {} =20 #endif /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */ =20 @@ -155,6 +156,7 @@ static inline void alloc_tag_sub(union codetag_ref *ref= , size_t bytes) {} static inline void alloc_tag_sub_noalloc(union codetag_ref *ref, size_t by= tes) {} static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag = *tag, size_t bytes) {} +static inline void set_codetag_empty(union codetag_ref *ref) {} =20 #endif =20 diff --git a/include/linux/mm.h b/include/linux/mm.h index bf5d0b1b16f4..310129414833 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -5,6 +5,7 @@ #include #include #include +#include #include #include #include @@ -3077,6 +3078,13 @@ extern void reserve_bootmem_region(phys_addr_t start, /* Free the reserved page into the buddy system, so it gets managed. */ static inline void free_reserved_page(struct page *page) { + union codetag_ref *ref; + + ref =3D get_page_tag_ref(page); + if (ref) { + set_codetag_empty(ref); + put_page_tag_ref(ref); + } ClearPageReserved(page); init_page_count(page); __free_page(page); diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h index 0174aff5e871..ae9b0f359264 100644 --- a/include/linux/pgalloc_tag.h +++ b/include/linux/pgalloc_tag.h @@ -93,6 +93,8 @@ static inline void pgalloc_tag_split(struct page *page, u= nsigned int nr) =20 #else /* CONFIG_MEM_ALLOC_PROFILING */ =20 +static inline union codetag_ref *get_page_tag_ref(struct page *page) { ret= urn NULL; } +static inline void put_page_tag_ref(union codetag_ref *ref) {} static inline void pgalloc_tag_add(struct page *page, struct task_struct *= task, unsigned int order) {} static inline void pgalloc_tag_sub(struct page *page, unsigned int order) = {} --=20 2.42.0.758.gaed0368e0e-goog From nobody Thu Jan 1 07:45:32 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8CB01C00A8F for ; Tue, 24 Oct 2023 13:52:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343759AbjJXNwI (ORCPT ); Tue, 24 Oct 2023 09:52:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41064 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1343669AbjJXNu3 (ORCPT ); Tue, 24 Oct 2023 09:50:29 -0400 Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com [IPv6:2607:f8b0:4864:20::114a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 16EED10FC for ; Tue, 24 Oct 2023 06:48:07 -0700 (PDT) Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-5acac8b6575so12790027b3.1 for ; Tue, 24 Oct 2023 06:48:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1698155286; x=1698760086; darn=vger.kernel.org; 
h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=nwLU1aU2LNAtecsch6dr14rd1yTTpsfUN6nYD0uWJEE=; b=lO8oYsG5g14dNpqW9ouj1s9M69yM5MZ0rmpNEdvNfhQGuka2UpS4+gTkTkTLrnRCu+ R5AobkkLh7AXlvJkml+093Ak5izfo+xFuzyMswEZK5obqBZh6gXDiMnx5gR7fOKGlgjY Ri85Ii+Gh46qA74+B5+IOlRDrppLj62h3HhEptVf/gj4i7dUTEoU7A09+HKIHAHLwmG0 eTesSqwFQfe705J6lwxlV5cFVg0bln3s+XVTH1zXDNcxzEtSW58PEZptjFRlMzqI9FUR m0pdOVNzJv7fbXqaf+sqajhdL+625ejoN30xcyS3LVTKZ9LLuDW/+EYSUxR1nGuGwCzn qCoQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1698155286; x=1698760086; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=nwLU1aU2LNAtecsch6dr14rd1yTTpsfUN6nYD0uWJEE=; b=HsP+v/PJvMpFTVj9Qin3iN7UVwb6Al5dGMbZXGeRjmF0O8ZnIu2tRmC+zI2KCGuDoa 6D0qpnArIZ5UhoIdkyw126YuPucah9uxH3rxOJc4RrGB6mwzezraNBSn7EbuwID1ugD1 CSPhUygGv1FGhcN1ylrt2wfWe10Jm18ckAlbK41ummwEICiys6Z/RPFLQrvqWwQYi0v0 Rr6EtM8/PpHK0WgLRGzB+7ej1SI9CURVtkr+Ax4GHFAExjwtUqy/sgoH/amyBREaHdNr lK57L8PTAvdugyIH0qqQI2sNk35QkRdngVj+iyeaw6G+MR5Mpc/37jVr96W8R1F4kI82 gUwQ== X-Gm-Message-State: AOJu0Ywozf7VOzvgy0EXmwefpc8DfueKkFvYrZNBESsPvNna8wRVW6fT WrNTTuzMxFQGXra0Itf0cYmRFY5WFWM= X-Google-Smtp-Source: AGHT+IEQFxSAdc50qujfzmzJKJcWY3qyZfXbT1k7XuYaFuTlpAInpKqkovdUmqHR4hu8HiZ2L8Ah2MqCV0A= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:45ba:3318:d7a5:336a]) (user=surenb job=sendgmr) by 2002:a25:d244:0:b0:d9a:4cc1:b59a with SMTP id j65-20020a25d244000000b00d9a4cc1b59amr317326ybg.1.1698155286109; Tue, 24 Oct 2023 06:48:06 -0700 (PDT) Date: Tue, 24 Oct 2023 06:46:35 -0700 In-Reply-To: <20231024134637.3120277-1-surenb@google.com> Mime-Version: 1.0 References: <20231024134637.3120277-1-surenb@google.com> X-Mailer: git-send-email 2.42.0.758.gaed0368e0e-goog Message-ID: <20231024134637.3120277-39-surenb@google.com> Subject: [PATCH v2 38/39] codetag: debug: introduce OBJEXTS_ALLOC_FAIL to mark failed slab_ext allocations From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, vvvvvv@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, 
linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" If slabobj_ext vector allocation for a slab object fails and later on it succeeds for another object in the same slab, the slabobj_ext for the original object will be NULL and will be flagged in case when CONFIG_MEM_ALLOC_PROFILING_DEBUG is enabled. Mark failed slabobj_ext vector allocations using a new objext_flags flag stored in the lower bits of slab->obj_exts. When new allocation succeeds it marks all tag references in the same slabobj_ext vector as empty to avoid warnings implemented by CONFIG_MEM_ALLOC_PROFILING_DEBUG checks. Signed-off-by: Suren Baghdasaryan --- include/linux/memcontrol.h | 4 +++- mm/slab.h | 25 +++++++++++++++++++++++++ mm/slab_common.c | 22 +++++++++++++++------- 3 files changed, 43 insertions(+), 8 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 853a24b5f713..6b680ca424e3 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -363,8 +363,10 @@ enum page_memcg_data_flags { #endif /* CONFIG_MEMCG */ =20 enum objext_flags { + /* slabobj_ext vector failed to allocate */ + OBJEXTS_ALLOC_FAIL =3D __FIRST_OBJEXT_FLAG, /* the next bit after the last actual flag */ - __NR_OBJEXTS_FLAGS =3D __FIRST_OBJEXT_FLAG, + __NR_OBJEXTS_FLAGS =3D (__FIRST_OBJEXT_FLAG << 1), }; =20 #define OBJEXTS_FLAGS_MASK (__NR_OBJEXTS_FLAGS - 1) diff --git a/mm/slab.h b/mm/slab.h index 45216bad34b8..1736268892e6 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -474,9 +474,34 @@ static inline void mark_objexts_empty(struct slabobj_e= xt *obj_exts) } } =20 +static inline void mark_failed_objexts_alloc(struct slab *slab) +{ + slab->obj_exts =3D OBJEXTS_ALLOC_FAIL; +} + +static inline void handle_failed_objexts_alloc(unsigned long obj_exts, + struct slabobj_ext *vec, unsigned int objects) +{ + /* + * If vector previously failed to allocate then we have live + * objects with no tag reference. Mark all references in this + * vector as empty to avoid warnings later on. 
+ */ + if (obj_exts & OBJEXTS_ALLOC_FAIL) { + unsigned int i; + + for (i =3D 0; i < objects; i++) + set_codetag_empty(&vec[i].ref); + } +} + + #else /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */ =20 static inline void mark_objexts_empty(struct slabobj_ext *obj_exts) {} +static inline void mark_failed_objexts_alloc(struct slab *slab) {} +static inline void handle_failed_objexts_alloc(unsigned long obj_exts, + struct slabobj_ext *vec, unsigned int objects) {} =20 #endif /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */ =20 diff --git a/mm/slab_common.c b/mm/slab_common.c index db2cd7afc353..cea73314f919 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -218,29 +218,37 @@ int alloc_slab_obj_exts(struct slab *slab, struct kme= m_cache *s, gfp_t gfp, bool new_slab) { unsigned int objects =3D objs_per_slab(s, slab); - unsigned long obj_exts; - void *vec; + unsigned long new_exts; + unsigned long old_exts; + struct slabobj_ext *vec; =20 gfp &=3D ~OBJCGS_CLEAR_MASK; /* Prevent recursive extension vector allocation */ gfp |=3D __GFP_NO_OBJ_EXT; vec =3D kcalloc_node(objects, sizeof(struct slabobj_ext), gfp, slab_nid(slab)); - if (!vec) + if (!vec) { + /* Mark vectors which failed to allocate */ + if (new_slab) + mark_failed_objexts_alloc(slab); + return -ENOMEM; + } =20 - obj_exts =3D (unsigned long)vec; + new_exts =3D (unsigned long)vec; #ifdef CONFIG_MEMCG - obj_exts |=3D MEMCG_DATA_OBJEXTS; + new_exts |=3D MEMCG_DATA_OBJEXTS; #endif + old_exts =3D slab->obj_exts; + handle_failed_objexts_alloc(old_exts, vec, objects); if (new_slab) { /* * If the slab is brand new and nobody can yet access its * obj_exts, no synchronization is required and obj_exts can * be simply assigned. */ - slab->obj_exts =3D obj_exts; - } else if (cmpxchg(&slab->obj_exts, 0, obj_exts)) { + slab->obj_exts =3D new_exts; + } else if (cmpxchg(&slab->obj_exts, old_exts, new_exts) !=3D old_exts) { /* * If the slab is already in use, somebody can allocate and * assign slabobj_exts in parallel. 
		 * assign slabobj_exts in parallel. In this case the existing
-- 
2.42.0.758.gaed0368e0e-goog
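Because slab->obj_exts may now hold OBJEXTS_ALLOC_FAIL instead of 0, the
publishing cmpxchg() above compares against the previously sampled old_exts
rather than against 0. The following user-space simulation of that
sample/recover/publish sequence uses C11 atomics and invented stand-in
types, so it only approximates the kernel logic; in particular, what it
does with the losing vector (a plain free) is a simplification:

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

/* Invented stand-in: plays the role of a slabobj_ext tag reference. */
struct tag_ref {
	int empty;
};

#define ALLOC_FAIL	0x1UL			/* low-bit failure flag */

static _Atomic unsigned long obj_exts;		/* stand-in for slab->obj_exts */

static int alloc_exts_vector(unsigned int objects, int simulate_failure)
{
	struct tag_ref *vec;
	unsigned long old, new;

	if (simulate_failure) {
		/* Remember that live objects now exist with no tag reference. */
		atomic_store(&obj_exts, ALLOC_FAIL);
		return -1;
	}

	vec = calloc(objects, sizeof(*vec));
	if (!vec)
		return -1;

	old = atomic_load(&obj_exts);
	if (old & ALLOC_FAIL) {
		/* Recover from the earlier failure: mark every reference
		 * empty so debug checks stay quiet about those objects. */
		for (unsigned int i = 0; i < objects; i++)
			vec[i].empty = 1;
	}

	new = (unsigned long)vec;
	if (!atomic_compare_exchange_strong(&obj_exts, &old, new)) {
		/* Lost the race: obj_exts changed since we sampled old,
		 * meaning someone else published a vector; discard ours. */
		free(vec);
	}
	return 0;
}

int main(void)
{
	alloc_exts_vector(4, 1);	/* first attempt fails, flag is set */
	alloc_exts_vector(4, 0);	/* retry succeeds, refs marked empty */
	printf("obj_exts = %#lx\n", atomic_load(&obj_exts));
	free((void *)(atomic_load(&obj_exts) & ~ALLOC_FAIL));
	return 0;
}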
From nobody Thu Jan 1 07:45:32 2026
Date: Tue, 24 Oct 2023 06:46:36 -0700
In-Reply-To: <20231024134637.3120277-1-surenb@google.com>
Mime-Version: 1.0
References: <20231024134637.3120277-1-surenb@google.com>
X-Mailer: git-send-email 2.42.0.758.gaed0368e0e-goog
Message-ID: <20231024134637.3120277-40-surenb@google.com>
Subject: [PATCH v2 39/39] MAINTAINERS: Add entries for code tagging and memory allocation profiling
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org,
 roman.gushchin@linux.dev, mgorman@suse.de, dave@stgolabs.net, willy@infradead.org,
 liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org,
 juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org,
 arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com,
 x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk,
 mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org,
 tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org,
 pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com,
 dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org,
 ndesaulniers@google.com, vvvvvv@google.com, gregkh@linuxfoundation.org,
 ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
 bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org,
 iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com,
 dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com,
 jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com,
 surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org,
 kasan-dev@googlegroups.com, cgroups@vger.kernel.org
Precedence: bulk
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

From: Kent Overstreet

The new code and libraries added by this series are being maintained;
mark them as such.

Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
---
 MAINTAINERS | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 2894f0777537..22e51de42131 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -5118,6 +5118,13 @@ S:	Supported
 F:	Documentation/process/code-of-conduct-interpretation.rst
 F:	Documentation/process/code-of-conduct.rst
 
+CODE TAGGING
+M:	Suren Baghdasaryan
+M:	Kent Overstreet
+S:	Maintained
+F:	include/linux/codetag.h
+F:	lib/codetag.c
+
 COMEDI DRIVERS
 M:	Ian Abbott
 M:	H Hartley Sweeten
@@ -13708,6 +13715,15 @@ F:	mm/memblock.c
 F:	mm/mm_init.c
 F:	tools/testing/memblock/
 
+MEMORY ALLOCATION PROFILING
+M:	Suren Baghdasaryan
+M:	Kent Overstreet
+S:	Maintained
+F:	include/linux/alloc_tag.h
+F:	include/linux/codetag_ctx.h
+F:	lib/alloc_tag.c
+F:	lib/pgalloc_tag.c
+
 MEMORY CONTROLLER DRIVERS
 M:	Krzysztof Kozlowski
 L:	linux-kernel@vger.kernel.org
-- 
2.42.0.758.gaed0368e0e-goog