From nobody Mon May 11 04:52:14 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Andrew Morton, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
 Roman Gushchin, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 01/23] mm/slab: move NUMA-related code to __do_cache_alloc()
Date: Thu, 14 Apr 2022 17:57:05 +0900
Message-Id: <20220414085727.643099-2-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

To implement slab_alloc_node() independently of the NUMA configuration,
move the NUMA fallback/alternate-node allocation code into
__do_cache_alloc(). One functional change: the node's availability is no
longer checked when allocating from the local node.
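[Editorial note, not part of the patch: the shape of this refactor can be sketched as a userspace toy. All names below (pool, alloc_on_node, do_cache_alloc) are hypothetical stand-ins for the kernel's per-node caches and ____cache_alloc_node(); the real code is far more involved.]

```c
#include <assert.h>
#include <stddef.h>

#define NUMA_NO_NODE (-1)
#define NR_NODES 2

/* Toy per-node freelists standing in for kmem_cache per-node state. */
static int pool[NR_NODES] = { 3, 0 };   /* node 1 starts empty */
static int local_node = 0;

/* ____cache_alloc_node() analogue: allocate strictly from one node. */
static void *alloc_on_node(int node)
{
    if (node >= 0 && node < NR_NODES && pool[node] > 0) {
        pool[node]--;
        return &pool[node];
    }
    return NULL;
}

/*
 * __do_cache_alloc() analogue after the patch: all node selection and
 * fallback lives here, so the caller (slab_alloc_node() in the patch)
 * no longer needs any NUMA-specific logic.
 */
static void *do_cache_alloc(int nodeid)
{
    void *p;

    if (nodeid == NUMA_NO_NODE)
        nodeid = local_node;            /* prefer the local node */

    p = alloc_on_node(nodeid);
    if (!p && nodeid != local_node)
        p = alloc_on_node(local_node);  /* fall back to local node */
    return p;
}
```

With this split, a caller passes NUMA_NO_NODE when it has no preference, mirroring how slab_alloc() forwards NUMA_NO_NODE in the diff below.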
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
Changes from v1:
- Undo removal of the alternate_node_alloc() path when no node id is
  specified (removing it was a mistake).

 mm/slab.c | 68 +++++++++++++++++++++++++------------------------------
 1 file changed, 31 insertions(+), 37 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index e882657c1494..d854c24d5f5a 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3187,13 +3187,14 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
 	return obj ? obj : fallback_alloc(cachep, flags);
 }
 
+static void *__do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid);
+
 static __always_inline void *
 slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_size,
 		unsigned long caller)
 {
 	unsigned long save_flags;
 	void *ptr;
-	int slab_node = numa_mem_id();
 	struct obj_cgroup *objcg = NULL;
 	bool init = false;
 
@@ -3208,30 +3209,7 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
 
 	cache_alloc_debugcheck_before(cachep, flags);
 	local_irq_save(save_flags);
-
-	if (nodeid == NUMA_NO_NODE)
-		nodeid = slab_node;
-
-	if (unlikely(!get_node(cachep, nodeid))) {
-		/* Node not bootstrapped yet */
-		ptr = fallback_alloc(cachep, flags);
-		goto out;
-	}
-
-	if (nodeid == slab_node) {
-		/*
-		 * Use the locally cached objects if possible.
-		 * However ____cache_alloc does not allow fallback
-		 * to other nodes. It may fail while we still have
-		 * objects on other nodes available.
-		 */
-		ptr = ____cache_alloc(cachep, flags);
-		if (ptr)
-			goto out;
-	}
-	/* ___cache_alloc_node can fall back to other nodes */
-	ptr = ____cache_alloc_node(cachep, flags, nodeid);
-out:
+	ptr = __do_cache_alloc(cachep, flags, nodeid);
 	local_irq_restore(save_flags);
 	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
 	init = slab_want_init_on_alloc(flags, cachep);
@@ -3242,31 +3220,46 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
 }
 
 static __always_inline void *
-__do_cache_alloc(struct kmem_cache *cache, gfp_t flags)
+__do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
 	void *objp;
+	int slab_node = numa_mem_id();
 
-	if (current->mempolicy || cpuset_do_slab_mem_spread()) {
-		objp = alternate_node_alloc(cache, flags);
-		if (objp)
-			goto out;
+	if (nodeid == NUMA_NO_NODE) {
+		if (current->mempolicy || cpuset_do_slab_mem_spread()) {
+			objp = alternate_node_alloc(cachep, flags);
+			if (objp)
+				goto out;
+		}
+		/*
+		 * Use the locally cached objects if possible.
+		 * However ____cache_alloc does not allow fallback
+		 * to other nodes. It may fail while we still have
+		 * objects on other nodes available.
+		 */
+		objp = ____cache_alloc(cachep, flags);
+		nodeid = slab_node;
+	} else if (nodeid == slab_node) {
+		objp = ____cache_alloc(cachep, flags);
+	} else if (!get_node(cachep, nodeid)) {
+		/* Node not bootstrapped yet */
+		objp = fallback_alloc(cachep, flags);
+		goto out;
 	}
-	objp = ____cache_alloc(cache, flags);
 
 	/*
 	 * We may just have run out of memory on the local node.
 	 * ____cache_alloc_node() knows how to locate memory on other nodes
 	 */
 	if (!objp)
-		objp = ____cache_alloc_node(cache, flags, numa_mem_id());
-
- out:
+		objp = ____cache_alloc_node(cachep, flags, nodeid);
+out:
 	return objp;
 }
 
 #else
 
 static __always_inline void *
-__do_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
+__do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid __maybe_unused)
 {
 	return ____cache_alloc(cachep, flags);
 }
@@ -3293,7 +3286,7 @@ slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
 
 	cache_alloc_debugcheck_before(cachep, flags);
 	local_irq_save(save_flags);
-	objp = __do_cache_alloc(cachep, flags);
+	objp = __do_cache_alloc(cachep, flags, NUMA_NO_NODE);
 	local_irq_restore(save_flags);
 	objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
 	prefetchw(objp);
@@ -3532,7 +3525,8 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 
 	local_irq_disable();
 	for (i = 0; i < size; i++) {
-		void *objp = kfence_alloc(s, s->object_size, flags) ?: __do_cache_alloc(s, flags);
+		void *objp = kfence_alloc(s, s->object_size, flags) ?:
+			     __do_cache_alloc(s, flags, NUMA_NO_NODE);
 
 		if (unlikely(!objp))
 			goto error;
-- 
2.32.0

From nobody Mon May 11 04:52:14 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Andrew Morton, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
 Roman Gushchin, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 02/23] mm/slab: cleanup slab_alloc() and slab_alloc_node()
Date: Thu, 14 Apr 2022 17:57:06 +0900
Message-Id: <20220414085727.643099-3-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Make slab_alloc_node() available even when CONFIG_NUMA=n, and make
slab_alloc() a wrapper of slab_alloc_node(). This is necessary for
further cleanups.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 mm/slab.c | 50 +++++++++++++------------------------------------
 1 file changed, 13 insertions(+), 37 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index d854c24d5f5a..f033d5b4fefb 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3187,38 +3187,6 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
 	return obj ? obj : fallback_alloc(cachep, flags);
 }
 
-static void *__do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid);
-
-static __always_inline void *
-slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_size,
-		unsigned long caller)
-{
-	unsigned long save_flags;
-	void *ptr;
-	struct obj_cgroup *objcg = NULL;
-	bool init = false;
-
-	flags &= gfp_allowed_mask;
-	cachep = slab_pre_alloc_hook(cachep, NULL, &objcg, 1, flags);
-	if (unlikely(!cachep))
-		return NULL;
-
-	ptr = kfence_alloc(cachep, orig_size, flags);
-	if (unlikely(ptr))
-		goto out_hooks;
-
-	cache_alloc_debugcheck_before(cachep, flags);
-	local_irq_save(save_flags);
-	ptr = __do_cache_alloc(cachep, flags, nodeid);
-	local_irq_restore(save_flags);
-	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
-	init = slab_want_init_on_alloc(flags, cachep);
-
-out_hooks:
-	slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr, init);
-	return ptr;
-}
-
 static __always_inline void *
 __do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
@@ -3267,8 +3235,8 @@ __do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid __maybe_unus
 #endif /* CONFIG_NUMA */
 
 static __always_inline void *
-slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
-	   size_t orig_size, unsigned long caller)
+slab_alloc_node(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
+		int nodeid, size_t orig_size, unsigned long caller)
 {
 	unsigned long save_flags;
 	void *objp;
@@ -3286,7 +3254,7 @@ slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
 
 	cache_alloc_debugcheck_before(cachep, flags);
 	local_irq_save(save_flags);
-	objp = __do_cache_alloc(cachep, flags, NUMA_NO_NODE);
+	objp = __do_cache_alloc(cachep, flags, nodeid);
 	local_irq_restore(save_flags);
 	objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
 	prefetchw(objp);
@@ -3297,6 +3265,14 @@ slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
 	return objp;
 }
 
+static __always_inline void *
+slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
+	   size_t orig_size, unsigned long caller)
+{
+	return slab_alloc_node(cachep, lru, flags, NUMA_NO_NODE, orig_size,
+			       caller);
+}
+
 /*
  * Caller needs to acquire correct kmem_cache_node's list_lock
  * @list: List of detached free slabs should be freed by caller
@@ -3585,7 +3561,7 @@ EXPORT_SYMBOL(kmem_cache_alloc_trace);
  */
 void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
-	void *ret = slab_alloc_node(cachep, flags, nodeid, cachep->object_size, _RET_IP_);
+	void *ret = slab_alloc_node(cachep, NULL, flags, nodeid, cachep->object_size, _RET_IP_);
 
 	trace_kmem_cache_alloc_node(_RET_IP_, ret,
 				    cachep->object_size, cachep->size,
@@ -3603,7 +3579,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 {
 	void *ret;
 
-	ret = slab_alloc_node(cachep, flags, nodeid, size, _RET_IP_);
+	ret = slab_alloc_node(cachep, NULL, flags, nodeid, size, _RET_IP_);
 
 	ret = kasan_kmalloc(cachep, ret, size, flags);
 	trace_kmalloc_node(_RET_IP_, ret,
-- 
2.32.0

From nobody Mon May 11 04:52:14 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Andrew Morton, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
 Roman Gushchin, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 03/23] mm/slab_common: remove CONFIG_NUMA ifdefs for common kmalloc functions
Date: Thu, 14 Apr 2022 17:57:07 +0900
Message-Id: <20220414085727.643099-4-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Now that slab_alloc_node() is available for SLAB when CONFIG_NUMA=n,
remove the CONFIG_NUMA ifdefs for common kmalloc functions.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 28 ----------------------------
 mm/slab.c            |  2 --
 mm/slob.c            |  5 +----
 mm/slub.c            |  6 ------
 4 files changed, 1 insertion(+), 40 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 11ceddcae9f4..a3b9d4c20d7e 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -444,38 +444,18 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 	kmem_cache_free_bulk(NULL, size, p);
 }
 
-#ifdef CONFIG_NUMA
 void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment __alloc_size(1);
 void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment __malloc;
-#else
-static __always_inline __alloc_size(1) void *__kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	return __kmalloc(size, flags);
-}
-
-static __always_inline void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node)
-{
-	return kmem_cache_alloc(s, flags);
-}
-#endif
 
 #ifdef CONFIG_TRACING
 extern void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, size_t size)
 				    __assume_slab_alignment __alloc_size(3);
 
-#ifdef CONFIG_NUMA
 extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
 					 int node, size_t size) __assume_slab_alignment
 								__alloc_size(4);
-#else
-static __always_inline __alloc_size(4) void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
-	gfp_t gfpflags, int node, size_t size)
-{
-	return kmem_cache_alloc_trace(s, gfpflags, size);
-}
-#endif /* CONFIG_NUMA */
 
 #else /* CONFIG_TRACING */
 static __always_inline __alloc_size(3) void *kmem_cache_alloc_trace(struct kmem_cache *s,
@@ -689,20 +669,12 @@ static inline __alloc_size(1, 2) void *kcalloc_node(size_t n, size_t size, gfp_t
 }
 
 
-#ifdef CONFIG_NUMA
 extern void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
 					 unsigned long caller) __alloc_size(1);
 #define kmalloc_node_track_caller(size, flags, node) \
 	__kmalloc_node_track_caller(size, flags, node, \
 				    _RET_IP_)
 
-#else /* CONFIG_NUMA */
-
-#define kmalloc_node_track_caller(size, flags, node) \
-	kmalloc_track_caller(size, flags)
-
-#endif /* CONFIG_NUMA */
-
 /*
  * Shortcuts
  */
diff --git a/mm/slab.c b/mm/slab.c
index f033d5b4fefb..5ad55ca96ab6 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3545,7 +3545,6 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
 #endif
 
-#ifdef CONFIG_NUMA
 /**
  * kmem_cache_alloc_node - Allocate an object on the specified node
  * @cachep: The cache to allocate from.
@@ -3619,7 +3618,6 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
 	return __do_kmalloc_node(size, flags, node, caller);
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
-#endif /* CONFIG_NUMA */
 
 #ifdef CONFIG_PRINTK
 void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
diff --git a/mm/slob.c b/mm/slob.c
index dfa6808dff36..c8c3b5662edf 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -534,14 +534,12 @@ void *__kmalloc_track_caller(size_t size, gfp_t gfp, unsigned long caller)
 }
 EXPORT_SYMBOL(__kmalloc_track_caller);
 
-#ifdef CONFIG_NUMA
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfp,
 				  int node, unsigned long caller)
 {
 	return __do_kmalloc_node(size, gfp, node, caller);
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
-#endif
 
 void kfree(const void *block)
 {
@@ -641,7 +639,7 @@ void *kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru, gfp_
 	return slob_alloc_node(cachep, flags, NUMA_NO_NODE);
 }
 EXPORT_SYMBOL(kmem_cache_alloc_lru);
-#ifdef CONFIG_NUMA
+
 void *__kmalloc_node(size_t size, gfp_t gfp, int node)
 {
 	return __do_kmalloc_node(size, gfp, node, _RET_IP_);
@@ -653,7 +651,6 @@ void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t gfp, int node)
 	return slob_alloc_node(cachep, gfp, node);
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node);
-#endif
 
 static void __kmem_cache_free(void *b, int size)
 {
diff --git a/mm/slub.c b/mm/slub.c
index d7e8355b2f08..e36c148e5069 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3260,7 +3260,6 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
 #endif
 
-#ifdef CONFIG_NUMA
 void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
 {
 	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, s->object_size);
@@ -3287,7 +3286,6 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
 #endif
-#endif /* CONFIG_NUMA */
 
 /*
  * Slow path handling. This may still be called frequently since objects
@@ -4424,7 +4422,6 @@ void *__kmalloc(size_t size, gfp_t flags)
 }
 EXPORT_SYMBOL(__kmalloc);
 
-#ifdef CONFIG_NUMA
 static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 {
 	struct page *page;
@@ -4471,7 +4468,6 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
-#endif /* CONFIG_NUMA */
 
 #ifdef CONFIG_HARDENED_USERCOPY
 /*
@@ -4929,7 +4925,6 @@ void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller)
 }
 EXPORT_SYMBOL(__kmalloc_track_caller);
 
-#ifdef CONFIG_NUMA
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 				  int node, unsigned long caller)
 {
@@ -4959,7 +4954,6 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
-#endif
 
 #ifdef CONFIG_SYSFS
 static int count_inuse(struct slab *slab)
-- 
2.32.0

From nobody Mon May 11 04:52:14 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Andrew Morton, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
 Roman Gushchin, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 04/23] mm/slab_common: cleanup kmalloc_track_caller()
Date: Thu, 14 Apr 2022 17:57:08 +0900
Message-Id: <20220414085727.643099-5-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Make kmalloc_track_caller() a wrapper of kmalloc_node_track_caller().
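[Editorial note, not part of the patch: a userspace toy of the caller-tracking pattern this patch consolidates. The names prefixed my_ are hypothetical; the kernel uses _RET_IP_ rather than __builtin_return_address() directly, and does real slab bookkeeping.]

```c
#include <assert.h>
#include <stdlib.h>

#define NUMA_NO_NODE (-1)

static void *caller_seen;   /* records the call site, like trace_kmalloc() would */

/* Toy analogue of __kmalloc_node_track_caller(): the one remaining
 * tracking entry point after this patch. */
static void *my_kmalloc_node_track_caller(size_t size, int node, void *caller)
{
    (void)node;             /* single-node toy: node is only a hint */
    caller_seen = caller;
    return malloc(size);
}

/* kmalloc_track_caller() analogue: now just the node-aware helper with
 * NUMA_NO_NODE, so no separate __kmalloc_track_caller() is needed. */
#define my_kmalloc_track_caller(size) \
    my_kmalloc_node_track_caller(size, NUMA_NO_NODE, \
                                 __builtin_return_address(0))
```

Because the macro expands at the call site, each caller passes its own return address, which is exactly why these must stay macros rather than functions.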
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Reviewed-by: Vlastimil Babka --- include/linux/slab.h | 17 ++++++++--------- mm/slab.c | 6 ------ mm/slob.c | 6 ------ mm/slub.c | 22 ---------------------- 4 files changed, 8 insertions(+), 43 deletions(-) diff --git a/include/linux/slab.h b/include/linux/slab.h index a3b9d4c20d7e..acdb4b7428f9 100644 --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -639,6 +639,12 @@ static inline __alloc_size(1, 2) void *kcalloc(size_t = n, size_t size, gfp_t flag return kmalloc_array(n, size, flags | __GFP_ZERO); } =20 +extern void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int nod= e, + unsigned long caller) __alloc_size(1); +#define kmalloc_node_track_caller(size, flags, node) \ + __kmalloc_node_track_caller(size, flags, node, \ + _RET_IP_) + /* * kmalloc_track_caller is a special version of kmalloc that records the * calling function of the routine calling it for slab leak tracking inste= ad @@ -647,9 +653,9 @@ static inline __alloc_size(1, 2) void *kcalloc(size_t n= , size_t size, gfp_t flag * allocator where we care about the real place the memory allocation * request comes from. 
*/ -extern void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned lon= g caller); #define kmalloc_track_caller(size, flags) \ - __kmalloc_track_caller(size, flags, _RET_IP_) + __kmalloc_node_track_caller(size, flags, \ + NUMA_NO_NODE, _RET_IP_) =20 static inline __alloc_size(1, 2) void *kmalloc_array_node(size_t n, size_t= size, gfp_t flags, int node) @@ -668,13 +674,6 @@ static inline __alloc_size(1, 2) void *kcalloc_node(si= ze_t n, size_t size, gfp_t return kmalloc_array_node(n, size, flags | __GFP_ZERO, node); } =20 - -extern void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int nod= e, - unsigned long caller) __alloc_size(1); -#define kmalloc_node_track_caller(size, flags, node) \ - __kmalloc_node_track_caller(size, flags, node, \ - _RET_IP_) - /* * Shortcuts */ diff --git a/mm/slab.c b/mm/slab.c index 5ad55ca96ab6..5f20efc7a330 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -3675,12 +3675,6 @@ void *__kmalloc(size_t size, gfp_t flags) } EXPORT_SYMBOL(__kmalloc); =20 -void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long calle= r) -{ - return __do_kmalloc(size, flags, caller); -} -EXPORT_SYMBOL(__kmalloc_track_caller); - /** * kmem_cache_free - Deallocate an object * @cachep: The cache the allocation was from. 
diff --git a/mm/slob.c b/mm/slob.c index c8c3b5662edf..6d0fc6ad1413 100644 --- a/mm/slob.c +++ b/mm/slob.c @@ -528,12 +528,6 @@ void *__kmalloc(size_t size, gfp_t gfp) } EXPORT_SYMBOL(__kmalloc); =20 -void *__kmalloc_track_caller(size_t size, gfp_t gfp, unsigned long caller) -{ - return __do_kmalloc_node(size, gfp, NUMA_NO_NODE, caller); -} -EXPORT_SYMBOL(__kmalloc_track_caller); - void *__kmalloc_node_track_caller(size_t size, gfp_t gfp, int node, unsigned long caller) { diff --git a/mm/slub.c b/mm/slub.c index e36c148e5069..e425c5c372de 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -4903,28 +4903,6 @@ int __kmem_cache_create(struct kmem_cache *s, slab_f= lags_t flags) return 0; } =20 -void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long ca= ller) -{ - struct kmem_cache *s; - void *ret; - - if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) - return kmalloc_large(size, gfpflags); - - s =3D kmalloc_slab(size, gfpflags); - - if (unlikely(ZERO_OR_NULL_PTR(s))) - return s; - - ret =3D slab_alloc(s, NULL, gfpflags, caller, size); - - /* Honor the call site pointer we received. 
 */
-	trace_kmalloc(caller, ret, size, s->size, gfpflags);
-
-	return ret;
-}
-EXPORT_SYMBOL(__kmalloc_track_caller);
-
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
				  int node, unsigned long caller)
 {
--
2.32.0

From nobody Mon May 11 04:52:14 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 05/23] mm/slab_common: cleanup __kmalloc()
Date: Thu, 14 Apr 2022 17:57:09 +0900
Message-Id: <20220414085727.643099-6-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Make __kmalloc() a wrapper of __kmalloc_node().
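The wrapper pattern this patch applies can be sketched in plain C: keep a single node-aware allocation path and turn the node-agnostic entry point into an always-inline wrapper that passes a "no preference" sentinel. This is a minimal userspace sketch; the `my_`-prefixed names and `MY_NO_NODE` are hypothetical stand-ins, not the kernel API.

```c
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for the kernel's NUMA_NO_NODE sentinel. */
#define MY_NO_NODE (-1)

/* The one real implementation: node == MY_NO_NODE means "no preference".
 * A real slab allocator would select a per-node cache here; this sketch
 * just records the request and forwards to malloc(). */
static void *my_kmalloc_node(size_t size, int node)
{
    fprintf(stderr, "alloc %zu bytes, node %d\n", size, node);
    return malloc(size);
}

/* The node-agnostic entry point shrinks to a trivial inline wrapper, so
 * each allocator maintains only one allocation body. */
static inline void *my_kmalloc(size_t size)
{
    return my_kmalloc_node(size, MY_NO_NODE);
}
```

The benefit mirrors the patch: callers of `my_kmalloc()` need no NUMA awareness, and any fix to the allocation logic lands in exactly one function.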
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 13 ++++++++++---
 mm/slab.c            | 34 ----------------------------------
 mm/slob.c            |  6 ------
 mm/slub.c            | 23 -----------------------
 4 files changed, 10 insertions(+), 66 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index acdb4b7428f9..4c06d15f731c 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -419,7 +419,16 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
 #define kmalloc_index(s) __kmalloc_index(s, true)
 #endif /* !CONFIG_SLOB */

-void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __alloc_size(1);
+extern void *__kmalloc_node(size_t size, gfp_t flags, int node)
+				__assume_kmalloc_alignment
+				__alloc_size(1);
+
+static __always_inline __alloc_size(1) __assume_kmalloc_alignment
+void *__kmalloc(size_t size, gfp_t flags)
+{
+	return __kmalloc_node(size, flags, NUMA_NO_NODE);
+}
+
 void *kmem_cache_alloc(struct kmem_cache *s, gfp_t flags) __assume_slab_alignment __malloc;
 void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
			   gfp_t gfpflags) __assume_slab_alignment __malloc;
@@ -444,8 +453,6 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 	kmem_cache_free_bulk(NULL, size, p);
 }

-void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment
-							 __alloc_size(1);
 void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment
									 __malloc;

diff --git a/mm/slab.c b/mm/slab.c
index 5f20efc7a330..db7eab9e2e9f 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3641,40 +3641,6 @@ void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 }
 #endif

-/**
- * __do_kmalloc - allocate memory
- * @size: how many bytes of memory are required.
- * @flags: the type of memory to allocate (see kmalloc).
- * @caller: function caller for debug tracking of the caller
- *
- * Return: pointer to the allocated memory or %NULL in case of error
- */
-static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
-					  unsigned long caller)
-{
-	struct kmem_cache *cachep;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return NULL;
-	cachep = kmalloc_slab(size, flags);
-	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
-		return cachep;
-	ret = slab_alloc(cachep, NULL, flags, size, caller);
-
-	ret = kasan_kmalloc(cachep, ret, size, flags);
-	trace_kmalloc(caller, ret,
-		      size, cachep->size, flags);
-
-	return ret;
-}
-
-void *__kmalloc(size_t size, gfp_t flags)
-{
-	return __do_kmalloc(size, flags, _RET_IP_);
-}
-EXPORT_SYMBOL(__kmalloc);
-
 /**
  * kmem_cache_free - Deallocate an object
  * @cachep: The cache the allocation was from.
diff --git a/mm/slob.c b/mm/slob.c
index 6d0fc6ad1413..ab67c8219e8d 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -522,12 +522,6 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
 	return ret;
 }

-void *__kmalloc(size_t size, gfp_t gfp)
-{
-	return __do_kmalloc_node(size, gfp, NUMA_NO_NODE, _RET_IP_);
-}
-EXPORT_SYMBOL(__kmalloc);
-
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfp, int node,
				  unsigned long caller)
 {
diff --git a/mm/slub.c b/mm/slub.c
index e425c5c372de..44170b4f084b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4399,29 +4399,6 @@ static int __init setup_slub_min_objects(char *str)

 __setup("slub_min_objects=", setup_slub_min_objects);

-void *__kmalloc(size_t size, gfp_t flags)
-{
-	struct kmem_cache *s;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return kmalloc_large(size, flags);
-
-	s = kmalloc_slab(size, flags);
-
-	if (unlikely(ZERO_OR_NULL_PTR(s)))
-		return s;
-
-	ret = slab_alloc(s, NULL, flags, _RET_IP_, size);
-
-	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
-
-	ret = kasan_kmalloc(s, ret, size, flags);
-
-	return ret;
-}
-EXPORT_SYMBOL(__kmalloc);
-
 static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 {
 	struct page *page;
--
2.32.0

From nobody Mon May 11 04:52:14 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 06/23] mm/sl[auo]b: fold kmalloc_order_trace() into kmalloc_large()
Date: Thu, 14 Apr 2022 17:57:10 +0900
Message-Id: <20220414085727.643099-7-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

There is no caller of kmalloc_order_trace() except kmalloc_large().
Fold it into kmalloc_large() and remove kmalloc_order{,_trace}().
Also add to kmalloc_large() the tracepoint that was previously in
kmalloc_order_trace().
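A side effect of the fold is that the order is no longer passed in by an inline wrapper: the out-of-line function derives it from `size` itself. A userspace model of that computation, assuming 4 KiB pages; `my_get_order` and `my_kmalloc_large_bytes` are hypothetical stand-ins for `get_order()` and the folded `kmalloc_large()` sizing, not the kernel helpers.

```c
#include <stddef.h>

#define MY_PAGE_SHIFT 12
#define MY_PAGE_SIZE  (1UL << MY_PAGE_SHIFT)   /* assumes 4 KiB pages */

/* Smallest order n such that (MY_PAGE_SIZE << n) covers `size`,
 * mirroring what get_order(size) computes for size >= 1. */
static unsigned int my_get_order(size_t size)
{
    unsigned int order = 0;
    size_t span = MY_PAGE_SIZE;

    while (span < size) {
        span <<= 1;
        order++;
    }
    return order;
}

/* After the fold, the large-allocation path computes the order itself
 * instead of receiving it as a parameter from an inline wrapper. */
static size_t my_kmalloc_large_bytes(size_t size)
{
    return MY_PAGE_SIZE << my_get_order(size);
}
```

For example, a 5000-byte request rounds up to one order-1 span (8192 bytes) under these assumptions.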
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---

Changes from v1:
- updated some changelog (kmalloc_order() -> kmalloc_order_trace())

 include/linux/slab.h | 22 ++--------------------
 mm/slab_common.c     | 14 +++-----------
 2 files changed, 5 insertions(+), 31 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 4c06d15f731c..6f6e22959b39 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -484,26 +484,8 @@ static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, g
 }
 #endif /* CONFIG_TRACING */

-extern void *kmalloc_order(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment
-									 __alloc_size(1);
-
-#ifdef CONFIG_TRACING
-extern void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
-				 __assume_page_alignment __alloc_size(1);
-#else
-static __always_inline __alloc_size(1) void *kmalloc_order_trace(size_t size, gfp_t flags,
-								 unsigned int order)
-{
-	return kmalloc_order(size, flags, order);
-}
-#endif
-
-static __always_inline __alloc_size(1) void *kmalloc_large(size_t size, gfp_t flags)
-{
-	unsigned int order = get_order(size);
-
-	return kmalloc_order_trace(size, flags, order);
-}
-
+extern void *kmalloc_large(size_t size, gfp_t flags) __assume_page_alignment
+						     __alloc_size(1);
 /**
  * kmalloc - allocate memory
  * @size: how many bytes of memory are required.
diff --git a/mm/slab_common.c b/mm/slab_common.c
index c4d63f2c78b8..308cd5449285 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -925,10 +925,11 @@ gfp_t kmalloc_fix_flags(gfp_t flags)
 * directly to the page allocator. We use __GFP_COMP, because we will need to
 * know the allocation order to free the pages properly in kfree.
 */
-void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
+void *kmalloc_large(size_t size, gfp_t flags)
 {
 	void *ret = NULL;
 	struct page *page;
+	unsigned int order = get_order(size);

 	if (unlikely(flags & GFP_SLAB_BUG_MASK))
 		flags = kmalloc_fix_flags(flags);
@@ -943,19 +944,10 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	ret = kasan_kmalloc_large(ret, size, flags);
 	/* As ret might get tagged, call kmemleak hook after KASAN. */
 	kmemleak_alloc(ret, size, 1, flags);
-	return ret;
-}
-EXPORT_SYMBOL(kmalloc_order);
-
-#ifdef CONFIG_TRACING
-void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
-{
-	void *ret = kmalloc_order(size, flags, order);
 	trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << order, flags);
 	return ret;
 }
-EXPORT_SYMBOL(kmalloc_order_trace);
-#endif
+EXPORT_SYMBOL(kmalloc_large);

 #ifdef CONFIG_SLAB_FREELIST_RANDOM
 /* Randomize a generic freelist */
--
2.32.0

From nobody Mon May 11 04:52:14 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 07/23] mm/slub: move kmalloc_large_node() to slab_common.c
Date: Thu, 14 Apr 2022 17:57:11 +0900
Message-Id: <20220414085727.643099-8-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

In a later patch, SLAB will also pass requests larger than an order-1
page to the page allocator. Move kmalloc_large_node() to slab_common.c.

Fold kmalloc_large_node_hook() into kmalloc_large_node() as there is
no other caller.
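Folding a single-caller hook is mechanical: the hook's body is inlined at its one call site and the helper deleted. A userspace sketch of the resulting shape, with `posix_memalign()` standing in for `alloc_pages_node()` and an alignment check standing in for the KASAN/kmemleak bookkeeping; all `my_`/`MY_` names are hypothetical.

```c
#define _POSIX_C_SOURCE 200112L
#include <stdint.h>
#include <stdlib.h>

#define MY_PAGE_SIZE 4096UL

/* One function now both allocates and performs the bookkeeping that the
 * single-caller hook used to do (in the kernel: kasan_kmalloc_large()
 * followed by kmemleak_alloc()). */
static void *my_large_alloc_node(size_t size)
{
    size_t span = MY_PAGE_SIZE;
    void *ptr = NULL;

    while (span < size)          /* round up to a power-of-two span */
        span <<= 1;

    if (posix_memalign(&ptr, MY_PAGE_SIZE, span))
        return NULL;

    /* Former hook body, folded in at the only call site: */
    if (((uintptr_t)ptr & (MY_PAGE_SIZE - 1)) != 0) {
        free(ptr);
        return NULL;
    }
    return ptr;
}
```

With the hook gone, there is a single function to move between files, which is what lets the patch relocate it to slab_common.c cleanly.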
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h |  3 +++
 mm/slab_common.c     | 22 ++++++++++++++++++++++
 mm/slub.c            | 25 -------------------------
 3 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 6f6e22959b39..97336acbebbf 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -486,6 +486,9 @@ static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, g

 extern void *kmalloc_large(size_t size, gfp_t flags) __assume_page_alignment
						      __alloc_size(1);
+
+extern void *kmalloc_large_node(size_t size, gfp_t flags, int node)
+			__assume_page_alignment __alloc_size(1);
 /**
  * kmalloc - allocate memory
  * @size: how many bytes of memory are required.
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 308cd5449285..e72089515030 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -949,6 +949,28 @@ void *kmalloc_large(size_t size, gfp_t flags)
 }
 EXPORT_SYMBOL(kmalloc_large);

+void *kmalloc_large_node(size_t size, gfp_t flags, int node)
+{
+	struct page *page;
+	void *ptr = NULL;
+	unsigned int order = get_order(size);
+
+	flags |= __GFP_COMP;
+	page = alloc_pages_node(node, flags, order);
+	if (page) {
+		ptr = page_address(page);
+		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+				      PAGE_SIZE << order);
+	}
+
+	ptr = kasan_kmalloc_large(ptr, size, flags);
+	/* As ptr might get tagged, call kmemleak hook after KASAN.
 */
+	kmemleak_alloc(ptr, size, 1, flags);
+
+	return ptr;
+}
+EXPORT_SYMBOL(kmalloc_large_node);
+
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
 /* Randomize a generic freelist */
 static void freelist_randomize(struct rnd_state *state, unsigned int *list,
diff --git a/mm/slub.c b/mm/slub.c
index 44170b4f084b..640712706f2b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1679,14 +1679,6 @@ static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab,
 * Hooks for other subsystems that check memory allocations. In a typical
 * production configuration these hooks all should produce no code at all.
 */
-static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
-{
-	ptr = kasan_kmalloc_large(ptr, size, flags);
-	/* As ptr might get tagged, call kmemleak hook after KASAN. */
-	kmemleak_alloc(ptr, size, 1, flags);
-	return ptr;
-}
-
 static __always_inline void kfree_hook(void *x)
 {
 	kmemleak_free(x);
@@ -4399,23 +4391,6 @@ static int __init setup_slub_min_objects(char *str)

 __setup("slub_min_objects=", setup_slub_min_objects);

-static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
-{
-	struct page *page;
-	void *ptr = NULL;
-	unsigned int order = get_order(size);
-
-	flags |= __GFP_COMP;
-	page = alloc_pages_node(node, flags, order);
-	if (page) {
-		ptr = page_address(page);
-		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
-				      PAGE_SIZE << order);
-	}
-
-	return kmalloc_large_node_hook(ptr, size, flags);
-}
-
 void *__kmalloc_node(size_t size, gfp_t flags, int node)
 {
 	struct kmem_cache *s;
--
2.32.0

From nobody Mon May 11 04:52:14 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 08/23] mm/slab_common: make kmalloc_large_node() consistent with kmalloc_large()
Date: Thu, 14 Apr 2022 17:57:12 +0900
Message-Id: <20220414085727.643099-9-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Move the tracepoints into kmalloc_large_node() and add the missing
flag-fixing code.
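Centralizing the flag fix and the tracepoint in the callee is what lets every caller collapse to a tail call, as the diff below shows. A rough userspace sketch of that control flow; the `MY_` flag bits are hypothetical stand-ins for `GFP_SLAB_BUG_MASK`, and `fprintf` stands in for `trace_kmalloc_node()`.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical "invalid" flag bits, standing in for GFP_SLAB_BUG_MASK. */
#define MY_FLAG_BUG_MASK 0xF0u
#define MY_FLAG_VALID    0x0Fu

/* The callee now owns both the flag sanitizing and the tracing... */
static void *my_large_alloc_node(size_t size, unsigned int flags, int node)
{
    void *ptr;

    if (flags & MY_FLAG_BUG_MASK)      /* the formerly missing flag fix */
        flags &= MY_FLAG_VALID;

    ptr = malloc(size);
    /* ...so the trace fires once, here, not at every call site. */
    fprintf(stderr, "trace: size=%zu flags=%#x node=%d\n",
            size, flags, node);
    return ptr;
}

/* Callers like __kmalloc_node() reduce to a tail call for large sizes. */
static void *my_alloc_node(size_t size, unsigned int flags, int node)
{
    const size_t MY_MAX_CACHE_SIZE = 8192;   /* stand-in threshold */

    if (size > MY_MAX_CACHE_SIZE)
        return my_large_alloc_node(size, flags, node);
    return malloc(size);   /* normal slab path elided in this sketch */
}
```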
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/slab_common.c |  6 ++++++
 mm/slub.c        | 22 ++++------------------
 2 files changed, 10 insertions(+), 18 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index e72089515030..cf17be8cd9ad 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -955,6 +955,9 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 	void *ptr = NULL;
 	unsigned int order = get_order(size);

+	if (unlikely(flags & GFP_SLAB_BUG_MASK))
+		flags = kmalloc_fix_flags(flags);
+
 	flags |= __GFP_COMP;
 	page = alloc_pages_node(node, flags, order);
 	if (page) {
@@ -966,6 +969,9 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 	ptr = kasan_kmalloc_large(ptr, size, flags);
 	/* As ptr might get tagged, call kmemleak hook after KASAN. */
 	kmemleak_alloc(ptr, size, 1, flags);
+	trace_kmalloc_node(_RET_IP_, ptr,
+			   size, PAGE_SIZE << order,
+			   flags, node);

 	return ptr;
 }
diff --git a/mm/slub.c b/mm/slub.c
index 640712706f2b..f10a892f1772 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4396,15 +4396,8 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 	struct kmem_cache *s;
 	void *ret;

-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
-		ret = kmalloc_large_node(size, flags, node);
-
-		trace_kmalloc_node(_RET_IP_, ret,
-				   size, PAGE_SIZE << get_order(size),
-				   flags, node);
-
-		return ret;
-	}
+	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
+		return kmalloc_large_node(size, flags, node);

 	s = kmalloc_slab(size, flags);

@@ -4861,15 +4854,8 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 	struct kmem_cache *s;
 	void *ret;

-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
-		ret = kmalloc_large_node(size, gfpflags, node);
-
-		trace_kmalloc_node(caller, ret,
-				   size, PAGE_SIZE << get_order(size),
-				   gfpflags, node);
-
-		return ret;
-	}
+	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
+		return kmalloc_large_node(size, gfpflags, node);

 	s = kmalloc_slab(size, gfpflags);
--
2.32.0

From nobody Mon May 11 04:52:14 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 09/23] mm/slab_common: cleanup kmalloc_large()
Date: Thu, 14 Apr 2022 17:57:13 +0900
Message-Id: <20220414085727.643099-10-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Now that kmalloc_large() and kmalloc_large_node() do the same job,
make kmalloc_large() a wrapper of kmalloc_large_node().
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/linux/slab.h |  9 ++++++---
 mm/slab_common.c     | 24 ------------------------
 2 files changed, 6 insertions(+), 27 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 97336acbebbf..143830f57a7f 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -484,11 +484,14 @@ static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, g
 }
 #endif /* CONFIG_TRACING */

-extern void *kmalloc_large(size_t size, gfp_t flags) __assume_page_alignment
-						     __alloc_size(1);
-
 extern void *kmalloc_large_node(size_t size, gfp_t flags, int node)
			__assume_page_alignment __alloc_size(1);
+
+static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
+{
+	return kmalloc_large_node(size, flags, NUMA_NO_NODE);
+}
+
 /**
  * kmalloc - allocate memory
  * @size: how many bytes of memory are required.
diff --git a/mm/slab_common.c b/mm/slab_common.c
index cf17be8cd9ad..30684efc89d7 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -925,30 +925,6 @@ gfp_t kmalloc_fix_flags(gfp_t flags)
 * directly to the page allocator. We use __GFP_COMP, because we will need to
 * know the allocation order to free the pages properly in kfree.
 */
-void *kmalloc_large(size_t size, gfp_t flags)
-{
-	void *ret = NULL;
-	struct page *page;
-	unsigned int order = get_order(size);
-
-	if (unlikely(flags & GFP_SLAB_BUG_MASK))
-		flags = kmalloc_fix_flags(flags);
-
-	flags |= __GFP_COMP;
-	page = alloc_pages(flags, order);
-	if (likely(page)) {
-		ret = page_address(page);
-		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
-				      PAGE_SIZE << order);
-	}
-	ret = kasan_kmalloc_large(ret, size, flags);
-	/* As ret might get tagged, call kmemleak hook after KASAN.
*/ - kmemleak_alloc(ret, size, 1, flags); - trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << order, flags); - return ret; -} -EXPORT_SYMBOL(kmalloc_large); - void *kmalloc_large_node(size_t size, gfp_t flags, int node) { struct page *page; --=20 2.32.0 From nobody Mon May 11 04:52:14 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 056B0C433EF for ; Thu, 14 Apr 2022 08:59:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241376AbiDNJBb (ORCPT ); Thu, 14 Apr 2022 05:01:31 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44420 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241413AbiDNJBM (ORCPT ); Thu, 14 Apr 2022 05:01:12 -0400 Received: from mail-pl1-x629.google.com (mail-pl1-x629.google.com [IPv6:2607:f8b0:4864:20::629]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E6F276AA70 for ; Thu, 14 Apr 2022 01:58:41 -0700 (PDT) Received: by mail-pl1-x629.google.com with SMTP id t12so4118531pll.7 for ; Thu, 14 Apr 2022 01:58:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=+ZJfCekWiq8F6LeclLH2EX7WmH3vxid4PPYVPgU+XNY=; b=PyS26MG52EA2v+5qrmXqDD8CMoZcoJx9HU6Uai/EzLFXfEPQlmK2xSu8EsFyKnzVSW 3vRReMigOKnc98w8pNi8tKC54o+q14hAA8NwSV3DbvPKBYWyRDFoE+ebTbuTSchT0Zx9 4P83K+mYXIxDW9x2YNJ4sBykq30udXIWXw658qVKKmkARp591P0alHbjbtvxESJOoMYd Sgy4svqTChxrEFHi5x5fG11cpzi83w4dPdY0sZkCidu/P47e7SDNo3ufaA4JaYrgw2s6 2RUtiVm5F6/lyqQB2PEZXTAFvrjq67heL6RXDPqkt3+DHuu4et/MFIEoMhk/ZhFSf3kd yXxQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to 
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Subject: [PATCH v2 10/23] mm/slab_common: cleanup kmem_cache_alloc{,node,lru}
Date: Thu, 14 Apr 2022 17:57:14 +0900
Message-Id: <20220414085727.643099-11-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Implement only __kmem_cache_alloc_node() in the slab allocators and make
kmem_cache_alloc{,_node,_lru} wrappers of it.
Now that kmem_cache_alloc{,_node,_lru} are inline functions, we should use
_THIS_IP_ instead of _RET_IP_ for consistency.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 52 ++++++++++++++++++++++++++++++++-----
 mm/slab.c            | 61 +++++---------------------------------------
 mm/slob.c            | 27 ++++++--------------
 mm/slub.c            | 35 +++++--------------------
 4 files changed, 67 insertions(+), 108 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 143830f57a7f..1b5bdcb0fd31 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -429,9 +429,52 @@ void *__kmalloc(size_t size, gfp_t flags)
 	return __kmalloc_node(size, flags, NUMA_NO_NODE);
 }
 
-void *kmem_cache_alloc(struct kmem_cache *s, gfp_t flags) __assume_slab_alignment __malloc;
-void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
-			   gfp_t gfpflags) __assume_slab_alignment __malloc;
+
+void *__kmem_cache_alloc_node(struct kmem_cache *s, struct list_lru *lru,
+			      gfp_t gfpflags, int node, unsigned long caller __maybe_unused)
+			      __assume_slab_alignment __malloc;
+
+/**
+ * kmem_cache_alloc - Allocate an object
+ * @cachep: The cache to allocate from.
+ * @flags: See kmalloc().
+ *
+ * Allocate an object from this cache.  The flags are only relevant
+ * if the cache has no available objects.
+ *
+ * Return: pointer to the new object or %NULL in case of error
+ */
+static __always_inline __malloc
+void *kmem_cache_alloc(struct kmem_cache *s, gfp_t flags)
+{
+	return __kmem_cache_alloc_node(s, NULL, flags, NUMA_NO_NODE, _THIS_IP_);
+}
+
+/**
+ * kmem_cache_alloc_node - Allocate an object on the specified node
+ * @s: The cache to allocate from.
+ * @flags: See kmalloc().
+ * @node: node number of the target node.
+ *
+ * Identical to kmem_cache_alloc but it will allocate memory on the given
+ * node, which can improve the performance for cpu bound structures.
+ *
+ * Fallback to other node is possible if __GFP_THISNODE is not set.
+ *
+ * Return: pointer to the new object or %NULL in case of error
+ */
+static __always_inline __malloc
+void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node)
+{
+	return __kmem_cache_alloc_node(s, NULL, flags, node, _THIS_IP_);
+}
+
+static __always_inline __malloc
+void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru, gfp_t gfpflags)
+{
+	return __kmem_cache_alloc_node(s, lru, gfpflags, NUMA_NO_NODE, _THIS_IP_);
+}
+
 void kmem_cache_free(struct kmem_cache *s, void *objp);
 
 /*
@@ -453,9 +496,6 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 	kmem_cache_free_bulk(NULL, size, p);
 }
 
-void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment
-		__malloc;
-
 #ifdef CONFIG_TRACING
 extern void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, size_t size)
 		__assume_slab_alignment __alloc_size(3);

diff --git a/mm/slab.c b/mm/slab.c
index db7eab9e2e9f..c5ffe54c207a 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3442,40 +3442,18 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
 	__free_one(ac, objp);
 }
 
-static __always_inline
-void *__kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru,
-			     gfp_t flags)
+void *__kmem_cache_alloc_node(struct kmem_cache *cachep, struct list_lru *lru,
+			      gfp_t flags, int nodeid, unsigned long caller)
 {
-	void *ret = slab_alloc(cachep, lru, flags, cachep->object_size, _RET_IP_);
+	void *ret = slab_alloc_node(cachep, lru, flags, nodeid,
+				    cachep->object_size, caller);
 
-	trace_kmem_cache_alloc(_RET_IP_, ret,
-			       cachep->object_size, cachep->size, flags);
+	trace_kmem_cache_alloc_node(caller, ret, cachep->object_size,
+				    cachep->size, flags, nodeid);
 
 	return ret;
 }
-
-/**
- * kmem_cache_alloc - Allocate an object
- * @cachep: The cache to allocate from.
- * @flags: See kmalloc().
- *
- * Allocate an object from this cache.  The flags are only relevant
- * if the cache has no available objects.
- *
- * Return: pointer to the new object or %NULL in case of error
- */
-void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
-{
-	return __kmem_cache_alloc_lru(cachep, NULL, flags);
-}
-EXPORT_SYMBOL(kmem_cache_alloc);
-
-void *kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru,
-			   gfp_t flags)
-{
-	return __kmem_cache_alloc_lru(cachep, lru, flags);
-}
-EXPORT_SYMBOL(kmem_cache_alloc_lru);
+EXPORT_SYMBOL(__kmem_cache_alloc_node);
 
 static __always_inline void
 cache_alloc_debugcheck_after_bulk(struct kmem_cache *s, gfp_t flags,
@@ -3545,31 +3523,6 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
 #endif
 
-/**
- * kmem_cache_alloc_node - Allocate an object on the specified node
- * @cachep: The cache to allocate from.
- * @flags: See kmalloc().
- * @nodeid: node number of the target node.
- *
- * Identical to kmem_cache_alloc but it will allocate memory on the given
- * node, which can improve the performance for cpu bound structures.
- *
- * Fallback to other node is possible if __GFP_THISNODE is not set.
- *
- * Return: pointer to the new object or %NULL in case of error
- */
-void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
-{
-	void *ret = slab_alloc_node(cachep, NULL, flags, nodeid, cachep->object_size, _RET_IP_);
-
-	trace_kmem_cache_alloc_node(_RET_IP_, ret,
-				    cachep->object_size, cachep->size,
-				    flags, nodeid);
-
-	return ret;
-}
-EXPORT_SYMBOL(kmem_cache_alloc_node);
-
 #ifdef CONFIG_TRACING
 void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 				  gfp_t flags,

diff --git a/mm/slob.c b/mm/slob.c
index ab67c8219e8d..6c7c30845056 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -586,7 +586,8 @@ int __kmem_cache_create(struct kmem_cache *c, slab_flags_t flags)
 	return 0;
 }
 
-static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
+static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node,
+			     unsigned long caller)
 {
 	void *b;
 
@@ -596,12 +597,12 @@ static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
 
 	if (c->size < PAGE_SIZE) {
 		b = slob_alloc(c->size, flags, c->align, node, 0);
-		trace_kmem_cache_alloc_node(_RET_IP_, b, c->object_size,
+		trace_kmem_cache_alloc_node(caller, b, c->object_size,
 					    SLOB_UNITS(c->size) * SLOB_UNIT,
 					    flags, node);
 	} else {
 		b = slob_new_pages(flags, get_order(c->size), node);
-		trace_kmem_cache_alloc_node(_RET_IP_, b, c->object_size,
+		trace_kmem_cache_alloc_node(caller, b, c->object_size,
 					    PAGE_SIZE << get_order(c->size),
 					    flags, node);
 	}
@@ -615,30 +616,18 @@ static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
 	return b;
 }
 
-void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
-{
-	return slob_alloc_node(cachep, flags, NUMA_NO_NODE);
-}
-EXPORT_SYMBOL(kmem_cache_alloc);
-
-void *kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags)
-{
-	return slob_alloc_node(cachep, flags, NUMA_NO_NODE);
-}
-EXPORT_SYMBOL(kmem_cache_alloc_lru);
-
 void *__kmalloc_node(size_t size, gfp_t gfp, int node)
 {
 	return __do_kmalloc_node(size, gfp, node, _RET_IP_);
 }
 EXPORT_SYMBOL(__kmalloc_node);
 
-void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t gfp, int node)
+void *__kmem_cache_alloc_node(struct kmem_cache *cachep, struct list_lru *lru __maybe_unused,
+			      gfp_t gfp, int node, unsigned long caller __maybe_unused)
 {
-	return slob_alloc_node(cachep, gfp, node);
+	return slob_alloc_node(cachep, gfp, node, caller);
 }
-EXPORT_SYMBOL(kmem_cache_alloc_node);
+EXPORT_SYMBOL(__kmem_cache_alloc_node);
 
 static void __kmem_cache_free(void *b, int size)
 {

diff --git a/mm/slub.c b/mm/slub.c
index f10a892f1772..2a2be2a8a5d0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3216,30 +3216,6 @@ static __always_inline void *slab_alloc(struct kmem_cache *s, struct list_lru *l
 	return slab_alloc_node(s, lru, gfpflags, NUMA_NO_NODE, addr, orig_size);
 }
 
-static __always_inline
-void *__kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
-			     gfp_t gfpflags)
-{
-	void *ret = slab_alloc(s, lru, gfpflags, _RET_IP_, s->object_size);
-
-	trace_kmem_cache_alloc(_RET_IP_, ret, s->object_size,
-			       s->size, gfpflags);
-
-	return ret;
-}
-
-void *kmem_cache_alloc(struct kmem_cache *s, gfp_t gfpflags)
-{
-	return __kmem_cache_alloc_lru(s, NULL, gfpflags);
-}
-EXPORT_SYMBOL(kmem_cache_alloc);
-
-void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
-			   gfp_t gfpflags)
-{
-	return __kmem_cache_alloc_lru(s, lru, gfpflags);
-}
-EXPORT_SYMBOL(kmem_cache_alloc_lru);
 
 #ifdef CONFIG_TRACING
 void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
@@ -3252,16 +3228,17 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
 #endif
 
-void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
+void *__kmem_cache_alloc_node(struct kmem_cache *s, struct list_lru *lru, gfp_t gfpflags,
+			      int node, unsigned long caller __maybe_unused)
 {
-	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, s->object_size);
+	void *ret = slab_alloc_node(s, lru, gfpflags, node, caller, s->object_size);
 
-	trace_kmem_cache_alloc_node(_RET_IP_, ret,
-				    s->object_size, s->size, gfpflags, node);
+	trace_kmem_cache_alloc_node(caller, ret, s->object_size,
				    s->size, gfpflags, node);
 
 	return ret;
 }
-EXPORT_SYMBOL(kmem_cache_alloc_node);
+EXPORT_SYMBOL(__kmem_cache_alloc_node);
 
 #ifdef CONFIG_TRACING
 void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
-- 
2.32.0
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Subject: [PATCH v2 11/23] mm/slab_common: kmalloc_node: pass large requests to page allocator
Date: Thu, 14 Apr 2022 17:57:15 +0900
Message-Id: <20220414085727.643099-12-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Now that kmalloc_large_node() is in common code, pass large requests
to the page allocator in kmalloc_node() using kmalloc_large_node().
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 1b5bdcb0fd31..eb457f20f415 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -608,23 +608,35 @@ static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
 	return __kmalloc(size, flags);
 }
 
+#ifndef CONFIG_SLOB
 static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
 {
-#ifndef CONFIG_SLOB
-	if (__builtin_constant_p(size) &&
-	    size <= KMALLOC_MAX_CACHE_SIZE) {
-		unsigned int i = kmalloc_index(size);
+	if (__builtin_constant_p(size)) {
+		unsigned int index;
 
-		if (!i)
+		if (size > KMALLOC_MAX_CACHE_SIZE)
+			return kmalloc_large_node(size, flags, node);
+
+		index = kmalloc_index(size);
+
+		if (!index)
 			return ZERO_SIZE_PTR;
 
 		return kmem_cache_alloc_node_trace(
-				kmalloc_caches[kmalloc_type(flags)][i],
+				kmalloc_caches[kmalloc_type(flags)][index],
 				flags, node, size);
 	}
-#endif
 	return __kmalloc_node(size, flags, node);
 }
+#else
+static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
+		return kmalloc_large_node(size, flags, node);
+
+	return __kmalloc_node(size, flags, node);
+}
+#endif
 
 /**
  * kmalloc_array - allocate memory for an array.
-- 
2.32.0

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Subject: [PATCH v2 12/23] mm/slab_common: cleanup kmalloc()
Date: Thu, 14 Apr 2022 17:57:16 +0900
Message-Id: <20220414085727.643099-13-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Now that kmalloc() and kmalloc_node() do the same job, make kmalloc()
a wrapper of kmalloc_node(). Remove kmem_cache_alloc_trace(), which is
now unused.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 93 +++++++++++++++-----------------------------
 mm/slab.c            | 16 --------
 mm/slub.c            | 12 ------
 3 files changed, 32 insertions(+), 89 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index eb457f20f415..ea168f8a248d 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -497,23 +497,10 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 }
 
 #ifdef CONFIG_TRACING
-extern void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, size_t size)
-		__assume_slab_alignment __alloc_size(3);
-
 extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
 					 int node, size_t size) __assume_slab_alignment
 					 __alloc_size(4);
-
 #else /* CONFIG_TRACING */
-static __always_inline __alloc_size(3) void *kmem_cache_alloc_trace(struct kmem_cache *s,
-		gfp_t flags, size_t size)
-{
-	void *ret = kmem_cache_alloc(s, flags);
-
-	ret = kasan_kmalloc(s, ret, size, flags);
-	return ret;
-}
-
 static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 		gfp_t gfpflags, int node, size_t size)
 {
@@ -532,6 +519,37 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
 	return kmalloc_large_node(size, flags, NUMA_NO_NODE);
 }
 
+#ifndef CONFIG_SLOB
+static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	if (__builtin_constant_p(size)) {
+		unsigned int index;
+
+		if (size > KMALLOC_MAX_CACHE_SIZE)
+			return kmalloc_large_node(size, flags, node);
+
+		index = kmalloc_index(size);
+
+		if (!index)
+			return ZERO_SIZE_PTR;
+
+		return kmem_cache_alloc_node_trace(
+				kmalloc_caches[kmalloc_type(flags)][index],
+				flags, node, size);
+	}
+	return __kmalloc_node(size, flags, node);
+}
+#else
+static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
+		return kmalloc_large_node(size, flags, node);
+
+	return __kmalloc_node(size, flags, node);
+}
+#endif
+
+
 /**
  * kmalloc - allocate memory
  * @size: how many bytes of memory are required.
@@ -588,55 +606,8 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
  */
 static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
 {
-	if (__builtin_constant_p(size)) {
-#ifndef CONFIG_SLOB
-		unsigned int index;
-#endif
-		if (size > KMALLOC_MAX_CACHE_SIZE)
-			return kmalloc_large(size, flags);
-#ifndef CONFIG_SLOB
-		index = kmalloc_index(size);
-
-		if (!index)
-			return ZERO_SIZE_PTR;
-
-		return kmem_cache_alloc_trace(
-				kmalloc_caches[kmalloc_type(flags)][index],
-				flags, size);
-#endif
-	}
-	return __kmalloc(size, flags);
-}
-
-#ifndef CONFIG_SLOB
-static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	if (__builtin_constant_p(size)) {
-		unsigned int index;
-
-		if (size > KMALLOC_MAX_CACHE_SIZE)
-			return kmalloc_large_node(size, flags, node);
-
-		index = kmalloc_index(size);
-
-		if (!index)
-			return ZERO_SIZE_PTR;
-
-		return kmem_cache_alloc_node_trace(
-				kmalloc_caches[kmalloc_type(flags)][index],
-				flags, node, size);
-	}
-	return __kmalloc_node(size, flags, node);
-}
-#else
-static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
-		return kmalloc_large_node(size, flags, node);
-
-	return __kmalloc_node(size, flags, node);
+	return kmalloc_node(size, flags, NUMA_NO_NODE);
 }
-#endif
 
 /**
  * kmalloc_array - allocate memory for an array.

diff --git a/mm/slab.c b/mm/slab.c
index c5ffe54c207a..b0aaca017f42 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3507,22 +3507,6 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_bulk);
 
-#ifdef CONFIG_TRACING
-void *
-kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
-{
-	void *ret;
-
-	ret = slab_alloc(cachep, NULL, flags, size, _RET_IP_);
-
-	ret = kasan_kmalloc(cachep, ret, size, flags);
-	trace_kmalloc(_RET_IP_, ret,
-		      size, cachep->size, flags);
-	return ret;
-}
-EXPORT_SYMBOL(kmem_cache_alloc_trace);
-#endif
-
 #ifdef CONFIG_TRACING
 void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 				  gfp_t flags,

diff --git a/mm/slub.c b/mm/slub.c
index 2a2be2a8a5d0..892988990da7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3216,18 +3216,6 @@ static __always_inline void *slab_alloc(struct kmem_cache *s, struct list_lru *l
 	return slab_alloc_node(s, lru, gfpflags, NUMA_NO_NODE, addr, orig_size);
 }
 
-
-#ifdef CONFIG_TRACING
-void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
-{
-	void *ret = slab_alloc(s, NULL, gfpflags, _RET_IP_, size);
-	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
-	ret = kasan_kmalloc(s, ret, size, gfpflags);
-	return ret;
-}
-EXPORT_SYMBOL(kmem_cache_alloc_trace);
-#endif
-
 void *__kmem_cache_alloc_node(struct kmem_cache *s, struct list_lru *lru, gfp_t gfpflags,
 			      int node, unsigned long caller __maybe_unused)
 {
-- 
2.32.0
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Subject: [PATCH v2 13/23] mm/slab: kmalloc: pass requests larger than order-1 page to page allocator
Date: Thu, 14 Apr 2022 17:57:17 +0900
Message-Id: <20220414085727.643099-14-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>

There is not much benefit to serving large objects from kmalloc().
Let's pass large requests to the page allocator like SLUB does, for
better maintenance of common code.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
Changes from previous series (thanks to Vlastimil):
- Disable/enable irqs around free_large_kmalloc()
- Do not lose NUMA locality in __do_kmalloc
- Some style fixes (use slab->slab_cache instead of virt_to_cache)
- Remove unsupported sizes in __kmalloc_index

Changes from v1:
- Instead of defining a variable x, just cast to (void *) while calling
  free_large_kmalloc().
 include/linux/slab.h | 23 +++++------------------
 mm/slab.c            | 44 ++++++++++++++++++++++++++++++--------------
 mm/slab.h            |  3 +++
 mm/slab_common.c     | 25 ++++++++++++++++++-------
 mm/slub.c            | 19 -------------------
 5 files changed, 56 insertions(+), 58 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index ea168f8a248d..c8c82087c3f9 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -231,27 +231,17 @@ void kmem_dump_obj(void *object);
 
 #ifdef CONFIG_SLAB
 /*
- * The largest kmalloc size supported by the SLAB allocators is
- * 32 megabyte (2^25) or the maximum allocatable page order if that is
- * less than 32 MB.
- *
- * WARNING: Its not easy to increase this value since the allocators have
- * to do various tricks to work around compiler limitations in order to
- * ensure proper constant folding.
+ * SLAB and SLUB directly allocates requests fitting in to an order-1 page
+ * (PAGE_SIZE*2). Larger requests are passed to the page allocator.
  */
-#define KMALLOC_SHIFT_HIGH	((MAX_ORDER + PAGE_SHIFT - 1) <= 25 ? \
-				(MAX_ORDER + PAGE_SHIFT - 1) : 25)
-#define KMALLOC_SHIFT_MAX	KMALLOC_SHIFT_HIGH
+#define KMALLOC_SHIFT_HIGH	(PAGE_SHIFT + 1)
+#define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT - 1)
 #ifndef KMALLOC_SHIFT_LOW
 #define KMALLOC_SHIFT_LOW	5
 #endif
 #endif
 
 #ifdef CONFIG_SLUB
-/*
- * SLUB directly allocates requests fitting in to an order-1 page
- * (PAGE_SIZE*2). Larger requests are passed to the page allocator.
- */
 #define KMALLOC_SHIFT_HIGH	(PAGE_SHIFT + 1)
 #define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT - 1)
 #ifndef KMALLOC_SHIFT_LOW
@@ -403,10 +393,6 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
 	if (size <= 512 * 1024) return 19;
 	if (size <= 1024 * 1024) return 20;
 	if (size <= 2 * 1024 * 1024) return 21;
-	if (size <= 4 * 1024 * 1024) return 22;
-	if (size <= 8 * 1024 * 1024) return 23;
-	if (size <= 16 * 1024 * 1024) return 24;
-	if (size <= 32 * 1024 * 1024) return 25;
 
 	if (!IS_ENABLED(CONFIG_PROFILE_ALL_BRANCHES) && size_is_constant)
 		BUILD_BUG_ON_MSG(1, "unexpected size in kmalloc_index()");
@@ -416,6 +402,7 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
 	/* Will never be reached. Needed because the compiler may complain */
 	return -1;
 }
+static_assert(PAGE_SHIFT <= 20);
 #define kmalloc_index(s) __kmalloc_index(s, true)
 #endif /* !CONFIG_SLOB */
 
diff --git a/mm/slab.c b/mm/slab.c
index b0aaca017f42..1dfe0f9d5882 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3533,7 +3533,7 @@ __do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
 	void *ret;
 
 	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return NULL;
+		return kmalloc_large_node(size, flags, node);
 	cachep = kmalloc_slab(size, flags);
 	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
 		return cachep;
@@ -3607,15 +3607,25 @@
 void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
 {
 	struct kmem_cache *s;
 	size_t i;
+	struct folio *folio;
 
 	local_irq_disable();
 	for (i = 0; i < size; i++) {
 		void *objp = p[i];
 
-		if (!orig_s) /* called via kfree_bulk */
-			s = virt_to_cache(objp);
-		else
+		if (!orig_s) {
+			folio = virt_to_folio(objp);
+			/* called via kfree_bulk */
+			if (!folio_test_slab(folio)) {
+				local_irq_enable();
+				free_large_kmalloc(folio, objp);
+				local_irq_disable();
+				continue;
+			}
+			s = folio_slab(folio)->slab_cache;
+		} else
 			s = cache_from_obj(orig_s, objp);
+
 		if (!s)
 			continue;
 
@@ -3644,20 +3654,24 @@
 void kfree(const void *objp)
 {
 	struct kmem_cache *c;
 	unsigned long flags;
+	struct folio *folio;
 
 	trace_kfree(_RET_IP_, objp);
 
 	if (unlikely(ZERO_OR_NULL_PTR(objp)))
 		return;
-	local_irq_save(flags);
-	kfree_debugcheck(objp);
-	c = virt_to_cache(objp);
-	if (!c) {
-		local_irq_restore(flags);
+
+	folio = virt_to_folio(objp);
+	if (!folio_test_slab(folio)) {
+		free_large_kmalloc(folio, (void *)objp);
 		return;
 	}
-	debug_check_no_locks_freed(objp, c->object_size);
 
+	c = folio_slab(folio)->slab_cache;
+
+	local_irq_save(flags);
+	kfree_debugcheck(objp);
+	debug_check_no_locks_freed(objp, c->object_size);
 	debug_check_no_obj_freed(objp, c->object_size);
 	__cache_free(c, (void *)objp, _RET_IP_);
 	local_irq_restore(flags);
@@ -4079,15 +4093,17 @@ void __check_heap_object(const void *ptr, unsigned long n,
 size_t __ksize(const void *objp)
 {
 	struct kmem_cache *c;
-	size_t size;
+	struct folio *folio;
 
 	BUG_ON(!objp);
 	if (unlikely(objp == ZERO_SIZE_PTR))
 		return 0;
 
-	c = virt_to_cache(objp);
-	size = c ? c->object_size : 0;
+	folio = virt_to_folio(objp);
+	if (!folio_test_slab(folio))
+		return folio_size(folio);
 
-	return size;
+	c = folio_slab(folio)->slab_cache;
+	return c->object_size;
 }
 EXPORT_SYMBOL(__ksize);
diff --git a/mm/slab.h b/mm/slab.h
index f7d018100994..b864c5bc4c25 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -681,6 +681,9 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
 	print_tracking(cachep, x);
 	return cachep;
 }
+
+void free_large_kmalloc(struct folio *folio, void *object);
+
 #endif /* CONFIG_SLOB */
 
 static inline size_t slab_ksize(const struct kmem_cache *s)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 30684efc89d7..960cc07c3a91 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -764,8 +764,8 @@ struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags)
 
 /*
  * kmalloc_info[] is to make slub_debug=,kmalloc-xx option work at boot time.
- * kmalloc_index() supports up to 2^25=32MB, so the final entry of the table is
- * kmalloc-32M.
+ * kmalloc_index() supports up to 2^21=2MB, so the final entry of the table is
+ * kmalloc-2M.
  */
 const struct kmalloc_info_struct kmalloc_info[] __initconst = {
 	INIT_KMALLOC_INFO(0, 0),
@@ -789,11 +789,7 @@ const struct kmalloc_info_struct kmalloc_info[] __initconst = {
 	INIT_KMALLOC_INFO(262144, 256k),
 	INIT_KMALLOC_INFO(524288, 512k),
 	INIT_KMALLOC_INFO(1048576, 1M),
-	INIT_KMALLOC_INFO(2097152, 2M),
-	INIT_KMALLOC_INFO(4194304, 4M),
-	INIT_KMALLOC_INFO(8388608, 8M),
-	INIT_KMALLOC_INFO(16777216, 16M),
-	INIT_KMALLOC_INFO(33554432, 32M)
+	INIT_KMALLOC_INFO(2097152, 2M)
 };
 
 /*
@@ -906,6 +902,21 @@ void __init create_kmalloc_caches(slab_flags_t flags)
 	/* Kmalloc array is now usable */
 	slab_state = UP;
 }
+
+void free_large_kmalloc(struct folio *folio, void *object)
+{
+	unsigned int order = folio_order(folio);
+
+	if (WARN_ON_ONCE(order == 0))
+		pr_warn_once("object pointer: 0x%p\n", object);
+
+	kmemleak_free(object);
+	kasan_kfree_large(object);
+
+	mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B,
+			      -(PAGE_SIZE << order));
+	__free_pages(folio_page(folio, 0), order);
+}
 #endif /* !CONFIG_SLOB */
 
 gfp_t kmalloc_fix_flags(gfp_t flags)
diff --git a/mm/slub.c b/mm/slub.c
index 892988990da7..1dc9e8eebb62 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1679,12 +1679,6 @@ static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab,
  * Hooks for other subsystems that check memory allocations. In a typical
  * production configuration these hooks all should produce no code at all.
  */
-static __always_inline void kfree_hook(void *x)
-{
-	kmemleak_free(x);
-	kasan_kfree_large(x);
-}
-
 static __always_inline bool slab_free_hook(struct kmem_cache *s,
 						void *x, bool init)
 {
@@ -3490,19 +3484,6 @@ struct detached_freelist {
 	struct kmem_cache *s;
 };
 
-static inline void free_large_kmalloc(struct folio *folio, void *object)
-{
-	unsigned int order = folio_order(folio);
-
-	if (WARN_ON_ONCE(order == 0))
-		pr_warn_once("object pointer: 0x%p\n", object);
-
-	kfree_hook(object);
-	mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B,
-			      -(PAGE_SIZE << order));
-	__free_pages(folio_page(folio, 0), order);
-}
-
 /*
  * This function progressively scans the array with free objects (with
  * a limited look ahead) and extract objects belonging to the same
-- 
2.32.0

From nobody Mon May 11 04:52:14 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew Wilcox, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 14/23] mm/slab_common: print cache name in tracepoints
Date: Thu, 14 Apr 2022 17:57:18 +0900
Message-Id: <20220414085727.643099-15-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Print the cache name in tracepoints. If there is no corresponding cache
(kmalloc in SLOB, or kmalloc_large_node), use the KMALLOC_{,LARGE_}NAME
macros.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/trace/events/kmem.h | 34 +++++++++++++++++++---------------
 mm/slab.c                   |  9 +++++----
 mm/slab.h                   |  4 ++++
 mm/slab_common.c            |  6 ++----
 mm/slob.c                   | 10 +++++-----
 mm/slub.c                   | 10 +++++-----
 6 files changed, 40 insertions(+), 33 deletions(-)

diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index ddc8c944f417..35e6887c6101 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -61,16 +61,18 @@ DEFINE_EVENT(kmem_alloc, kmem_cache_alloc,
 
 DECLARE_EVENT_CLASS(kmem_alloc_node,
 
-	TP_PROTO(unsigned long call_site,
+	TP_PROTO(const char *name,
+		 unsigned long call_site,
 		 const void *ptr,
 		 size_t bytes_req,
 		 size_t bytes_alloc,
 		 gfp_t gfp_flags,
 		 int node),
 
-	TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node),
+	TP_ARGS(name, call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node),
 
 	TP_STRUCT__entry(
+		__string(	name,		name		)
 		__field(	unsigned long,	call_site	)
 		__field(	const void *,	ptr		)
 		__field(	size_t,		bytes_req	)
@@ -80,6 +82,7 @@ DECLARE_EVENT_CLASS(kmem_alloc_node,
 	),
 
 	TP_fast_assign(
+		__assign_str(name, name);
 		__entry->call_site	= call_site;
 		__entry->ptr		= ptr;
 		__entry->bytes_req	= bytes_req;
@@ -88,7 +91,8 @@ DECLARE_EVENT_CLASS(kmem_alloc_node,
 		__entry->node		= node;
 	),
 
-	TP_printk("call_site=%pS ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s node=%d",
+	TP_printk("name=%s call_site=%pS ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s node=%d",
+		__get_str(name),
 		(void *)__entry->call_site,
 		__entry->ptr,
 		__entry->bytes_req,
@@ -99,20 +103,20 @@ DECLARE_EVENT_CLASS(kmem_alloc_node,
 
 DEFINE_EVENT(kmem_alloc_node, kmalloc_node,
 
-	TP_PROTO(unsigned long call_site, const void *ptr,
-		 size_t bytes_req, size_t bytes_alloc,
+	TP_PROTO(const char *name, unsigned long call_site,
+		 const void *ptr, size_t bytes_req, size_t bytes_alloc,
 		 gfp_t gfp_flags, int node),
 
-	TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node)
+	TP_ARGS(name, call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node)
 );
 
 DEFINE_EVENT(kmem_alloc_node, kmem_cache_alloc_node,
 
-	TP_PROTO(unsigned long call_site, const void *ptr,
-		 size_t bytes_req, size_t bytes_alloc,
+	TP_PROTO(const char *name, unsigned long call_site,
+		 const void *ptr, size_t bytes_req, size_t bytes_alloc,
 		 gfp_t gfp_flags, int node),
 
-	TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node)
+	TP_ARGS(name, call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node)
 );
 
 TRACE_EVENT(kfree,
@@ -137,24 +141,24 @@ TRACE_EVENT(kfree,
 
 TRACE_EVENT(kmem_cache_free,
 
-	TP_PROTO(unsigned long call_site, const void *ptr, const char *name),
+	TP_PROTO(const char *name, unsigned long call_site, const void *ptr),
 
-	TP_ARGS(call_site, ptr, name),
+	TP_ARGS(name, call_site, ptr),
 
 	TP_STRUCT__entry(
+		__string(	name,	name	)
 		__field(	unsigned long,	call_site	)
 		__field(	const void *,	ptr	)
-		__string(	name,	name	)
 	),
 
 	TP_fast_assign(
+		__assign_str(name, name);
 		__entry->call_site	= call_site;
 		__entry->ptr		= ptr;
-		__assign_str(name, name);
 	),
 
-	TP_printk("call_site=%pS ptr=%p name=%s",
-		(void *)__entry->call_site, __entry->ptr, __get_str(name))
+	TP_printk("name=%s call_site=%pS ptr=%p",
+		__get_str(name), (void *)__entry->call_site, __entry->ptr)
 );
 
 TRACE_EVENT(mm_page_free,
diff --git a/mm/slab.c b/mm/slab.c
index 1dfe0f9d5882..3c47d0979706 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3448,8 +3448,9 @@ void *__kmem_cache_alloc_node(struct kmem_cache *cachep, struct list_lru *lru,
 	void *ret = slab_alloc_node(cachep, lru, flags, nodeid,
 				    cachep->object_size, caller);
 
-	trace_kmem_cache_alloc_node(caller, ret, cachep->object_size,
-				    cachep->size, flags, nodeid);
+	trace_kmem_cache_alloc_node(cachep->name, caller, ret,
+				    cachep->object_size, cachep->size,
+				    flags, nodeid);
 
 	return ret;
 }
@@ -3518,7 +3519,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 	ret = slab_alloc_node(cachep, NULL, flags, nodeid, size, _RET_IP_);
 
 	ret = kasan_kmalloc(cachep, ret, size, flags);
-	trace_kmalloc_node(_RET_IP_, ret,
+	trace_kmalloc_node(cachep->name, _RET_IP_, ret,
 			   size, cachep->size,
 			   flags, nodeid);
 	return ret;
@@ -3593,7 +3594,7 @@ void kmem_cache_free(struct kmem_cache *cachep, void *objp)
 	if (!cachep)
 		return;
 
-	trace_kmem_cache_free(_RET_IP_, objp, cachep->name);
+	trace_kmem_cache_free(cachep->name, _RET_IP_, objp);
 	local_irq_save(flags);
 	debug_check_no_locks_freed(objp, cachep->object_size);
 	if (!(cachep->flags & SLAB_DEBUG_OBJECTS))
diff --git a/mm/slab.h b/mm/slab.h
index b864c5bc4c25..45ddb19df319 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -275,6 +275,10 @@ void create_kmalloc_caches(slab_flags_t);
 struct kmem_cache *kmalloc_slab(size_t, gfp_t);
 #endif
 
+/* cache names for tracepoints where it has no corresponding cache */
+#define KMALLOC_LARGE_NAME "kmalloc_large_node"
+#define KMALLOC_NAME "kmalloc_node"
+
 gfp_t kmalloc_fix_flags(gfp_t flags);
 
 /* Functions provided by the slab allocators */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 960cc07c3a91..416f0a1f17a6 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -956,10 +956,8 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 	ptr = kasan_kmalloc_large(ptr, size, flags);
 	/* As ptr might get tagged, call kmemleak hook after KASAN. */
 	kmemleak_alloc(ptr, size, 1, flags);
-	trace_kmalloc_node(_RET_IP_, ptr,
-			   size, PAGE_SIZE << order,
-			   flags, node);
-
+	trace_kmalloc_node(KMALLOC_LARGE_NAME, _RET_IP_, ptr, size,
+			   PAGE_SIZE << order, flags, node);
 	return ptr;
 }
 EXPORT_SYMBOL(kmalloc_large_node);
diff --git a/mm/slob.c b/mm/slob.c
index 6c7c30845056..8abde6037d95 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -505,7 +505,7 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
 		*m = size;
 		ret = (void *)m + minalign;
 
-		trace_kmalloc_node(caller, ret,
+		trace_kmalloc_node(KMALLOC_NAME, caller, ret,
 				   size, size + minalign, gfp, node);
 	} else {
 		unsigned int order = get_order(size);
@@ -514,7 +514,7 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
 		gfp |= __GFP_COMP;
 		ret = slob_new_pages(gfp, order, node);
 
-		trace_kmalloc_node(caller, ret,
+		trace_kmalloc_node(KMALLOC_LARGE_NAME, caller, ret,
 				   size, PAGE_SIZE << order, gfp, node);
 	}
 
@@ -597,12 +597,12 @@ static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node,
 
 	if (c->size < PAGE_SIZE) {
 		b = slob_alloc(c->size, flags, c->align, node, 0);
-		trace_kmem_cache_alloc_node(caller, b, c->object_size,
+		trace_kmem_cache_alloc_node(c->name, caller, b, c->object_size,
 					    SLOB_UNITS(c->size) * SLOB_UNIT,
 					    flags, node);
 	} else {
 		b = slob_new_pages(flags, get_order(c->size), node);
-		trace_kmem_cache_alloc_node(caller, b, c->object_size,
+		trace_kmem_cache_alloc_node(c->name, caller, b, c->object_size,
 					    PAGE_SIZE << get_order(c->size),
 					    flags, node);
 	}
@@ -648,7 +648,7 @@ static void kmem_rcu_free(struct rcu_head *head)
 void kmem_cache_free(struct kmem_cache *c, void *b)
 {
 	kmemleak_free_recursive(b, c->flags);
-	trace_kmem_cache_free(_RET_IP_, b, c->name);
+	trace_kmem_cache_free(c->name, _RET_IP_, b);
 	if (unlikely(c->flags & SLAB_TYPESAFE_BY_RCU)) {
 		struct slob_rcu *slob_rcu;
 		slob_rcu = b + (c->size - sizeof(struct slob_rcu));
diff --git a/mm/slub.c b/mm/slub.c
index 1dc9e8eebb62..de03fa1f5667 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3215,7 +3215,7 @@ void *__kmem_cache_alloc_node(struct kmem_cache *s, struct list_lru *lru, gfp_t
 {
 	void *ret = slab_alloc_node(s, lru, gfpflags, node, caller, s->object_size);
 
-	trace_kmem_cache_alloc_node(caller, ret, s->object_size,
+	trace_kmem_cache_alloc_node(s->name, caller, ret, s->object_size,
 				    s->size, gfpflags, node);
 
 	return ret;
@@ -3229,7 +3229,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 {
 	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, size);
 
-	trace_kmalloc_node(_RET_IP_, ret,
+	trace_kmalloc_node(s->name, _RET_IP_, ret,
 			   size, s->size, gfpflags, node);
 
 	ret = kasan_kmalloc(s, ret, size, gfpflags);
@@ -3471,7 +3471,7 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
 	s = cache_from_obj(s, x);
 	if (!s)
 		return;
-	trace_kmem_cache_free(_RET_IP_, x, s->name);
+	trace_kmem_cache_free(s->name, _RET_IP_, x);
 	slab_free(s, virt_to_slab(x), x, NULL, 1, _RET_IP_);
 }
 EXPORT_SYMBOL(kmem_cache_free);
@@ -4352,7 +4352,7 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	ret = slab_alloc_node(s, NULL, flags, node, _RET_IP_, size);
 
-	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
+	trace_kmalloc_node(s->name, _RET_IP_, ret, size, s->size, flags, node);
 
 	ret = kasan_kmalloc(s, ret, size, flags);
 
@@ -4811,7 +4811,7 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 	ret = slab_alloc_node(s, NULL, gfpflags, node, caller, size);
 
 	/* Honor the call site pointer we received. */
-	trace_kmalloc_node(caller, ret, size, s->size, gfpflags, node);
+	trace_kmalloc_node(s->name, caller, ret, size, s->size, gfpflags, node);
 
 	return ret;
 }
-- 
2.32.0

From nobody Mon May 11 04:52:14 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew Wilcox, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 15/23] mm/slab_common: use same tracepoint in kmalloc and normal caches
Date: Thu, 14 Apr 2022 17:57:19 +0900
Message-Id: <20220414085727.643099-16-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Now that tracepoints print cache names, we can distinguish kmalloc from
normal cache allocations, so use the same tracepoint for both.
After this patch, there is only two tracepoints in slab allocators: kmem_cache_alloc_node and kmem_cache_free. Remove all unused tracepoints. Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> --- include/trace/events/kmem.h | 79 ------------------------------------- mm/slab.c | 8 ++-- mm/slab_common.c | 9 ++--- mm/slob.c | 14 ++++--- mm/slub.c | 19 +++++---- 5 files changed, 27 insertions(+), 102 deletions(-) diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h index 35e6887c6101..ca67ba5fd76a 100644 --- a/include/trace/events/kmem.h +++ b/include/trace/events/kmem.h @@ -9,56 +9,6 @@ #include #include =20 -DECLARE_EVENT_CLASS(kmem_alloc, - - TP_PROTO(unsigned long call_site, - const void *ptr, - size_t bytes_req, - size_t bytes_alloc, - gfp_t gfp_flags), - - TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags), - - TP_STRUCT__entry( - __field( unsigned long, call_site ) - __field( const void *, ptr ) - __field( size_t, bytes_req ) - __field( size_t, bytes_alloc ) - __field( gfp_t, gfp_flags ) - ), - - TP_fast_assign( - __entry->call_site =3D call_site; - __entry->ptr =3D ptr; - __entry->bytes_req =3D bytes_req; - __entry->bytes_alloc =3D bytes_alloc; - __entry->gfp_flags =3D gfp_flags; - ), - - TP_printk("call_site=3D%pS ptr=3D%p bytes_req=3D%zu bytes_alloc=3D%zu gfp= _flags=3D%s", - (void *)__entry->call_site, - __entry->ptr, - __entry->bytes_req, - __entry->bytes_alloc, - show_gfp_flags(__entry->gfp_flags)) -); - -DEFINE_EVENT(kmem_alloc, kmalloc, - - TP_PROTO(unsigned long call_site, const void *ptr, - size_t bytes_req, size_t bytes_alloc, gfp_t gfp_flags), - - TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags) -); - -DEFINE_EVENT(kmem_alloc, kmem_cache_alloc, - - TP_PROTO(unsigned long call_site, const void *ptr, - size_t bytes_req, size_t bytes_alloc, gfp_t gfp_flags), - - TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags) -); - DECLARE_EVENT_CLASS(kmem_alloc_node, =20 TP_PROTO(const char *name, @@ -101,15 +51,6 @@ 
DECLARE_EVENT_CLASS(kmem_alloc_node, __entry->node) ); =20 -DEFINE_EVENT(kmem_alloc_node, kmalloc_node, - - TP_PROTO(const char *name, unsigned long call_site, - const void *ptr, size_t bytes_req, size_t bytes_alloc, - gfp_t gfp_flags, int node), - - TP_ARGS(name, call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node) -); - DEFINE_EVENT(kmem_alloc_node, kmem_cache_alloc_node, =20 TP_PROTO(const char *name, unsigned long call_site, @@ -119,26 +60,6 @@ DEFINE_EVENT(kmem_alloc_node, kmem_cache_alloc_node, TP_ARGS(name, call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node) ); =20 -TRACE_EVENT(kfree, - - TP_PROTO(unsigned long call_site, const void *ptr), - - TP_ARGS(call_site, ptr), - - TP_STRUCT__entry( - __field( unsigned long, call_site ) - __field( const void *, ptr ) - ), - - TP_fast_assign( - __entry->call_site =3D call_site; - __entry->ptr =3D ptr; - ), - - TP_printk("call_site=3D%pS ptr=3D%p", - (void *)__entry->call_site, __entry->ptr) -); - TRACE_EVENT(kmem_cache_free, =20 TP_PROTO(const char *name, unsigned long call_site, const void *ptr), diff --git a/mm/slab.c b/mm/slab.c index 3c47d0979706..b9959a6b5c48 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -3519,9 +3519,9 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *= cachep, ret =3D slab_alloc_node(cachep, NULL, flags, nodeid, size, _RET_IP_); =20 ret =3D kasan_kmalloc(cachep, ret, size, flags); - trace_kmalloc_node(cachep->name, _RET_IP_, ret, - size, cachep->size, - flags, nodeid); + trace_kmem_cache_alloc_node(cachep->name, _RET_IP_, ret, + size, cachep->size, + flags, nodeid); return ret; } EXPORT_SYMBOL(kmem_cache_alloc_node_trace); @@ -3657,7 +3657,6 @@ void kfree(const void *objp) unsigned long flags; struct folio *folio; =20 - trace_kfree(_RET_IP_, objp); =20 if (unlikely(ZERO_OR_NULL_PTR(objp))) return; @@ -3669,6 +3668,7 @@ void kfree(const void *objp) } =20 c =3D folio_slab(folio)->slab_cache; + trace_kmem_cache_free(c->name, _RET_IP_, objp); =20 local_irq_save(flags); 
kfree_debugcheck(objp); diff --git a/mm/slab_common.c b/mm/slab_common.c index 416f0a1f17a6..3d1569085c54 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -910,6 +910,7 @@ void free_large_kmalloc(struct folio *folio, void *obje= ct) if (WARN_ON_ONCE(order =3D=3D 0)) pr_warn_once("object pointer: 0x%p\n", object); =20 + trace_kmem_cache_free(KMALLOC_LARGE_NAME, _RET_IP_, object); kmemleak_free(object); kasan_kfree_large(object); =20 @@ -956,8 +957,8 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int = node) ptr =3D kasan_kmalloc_large(ptr, size, flags); /* As ptr might get tagged, call kmemleak hook after KASAN. */ kmemleak_alloc(ptr, size, 1, flags); - trace_kmalloc_node(KMALLOC_LARGE_NAME, _RET_IP_, ptr, size, - PAGE_SIZE << order, flags, node); + trace_kmem_cache_alloc_node(KMALLOC_LARGE_NAME, _RET_IP_, ptr, size, + PAGE_SIZE << order, flags, node); return ptr; } EXPORT_SYMBOL(kmalloc_large_node); @@ -1290,11 +1291,7 @@ size_t ksize(const void *objp) EXPORT_SYMBOL(ksize); =20 /* Tracepoints definitions. 
*/ -EXPORT_TRACEPOINT_SYMBOL(kmalloc); -EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc); -EXPORT_TRACEPOINT_SYMBOL(kmalloc_node); EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc_node); -EXPORT_TRACEPOINT_SYMBOL(kfree); EXPORT_TRACEPOINT_SYMBOL(kmem_cache_free); =20 int should_failslab(struct kmem_cache *s, gfp_t gfpflags) diff --git a/mm/slob.c b/mm/slob.c index 8abde6037d95..b1f291128e94 100644 --- a/mm/slob.c +++ b/mm/slob.c @@ -505,8 +505,8 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, uns= igned long caller) *m =3D size; ret =3D (void *)m + minalign; =20 - trace_kmalloc_node(KMALLOC_NAME, caller, ret, - size, size + minalign, gfp, node); + trace_kmem_cache_alloc_node(KMALLOC_NAME, caller, ret, + size, size + minalign, gfp, node); } else { unsigned int order =3D get_order(size); =20 @@ -514,8 +514,9 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, uns= igned long caller) gfp |=3D __GFP_COMP; ret =3D slob_new_pages(gfp, order, node); =20 - trace_kmalloc_node(KMALLOC_LARGE_NAME, caller, ret, - size, PAGE_SIZE << order, gfp, node); + trace_kmem_cache_alloc_node(KMALLOC_LARGE_NAME, caller, + ret, size, PAGE_SIZE << order, + gfp, node); } =20 kmemleak_alloc(ret, size, 1, gfp); @@ -533,8 +534,6 @@ void kfree(const void *block) { struct folio *sp; =20 - trace_kfree(_RET_IP_, block); - if (unlikely(ZERO_OR_NULL_PTR(block))) return; kmemleak_free(block); @@ -543,10 +542,13 @@ void kfree(const void *block) if (folio_test_slab(sp)) { int align =3D max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN); unsigned int *m =3D (unsigned int *)(block - align); + + trace_kmem_cache_free(KMALLOC_LARGE_NAME, _RET_IP_, block); slob_free(m, *m + align); } else { unsigned int order =3D folio_order(sp); =20 + trace_kmem_cache_free(KMALLOC_NAME, _RET_IP_, block); mod_node_page_state(folio_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order)); __free_pages(folio_page(sp, 0), order); diff --git a/mm/slub.c b/mm/slub.c index de03fa1f5667..d53e9e22d67e 100644 --- a/mm/slub.c +++ 
b/mm/slub.c @@ -3229,8 +3229,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *= s, { void *ret =3D slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, size); =20 - trace_kmalloc_node(s->name, _RET_IP_, ret, - size, s->size, gfpflags, node); + trace_kmem_cache_alloc_node(s->name, _RET_IP_, ret, + size, s->size, gfpflags, node); =20 ret =3D kasan_kmalloc(s, ret, size, gfpflags); return ret; @@ -4352,7 +4352,8 @@ void *__kmalloc_node(size_t size, gfp_t flags, int no= de) =20 ret =3D slab_alloc_node(s, NULL, flags, node, _RET_IP_, size); =20 - trace_kmalloc_node(s->name, _RET_IP_, ret, size, s->size, flags, node); + trace_kmem_cache_alloc_node(s->name, _RET_IP_, ret, size, + s->size, flags, node); =20 ret =3D kasan_kmalloc(s, ret, size, flags); =20 @@ -4431,8 +4432,7 @@ void kfree(const void *x) struct folio *folio; struct slab *slab; void *object =3D (void *)x; - - trace_kfree(_RET_IP_, x); + struct kmem_cache *s; =20 if (unlikely(ZERO_OR_NULL_PTR(x))) return; @@ -4442,8 +4442,12 @@ void kfree(const void *x) free_large_kmalloc(folio, object); return; } + slab =3D folio_slab(folio); - slab_free(slab->slab_cache, slab, object, NULL, 1, _RET_IP_); + s =3D slab->slab_cache; + + trace_kmem_cache_free(s->name, _RET_IP_, x); + slab_free(s, slab, object, NULL, 1, _RET_IP_); } EXPORT_SYMBOL(kfree); =20 @@ -4811,7 +4815,8 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t = gfpflags, ret =3D slab_alloc_node(s, NULL, gfpflags, node, caller, size); =20 /* Honor the call site pointer we received. 
-	trace_kmalloc_node(s->name, caller, ret, size, s->size, gfpflags, node);
+	trace_kmem_cache_alloc_node(s->name, caller, ret, size,
+				    s->size, gfpflags, node);

 	return ret;
 }
-- 
2.32.0

From nobody Mon May 11 04:52:14 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Andrew Morton,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 16/23] mm/slab_common: rename tracepoint
Date: Thu, 14 Apr 2022 17:57:20 +0900
Message-Id: <20220414085727.643099-17-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

To reduce the overhead of printing the tracepoint name, rename the
tracepoint kmem_cache_alloc_node to kmem_cache_alloc.
Suggested-by: Vlastimil Babka
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/trace/events/kmem.h |  4 ++--
 mm/slab.c                   |  8 ++++----
 mm/slab_common.c            |  6 +++---
 mm/slob.c                   | 22 +++++++++++-----------
 mm/slub.c                   | 16 ++++++++--------
 5 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index ca67ba5fd76a..58edb2e3e5a4 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -9,7 +9,7 @@
 #include
 #include

-DECLARE_EVENT_CLASS(kmem_alloc_node,
+DECLARE_EVENT_CLASS(kmem_alloc,

 	TP_PROTO(const char *name,
 		 unsigned long call_site,
@@ -51,7 +51,7 @@ DECLARE_EVENT_CLASS(kmem_alloc_node,
 		  __entry->node)
 );

-DEFINE_EVENT(kmem_alloc_node, kmem_cache_alloc_node,
+DEFINE_EVENT(kmem_alloc, kmem_cache_alloc,

 	TP_PROTO(const char *name, unsigned long call_site, const void *ptr,
 		 size_t bytes_req, size_t bytes_alloc,
diff --git a/mm/slab.c b/mm/slab.c
index b9959a6b5c48..424168b96790 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3448,7 +3448,7 @@ void *__kmem_cache_alloc_node(struct kmem_cache *cachep, struct list_lru *lru,
 	void *ret = slab_alloc_node(cachep, lru, flags, nodeid,
 				    cachep->object_size, caller);

-	trace_kmem_cache_alloc_node(cachep->name, caller, ret,
+	trace_kmem_cache_alloc(cachep->name, caller, ret,
 			       cachep->object_size, cachep->size,
 			       flags, nodeid);

@@ -3519,9 +3519,9 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 	ret = slab_alloc_node(cachep, NULL, flags, nodeid, size, _RET_IP_);

 	ret = kasan_kmalloc(cachep, ret, size, flags);
-	trace_kmem_cache_alloc_node(cachep->name, _RET_IP_, ret,
-				    size, cachep->size,
-				    flags, nodeid);
+	trace_kmem_cache_alloc(cachep->name, _RET_IP_, ret,
+			       size, cachep->size,
+			       flags, nodeid);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 3d1569085c54..3cd5d7a47ec7 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -957,8 +957,8 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 	ptr = kasan_kmalloc_large(ptr, size, flags);
 	/* As ptr might get tagged, call kmemleak hook after KASAN. */
 	kmemleak_alloc(ptr, size, 1, flags);
-	trace_kmem_cache_alloc_node(KMALLOC_LARGE_NAME, _RET_IP_, ptr, size,
-				    PAGE_SIZE << order, flags, node);
+	trace_kmem_cache_alloc(KMALLOC_LARGE_NAME, _RET_IP_, ptr, size,
+			       PAGE_SIZE << order, flags, node);
 	return ptr;
 }
 EXPORT_SYMBOL(kmalloc_large_node);
@@ -1291,7 +1291,7 @@ size_t ksize(const void *objp)
 EXPORT_SYMBOL(ksize);

 /* Tracepoints definitions. */
-EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc_node);
+EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc);
 EXPORT_TRACEPOINT_SYMBOL(kmem_cache_free);

 int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
diff --git a/mm/slob.c b/mm/slob.c
index b1f291128e94..1bb4c577b908 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -505,8 +505,8 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
 		*m = size;
 		ret = (void *)m + minalign;

-		trace_kmem_cache_alloc_node(KMALLOC_NAME, caller, ret,
-					    size, size + minalign, gfp, node);
+		trace_kmem_cache_alloc(KMALLOC_NAME, caller, ret,
+				       size, size + minalign, gfp, node);
 	} else {
 		unsigned int order = get_order(size);

@@ -514,9 +514,9 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
 			gfp |= __GFP_COMP;
 		ret = slob_new_pages(gfp, order, node);

-		trace_kmem_cache_alloc_node(KMALLOC_LARGE_NAME, caller,
-					    ret, size, PAGE_SIZE << order,
-					    gfp, node);
+		trace_kmem_cache_alloc(KMALLOC_LARGE_NAME, caller,
+				       ret, size, PAGE_SIZE << order,
+				       gfp, node);
 	}

 	kmemleak_alloc(ret, size, 1, gfp);
@@ -599,14 +599,14 @@ static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node,

 	if (c->size < PAGE_SIZE) {
 		b = slob_alloc(c->size, flags, c->align, node, 0);
-		trace_kmem_cache_alloc_node(c->name, caller, b, c->object_size,
-					    SLOB_UNITS(c->size) * SLOB_UNIT,
-					    flags, node);
+		trace_kmem_cache_alloc(c->name, caller, b, c->object_size,
+				       SLOB_UNITS(c->size) * SLOB_UNIT,
+				       flags, node);
 	} else {
 		b = slob_new_pages(flags, get_order(c->size), node);
-		trace_kmem_cache_alloc_node(c->name, caller, b, c->object_size,
-					    PAGE_SIZE << get_order(c->size),
-					    flags, node);
+		trace_kmem_cache_alloc(c->name, caller, b, c->object_size,
+				       PAGE_SIZE << get_order(c->size),
+				       flags, node);
 	}

 	if (b && c->ctor) {
diff --git a/mm/slub.c b/mm/slub.c
index d53e9e22d67e..a088d4fa1062 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3215,8 +3215,8 @@ void *__kmem_cache_alloc_node(struct kmem_cache *s, struct list_lru *lru, gfp_t
 {
 	void *ret = slab_alloc_node(s, lru, gfpflags, node, caller, s->object_size);

-	trace_kmem_cache_alloc_node(s->name, caller, ret, s->object_size,
-				    s->size, gfpflags, node);
+	trace_kmem_cache_alloc(s->name, caller, ret, s->object_size,
+			       s->size, gfpflags, node);

 	return ret;
 }
@@ -3229,8 +3229,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 {
 	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, size);

-	trace_kmem_cache_alloc_node(s->name, _RET_IP_, ret,
-				    size, s->size, gfpflags, node);
+	trace_kmem_cache_alloc(s->name, _RET_IP_, ret,
+			       size, s->size, gfpflags, node);

 	ret = kasan_kmalloc(s, ret, size, gfpflags);
 	return ret;
@@ -4352,8 +4352,8 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)

 	ret = slab_alloc_node(s, NULL, flags, node, _RET_IP_, size);

-	trace_kmem_cache_alloc_node(s->name, _RET_IP_, ret, size,
-				    s->size, flags, node);
+	trace_kmem_cache_alloc(s->name, _RET_IP_, ret, size,
+			       s->size, flags, node);

 	ret = kasan_kmalloc(s, ret, size, flags);

@@ -4815,8 +4815,8 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 	ret = slab_alloc_node(s, NULL, gfpflags, node, caller, size);

 	/* Honor the call site pointer we received. */
-	trace_kmem_cache_alloc_node(s->name, caller, ret, size,
-				    s->size, gfpflags, node);
+	trace_kmem_cache_alloc(s->name, caller, ret, size,
+			       s->size, gfpflags, node);

 	return ret;
 }
-- 
2.32.0

From nobody Mon May 11 04:52:14 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Andrew Morton,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 17/23] mm/slab_common: implement __kmem_cache_free()
Date: Thu, 14 Apr 2022 17:57:21 +0900
Message-Id: <20220414085727.643099-18-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

To generalize kfree() in a later patch, implement __kmem_cache_free(),
which takes the caller address, and make kmem_cache_free() a wrapper
around it.
Now that kmem_cache_free() is an inline function, use _THIS_IP_
instead of _RET_IP_ for consistency.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/linux/slab.h | 16 +++++++++++++++-
 mm/slab.c            | 17 +++++------------
 mm/slob.c            | 13 +++++++------
 mm/slub.c            |  9 +++++----
 4 files changed, 32 insertions(+), 23 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index c8c82087c3f9..0630c37ee630 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -462,7 +462,21 @@ void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru, gfp_t gfp
 	return __kmem_cache_alloc_node(s, lru, gfpflags, NUMA_NO_NODE, _THIS_IP_);
 }

-void kmem_cache_free(struct kmem_cache *s, void *objp);
+void __kmem_cache_free(struct kmem_cache *s, void *objp, unsigned long caller __maybe_unused);
+
+/**
+ * kmem_cache_free - Deallocate an object
+ * @s: The cache the allocation was from.
+ * @objp: The previously allocated object.
+ *
+ * Free an object which was previously allocated from this
+ * cache.
+ */
+static __always_inline void kmem_cache_free(struct kmem_cache *s, void *objp)
+{
+	__kmem_cache_free(s, objp, _THIS_IP_);
+}

 /*
  * Bulk allocation and freeing operations. These are accelerated in an
diff --git a/mm/slab.c b/mm/slab.c
index 424168b96790..d35873da5572 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3579,30 +3579,23 @@ void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 }
 #endif

-/**
- * kmem_cache_free - Deallocate an object
- * @cachep: The cache the allocation was from.
- * @objp: The previously allocated object.
- *
- * Free an object which was previously allocated from this
- * cache.
- */
-void kmem_cache_free(struct kmem_cache *cachep, void *objp)
+void __kmem_cache_free(struct kmem_cache *cachep, void *objp,
+		       unsigned long caller __maybe_unused)
 {
 	unsigned long flags;

 	cachep = cache_from_obj(cachep, objp);
 	if (!cachep)
 		return;

-	trace_kmem_cache_free(cachep->name, _RET_IP_, objp);
+	trace_kmem_cache_free(cachep->name, caller, objp);
 	local_irq_save(flags);
 	debug_check_no_locks_freed(objp, cachep->object_size);
 	if (!(cachep->flags & SLAB_DEBUG_OBJECTS))
 		debug_check_no_obj_freed(objp, cachep->object_size);
-	__cache_free(cachep, objp, _RET_IP_);
+	__cache_free(cachep, objp, caller);
 	local_irq_restore(flags);
 }
-EXPORT_SYMBOL(kmem_cache_free);
+EXPORT_SYMBOL(__kmem_cache_free);

 void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
 {
diff --git a/mm/slob.c b/mm/slob.c
index 1bb4c577b908..e893d182d471 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -631,7 +631,7 @@ void *__kmem_cache_alloc_node(struct kmem_cache *cachep, struct list_lru *lru
 }
 EXPORT_SYMBOL(__kmem_cache_alloc_node);

-static void __kmem_cache_free(void *b, int size)
+static void ____kmem_cache_free(void *b, int size)
 {
 	if (size < PAGE_SIZE)
 		slob_free(b, size);
@@ -644,23 +644,24 @@ static void kmem_rcu_free(struct rcu_head *head)
 	struct slob_rcu *slob_rcu = (struct slob_rcu *)head;
 	void *b = (void *)slob_rcu - (slob_rcu->size - sizeof(struct slob_rcu));

-	__kmem_cache_free(b, slob_rcu->size);
+	____kmem_cache_free(b, slob_rcu->size);
 }

-void kmem_cache_free(struct kmem_cache *c, void *b)
+void __kmem_cache_free(struct kmem_cache *c, void *b,
+		       unsigned long caller __maybe_unused)
 {
 	kmemleak_free_recursive(b, c->flags);
-	trace_kmem_cache_free(c->name, _RET_IP_, b);
+	trace_kmem_cache_free(c->name, caller, b);
 	if (unlikely(c->flags & SLAB_TYPESAFE_BY_RCU)) {
 		struct slob_rcu *slob_rcu;
 		slob_rcu = b + (c->size - sizeof(struct slob_rcu));
 		slob_rcu->size = c->size;
 		call_rcu(&slob_rcu->head, kmem_rcu_free);
 	} else {
-		__kmem_cache_free(b, c->size);
+		____kmem_cache_free(b, c->size);
 	}
 }
-EXPORT_SYMBOL(kmem_cache_free);
+EXPORT_SYMBOL(__kmem_cache_free);

 void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
 {
diff --git a/mm/slub.c b/mm/slub.c
index a088d4fa1062..a72a2d08e793 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3466,15 +3466,16 @@ void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
 }
 #endif

-void kmem_cache_free(struct kmem_cache *s, void *x)
+void __kmem_cache_free(struct kmem_cache *s, void *x,
+		       unsigned long caller __maybe_unused)
 {
 	s = cache_from_obj(s, x);
 	if (!s)
 		return;
-	trace_kmem_cache_free(s->name, _RET_IP_, x);
-	slab_free(s, virt_to_slab(x), x, NULL, 1, _RET_IP_);
+	trace_kmem_cache_free(s->name, caller, x);
+	slab_free(s, virt_to_slab(x), x, NULL, 1, caller);
 }
-EXPORT_SYMBOL(kmem_cache_free);
+EXPORT_SYMBOL(__kmem_cache_free);

 struct detached_freelist {
 	struct slab *slab;
-- 
2.32.0

From nobody Mon May 11 04:52:14 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Andrew Morton,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 18/23] mm/sl[au]b: generalize kmalloc subsystem
Date: Thu, 14 Apr 2022 17:57:22 +0900
Message-Id: <20220414085727.643099-19-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Now everything in the kmalloc subsystem can be generalized.
Let's do it!

Generalize __kmalloc_node_track_caller(), kfree(), __ksize() and
__kmalloc_node(), and move them to slab_common.c.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/slab.c        | 94 -----------------------------------------------
 mm/slab_common.c | 95 ++++++++++++++++++++++++++++++++++++++++++++++++
 mm/slub.c        | 88 --------------------------------------------
 3 files changed, 95 insertions(+), 182 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index d35873da5572..fc00aca62ae3 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3527,36 +3527,6 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
 #endif

-static __always_inline void *
-__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
-{
-	struct kmem_cache *cachep;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return kmalloc_large_node(size, flags, node);
-	cachep = kmalloc_slab(size, flags);
-	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
-		return cachep;
-	ret = kmem_cache_alloc_node_trace(cachep, flags, node, size);
-	ret = kasan_kmalloc(cachep, ret, size, flags);
-
-	return ret;
-}
-
-void *__kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	return __do_kmalloc_node(size, flags, node, _RET_IP_);
-}
-EXPORT_SYMBOL(__kmalloc_node);
-
-void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
-				  int node, unsigned long caller)
-{
-	return __do_kmalloc_node(size, flags, node, caller);
-}
-EXPORT_SYMBOL(__kmalloc_node_track_caller);
-
 #ifdef CONFIG_PRINTK
 void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 {
@@ -3635,43 +3605,6 @@ void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
 }
 EXPORT_SYMBOL(kmem_cache_free_bulk);

-/**
- * kfree - free previously allocated memory
- * @objp: pointer returned by kmalloc.
- *
- * If @objp is NULL, no operation is performed.
- *
- * Don't free memory not originally allocated by kmalloc()
- * or you will run into trouble.
- */
-void kfree(const void *objp)
-{
-	struct kmem_cache *c;
-	unsigned long flags;
-	struct folio *folio;
-
-
-	if (unlikely(ZERO_OR_NULL_PTR(objp)))
-		return;
-
-	folio = virt_to_folio(objp);
-	if (!folio_test_slab(folio)) {
-		free_large_kmalloc(folio, (void *)objp);
-		return;
-	}
-
-	c = folio_slab(folio)->slab_cache;
-	trace_kmem_cache_free(c->name, _RET_IP_, objp);
-
-	local_irq_save(flags);
-	kfree_debugcheck(objp);
-	debug_check_no_locks_freed(objp, c->object_size);
-	debug_check_no_obj_freed(objp, c->object_size);
-	__cache_free(c, (void *)objp, _RET_IP_);
-	local_irq_restore(flags);
-}
-EXPORT_SYMBOL(kfree);
-
 /*
  * This initializes kmem_cache_node or resizes various caches for all nodes.
  */
@@ -4074,30 +4007,3 @@ void __check_heap_object(const void *ptr, unsigned long n,
 	usercopy_abort("SLAB object", cachep->name, to_user, offset, n);
 }
 #endif /* CONFIG_HARDENED_USERCOPY */
-
-/**
- * __ksize -- Uninstrumented ksize.
- * @objp: pointer to the object
- *
- * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same
- * safety checks as ksize() with KASAN instrumentation enabled.
- *
- * Return: size of the actual memory used by @objp in bytes
- */
-size_t __ksize(const void *objp)
-{
-	struct kmem_cache *c;
-	struct folio *folio;
-
-	BUG_ON(!objp);
-	if (unlikely(objp == ZERO_SIZE_PTR))
-		return 0;
-
-	folio = virt_to_folio(objp);
-	if (!folio_test_slab(folio))
-		return folio_size(folio);
-
-	c = folio_slab(folio)->slab_cache;
-	return c->object_size;
-}
-EXPORT_SYMBOL(__ksize);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 3cd5d7a47ec7..daf626e25e12 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -918,6 +918,101 @@ void free_large_kmalloc(struct folio *folio, void *object)
 			    -(PAGE_SIZE << order));
 	__free_pages(folio_page(folio, 0), order);
 }
+
+void *__kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	struct kmem_cache *s;
+	void *ret;
+
+	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
+		return kmalloc_large_node(size, flags, node);
+
+	s = kmalloc_slab(size, flags);
+
+	if (unlikely(ZERO_OR_NULL_PTR(s)))
+		return s;
+
+	ret = __kmem_cache_alloc_node(s, NULL, flags, node, _RET_IP_);
+	ret = kasan_kmalloc(s, ret, size, flags);
+
+	return ret;
+}
+EXPORT_SYMBOL(__kmalloc_node);
+
+void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
+				  int node, unsigned long caller)
+{
+	struct kmem_cache *s;
+	void *ret;
+
+	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
+		return kmalloc_large_node(size, gfpflags, node);
+
+	s = kmalloc_slab(size, gfpflags);
+
+	if (unlikely(ZERO_OR_NULL_PTR(s)))
+		return s;
+
+	ret = __kmem_cache_alloc_node(s, NULL, gfpflags, node, caller);
+
+	return ret;
+}
+EXPORT_SYMBOL(__kmalloc_node_track_caller);
+
+/**
+ * kfree - free previously allocated memory
+ * @objp: pointer returned by kmalloc.
+ *
+ * If @objp is NULL, no operation is performed.
+ *
+ * Don't free memory not originally allocated by kmalloc()
+ * or you will run into trouble.
+ */
+void kfree(const void *object)
+{
+	struct folio *folio;
+	struct slab *slab;
+	struct kmem_cache *s;
+
+	if (unlikely(ZERO_OR_NULL_PTR(object)))
+		return;
+
+	folio = virt_to_folio(object);
+	if (unlikely(!folio_test_slab(folio))) {
+		free_large_kmalloc(folio, (void *)object);
+		return;
+	}
+
+	slab = folio_slab(folio);
+	s = slab->slab_cache;
+	__kmem_cache_free(s, object, _RET_IP_);
+}
+EXPORT_SYMBOL(kfree);
+
+/**
+ * __ksize -- Uninstrumented ksize.
+ * @objp: pointer to the object
+ *
+ * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same
+ * safety checks as ksize() with KASAN instrumentation enabled.
+ *
+ * Return: size of the actual memory used by @objp in bytes
+ */
+size_t __ksize(const void *object)
+{
+	struct folio *folio;
+
+	if (unlikely(object == ZERO_SIZE_PTR))
+		return 0;
+
+	folio = virt_to_folio(object);
+
+	if (unlikely(!folio_test_slab(folio)))
+		return folio_size(folio);
+
+	return slab_ksize(folio_slab(folio)->slab_cache);
+}
+EXPORT_SYMBOL(__ksize);
 #endif /* !CONFIG_SLOB */

 gfp_t kmalloc_fix_flags(gfp_t flags)
diff --git a/mm/slub.c b/mm/slub.c
index a72a2d08e793..bc9c96ce0521 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4338,30 +4338,6 @@ static int __init setup_slub_min_objects(char *str)

 __setup("slub_min_objects=", setup_slub_min_objects);

-void *__kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	struct kmem_cache *s;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return kmalloc_large_node(size, flags, node);
-
-	s = kmalloc_slab(size, flags);
-
-	if (unlikely(ZERO_OR_NULL_PTR(s)))
-		return s;
-
-	ret = slab_alloc_node(s, NULL, flags, node, _RET_IP_, size);
-
-	trace_kmem_cache_alloc(s->name, _RET_IP_, ret, size,
-			       s->size, flags, node);
-
-	ret = kasan_kmalloc(s, ret, size, flags);
-
-	return ret;
-}
-EXPORT_SYMBOL(__kmalloc_node);
-
 #ifdef CONFIG_HARDENED_USERCOPY
 /*
  * Rejects incorrectly sized objects and objects that are to be copied
@@ -4412,46 +4388,6 @@ void __check_heap_object(const void *ptr, unsigned long n,
 }
 #endif /* CONFIG_HARDENED_USERCOPY */

-size_t __ksize(const void *object)
-{
-	struct folio *folio;
-
-	if (unlikely(object == ZERO_SIZE_PTR))
-		return 0;
-
-	folio = virt_to_folio(object);
-
-	if (unlikely(!folio_test_slab(folio)))
-		return folio_size(folio);
-
-	return slab_ksize(folio_slab(folio)->slab_cache);
-}
-EXPORT_SYMBOL(__ksize);
-
-void kfree(const void *x)
-{
-	struct folio *folio;
-	struct slab *slab;
-	void *object = (void *)x;
-	struct kmem_cache *s;
-
-	if (unlikely(ZERO_OR_NULL_PTR(x)))
-		return;
-
-	folio = virt_to_folio(x);
-	if (unlikely(!folio_test_slab(folio))) {
-		free_large_kmalloc(folio, object);
-		return;
-	}
-
-	slab = folio_slab(folio);
-	s = slab->slab_cache;
-
-	trace_kmem_cache_free(s->name, _RET_IP_, x);
-	slab_free(s, slab, object, NULL, 1, _RET_IP_);
-}
-EXPORT_SYMBOL(kfree);
-
 #define SHRINK_PROMOTE_MAX 32

 /*
@@ -4799,30 +4735,6 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
 	return 0;
 }

-void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
-				  int node, unsigned long caller)
-{
-	struct kmem_cache *s;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return kmalloc_large_node(size, gfpflags, node);
-
-	s = kmalloc_slab(size, gfpflags);
-
-	if (unlikely(ZERO_OR_NULL_PTR(s)))
-		return s;
-
-	ret = slab_alloc_node(s, NULL, gfpflags, node, caller, size);
-
-	/* Honor the call site pointer we received. */
-	trace_kmem_cache_alloc(s->name, caller, ret, size,
-			       s->size, gfpflags, node);
-
-	return ret;
-}
-EXPORT_SYMBOL(__kmalloc_node_track_caller);
-
 #ifdef CONFIG_SYSFS
 static int count_inuse(struct slab *slab)
 {
-- 
2.32.0

From nobody Mon May 11 04:52:14 2026
:references:mime-version:content-transfer-encoding; bh=7ymqE8aSZCiNLvHohw2uieveIBfmHm8/fdfrtfycEzo=; b=fCFFAGPVJFTJPWRW8tXEOnKvQCFOkIzHYDn44bE7u8MdNG95JUpDrTKg8wEi74sTYM aatYlVihlOrSfMwXES/HS2b2dk8d3MPFyR2aeYfwiLv9e7cGxNqz6La6x/qzjdRkB+Td l8W8ZhM8rsZ4TwRl2HAnuVUFPMDea3MBk9YPe3G+zh3ueRJvRkz4UXCPC+2Urr/a8W3B 2rYXxMZnPT8W4yyU7pRgPEZKr1KB38X88hbCafi/wq6Q6h0NmP5LS4FM8rxP2bVDGP1w 3XBuH2GpXOBiolsFNKxtL13ivhv2I37MxR3hZ0Ru5s285WoCMfIL4KET2CXTPlEUUv2L twUQ== X-Gm-Message-State: AOAM530J+bWp/+fVe0SKOs2vK5/pgxnOeU/d3qGtEbg3a83pIgM1e7RN 5kqJV2ZC8GbgMkBtbp/Pycs= X-Google-Smtp-Source: ABdhPJzxK3hu1hp4jfT79xykHhdty1oOVUP8hCbv9UW6qC8LenmkzadyvtoVNShK+XHTT4d4iMJpsg== X-Received: by 2002:a17:903:1c7:b0:158:5ada:8876 with SMTP id e7-20020a17090301c700b001585ada8876mr20359165plh.122.1649926777087; Thu, 14 Apr 2022 01:59:37 -0700 (PDT) Received: from hyeyoo.. ([114.29.24.243]) by smtp.gmail.com with ESMTPSA id p9-20020aa79e89000000b00505fada20dfsm1403537pfq.117.2022.04.14.01.59.31 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 14 Apr 2022 01:59:35 -0700 (PDT) From: Hyeonggon Yoo <42.hyeyoo@gmail.com> To: Vlastimil Babka Cc: Marco Elver , Matthew WilCox , Christoph Lameter , Pekka Enberg , David Rientjes , Joonsoo Kim , Andrew Morton , Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin , linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 19/23] mm/slab_common: add kasan_kmalloc() in __kmalloc_node_track_caller() Date: Thu, 14 Apr 2022 17:57:23 +0900 Message-Id: <20220414085727.643099-20-42.hyeyoo@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com> References: <20220414085727.643099-1-42.hyeyoo@gmail.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Add missing kasan_kmalloc() in __kmalloc_node_track_caller(). 
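The effect of the missing kasan_kmalloc() call can be pictured with a small userspace sketch (illustrative only: `OBJ_SIZE`, `REDZONE`, and the helper names are made up, not kernel API). kasan_kmalloc() is what poisons the bytes between the requested size and the size-class size, so skipping it leaves that redzone undetectable for track_caller allocations:

```c
#include <stdlib.h>
#include <string.h>
#include <assert.h>

#define OBJ_SIZE 64	/* size class the allocation came from (example) */
#define REDZONE  0xFC	/* illustrative poison byte */

/* Mark the unused tail of a size-class object as a redzone,
 * roughly what kasan_kmalloc() arranges for a slab allocation. */
static void *annotate_alloc(void *obj, size_t requested)
{
	memset((char *)obj + requested, REDZONE, OBJ_SIZE - requested);
	return obj;
}

/* Check whether any byte past the requested size was overwritten. */
static int redzone_intact(const void *obj, size_t requested)
{
	const unsigned char *p = obj;

	for (size_t i = requested; i < OBJ_SIZE; i++)
		if (p[i] != REDZONE)
			return 0;
	return 1;
}
```

Without the annotation step, an out-of-bounds store into the tail of the object cannot be distinguished from legitimate data.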
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/slab_common.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index a3eb66e5180c..6abe7f61c197 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -954,6 +954,7 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 		return s;
 
 	ret = __kmem_cache_alloc_node(s, NULL, gfpflags, node, caller);
+	ret = kasan_kmalloc(s, ret, size, gfpflags);
 
 	return ret;
 }
-- 
2.32.0

From nobody Mon May 11 04:52:14 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew Wilcox, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 20/23] mm/slab_common: factor out __do_kmalloc_node()
Date: Thu, 14 Apr 2022 17:57:24 +0900
Message-Id: <20220414085727.643099-21-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Factor out common code into __do_kmalloc_node().
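The shape of this refactoring can be sketched in plain userspace C (the names below are illustrative stand-ins, not the kernel's): one always-inline helper takes the caller address explicitly, the normal entry point passes its own return address, and the _track_caller variant forwards the address it was handed:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

/* Hypothetical stand-in for __do_kmalloc_node(): all common work lives
 * here, parameterized by the caller address used for attribution. */
static inline __attribute__((always_inline))
void *do_alloc(size_t size, uintptr_t caller)
{
	void *p = malloc(size);	/* stand-in for the slab allocation */

	if (p)
		printf("allocated %zu bytes, attributed to caller %#lx\n",
		       size, (unsigned long)caller);
	return p;
}

/* Like __kmalloc_node(): attribute the allocation to our direct caller. */
void *my_alloc(size_t size)
{
	return do_alloc(size, (uintptr_t)__builtin_return_address(0));
}

/* Like __kmalloc_node_track_caller(): honor the caller we were given. */
void *my_alloc_track_caller(size_t size, uintptr_t caller)
{
	return do_alloc(size, caller);
}
```

Because the helper is always inlined, both wrappers compile down to the same fast path with no extra call, while the duplicated body exists only once in the source.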
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/slab_common.c | 27 ++++++++++-----------------
 1 file changed, 10 insertions(+), 17 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 6abe7f61c197..af563e64e8aa 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -919,7 +919,9 @@ void free_large_kmalloc(struct folio *folio, void *object)
 	__free_pages(folio_page(folio, 0), order);
 }
 
-void *__kmalloc_node(size_t size, gfp_t flags, int node)
+static __always_inline
+void *__do_kmalloc_node(size_t size, gfp_t flags, int node,
+			unsigned long caller __maybe_unused)
 {
 	struct kmem_cache *s;
 	void *ret;
@@ -932,31 +934,22 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 	if (unlikely(ZERO_OR_NULL_PTR(s)))
 		return s;
 
-	ret = __kmem_cache_alloc_node(s, NULL, flags, node, _RET_IP_);
+	ret = __kmem_cache_alloc_node(s, NULL, flags, node, caller);
 	ret = kasan_kmalloc(s, ret, size, flags);
 
 	return ret;
 }
+
+void *__kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	return __do_kmalloc_node(size, flags, node, _RET_IP_);
+}
 EXPORT_SYMBOL(__kmalloc_node);
 
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 				  int node, unsigned long caller)
 {
-	struct kmem_cache *s;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return kmalloc_large_node(size, gfpflags, node);
-
-	s = kmalloc_slab(size, gfpflags);
-
-	if (unlikely(ZERO_OR_NULL_PTR(s)))
-		return s;
-
-	ret = __kmem_cache_alloc_node(s, NULL, gfpflags, node, caller);
-	ret = kasan_kmalloc(s, ret, size, gfpflags);
-
-	return ret;
+	return __do_kmalloc_node(size, gfpflags, node, caller);
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
 
-- 
2.32.0

From nobody Mon May 11 04:52:14 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew Wilcox, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 21/23] mm/sl[au]b: remove kmem_cache_alloc_node_trace()
Date: Thu, 14 Apr 2022 17:57:25 +0900
Message-Id: <20220414085727.643099-22-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>

kmem_cache_alloc_node_trace() was introduced by commit 4a92379bdfb4
("slub tracing: move trace calls out of always inlined functions to
reduce kernel code size") to avoid inlining tracepoints for inlined
kmalloc function calls.

Now that we use the same tracepoint for kmalloc and normal caches,
kmem_cache_alloc_node_trace() can be replaced with
__kmem_cache_alloc_node() and kasan_kmalloc().
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/linux/slab.h | 26 ++++++++------------------
 mm/slab.c            | 19 -------------------
 mm/slub.c            | 16 ----------------
 3 files changed, 8 insertions(+), 53 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 0630c37ee630..c1aed9d97cf2 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -497,21 +497,6 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 	kmem_cache_free_bulk(NULL, size, p);
 }
 
-#ifdef CONFIG_TRACING
-extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
-					 int node, size_t size) __assume_slab_alignment
-					 __alloc_size(4);
-#else /* CONFIG_TRACING */
-static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
-							 int node, size_t size)
-{
-	void *ret = kmem_cache_alloc_node(s, gfpflags, node);
-
-	ret = kasan_kmalloc(s, ret, size, gfpflags);
-	return ret;
-}
-#endif /* CONFIG_TRACING */
-
 extern void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 				__assume_page_alignment __alloc_size(1);
 
@@ -523,6 +508,9 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
 #ifndef CONFIG_SLOB
 static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
 {
+	struct kmem_cache *s;
+	void *objp;
+
 	if (__builtin_constant_p(size)) {
 		unsigned int index;
 
@@ -534,9 +522,11 @@ static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t fla
 		if (!index)
 			return ZERO_SIZE_PTR;
 
-		return kmem_cache_alloc_node_trace(
-				kmalloc_caches[kmalloc_type(flags)][index],
-				flags, node, size);
+		s = kmalloc_caches[kmalloc_type(flags)][index];
+
+		objp = __kmem_cache_alloc_node(s, NULL, flags, node, _RET_IP_);
+		objp = kasan_kmalloc(s, objp, size, flags);
+		return objp;
 	}
 	return __kmalloc_node(size, flags, node);
 }
diff --git a/mm/slab.c b/mm/slab.c
index fc00aca62ae3..24010e72f603 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3508,25 +3508,6 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_bulk);
 
-#ifdef CONFIG_TRACING
-void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
-				  gfp_t flags,
-				  int nodeid,
-				  size_t size)
-{
-	void *ret;
-
-	ret = slab_alloc_node(cachep, NULL, flags, nodeid, size, _RET_IP_);
-
-	ret = kasan_kmalloc(cachep, ret, size, flags);
-	trace_kmem_cache_alloc(cachep->name, _RET_IP_, ret,
-			       size, cachep->size,
-			       flags, nodeid);
-	return ret;
-}
-EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
-#endif
-
 #ifdef CONFIG_PRINTK
 void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 {
diff --git a/mm/slub.c b/mm/slub.c
index bc9c96ce0521..1899c7e1de10 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3222,22 +3222,6 @@ void *__kmem_cache_alloc_node(struct kmem_cache *s, struct list_lru *lru, gfp_t
 }
 EXPORT_SYMBOL(__kmem_cache_alloc_node);
 
-#ifdef CONFIG_TRACING
-void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
-				  gfp_t gfpflags,
-				  int node, size_t size)
-{
-	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, size);
-
-	trace_kmem_cache_alloc(s->name, _RET_IP_, ret,
-			       size, s->size, gfpflags, node);
-
-	ret = kasan_kmalloc(s, ret, size, gfpflags);
-	return ret;
-}
-EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
-#endif
-
 /*
  * Slow path handling. This may still be called frequently since objects
  * have a longer lifetime than the cpu slabs in most processing loads.
-- 
2.32.0

From nobody Mon May 11 04:52:14 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew Wilcox, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 22/23] mm/sl[auo]b: move definition of __ksize() to mm/slab.h
Date: Thu, 14 Apr 2022 17:57:26 +0900
Message-Id: <20220414085727.643099-23-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>

__ksize() is only called by KASAN. Remove the export and move its
declaration to mm/slab.h, as we don't want to grow its callers.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h |  1 -
 mm/slab.h            |  2 ++
 mm/slab_common.c     | 11 +----------
 mm/slob.c            |  1 -
 4 files changed, 3 insertions(+), 12 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index c1aed9d97cf2..e30c0675d6b2 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -187,7 +187,6 @@ int kmem_cache_shrink(struct kmem_cache *s);
 void * __must_check krealloc(const void *objp, size_t new_size, gfp_t flags) __alloc_size(2);
 void kfree(const void *objp);
 void kfree_sensitive(const void *objp);
-size_t __ksize(const void *objp);
 size_t ksize(const void *objp);
 #ifdef CONFIG_PRINTK
 bool kmem_valid_obj(void *object);
diff --git a/mm/slab.h b/mm/slab.h
index 45ddb19df319..5a500894317b 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -690,6 +690,8 @@ void free_large_kmalloc(struct folio *folio, void *object);
 
 #endif /* CONFIG_SLOB */
 
+size_t __ksize(const void *objp);
+
 static inline size_t slab_ksize(const struct kmem_cache *s)
 {
 #ifndef CONFIG_SLUB
diff --git a/mm/slab_common.c b/mm/slab_common.c
index af563e64e8aa..8facade42bdd 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -984,15 +984,7 @@ void kfree(const void *x)
 }
 EXPORT_SYMBOL(kfree);
 
-/**
- * __ksize -- Uninstrumented ksize.
- * @objp: pointer to the object
- *
- * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same
- * safety checks as ksize() with KASAN instrumentation enabled.
- *
- * Return: size of the actual memory used by @objp in bytes
- */
+/* Uninstrumented ksize. Only called by KASAN. */
 size_t __ksize(const void *object)
 {
 	struct folio *folio;
@@ -1007,7 +999,6 @@ size_t __ksize(const void *object)
 
 	return slab_ksize(folio_slab(folio)->slab_cache);
 }
-EXPORT_SYMBOL(__ksize);
 #endif /* !CONFIG_SLOB */
 
 gfp_t kmalloc_fix_flags(gfp_t flags)
diff --git a/mm/slob.c b/mm/slob.c
index e893d182d471..adf794d58eb5 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -576,7 +576,6 @@ size_t __ksize(const void *block)
 	m = (unsigned int *)(block - align);
 	return SLOB_UNITS(*m) * SLOB_UNIT;
 }
-EXPORT_SYMBOL(__ksize);
 
 int __kmem_cache_create(struct kmem_cache *c, slab_flags_t flags)
 {
-- 
2.32.0

From nobody Mon May 11 04:52:14 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew Wilcox, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 23/23] mm/sl[au]b: check if large object is valid in __ksize()
Date: Thu, 14 Apr 2022 17:57:27 +0900
Message-Id: <20220414085727.643099-24-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>

__ksize() returns the size of objects allocated from the slab
allocator. When an invalid object is passed to __ksize(), returning
zero prevents further memory corruption and lets the caller check
for the error.

If the address of a large object is not the beginning of its folio,
or if the folio is too small, it must be invalid. Return zero in
such cases.
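The check can be sketched in a self-contained userspace form (illustrative only: `struct folio` here is a two-field stand-in, and the `KMALLOC_MAX_CACHE_SIZE` value is an example, not the kernel's configuration-dependent constant). A valid large kmalloc object starts at the first byte of its folio, and that folio must be bigger than the largest cache-backed size, since anything smaller would have been served from a kmalloc cache:

```c
#include <stddef.h>
#include <assert.h>

/* Example threshold; in the kernel this depends on the configuration. */
#define KMALLOC_MAX_CACHE_SIZE (8 * 1024)

/* Minimal stand-in for struct folio: a base address and a size. */
struct folio {
	char *addr;
	size_t size;
};

/* Sketch of the validity check on the !folio_test_slab() path. */
static size_t large_ksize(const void *object, const struct folio *folio)
{
	if (object != (const void *)folio->addr ||
	    folio->size <= KMALLOC_MAX_CACHE_SIZE)
		return 0;	/* invalid object: report zero size */
	return folio->size;
}
```

A pointer into the middle of a folio, or a folio no bigger than the largest cache size, yields zero instead of a bogus size that the caller might use for a copy.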
Suggested-by: Vlastimil Babka
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/slab_common.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 8facade42bdd..a14f9990b159 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -994,8 +994,12 @@ size_t __ksize(const void *object)
 
 	folio = virt_to_folio(object);
 
-	if (unlikely(!folio_test_slab(folio)))
+	if (unlikely(!folio_test_slab(folio))) {
+		if (object != folio_address(folio) ||
+		    folio_size(folio) <= KMALLOC_MAX_CACHE_SIZE)
+			return 0;
 		return folio_size(folio);
+	}
 
 	return slab_ksize(folio_slab(folio)->slab_cache);
 }
-- 
2.32.0