From nobody Sat Apr 11 02:17:48 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 01/17] mm/slab: move NUMA-related code to __do_cache_alloc()
Date: Wed, 17 Aug 2022 19:18:10 +0900
Message-Id: <20220817101826.236819-2-42.hyeyoo@gmail.com>
In-Reply-To: <20220817101826.236819-1-42.hyeyoo@gmail.com>
References: <20220817101826.236819-1-42.hyeyoo@gmail.com>
X-Mailing-List: linux-kernel@vger.kernel.org

To implement slab_alloc_node() independent of the NUMA configuration, move
the NUMA fallback/alternate allocation code into __do_cache_alloc().

One functional change here is that the availability of the node is no
longer checked when allocating from the local node.
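The node-dispatch logic this patch concentrates in __do_cache_alloc() can be modeled outside the kernel. The sketch below is a minimal user-space stand-in, not the kernel code: `do_cache_alloc`, `alloc_local`, `alloc_on_node`, and the two-node free-object counters are all invented for illustration, mirroring only the shape of the dispatch (a `NUMA_NO_NODE` sentinel tries the local node first, an explicit node goes straight to that node, and a miss falls through to the node-aware path).

```c
/* User-space model of the dispatch the patch moves into __do_cache_alloc().
 * All names and data structures are illustrative stand-ins. */
#define NUMA_NO_NODE (-1)
#define LOCAL_NODE   0
#define NR_NODES     2

/* Per-node free-object counters standing in for per-node slab caches. */
static int nr_free[NR_NODES] = { 1, 1 };

static int local_node(void) { return LOCAL_NODE; }

/* Stand-in for ____cache_alloc(): local-node fast path, no fallback. */
static int alloc_local(void)
{
	if (nr_free[LOCAL_NODE] > 0) {
		nr_free[LOCAL_NODE]--;
		return LOCAL_NODE;
	}
	return -1; /* miss: caller must fall back */
}

/* Stand-in for ____cache_alloc_node(): allocate from an explicit node. */
static int alloc_on_node(int node)
{
	if (node >= 0 && node < NR_NODES && nr_free[node] > 0) {
		nr_free[node]--;
		return node;
	}
	return -1;
}

/* Mirrors the reworked __do_cache_alloc(): NUMA_NO_NODE means "prefer the
 * local node"; an explicit remote node skips the local fast path. */
static int do_cache_alloc(int nodeid)
{
	int got = -1;

	if (nodeid == NUMA_NO_NODE) {
		got = alloc_local();
		nodeid = local_node();
	} else if (nodeid == local_node()) {
		got = alloc_local();
	}
	if (got < 0)
		got = alloc_on_node(nodeid); /* may still fail */
	return got;
}
```

With one free object per node, an unconstrained allocation drains node 0, an explicit request drains node 1, and a further unconstrained request fails, which is the same ordering the kernel code produces.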
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 mm/slab.c | 68 +++++++++++++++++++++++++------------------------------
 1 file changed, 31 insertions(+), 37 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 10e96137b44f..1656393f55cb 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3180,13 +3180,14 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
 	return obj ? obj : fallback_alloc(cachep, flags);
 }
 
+static void *__do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid);
+
 static __always_inline void *
 slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_size,
 		unsigned long caller)
 {
 	unsigned long save_flags;
 	void *ptr;
-	int slab_node = numa_mem_id();
 	struct obj_cgroup *objcg = NULL;
 	bool init = false;
 
@@ -3200,30 +3201,7 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
 		goto out_hooks;
 
 	local_irq_save(save_flags);
-
-	if (nodeid == NUMA_NO_NODE)
-		nodeid = slab_node;
-
-	if (unlikely(!get_node(cachep, nodeid))) {
-		/* Node not bootstrapped yet */
-		ptr = fallback_alloc(cachep, flags);
-		goto out;
-	}
-
-	if (nodeid == slab_node) {
-		/*
-		 * Use the locally cached objects if possible.
-		 * However ____cache_alloc does not allow fallback
-		 * to other nodes. It may fail while we still have
-		 * objects on other nodes available.
-		 */
-		ptr = ____cache_alloc(cachep, flags);
-		if (ptr)
-			goto out;
-	}
-	/* ___cache_alloc_node can fall back to other nodes */
-	ptr = ____cache_alloc_node(cachep, flags, nodeid);
-out:
+	ptr = __do_cache_alloc(cachep, flags, nodeid);
 	local_irq_restore(save_flags);
 	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
 	init = slab_want_init_on_alloc(flags, cachep);
@@ -3234,31 +3212,46 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
 }
 
 static __always_inline void *
-__do_cache_alloc(struct kmem_cache *cache, gfp_t flags)
+__do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
-	void *objp;
+	void *objp = NULL;
+	int slab_node = numa_mem_id();
 
-	if (current->mempolicy || cpuset_do_slab_mem_spread()) {
-		objp = alternate_node_alloc(cache, flags);
-		if (objp)
-			goto out;
+	if (nodeid == NUMA_NO_NODE) {
+		if (current->mempolicy || cpuset_do_slab_mem_spread()) {
+			objp = alternate_node_alloc(cachep, flags);
+			if (objp)
+				goto out;
+		}
+		/*
+		 * Use the locally cached objects if possible.
+		 * However ____cache_alloc does not allow fallback
+		 * to other nodes. It may fail while we still have
+		 * objects on other nodes available.
+		 */
+		objp = ____cache_alloc(cachep, flags);
+		nodeid = slab_node;
+	} else if (nodeid == slab_node) {
+		objp = ____cache_alloc(cachep, flags);
+	} else if (!get_node(cachep, nodeid)) {
+		/* Node not bootstrapped yet */
+		objp = fallback_alloc(cachep, flags);
+		goto out;
 	}
-	objp = ____cache_alloc(cache, flags);
 
 	/*
 	 * We may just have run out of memory on the local node.
	 * ____cache_alloc_node() knows how to locate memory on other nodes
 	 */
 	if (!objp)
-		objp = ____cache_alloc_node(cache, flags, numa_mem_id());
-
+		objp = ____cache_alloc_node(cachep, flags, nodeid);
 out:
 	return objp;
 }
 
 #else
 
 static __always_inline void *
-__do_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
+__do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid __maybe_unused)
 {
 	return ____cache_alloc(cachep, flags);
 }
@@ -3284,7 +3277,7 @@ slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
 		goto out;
 
 	local_irq_save(save_flags);
-	objp = __do_cache_alloc(cachep, flags);
+	objp = __do_cache_alloc(cachep, flags, NUMA_NO_NODE);
 	local_irq_restore(save_flags);
 	objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
 	prefetchw(objp);
@@ -3521,7 +3514,8 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 
 	local_irq_disable();
 	for (i = 0; i < size; i++) {
-		void *objp = kfence_alloc(s, s->object_size, flags) ?: __do_cache_alloc(s, flags);
+		void *objp = kfence_alloc(s, s->object_size, flags) ?:
+			     __do_cache_alloc(s, flags, NUMA_NO_NODE);
 
 		if (unlikely(!objp))
 			goto error;
-- 
2.32.0

From nobody Sat Apr 11 02:17:48 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 02/17] mm/slab: cleanup slab_alloc() and slab_alloc_node()
Date: Wed, 17 Aug 2022 19:18:11 +0900
Message-Id: <20220817101826.236819-3-42.hyeyoo@gmail.com>
In-Reply-To: <20220817101826.236819-1-42.hyeyoo@gmail.com>
References: <20220817101826.236819-1-42.hyeyoo@gmail.com>

Make slab_alloc_node() available even when CONFIG_NUMA=n, and make
slab_alloc() a wrapper of slab_alloc_node(). This is necessary for
further cleanup.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 mm/slab.c | 49 +++++++++++++------------------------------------
 1 file changed, 13 insertions(+), 36 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 1656393f55cb..748dd085f38e 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3180,37 +3180,6 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
 	return obj ? obj : fallback_alloc(cachep, flags);
 }
 
-static void *__do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid);
-
-static __always_inline void *
-slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_size,
-		unsigned long caller)
-{
-	unsigned long save_flags;
-	void *ptr;
-	struct obj_cgroup *objcg = NULL;
-	bool init = false;
-
-	flags &= gfp_allowed_mask;
-	cachep = slab_pre_alloc_hook(cachep, NULL, &objcg, 1, flags);
-	if (unlikely(!cachep))
-		return NULL;
-
-	ptr = kfence_alloc(cachep, orig_size, flags);
-	if (unlikely(ptr))
-		goto out_hooks;
-
-	local_irq_save(save_flags);
-	ptr = __do_cache_alloc(cachep, flags, nodeid);
-	local_irq_restore(save_flags);
-	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
-	init = slab_want_init_on_alloc(flags, cachep);
-
-out_hooks:
-	slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr, init);
-	return ptr;
-}
-
 static __always_inline void *
 __do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
@@ -3259,8 +3228,8 @@ __do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid __maybe_unus
 #endif /* CONFIG_NUMA */
 
 static __always_inline void *
-slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
-	   size_t orig_size, unsigned long caller)
+slab_alloc_node(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
+		int nodeid, size_t orig_size, unsigned long caller)
 {
 	unsigned long save_flags;
 	void *objp;
@@ -3277,7 +3246,7 @@ slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
 		goto out;
 
 	local_irq_save(save_flags);
-	objp = __do_cache_alloc(cachep, flags, NUMA_NO_NODE);
+	objp = __do_cache_alloc(cachep, flags, nodeid);
 	local_irq_restore(save_flags);
 	objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
 	prefetchw(objp);
@@ -3288,6 +3257,14 @@ slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
 	return objp;
 }
 
+static __always_inline void *
+slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
+	   size_t orig_size, unsigned long caller)
+{
+	return slab_alloc_node(cachep, lru, flags, NUMA_NO_NODE, orig_size,
+			       caller);
+}
+
 /*
  * Caller needs to acquire correct kmem_cache_node's list_lock
  * @list: List of detached free slabs should be freed by caller
@@ -3574,7 +3551,7 @@ EXPORT_SYMBOL(kmem_cache_alloc_trace);
  */
 void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
-	void *ret = slab_alloc_node(cachep, flags, nodeid, cachep->object_size, _RET_IP_);
+	void *ret = slab_alloc_node(cachep, NULL, flags, nodeid, cachep->object_size, _RET_IP_);
 
 	trace_kmem_cache_alloc_node(_RET_IP_, ret, cachep,
 				    cachep->object_size, cachep->size,
@@ -3592,7 +3569,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 {
 	void *ret;
 
-	ret = slab_alloc_node(cachep, flags, nodeid, size, _RET_IP_);
+	ret = slab_alloc_node(cachep, NULL, flags, nodeid, size, _RET_IP_);
 
 	ret = kasan_kmalloc(cachep, ret, size, flags);
 	trace_kmalloc_node(_RET_IP_, ret, cachep,
-- 
2.32.0

From nobody Sat Apr 11 02:17:48 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 03/17] mm/slab_common: remove CONFIG_NUMA ifdefs for common kmalloc functions
Date: Wed, 17 Aug 2022 19:18:12 +0900
Message-Id: <20220817101826.236819-4-42.hyeyoo@gmail.com>
In-Reply-To: <20220817101826.236819-1-42.hyeyoo@gmail.com>
References: <20220817101826.236819-1-42.hyeyoo@gmail.com>

Now that slab_alloc_node() is available for SLAB when CONFIG_NUMA=n,
remove the CONFIG_NUMA ifdefs for common kmalloc functions.
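The cleanup works because a node-taking entry point can exist unconditionally once the sentinel path handles the non-NUMA case, so the compile-time `#else` wrappers become unnecessary. A minimal user-space sketch of that pattern, with invented names (`my_alloc_node`, `my_alloc`), not the kernel API:

```c
/* Sketch: one always-defined node-taking function plus a sentinel,
 * instead of two #ifdef CONFIG_NUMA variants. Names are illustrative. */
#include <stddef.h>
#include <stdlib.h>

#define NUMA_NO_NODE (-1)

/* Always available; a build without NUMA can simply ignore the hint,
 * which is what lets the #ifdef'd fallback definitions be deleted. */
static void *my_alloc_node(size_t size, int node)
{
	(void)node; /* node preference is only a hint here */
	return malloc(size);
}

/* The non-node API is a thin wrapper with no #else branch. */
static void *my_alloc(size_t size)
{
	return my_alloc_node(size, NUMA_NO_NODE);
}
```

The design choice mirrored here is that runtime dispatch on a sentinel value costs one comparison, while the `#ifdef` variants doubled the API surface and had to be kept in sync by hand.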
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 28 ----------------------------
 mm/slab.c            |  2 --
 mm/slob.c            |  5 +----
 mm/slub.c            |  6 ------
 4 files changed, 1 insertion(+), 40 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 0fefdf528e0d..4754c834b0e3 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -456,38 +456,18 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 	kmem_cache_free_bulk(NULL, size, p);
 }
 
-#ifdef CONFIG_NUMA
 void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment __alloc_size(1);
 void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment __malloc;
-#else
-static __always_inline __alloc_size(1) void *__kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	return __kmalloc(size, flags);
-}
-
-static __always_inline void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node)
-{
-	return kmem_cache_alloc(s, flags);
-}
-#endif
 
 #ifdef CONFIG_TRACING
 extern void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, size_t size) __assume_slab_alignment __alloc_size(3);
 
-#ifdef CONFIG_NUMA
 extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
 					 int node, size_t size) __assume_slab_alignment __alloc_size(4);
-#else
-static __always_inline __alloc_size(4) void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
-	gfp_t gfpflags, int node, size_t size)
-{
-	return kmem_cache_alloc_trace(s, gfpflags, size);
-}
-#endif /* CONFIG_NUMA */
 
 #else /* CONFIG_TRACING */
 static __always_inline __alloc_size(3) void *kmem_cache_alloc_trace(struct kmem_cache *s,
@@ -701,20 +681,12 @@ static inline __alloc_size(1, 2) void *kcalloc_node(size_t n, size_t size, gfp_t
 }
 
 
-#ifdef CONFIG_NUMA
 extern void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
 					 unsigned long caller) __alloc_size(1);
 #define kmalloc_node_track_caller(size, flags, node) \
 	__kmalloc_node_track_caller(size, flags, node, \
 				    _RET_IP_)
 
-#else /* CONFIG_NUMA */
-
-#define kmalloc_node_track_caller(size, flags, node) \
-	kmalloc_track_caller(size, flags)
-
-#endif /* CONFIG_NUMA */
-
 /*
  * Shortcuts
  */
diff --git a/mm/slab.c b/mm/slab.c
index 748dd085f38e..0acd65358c83 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3535,7 +3535,6 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
 #endif
 
-#ifdef CONFIG_NUMA
 /**
  * kmem_cache_alloc_node - Allocate an object on the specified node
  * @cachep: The cache to allocate from.
@@ -3609,7 +3608,6 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
 	return __do_kmalloc_node(size, flags, node, caller);
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
-#endif /* CONFIG_NUMA */
 
 #ifdef CONFIG_PRINTK
 void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
diff --git a/mm/slob.c b/mm/slob.c
index 2bd4f476c340..74d850967213 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -536,14 +536,12 @@ void *__kmalloc_track_caller(size_t size, gfp_t gfp, unsigned long caller)
 }
 EXPORT_SYMBOL(__kmalloc_track_caller);
 
-#ifdef CONFIG_NUMA
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfp,
 				  int node, unsigned long caller)
 {
 	return __do_kmalloc_node(size, gfp, node, caller);
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
-#endif
 
 void kfree(const void *block)
 {
@@ -647,7 +645,7 @@ void *kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru, gfp_
 	return slob_alloc_node(cachep, flags, NUMA_NO_NODE);
 }
 EXPORT_SYMBOL(kmem_cache_alloc_lru);
-#ifdef CONFIG_NUMA
+
 void *__kmalloc_node(size_t size, gfp_t gfp, int node)
 {
 	return __do_kmalloc_node(size, gfp, node, _RET_IP_);
@@ -659,7 +657,6 @@ void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t gfp, int node)
 	return slob_alloc_node(cachep, gfp, node);
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node);
-#endif
 
 static void __kmem_cache_free(void *b, int size)
 {
diff --git a/mm/slub.c b/mm/slub.c
index 862dbd9af4f5..b29b3c9d3175 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3287,7 +3287,6 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
 #endif
 
-#ifdef CONFIG_NUMA
 void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
 {
 	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, s->object_size);
@@ -3314,7 +3313,6 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
 #endif
-#endif /* CONFIG_NUMA */
 
 /*
  * Slow path handling. This may still be called frequently since objects
@@ -4427,7 +4425,6 @@ void *__kmalloc(size_t size, gfp_t flags)
 }
 EXPORT_SYMBOL(__kmalloc);
 
-#ifdef CONFIG_NUMA
 static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 {
 	struct page *page;
@@ -4474,7 +4471,6 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
-#endif /* CONFIG_NUMA */
 
 #ifdef CONFIG_HARDENED_USERCOPY
 /*
@@ -4930,7 +4926,6 @@ void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller)
 }
 EXPORT_SYMBOL(__kmalloc_track_caller);
 
-#ifdef CONFIG_NUMA
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 				  int node, unsigned long caller)
 {
@@ -4960,7 +4955,6 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
-#endif
 
 #ifdef CONFIG_SYSFS
 static int count_inuse(struct slab *slab)
-- 
2.32.0

From nobody Sat Apr 11 02:17:48 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 04/17] mm/slab_common: cleanup kmalloc_track_caller()
Date: Wed, 17 Aug 2022 19:18:13 +0900
Message-Id: <20220817101826.236819-5-42.hyeyoo@gmail.com>
In-Reply-To: <20220817101826.236819-1-42.hyeyoo@gmail.com>
References: <20220817101826.236819-1-42.hyeyoo@gmail.com>

Make kmalloc_track_caller() a wrapper of kmalloc_node_track_caller().
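The wrapper shape of this patch can be sketched in user space: a single caller-tracking function takes a node argument, and the non-node macro forwards to it with the `NUMA_NO_NODE` sentinel and the call-site address. Everything below (`my_alloc_node_track_caller`, `last_caller`, the use of `__builtin_return_address` in place of the kernel's `_RET_IP_`) is an invented illustration, not the kernel implementation.

```c
/* Sketch of the track-caller wrapper pattern. Names are stand-ins;
 * __builtin_return_address(0) models the kernel's _RET_IP_. */
#include <stddef.h>
#include <stdlib.h>

#define NUMA_NO_NODE (-1)

/* Records the most recent call site, standing in for leak tracking. */
static unsigned long last_caller;

static void *my_alloc_node_track_caller(size_t size, int node,
					unsigned long caller)
{
	(void)node;           /* node hint unused in this model */
	last_caller = caller; /* remember who asked for the memory */
	return malloc(size);
}

/* The non-node macro is now defined in terms of the node-taking one,
 * so only a single underlying implementation remains. */
#define my_alloc_track_caller(size) \
	my_alloc_node_track_caller((size), NUMA_NO_NODE, \
			(unsigned long)__builtin_return_address(0))
```

Because the call-site capture happens in the macro, the wrapper still records the real caller rather than the wrapper itself, which is the property the kernel macro preserves. (`__builtin_return_address` is GCC/Clang-specific.)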
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 17 ++++++++---------
 mm/slab.c            |  6 ------
 mm/slob.c            |  6 ------
 mm/slub.c            | 22 ----------------------
 4 files changed, 8 insertions(+), 43 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 4754c834b0e3..a0e57df3d5a4 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -651,6 +651,12 @@ static inline __alloc_size(1, 2) void *kcalloc(size_t n, size_t size, gfp_t flag
 	return kmalloc_array(n, size, flags | __GFP_ZERO);
 }
 
+void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
+				  unsigned long caller) __alloc_size(1);
+#define kmalloc_node_track_caller(size, flags, node) \
+	__kmalloc_node_track_caller(size, flags, node, \
+				    _RET_IP_)
+
 /*
  * kmalloc_track_caller is a special version of kmalloc that records the
  * calling function of the routine calling it for slab leak tracking instead
@@ -659,9 +665,9 @@ static inline __alloc_size(1, 2) void *kcalloc(size_t n, size_t size, gfp_t flag
 * allocator where we care about the real place the memory allocation
 * request comes from.
 */
-extern void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller);
 #define kmalloc_track_caller(size, flags) \
-	__kmalloc_track_caller(size, flags, _RET_IP_)
+	__kmalloc_node_track_caller(size, flags, \
+				    NUMA_NO_NODE, _RET_IP_)
 
 static inline __alloc_size(1, 2) void *kmalloc_array_node(size_t n, size_t size, gfp_t flags,
							  int node)
@@ -680,13 +686,6 @@ static inline __alloc_size(1, 2) void *kcalloc_node(size_t n, size_t size, gfp_t
 	return kmalloc_array_node(n, size, flags | __GFP_ZERO, node);
 }
 
-
-extern void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
-					 unsigned long caller) __alloc_size(1);
-#define kmalloc_node_track_caller(size, flags, node) \
-	__kmalloc_node_track_caller(size, flags, node, \
-				    _RET_IP_)
-
 /*
  * Shortcuts
  */
diff --git a/mm/slab.c b/mm/slab.c
index 0acd65358c83..611e630ff860 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3665,12 +3665,6 @@ void *__kmalloc(size_t size, gfp_t flags)
 }
 EXPORT_SYMBOL(__kmalloc);
 
-void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller)
-{
-	return __do_kmalloc(size, flags, caller);
-}
-EXPORT_SYMBOL(__kmalloc_track_caller);
-
 /**
  * kmem_cache_free - Deallocate an object
  * @cachep: The cache the allocation was from.
diff --git a/mm/slob.c b/mm/slob.c
index 74d850967213..96b08acd72ce 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -530,12 +530,6 @@ void *__kmalloc(size_t size, gfp_t gfp)
 }
 EXPORT_SYMBOL(__kmalloc);
 
-void *__kmalloc_track_caller(size_t size, gfp_t gfp, unsigned long caller)
-{
-	return __do_kmalloc_node(size, gfp, NUMA_NO_NODE, caller);
-}
-EXPORT_SYMBOL(__kmalloc_track_caller);
-
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfp,
 				  int node, unsigned long caller)
 {
diff --git a/mm/slub.c b/mm/slub.c
index b29b3c9d3175..c82a4062f730 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4904,28 +4904,6 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
 	return 0;
 }
 
-void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller)
-{
-	struct kmem_cache *s;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return kmalloc_large(size, gfpflags);
-
-	s = kmalloc_slab(size, gfpflags);
-
-	if (unlikely(ZERO_OR_NULL_PTR(s)))
-		return s;
-
-	ret = slab_alloc(s, NULL, gfpflags, caller, size);
-
-	/* Honor the call site pointer we received. */
-	trace_kmalloc(caller, ret, s, size, s->size, gfpflags);
-
-	return ret;
-}
-EXPORT_SYMBOL(__kmalloc_track_caller);
-
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 				  int node, unsigned long caller)
 {
-- 
2.32.0

From nobody Sat Apr 11 02:17:48 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 05/17] mm/sl[au]b: factor out __do_kmalloc_node()
Date: Wed, 17 Aug 2022 19:18:14 +0900
Message-Id: <20220817101826.236819-6-42.hyeyoo@gmail.com>
In-Reply-To: <20220817101826.236819-1-42.hyeyoo@gmail.com>
References: <20220817101826.236819-1-42.hyeyoo@gmail.com>

__kmalloc(), __kmalloc_node(), and __kmalloc_node_track_caller() mostly do
the same job. Factor out the common code into __do_kmalloc_node().
Note that this patch also fixes a missing kasan_kmalloc() in SLUB's
__kmalloc_node_track_caller().

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 mm/slab.c | 30 +----------------------
 mm/slub.c | 71 +++++++++++++++----------------------------------------
 2 files changed, 20 insertions(+), 81 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 611e630ff860..8c08d7f3dead 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3631,37 +3631,9 @@ void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 }
 #endif
 
-/**
- * __do_kmalloc - allocate memory
- * @size: how many bytes of memory are required.
- * @flags: the type of memory to allocate (see kmalloc).
- * @caller: function caller for debug tracking of the caller
- *
- * Return: pointer to the allocated memory or %NULL in case of error
- */
-static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
-					  unsigned long caller)
-{
-	struct kmem_cache *cachep;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return NULL;
-	cachep = kmalloc_slab(size, flags);
-	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
-		return cachep;
-	ret = slab_alloc(cachep, NULL, flags, size, caller);
-
-	ret = kasan_kmalloc(cachep, ret, size, flags);
-	trace_kmalloc(caller, ret, cachep,
-		      size, cachep->size, flags);
-
-	return ret;
-}
-
 void *__kmalloc(size_t size, gfp_t flags)
 {
-	return __do_kmalloc(size, flags, _RET_IP_);
+	return __do_kmalloc_node(size, flags, NUMA_NO_NODE, _RET_IP_);
 }
 EXPORT_SYMBOL(__kmalloc);
 
diff --git a/mm/slub.c b/mm/slub.c
index c82a4062f730..f9929ba858ec 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4402,29 +4402,6 @@ static int __init setup_slub_min_objects(char *str)
 
 __setup("slub_min_objects=", setup_slub_min_objects);
 
-void *__kmalloc(size_t size, gfp_t flags)
-{
-	struct kmem_cache *s;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return kmalloc_large(size, flags);
-
-	s = kmalloc_slab(size, flags);
-
-	if (unlikely(ZERO_OR_NULL_PTR(s)))
-		return s;
-
-	ret = slab_alloc(s, NULL, flags, _RET_IP_, size);
-
-	trace_kmalloc(_RET_IP_, ret, s, size, s->size, flags);
-
-	ret = kasan_kmalloc(s, ret, size, flags);
-
-	return ret;
-}
-EXPORT_SYMBOL(__kmalloc);
-
 static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 {
 	struct page *page;
@@ -4442,7 +4419,8 @@ static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 	return kmalloc_large_node_hook(ptr, size, flags);
 }
 
-void *__kmalloc_node(size_t size, gfp_t flags, int node)
+static __always_inline
+void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
 {
 	struct kmem_cache *s;
 	void *ret;
@@ -4450,7 +4428,7 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
 		ret = kmalloc_large_node(size, flags, node);
 
-		trace_kmalloc_node(_RET_IP_, ret, NULL,
+		trace_kmalloc_node(caller, ret, NULL,
 				   size, PAGE_SIZE << get_order(size),
 				   flags, node);
 
@@ -4462,16 +4440,28 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 	if (unlikely(ZERO_OR_NULL_PTR(s)))
 		return s;
 
-	ret = slab_alloc_node(s, NULL, flags, node, _RET_IP_, size);
+	ret = slab_alloc_node(s, NULL, flags, node, caller, size);
 
-	trace_kmalloc_node(_RET_IP_, ret, s, size, s->size, flags, node);
+	trace_kmalloc_node(caller, ret, s, size, s->size, flags, node);
 
 	ret = kasan_kmalloc(s, ret, size, flags);
 
 	return ret;
 }
+
+void *__kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	return __do_kmalloc_node(size, flags, node, _RET_IP_);
+}
 EXPORT_SYMBOL(__kmalloc_node);
 
+void *__kmalloc(size_t size, gfp_t flags)
+{
+	return __do_kmalloc_node(size, flags, NUMA_NO_NODE, _RET_IP_);
+}
+EXPORT_SYMBOL(__kmalloc);
+
 #ifdef CONFIG_HARDENED_USERCOPY
 /*
  * Rejects incorrectly sized objects and objects that are to be copied
@@ -4905,32 +4895,9 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
 }
 
 void
 *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
-			     int node, unsigned long caller)
+			     int node, unsigned long caller)
 {
-	struct kmem_cache *s;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
-		ret = kmalloc_large_node(size, gfpflags, node);
-
-		trace_kmalloc_node(caller, ret, NULL,
-				   size, PAGE_SIZE << get_order(size),
-				   gfpflags, node);
-
-		return ret;
-	}
-
-	s = kmalloc_slab(size, gfpflags);
-
-	if (unlikely(ZERO_OR_NULL_PTR(s)))
-		return s;
-
-	ret = slab_alloc_node(s, NULL, gfpflags, node, caller, size);
-
-	/* Honor the call site pointer we received. */
-	trace_kmalloc_node(caller, ret, s, size, s->size, gfpflags, node);
-
-	return ret;
+	return __do_kmalloc_node(size, gfpflags, node, caller);
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
 
-- 
2.32.0

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 06/17] mm/slab_common: fold kmalloc_order_trace() into kmalloc_large()
Date: Wed, 17 Aug 2022 19:18:15 +0900
Message-Id: <20220817101826.236819-7-42.hyeyoo@gmail.com>
In-Reply-To: <20220817101826.236819-1-42.hyeyoo@gmail.com>
References: <20220817101826.236819-1-42.hyeyoo@gmail.com>

There is no caller of kmalloc_order_trace() except kmalloc_large(). Fold
it into kmalloc_large() and remove kmalloc_order{,_trace}(). Also add to
kmalloc_large() the tracepoint that was previously in kmalloc_order_trace().
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 22 ++--------------------
 mm/slab_common.c     | 17 ++++-------------
 2 files changed, 6 insertions(+), 33 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index a0e57df3d5a4..15a4c59da59e 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -489,26 +489,8 @@ static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, g
 }
 #endif /* CONFIG_TRACING */
 
-extern void *kmalloc_order(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment
-									 __alloc_size(1);
-
-#ifdef CONFIG_TRACING
-extern void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
-				 __assume_page_alignment __alloc_size(1);
-#else
-static __always_inline __alloc_size(1) void *kmalloc_order_trace(size_t size, gfp_t flags,
-								 unsigned int order)
-{
-	return kmalloc_order(size, flags, order);
-}
-#endif
-
-static __always_inline __alloc_size(1) void *kmalloc_large(size_t size, gfp_t flags)
-{
-	unsigned int order = get_order(size);
-
-	return kmalloc_order_trace(size, flags, order);
-}
-
+void *kmalloc_large(size_t size, gfp_t flags) __assume_page_alignment
+					      __alloc_size(1);
 /**
  * kmalloc - allocate memory
  * @size: how many bytes of memory are required.
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 17996649cfe3..8b1988544b89 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -905,16 +905,16 @@ gfp_t kmalloc_fix_flags(gfp_t flags)
 * directly to the page allocator. We use __GFP_COMP, because we will need to
 * know the allocation order to free the pages properly in kfree.
 */
-void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
+void *kmalloc_large(size_t size, gfp_t flags)
 {
 	void *ret = NULL;
 	struct page *page;
+	unsigned int order = get_order(size);
 
 	if (unlikely(flags & GFP_SLAB_BUG_MASK))
 		flags = kmalloc_fix_flags(flags);
 
-	flags |= __GFP_COMP;
-	page = alloc_pages(flags, order);
+	page = alloc_pages(flags | __GFP_COMP, order);
 	if (likely(page)) {
 		ret = page_address(page);
 		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
@@ -923,19 +923,10 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	ret = kasan_kmalloc_large(ret, size, flags);
 	/* As ret might get tagged, call kmemleak hook after KASAN. */
 	kmemleak_alloc(ret, size, 1, flags);
-	return ret;
-}
-EXPORT_SYMBOL(kmalloc_order);
-
-#ifdef CONFIG_TRACING
-void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
-{
-	void *ret = kmalloc_order(size, flags, order);
 	trace_kmalloc(_RET_IP_, ret, NULL, size, PAGE_SIZE << order, flags);
 	return ret;
 }
-EXPORT_SYMBOL(kmalloc_order_trace);
-#endif
+EXPORT_SYMBOL(kmalloc_large);
 
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
 /* Randomize a generic freelist */
-- 
2.32.0

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 07/17] mm/slub: move kmalloc_large_node() to slab_common.c
Date: Wed, 17 Aug 2022 19:18:16 +0900
Message-Id: <20220817101826.236819-8-42.hyeyoo@gmail.com>
In-Reply-To: <20220817101826.236819-1-42.hyeyoo@gmail.com>
References: <20220817101826.236819-1-42.hyeyoo@gmail.com>

In a later patch, SLAB will also pass requests larger than an order-1 page
to the page allocator. Move kmalloc_large_node() to slab_common.c.

Fold kmalloc_large_node_hook() into kmalloc_large_node(), as there is no
other caller.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h |  4 ++++
 mm/slab_common.c     | 22 ++++++++++++++++++++++
 mm/slub.c            | 25 -------------------------
 3 files changed, 26 insertions(+), 25 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 15a4c59da59e..082499306098 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -491,6 +491,10 @@ static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, g
 
 void *kmalloc_large(size_t size, gfp_t flags) __assume_page_alignment
 					      __alloc_size(1);
+
+void *kmalloc_large_node(size_t size, gfp_t flags, int node) __assume_page_alignment
+							     __alloc_size(1);
+
 /**
  * kmalloc - allocate memory
  * @size: how many bytes of memory are required.
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 8b1988544b89..1b9101f9cb21 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -928,6 +928,28 @@ void *kmalloc_large(size_t size, gfp_t flags)
 }
 EXPORT_SYMBOL(kmalloc_large);
 
+void *kmalloc_large_node(size_t size, gfp_t flags, int node)
+{
+	struct page *page;
+	void *ptr = NULL;
+	unsigned int order = get_order(size);
+
+	flags |= __GFP_COMP;
+	page = alloc_pages_node(node, flags, order);
+	if (page) {
+		ptr = page_address(page);
+		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+				      PAGE_SIZE << order);
+	}
+
+	ptr = kasan_kmalloc_large(ptr, size, flags);
+	/* As ptr might get tagged, call kmemleak hook after KASAN. */
+	kmemleak_alloc(ptr, size, 1, flags);
+
+	return ptr;
+}
+EXPORT_SYMBOL(kmalloc_large_node);
+
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
 /* Randomize a generic freelist */
 static void freelist_randomize(struct rnd_state *state, unsigned int *list,
diff --git a/mm/slub.c b/mm/slub.c
index f9929ba858ec..5e7819ade2c4 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1704,14 +1704,6 @@ static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab,
 * Hooks for other subsystems that check memory allocations. In a typical
 * production configuration these hooks all should produce no code at all.
 */
-static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
-{
-	ptr = kasan_kmalloc_large(ptr, size, flags);
-	/* As ptr might get tagged, call kmemleak hook after KASAN. */
-	kmemleak_alloc(ptr, size, 1, flags);
-	return ptr;
-}
-
 static __always_inline void kfree_hook(void *x)
 {
 	kmemleak_free(x);
@@ -4402,23 +4394,6 @@ static int __init setup_slub_min_objects(char *str)
 
 __setup("slub_min_objects=", setup_slub_min_objects);
 
-static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
-{
-	struct page *page;
-	void *ptr = NULL;
-	unsigned int order = get_order(size);
-
-	flags |= __GFP_COMP;
-	page = alloc_pages_node(node, flags, order);
-	if (page) {
-		ptr = page_address(page);
-		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
-				      PAGE_SIZE << order);
-	}
-
-	return kmalloc_large_node_hook(ptr, size, flags);
-}
-
 static __always_inline
 void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
 {
-- 
2.32.0

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 08/17] mm/slab_common: kmalloc_node: pass large requests to page allocator
Date: Wed, 17 Aug 2022 19:18:17 +0900
Message-Id: <20220817101826.236819-9-42.hyeyoo@gmail.com>
In-Reply-To: <20220817101826.236819-1-42.hyeyoo@gmail.com>
References: <20220817101826.236819-1-42.hyeyoo@gmail.com>

Now that kmalloc_large_node() is in common code, pass large requests in
kmalloc_node() to the page allocator using kmalloc_large_node().

One problem is that there is currently no tracepoint in
kmalloc_large_node(). Instead of simply putting a tracepoint in it, use
kmalloc_large_node{,_notrace}() depending on the caller, so that a useful
call-site address is shown for both the inlined kmalloc_node() and
__kmalloc_node_track_caller() when large objects are allocated.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 26 +++++++++++++++++++-------
 mm/slab.h            |  2 ++
 mm/slab_common.c     | 11 ++++++++++-
 mm/slub.c            |  2 +-
 4 files changed, 32 insertions(+), 9 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 082499306098..fd2e129fc813 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -571,23 +571,35 @@ static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
 	return __kmalloc(size, flags);
 }
 
+#ifndef CONFIG_SLOB
 static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
 {
-#ifndef CONFIG_SLOB
-	if (__builtin_constant_p(size) &&
-		size <= KMALLOC_MAX_CACHE_SIZE) {
-		unsigned int i = kmalloc_index(size);
+	if (__builtin_constant_p(size)) {
+		unsigned int index;
 
-		if (!i)
+		if (size > KMALLOC_MAX_CACHE_SIZE)
+			return kmalloc_large_node(size, flags, node);
+
+		index = kmalloc_index(size);
+
+		if (!index)
 			return ZERO_SIZE_PTR;
 
 		return kmem_cache_alloc_node_trace(
-				kmalloc_caches[kmalloc_type(flags)][i],
+				kmalloc_caches[kmalloc_type(flags)][index],
 				flags, node, size);
 	}
-#endif
 	return __kmalloc_node(size, flags, node);
 }
+#else
+static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
+		return kmalloc_large_node(size, flags, node);
+
+	return __kmalloc_node(size, flags, node);
+}
+#endif
 
 /**
  * kmalloc_array - allocate memory for an array.
diff --git a/mm/slab.h b/mm/slab.h
index 4ec82bec15ec..40322bcf07be 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -273,6 +273,8 @@ void create_kmalloc_caches(slab_flags_t);
 
 /* Find the kmalloc slab corresponding for a certain size */
 struct kmem_cache *kmalloc_slab(size_t, gfp_t);
+
+void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node);
 #endif
 
 gfp_t kmalloc_fix_flags(gfp_t flags);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 1b9101f9cb21..7a0942d54424 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -928,7 +928,7 @@ void *kmalloc_large(size_t size, gfp_t flags)
 }
 EXPORT_SYMBOL(kmalloc_large);
 
-void *kmalloc_large_node(size_t size, gfp_t flags, int node)
+void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
 {
 	struct page *page;
 	void *ptr = NULL;
@@ -948,6 +948,15 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 
 	return ptr;
 }
+
+void *kmalloc_large_node(size_t size, gfp_t flags, int node)
+{
+	void *ret = kmalloc_large_node_notrace(size, flags, node);
+
+	trace_kmalloc_node(_RET_IP_, ret, NULL, size,
+			   PAGE_SIZE << get_order(size), flags, node);
+	return ret;
+}
 EXPORT_SYMBOL(kmalloc_large_node);
 
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
diff --git a/mm/slub.c b/mm/slub.c
index 5e7819ade2c4..165fe87af204 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4401,7 +4401,7 @@ void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller
 	void *ret;
 
 	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
-		ret = kmalloc_large_node(size, flags, node);
+		ret = kmalloc_large_node_notrace(size, flags, node);
 
 		trace_kmalloc_node(caller, ret, NULL,
 				   size, PAGE_SIZE << get_order(size),
-- 
2.32.0

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 09/17] mm/slab_common: cleanup kmalloc_large()
Date: Wed, 17 Aug 2022 19:18:18 +0900
Message-Id: <20220817101826.236819-10-42.hyeyoo@gmail.com>
In-Reply-To: <20220817101826.236819-1-42.hyeyoo@gmail.com>
References: <20220817101826.236819-1-42.hyeyoo@gmail.com>

Now that kmalloc_large() and kmalloc_large_node() do mostly the same job,
make kmalloc_large() a wrapper around kmalloc_large_node_notrace().

In the meantime, add the missing flag-fixing code to
kmalloc_large_node_notrace().

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 mm/slab_common.c | 35 +++++++++++++---------------------
 1 file changed, 13 insertions(+), 22 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 7a0942d54424..51ccd0545816 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -905,28 +905,6 @@ gfp_t kmalloc_fix_flags(gfp_t flags)
 * directly to the page allocator.
We use __GFP_COMP, because we will need to
 * know the allocation order to free the pages properly in kfree.
 */
-void *kmalloc_large(size_t size, gfp_t flags)
-{
-	void *ret = NULL;
-	struct page *page;
-	unsigned int order = get_order(size);
-
-	if (unlikely(flags & GFP_SLAB_BUG_MASK))
-		flags = kmalloc_fix_flags(flags);
-
-	page = alloc_pages(flags | __GFP_COMP, order);
-	if (likely(page)) {
-		ret = page_address(page);
-		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
-				      PAGE_SIZE << order);
-	}
-	ret = kasan_kmalloc_large(ret, size, flags);
-	/* As ret might get tagged, call kmemleak hook after KASAN. */
-	kmemleak_alloc(ret, size, 1, flags);
-	trace_kmalloc(_RET_IP_, ret, NULL, size, PAGE_SIZE << order, flags);
-	return ret;
-}
-EXPORT_SYMBOL(kmalloc_large);
 
 void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
 {
@@ -934,6 +912,9 @@ void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
 	void *ptr = NULL;
 	unsigned int order = get_order(size);
 
+	if (unlikely(flags & GFP_SLAB_BUG_MASK))
+		flags = kmalloc_fix_flags(flags);
+
 	flags |= __GFP_COMP;
 	page = alloc_pages_node(node, flags, order);
 	if (page) {
@@ -949,6 +930,16 @@ void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
 	return ptr;
 }
 
+void *kmalloc_large(size_t size, gfp_t flags)
+{
+	void *ret = kmalloc_large_node_notrace(size, flags, NUMA_NO_NODE);
+
+	trace_kmalloc(_RET_IP_, ret, NULL, size,
+		      PAGE_SIZE << get_order(size), flags);
+	return ret;
+}
+EXPORT_SYMBOL(kmalloc_large);
+
 void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 {
 	void *ret = kmalloc_large_node_notrace(size, flags, node);
-- 
2.32.0

+0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239000AbiHQKTy (ORCPT ); Wed, 17 Aug 2022 06:19:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54284 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S238994AbiHQKTl (ORCPT ); Wed, 17 Aug 2022 06:19:41 -0400 Received: from mail-pj1-x102f.google.com (mail-pj1-x102f.google.com [IPv6:2607:f8b0:4864:20::102f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 512F77C307 for ; Wed, 17 Aug 2022 03:19:24 -0700 (PDT) Received: by mail-pj1-x102f.google.com with SMTP id ch17-20020a17090af41100b001fa74771f61so3279093pjb.0 for ; Wed, 17 Aug 2022 03:19:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc; bh=paupgt1G6cH3RqeKhiTalrXFR+ytTeKPfHkaPAB0YdA=; b=bK1aYHR1zgO31MEYMp0z27Z1q5eFMJ+tzZkZPU/xUi7QSEqc9gJgYHZt9kOlcxyZiS l7ibFrX6CclG1r7qw3jgmSgOkKp/IdIrlIVH9ewekQmF9WODMD0nStvwZw7chdKQEsRo uY/mDnL/2e/Ac/izgf03aEtmPtRgcUeBJFudRS3LJlSPla97U7S29W0QH4L/lqMEpGpZ veX1hxijOV7f9qs7xnEiiXbnoobxzdA5MMvAKhA6uiRDKnGKAXdSpRhL+hzJGLvv4CDp rgA2OfPYtOnydhMw3vtTwO1eo/x8yi4gzdShPaxj1bvxTxnz7N7RRKksHrVlKaFyeEdq FdCw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc; bh=paupgt1G6cH3RqeKhiTalrXFR+ytTeKPfHkaPAB0YdA=; b=jylDK1bNMZkqWOwJb6Um9RfXM1/OV5FNF3LeSuOC1Q2CpZBDiUW/1uxsUVtxDuV48m pO2St3oROdgG8w7Jh04+JvTaOvIVz55HBmTH7uMntpoUVNRqWmjC5h2FdQOJqGp1hcU9 7ekbzlxV7vE+zyaQ7cw8eXUQ/Xzkxa58Ow5xfvI91yBkZQaYjTKRsxM6++n864TBQVf0 lGxCW4BQxc3OA1jKjlmfp4v/hOj8w7rfNFweQ2j1yh+1aOQ3X73pHZxJXJEwjBSmUPbp HGiU64XW3AO2g1b440f9WMtvcAhm7MQNWxTkoYOH9cCkJC8UQNRQ1YzgK49auI2UNKju /gNA== X-Gm-Message-State: 
ACgBeo0eb9FU8+KUT0y9Td2ow2WTF1a+VVlaiiIIf3C1doj7KdFSSrIX /z4Isq/p7v0kgKY1eoQT2J0= X-Google-Smtp-Source: AA6agR4LT57t3RNaykBg4K7+bvNe5zXr5bsxJoXJ3gs6cRtgISyVA95RK4nraz+NxdIVhKsWrJ/jeg== X-Received: by 2002:a17:903:11c7:b0:171:2818:4cd7 with SMTP id q7-20020a17090311c700b0017128184cd7mr25306274plh.136.1660731563319; Wed, 17 Aug 2022 03:19:23 -0700 (PDT) Received: from hyeyoo.. ([114.29.91.56]) by smtp.gmail.com with ESMTPSA id d8-20020a170903230800b00172633fc236sm1071318plh.174.2022.08.17.03.19.19 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 17 Aug 2022 03:19:22 -0700 (PDT) From: Hyeonggon Yoo <42.hyeyoo@gmail.com> To: Christoph Lameter , Pekka Enberg , David Rientjes , Joonsoo Kim , Andrew Morton , Vlastimil Babka , Roman Gushchin Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH v4 10/17] mm/slab: kmalloc: pass requests larger than order-1 page to page allocator Date: Wed, 17 Aug 2022 19:18:19 +0900 Message-Id: <20220817101826.236819-11-42.hyeyoo@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220817101826.236819-1-42.hyeyoo@gmail.com> References: <20220817101826.236819-1-42.hyeyoo@gmail.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" There is not much benefit for serving large objects in kmalloc(). Let's pass large requests to page allocator like SLUB for better maintenance of common code. 
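The cutoff this patch introduces is easy to model outside the kernel. The sketch below is plain userspace C, not kernel code: `PAGE_SHIFT` is assumed to be 12 (4 KiB pages), and `get_order_sketch()` / `served_by_page_allocator()` are invented stand-ins for the kernel's `get_order()` and the size check in the kmalloc path.

```c
#include <stdbool.h>
#include <stddef.h>

#define PAGE_SHIFT 12                      /* assumed: 4 KiB pages */
#define PAGE_SIZE  ((size_t)1 << PAGE_SHIFT)

/* After this patch both SLAB and SLUB serve at most an order-1 page
 * (two pages) from the kmalloc caches; larger requests go straight to
 * the page allocator. */
#define KMALLOC_SHIFT_HIGH     (PAGE_SHIFT + 1)
#define KMALLOC_MAX_CACHE_SIZE ((size_t)1 << KMALLOC_SHIFT_HIGH)

/* Round a size up to a page-allocation order, like the kernel's get_order(). */
static unsigned int get_order_sketch(size_t size)
{
	unsigned int order = 0;
	size_t span = PAGE_SIZE;

	while (span < size) {
		span <<= 1;
		order++;
	}
	return order;
}

/* Mirrors the size check that the kmalloc slow path performs after this patch. */
static bool served_by_page_allocator(size_t size)
{
	return size > KMALLOC_MAX_CACHE_SIZE;
}
```

With these assumptions, an 8 KiB request is the largest one still served by the kmalloc caches; 8 KiB + 1 byte falls through to the page allocator.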
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka

---
 include/linux/slab.h | 23 ++++-------------
 mm/slab.c            | 60 +++++++++++++++++++++++++++++++-------------
 mm/slab.h            |  3 +++
 mm/slab_common.c     | 25 ++++++++++++------
 mm/slub.c            | 19 --------------
 5 files changed, 68 insertions(+), 62 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index fd2e129fc813..4ee5b2fed164 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -243,27 +243,17 @@ static inline unsigned int arch_slab_minalign(void)
 
 #ifdef CONFIG_SLAB
 /*
- * The largest kmalloc size supported by the SLAB allocators is
- * 32 megabyte (2^25) or the maximum allocatable page order if that is
- * less than 32 MB.
- *
- * WARNING: Its not easy to increase this value since the allocators have
- * to do various tricks to work around compiler limitations in order to
- * ensure proper constant folding.
+ * SLAB and SLUB directly allocates requests fitting in to an order-1 page
+ * (PAGE_SIZE*2). Larger requests are passed to the page allocator.
 */
-#define KMALLOC_SHIFT_HIGH	((MAX_ORDER + PAGE_SHIFT - 1) <= 25 ? \
-				(MAX_ORDER + PAGE_SHIFT - 1) : 25)
-#define KMALLOC_SHIFT_MAX	KMALLOC_SHIFT_HIGH
+#define KMALLOC_SHIFT_HIGH	(PAGE_SHIFT + 1)
+#define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT - 1)
 #ifndef KMALLOC_SHIFT_LOW
 #define KMALLOC_SHIFT_LOW	5
 #endif
 #endif
 
 #ifdef CONFIG_SLUB
-/*
- * SLUB directly allocates requests fitting in to an order-1 page
- * (PAGE_SIZE*2). Larger requests are passed to the page allocator.
- */
 #define KMALLOC_SHIFT_HIGH	(PAGE_SHIFT + 1)
 #define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT - 1)
 #ifndef KMALLOC_SHIFT_LOW
@@ -415,10 +405,6 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
 	if (size <= 512 * 1024) return 19;
 	if (size <= 1024 * 1024) return 20;
 	if (size <= 2 * 1024 * 1024) return 21;
-	if (size <= 4 * 1024 * 1024) return 22;
-	if (size <= 8 * 1024 * 1024) return 23;
-	if (size <= 16 * 1024 * 1024) return 24;
-	if (size <= 32 * 1024 * 1024) return 25;
 
 	if (!IS_ENABLED(CONFIG_PROFILE_ALL_BRANCHES) && size_is_constant)
 		BUILD_BUG_ON_MSG(1, "unexpected size in kmalloc_index()");
@@ -428,6 +414,7 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
 	/* Will never be reached. Needed because the compiler may complain */
 	return -1;
 }
+static_assert(PAGE_SHIFT <= 20);
 #define kmalloc_index(s) __kmalloc_index(s, true)
 #endif /* !CONFIG_SLOB */
 
diff --git a/mm/slab.c b/mm/slab.c
index 8c08d7f3dead..10c9af904410 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3585,11 +3585,19 @@ __do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
 	struct kmem_cache *cachep;
 	void *ret;
 
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return NULL;
+	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
+		ret = kmalloc_large_node_notrace(size, flags, node);
+
+		trace_kmalloc_node(caller, ret, NULL, size,
+				   PAGE_SIZE << get_order(size),
+				   flags, node);
+		return ret;
+	}
+
 	cachep = kmalloc_slab(size, flags);
 	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
 		return cachep;
+
 	ret = kmem_cache_alloc_node_trace(cachep, flags, node, size);
 	ret = kasan_kmalloc(cachep, ret, size, flags);
 
@@ -3664,17 +3672,27 @@ EXPORT_SYMBOL(kmem_cache_free);
 
 void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
 {
-	struct kmem_cache *s;
-	size_t i;
 
 	local_irq_disable();
-	for (i = 0; i < size; i++) {
+	for (int i = 0; i < size; i++) {
 		void *objp = p[i];
+		struct kmem_cache *s;
 
-		if (!orig_s) /* called via kfree_bulk */
-			s = virt_to_cache(objp);
-		else
+		if (!orig_s) {
+			struct folio *folio = virt_to_folio(objp);
+
+			/* called via kfree_bulk */
+			if (!folio_test_slab(folio)) {
+				local_irq_enable();
+				free_large_kmalloc(folio, objp);
+				local_irq_disable();
+				continue;
+			}
+			s = folio_slab(folio)->slab_cache;
+		} else {
 			s = cache_from_obj(orig_s, objp);
+		}
+
 		if (!s)
 			continue;
 
@@ -3703,20 +3721,24 @@ void kfree(const void *objp)
 {
 	struct kmem_cache *c;
 	unsigned long flags;
+	struct folio *folio;
 
 	trace_kfree(_RET_IP_, objp);
 
 	if (unlikely(ZERO_OR_NULL_PTR(objp)))
 		return;
-	local_irq_save(flags);
-	kfree_debugcheck(objp);
-	c = virt_to_cache(objp);
-	if (!c) {
-		local_irq_restore(flags);
+
+	folio = virt_to_folio(objp);
+	if (!folio_test_slab(folio)) {
+		free_large_kmalloc(folio, (void *)objp);
 		return;
 	}
-	debug_check_no_locks_freed(objp, c->object_size);
 
+	c = folio_slab(folio)->slab_cache;
+
+	local_irq_save(flags);
+	kfree_debugcheck(objp);
+	debug_check_no_locks_freed(objp, c->object_size);
 	debug_check_no_obj_freed(objp, c->object_size);
 	__cache_free(c, (void *)objp, _RET_IP_);
 	local_irq_restore(flags);
@@ -4138,15 +4160,17 @@ void __check_heap_object(const void *ptr, unsigned long n,
 size_t __ksize(const void *objp)
 {
 	struct kmem_cache *c;
-	size_t size;
+	struct folio *folio;
 
 	BUG_ON(!objp);
 	if (unlikely(objp == ZERO_SIZE_PTR))
 		return 0;
 
-	c = virt_to_cache(objp);
-	size = c ? c->object_size : 0;
+	folio = virt_to_folio(objp);
+	if (!folio_test_slab(folio))
+		return folio_size(folio);
 
-	return size;
+	c = folio_slab(folio)->slab_cache;
+	return c->object_size;
 }
 EXPORT_SYMBOL(__ksize);
diff --git a/mm/slab.h b/mm/slab.h
index 40322bcf07be..381ba3e6b2a1 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -660,6 +660,9 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
 	print_tracking(cachep, x);
 	return cachep;
 }
+
+void free_large_kmalloc(struct folio *folio, void *object);
+
 #endif /* CONFIG_SLOB */
 
 static inline size_t slab_ksize(const struct kmem_cache *s)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 51ccd0545816..5a2e81f42ee9 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -744,8 +744,8 @@ struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags)
 
 /*
 * kmalloc_info[] is to make slub_debug=,kmalloc-xx option work at boot time.
- * kmalloc_index() supports up to 2^25=32MB, so the final entry of the table is
- * kmalloc-32M.
+ * kmalloc_index() supports up to 2^21=2MB, so the final entry of the table is
+ * kmalloc-2M.
 */
 const struct kmalloc_info_struct kmalloc_info[] __initconst = {
 	INIT_KMALLOC_INFO(0, 0),
@@ -769,11 +769,7 @@ const struct kmalloc_info_struct kmalloc_info[] __initconst = {
 	INIT_KMALLOC_INFO(262144, 256k),
 	INIT_KMALLOC_INFO(524288, 512k),
 	INIT_KMALLOC_INFO(1048576, 1M),
-	INIT_KMALLOC_INFO(2097152, 2M),
-	INIT_KMALLOC_INFO(4194304, 4M),
-	INIT_KMALLOC_INFO(8388608, 8M),
-	INIT_KMALLOC_INFO(16777216, 16M),
-	INIT_KMALLOC_INFO(33554432, 32M)
+	INIT_KMALLOC_INFO(2097152, 2M)
 };
 
 /*
@@ -886,6 +882,21 @@ void __init create_kmalloc_caches(slab_flags_t flags)
 	/* Kmalloc array is now usable */
 	slab_state = UP;
 }
+
+void free_large_kmalloc(struct folio *folio, void *object)
+{
+	unsigned int order = folio_order(folio);
+
+	if (WARN_ON_ONCE(order == 0))
+		pr_warn_once("object pointer: 0x%p\n", object);
+
+	kmemleak_free(object);
+	kasan_kfree_large(object);
+
+	mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B,
+			      -(PAGE_SIZE << order));
+	__free_pages(folio_page(folio, 0), order);
+}
 #endif /* !CONFIG_SLOB */
 
 gfp_t kmalloc_fix_flags(gfp_t flags)
diff --git a/mm/slub.c b/mm/slub.c
index 165fe87af204..a659874c5d44 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1704,12 +1704,6 @@ static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab,
 * Hooks for other subsystems that check memory allocations. In a typical
 * production configuration these hooks all should produce no code at all.
 */
-static __always_inline void kfree_hook(void *x)
-{
-	kmemleak_free(x);
-	kasan_kfree_large(x);
-}
-
 static __always_inline bool slab_free_hook(struct kmem_cache *s,
 					   void *x, bool init)
 {
@@ -3550,19 +3544,6 @@ struct detached_freelist {
 	struct kmem_cache *s;
 };
 
-static inline void free_large_kmalloc(struct folio *folio, void *object)
-{
-	unsigned int order = folio_order(folio);
-
-	if (WARN_ON_ONCE(order == 0))
-		pr_warn_once("object pointer: 0x%p\n", object);
-
-	kfree_hook(object);
-	mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B,
-			      -(PAGE_SIZE << order));
-	__free_pages(folio_page(folio, 0), order);
-}
-
 /*
 * This function progressively scans the array with free objects (with
 * a limited look ahead) and extract objects belonging to the same
-- 
2.32.0

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Subject: [PATCH v4 11/17] mm/sl[au]b: introduce common alloc/free functions without tracepoint
Date: Wed, 17 Aug 2022 19:18:20 +0900
Message-Id: <20220817101826.236819-12-42.hyeyoo@gmail.com>

To unify the kmalloc functions in a later patch, introduce common alloc/free functions that do not have tracepoints.
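The shape of this change is the classic "notrace core plus tracing wrapper" split, and can be sketched in plain userspace C. This is an illustration of the pattern only: `cache_alloc()`, `cache_alloc_notrace()` and `trace_events` are invented names standing in for the public allocation entry point, the new `__kmem_cache_alloc_node()` hook, and the real tracepoint.

```c
#include <stdlib.h>

static int trace_events; /* stand-in for the real allocation tracepoint */

/* Core allocation path with no tracepoint: both the public entry point
 * and a later unified kmalloc path can call this without emitting a
 * duplicate trace event. */
static void *cache_alloc_notrace(size_t size, unsigned long caller)
{
	(void)caller;        /* the kernel records this for tracing/debugging */
	return malloc(size); /* stand-in for slab_alloc_node() */
}

/* Public entry point: emit the trace event, then defer to the core. */
static void *cache_alloc(size_t size)
{
	trace_events++;      /* stand-in for the trace_*() call */
	return cache_alloc_notrace(size, 0 /* _RET_IP_ stand-in */);
}
```

Calling `cache_alloc_notrace()` directly allocates without bumping the trace counter, which is exactly why the kmalloc path wants the notrace variant: it emits its own `trace_kmalloc_node()` event instead.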
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka

---
 mm/slab.c | 36 +++++++++++++++++++++++++++++-------
 mm/slab.h |  4 ++++
 mm/slub.c | 13 +++++++++++++
 3 files changed, 46 insertions(+), 7 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 10c9af904410..aa61851b0a07 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3560,6 +3560,14 @@ void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node);
 
+void *__kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
+			      int nodeid, size_t orig_size,
+			      unsigned long caller)
+{
+	return slab_alloc_node(cachep, NULL, flags, nodeid,
+			       orig_size, caller);
+}
+
 #ifdef CONFIG_TRACING
 void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 				  gfp_t flags,
@@ -3645,6 +3653,26 @@ void *__kmalloc(size_t size, gfp_t flags)
 }
 EXPORT_SYMBOL(__kmalloc);
 
+static __always_inline
+void __do_kmem_cache_free(struct kmem_cache *cachep, void *objp,
+			  unsigned long caller)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	debug_check_no_locks_freed(objp, cachep->object_size);
+	if (!(cachep->flags & SLAB_DEBUG_OBJECTS))
+		debug_check_no_obj_freed(objp, cachep->object_size);
+	__cache_free(cachep, objp, caller);
+	local_irq_restore(flags);
+}
+
+void __kmem_cache_free(struct kmem_cache *cachep, void *objp,
+		       unsigned long caller)
+{
+	__do_kmem_cache_free(cachep, objp, caller);
+}
+
 /**
 * kmem_cache_free - Deallocate an object
 * @cachep: The cache the allocation was from.
@@ -3655,18 +3683,12 @@ EXPORT_SYMBOL(__kmalloc);
 */
 void kmem_cache_free(struct kmem_cache *cachep, void *objp)
 {
-	unsigned long flags;
 	cachep = cache_from_obj(cachep, objp);
 	if (!cachep)
 		return;
 
 	trace_kmem_cache_free(_RET_IP_, objp, cachep->name);
-	local_irq_save(flags);
-	debug_check_no_locks_freed(objp, cachep->object_size);
-	if (!(cachep->flags & SLAB_DEBUG_OBJECTS))
-		debug_check_no_obj_freed(objp, cachep->object_size);
-	__cache_free(cachep, objp, _RET_IP_);
-	local_irq_restore(flags);
+	__do_kmem_cache_free(cachep, objp, _RET_IP_);
 }
 EXPORT_SYMBOL(kmem_cache_free);
 
diff --git a/mm/slab.h b/mm/slab.h
index 381ba3e6b2a1..4e90ed0ab635 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -275,6 +275,10 @@ void create_kmalloc_caches(slab_flags_t);
 struct kmem_cache *kmalloc_slab(size_t, gfp_t);
 
 void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node);
+void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags,
+			      int node, size_t orig_size,
+			      unsigned long caller);
+void __kmem_cache_free(struct kmem_cache *s, void *x, unsigned long caller);
 #endif
 
 gfp_t kmalloc_fix_flags(gfp_t flags);
diff --git a/mm/slub.c b/mm/slub.c
index a659874c5d44..a11f78c2647c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3262,6 +3262,14 @@ void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_lru);
 
+void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags,
+			      int node, size_t orig_size,
+			      unsigned long caller)
+{
+	return slab_alloc_node(s, NULL, gfpflags, node,
+			       caller, orig_size);
+}
+
 #ifdef CONFIG_TRACING
 void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
@@ -3526,6 +3534,11 @@ void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
 }
 #endif
 
+void __kmem_cache_free(struct kmem_cache *s, void *x, unsigned long caller)
+{
+	slab_free(s, virt_to_slab(x), x, NULL, &x, 1, caller);
+}
+
 void kmem_cache_free(struct kmem_cache *s, void *x)
 {
 	s = cache_from_obj(s, x);
-- 
2.32.0

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Subject: [PATCH v4 12/17] mm/sl[au]b: generalize kmalloc subsystem
Date: Wed, 17 Aug 2022 19:18:21 +0900
Message-Id: <20220817101826.236819-13-42.hyeyoo@gmail.com>

Now everything in the kmalloc subsystem can be generalized. Let's do it!

Generalize __do_kmalloc_node(), __kmalloc_node_track_caller(), kfree(), __ksize(), __kmalloc() and __kmalloc_node(), and move them to slab_common.c. While at it, rename kmalloc_large_node_notrace() to __kmalloc_large_node() and make it static, as it is now only called from slab_common.c.
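The unified kfree() path this patch arrives at can be sketched in plain userspace C. This models the dispatch only — `kfree_sketch()`, `struct folio_sketch` and the counters are invented stand-ins for the real `kfree()`, `folio_test_slab()`, `free_large_kmalloc()` and the per-allocator `__kmem_cache_free()` hook:

```c
#include <stddef.h>

/* After this patch one kfree() covers both backends: it inspects the
 * folio and routes page-allocator-backed objects to free_large_kmalloc()
 * and slab-backed objects to the per-allocator __kmem_cache_free() hook. */
enum backing { FROM_SLAB, FROM_PAGE_ALLOCATOR };

struct folio_sketch {
	enum backing backing; /* stand-in for the slab flag on a real folio */
};

static int slab_frees, large_frees;

static void free_large_kmalloc_sketch(struct folio_sketch *folio)
{
	(void)folio;
	large_frees++; /* would undo vmstat accounting and __free_pages() */
}

static void kmem_cache_free_sketch(struct folio_sketch *folio)
{
	(void)folio;
	slab_frees++;  /* would call the allocator's __kmem_cache_free() */
}

/* One dispatch point instead of a near-identical kfree() per allocator. */
static void kfree_sketch(struct folio_sketch *folio)
{
	if (!folio)    /* models the ZERO_OR_NULL_PTR() early return */
		return;
	if (folio->backing != FROM_SLAB)
		free_large_kmalloc_sketch(folio);
	else
		kmem_cache_free_sketch(folio);
}
```

Because the dispatch lives in slab_common.c, SLAB and SLUB only have to supply the slab-side free hook; the large-object branch and NULL handling are shared.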
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka

---
 mm/slab.c        | 108 ----------------------------------------------
 mm/slab.h        |   1 -
 mm/slab_common.c | 109 +++++++++++++++++++++++++++++++++++++++++++++--
 mm/slub.c        |  87 ------------------------------------
 4 files changed, 106 insertions(+), 199 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index aa61851b0a07..5b234e3ab165 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3587,44 +3587,6 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
 #endif
 
-static __always_inline void *
-__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
-{
-	struct kmem_cache *cachep;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
-		ret = kmalloc_large_node_notrace(size, flags, node);
-
-		trace_kmalloc_node(caller, ret, NULL, size,
-				   PAGE_SIZE << get_order(size),
-				   flags, node);
-		return ret;
-	}
-
-	cachep = kmalloc_slab(size, flags);
-	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
-		return cachep;
-
-	ret = kmem_cache_alloc_node_trace(cachep, flags, node, size);
-	ret = kasan_kmalloc(cachep, ret, size, flags);
-
-	return ret;
-}
-
-void *__kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	return __do_kmalloc_node(size, flags, node, _RET_IP_);
-}
-EXPORT_SYMBOL(__kmalloc_node);
-
-void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
-				  int node, unsigned long caller)
-{
-	return __do_kmalloc_node(size, flags, node, caller);
-}
-EXPORT_SYMBOL(__kmalloc_node_track_caller);
-
 #ifdef CONFIG_PRINTK
 void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 {
@@ -3647,12 +3609,6 @@ void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 }
 #endif
 
-void *__kmalloc(size_t size, gfp_t flags)
-{
-	return __do_kmalloc_node(size, flags, NUMA_NO_NODE, _RET_IP_);
-}
-EXPORT_SYMBOL(__kmalloc);
-
 static __always_inline
 void __do_kmem_cache_free(struct kmem_cache *cachep, void *objp,
			  unsigned long caller)
@@ -3730,43 +3686,6 @@ void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
 }
 EXPORT_SYMBOL(kmem_cache_free_bulk);
 
-/**
- * kfree - free previously allocated memory
- * @objp: pointer returned by kmalloc.
- *
- * If @objp is NULL, no operation is performed.
- *
- * Don't free memory not originally allocated by kmalloc()
- * or you will run into trouble.
- */
-void kfree(const void *objp)
-{
-	struct kmem_cache *c;
-	unsigned long flags;
-	struct folio *folio;
-
-	trace_kfree(_RET_IP_, objp);
-
-	if (unlikely(ZERO_OR_NULL_PTR(objp)))
-		return;
-
-	folio = virt_to_folio(objp);
-	if (!folio_test_slab(folio)) {
-		free_large_kmalloc(folio, (void *)objp);
-		return;
-	}
-
-	c = folio_slab(folio)->slab_cache;
-
-	local_irq_save(flags);
-	kfree_debugcheck(objp);
-	debug_check_no_locks_freed(objp, c->object_size);
-	debug_check_no_obj_freed(objp, c->object_size);
-	__cache_free(c, (void *)objp, _RET_IP_);
-	local_irq_restore(flags);
-}
-EXPORT_SYMBOL(kfree);
-
 /*
 * This initializes kmem_cache_node or resizes various caches for all nodes.
 */
@@ -4169,30 +4088,3 @@ void __check_heap_object(const void *ptr, unsigned long n,
 	usercopy_abort("SLAB object", cachep->name, to_user, offset, n);
 }
 #endif /* CONFIG_HARDENED_USERCOPY */
-
-/**
- * __ksize -- Uninstrumented ksize.
- * @objp: pointer to the object
- *
- * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same
- * safety checks as ksize() with KASAN instrumentation enabled.
- *
- * Return: size of the actual memory used by @objp in bytes
- */
-size_t __ksize(const void *objp)
-{
-	struct kmem_cache *c;
-	struct folio *folio;
-
-	BUG_ON(!objp);
-	if (unlikely(objp == ZERO_SIZE_PTR))
-		return 0;
-
-	folio = virt_to_folio(objp);
-	if (!folio_test_slab(folio))
-		return folio_size(folio);
-
-	c = folio_slab(folio)->slab_cache;
-	return c->object_size;
-}
-EXPORT_SYMBOL(__ksize);
diff --git a/mm/slab.h b/mm/slab.h
index 4e90ed0ab635..4d8330d57573 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -274,7 +274,6 @@ void create_kmalloc_caches(slab_flags_t);
 /* Find the kmalloc slab corresponding for a certain size */
 struct kmem_cache *kmalloc_slab(size_t, gfp_t);
 
-void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node);
 void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags,
 			      int node, size_t orig_size,
 			      unsigned long caller);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 5a2e81f42ee9..c8242b4e2223 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -897,6 +897,109 @@ void free_large_kmalloc(struct folio *folio, void *object)
 			      -(PAGE_SIZE << order));
 	__free_pages(folio_page(folio, 0), order);
 }
+
+static void *__kmalloc_large_node(size_t size, gfp_t flags, int node);
+static __always_inline
+void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
+{
+	struct kmem_cache *s;
+	void *ret;
+
+	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
+		ret = __kmalloc_large_node(size, flags, node);
+		trace_kmalloc_node(caller, ret, NULL,
+				   size, PAGE_SIZE << get_order(size),
+				   flags, node);
+		return ret;
+	}
+
+	s = kmalloc_slab(size, flags);
+
+	if (unlikely(ZERO_OR_NULL_PTR(s)))
+		return s;
+
+	ret = __kmem_cache_alloc_node(s, flags, node, size, caller);
+	ret = kasan_kmalloc(s, ret, size, flags);
+	trace_kmalloc_node(caller, ret, s, size,
+			   s->size, flags, node);
+	return ret;
+}
+
+void *__kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	return __do_kmalloc_node(size, flags, node, _RET_IP_);
+}
+EXPORT_SYMBOL(__kmalloc_node);
+
+void *__kmalloc(size_t size, gfp_t flags)
+{
+	return __do_kmalloc_node(size, flags, NUMA_NO_NODE, _RET_IP_);
+}
+EXPORT_SYMBOL(__kmalloc);
+
+void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
+				  int node, unsigned long caller)
+{
+	return __do_kmalloc_node(size, flags, node, caller);
+}
+EXPORT_SYMBOL(__kmalloc_node_track_caller);
+
+/**
+ * kfree - free previously allocated memory
+ * @objp: pointer returned by kmalloc.
+ *
+ * If @objp is NULL, no operation is performed.
+ *
+ * Don't free memory not originally allocated by kmalloc()
+ * or you will run into trouble.
+ */
+void kfree(const void *object)
+{
+	struct folio *folio;
+	struct slab *slab;
+	struct kmem_cache *s;
+
+	trace_kfree(_RET_IP_, object);
+
+	if (unlikely(ZERO_OR_NULL_PTR(object)))
+		return;
+
+	folio = virt_to_folio(object);
+	if (unlikely(!folio_test_slab(folio))) {
+		free_large_kmalloc(folio, (void *)object);
+		return;
+	}
+
+	slab = folio_slab(folio);
+	s = slab->slab_cache;
+	__kmem_cache_free(s, (void *)object, _RET_IP_);
+}
+EXPORT_SYMBOL(kfree);
+
+/**
+ * __ksize -- Uninstrumented ksize.
+ * @objp: pointer to the object
+ *
+ * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same
+ * safety checks as ksize() with KASAN instrumentation enabled.
+ *
+ * Return: size of the actual memory used by @objp in bytes
+ */
+size_t __ksize(const void *object)
+{
+	struct folio *folio;
+
+	if (unlikely(object == ZERO_SIZE_PTR))
+		return 0;
+
+	folio = virt_to_folio(object);
+
+	if (unlikely(!folio_test_slab(folio)))
+		return folio_size(folio);
+
+	return slab_ksize(folio_slab(folio)->slab_cache);
+}
+EXPORT_SYMBOL(__ksize);
 #endif /* !CONFIG_SLOB */
 
 gfp_t kmalloc_fix_flags(gfp_t flags)
@@ -917,7 +1020,7 @@ gfp_t kmalloc_fix_flags(gfp_t flags)
 * know the allocation order to free the pages properly in kfree.
 */
 
-void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
+void *__kmalloc_large_node(size_t size, gfp_t flags, int node)
 {
 	struct page *page;
 	void *ptr = NULL;
@@ -943,7 +1046,7 @@ void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
 
 void *kmalloc_large(size_t size, gfp_t flags)
 {
-	void *ret = kmalloc_large_node_notrace(size, flags, NUMA_NO_NODE);
+	void *ret = __kmalloc_large_node(size, flags, NUMA_NO_NODE);
 
 	trace_kmalloc(_RET_IP_, ret, NULL, size,
 		      PAGE_SIZE << get_order(size), flags);
@@ -953,7 +1056,7 @@ EXPORT_SYMBOL(kmalloc_large);
 
 void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 {
-	void *ret = kmalloc_large_node_notrace(size, flags, node);
+	void *ret = __kmalloc_large_node(size, flags, node);
 
 	trace_kmalloc_node(_RET_IP_, ret, NULL, size,
 			   PAGE_SIZE << get_order(size), flags, node);
diff --git a/mm/slub.c b/mm/slub.c
index a11f78c2647c..cd49785d59e1 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4388,49 +4388,6 @@ static int __init setup_slub_min_objects(char *str)
 
 __setup("slub_min_objects=", setup_slub_min_objects);
 
-static __always_inline
-void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
-{
-	struct kmem_cache *s;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
-		ret = kmalloc_large_node_notrace(size, flags, node);
-
-		trace_kmalloc_node(caller, ret, NULL,
-				   size, PAGE_SIZE << get_order(size),
-				   flags, node);
-
-		return ret;
-	}
-
-	s = kmalloc_slab(size, flags);
-
-	if (unlikely(ZERO_OR_NULL_PTR(s)))
-		return s;
-
-	ret = slab_alloc_node(s, NULL, flags, node, caller, size);
-
-	trace_kmalloc_node(caller, ret, s, size, s->size, flags, node);
-
-	ret = kasan_kmalloc(s, ret, size, flags);
-
-	return ret;
-}
-
-void *__kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	return __do_kmalloc_node(size, flags, node, _RET_IP_);
-}
-EXPORT_SYMBOL(__kmalloc_node);
-
-void *__kmalloc(size_t size, gfp_t flags)
-{
-	return
__do_kmalloc_node(size, flags, NUMA_NO_NODE, _RET_IP_); -} -EXPORT_SYMBOL(__kmalloc); - - #ifdef CONFIG_HARDENED_USERCOPY /* * Rejects incorrectly sized objects and objects that are to be copied @@ -4481,43 +4438,6 @@ void __check_heap_object(const void *ptr, unsigned l= ong n, } #endif /* CONFIG_HARDENED_USERCOPY */ =20 -size_t __ksize(const void *object) -{ - struct folio *folio; - - if (unlikely(object =3D=3D ZERO_SIZE_PTR)) - return 0; - - folio =3D virt_to_folio(object); - - if (unlikely(!folio_test_slab(folio))) - return folio_size(folio); - - return slab_ksize(folio_slab(folio)->slab_cache); -} -EXPORT_SYMBOL(__ksize); - -void kfree(const void *x) -{ - struct folio *folio; - struct slab *slab; - void *object =3D (void *)x; - - trace_kfree(_RET_IP_, x); - - if (unlikely(ZERO_OR_NULL_PTR(x))) - return; - - folio =3D virt_to_folio(x); - if (unlikely(!folio_test_slab(folio))) { - free_large_kmalloc(folio, object); - return; - } - slab =3D folio_slab(folio); - slab_free(slab->slab_cache, slab, object, NULL, &object, 1, _RET_IP_); -} -EXPORT_SYMBOL(kfree); - #define SHRINK_PROMOTE_MAX 32 =20 /* @@ -4863,13 +4783,6 @@ int __kmem_cache_create(struct kmem_cache *s, slab_f= lags_t flags) return 0; } =20 -void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags, - int node, unsigned long caller) -{ - return __do_kmalloc_node(size, gfpflags, node, caller); -} -EXPORT_SYMBOL(__kmalloc_node_track_caller); - #ifdef CONFIG_SYSFS static int count_inuse(struct slab *slab) { --=20 2.32.0 From nobody Sat Apr 11 02:17:48 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 18DB8C25B08 for ; Wed, 17 Aug 2022 10:20:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S238999AbiHQKUX (ORCPT ); Wed, 17 Aug 2022 06:20:23 -0400 Received: from 
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 13/17] mm/sl[au]b: cleanup kmem_cache_alloc[_node]_trace()
Date: Wed, 17 Aug 2022 19:18:22 +0900
Message-Id: <20220817101826.236819-14-42.hyeyoo@gmail.com>
In-Reply-To: <20220817101826.236819-1-42.hyeyoo@gmail.com>
References: <20220817101826.236819-1-42.hyeyoo@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

This patch does the following:

- Despite its name, kmem_cache_alloc[_node]_trace() is a hook for the
  inlined kmalloc(), so rename it to kmalloc[_node]_trace().
- Move its implementation to slab_common.c using
  __kmem_cache_alloc_node(), but keep the CONFIG_TRACING=3Dn variants to
  save a function call when tracing is compiled out.
- Use __assume_kmalloc_alignment for kmalloc[_node]_trace() instead of
  __assume_slab_alignment, as kmalloc generally has larger alignment
  requirements.
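The CONFIG_TRACING=3Dn shortcut described above can be sketched outside the kernel as a compile-time choice between an out-of-line traced helper and a static inline wrapper. All names below are illustrative stand-ins, not the kernel's actual interfaces:

```c
#include <stdlib.h>

/* Toy model of the CONFIG_TRACING=n optimization: when tracing is
 * compiled in, kmalloc_trace() is a real (out-of-line) function that
 * records an event; when tracing is compiled out, it collapses to a
 * static inline wrapper so the inlined fast path pays no extra call. */

static int trace_events;           /* stands in for the tracepoint */

static void *do_alloc(size_t size) /* stands in for slab allocation */
{
	return malloc(size);
}

#ifdef CONFIG_TRACING
/* one out-of-line copy that emits the event */
void *kmalloc_trace(size_t size)
{
	trace_events++;
	return do_alloc(size);
}
#else
/* CONFIG_TRACING=n variant: no event, and no extra function call */
static inline void *kmalloc_trace(size_t size)
{
	return do_alloc(size);
}
#endif
```

Built without -DCONFIG_TRACING, callers of kmalloc_trace() compile down to the plain allocation with no tracing overhead.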
Suggested-by: Vlastimil Babka Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Reviewed-by: Vlastimil Babka --- include/linux/slab.h | 27 ++++++++++++++------------- mm/slab.c | 35 ----------------------------------- mm/slab_common.c | 27 +++++++++++++++++++++++++++ mm/slub.c | 27 --------------------------- 4 files changed, 41 insertions(+), 75 deletions(-) diff --git a/include/linux/slab.h b/include/linux/slab.h index 4ee5b2fed164..c8e485ce8815 100644 --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -449,16 +449,16 @@ void *kmem_cache_alloc_node(struct kmem_cache *s, gfp= _t flags, int node) __assum __malloc; =20 #ifdef CONFIG_TRACING -extern void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, siz= e_t size) - __assume_slab_alignment __alloc_size(3); - -extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpfl= ags, - int node, size_t size) __assume_slab_alignment - __alloc_size(4); +void *kmalloc_trace(struct kmem_cache *s, gfp_t flags, size_t size) + __assume_kmalloc_alignment __alloc_size(3); =20 +void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags, + int node, size_t size) __assume_kmalloc_alignment + __alloc_size(4); #else /* CONFIG_TRACING */ -static __always_inline __alloc_size(3) void *kmem_cache_alloc_trace(struct= kmem_cache *s, - gfp_t flags, size_t size) +/* Save a function call when CONFIG_TRACING=3Dn */ +static __always_inline __alloc_size(3) +void *kmalloc_trace(struct kmem_cache *s, gfp_t flags, size_t size) { void *ret =3D kmem_cache_alloc(s, flags); =20 @@ -466,8 +466,9 @@ static __always_inline __alloc_size(3) void *kmem_cache= _alloc_trace(struct kmem_ return ret; } =20 -static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache= *s, gfp_t gfpflags, - int node, size_t size) +static __always_inline __alloc_size(4) +void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags, + int node, size_t size) { void *ret =3D kmem_cache_alloc_node(s, gfpflags, node); =20 @@ -550,7 +551,7 
@@ static __always_inline __alloc_size(1) void *kmalloc(si= ze_t size, gfp_t flags) if (!index) return ZERO_SIZE_PTR; =20 - return kmem_cache_alloc_trace( + return kmalloc_trace( kmalloc_caches[kmalloc_type(flags)][index], flags, size); #endif @@ -572,9 +573,9 @@ static __always_inline __alloc_size(1) void *kmalloc_no= de(size_t size, gfp_t fla if (!index) return ZERO_SIZE_PTR; =20 - return kmem_cache_alloc_node_trace( + return kmalloc_node_trace( kmalloc_caches[kmalloc_type(flags)][index], - flags, node, size); + flags, node, size); } return __kmalloc_node(size, flags, node); } diff --git a/mm/slab.c b/mm/slab.c index 5b234e3ab165..8d9d0fbf9792 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -3519,22 +3519,6 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_= t flags, size_t size, } EXPORT_SYMBOL(kmem_cache_alloc_bulk); =20 -#ifdef CONFIG_TRACING -void * -kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size) -{ - void *ret; - - ret =3D slab_alloc(cachep, NULL, flags, size, _RET_IP_); - - ret =3D kasan_kmalloc(cachep, ret, size, flags); - trace_kmalloc(_RET_IP_, ret, cachep, - size, cachep->size, flags); - return ret; -} -EXPORT_SYMBOL(kmem_cache_alloc_trace); -#endif - /** * kmem_cache_alloc_node - Allocate an object on the specified node * @cachep: The cache to allocate from. 
@@ -3568,25 +3552,6 @@ void *__kmem_cache_alloc_node(struct kmem_cache *cac= hep, gfp_t flags, orig_size, caller); } =20 -#ifdef CONFIG_TRACING -void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep, - gfp_t flags, - int nodeid, - size_t size) -{ - void *ret; - - ret =3D slab_alloc_node(cachep, NULL, flags, nodeid, size, _RET_IP_); - - ret =3D kasan_kmalloc(cachep, ret, size, flags); - trace_kmalloc_node(_RET_IP_, ret, cachep, - size, cachep->size, - flags, nodeid); - return ret; -} -EXPORT_SYMBOL(kmem_cache_alloc_node_trace); -#endif - #ifdef CONFIG_PRINTK void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab = *slab) { diff --git a/mm/slab_common.c b/mm/slab_common.c index c8242b4e2223..d8e8c41c12f1 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -1000,6 +1000,33 @@ size_t __ksize(const void *object) return slab_ksize(folio_slab(folio)->slab_cache); } EXPORT_SYMBOL(__ksize); + +#ifdef CONFIG_TRACING +void *kmalloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size) +{ + void *ret =3D __kmem_cache_alloc_node(s, gfpflags, NUMA_NO_NODE, + size, _RET_IP_); + + trace_kmalloc_node(_RET_IP_, ret, s, size, s->size, + gfpflags, NUMA_NO_NODE); + + ret =3D kasan_kmalloc(s, ret, size, gfpflags); + return ret; +} +EXPORT_SYMBOL(kmalloc_trace); + +void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags, + int node, size_t size) +{ + void *ret =3D __kmem_cache_alloc_node(s, gfpflags, node, size, _RET_IP_); + + trace_kmalloc_node(_RET_IP_, ret, s, size, s->size, gfpflags, node); + + ret =3D kasan_kmalloc(s, ret, size, gfpflags); + return ret; +} +EXPORT_SYMBOL(kmalloc_node_trace); +#endif /* !CONFIG_TRACING */ #endif /* !CONFIG_SLOB */ =20 gfp_t kmalloc_fix_flags(gfp_t flags) diff --git a/mm/slub.c b/mm/slub.c index cd49785d59e1..7d7fd9d4e8fa 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -3270,17 +3270,6 @@ void *__kmem_cache_alloc_node(struct kmem_cache *s, = gfp_t gfpflags, caller, orig_size); } =20 -#ifdef CONFIG_TRACING -void 
*kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t = size) -{ - void *ret =3D slab_alloc(s, NULL, gfpflags, _RET_IP_, size); - trace_kmalloc(_RET_IP_, ret, s, size, s->size, gfpflags); - ret =3D kasan_kmalloc(s, ret, size, gfpflags); - return ret; -} -EXPORT_SYMBOL(kmem_cache_alloc_trace); -#endif - void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node) { void *ret =3D slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, s->objec= t_size); @@ -3292,22 +3281,6 @@ void *kmem_cache_alloc_node(struct kmem_cache *s, gf= p_t gfpflags, int node) } EXPORT_SYMBOL(kmem_cache_alloc_node); =20 -#ifdef CONFIG_TRACING -void *kmem_cache_alloc_node_trace(struct kmem_cache *s, - gfp_t gfpflags, - int node, size_t size) -{ - void *ret =3D slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, size); - - trace_kmalloc_node(_RET_IP_, ret, s, - size, s->size, gfpflags, node); - - ret =3D kasan_kmalloc(s, ret, size, gfpflags); - return ret; -} -EXPORT_SYMBOL(kmem_cache_alloc_node_trace); -#endif - /* * Slow path handling. This may still be called frequently since objects * have a longer lifetime than the cpu slabs in most processing loads. 
--=20
2.32.0

From nobody Sat Apr 11 02:17:48 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 14/17] mm/slab_common: unify NUMA and UMA version of tracepoints
Date: Wed, 17 Aug 2022 19:18:23 +0900
Message-Id: <20220817101826.236819-15-42.hyeyoo@gmail.com>
In-Reply-To: <20220817101826.236819-1-42.hyeyoo@gmail.com>
References: <20220817101826.236819-1-42.hyeyoo@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Drop the kmem_alloc event class, rename kmem_alloc_node to kmem_alloc,
and remove the _node postfix from the NUMA versions of the tracepoints.
This will break some tools that depend on {kmem_cache_alloc,kmalloc}_node,
but at this point maintaining both the kmem_alloc and kmem_alloc_node
event classes no longer makes sense.
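The unification can be illustrated with a toy event outside the tracing framework: a single event always carries a node argument, and UMA-style callers pass a NUMA_NO_NODE sentinel instead of needing a separate _node event class. The names here are illustrative stand-ins for the tracepoints:

```c
#include <stdio.h>

#define NUMA_NO_NODE (-1)  /* sentinel: "no specific node requested" */

static char last_event[128];

/* Toy stand-in for the unified kmem_cache_alloc tracepoint: after this
 * change there is one event that always records a node, rather than a
 * kmem_alloc class for UMA and a kmem_alloc_node class for NUMA. */
static void trace_kmem_cache_alloc(const char *cache, int node)
{
	snprintf(last_event, sizeof(last_event), "%s node=%d", cache, node);
}
```

A UMA-path caller emits `trace_kmem_cache_alloc("kmalloc-64", NUMA_NO_NODE)` while a NUMA-path caller passes the real node id; consumers see one event format either way.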
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Reviewed-by: Vlastimil Babka --- include/trace/events/kmem.h | 60 ++----------------------------------- mm/slab.c | 9 +++--- mm/slab_common.c | 21 +++++-------- mm/slob.c | 20 ++++++------- mm/slub.c | 6 ++-- 5 files changed, 27 insertions(+), 89 deletions(-) diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h index 4cb51ace600d..e078ebcdc4b1 100644 --- a/include/trace/events/kmem.h +++ b/include/trace/events/kmem.h @@ -11,62 +11,6 @@ =20 DECLARE_EVENT_CLASS(kmem_alloc, =20 - TP_PROTO(unsigned long call_site, - const void *ptr, - struct kmem_cache *s, - size_t bytes_req, - size_t bytes_alloc, - gfp_t gfp_flags), - - TP_ARGS(call_site, ptr, s, bytes_req, bytes_alloc, gfp_flags), - - TP_STRUCT__entry( - __field( unsigned long, call_site ) - __field( const void *, ptr ) - __field( size_t, bytes_req ) - __field( size_t, bytes_alloc ) - __field( unsigned long, gfp_flags ) - __field( bool, accounted ) - ), - - TP_fast_assign( - __entry->call_site =3D call_site; - __entry->ptr =3D ptr; - __entry->bytes_req =3D bytes_req; - __entry->bytes_alloc =3D bytes_alloc; - __entry->gfp_flags =3D (__force unsigned long)gfp_flags; - __entry->accounted =3D IS_ENABLED(CONFIG_MEMCG_KMEM) ? - ((gfp_flags & __GFP_ACCOUNT) || - (s && s->flags & SLAB_ACCOUNT)) : false; - ), - - TP_printk("call_site=3D%pS ptr=3D%p bytes_req=3D%zu bytes_alloc=3D%zu gfp= _flags=3D%s accounted=3D%s", - (void *)__entry->call_site, - __entry->ptr, - __entry->bytes_req, - __entry->bytes_alloc, - show_gfp_flags(__entry->gfp_flags), - __entry->accounted ? 
"true" : "false") -); - -DEFINE_EVENT(kmem_alloc, kmalloc, - - TP_PROTO(unsigned long call_site, const void *ptr, struct kmem_cache *s, - size_t bytes_req, size_t bytes_alloc, gfp_t gfp_flags), - - TP_ARGS(call_site, ptr, s, bytes_req, bytes_alloc, gfp_flags) -); - -DEFINE_EVENT(kmem_alloc, kmem_cache_alloc, - - TP_PROTO(unsigned long call_site, const void *ptr, struct kmem_cache *s, - size_t bytes_req, size_t bytes_alloc, gfp_t gfp_flags), - - TP_ARGS(call_site, ptr, s, bytes_req, bytes_alloc, gfp_flags) -); - -DECLARE_EVENT_CLASS(kmem_alloc_node, - TP_PROTO(unsigned long call_site, const void *ptr, struct kmem_cache *s, @@ -109,7 +53,7 @@ DECLARE_EVENT_CLASS(kmem_alloc_node, __entry->accounted ? "true" : "false") ); =20 -DEFINE_EVENT(kmem_alloc_node, kmalloc_node, +DEFINE_EVENT(kmem_alloc, kmalloc, =20 TP_PROTO(unsigned long call_site, const void *ptr, struct kmem_cache *s, size_t bytes_req, size_t bytes_alloc, @@ -118,7 +62,7 @@ DEFINE_EVENT(kmem_alloc_node, kmalloc_node, TP_ARGS(call_site, ptr, s, bytes_req, bytes_alloc, gfp_flags, node) ); =20 -DEFINE_EVENT(kmem_alloc_node, kmem_cache_alloc_node, +DEFINE_EVENT(kmem_alloc, kmem_cache_alloc, =20 TP_PROTO(unsigned long call_site, const void *ptr, struct kmem_cache *s, size_t bytes_req, size_t bytes_alloc, diff --git a/mm/slab.c b/mm/slab.c index 8d9d0fbf9792..2fd400203ac2 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -3440,8 +3440,8 @@ void *__kmem_cache_alloc_lru(struct kmem_cache *cache= p, struct list_lru *lru, { void *ret =3D slab_alloc(cachep, lru, flags, cachep->object_size, _RET_IP= _); =20 - trace_kmem_cache_alloc(_RET_IP_, ret, cachep, - cachep->object_size, cachep->size, flags); + trace_kmem_cache_alloc(_RET_IP_, ret, cachep, cachep->object_size, + cachep->size, flags, NUMA_NO_NODE); =20 return ret; } @@ -3536,9 +3536,8 @@ void *kmem_cache_alloc_node(struct kmem_cache *cachep= , gfp_t flags, int nodeid) { void *ret =3D slab_alloc_node(cachep, NULL, flags, nodeid, cachep->object= _size, _RET_IP_); =20 - 
trace_kmem_cache_alloc_node(_RET_IP_, ret, cachep, - cachep->object_size, cachep->size, - flags, nodeid); + trace_kmem_cache_alloc(_RET_IP_, ret, cachep, cachep->object_size, + cachep->size, flags, nodeid); =20 return ret; } diff --git a/mm/slab_common.c b/mm/slab_common.c index d8e8c41c12f1..f34be57b00c8 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -907,9 +907,8 @@ void *__do_kmalloc_node(size_t size, gfp_t flags, int n= ode, unsigned long caller =20 if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) { ret =3D __kmalloc_large_node(size, flags, node); - trace_kmalloc_node(caller, ret, NULL, - size, PAGE_SIZE << get_order(size), - flags, node); + trace_kmalloc(_RET_IP_, ret, NULL, size, + PAGE_SIZE << get_order(size), flags, node); return ret; } =20 @@ -920,8 +919,7 @@ void *__do_kmalloc_node(size_t size, gfp_t flags, int n= ode, unsigned long caller =20 ret =3D __kmem_cache_alloc_node(s, flags, node, size, caller); ret =3D kasan_kmalloc(s, ret, size, flags); - trace_kmalloc_node(caller, ret, s, size, - s->size, flags, node); + trace_kmalloc(_RET_IP_, ret, s, size, s->size, flags, node); return ret; } =20 @@ -1007,8 +1005,7 @@ void *kmalloc_trace(struct kmem_cache *s, gfp_t gfpfl= ags, size_t size) void *ret =3D __kmem_cache_alloc_node(s, gfpflags, NUMA_NO_NODE, size, _RET_IP_); =20 - trace_kmalloc_node(_RET_IP_, ret, s, size, s->size, - gfpflags, NUMA_NO_NODE); + trace_kmalloc(_RET_IP_, ret, s, size, s->size, gfpflags, NUMA_NO_NODE); =20 ret =3D kasan_kmalloc(s, ret, size, gfpflags); return ret; @@ -1020,7 +1017,7 @@ void *kmalloc_node_trace(struct kmem_cache *s, gfp_t = gfpflags, { void *ret =3D __kmem_cache_alloc_node(s, gfpflags, node, size, _RET_IP_); =20 - trace_kmalloc_node(_RET_IP_, ret, s, size, s->size, gfpflags, node); + trace_kmalloc(_RET_IP_, ret, s, size, s->size, gfpflags, node); =20 ret =3D kasan_kmalloc(s, ret, size, gfpflags); return ret; @@ -1076,7 +1073,7 @@ void *kmalloc_large(size_t size, gfp_t flags) void *ret =3D 
__kmalloc_large_node(size, flags, NUMA_NO_NODE); =20 trace_kmalloc(_RET_IP_, ret, NULL, size, - PAGE_SIZE << get_order(size), flags); + PAGE_SIZE << get_order(size), flags, NUMA_NO_NODE); return ret; } EXPORT_SYMBOL(kmalloc_large); @@ -1085,8 +1082,8 @@ void *kmalloc_large_node(size_t size, gfp_t flags, in= t node) { void *ret =3D __kmalloc_large_node(size, flags, node); =20 - trace_kmalloc_node(_RET_IP_, ret, NULL, size, - PAGE_SIZE << get_order(size), flags, node); + trace_kmalloc(_RET_IP_, ret, NULL, size, + PAGE_SIZE << get_order(size), flags, node); return ret; } EXPORT_SYMBOL(kmalloc_large_node); @@ -1421,8 +1418,6 @@ EXPORT_SYMBOL(ksize); /* Tracepoints definitions. */ EXPORT_TRACEPOINT_SYMBOL(kmalloc); EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc); -EXPORT_TRACEPOINT_SYMBOL(kmalloc_node); -EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc_node); EXPORT_TRACEPOINT_SYMBOL(kfree); EXPORT_TRACEPOINT_SYMBOL(kmem_cache_free); =20 diff --git a/mm/slob.c b/mm/slob.c index 96b08acd72ce..3208c56d8f82 100644 --- a/mm/slob.c +++ b/mm/slob.c @@ -507,8 +507,8 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, uns= igned long caller) *m =3D size; ret =3D (void *)m + minalign; =20 - trace_kmalloc_node(caller, ret, NULL, - size, size + minalign, gfp, node); + trace_kmalloc(caller, ret, NULL, size, + size + minalign, gfp, node); } else { unsigned int order =3D get_order(size); =20 @@ -516,8 +516,8 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, uns= igned long caller) gfp |=3D __GFP_COMP; ret =3D slob_new_pages(gfp, order, node); =20 - trace_kmalloc_node(caller, ret, NULL, - size, PAGE_SIZE << order, gfp, node); + trace_kmalloc(caller, ret, NULL, size, + PAGE_SIZE << order, gfp, node); } =20 kmemleak_alloc(ret, size, 1, gfp); @@ -608,14 +608,14 @@ static void *slob_alloc_node(struct kmem_cache *c, gf= p_t flags, int node) =20 if (c->size < PAGE_SIZE) { b =3D slob_alloc(c->size, flags, c->align, node, 0); - trace_kmem_cache_alloc_node(_RET_IP_, b, NULL, c->object_size, - 
SLOB_UNITS(c->size) * SLOB_UNIT, - flags, node); + trace_kmem_cache_alloc(_RET_IP_, b, NULL, c->object_size, + SLOB_UNITS(c->size) * SLOB_UNIT, + flags, node); } else { b =3D slob_new_pages(flags, get_order(c->size), node); - trace_kmem_cache_alloc_node(_RET_IP_, b, NULL, c->object_size, - PAGE_SIZE << get_order(c->size), - flags, node); + trace_kmem_cache_alloc(_RET_IP_, b, NULL, c->object_size, + PAGE_SIZE << get_order(c->size), + flags, node); } =20 if (b && c->ctor) { diff --git a/mm/slub.c b/mm/slub.c index 7d7fd9d4e8fa..22e4ccf06638 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -3244,7 +3244,7 @@ void *__kmem_cache_alloc_lru(struct kmem_cache *s, st= ruct list_lru *lru, void *ret =3D slab_alloc(s, lru, gfpflags, _RET_IP_, s->object_size); =20 trace_kmem_cache_alloc(_RET_IP_, ret, s, s->object_size, - s->size, gfpflags); + s->size, gfpflags, NUMA_NO_NODE); =20 return ret; } @@ -3274,8 +3274,8 @@ void *kmem_cache_alloc_node(struct kmem_cache *s, gfp= _t gfpflags, int node) { void *ret =3D slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, s->objec= t_size); =20 - trace_kmem_cache_alloc_node(_RET_IP_, ret, s, - s->object_size, s->size, gfpflags, node); + trace_kmem_cache_alloc(_RET_IP_, ret, s, s->object_size, + s->size, gfpflags, node); =20 return ret; } --=20 2.32.0 From nobody Sat Apr 11 02:17:48 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 65A3AC25B08 for ; Wed, 17 Aug 2022 10:20:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239033AbiHQKUn (ORCPT ); Wed, 17 Aug 2022 06:20:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55078 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239059AbiHQKUG (ORCPT ); Wed, 17 Aug 2022 06:20:06 -0400 Received: from mail-pf1-x435.google.com 
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vasily Averin
Subject: [PATCH v4 15/17] mm/slab_common: drop kmem_alloc & avoid dereferencing fields when not using
Date: Wed, 17 Aug 2022 19:18:24 +0900
Message-Id: <20220817101826.236819-16-42.hyeyoo@gmail.com>
In-Reply-To: <20220817101826.236819-1-42.hyeyoo@gmail.com>
References: <20220817101826.236819-1-42.hyeyoo@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Drop the kmem_alloc event class and define kmalloc and kmem_cache_alloc
using the TRACE_EVENT() macro. This patch then does the following:

- Do not pass a pointer to struct kmem_cache to trace_kmalloc;
  the gfp flag is enough to know if the allocation is accounted.
- Avoid dereferencing s->object_size and s->size when not using the
  kmem_cache_alloc event.
- Avoid dereferencing s->name when not using the kmem_cache_free event.
- Adjust s->size to SLOB_UNITS(s->size) * SLOB_UNIT in SLOB

Cc: Vasily Averin
Suggested-by: Vlastimil Babka
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/trace/events/kmem.h | 64 ++++++++++++++++++++++++-------------
 mm/slab.c                   |  8 ++---
 mm/slab_common.c            | 16 +++++-----
 mm/slob.c                   | 19 +++++------
 mm/slub.c                   |  8 ++---
 5 files changed, 64 insertions(+), 51 deletions(-)

diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index e078ebcdc4b1..8c6f96604244 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -9,17 +9,15 @@
 #include
 #include
 
-DECLARE_EVENT_CLASS(kmem_alloc,
+TRACE_EVENT(kmem_cache_alloc,
 
 	TP_PROTO(unsigned long call_site,
 		 const void *ptr,
 		 struct kmem_cache *s,
-		 size_t bytes_req,
-		 size_t bytes_alloc,
 		 gfp_t gfp_flags,
 		 int node),
 
-	TP_ARGS(call_site, ptr, s, bytes_req, bytes_alloc, gfp_flags, node),
+	TP_ARGS(call_site, ptr, s, gfp_flags, node),
 
 	TP_STRUCT__entry(
 		__field(	unsigned long,	call_site	)
@@ -34,13 +32,13 @@ DECLARE_EVENT_CLASS(kmem_alloc,
 	TP_fast_assign(
 		__entry->call_site	= call_site;
 		__entry->ptr		= ptr;
-		__entry->bytes_req	= bytes_req;
-		__entry->bytes_alloc	= bytes_alloc;
+		__entry->bytes_req	= s->object_size;
+		__entry->bytes_alloc	= s->size;
 		__entry->gfp_flags	= (__force unsigned long)gfp_flags;
 		__entry->node		= node;
 		__entry->accounted	= IS_ENABLED(CONFIG_MEMCG_KMEM) ?
 					  ((gfp_flags & __GFP_ACCOUNT) ||
-					  (s && s->flags & SLAB_ACCOUNT)) : false;
+					  (s->flags & SLAB_ACCOUNT)) : false;
 	),
 
 	TP_printk("call_site=%pS ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s node=%d accounted=%s",
@@ -53,22 +51,44 @@ DECLARE_EVENT_CLASS(kmem_alloc,
 		__entry->accounted ? "true" : "false")
 );
 
-DEFINE_EVENT(kmem_alloc, kmalloc,
+TRACE_EVENT(kmalloc,
 
-	TP_PROTO(unsigned long call_site, const void *ptr,
-		 struct kmem_cache *s, size_t bytes_req, size_t bytes_alloc,
-		 gfp_t gfp_flags, int node),
+	TP_PROTO(unsigned long call_site,
+		 const void *ptr,
+		 size_t bytes_req,
+		 size_t bytes_alloc,
+		 gfp_t gfp_flags,
+		 int node),
 
-	TP_ARGS(call_site, ptr, s, bytes_req, bytes_alloc, gfp_flags, node)
-);
+	TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node),
 
-DEFINE_EVENT(kmem_alloc, kmem_cache_alloc,
+	TP_STRUCT__entry(
+		__field(	unsigned long,	call_site	)
+		__field(	const void *,	ptr		)
+		__field(	size_t,		bytes_req	)
+		__field(	size_t,		bytes_alloc	)
+		__field(	unsigned long,	gfp_flags	)
+		__field(	int,		node		)
+	),
 
-	TP_PROTO(unsigned long call_site, const void *ptr,
-		 struct kmem_cache *s, size_t bytes_req, size_t bytes_alloc,
-		 gfp_t gfp_flags, int node),
+	TP_fast_assign(
+		__entry->call_site	= call_site;
+		__entry->ptr		= ptr;
+		__entry->bytes_req	= bytes_req;
+		__entry->bytes_alloc	= bytes_alloc;
+		__entry->gfp_flags	= (__force unsigned long)gfp_flags;
+		__entry->node		= node;
+	),
 
-	TP_ARGS(call_site, ptr, s, bytes_req, bytes_alloc, gfp_flags, node)
+	TP_printk("call_site=%pS ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s node=%d accounted=%s",
+		(void *)__entry->call_site,
+		__entry->ptr,
+		__entry->bytes_req,
+		__entry->bytes_alloc,
+		show_gfp_flags(__entry->gfp_flags),
+		__entry->node,
+		(IS_ENABLED(CONFIG_MEMCG_KMEM) &&
+		 (__entry->gfp_flags & __GFP_ACCOUNT)) ? "true" : "false")
 );
 
 TRACE_EVENT(kfree,
@@ -93,20 +113,20 @@ TRACE_EVENT(kfree,
 
 TRACE_EVENT(kmem_cache_free,
 
-	TP_PROTO(unsigned long call_site, const void *ptr, const char *name),
+	TP_PROTO(unsigned long call_site, const void *ptr, const struct kmem_cache *s),
 
-	TP_ARGS(call_site, ptr, name),
+	TP_ARGS(call_site, ptr, s),
 
 	TP_STRUCT__entry(
 		__field(	unsigned long,	call_site	)
 		__field(	const void *,	ptr		)
-		__string(	name,	name	)
+		__string(	name,	s->name	)
 	),
 
 	TP_fast_assign(
 		__entry->call_site	= call_site;
 		__entry->ptr		= ptr;
-		__assign_str(name, name);
+		__assign_str(name, s->name);
 	),
 
 	TP_printk("call_site=%pS ptr=%p name=%s",
diff --git a/mm/slab.c b/mm/slab.c
index 2fd400203ac2..a5486ff8362a 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3440,8 +3440,7 @@ void *__kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru,
 {
 	void *ret = slab_alloc(cachep, lru, flags, cachep->object_size, _RET_IP_);
 
-	trace_kmem_cache_alloc(_RET_IP_, ret, cachep, cachep->object_size,
-			       cachep->size, flags, NUMA_NO_NODE);
+	trace_kmem_cache_alloc(_RET_IP_, ret, cachep, flags, NUMA_NO_NODE);
 
 	return ret;
 }
@@ -3536,8 +3535,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
 	void *ret = slab_alloc_node(cachep, NULL, flags, nodeid, cachep->object_size, _RET_IP_);
 
-	trace_kmem_cache_alloc(_RET_IP_, ret, cachep, cachep->object_size,
-			       cachep->size, flags, nodeid);
+	trace_kmem_cache_alloc(_RET_IP_, ret, cachep, flags, nodeid);
 
 	return ret;
 }
@@ -3607,7 +3605,7 @@ void kmem_cache_free(struct kmem_cache *cachep, void *objp)
 	if (!cachep)
 		return;
 
-	trace_kmem_cache_free(_RET_IP_, objp, cachep->name);
+	trace_kmem_cache_free(_RET_IP_, objp, cachep);
 	__do_kmem_cache_free(cachep, objp, _RET_IP_);
 }
 EXPORT_SYMBOL(kmem_cache_free);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index f34be57b00c8..e53016c9a6e9 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -907,7 +907,7 @@ void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
 
 	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
 		ret = __kmalloc_large_node(size, flags, node);
-		trace_kmalloc(_RET_IP_, ret, NULL, size,
+		trace_kmalloc(_RET_IP_, ret, size,
 			      PAGE_SIZE << get_order(size), flags, node);
 		return ret;
 	}
@@ -919,7 +919,7 @@ void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
 
 	ret = __kmem_cache_alloc_node(s, flags, node, size, caller);
 	ret = kasan_kmalloc(s, ret, size, flags);
-	trace_kmalloc(_RET_IP_, ret, s, size, s->size, flags, node);
+	trace_kmalloc(_RET_IP_, ret, size, s->size, flags, node);
 	return ret;
 }
 
@@ -1005,7 +1005,7 @@ void *kmalloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 	void *ret = __kmem_cache_alloc_node(s, gfpflags, NUMA_NO_NODE,
 					    size, _RET_IP_);
 
-	trace_kmalloc(_RET_IP_, ret, s, size, s->size, gfpflags, NUMA_NO_NODE);
+	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags, NUMA_NO_NODE);
 
 	ret = kasan_kmalloc(s, ret, size, gfpflags);
 	return ret;
@@ -1017,7 +1017,7 @@ void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
 {
 	void *ret = __kmem_cache_alloc_node(s, gfpflags, node, size, _RET_IP_);
 
-	trace_kmalloc(_RET_IP_, ret, s, size, s->size, gfpflags, node);
+	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags, node);
 
 	ret = kasan_kmalloc(s, ret, size, gfpflags);
 	return ret;
@@ -1072,8 +1072,8 @@ void *kmalloc_large(size_t size, gfp_t flags)
 {
 	void *ret = __kmalloc_large_node(size, flags, NUMA_NO_NODE);
 
-	trace_kmalloc(_RET_IP_, ret, NULL, size,
-		      PAGE_SIZE << get_order(size), flags, NUMA_NO_NODE);
+	trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << get_order(size),
+		      flags, NUMA_NO_NODE);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_large);
@@ -1082,8 +1082,8 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 {
 	void *ret = __kmalloc_large_node(size, flags, node);
 
-	trace_kmalloc(_RET_IP_, ret, NULL, size,
-		      PAGE_SIZE << get_order(size), flags, node);
+	trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << get_order(size),
+		      flags, node);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_large_node);
diff --git a/mm/slob.c b/mm/slob.c
index 3208c56d8f82..771af84576bf 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -507,8 +507,7 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
 		*m = size;
 		ret = (void *)m + minalign;
 
-		trace_kmalloc(caller, ret, NULL, size,
-			      size + minalign, gfp, node);
+		trace_kmalloc(caller, ret, size, size + minalign, gfp, node);
 	} else {
 		unsigned int order = get_order(size);
 
@@ -516,8 +515,7 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
 			gfp |= __GFP_COMP;
 		ret = slob_new_pages(gfp, order, node);
 
-		trace_kmalloc(caller, ret, NULL, size,
-			      PAGE_SIZE << order, gfp, node);
+		trace_kmalloc(caller, ret, size, PAGE_SIZE << order, gfp, node);
 	}
 
 	kmemleak_alloc(ret, size, 1, gfp);
@@ -594,6 +592,9 @@ int __kmem_cache_create(struct kmem_cache *c, slab_flags_t flags)
 		/* leave room for rcu footer at the end of object */
 		c->size += sizeof(struct slob_rcu);
 	}
+
+	/* Actual size allocated */
+	c->size = SLOB_UNITS(c->size) * SLOB_UNIT;
 	c->flags = flags;
 	return 0;
 }
@@ -608,14 +609,10 @@ static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
 
 	if (c->size < PAGE_SIZE) {
 		b = slob_alloc(c->size, flags, c->align, node, 0);
-		trace_kmem_cache_alloc(_RET_IP_, b, NULL, c->object_size,
-				       SLOB_UNITS(c->size) * SLOB_UNIT,
-				       flags, node);
+		trace_kmem_cache_alloc(_RET_IP_, b, c, flags, node);
 	} else {
 		b = slob_new_pages(flags, get_order(c->size), node);
-		trace_kmem_cache_alloc(_RET_IP_, b, NULL, c->object_size,
-				       PAGE_SIZE << get_order(c->size),
-				       flags, node);
+		trace_kmem_cache_alloc(_RET_IP_, b, c, flags, node);
 	}
 
 	if (b && c->ctor) {
@@ -671,7 +668,7 @@ static void kmem_rcu_free(struct rcu_head *head)
 void kmem_cache_free(struct kmem_cache *c, void *b)
 {
 	kmemleak_free_recursive(b, c->flags);
-	trace_kmem_cache_free(_RET_IP_, b, c->name);
+	trace_kmem_cache_free(_RET_IP_, b, c);
 	if (unlikely(c->flags & SLAB_TYPESAFE_BY_RCU)) {
 		struct slob_rcu *slob_rcu;
 		slob_rcu = b + (c->size - sizeof(struct slob_rcu));
diff --git a/mm/slub.c b/mm/slub.c
index 22e4ccf06638..8083a6ee5f15 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3243,8 +3243,7 @@ void *__kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
 {
 	void *ret = slab_alloc(s, lru, gfpflags, _RET_IP_, s->object_size);
 
-	trace_kmem_cache_alloc(_RET_IP_, ret, s, s->object_size,
-			       s->size, gfpflags, NUMA_NO_NODE);
+	trace_kmem_cache_alloc(_RET_IP_, ret, s, gfpflags, NUMA_NO_NODE);
 
 	return ret;
 }
@@ -3274,8 +3273,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
 {
 	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, s->object_size);
 
-	trace_kmem_cache_alloc(_RET_IP_, ret, s, s->object_size,
-			       s->size, gfpflags, node);
+	trace_kmem_cache_alloc(_RET_IP_, ret, s, gfpflags, node);
 
 	return ret;
 }
@@ -3517,7 +3515,7 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
 	s = cache_from_obj(s, x);
 	if (!s)
 		return;
-	trace_kmem_cache_free(_RET_IP_, x, s->name);
+	trace_kmem_cache_free(_RET_IP_, x, s);
 	slab_free(s, virt_to_slab(x), x, NULL, &x, 1, _RET_IP_);
 }
 EXPORT_SYMBOL(kmem_cache_free);
-- 
2.32.0
From nobody Sat Apr 11 02:17:48 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v4 16/17] mm/slab_common: move declaration of __ksize() to mm/slab.h
Date: Wed, 17 Aug 2022 19:18:25 +0900
Message-Id: <20220817101826.236819-17-42.hyeyoo@gmail.com>
In-Reply-To: <20220817101826.236819-1-42.hyeyoo@gmail.com>
References: <20220817101826.236819-1-42.hyeyoo@gmail.com>

__ksize() is only called by KASAN. Remove the export symbol and move the
declaration to mm/slab.h, as we don't want to grow its callers.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h |  1 -
 mm/slab.h            |  2 ++
 mm/slab_common.c     | 11 +----------
 mm/slob.c            |  1 -
 4 files changed, 3 insertions(+), 12 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index c8e485ce8815..9b592e611cb1 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -187,7 +187,6 @@ int kmem_cache_shrink(struct kmem_cache *s);
 void * __must_check krealloc(const void *objp, size_t new_size, gfp_t flags) __alloc_size(2);
 void kfree(const void *objp);
 void kfree_sensitive(const void *objp);
-size_t __ksize(const void *objp);
 size_t ksize(const void *objp);
 #ifdef CONFIG_PRINTK
 bool kmem_valid_obj(void *object);
diff --git a/mm/slab.h b/mm/slab.h
index 4d8330d57573..65023f000d42 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -668,6 +668,8 @@ void free_large_kmalloc(struct folio *folio, void *object);
 
 #endif /* CONFIG_SLOB */
 
+size_t __ksize(const void *objp);
+
 static inline size_t slab_ksize(const struct kmem_cache *s)
 {
 #ifndef CONFIG_SLUB
diff --git a/mm/slab_common.c b/mm/slab_common.c
index e53016c9a6e9..9c273a5fb0d7 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -974,15 +974,7 @@ void kfree(const void *object)
 }
 EXPORT_SYMBOL(kfree);
 
-/**
- * __ksize -- Uninstrumented ksize.
- * @objp: pointer to the object
- *
- * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same
- * safety checks as ksize() with KASAN instrumentation enabled.
- *
- * Return: size of the actual memory used by @objp in bytes
- */
+/* Uninstrumented ksize. Only called by KASAN. */
 size_t __ksize(const void *object)
 {
 	struct folio *folio;
@@ -997,7 +989,6 @@ size_t __ksize(const void *object)
 
 	return slab_ksize(folio_slab(folio)->slab_cache);
 }
-EXPORT_SYMBOL(__ksize);
 
 #ifdef CONFIG_TRACING
 void *kmalloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
diff --git a/mm/slob.c b/mm/slob.c
index 771af84576bf..45a061b8ba38 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -584,7 +584,6 @@ size_t __ksize(const void *block)
 	m = (unsigned int *)(block - align);
 	return SLOB_UNITS(*m) * SLOB_UNIT;
 }
-EXPORT_SYMBOL(__ksize);
 
 int __kmem_cache_create(struct kmem_cache *c, slab_flags_t flags)
 {
-- 
2.32.0
From nobody Sat Apr 11 02:17:48 2026
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Marco Elver
Subject: [PATCH v4 17/17] mm/sl[au]b: check if large object is valid in __ksize()
Date: Wed, 17 Aug 2022 19:18:26 +0900
Message-Id: <20220817101826.236819-18-42.hyeyoo@gmail.com>
In-Reply-To: <20220817101826.236819-1-42.hyeyoo@gmail.com>
References: <20220817101826.236819-1-42.hyeyoo@gmail.com>

If the address of a large object is not the beginning of its folio, or
the folio is too small, the object must be invalid. BUG() in such cases.

Cc: Marco Elver
Suggested-by: Vlastimil Babka
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 mm/slab_common.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 9c273a5fb0d7..98d029212682 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -984,8 +984,11 @@ size_t __ksize(const void *object)
 
 	folio = virt_to_folio(object);
 
-	if (unlikely(!folio_test_slab(folio)))
+	if (unlikely(!folio_test_slab(folio))) {
+		BUG_ON(folio_size(folio) <= KMALLOC_MAX_CACHE_SIZE);
+		BUG_ON(object != folio_address(folio));
 		return folio_size(folio);
+	}
 
 	return slab_ksize(folio_slab(folio)->slab_cache);
 }
-- 
2.32.0