From nobody Thu Dec 18 07:17:04 2025
From: sxwjean@me.com
To: cl@linux.com, penberg@kernel.org, rientjes@google.com, iamjoonsoo.kim@lge.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com
Cc: corbet@lwn.net, linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/4] Documentation: kernel-parameters: remove slab_max_order
Date: Mon, 20 Nov 2023 17:12:11 +0800
Message-Id: <20231120091214.150502-2-sxwjean@me.com>
In-Reply-To: <20231120091214.150502-1-sxwjean@me.com>
References: <20231120091214.150502-1-sxwjean@me.com>

From: Xiongwei Song

The SLAB allocator has been removed, so the slab_max_order boot parameter
no longer has any users. Remove its documentation entry.
Signed-off-by: Xiongwei Song
---
 Documentation/admin-guide/kernel-parameters.txt | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 65731b060e3f..c7709a11f8ce 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5887,12 +5887,6 @@
 			own.
 			For more information see Documentation/mm/slub.rst.
 
-	slab_max_order=	[MM, SLAB]
-			Determines the maximum allowed order for slabs.
-			A high setting may cause OOMs due to memory
-			fragmentation. Defaults to 1 for systems with
-			more than 32MB of RAM, 0 otherwise.
-
 	slub_debug[=options[,slabs][;[options[,slabs]]...]
 			[MM, SLUB] Enabling slub_debug allows one to determine
 			the culprit if slab objects become corrupted. Enabling
-- 
2.34.1

From nobody Thu Dec 18 07:17:04 2025
From: sxwjean@me.com
To: cl@linux.com, penberg@kernel.org, rientjes@google.com, iamjoonsoo.kim@lge.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com
Cc: corbet@lwn.net, linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/4] mm/slab: remove slab_nomerge and slab_merge
Date: Mon, 20 Nov 2023 17:12:12 +0800
Message-Id: <20231120091214.150502-3-sxwjean@me.com>
In-Reply-To: <20231120091214.150502-1-sxwjean@me.com>
References: <20231120091214.150502-1-sxwjean@me.com>

From: Xiongwei Song

The SLAB allocator has been removed, so remove the related slab_nomerge
and slab_merge boot parameters as well.
Also rename the merge control flag from slab_nomerge to slub_nomerge.

Signed-off-by: Xiongwei Song
---
 Documentation/admin-guide/kernel-parameters.txt | 11 ++---------
 mm/Kconfig                                      |  2 +-
 mm/slab_common.c                                | 13 +++++--------
 3 files changed, 8 insertions(+), 18 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index c7709a11f8ce..afca9ff7c9f0 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5870,11 +5870,11 @@
 
 	slram=		[HW,MTD]
 
-	slab_merge	[MM]
+	slub_merge	[MM]
 			Enable merging of slabs with similar size when the
 			kernel is built without CONFIG_SLAB_MERGE_DEFAULT.
 
-	slab_nomerge	[MM]
+	slub_nomerge	[MM]
 			Disable merging of slabs with similar size.
 			May be necessary if there is some reason to distinguish
 			allocs to different slabs, especially in hardened
@@ -5915,13 +5915,6 @@
 			lower than slub_max_order.
 			For more information see Documentation/mm/slub.rst.
 
-	slub_merge	[MM, SLUB]
-			Same with slab_merge.
-
-	slub_nomerge	[MM, SLUB]
-			Same with slab_nomerge. This is supported for legacy.
-			See slab_nomerge for more information.
-
 	smart2=		[HW]
 			Format: <io1>[,<io2>[,...,<io8>]]
 
diff --git a/mm/Kconfig b/mm/Kconfig
index 766aa8f8e553..87c3f2e1d0d3 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -255,7 +255,7 @@ config SLAB_MERGE_DEFAULT
 	  cache layout), which makes such heap attacks easier to exploit
 	  by attackers. By keeping caches unmerged, these kinds of exploits
 	  can usually only damage objects in the same cache. To disable
-	  merging at runtime, "slab_nomerge" can be passed on the kernel
+	  merging at runtime, "slub_nomerge" can be passed on the kernel
 	  command line.
 
 config SLAB_FREELIST_RANDOM
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 238293b1dbe1..d707abd31926 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -58,26 +58,23 @@ static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
 /*
  * Merge control. If this is set then no merging of slab caches will occur.
  */
-static bool slab_nomerge = !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT);
+static bool slub_nomerge = !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT);
 
 static int __init setup_slab_nomerge(char *str)
 {
-	slab_nomerge = true;
+	slub_nomerge = true;
 	return 1;
 }
 
 static int __init setup_slab_merge(char *str)
 {
-	slab_nomerge = false;
+	slub_nomerge = false;
 	return 1;
 }
 
 __setup_param("slub_nomerge", slub_nomerge, setup_slab_nomerge, 0);
 __setup_param("slub_merge", slub_merge, setup_slab_merge, 0);
 
-__setup("slab_nomerge", setup_slab_nomerge);
-__setup("slab_merge", setup_slab_merge);
-
 /*
  * Determine the size of a slab object
  */
@@ -138,7 +135,7 @@ static unsigned int calculate_alignment(slab_flags_t flags,
  */
 int slab_unmergeable(struct kmem_cache *s)
 {
-	if (slab_nomerge || (s->flags & SLAB_NEVER_MERGE))
+	if (slub_nomerge || (s->flags & SLAB_NEVER_MERGE))
 		return 1;
 
 	if (s->ctor)
@@ -163,7 +160,7 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
 {
 	struct kmem_cache *s;
 
-	if (slab_nomerge)
+	if (slub_nomerge)
 		return NULL;
 
 	if (ctor)
-- 
2.34.1

From nobody Thu Dec 18 07:17:04 2025
From: sxwjean@me.com
To: cl@linux.com, penberg@kernel.org, rientjes@google.com, iamjoonsoo.kim@lge.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com
Cc: corbet@lwn.net, linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 3/4] mm/slab: make calculate_alignment() public
Date: Mon, 20 Nov 2023 17:12:13 +0800
Message-Id: <20231120091214.150502-4-sxwjean@me.com>
In-Reply-To: <20231120091214.150502-1-sxwjean@me.com>
References: <20231120091214.150502-1-sxwjean@me.com>

From: Xiongwei Song

We are going to move the slab merge code from slab_common.c to slub.c;
find_mergeable() calls calculate_alignment(), so make it public first.

Signed-off-by: Xiongwei Song
---
 mm/slab.h        | 2 ++
 mm/slab_common.c | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/slab.h b/mm/slab.h
index eb04c8a5dbd1..8d20f8c6269d 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -427,6 +427,8 @@ extern void create_boot_cache(struct kmem_cache *, const char *name,
 			unsigned int size, slab_flags_t flags,
 			unsigned int useroffset, unsigned int usersize);
 
+unsigned int calculate_alignment(slab_flags_t flags,
+		unsigned int align, unsigned int size);
 int slab_unmergeable(struct kmem_cache *s);
 struct kmem_cache *find_mergeable(unsigned size, unsigned align,
 		slab_flags_t flags, const char *name, void (*ctor)(void *));
diff --git a/mm/slab_common.c b/mm/slab_common.c
index d707abd31926..62eb77fdedf2 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -106,7 +106,7 @@ static inline int kmem_cache_sanity_check(const char *name, unsigned int size)
  * Figure out what the alignment of the objects will be given a set of
  * flags, a user specified alignment and the size of the objects.
  */
-static unsigned int calculate_alignment(slab_flags_t flags,
+unsigned int calculate_alignment(slab_flags_t flags,
 		unsigned int align, unsigned int size)
 {
 	/*
-- 
2.34.1

From nobody Thu Dec 18 07:17:04 2025
From: sxwjean@me.com
To: cl@linux.com, penberg@kernel.org, rientjes@google.com, iamjoonsoo.kim@lge.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com
Cc: corbet@lwn.net, linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 4/4] mm/slab: move slab merge from slab_common.c to slub.c
Date: Mon, 20 Nov 2023 17:12:14 +0800
Message-Id: <20231120091214.150502-5-sxwjean@me.com>
In-Reply-To: <20231120091214.150502-1-sxwjean@me.com>
References: <20231120091214.150502-1-sxwjean@me.com>

From: Xiongwei Song

The SLAB allocator has been removed, so SLUB is now the only user of the
slab merge code. This commit mostly reverts commit 423c929cbbec
("mm/slab_common: commonize slab merge logic") by moving the merge code
back into slub.c. Also change the prefix of the merge-related functions,
variables and definitions from "slab/SLAB" to "slub/SLUB".
Signed-off-by: Xiongwei Song
---
 mm/slab.h        |   3 --
 mm/slab_common.c |  98 ----------------------------------------------
 mm/slub.c        | 100 ++++++++++++++++++++++++++++++++++++++++++++++-
 3 files changed, 99 insertions(+), 102 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 8d20f8c6269d..cd52e705ce28 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -429,9 +429,6 @@ extern void create_boot_cache(struct kmem_cache *, const char *name,
 
 unsigned int calculate_alignment(slab_flags_t flags,
 		unsigned int align, unsigned int size);
-int slab_unmergeable(struct kmem_cache *s);
-struct kmem_cache *find_mergeable(unsigned size, unsigned align,
-		slab_flags_t flags, const char *name, void (*ctor)(void *));
 struct kmem_cache *
 __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 		   slab_flags_t flags, void (*ctor)(void *));
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 62eb77fdedf2..6960ae5c35ee 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -45,36 +45,6 @@ static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work);
 static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
 		    slab_caches_to_rcu_destroy_workfn);
 
-/*
- * Set of flags that will prevent slab merging
- */
-#define SLAB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
-		SLAB_TRACE | SLAB_TYPESAFE_BY_RCU | SLAB_NOLEAKTRACE | \
-		SLAB_FAILSLAB | SLAB_NO_MERGE | kasan_never_merge())
-
-#define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
-			 SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
-
-/*
- * Merge control. If this is set then no merging of slab caches will occur.
- */
-static bool slub_nomerge = !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT);
-
-static int __init setup_slab_nomerge(char *str)
-{
-	slub_nomerge = true;
-	return 1;
-}
-
-static int __init setup_slab_merge(char *str)
-{
-	slub_nomerge = false;
-	return 1;
-}
-
-__setup_param("slub_nomerge", slub_nomerge, setup_slab_nomerge, 0);
-__setup_param("slub_merge", slub_merge, setup_slab_merge, 0);
-
 /*
  * Determine the size of a slab object
  */
@@ -130,74 +100,6 @@ unsigned int calculate_alignment(slab_flags_t flags,
 	return ALIGN(align, sizeof(void *));
 }
 
-/*
- * Find a mergeable slab cache
- */
-int slab_unmergeable(struct kmem_cache *s)
-{
-	if (slub_nomerge || (s->flags & SLAB_NEVER_MERGE))
-		return 1;
-
-	if (s->ctor)
-		return 1;
-
-#ifdef CONFIG_HARDENED_USERCOPY
-	if (s->usersize)
-		return 1;
-#endif
-
-	/*
-	 * We may have set a slab to be unmergeable during bootstrap.
-	 */
-	if (s->refcount < 0)
-		return 1;
-
-	return 0;
-}
-
-struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
-		slab_flags_t flags, const char *name, void (*ctor)(void *))
-{
-	struct kmem_cache *s;
-
-	if (slub_nomerge)
-		return NULL;
-
-	if (ctor)
-		return NULL;
-
-	size = ALIGN(size, sizeof(void *));
-	align = calculate_alignment(flags, align, size);
-	size = ALIGN(size, align);
-	flags = kmem_cache_flags(size, flags, name);
-
-	if (flags & SLAB_NEVER_MERGE)
-		return NULL;
-
-	list_for_each_entry_reverse(s, &slab_caches, list) {
-		if (slab_unmergeable(s))
-			continue;
-
-		if (size > s->size)
-			continue;
-
-		if ((flags & SLAB_MERGE_SAME) != (s->flags & SLAB_MERGE_SAME))
-			continue;
-		/*
-		 * Check if alignment is compatible.
-		 * Courtesy of Adrian Drzewiecki
-		 */
-		if ((s->size & ~(align - 1)) != s->size)
-			continue;
-
-		if (s->size - size >= sizeof(void *))
-			continue;
-
-		return s;
-	}
-	return NULL;
-}
-
 static struct kmem_cache *create_cache(const char *name,
 		unsigned int object_size, unsigned int align,
 		slab_flags_t flags, unsigned int useroffset,
diff --git a/mm/slub.c b/mm/slub.c
index ae1e6e635253..435d9ed140e4 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -709,6 +709,104 @@ static inline bool slab_update_freelist(struct kmem_cache *s, struct slab *slab,
 	return false;
 }
 
+/*
+ * Set of flags that will prevent slab merging
+ */
+#define SLUB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
+		SLAB_TRACE | SLAB_TYPESAFE_BY_RCU | SLAB_NOLEAKTRACE | \
+		SLAB_FAILSLAB | SLAB_NO_MERGE | kasan_never_merge())
+
+#define SLUB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
+			 SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
+
+/*
+ * Merge control. If this is set then no merging of slab caches will occur.
+ */
+static bool slub_nomerge = !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT);
+
+static int __init setup_slub_nomerge(char *str)
+{
+	slub_nomerge = true;
+	return 1;
+}
+
+static int __init setup_slub_merge(char *str)
+{
+	slub_nomerge = false;
+	return 1;
+}
+
+__setup_param("slub_nomerge", slub_nomerge, setup_slub_nomerge, 0);
+__setup_param("slub_merge", slub_merge, setup_slub_merge, 0);
+
+/*
+ * Find a mergeable slab cache
+ */
+static inline int slub_unmergeable(struct kmem_cache *s)
+{
+	if (slub_nomerge || (s->flags & SLUB_NEVER_MERGE))
+		return 1;
+
+	if (s->ctor)
+		return 1;
+
+#ifdef CONFIG_HARDENED_USERCOPY
+	if (s->usersize)
+		return 1;
+#endif
+
+	/*
+	 * We may have set a slab to be unmergeable during bootstrap.
+	 */
+	if (s->refcount < 0)
+		return 1;
+
+	return 0;
+}
+
+static struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
+		slab_flags_t flags, const char *name, void (*ctor)(void *))
+{
+	struct kmem_cache *s;
+
+	if (slub_nomerge)
+		return NULL;
+
+	if (ctor)
+		return NULL;
+
+	size = ALIGN(size, sizeof(void *));
+	align = calculate_alignment(flags, align, size);
+	size = ALIGN(size, align);
+	flags = kmem_cache_flags(size, flags, name);
+
+	if (flags & SLUB_NEVER_MERGE)
+		return NULL;
+
+	list_for_each_entry_reverse(s, &slab_caches, list) {
+		if (slub_unmergeable(s))
+			continue;
+
+		if (size > s->size)
+			continue;
+
+		if ((flags & SLUB_MERGE_SAME) != (s->flags & SLUB_MERGE_SAME))
+			continue;
+		/*
+		 * Check if alignment is compatible.
+		 * Courtesy of Adrian Drzewiecki
+		 */
+		if ((s->size & ~(align - 1)) != s->size)
+			continue;
+
+		if (s->size - size >= sizeof(void *))
+			continue;
+
+		return s;
+	}
+	return NULL;
+}
+
 #ifdef CONFIG_SLUB_DEBUG
 static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];
 static DEFINE_SPINLOCK(object_map_lock);
@@ -6679,7 +6777,7 @@ static int sysfs_slab_add(struct kmem_cache *s)
 	int err;
 	const char *name;
 	struct kset *kset = cache_kset(s);
-	int unmergeable = slab_unmergeable(s);
+	int unmergeable = slub_unmergeable(s);
 
 	if (!unmergeable && disable_higher_order_debug &&
 	    (slub_debug & DEBUG_METADATA_FLAGS))
-- 
2.34.1

(Note: the __setup_param() lines moved in this patch must reference the
renamed setup_slub_nomerge()/setup_slub_merge() functions; the originally
posted hunk still referenced setup_slab_nomerge()/setup_slab_merge(),
which no longer exist in slub.c and would fail to build.)