From nobody Sun Feb 8 13:32:34 2026
From: yuan linyu
To: Alexander Potapenko, Marco Elver, Dmitry Vyukov, Andrew Morton, Huacai Chen, WANG Xuerui
CC: yuan linyu
Subject: [PATCH 1/3] LoongArch: kfence: avoid using CONFIG_KFENCE_NUM_OBJECTS
Date: Thu, 18 Dec 2025 09:58:47 +0800
Message-ID: <20251218015849.1414609-2-yuanlinyu@honor.com>
In-Reply-To: <20251218015849.1414609-1-yuanlinyu@honor.com>
References: <20251218015849.1414609-1-yuanlinyu@honor.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Use the common KFENCE macro KFENCE_POOL_SIZE in the definition of
KFENCE_AREA_SIZE instead of open-coding the pool size from
CONFIG_KFENCE_NUM_OBJECTS.

Signed-off-by: yuan linyu
---
 arch/loongarch/include/asm/pgtable.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
index f41a648a3d9e..e9966c9f844f 100644
--- a/arch/loongarch/include/asm/pgtable.h
+++ b/arch/loongarch/include/asm/pgtable.h
@@ -10,6 +10,7 @@
 #define _ASM_PGTABLE_H
 
 #include
+#include
 #include
 #include
 #include
@@ -96,7 +97,7 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
 #define MODULES_END (MODULES_VADDR + SZ_256M)
 
 #ifdef CONFIG_KFENCE
-#define KFENCE_AREA_SIZE (((CONFIG_KFENCE_NUM_OBJECTS + 1) * 2 + 2) * PAGE_SIZE)
+#define KFENCE_AREA_SIZE (KFENCE_POOL_SIZE + (2 * PAGE_SIZE))
 #else
 #define KFENCE_AREA_SIZE 0
 #endif
-- 
2.25.1

From nobody Sun Feb 8 13:32:34 2026
From: yuan linyu
To: Alexander Potapenko, Marco Elver, Dmitry Vyukov, Andrew Morton, Huacai Chen, WANG Xuerui
CC: yuan linyu
Subject: [PATCH 2/3] kfence: create debugfs dir/files unconditionally
Date: Thu, 18 Dec 2025 09:58:48 +0800
Message-ID: <20251218015849.1414609-3-yuanlinyu@honor.com>
In-Reply-To: <20251218015849.1414609-1-yuanlinyu@honor.com>
References: <20251218015849.1414609-1-yuanlinyu@honor.com>

When booting with the parameter kfence.sample_interval=0, KFENCE does
not create its debugfs dir/files. If the user later enables KFENCE by
changing this parameter at runtime, there is no debugfs information
with which to inspect the KFENCE state.

Remove the kfence_enabled check in kfence_debugfs_init() so that the
debugfs entries are created unconditionally.

Signed-off-by: yuan linyu
---
 mm/kfence/core.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 577a1699c553..24c6f1fa5b19 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -782,9 +782,6 @@ static int kfence_debugfs_init(void)
 {
 	struct dentry *kfence_dir;
 
-	if (!READ_ONCE(kfence_enabled))
-		return 0;
-
 	kfence_dir = debugfs_create_dir("kfence", NULL);
 	debugfs_create_file("stats", 0444, kfence_dir, NULL, &stats_fops);
 	debugfs_create_file("objects", 0400, kfence_dir, NULL, &objects_fops);
-- 
2.25.1

From nobody Sun Feb 8 13:32:34 2026
From: yuan linyu
To: Alexander Potapenko, Marco Elver, Dmitry Vyukov, Andrew Morton, Huacai Chen, WANG Xuerui
CC: yuan linyu
Subject: [PATCH 3/3] kfence: allow changing the number of objects via an early parameter
Date: Thu, 18 Dec 2025 09:58:49 +0800
Message-ID: <20251218015849.1414609-4-yuanlinyu@honor.com>
In-Reply-To: <20251218015849.1414609-1-yuanlinyu@honor.com>
References: <20251218015849.1414609-1-yuanlinyu@honor.com>

Changing the size of the KFENCE pool currently requires recompiling
the kernel, which is inconvenient. Add an early boot parameter,
kfence.num_objects, that allows changing the number of KFENCE objects,
so the total pool can be enlarged to increase the chance of catching
failures.

Signed-off-by: yuan linyu
---
 include/linux/kfence.h  |   5 +-
 mm/kfence/core.c        | 122 +++++++++++++++++++++++++++++-----------
 mm/kfence/kfence.h      |   4 +-
 mm/kfence/kfence_test.c |   2 +-
 4 files changed, 96 insertions(+), 37 deletions(-)

diff --git a/include/linux/kfence.h b/include/linux/kfence.h
index 0ad1ddbb8b99..920bcd5649fa 100644
--- a/include/linux/kfence.h
+++ b/include/linux/kfence.h
@@ -24,7 +24,10 @@ extern unsigned long kfence_sample_interval;
  * address to metadata indices; effectively, the very first page serves as an
  * extended guard page, but otherwise has no special purpose.
  */
-#define KFENCE_POOL_SIZE ((CONFIG_KFENCE_NUM_OBJECTS + 1) * 2 * PAGE_SIZE)
+extern unsigned int __kfence_pool_size;
+#define KFENCE_POOL_SIZE (__kfence_pool_size)
+extern unsigned int __kfence_num_objects;
+#define KFENCE_NUM_OBJECTS (__kfence_num_objects)
 extern char *__kfence_pool;
 
 DECLARE_STATIC_KEY_FALSE(kfence_allocation_key);
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 24c6f1fa5b19..82425da5f27c 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -132,6 +132,31 @@ struct kfence_metadata *kfence_metadata __read_mostly;
  */
 static struct kfence_metadata *kfence_metadata_init __read_mostly;
 
+/* Allow changing the number of objects from the cmdline. */
+#define KFENCE_MIN_NUM_OBJECTS 1
+#define KFENCE_MAX_NUM_OBJECTS 65535
+unsigned int __kfence_num_objects __read_mostly = CONFIG_KFENCE_NUM_OBJECTS;
+EXPORT_SYMBOL(__kfence_num_objects); /* Export for test modules. */
+static unsigned int __kfence_pool_pages __read_mostly = (CONFIG_KFENCE_NUM_OBJECTS + 1) * 2;
+unsigned int __kfence_pool_size __read_mostly = (CONFIG_KFENCE_NUM_OBJECTS + 1) * 2 * PAGE_SIZE;
+EXPORT_SYMBOL(__kfence_pool_size); /* Export for lkdtm module. */
+
+static int __init early_parse_kfence_num_objects(char *buf)
+{
+	unsigned int num;
+	int ret = kstrtouint(buf, 10, &num);
+
+	if (ret < 0)
+		return ret;
+
+	__kfence_num_objects = clamp(num, KFENCE_MIN_NUM_OBJECTS, KFENCE_MAX_NUM_OBJECTS);
+	__kfence_pool_pages = (__kfence_num_objects + 1) * 2;
+	__kfence_pool_size = __kfence_pool_pages * PAGE_SIZE;
+
+	return 0;
+}
+early_param("kfence.num_objects", early_parse_kfence_num_objects);
+
 /* Freelist with available objects. */
 static struct list_head kfence_freelist = LIST_HEAD_INIT(kfence_freelist);
 static DEFINE_RAW_SPINLOCK(kfence_freelist_lock); /* Lock protecting freelist. */
@@ -155,12 +180,13 @@ atomic_t kfence_allocation_gate = ATOMIC_INIT(1);
  *
  * P(alloc_traces) = (1 - e^(-HNUM * (alloc_traces / SIZE)) ^ HNUM
  */
+static unsigned int kfence_alloc_covered_order __read_mostly;
+static unsigned int kfence_alloc_covered_mask __read_mostly;
+static atomic_t *alloc_covered __read_mostly;
 #define ALLOC_COVERED_HNUM	2
-#define ALLOC_COVERED_ORDER (const_ilog2(CONFIG_KFENCE_NUM_OBJECTS) + 2)
-#define ALLOC_COVERED_SIZE (1 << ALLOC_COVERED_ORDER)
-#define ALLOC_COVERED_HNEXT(h) hash_32(h, ALLOC_COVERED_ORDER)
-#define ALLOC_COVERED_MASK (ALLOC_COVERED_SIZE - 1)
-static atomic_t alloc_covered[ALLOC_COVERED_SIZE];
+#define ALLOC_COVERED_HNEXT(h) hash_32(h, kfence_alloc_covered_order)
+#define ALLOC_COVERED_MASK (kfence_alloc_covered_mask)
+#define KFENCE_COVERED_SIZE (sizeof(atomic_t) * (1 << kfence_alloc_covered_order))
 
 /* Stack depth used to determine uniqueness of an allocation. */
 #define UNIQUE_ALLOC_STACK_DEPTH ((size_t)8)
@@ -200,7 +226,7 @@ static_assert(ARRAY_SIZE(counter_names) == KFENCE_COUNTER_COUNT);
 
 static inline bool should_skip_covered(void)
 {
-	unsigned long thresh = (CONFIG_KFENCE_NUM_OBJECTS * kfence_skip_covered_thresh) / 100;
+	unsigned long thresh = (__kfence_num_objects * kfence_skip_covered_thresh) / 100;
 
 	return atomic_long_read(&counters[KFENCE_COUNTER_ALLOCATED]) > thresh;
 }
@@ -262,7 +288,7 @@ static inline unsigned long metadata_to_pageaddr(const struct kfence_metadata *m
 
 	/* Only call with a pointer into kfence_metadata. */
 	if (KFENCE_WARN_ON(meta < kfence_metadata ||
-			   meta >= kfence_metadata + CONFIG_KFENCE_NUM_OBJECTS))
+			   meta >= kfence_metadata + __kfence_num_objects))
 		return 0;
 
 	/*
@@ -612,7 +638,7 @@ static unsigned long kfence_init_pool(void)
	 * fast-path in SLUB, and therefore need to ensure kfree() correctly
	 * enters __slab_free() slow-path.
	 */
-	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
+	for (i = 0; i < __kfence_pool_pages; i++) {
 		struct page *page;
 
 		if (!i || (i % 2))
@@ -640,7 +666,7 @@ static unsigned long kfence_init_pool(void)
 		addr += PAGE_SIZE;
 	}
 
-	for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
+	for (i = 0; i < __kfence_num_objects; i++) {
 		struct kfence_metadata *meta = &kfence_metadata_init[i];
 
 		/* Initialize metadata. */
@@ -666,7 +692,7 @@ static unsigned long kfence_init_pool(void)
 	return 0;
 
 reset_slab:
-	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
+	for (i = 0; i < __kfence_pool_pages; i++) {
 		struct page *page;
 
 		if (!i || (i % 2))
@@ -710,7 +736,7 @@ static bool __init kfence_init_pool_early(void)
	 * fails for the first page, and therefore expect addr==__kfence_pool in
	 * most failure cases.
	 */
-	memblock_free_late(__pa(addr), KFENCE_POOL_SIZE - (addr - (unsigned long)__kfence_pool));
+	memblock_free_late(__pa(addr), __kfence_pool_size - (addr - (unsigned long)__kfence_pool));
 	__kfence_pool = NULL;
 
 	memblock_free_late(__pa(kfence_metadata_init), KFENCE_METADATA_SIZE);
@@ -740,7 +766,7 @@ DEFINE_SHOW_ATTRIBUTE(stats);
  */
 static void *start_object(struct seq_file *seq, loff_t *pos)
 {
-	if (*pos < CONFIG_KFENCE_NUM_OBJECTS)
+	if (*pos < __kfence_num_objects)
 		return (void *)((long)*pos + 1);
 	return NULL;
 }
@@ -752,7 +778,7 @@ static void stop_object(struct seq_file *seq, void *v)
 static void *next_object(struct seq_file *seq, void *v, loff_t *pos)
 {
 	++*pos;
-	if (*pos < CONFIG_KFENCE_NUM_OBJECTS)
+	if (*pos < __kfence_num_objects)
 		return (void *)((long)*pos + 1);
 	return NULL;
 }
@@ -796,7 +822,7 @@ static void kfence_check_all_canary(void)
 {
 	int i;
 
-	for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
+	for (i = 0; i < __kfence_num_objects; i++) {
 		struct kfence_metadata *meta = &kfence_metadata[i];
 
 		if (kfence_obj_allocated(meta))
@@ -891,7 +917,7 @@ void __init kfence_alloc_pool_and_metadata(void)
	 * re-allocate the memory pool.
	 */
 	if (!__kfence_pool)
-		__kfence_pool = memblock_alloc(KFENCE_POOL_SIZE, PAGE_SIZE);
+		__kfence_pool = memblock_alloc(__kfence_pool_size, PAGE_SIZE);
 
 	if (!__kfence_pool) {
 		pr_err("failed to allocate pool\n");
@@ -900,11 +926,23 @@ void __init kfence_alloc_pool_and_metadata(void)
 
 	/* The memory allocated by memblock has been zeroed out. */
 	kfence_metadata_init = memblock_alloc(KFENCE_METADATA_SIZE, PAGE_SIZE);
-	if (!kfence_metadata_init) {
-		pr_err("failed to allocate metadata\n");
-		memblock_free(__kfence_pool, KFENCE_POOL_SIZE);
-		__kfence_pool = NULL;
-	}
+	if (!kfence_metadata_init)
+		goto fail_pool;
+
+	kfence_alloc_covered_order = ilog2(__kfence_num_objects) + 2;
+	kfence_alloc_covered_mask = (1 << kfence_alloc_covered_order) - 1;
+	alloc_covered = memblock_alloc(KFENCE_COVERED_SIZE, PAGE_SIZE);
+	if (alloc_covered)
+		return;
+
+	pr_err("failed to allocate covered\n");
+	memblock_free(kfence_metadata_init, KFENCE_METADATA_SIZE);
+	kfence_metadata_init = NULL;
+
+fail_pool:
+	pr_err("failed to allocate metadata\n");
+	memblock_free(__kfence_pool, __kfence_pool_size);
+	__kfence_pool = NULL;
 }
 
 static void kfence_init_enable(void)
@@ -927,9 +965,9 @@ static void kfence_init_enable(void)
 	WRITE_ONCE(kfence_enabled, true);
 	queue_delayed_work(system_unbound_wq, &kfence_timer, 0);
 
-	pr_info("initialized - using %lu bytes for %d objects at 0x%p-0x%p\n", KFENCE_POOL_SIZE,
-		CONFIG_KFENCE_NUM_OBJECTS, (void *)__kfence_pool,
-		(void *)(__kfence_pool + KFENCE_POOL_SIZE));
+	pr_info("initialized - using %u bytes for %d objects at 0x%p-0x%p\n", __kfence_pool_size,
+		__kfence_num_objects, (void *)__kfence_pool,
+		(void *)(__kfence_pool + __kfence_pool_size));
 }
 
 void __init kfence_init(void)
@@ -950,41 +988,53 @@ void __init kfence_init(void)
 
 static int kfence_init_late(void)
 {
-	const unsigned long nr_pages_pool = KFENCE_POOL_SIZE / PAGE_SIZE;
-	const unsigned long nr_pages_meta = KFENCE_METADATA_SIZE / PAGE_SIZE;
+	unsigned long nr_pages_meta = KFENCE_METADATA_SIZE / PAGE_SIZE;
 	unsigned long addr = (unsigned long)__kfence_pool;
-	unsigned long free_size = KFENCE_POOL_SIZE;
+	unsigned long free_size = __kfence_pool_size;
+	unsigned long nr_pages_covered, covered_size;
 	int err = -ENOMEM;
 
+	kfence_alloc_covered_order = ilog2(__kfence_num_objects) + 2;
+	kfence_alloc_covered_mask = (1 << kfence_alloc_covered_order) - 1;
+	covered_size = PAGE_ALIGN(KFENCE_COVERED_SIZE);
+	nr_pages_covered = (covered_size / PAGE_SIZE);
 #ifdef CONFIG_CONTIG_ALLOC
 	struct page *pages;
 
-	pages = alloc_contig_pages(nr_pages_pool, GFP_KERNEL, first_online_node,
+	pages = alloc_contig_pages(__kfence_pool_pages, GFP_KERNEL, first_online_node,
				   NULL);
 	if (!pages)
 		return -ENOMEM;
 
 	__kfence_pool = page_to_virt(pages);
+	pages = alloc_contig_pages(nr_pages_covered, GFP_KERNEL, first_online_node,
+				   NULL);
+	if (!pages)
+		goto free_pool;
+	alloc_covered = page_to_virt(pages);
 	pages = alloc_contig_pages(nr_pages_meta, GFP_KERNEL, first_online_node,
				   NULL);
 	if (pages)
 		kfence_metadata_init = page_to_virt(pages);
 #else
-	if (nr_pages_pool > MAX_ORDER_NR_PAGES ||
+	if (__kfence_pool_pages > MAX_ORDER_NR_PAGES ||
	    nr_pages_meta > MAX_ORDER_NR_PAGES) {
 		pr_warn("KFENCE_NUM_OBJECTS too large for buddy allocator\n");
 		return -EINVAL;
 	}
 
-	__kfence_pool = alloc_pages_exact(KFENCE_POOL_SIZE, GFP_KERNEL);
+	__kfence_pool = alloc_pages_exact(__kfence_pool_size, GFP_KERNEL);
 	if (!__kfence_pool)
 		return -ENOMEM;
 
+	alloc_covered = alloc_pages_exact(covered_size, GFP_KERNEL);
+	if (!alloc_covered)
+		goto free_pool;
 	kfence_metadata_init = alloc_pages_exact(KFENCE_METADATA_SIZE, GFP_KERNEL);
 #endif
 
 	if (!kfence_metadata_init)
-		goto free_pool;
+		goto free_cover;
 
 	memzero_explicit(kfence_metadata_init, KFENCE_METADATA_SIZE);
 	addr = kfence_init_pool();
@@ -995,22 +1045,28 @@ static int kfence_init_late(void)
 	}
 
 	pr_err("%s failed\n", __func__);
-	free_size = KFENCE_POOL_SIZE - (addr - (unsigned long)__kfence_pool);
+	free_size = __kfence_pool_size - (addr - (unsigned long)__kfence_pool);
 	err = -EBUSY;
 
 #ifdef CONFIG_CONTIG_ALLOC
 	free_contig_range(page_to_pfn(virt_to_page((void *)kfence_metadata_init)),
			  nr_pages_meta);
+free_cover:
+	free_contig_range(page_to_pfn(virt_to_page((void *)alloc_covered)),
+			  nr_pages_covered);
 free_pool:
 	free_contig_range(page_to_pfn(virt_to_page((void *)addr)),
			  free_size / PAGE_SIZE);
 #else
 	free_pages_exact((void *)kfence_metadata_init, KFENCE_METADATA_SIZE);
+free_cover:
+	free_pages_exact((void *)alloc_covered, covered_size);
 free_pool:
 	free_pages_exact((void *)addr, free_size);
 #endif
 
 	kfence_metadata_init = NULL;
+	alloc_covered = NULL;
 	__kfence_pool = NULL;
 	return err;
 }
@@ -1036,7 +1092,7 @@ void kfence_shutdown_cache(struct kmem_cache *s)
 	if (!smp_load_acquire(&kfence_metadata))
 		return;
 
-	for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
+	for (i = 0; i < __kfence_num_objects; i++) {
 		bool in_use;
 
 		meta = &kfence_metadata[i];
@@ -1074,7 +1130,7 @@ void kfence_shutdown_cache(struct kmem_cache *s)
 		}
 	}
 
-	for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
+	for (i = 0; i < __kfence_num_objects; i++) {
 		meta = &kfence_metadata[i];
 
 		/* See above. */
diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
index dfba5ea06b01..dc3abb27c632 100644
--- a/mm/kfence/kfence.h
+++ b/mm/kfence/kfence.h
@@ -104,7 +104,7 @@ struct kfence_metadata {
 };
 
 #define KFENCE_METADATA_SIZE PAGE_ALIGN(sizeof(struct kfence_metadata) * \
-					CONFIG_KFENCE_NUM_OBJECTS)
+					__kfence_num_objects)
 
 extern struct kfence_metadata *kfence_metadata;
 
@@ -123,7 +123,7 @@ static inline struct kfence_metadata *addr_to_metadata(unsigned long addr)
	 * error.
	 */
 	index = (addr - (unsigned long)__kfence_pool) / (PAGE_SIZE * 2) - 1;
-	if (index < 0 || index >= CONFIG_KFENCE_NUM_OBJECTS)
+	if (index < 0 || index >= __kfence_num_objects)
 		return NULL;
 
 	return &kfence_metadata[index];
diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c
index 00034e37bc9f..00a51aa4bad9 100644
--- a/mm/kfence/kfence_test.c
+++ b/mm/kfence/kfence_test.c
@@ -641,7 +641,7 @@ static void test_gfpzero(struct kunit *test)
			break;
 		test_free(buf2);
 
-		if (kthread_should_stop() || (i == CONFIG_KFENCE_NUM_OBJECTS)) {
+		if (kthread_should_stop() || (i == __kfence_num_objects)) {
 			kunit_warn(test, "giving up ... cannot get same object back\n");
 			return;
 		}
-- 
2.25.1