From nobody Fri Dec 19 17:18:21 2025
From: Ryan Roberts <ryan.roberts@arm.com>
To: catalin.marinas@arm.com, will@kernel.org, yang@os.amperecomputing.com,
	david@redhat.com, ardb@kernel.org, dev.jain@arm.com,
	scott@os.amperecomputing.com, cl@gentwo.org
Cc: Ryan Roberts <ryan.roberts@arm.com>, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, Guenter Roeck
Subject: [PATCH v2 1/3] arm64: mm: Don't sleep in split_kernel_leaf_mapping() when in atomic context
Date: Thu, 6 Nov 2025 16:09:41 +0000
Message-ID: <20251106160945.3182799-2-ryan.roberts@arm.com>
In-Reply-To: <20251106160945.3182799-1-ryan.roberts@arm.com>
References: <20251106160945.3182799-1-ryan.roberts@arm.com>

It has been reported that split_kernel_leaf_mapping() tries to sleep in
non-sleepable context. It does this when acquiring the pgtable_split_lock
mutex with either CONFIG_DEBUG_PAGEALLOC or CONFIG_KFENCE enabled; both
features change linear map permissions from softirq context during memory
allocation and/or freeing. All other paths into this function are called
from sleepable context and so are safe.

But it turns out that the memory whose permissions these two features may
attempt to modify is always pte-mapped, so there is no need to attempt to
split the mapping. So let's exit early in these cases and avoid attempting
to take the mutex.

There is one wrinkle to this approach: late-initialized kfence allocates
its pool from the buddy allocator, and that memory may be block-mapped. So
we must hook that allocation and convert it to pte mappings up front.
Previously this was done as a side effect of kfence protecting all the
individual pages in its pool at init time, but this no longer works due to
the added early exit path in split_kernel_leaf_mapping(). So instead, do
this via the existing arch_kfence_init_pool() arch hook, and reuse the
existing linear_map_split_to_ptes() infrastructure.
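As a minimal sketch (editor's illustration, not the applied diff; it omits
the BBML2 check and borrows names from the patch below), the shape of the
fix is:

	/*
	 * Illustration only: bail out before the mutex when the region is
	 * known to be pte-mapped, since DEBUG_PAGEALLOC/KFENCE callers may
	 * arrive in softirq context where sleeping is forbidden.
	 */
	int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
	{
		if (force_pte_mapping() || is_kfence_address((void *)start))
			return 0;	/* already pte-mapped; mutex never taken */

		mutex_lock(&pgtable_split_lock);	/* may sleep: safe on all other paths */
		/* ... split any block mappings covering [start, end) ... */
		mutex_unlock(&pgtable_split_lock);
		return 0;
	}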
Closes: https://lore.kernel.org/all/f24b9032-0ec9-47b1-8b95-c0eeac7a31c5@roeck-us.net/
Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
Tested-by: Guenter Roeck
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: David Hildenbrand (Red Hat)
Reviewed-by: Yang Shi
---
 arch/arm64/include/asm/kfence.h |  3 +-
 arch/arm64/mm/mmu.c             | 92 +++++++++++++++++++++++----------
 2 files changed, 67 insertions(+), 28 deletions(-)

diff --git a/arch/arm64/include/asm/kfence.h b/arch/arm64/include/asm/kfence.h
index a81937fae9f6..21dbc9dda747 100644
--- a/arch/arm64/include/asm/kfence.h
+++ b/arch/arm64/include/asm/kfence.h
@@ -10,8 +10,6 @@
 
 #include <asm/set_memory.h>
 
-static inline bool arch_kfence_init_pool(void) { return true; }
-
 static inline bool kfence_protect_page(unsigned long addr, bool protect)
 {
 	set_memory_valid(addr, 1, !protect);
@@ -25,6 +23,7 @@ static inline bool arm64_kfence_can_set_direct_map(void)
 {
 	return !kfence_early_init;
 }
+bool arch_kfence_init_pool(void);
 #else /* CONFIG_KFENCE */
 static inline bool arm64_kfence_can_set_direct_map(void) { return false; }
 #endif /* CONFIG_KFENCE */

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index b8d37eb037fc..a364ac2c9c61 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -708,6 +708,16 @@ static int split_kernel_leaf_mapping_locked(unsigned long addr)
 	return ret;
 }
 
+static inline bool force_pte_mapping(void)
+{
+	bool bbml2 = system_capabilities_finalized() ?
+		system_supports_bbml2_noabort() : cpu_supports_bbml2_noabort();
+
+	return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
+			   is_realm_world())) ||
+	       debug_pagealloc_enabled();
+}
+
 static DEFINE_MUTEX(pgtable_split_lock);
 
 int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
@@ -723,6 +733,16 @@ int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
 	if (!system_supports_bbml2_noabort())
 		return 0;
 
+	/*
+	 * If the region is within a pte-mapped area, there is no need to try to
+	 * split. Additionally, CONFIG_DEBUG_PAGEALLOC and CONFIG_KFENCE may
+	 * change permissions from atomic context so for those cases (which are
+	 * always pte-mapped), we must not go any further because taking the
+	 * mutex below may sleep.
+	 */
+	if (force_pte_mapping() || is_kfence_address((void *)start))
+		return 0;
+
 	/*
 	 * Ensure start and end are at least page-aligned since this is the
 	 * finest granularity we can split to.
@@ -758,30 +778,30 @@ int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
 	return ret;
 }
 
-static int __init split_to_ptes_pud_entry(pud_t *pudp, unsigned long addr,
-					  unsigned long next,
-					  struct mm_walk *walk)
+static int split_to_ptes_pud_entry(pud_t *pudp, unsigned long addr,
+				   unsigned long next, struct mm_walk *walk)
 {
+	gfp_t gfp = *(gfp_t *)walk->private;
 	pud_t pud = pudp_get(pudp);
 	int ret = 0;
 
 	if (pud_leaf(pud))
-		ret = split_pud(pudp, pud, GFP_ATOMIC, false);
+		ret = split_pud(pudp, pud, gfp, false);
 
 	return ret;
 }
 
-static int __init split_to_ptes_pmd_entry(pmd_t *pmdp, unsigned long addr,
-					  unsigned long next,
-					  struct mm_walk *walk)
+static int split_to_ptes_pmd_entry(pmd_t *pmdp, unsigned long addr,
+				   unsigned long next, struct mm_walk *walk)
 {
+	gfp_t gfp = *(gfp_t *)walk->private;
 	pmd_t pmd = pmdp_get(pmdp);
 	int ret = 0;
 
 	if (pmd_leaf(pmd)) {
 		if (pmd_cont(pmd))
 			split_contpmd(pmdp);
-		ret = split_pmd(pmdp, pmd, GFP_ATOMIC, false);
+		ret = split_pmd(pmdp, pmd, gfp, false);
 
 		/*
 		 * We have split the pmd directly to ptes so there is no need to
@@ -793,9 +813,8 @@ static int __init split_to_ptes_pmd_entry(pmd_t *pmdp, unsigned long addr,
 	return ret;
 }
 
-static int __init split_to_ptes_pte_entry(pte_t *ptep, unsigned long addr,
-					  unsigned long next,
-					  struct mm_walk *walk)
+static int split_to_ptes_pte_entry(pte_t *ptep, unsigned long addr,
+				   unsigned long next, struct mm_walk *walk)
 {
 	pte_t pte = __ptep_get(ptep);
 
@@ -805,12 +824,18 @@ static int __init split_to_ptes_pte_entry(pte_t *ptep, unsigned long addr,
 	return 0;
 }
 
-static const struct mm_walk_ops split_to_ptes_ops __initconst = {
+static const struct mm_walk_ops split_to_ptes_ops = {
 	.pud_entry	= split_to_ptes_pud_entry,
 	.pmd_entry	= split_to_ptes_pmd_entry,
 	.pte_entry	= split_to_ptes_pte_entry,
 };
 
+static int range_split_to_ptes(unsigned long start, unsigned long end, gfp_t gfp)
+{
+	return walk_kernel_page_table_range_lockless(start, end,
+						     &split_to_ptes_ops, NULL, &gfp);
+}
+
 static bool linear_map_requires_bbml2 __initdata;
 
 u32 idmap_kpti_bbml2_flag;
@@ -847,11 +872,9 @@ static int __init linear_map_split_to_ptes(void *__unused)
 	 * PTE. The kernel alias remains static throughout runtime so
	 * can continue to be safely mapped with large mappings.
	 */
-	ret = walk_kernel_page_table_range_lockless(lstart, kstart,
-						    &split_to_ptes_ops, NULL, NULL);
+	ret = range_split_to_ptes(lstart, kstart, GFP_ATOMIC);
 	if (!ret)
-		ret = walk_kernel_page_table_range_lockless(kend, lend,
-							    &split_to_ptes_ops, NULL, NULL);
+		ret = range_split_to_ptes(kend, lend, GFP_ATOMIC);
 	if (ret)
 		panic("Failed to split linear map\n");
 	flush_tlb_kernel_range(lstart, lend);
@@ -1002,6 +1025,33 @@ static void __init arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp)
 	memblock_clear_nomap(kfence_pool, KFENCE_POOL_SIZE);
 	__kfence_pool = phys_to_virt(kfence_pool);
 }
+
+bool arch_kfence_init_pool(void)
+{
+	unsigned long start = (unsigned long)__kfence_pool;
+	unsigned long end = start + KFENCE_POOL_SIZE;
+	int ret;
+
+	/* Exit early if we know the linear map is already pte-mapped. */
+	if (!system_supports_bbml2_noabort() || force_pte_mapping())
+		return true;
+
+	/* Kfence pool is already pte-mapped for the early init case. */
+	if (kfence_early_init)
+		return true;
+
+	mutex_lock(&pgtable_split_lock);
+	ret = range_split_to_ptes(start, end, GFP_PGTABLE_KERNEL);
+	mutex_unlock(&pgtable_split_lock);
+
+	/*
+	 * Since the system supports bbml2_noabort, tlb invalidation is not
+	 * required here; the pgtable mappings have been split to pte but larger
+	 * entries may safely linger in the TLB.
+	 */
+
+	return !ret;
+}
 #else /* CONFIG_KFENCE */
 
 static inline phys_addr_t arm64_kfence_alloc_pool(void) { return 0; }
@@ -1009,16 +1059,6 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
 
 #endif /* CONFIG_KFENCE */
 
-static inline bool force_pte_mapping(void)
-{
-	bool bbml2 = system_capabilities_finalized() ?
-		system_supports_bbml2_noabort() : cpu_supports_bbml2_noabort();
-
-	return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
-			   is_realm_world())) ||
-	       debug_pagealloc_enabled();
-}
-
 static void __init map_mem(pgd_t *pgdp)
 {
 	static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
-- 
2.43.0

From nobody Fri Dec 19 17:18:21 2025
From: Ryan Roberts <ryan.roberts@arm.com>
To: catalin.marinas@arm.com, will@kernel.org, yang@os.amperecomputing.com,
	david@redhat.com, ardb@kernel.org, dev.jain@arm.com,
	scott@os.amperecomputing.com, cl@gentwo.org
Cc: Ryan Roberts <ryan.roberts@arm.com>, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v2 2/3] arm64: mm: Optimize range_split_to_ptes()
Date: Thu, 6 Nov 2025 16:09:42 +0000
Message-ID: <20251106160945.3182799-3-ryan.roberts@arm.com>
In-Reply-To: <20251106160945.3182799-1-ryan.roberts@arm.com>
References: <20251106160945.3182799-1-ryan.roberts@arm.com>

Enter lazy_mmu mode while splitting a range of memory to pte mappings.
This causes the barriers, which would otherwise be emitted after every
pte (and pmd/pud) write, to be deferred until lazy_mmu mode is exited.
For large systems, this is expected to significantly speed up the
fallback to pte-mapping the linear map for the case where the boot CPU
has BBML2_NOABORT but the secondary CPUs do not. I haven't directly
measured it, but this is equivalent to commit 1fcb7cea8a5f ("arm64: mm:
Batch dsb and isb when populating pgtables").

Note that for the path from arch_kfence_init_pool(), we may sleep while
allocating memory inside lazy_mmu mode. Sleeping inside lazy_mmu mode is
not allowed by generic code, but we know that the arm64 implementation
is sleep-safe. So this is OK and follows the same pattern already used
by split_kernel_leaf_mapping().
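As a rough sketch of the cost model (editor's illustration; write_pte(),
dsb_isb(), enter_lazy_mmu() and leave_lazy_mmu() are hypothetical
stand-ins, not real kernel APIs):

	/* Without lazy_mmu: a barrier pair follows every pgtable write. */
	for (i = 0; i < n; i++) {
		write_pte(i);		/* hypothetical pte store */
		dsb_isb();		/* emitted n times */
	}

	/* With lazy_mmu: the barriers are deferred and emitted once. */
	enter_lazy_mmu();
	for (i = 0; i < n; i++)
		write_pte(i);		/* no barrier per write */
	leave_lazy_mmu();		/* single dsb/isb for all n writes */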
charset="utf-8" Enter lazy_mmu mode while splitting a range of memory to pte mappings. This causes barriers, which would otherwise be emitted after every pte (and pmd/pud) write, to be deferred until exiting lazy_mmu mode. For large systems, this is expected to significantly speed up fallback to pte-mapping the linear map for the case where the boot CPU has BBML2_NOABORT, but secondary CPUs do not. I haven't directly measured it, but this is equivalent to commit 1fcb7cea8a5f ("arm64: mm: Batch dsb and isb when populating pgtables"). Note that for the path from arch_kfence_init_pool(), we may sleep while allocating memory inside the lazy_mmu mode. Sleeping is not allowed by generic code inside lazy_mmu, but we know that the arm64 implementation is sleep-safe. So this is ok and follows the same pattern already used by split_kernel_leaf_mapping(). Signed-off-by: Ryan Roberts Reviewed-by: Yang Shi --- arch/arm64/mm/mmu.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index a364ac2c9c61..652bb8c14035 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -832,8 +832,14 @@ static const struct mm_walk_ops split_to_ptes_ops =3D { =20 static int range_split_to_ptes(unsigned long start, unsigned long end, gfp= _t gfp) { - return walk_kernel_page_table_range_lockless(start, end, + int ret; + + arch_enter_lazy_mmu_mode(); + ret =3D walk_kernel_page_table_range_lockless(start, end, &split_to_ptes_ops, NULL, &gfp); + arch_leave_lazy_mmu_mode(); + + return ret; } =20 static bool linear_map_requires_bbml2 __initdata; --=20 2.43.0 From nobody Fri Dec 19 17:18:21 2025 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 0753534CFBC for ; Thu, 6 Nov 2025 16:10:01 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.140.110.172 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1762445402; cv=none; b=ewXggCXEuWj7sImgFJRWq+J1RPP/7T+YOATAFN/t7hhQxnKahwF7IJ0QjwP8o97EX2nzeTX6TvqrVZ8OrXppM9/S8gnKZlGfD1uhunbQOFIKi4jLXT1eWIgo58DPqSTZQ9uWHPiOaT19HNZpCm8BTFFhqdmogeXjn/pBwF2KUM8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1762445402; c=relaxed/simple; bh=zl7sUm3Roi+xYSTdbpG6i/QWcWRVDBqGiezbcNG0vrQ=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=D3EXI3FZMoEJ2NHy18Noay7X3N1YPWTtYJw6I72jmB3o0EeHxps248aKAh1qGhqdShp0MLUUdekCyuTPdipwguZJd5OpKPFQOCwgrGbXjuQZxtBntRiRJ6np38WVMfLmJMNvTJhtxOWFeIdVyOldN4acHPuauLAg0eHfL4N+NGo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com; spf=pass smtp.mailfrom=arm.com; arc=none smtp.client-ip=217.140.110.172 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D8D7C15A1; Thu, 6 Nov 2025 08:09:52 -0800 (PST) Received: from e125769.cambridge.arm.com (e125769.cambridge.arm.com [10.1.196.27]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 2E69B3F66E; Thu, 6 Nov 2025 08:09:59 -0800 (PST) From: Ryan Roberts To: catalin.marinas@arm.com, will@kernel.org, yang@os.amperecomputing.com, david@redhat.com, ardb@kernel.org, dev.jain@arm.com, scott@os.amperecomputing.com, cl@gentwo.org Cc: Ryan Roberts , 
Suggested-by: David Hildenbrand (Red Hat)
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: David Hildenbrand (Red Hat)
Reviewed-by: Yang Shi
---
 arch/arm64/mm/mmu.c | 43 +++++++++++++++++++++++--------------------
 1 file changed, 23 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 652bb8c14035..2ba01dc8ef82 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -710,12 +710,26 @@ static int split_kernel_leaf_mapping_locked(unsigned long addr)
 
 static inline bool force_pte_mapping(void)
 {
-	bool bbml2 = system_capabilities_finalized() ?
+	const bool bbml2 = system_capabilities_finalized() ?
 		system_supports_bbml2_noabort() : cpu_supports_bbml2_noabort();
 
-	return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
-			   is_realm_world())) ||
-	       debug_pagealloc_enabled();
+	if (debug_pagealloc_enabled())
+		return true;
+	if (bbml2)
+		return false;
+	return rodata_full || arm64_kfence_can_set_direct_map() || is_realm_world();
+}
+
+static inline bool split_leaf_mapping_possible(void)
+{
+	/*
+	 * !BBML2_NOABORT systems should never run into scenarios where we would
+	 * have to split. So exit early and let calling code detect it and raise
+	 * a warning.
+	 */
+	if (!system_supports_bbml2_noabort())
+		return false;
+	return !force_pte_mapping();
 }
 
 static DEFINE_MUTEX(pgtable_split_lock);
@@ -725,22 +739,11 @@ int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
 	int ret;
 
 	/*
-	 * !BBML2_NOABORT systems should not be trying to change permissions on
-	 * anything that is not pte-mapped in the first place. Just return early
-	 * and let the permission change code raise a warning if not already
-	 * pte-mapped.
-	 */
-	if (!system_supports_bbml2_noabort())
-		return 0;
-
-	/*
-	 * If the region is within a pte-mapped area, there is no need to try to
-	 * split. Additionally, CONFIG_DEBUG_PAGEALLOC and CONFIG_KFENCE may
-	 * change permissions from atomic context so for those cases (which are
-	 * always pte-mapped), we must not go any further because taking the
-	 * mutex below may sleep.
+	 * Exit early if the region is within a pte-mapped area or if we can't
+	 * split. For the latter case, the permission change code will raise a
+	 * warning if not already pte-mapped.
 	 */
-	if (force_pte_mapping() || is_kfence_address((void *)start))
+	if (!split_leaf_mapping_possible() || is_kfence_address((void *)start))
 		return 0;
 
 	/*
@@ -1039,7 +1042,7 @@ bool arch_kfence_init_pool(void)
 	int ret;
 
 	/* Exit early if we know the linear map is already pte-mapped. */
-	if (!system_supports_bbml2_noabort() || force_pte_mapping())
+	if (!split_leaf_mapping_possible())
 		return true;
 
 	/* Kfence pool is already pte-mapped for the early init case. */
-- 
2.43.0