From: Dev Jain
To: akpm@linux-foundation.org, david@redhat.com, catalin.marinas@arm.com, will@kernel.org
Cc: lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, vbabka@suse.cz, rppt@kernel.org, surenb@google.com, mhocko@suse.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, suzuki.poulose@arm.com, steven.price@arm.com, gshan@redhat.com, linux-arm-kernel@lists.infradead.org, yang@os.amperecomputing.com, ryan.roberts@arm.com, anshuman.khandual@arm.com, Dev Jain
Subject: [PATCH v3 1/2] arm64: pageattr: Use pagewalk API to change memory permissions
Date: Fri, 13 Jun 2025 19:13:51 +0530
Message-Id: <20250613134352.65994-2-dev.jain@arm.com>
In-Reply-To: <20250613134352.65994-1-dev.jain@arm.com>
References: <20250613134352.65994-1-dev.jain@arm.com>

arm64 currently changes permissions on vmalloc objects locklessly, via
apply_to_page_range(), which cannot change permissions for block
mappings. Move to the generic pagewalk API instead; this paves the way
for enabling huge mappings by default on kernel space mappings, and
hence for more efficient TLB usage.

However, the pagewalk API currently requires init_mm.mmap_lock to be
held. To avoid turning the mmap_lock into an unnecessary bottleneck for
our use case, extend the API so that it can also be used locklessly,
retaining the existing behaviour for permission changes. In addition,
as noted at [1], KFENCE can manipulate kernel pgtable entries during
softirqs, by calling set_memory_valid() -> __change_memory_common().
Since this is a non-sleepable context, we cannot take the init_mm mmap
lock. Add comments highlighting the conditions under which the lockless
variant may be used: there is no underlying VMA, and the caller has
exclusive control over the range, so no concurrent access is possible.

Since arm64 cannot split kernel live mappings without BBML2, require
that the start and end of a given range lie on block mapping
boundaries. Return -EINVAL if a partial block mapping is detected, and
add a comment in ___change_memory_common() noting that eliminating such
cases is the caller's responsibility.

apply_to_page_range() currently performs all pte-level callbacks while
in lazy MMU mode. Since arm64 batches barriers when modifying kernel
pgtables in lazy MMU mode, we would like to keep benefiting from this
optimisation. Unfortunately, walk_kernel_page_table_range() does not
use lazy MMU mode. However, since the pagewalk framework does not
allocate any memory, we can safely bracket the whole operation in lazy
MMU mode ourselves. Therefore, wrap the call to
walk_kernel_page_table_range() with the lazy MMU helpers.
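For illustration, the resulting helper is shaped roughly as follows (a
simplified sketch of the change in the diff below; the set/clear masks
are handed to the walk callbacks via walk->private):

static int ___change_memory_common(unsigned long start, unsigned long size,
				   pgprot_t set_mask, pgprot_t clear_mask)
{
	struct page_change_data data = {
		.set_mask	= set_mask,
		.clear_mask	= clear_mask,
	};
	int ret;

	/*
	 * The walker allocates no memory, so the whole walk can run in
	 * lazy MMU mode to batch the barriers.
	 */
	arch_enter_lazy_mmu_mode();
	ret = walk_kernel_page_table_range_lockless(start, start + size,
						    &pageattr_ops, NULL, &data);
	arch_leave_lazy_mmu_mode();

	return ret;
}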

[1] https://lore.kernel.org/linux-arm-kernel/89d0ad18-4772-4d8f-ae8a-7c48d26a927e@arm.com/

Signed-off-by: Dev Jain
Reviewed-by: Ryan Roberts
---
 arch/arm64/mm/pageattr.c | 157 +++++++++++++++++++++++++++++++--------
 include/linux/pagewalk.h |   3 +
 mm/pagewalk.c            |  26 +++++++
 3 files changed, 154 insertions(+), 32 deletions(-)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 04d4a8f676db..cfc5279f27a2 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include <linux/pagewalk.h>

 #include
 #include
@@ -20,6 +21,99 @@ struct page_change_data {
 	pgprot_t clear_mask;
 };

+static ptdesc_t set_pageattr_masks(ptdesc_t val, struct mm_walk *walk)
+{
+	struct page_change_data *masks = walk->private;
+
+	val &= ~(pgprot_val(masks->clear_mask));
+	val |= (pgprot_val(masks->set_mask));
+
+	return val;
+}
+
+static int pageattr_pgd_entry(pgd_t *pgd, unsigned long addr,
+			      unsigned long next, struct mm_walk *walk)
+{
+	pgd_t val = pgdp_get(pgd);
+
+	if (pgd_leaf(val)) {
+		if (WARN_ON_ONCE((next - addr) != PGDIR_SIZE))
+			return -EINVAL;
+		val = __pgd(set_pageattr_masks(pgd_val(val), walk));
+		set_pgd(pgd, val);
+		walk->action = ACTION_CONTINUE;
+	}
+
+	return 0;
+}
+
+static int pageattr_p4d_entry(p4d_t *p4d, unsigned long addr,
+			      unsigned long next, struct mm_walk *walk)
+{
+	p4d_t val = p4dp_get(p4d);
+
+	if (p4d_leaf(val)) {
+		if (WARN_ON_ONCE((next - addr) != P4D_SIZE))
+			return -EINVAL;
+		val = __p4d(set_pageattr_masks(p4d_val(val), walk));
+		set_p4d(p4d, val);
+		walk->action = ACTION_CONTINUE;
+	}
+
+	return 0;
+}
+
+static int pageattr_pud_entry(pud_t *pud, unsigned long addr,
+			      unsigned long next, struct mm_walk *walk)
+{
+	pud_t val = pudp_get(pud);
+
+	if (pud_leaf(val)) {
+		if (WARN_ON_ONCE((next - addr) != PUD_SIZE))
+			return -EINVAL;
+		val = __pud(set_pageattr_masks(pud_val(val), walk));
+		set_pud(pud, val);
+		walk->action = ACTION_CONTINUE;
+	}
+
+	return 0;
+}
+
+static int pageattr_pmd_entry(pmd_t *pmd, unsigned long addr,
+			      unsigned long next, struct mm_walk *walk)
+{
+	pmd_t val = pmdp_get(pmd);
+
+	if (pmd_leaf(val)) {
+		if (WARN_ON_ONCE((next - addr) != PMD_SIZE))
+			return -EINVAL;
+		val = __pmd(set_pageattr_masks(pmd_val(val), walk));
+		set_pmd(pmd, val);
+		walk->action = ACTION_CONTINUE;
+	}
+
+	return 0;
+}
+
+static int pageattr_pte_entry(pte_t *pte, unsigned long addr,
+			      unsigned long next, struct mm_walk *walk)
+{
+	pte_t val = __ptep_get(pte);
+
+	val = __pte(set_pageattr_masks(pte_val(val), walk));
+	__set_pte(pte, val);
+
+	return 0;
+}
+
+static const struct mm_walk_ops pageattr_ops = {
+	.pgd_entry	= pageattr_pgd_entry,
+	.p4d_entry	= pageattr_p4d_entry,
+	.pud_entry	= pageattr_pud_entry,
+	.pmd_entry	= pageattr_pmd_entry,
+	.pte_entry	= pageattr_pte_entry,
+};
+
 bool rodata_full __ro_after_init = IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED);

 bool can_set_direct_map(void)
@@ -37,22 +131,7 @@ bool can_set_direct_map(void)
 	       arm64_kfence_can_set_direct_map() || is_realm_world();
 }

-static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
-{
-	struct page_change_data *cdata = data;
-	pte_t pte = __ptep_get(ptep);
-
-	pte = clear_pte_bit(pte, cdata->clear_mask);
-	pte = set_pte_bit(pte, cdata->set_mask);
-
-	__set_pte(ptep, pte);
-	return 0;
-}
-
-/*
- * This function assumes that the range is mapped with PAGE_SIZE pages.
- */
-static int __change_memory_common(unsigned long start, unsigned long size,
+static int ___change_memory_common(unsigned long start, unsigned long size,
				pgprot_t set_mask, pgprot_t clear_mask)
 {
 	struct page_change_data data;
@@ -61,9 +140,28 @@ static int __change_memory_common(unsigned long start, unsigned long size,
 	data.set_mask = set_mask;
 	data.clear_mask = clear_mask;

-	ret = apply_to_page_range(&init_mm, start, size, change_page_range,
-				  &data);
+	arch_enter_lazy_mmu_mode();
+
+	/*
+	 * The caller must ensure that the range we are operating on does not
+	 * partially overlap a block mapping. Any such case should either not
+	 * exist, or must be eliminated by splitting the mapping - which for
+	 * kernel mappings can be done only on BBML2 systems.
+	 *
+	 */
+	ret = walk_kernel_page_table_range_lockless(start, start + size,
+						    &pageattr_ops, NULL, &data);
+	arch_leave_lazy_mmu_mode();
+
+	return ret;
+}

+static int __change_memory_common(unsigned long start, unsigned long size,
+				   pgprot_t set_mask, pgprot_t clear_mask)
+{
+	int ret;
+
+	ret = ___change_memory_common(start, size, set_mask, clear_mask);
 	/*
 	 * If the memory is being made valid without changing any other bits
 	 * then a TLBI isn't required as a non-valid entry cannot be cached in
@@ -71,6 +169,7 @@ static int __change_memory_common(unsigned long start, unsigned long size,
 	 */
 	if (pgprot_val(set_mask) != PTE_VALID || pgprot_val(clear_mask))
 		flush_tlb_kernel_range(start, start + size);
+
 	return ret;
 }

@@ -174,32 +273,26 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)

 int set_direct_map_invalid_noflush(struct page *page)
 {
-	struct page_change_data data = {
-		.set_mask = __pgprot(0),
-		.clear_mask = __pgprot(PTE_VALID),
-	};
+	pgprot_t clear_mask = __pgprot(PTE_VALID);
+	pgprot_t set_mask = __pgprot(0);

 	if (!can_set_direct_map())
 		return 0;

-	return apply_to_page_range(&init_mm,
-				   (unsigned long)page_address(page),
-				   PAGE_SIZE, change_page_range, &data);
+	return ___change_memory_common((unsigned long)page_address(page),
+				       PAGE_SIZE, set_mask, clear_mask);
 }

 int set_direct_map_default_noflush(struct page *page)
 {
-	struct page_change_data data = {
-		.set_mask = __pgprot(PTE_VALID | PTE_WRITE),
-		.clear_mask = __pgprot(PTE_RDONLY),
-	};
+	pgprot_t set_mask = __pgprot(PTE_VALID | PTE_WRITE);
+	pgprot_t clear_mask = __pgprot(PTE_RDONLY);

 	if (!can_set_direct_map())
 		return 0;

-	return apply_to_page_range(&init_mm,
-				   (unsigned long)page_address(page),
-				   PAGE_SIZE, change_page_range, &data);
+	return ___change_memory_common((unsigned long)page_address(page),
+				       PAGE_SIZE, set_mask, clear_mask);
 }

 static int __set_memory_enc_dec(unsigned long addr,
diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
index 8ac2f6d6d2a3..79ab8c754dff 100644
--- a/include/linux/pagewalk.h
+++ b/include/linux/pagewalk.h
@@ -132,6 +132,9 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
 int walk_kernel_page_table_range(unsigned long start, unsigned long end,
 		const struct mm_walk_ops *ops, pgd_t *pgd,
 		void *private);
+int walk_kernel_page_table_range_lockless(unsigned long start,
+		unsigned long end, const struct mm_walk_ops *ops,
+		pgd_t *pgd, void *private);
 int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
 		unsigned long end, const struct mm_walk_ops *ops,
 		void *private);
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index ff5299eca687..7446984b2154 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -632,6 +632,32 @@ int walk_kernel_page_table_range(unsigned long start, unsigned long end,
 	return walk_pgd_range(start, end, &walk);
 }

+/*
+ * Use this function to walk the kernel page tables locklessly. It must be
+ * guaranteed that the caller has exclusive access over the range they are
+ * operating on - i.e. there is no concurrent access - for example, when
+ * changing permissions for vmalloc objects.
+ */
+int walk_kernel_page_table_range_lockless(unsigned long start, unsigned long end,
+		const struct mm_walk_ops *ops, pgd_t *pgd, void *private)
+{
+	struct mm_struct *mm = &init_mm;
+	struct mm_walk walk = {
+		.ops		= ops,
+		.mm		= mm,
+		.pgd		= pgd,
+		.private	= private,
+		.no_vma		= true
+	};
+
+	if (start >= end)
+		return -EINVAL;
+	if (!check_ops_valid(ops))
+		return -EINVAL;
+
+	return walk_pgd_range(start, end, &walk);
+}
+
 /**
  * walk_page_range_debug - walk a range of pagetables not backed by a vma
  * @mm: mm_struct representing the target process of page table walk
-- 
2.30.2

From: Dev Jain
To: akpm@linux-foundation.org, david@redhat.com, catalin.marinas@arm.com, will@kernel.org
Cc: lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, vbabka@suse.cz, rppt@kernel.org, surenb@google.com, mhocko@suse.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, suzuki.poulose@arm.com, steven.price@arm.com, gshan@redhat.com, linux-arm-kernel@lists.infradead.org, yang@os.amperecomputing.com, ryan.roberts@arm.com, anshuman.khandual@arm.com, Dev Jain
Subject: [PATCH v3 2/2] arm64: pageattr: Enable huge-vmalloc permission change
Date: Fri, 13 Jun 2025 19:13:52 +0530
Message-Id: <20250613134352.65994-3-dev.jain@arm.com>
In-Reply-To: <20250613134352.65994-1-dev.jain@arm.com>
References: <20250613134352.65994-1-dev.jain@arm.com>

Commit fcf8dda8cc48 ("arm64: pageattr: Explicitly bail out when changing
permissions for vmalloc_huge mappings") disallowed changing permissions
for vmalloc-huge mappings. The motivation was to enforce an API
requirement: explicitly tell the caller that changing permissions on
block mappings is unsafe, since it may require splitting, which cannot
be handled safely on an arm64 system without BBML2.

This patch partially reverts that commit. Patch 1 enables permission
changes on kernel block mappings, so change_memory_common() can now
also change permissions for vmalloc-huge mappings. A caller that
misuses the API by operating on a partial block mapping receives
-EINVAL from the pagewalk callbacks, matching the old behaviour where
apply_to_page_range() returned -EINVAL for any block mapping; the
difference is that -EINVAL is now restricted to permission changes on
partial block mappings, courtesy of patch 1.
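As a purely illustrative example (the helper and sizes below are made
up; vmalloc_huge() and set_memory_ro() are the existing interfaces), a
caller can now change permissions on a huge-vmalloc region, provided
the range covers its block mappings entirely:

#include <linux/gfp.h>
#include <linux/set_memory.h>
#include <linux/sizes.h>
#include <linux/vmalloc.h>

/*
 * Hypothetical caller: make a huge-vmalloc buffer read-only. The range
 * spans the whole allocation, so any block mappings backing it are
 * changed in place; a range that only partially covered a block
 * mapping would instead get -EINVAL from the pagewalk callbacks.
 */
static int make_hugebuf_ro(void)
{
	size_t size = 16 * SZ_2M;
	void *buf = vmalloc_huge(size, GFP_KERNEL);

	if (!buf)
		return -ENOMEM;

	return set_memory_ro((unsigned long)buf, size >> PAGE_SHIFT);
}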
Signed-off-by: Dev Jain
---
 arch/arm64/mm/pageattr.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index cfc5279f27a2..66676f7f432a 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -195,8 +195,6 @@ static int change_memory_common(unsigned long addr, int numpages,
 	 * we are operating on does not result in such splitting.
 	 *
 	 * Let's restrict ourselves to mappings created by vmalloc (or vmap).
-	 * Disallow VM_ALLOW_HUGE_VMAP mappings to guarantee that only page
-	 * mappings are updated and splitting is never needed.
 	 *
 	 * So check whether the [addr, addr + size) interval is entirely
 	 * covered by precisely one VM area that has the VM_ALLOC flag set.
@@ -204,7 +202,7 @@ static int change_memory_common(unsigned long addr, int numpages,
 	area = find_vm_area((void *)addr);
 	if (!area ||
 	    end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
-	    ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC))
+	    !(area->flags & VM_ALLOC))
 		return -EINVAL;

 	if (!numpages)
-- 
2.30.2