arm64 uses apply_to_page_range() to change permissions for kernel vmalloc
mappings, but that function does not support block mappings: it changes
permissions until it encounters a block mapping, then bails out with a
warning, leaving the range partially updated. Since there are no reports of
this triggering, there are presumably no current callers doing a
vmalloc_huge() followed by a partial permission change. But this is a footgun
waiting to go off, so detect it early and avoid the possibility of leaving
permissions in an intermediate state, by explicitly disallowing permission
changes on VM_ALLOW_HUGE_VMAP mappings.
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
v1->v2:
- Improve changelog, keep mention of page mappings in comment
arch/arm64/mm/pageattr.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 39fd1f7ff02a..04d4a8f676db 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -96,8 +96,8 @@ static int change_memory_common(unsigned long addr, int numpages,
* we are operating on does not result in such splitting.
*
* Let's restrict ourselves to mappings created by vmalloc (or vmap).
- * Those are guaranteed to consist entirely of page mappings, and
- * splitting is never needed.
+ * Disallow VM_ALLOW_HUGE_VMAP mappings to guarantee that only page
+ * mappings are updated and splitting is never needed.
*
* So check whether the [addr, addr + size) interval is entirely
* covered by precisely one VM area that has the VM_ALLOC flag set.
@@ -105,7 +105,7 @@ static int change_memory_common(unsigned long addr, int numpages,
area = find_vm_area((void *)addr);
if (!area ||
end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
- !(area->flags & VM_ALLOC))
+ ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC))
return -EINVAL;
if (!numpages)
--
2.30.2
On Thu, 03 Apr 2025 10:58:44 +0530, Dev Jain wrote:
> arm64 uses apply_to_page_range to change permissions for kernel vmalloc mappings,
> which does not support changing permissions for block mappings. This function
> will change permissions until it encounters a block mapping, and will bail
> out with a warning. Since there are no reports of this triggering, it
> implies that there are currently no cases of code doing a vmalloc_huge()
> followed by partial permission change. But this is a footgun waiting to
> go off, so let's detect it early and avoid the possibility of permissions
> in an intermediate state. So, explicitly disallow changing permissions
> for VM_ALLOW_HUGE_VMAP mappings.
>
> [...]
Applied to arm64 (for-next/mm), thanks!
[1/1] arm64: pageattr: Explicitly bail out when changing permissions for vmalloc_huge mappings
https://git.kernel.org/arm64/c/fcf8dda8cc48
Cheers,
--
Will
https://fixes.arm64.dev
https://next.arm64.dev
https://will.arm64.dev
Gentle Ping

On 03/04/25 10:58 am, Dev Jain wrote:
> arm64 uses apply_to_page_range to change permissions for kernel vmalloc mappings,
> which does not support changing permissions for block mappings. This function
> will change permissions until it encounters a block mapping, and will bail
> out with a warning.
>
> [...]
On 03.04.25 07:28, Dev Jain wrote:
> arm64 uses apply_to_page_range to change permissions for kernel vmalloc mappings,
> which does not support changing permissions for block mappings.
>
> [...]

Makes sense to me. Whenever required, we can improve the checks (or even try
supporting splitting).

Acked-by: David Hildenbrand <david@redhat.com>

--
Cheers,

David / dhildenb
On 04/04/2025 13:57, David Hildenbrand wrote:
> On 03.04.25 07:28, Dev Jain wrote:
>> arm64 uses apply_to_page_range to change permissions for kernel vmalloc mappings,
>> which does not support changing permissions for block mappings.
>>
>> [...]
>
> Makes sense to me. Whenever required, we can improve the checks (or even try
> supporting splitting).

Yes that's the plan; I'd like to get to the point where we can do huge vmalloc
mappings by default on systems that support BBML2, then split mappings when
needed.

> Acked-by: David Hildenbrand <david@redhat.com>
On 4/3/25 3:28 PM, Dev Jain wrote:
> arm64 uses apply_to_page_range to change permissions for kernel vmalloc mappings,
> which does not support changing permissions for block mappings.
>
> [...]

Reviewed-by: Gavin Shan <gshan@redhat.com>
On 4/3/25 10:58, Dev Jain wrote:
> arm64 uses apply_to_page_range to change permissions for kernel vmalloc mappings,
> which does not support changing permissions for block mappings.
>
> [...]

LGTM

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>