Previously we stored the end of the current VMA in curr_end, and then upon
iterating set curr_start to curr_end in order to advance to the next VMA.

However, this doesn't take into account the fact that the VMA might be
updated due to a merge by vma_modify_flags(), which can leave curr_end
stale and thus, upon setting curr_start to curr_end, result in an
incorrect curr_start on the next iteration.

Resolve the issue by setting curr_end to vma->vm_end unconditionally to
ensure this value remains updated should this occur.

Signed-off-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
Fixes: 6c2da14ae1e0 ("mm/mseal: rework mseal apply logic")
Cc: <stable@vger.kernel.org>
Reported-by: Antonius <antonius@bluedragonsec.com>
Closes: https://lore.kernel.org/linux-mm/CAK8a0jyHXqBpt8Xe8v9SNDbnRiwz7OthA8SKY=NLRY7smPEP3Q@mail.gmail.com/
---
 mm/mseal.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/mseal.c b/mm/mseal.c
index 316b5e1dec78..2d72a15d8ea1 100644
--- a/mm/mseal.c
+++ b/mm/mseal.c
@@ -66,7 +66,7 @@ static int mseal_apply(struct mm_struct *mm,
 		prev = vma;
 
 	for_each_vma_range(vmi, vma, end) {
-		const unsigned long curr_end = MIN(vma->vm_end, end);
+		unsigned long curr_end = MIN(vma->vm_end, end);
 
 		if (!(vma->vm_flags & VM_SEALED)) {
 			vm_flags_t vm_flags = vma->vm_flags | VM_SEALED;
@@ -76,6 +76,7 @@ static int mseal_apply(struct mm_struct *mm,
 			if (IS_ERR(vma))
 				return PTR_ERR(vma);
 			vm_flags_set(vma, VM_SEALED);
+			curr_end = vma->vm_end;	/* Merge may have updated. */
 		}
 
 		prev = vma;
--
2.53.0
On 3/27/26 10:06, Lorenzo Stoakes (Oracle) wrote:
> Previously we stored the end of the current VMA in curr_end, and then upon
> iterating to the next VMA updated curr_start to curr_end to advance to the
> next VMA.
>
> However, this doesn't take into account the fact that a VMA might be
> updated due to a merge by vma_modify_flags(), which can result in curr_end
> being stale and thus, upon setting curr_start to curr_end, ending up with
> an incorrect curr_start on the next iteration.
>
> Resolve the issue by setting curr_end to vma->vm_end unconditionally to
> ensure this value remains updated should this occur.
>
> Signed-off-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
> Fixes: 6c2da14ae1e0 ("mm/mseal: rework mseal apply logic")
> Cc: <stable@vger.kernel.org>
> Reported-by: Antonius <antonius@bluedragonsec.com>
> Closes: https://lore.kernel.org/linux-mm/CAK8a0jyHXqBpt8Xe8v9SNDbnRiwz7OthA8SKY=NLRY7smPEP3Q@mail.gmail.com/
> ---
> mm/mseal.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/mseal.c b/mm/mseal.c
> index 316b5e1dec78..2d72a15d8ea1 100644
> --- a/mm/mseal.c
> +++ b/mm/mseal.c
> @@ -66,7 +66,7 @@ static int mseal_apply(struct mm_struct *mm,
> prev = vma;
>
> for_each_vma_range(vmi, vma, end) {
> - const unsigned long curr_end = MIN(vma->vm_end, end);
> + unsigned long curr_end = MIN(vma->vm_end, end);
>
> if (!(vma->vm_flags & VM_SEALED)) {
> vm_flags_t vm_flags = vma->vm_flags | VM_SEALED;
> @@ -76,6 +76,7 @@ static int mseal_apply(struct mm_struct *mm,
> if (IS_ERR(vma))
> return PTR_ERR(vma);
> vm_flags_set(vma, VM_SEALED);
> + curr_end = vma->vm_end; /* Merge may have updated. */
> }
I was a bit confused about why curr_start is allowed to lie before the VMA
rather than within it. Then I recalled that range_contains_unmapped() checks
that there are no holes.

Would the following also sort out the problem and even simplify the code?
diff --git a/mm/mseal.c b/mm/mseal.c
index 603df53ad267..e2093ae3d25c 100644
--- a/mm/mseal.c
+++ b/mm/mseal.c
@@ -56,7 +56,6 @@ static int mseal_apply(struct mm_struct *mm,
 		unsigned long start, unsigned long end)
 {
 	struct vm_area_struct *vma, *prev;
-	unsigned long curr_start = start;
 	VMA_ITERATOR(vmi, mm, start);
 
 	/* We know there are no gaps so this will be non-NULL. */
@@ -66,6 +65,7 @@ static int mseal_apply(struct mm_struct *mm,
 		prev = vma;
 
 	for_each_vma_range(vmi, vma, end) {
+		const unsigned long curr_start = MAX(vma->vm_start, start);
 		const unsigned long curr_end = MIN(vma->vm_end, end);
 
 		if (!vma_test(vma, VMA_SEALED_BIT)) {
@@ -82,7 +82,6 @@ static int mseal_apply(struct mm_struct *mm,
 		}
 
 		prev = vma;
-		curr_start = curr_end;
 	}
 
 	return 0;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
I might be wrong about that; I've been staring at the screen for too long today.
--
Cheers,
David
On Fri, Mar 27, 2026 at 05:57:06PM +0100, David Hildenbrand (Arm) wrote:
> On 3/27/26 10:06, Lorenzo Stoakes (Oracle) wrote:
> > Previously we stored the end of the current VMA in curr_end, and then upon
> > iterating to the next VMA updated curr_start to curr_end to advance to the
> > next VMA.
> >
> > However, this doesn't take into account the fact that a VMA might be
> > updated due to a merge by vma_modify_flags(), which can result in curr_end
> > being stale and thus, upon setting curr_start to curr_end, ending up with
> > an incorrect curr_start on the next iteration.
> >
> > Resolve the issue by setting curr_end to vma->vm_end unconditionally to
> > ensure this value remains updated should this occur.
> >
> > Signed-off-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
> > Fixes: 6c2da14ae1e0 ("mm/mseal: rework mseal apply logic")
> > Cc: <stable@vger.kernel.org>
> > Reported-by: Antonius <antonius@bluedragonsec.com>
> > Closes: https://lore.kernel.org/linux-mm/CAK8a0jyHXqBpt8Xe8v9SNDbnRiwz7OthA8SKY=NLRY7smPEP3Q@mail.gmail.com/
> > ---
> > mm/mseal.c | 3 ++-
> > 1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/mseal.c b/mm/mseal.c
> > index 316b5e1dec78..2d72a15d8ea1 100644
> > --- a/mm/mseal.c
> > +++ b/mm/mseal.c
> > @@ -66,7 +66,7 @@ static int mseal_apply(struct mm_struct *mm,
> > prev = vma;
> >
> > for_each_vma_range(vmi, vma, end) {
> > - const unsigned long curr_end = MIN(vma->vm_end, end);
> > + unsigned long curr_end = MIN(vma->vm_end, end);
> >
> > if (!(vma->vm_flags & VM_SEALED)) {
> > vm_flags_t vm_flags = vma->vm_flags | VM_SEALED;
> > @@ -76,6 +76,7 @@ static int mseal_apply(struct mm_struct *mm,
> > if (IS_ERR(vma))
> > return PTR_ERR(vma);
> > vm_flags_set(vma, VM_SEALED);
> > + curr_end = vma->vm_end; /* Merge may have updated. */
> > }
>
>
> I was a bit confused why curr_start is allowed to not start within the VMA,
> but before it. Then I recalled that range_contains_unmapped() checks for no holes.
>
>
> Would the following also sort out the problem and even simplify the code?
>
> diff --git a/mm/mseal.c b/mm/mseal.c
> index 603df53ad267..e2093ae3d25c 100644
> --- a/mm/mseal.c
> +++ b/mm/mseal.c
> @@ -56,7 +56,6 @@ static int mseal_apply(struct mm_struct *mm,
> unsigned long start, unsigned long end)
> {
> struct vm_area_struct *vma, *prev;
> - unsigned long curr_start = start;
> VMA_ITERATOR(vmi, mm, start);
>
> /* We know there are no gaps so this will be non-NULL. */
> @@ -66,6 +65,7 @@ static int mseal_apply(struct mm_struct *mm,
> prev = vma;
>
> for_each_vma_range(vmi, vma, end) {
> + const unsigned long curr_start = MAX(vma->vm_start, start);
Yeah that's nice :)
> const unsigned long curr_end = MIN(vma->vm_end, end);
>
> if (!vma_test(vma, VMA_SEALED_BIT)) {
> @@ -82,7 +82,6 @@ static int mseal_apply(struct mm_struct *mm,
> }
>
> prev = vma;
> - curr_start = curr_end;
> }
>
> return 0;
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>
>
> I might be wrong about that, I've been staring at the screen for too long today.
No, this is better - I've mulled it over, and this is much better :)

Sorry to be a pain Andrew - let me respin this right now.

I will retain tags as this is functionally equivalent; I have checked it
locally, confirmed the repro is solved, and checked it against AI review
as well.
>
>
> --
> Cheers,
>
> David
Cheers, Lorenzo
On Fri, 27 Mar 2026 09:06:40 +0000 "Lorenzo Stoakes (Oracle)" <ljs@kernel.org> wrote:
> --- a/mm/mseal.c
> +++ b/mm/mseal.c
> @@ -66,7 +66,7 @@ static int mseal_apply(struct mm_struct *mm,
> prev = vma;
>
> for_each_vma_range(vmi, vma, end) {
> - const unsigned long curr_end = MIN(vma->vm_end, end);
> + unsigned long curr_end = MIN(vma->vm_end, end);
>
> if (!(vma->vm_flags & VM_SEALED)) {
> vm_flags_t vm_flags = vma->vm_flags | VM_SEALED;
> @@ -76,6 +76,7 @@ static int mseal_apply(struct mm_struct *mm,
> if (IS_ERR(vma))
> return PTR_ERR(vma);
> vm_flags_set(vma, VM_SEALED);
> + curr_end = vma->vm_end; /* Merge may have updated. */
> }
>
> prev = vma;
This led to some rework in your "mm/vma: convert
vma_modify_flags[_uffd]() to use vma_flags_t". Please check my
handiwork.
reject:
--- mm/mseal.c~mm-vma-convert-vma_modify_flags-to-use-vma_flags_t
+++ mm/mseal.c
@@ -68,14 +68,17 @@ static int mseal_apply(struct mm_struct
 	for_each_vma_range(vmi, vma, end) {
 		const unsigned long curr_end = MIN(vma->vm_end, end);
 
-		if (!(vma->vm_flags & VM_SEALED)) {
-			vm_flags_t vm_flags = vma->vm_flags | VM_SEALED;
+		if (!vma_test(vma, VMA_SEALED_BIT)) {
+			vma_flags_t vma_flags = vma->flags;
+
+			vma_flags_set(&vma_flags, VMA_SEALED_BIT);
 
 			vma = vma_modify_flags(&vmi, prev, vma, curr_start,
-					curr_end, &vm_flags);
+					curr_end, &vma_flags);
 			if (IS_ERR(vma))
 				return PTR_ERR(vma);
-			vm_flags_set(vma, VM_SEALED);
+			vma_start_write(vma);
+			vma_set_flags(vma, VMA_SEALED_BIT);
 		}
 
 		prev = vma;

resolution:

static int mseal_apply(struct mm_struct *mm,
		unsigned long start, unsigned long end)
{
	struct vm_area_struct *vma, *prev;
	unsigned long curr_start = start;
	VMA_ITERATOR(vmi, mm, start);

	/* We know there are no gaps so this will be non-NULL. */
	vma = vma_iter_load(&vmi);
	prev = vma_prev(&vmi);
	if (start > vma->vm_start)
		prev = vma;

	for_each_vma_range(vmi, vma, end) {
		unsigned long curr_end = MIN(vma->vm_end, end);

		if (!vma_test(vma, VMA_SEALED_BIT)) {
			vma_flags_t vma_flags = vma->flags;

			vma_flags_set(&vma_flags, VMA_SEALED_BIT);

			vma = vma_modify_flags(&vmi, prev, vma, curr_start,
					curr_end, &vma_flags);
			if (IS_ERR(vma))
				return PTR_ERR(vma);
			vma_start_write(vma);
			vma_set_flags(vma, VMA_SEALED_BIT);
			curr_end = vma->vm_end;	/* Merge may have updated. */
		}

		prev = vma;
		curr_start = curr_end;
	}

	return 0;
}
On Fri, Mar 27, 2026 at 08:24:46AM -0700, Andrew Morton wrote:
> On Fri, 27 Mar 2026 09:06:40 +0000 "Lorenzo Stoakes (Oracle)" <ljs@kernel.org> wrote:
>
> > --- a/mm/mseal.c
> > +++ b/mm/mseal.c
> > @@ -66,7 +66,7 @@ static int mseal_apply(struct mm_struct *mm,
> > prev = vma;
> >
> > for_each_vma_range(vmi, vma, end) {
> > - const unsigned long curr_end = MIN(vma->vm_end, end);
> > + unsigned long curr_end = MIN(vma->vm_end, end);
> >
> > if (!(vma->vm_flags & VM_SEALED)) {
> > vm_flags_t vm_flags = vma->vm_flags | VM_SEALED;
> > @@ -76,6 +76,7 @@ static int mseal_apply(struct mm_struct *mm,
> > if (IS_ERR(vma))
> > return PTR_ERR(vma);
> > vm_flags_set(vma, VM_SEALED);
> > + curr_end = vma->vm_end; /* Merge may have updated. */
> > }
> >
> > prev = vma;
>
> This led to some rework in your "mm/vma: convert
> vma_modify_flags[_uffd]() to use vma_flags_t". Please check my
> handiwork.
>
> reject:
>
> --- mm/mseal.c~mm-vma-convert-vma_modify_flags-to-use-vma_flags_t
> +++ mm/mseal.c
> @@ -68,14 +68,17 @@ static int mseal_apply(struct mm_struct
> for_each_vma_range(vmi, vma, end) {
> const unsigned long curr_end = MIN(vma->vm_end, end);
>
> - if (!(vma->vm_flags & VM_SEALED)) {
> - vm_flags_t vm_flags = vma->vm_flags | VM_SEALED;
> + if (!vma_test(vma, VMA_SEALED_BIT)) {
> + vma_flags_t vma_flags = vma->flags;
> +
> + vma_flags_set(&vma_flags, VMA_SEALED_BIT);
>
> vma = vma_modify_flags(&vmi, prev, vma, curr_start,
> - curr_end, &vm_flags);
> + curr_end, &vma_flags);
> if (IS_ERR(vma))
> return PTR_ERR(vma);
> - vm_flags_set(vma, VM_SEALED);
> + vma_start_write(vma);
> + vma_set_flags(vma, VMA_SEALED_BIT);
> }
>
> prev = vma;
>
> resolution:
>
> static int mseal_apply(struct mm_struct *mm,
> unsigned long start, unsigned long end)
> {
> struct vm_area_struct *vma, *prev;
> unsigned long curr_start = start;
> VMA_ITERATOR(vmi, mm, start);
>
> /* We know there are no gaps so this will be non-NULL. */
> vma = vma_iter_load(&vmi);
> prev = vma_prev(&vmi);
> if (start > vma->vm_start)
> prev = vma;
>
> for_each_vma_range(vmi, vma, end) {
> unsigned long curr_end = MIN(vma->vm_end, end);
>
> if (!vma_test(vma, VMA_SEALED_BIT)) {
> vma_flags_t vma_flags = vma->flags;
>
> vma_flags_set(&vma_flags, VMA_SEALED_BIT);
>
> vma = vma_modify_flags(&vmi, prev, vma, curr_start,
> curr_end, &vma_flags);
> if (IS_ERR(vma))
> return PTR_ERR(vma);
> vma_start_write(vma);
> vma_set_flags(vma, VMA_SEALED_BIT);
> curr_end = vma->vm_end; /* Merge may have updated. */
> }
>
> prev = vma;
> curr_start = curr_end;
> }
>
> return 0;
> }
>
Thanks, that looks correct!
Cheers, Lorenzo
On 3/27/26 10:06, Lorenzo Stoakes (Oracle) wrote:
> Previously we stored the end of the current VMA in curr_end, and then upon
> iterating to the next VMA updated curr_start to curr_end to advance to the
> next VMA.
>
> However, this doesn't take into account the fact that a VMA might be
> updated due to a merge by vma_modify_flags(), which can result in curr_end
> being stale and thus, upon setting curr_start to curr_end, ending up with
> an incorrect curr_start on the next iteration.
>
> Resolve the issue by setting curr_end to vma->vm_end unconditionally to
> ensure this value remains updated should this occur.
>
> Signed-off-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
> Fixes: 6c2da14ae1e0 ("mm/mseal: rework mseal apply logic")
> Cc: <stable@vger.kernel.org>
> Reported-by: Antonius <antonius@bluedragonsec.com>
> Closes: https://lore.kernel.org/linux-mm/CAK8a0jyHXqBpt8Xe8v9SNDbnRiwz7OthA8SKY=NLRY7smPEP3Q@mail.gmail.com/
Acked-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
Thanks!
> ---
> mm/mseal.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/mseal.c b/mm/mseal.c
> index 316b5e1dec78..2d72a15d8ea1 100644
> --- a/mm/mseal.c
> +++ b/mm/mseal.c
> @@ -66,7 +66,7 @@ static int mseal_apply(struct mm_struct *mm,
> prev = vma;
>
> for_each_vma_range(vmi, vma, end) {
> - const unsigned long curr_end = MIN(vma->vm_end, end);
> + unsigned long curr_end = MIN(vma->vm_end, end);
>
> if (!(vma->vm_flags & VM_SEALED)) {
> vm_flags_t vm_flags = vma->vm_flags | VM_SEALED;
> @@ -76,6 +76,7 @@ static int mseal_apply(struct mm_struct *mm,
> if (IS_ERR(vma))
> return PTR_ERR(vma);
> vm_flags_set(vma, VM_SEALED);
> + curr_end = vma->vm_end; /* Merge may have updated. */
> }
>
> prev = vma;
> --
> 2.53.0
On Fri, Mar 27, 2026 at 09:06:40AM +0000, Lorenzo Stoakes (Oracle) wrote:
> Previously we stored the end of the current VMA in curr_end, and then upon
> iterating to the next VMA updated curr_start to curr_end to advance to the
> next VMA.
>
> However, this doesn't take into account the fact that a VMA might be
> updated due to a merge by vma_modify_flags(), which can result in curr_end
> being stale and thus, upon setting curr_start to curr_end, ending up with
> an incorrect curr_start on the next iteration.
>
> Resolve the issue by setting curr_end to vma->vm_end unconditionally to
> ensure this value remains updated should this occur.
>
> Signed-off-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
> Fixes: 6c2da14ae1e0 ("mm/mseal: rework mseal apply logic")
> Cc: <stable@vger.kernel.org>
> Reported-by: Antonius <antonius@bluedragonsec.com>
> Closes: https://lore.kernel.org/linux-mm/CAK8a0jyHXqBpt8Xe8v9SNDbnRiwz7OthA8SKY=NLRY7smPEP3Q@mail.gmail.com/
Oops, Closes should be:
https://lore.kernel.org/linux-mm/CAK8a0jwWGj9-SgFk0yKFh7i8jMkwKm5b0ao9=kmXWjO54veX2g@mail.gmail.com/
Cheers, Lorenzo
> ---
> mm/mseal.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/mseal.c b/mm/mseal.c
> index 316b5e1dec78..2d72a15d8ea1 100644
> --- a/mm/mseal.c
> +++ b/mm/mseal.c
> @@ -66,7 +66,7 @@ static int mseal_apply(struct mm_struct *mm,
> prev = vma;
>
> for_each_vma_range(vmi, vma, end) {
> - const unsigned long curr_end = MIN(vma->vm_end, end);
> + unsigned long curr_end = MIN(vma->vm_end, end);
>
> if (!(vma->vm_flags & VM_SEALED)) {
> vm_flags_t vm_flags = vma->vm_flags | VM_SEALED;
> @@ -76,6 +76,7 @@ static int mseal_apply(struct mm_struct *mm,
> if (IS_ERR(vma))
> return PTR_ERR(vma);
> vm_flags_set(vma, VM_SEALED);
> + curr_end = vma->vm_end; /* Merge may have updated. */
> }
>
> prev = vma;
> --
> 2.53.0
On Fri, Mar 27, 2026 at 09:06:40AM +0000, Lorenzo Stoakes (Oracle) wrote:
> Previously we stored the end of the current VMA in curr_end, and then upon
> iterating to the next VMA updated curr_start to curr_end to advance to the
> next VMA.
>
> However, this doesn't take into account the fact that a VMA might be
> updated due to a merge by vma_modify_flags(), which can result in curr_end
> being stale and thus, upon setting curr_start to curr_end, ending up with
> an incorrect curr_start on the next iteration.
>
> Resolve the issue by setting curr_end to vma->vm_end unconditionally to
> ensure this value remains updated should this occur.
>
> Signed-off-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
> Fixes: 6c2da14ae1e0 ("mm/mseal: rework mseal apply logic")
> Cc: <stable@vger.kernel.org>
> Reported-by: Antonius <antonius@bluedragonsec.com>
> Closes: https://lore.kernel.org/linux-mm/CAK8a0jyHXqBpt8Xe8v9SNDbnRiwz7OthA8SKY=NLRY7smPEP3Q@mail.gmail.com/
Reviewed-by: Pedro Falcato <pfalcato@suse.de>
--
Pedro