[PATCH v2] mm/mseal: update VMA end correctly on merge

Lorenzo Stoakes (Oracle) posted 1 patch 5 days, 20 hours ago
Previously we stored the end of the current VMA in curr_end and, upon
iterating to the next VMA, set curr_start to curr_end to advance the
range.

However, this fails to account for the fact that the VMA might be updated
by a merge in vma_modify_flags(), which can leave curr_end stale and thus,
upon assigning it to curr_start, result in an incorrect curr_start on the
next iteration.

Resolve the issue by deriving curr_end from vma->vm_end unconditionally so
the value stays up to date should a merge occur.

While we're here, eliminate this entire class of bug by declaring
curr_[start/end] as const values clamped to both the input range and the
current VMA, which also simplifies the logic.

Reported-by: Antonius <antonius@bluedragonsec.com>
Closes: https://lore.kernel.org/linux-mm/CAK8a0jwWGj9-SgFk0yKFh7i8jMkwKm5b0ao9=kmXWjO54veX2g@mail.gmail.com/
Suggested-by: David Hildenbrand (ARM) <david@kernel.org>
Acked-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
Reviewed-by: Pedro Falcato <pfalcato@suse.de>
Signed-off-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
Fixes: 6c2da14ae1e0 ("mm/mseal: rework mseal apply logic")
Cc: <stable@vger.kernel.org>
---
v2:
* Correct Closes: tag
* Use David's excellent idea to improve the patch

v1:
https://lore.kernel.org/all/20260327090640.146308-1-ljs@kernel.org/

 mm/mseal.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/mseal.c b/mm/mseal.c
index 316b5e1dec78..ac58643181f7 100644
--- a/mm/mseal.c
+++ b/mm/mseal.c
@@ -56,7 +56,6 @@ static int mseal_apply(struct mm_struct *mm,
 		unsigned long start, unsigned long end)
 {
 	struct vm_area_struct *vma, *prev;
-	unsigned long curr_start = start;
 	VMA_ITERATOR(vmi, mm, start);

 	/* We know there are no gaps so this will be non-NULL. */
@@ -66,6 +65,7 @@ static int mseal_apply(struct mm_struct *mm,
 		prev = vma;

 	for_each_vma_range(vmi, vma, end) {
+		const unsigned long curr_start = MAX(vma->vm_start, start);
 		const unsigned long curr_end = MIN(vma->vm_end, end);

 		if (!(vma->vm_flags & VM_SEALED)) {
@@ -79,7 +79,6 @@ static int mseal_apply(struct mm_struct *mm,
 		}

 		prev = vma;
-		curr_start = curr_end;
 	}

 	return 0;
--
2.53.0
Re: [PATCH v2] mm/mseal: update VMA end correctly on merge
Posted by David Hildenbrand (Arm) 5 days, 19 hours ago
On 3/27/26 18:31, Lorenzo Stoakes (Oracle) wrote:
> [...]

Acked-by: David Hildenbrand (Arm) <david@kernel.org>

-- 
Cheers,

David
Re: [PATCH v2] mm/mseal: update VMA end correctly on merge
Posted by Lorenzo Stoakes (Oracle) 5 days, 20 hours ago
Note - I tested this locally against the repro and confirmed it resolved it
correctly, and I also ran it through AI review as a double-check.

(Secondary, less important note - I plan to refactor all of these loops as
they're all quite bug-prone :)

Cheers, Lorenzo

On Fri, Mar 27, 2026 at 05:31:04PM +0000, Lorenzo Stoakes (Oracle) wrote:
> [...]