madvise_dontneed_single_vma() and madvise_free_single_vma() support both
batched and unbatched TLB flush use cases, depending on the value of the
received tlb parameter. This dual support existed only to allow a safe,
incremental transition from the unbatched flushes to the batched ones.
Now that the transition is complete, there is no remaining unbatched TLB
flush use case. Remove the code supporting the no longer used cases.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
mm/madvise.c | 19 ++-----------------
1 file changed, 2 insertions(+), 17 deletions(-)
diff --git a/mm/madvise.c b/mm/madvise.c
index d5f4ce3041a4..25af0a24c00b 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -795,18 +795,11 @@ static const struct mm_walk_ops madvise_free_walk_ops = {
};
static int madvise_free_single_vma(
- struct mmu_gather *caller_tlb, struct vm_area_struct *vma,
+ struct mmu_gather *tlb, struct vm_area_struct *vma,
unsigned long start_addr, unsigned long end_addr)
{
struct mm_struct *mm = vma->vm_mm;
struct mmu_notifier_range range;
- struct mmu_gather self_tlb;
- struct mmu_gather *tlb;
-
- if (caller_tlb)
- tlb = caller_tlb;
- else
- tlb = &self_tlb;
/* MADV_FREE works for only anon vma at the moment */
if (!vma_is_anonymous(vma))
@@ -822,8 +815,6 @@ static int madvise_free_single_vma(
range.start, range.end);
lru_add_drain();
- if (!caller_tlb)
- tlb_gather_mmu(tlb, mm);
update_hiwater_rss(mm);
mmu_notifier_invalidate_range_start(&range);
@@ -832,9 +823,6 @@ static int madvise_free_single_vma(
&madvise_free_walk_ops, tlb);
tlb_end_vma(tlb, vma);
mmu_notifier_invalidate_range_end(&range);
- if (!caller_tlb)
- tlb_finish_mmu(tlb);
-
return 0;
}
@@ -866,10 +854,7 @@ static long madvise_dontneed_single_vma(struct mmu_gather *tlb,
.even_cows = true,
};
- if (!tlb)
- zap_page_range_single(vma, start, end - start, &details);
- else
- unmap_vma_single(tlb, vma, start, end - start, &details);
+ unmap_vma_single(tlb, vma, start, end - start, &details);
return 0;
}
--
2.39.5
On Mon, Mar 10, 2025 at 10:23:18AM -0700, SeongJae Park wrote:
> madvise_dontneed_single_vma() and madvise_free_single_vma() support both
> batched tlb flushes and unbatched tlb flushes use cases depending on
> received tlb parameter's value. The supports were for safe and fine
> transition of the usages from the unbatched flushes to the batched ones.
> Now the transition is done, and therefore there is no real unbatched tlb
> flushes use case. Remove the code for supporting the no more being used
> cases.
>
> Signed-off-by: SeongJae Park <sj@kernel.org>
Obviously I support this based on previous preview :) but I wonder if we
can avoid this horrid caller_tlb pattern in the first instance.
FWIW:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> mm/madvise.c | 19 ++-----------------
> 1 file changed, 2 insertions(+), 17 deletions(-)
>
> diff --git a/mm/madvise.c b/mm/madvise.c
> index d5f4ce3041a4..25af0a24c00b 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -795,18 +795,11 @@ static const struct mm_walk_ops madvise_free_walk_ops = {
> };
>
> static int madvise_free_single_vma(
> - struct mmu_gather *caller_tlb, struct vm_area_struct *vma,
> + struct mmu_gather *tlb, struct vm_area_struct *vma,
> unsigned long start_addr, unsigned long end_addr)
> {
> struct mm_struct *mm = vma->vm_mm;
> struct mmu_notifier_range range;
> - struct mmu_gather self_tlb;
> - struct mmu_gather *tlb;
> -
> - if (caller_tlb)
> - tlb = caller_tlb;
> - else
> - tlb = &self_tlb;
>
> /* MADV_FREE works for only anon vma at the moment */
> if (!vma_is_anonymous(vma))
> @@ -822,8 +815,6 @@ static int madvise_free_single_vma(
> range.start, range.end);
>
> lru_add_drain();
> - if (!caller_tlb)
> - tlb_gather_mmu(tlb, mm);
> update_hiwater_rss(mm);
>
> mmu_notifier_invalidate_range_start(&range);
> @@ -832,9 +823,6 @@ static int madvise_free_single_vma(
> &madvise_free_walk_ops, tlb);
> tlb_end_vma(tlb, vma);
> mmu_notifier_invalidate_range_end(&range);
> - if (!caller_tlb)
> - tlb_finish_mmu(tlb);
> -
> return 0;
> }
>
> @@ -866,10 +854,7 @@ static long madvise_dontneed_single_vma(struct mmu_gather *tlb,
> .even_cows = true,
> };
>
> - if (!tlb)
> - zap_page_range_single(vma, start, end - start, &details);
> - else
> - unmap_vma_single(tlb, vma, start, end - start, &details);
> + unmap_vma_single(tlb, vma, start, end - start, &details);
> return 0;
> }
>
> --
> 2.39.5
On Tue, 11 Mar 2025 14:01:20 +0000 Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote:

> On Mon, Mar 10, 2025 at 10:23:18AM -0700, SeongJae Park wrote:
> > madvise_dontneed_single_vma() and madvise_free_single_vma() support both
> > batched tlb flushes and unbatched tlb flushes use cases depending on
> > received tlb parameter's value. The supports were for safe and fine
> > transition of the usages from the unbatched flushes to the batched ones.
> > Now the transition is done, and therefore there is no real unbatched tlb
> > flushes use case. Remove the code for supporting the no more being used
> > cases.
> >
> > Signed-off-by: SeongJae Park <sj@kernel.org>
>
> Obviously I support this based on previous preview :) but I wonder if we
> can avoid this horrid caller_tlb pattern in the first instance.

I will try, though I have no good idea for that for now.

Maybe we could simply squash patches 7-9. I'm a bit concerned it would make
the changes unnecessarily mixed and not small, but I have no strong opinion
about it. Please feel free to let me know if you want that.

> FWIW:
>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

Appreciate your reviews!

Thanks,
SJ

[...]
On Tue, Mar 11, 2025 at 02:02:11PM -0700, SeongJae Park wrote:
> On Tue, 11 Mar 2025 14:01:20 +0000 Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote:
>
> > On Mon, Mar 10, 2025 at 10:23:18AM -0700, SeongJae Park wrote:
> > > madvise_dontneed_single_vma() and madvise_free_single_vma() support both
> > > batched tlb flushes and unbatched tlb flushes use cases depending on
> > > received tlb parameter's value. The supports were for safe and fine
> > > transition of the usages from the unbatched flushes to the batched ones.
> > > Now the transition is done, and therefore there is no real unbatched tlb
> > > flushes use case. Remove the code for supporting the no more being used
> > > cases.
> > >
> > > Signed-off-by: SeongJae Park <sj@kernel.org>
> >
> > Obviously I support this based on previous preview :) but I wonder if we
> > can avoid this horrid caller_tlb pattern in the first instance.
>
> I will try, though I have no good idea for that for now.
>
> Maybe we could simply squash patches 7-9. I'm bit concerned if it makes
> changes unnecessariy mixed and not small, but I have no strong opinion about
> it. Please feel free to let me know if you want that.

Yeah, though maybe try to make things as incremental as possible within
that?

> > FWIW:
> >
> > Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>
> Appreciate your reviews!

No problem! Feel free to propagate to respin (assuming no major changes :)
thanks for writing good clean code!

> Thanks,
> SJ
>
> [...]
On Wed, 12 Mar 2025 13:46:38 +0000 Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote:

> On Tue, Mar 11, 2025 at 02:02:11PM -0700, SeongJae Park wrote:
> > On Tue, 11 Mar 2025 14:01:20 +0000 Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote:
> >
> > > On Mon, Mar 10, 2025 at 10:23:18AM -0700, SeongJae Park wrote:
> > > > madvise_dontneed_single_vma() and madvise_free_single_vma() support both
> > > > batched tlb flushes and unbatched tlb flushes use cases depending on
> > > > received tlb parameter's value. The supports were for safe and fine
> > > > transition of the usages from the unbatched flushes to the batched ones.
> > > > Now the transition is done, and therefore there is no real unbatched tlb
> > > > flushes use case. Remove the code for supporting the no more being used
> > > > cases.
> > > >
> > > > Signed-off-by: SeongJae Park <sj@kernel.org>
> > >
> > > Obviously I support this based on previous preview :) but I wonder if we
> > > can avoid this horrid caller_tlb pattern in the first instance.
> >
> > I will try, though I have no good idea for that for now.
> >
> > Maybe we could simply squash patches 7-9. I'm bit concerned if it makes
> > changes unnecessariy mixed and not small, but I have no strong opinion about
> > it. Please feel free to let me know if you want that.
>
> Yeah, though maybe try to make things as incremental as possible within
> that?

Now I think we can make the entire batching change for MADV_FREE first, and
then make another change for MADV_DONTNEED[_LOCKED]. That way, the caller_tlb
pattern will not be introduced at all, and the changes in individual commits
will be small and dense. Please let me know if you have any concerns about
this approach. If I hear none, I will format the next spin that way.

Thanks,
SJ

[...]