vm_map_pages() currently calls vm_insert_page() on each individual page
in the mapping, which creates significant overhead because we
repeatedly take the page table spinlock. Instead, we should batch
insert the pages using vm_insert_pages(), which amortizes the cost of
the spinlock across the whole mapping.
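For illustration only (not part of this patch's diff), here is a
minimal sketch of the kind of caller that benefits: a driver mmap
handler handing a preallocated page array to vm_map_pages(). The my_buf
structure and my_driver_mmap() are hypothetical; vm_map_pages(),
vm_insert_pages() and vm_insert_page() are the real kernel APIs.

#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical driver state; only the page array matters here. */
struct my_buf {
	struct page **pages;
	unsigned long num_pages;
};

static int my_driver_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct my_buf *buf = file->private_data;

	/*
	 * vm_map_pages() maps vma_pages(vma) pages from the array into
	 * the VMA, honoring vma->vm_pgoff as an offset into the array.
	 * With this patch, the PTEs are populated by one batched
	 * vm_insert_pages() call instead of one locked vm_insert_page()
	 * call per page.
	 */
	return vm_map_pages(vma, buf->pages, buf->num_pages);
}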
Tested by watching hardware-accelerated video on an MTK ChromeOS
device. This particular path maps both a V4L2 buffer and a GEM-allocated
buffer into userspace and converts the contents from one pixel format to
another. Both vb2_mmap() and mtk_gem_object_mmap() exercise this
pathway.
Signed-off-by: Justin Green <greenjustin@chromium.org>
---
mm/memory.c | 10 +---------
1 file changed, 1 insertion(+), 9 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index da360a6eb8a4..7ae6ac42e7d8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2499,7 +2499,6 @@ static int __vm_map_pages(struct vm_area_struct *vma, struct page **pages,
{
unsigned long count = vma_pages(vma);
unsigned long uaddr = vma->vm_start;
- int ret, i;
/* Fail if the user requested offset is beyond the end of the object */
if (offset >= num)
@@ -2509,14 +2508,7 @@ static int __vm_map_pages(struct vm_area_struct *vma, struct page **pages,
if (count > num - offset)
return -ENXIO;
- for (i = 0; i < count; i++) {
- ret = vm_insert_page(vma, uaddr, pages[offset + i]);
- if (ret < 0)
- return ret;
- uaddr += PAGE_SIZE;
- }
-
- return 0;
+ return vm_insert_pages(vma, uaddr, pages + offset, &count);
}
/**
--
2.53.0.rc1.217.geba53bf80e-goog
On Wed, Jan 28, 2026 at 5:57 PM Justin Green <greenjustin@chromium.org> wrote:
>
> vm_map_pages() currently calls vm_insert_page() on each individual page
> in the mapping, which creates significant overhead because we
> repeatedly take the page table spinlock. Instead, we should batch
> insert the pages using vm_insert_pages(), which amortizes the cost of
> the spinlock across the whole mapping.
This makes sense, I wonder why this wasn't done previously?
>
> Tested by watching hardware-accelerated video on an MTK ChromeOS
> device. This particular path maps both a V4L2 buffer and a GEM-allocated
> buffer into userspace and converts the contents from one pixel format to
> another. Both vb2_mmap() and mtk_gem_object_mmap() exercise this
> pathway.
>
> Signed-off-by: Justin Green <greenjustin@chromium.org>
Acked-by: Brian Geffon <bgeffon@google.com>
> ---
> mm/memory.c | 10 +---------
> 1 file changed, 1 insertion(+), 9 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index da360a6eb8a4..7ae6ac42e7d8 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2499,7 +2499,6 @@ static int __vm_map_pages(struct vm_area_struct *vma, struct page **pages,
> {
> unsigned long count = vma_pages(vma);
> unsigned long uaddr = vma->vm_start;
> - int ret, i;
>
> /* Fail if the user requested offset is beyond the end of the object */
> if (offset >= num)
> @@ -2509,14 +2508,7 @@ static int __vm_map_pages(struct vm_area_struct *vma, struct page **pages,
> if (count > num - offset)
> return -ENXIO;
>
> - for (i = 0; i < count; i++) {
> - ret = vm_insert_page(vma, uaddr, pages[offset + i]);
> - if (ret < 0)
> - return ret;
> - uaddr += PAGE_SIZE;
> - }
> -
> - return 0;
> + return vm_insert_pages(vma, uaddr, pages + offset, &count);
> }
>
> /**
> --
> 2.53.0.rc1.217.geba53bf80e-goog
>
On Wed, Jan 28, 2026 at 05:59:12PM -0500, Brian Geffon wrote:
> On Wed, Jan 28, 2026 at 5:57 PM Justin Green <greenjustin@chromium.org> wrote:
> >
> > vm_map_pages() currently calls vm_insert_page() on each individual page
> > in the mapping, which creates significant overhead because we
> > repeatedly take the page table spinlock. Instead, we should batch
> > insert the pages using vm_insert_pages(), which amortizes the cost of
> > the spinlock across the whole mapping.
>
> This makes sense, I wonder why this wasn't done previously?

That's always a good question, because it might reveal why this patch is
a bad idea ...

However in this case, it simply seems to be an oversight.
__vm_map_pages() was introduced in May 2019 and then vm_insert_pages()
was added in April 2020.

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
On Wed, Jan 28, 2026 at 4:51 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Wed, Jan 28, 2026 at 05:59:12PM -0500, Brian Geffon wrote:
> > On Wed, Jan 28, 2026 at 5:57 PM Justin Green <greenjustin@chromium.org> wrote:
> > >
> > > vm_map_pages() currently calls vm_insert_page() on each individual
> > > page in the mapping, which creates significant overhead because we
> > > repeatedly take the page table spinlock. Instead, we should batch
> > > insert the pages using vm_insert_pages(), which amortizes the cost
> > > of the spinlock across the whole mapping.
> >
> > This makes sense, I wonder why this wasn't done previously?
>
> That's always a good question, because it might reveal why this patch is
> a bad idea ...
>
> However in this case, it simply seems to be an oversight.
> __vm_map_pages() was introduced in May 2019 and then vm_insert_pages()
> was added in April 2020.

Yes, it was an oversight. I had originally cooked up vm_insert_pages()
to amortize that spinlock for TCP zerocopy receive, and had not noticed
__vm_map_pages() sitting right there.

Reviewed-by: Arjun Roy <arjunroy@google.com>

-Arjun

> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>