This commit fixes the incorrect conversion from cpu_addr to a
page address in cases where we get a virtual address that was
allocated in the vmalloc range.
As a result, virt_to_page() cannot convert such an address
properly and returns an incorrect page address.
We need to detect these cases and obtain the page address using
vmalloc_to_page() instead.
Signed-off-by: Roman Skakun <roman_skakun@epam.com>
Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
---
Hey!
Thanks for the suggestions, Christoph!
I updated the patch according to your advice.
But I'm surprised that nobody has caught this problem
in the common code before. It looks a bit strange to me.
kernel/dma/ops_helpers.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
index 910ae69cae77..782728d8a393 100644
--- a/kernel/dma/ops_helpers.c
+++ b/kernel/dma/ops_helpers.c
@@ -5,6 +5,14 @@
*/
#include <linux/dma-map-ops.h>
+static struct page *cpu_addr_to_page(void *cpu_addr)
+{
+ if (is_vmalloc_addr(cpu_addr))
+ return vmalloc_to_page(cpu_addr);
+ else
+ return virt_to_page(cpu_addr);
+}
+
/*
* Create scatter-list for the already allocated DMA buffer.
*/
@@ -12,7 +20,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr, size_t size,
unsigned long attrs)
{
- struct page *page = virt_to_page(cpu_addr);
+ struct page *page = cpu_addr_to_page(cpu_addr);
int ret;
ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
@@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
return -ENXIO;
return remap_pfn_range(vma, vma->vm_start,
- page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
+ page_to_pfn(cpu_addr_to_page(cpu_addr)) + vma->vm_pgoff,
user_count << PAGE_SHIFT, vma->vm_page_prot);
#else
return -ENXIO;
--
2.25.1
On Tue, 22 Jun 2021, Roman Skakun wrote:
> This commit fixes the incorrect conversion from cpu_addr to a
> page address in cases where we get a virtual address that was
> allocated in the vmalloc range.
> As a result, virt_to_page() cannot convert such an address
> properly and returns an incorrect page address.
>
> We need to detect these cases and obtain the page address using
> vmalloc_to_page() instead.
>
> Signed-off-by: Roman Skakun <roman_skakun@epam.com>
> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
> ---
> Hey!
> Thanks for the suggestions, Christoph!
> I updated the patch according to your advice.
> But I'm surprised that nobody has caught this problem
> in the common code before. It looks a bit strange to me.
>
>
> kernel/dma/ops_helpers.c | 12 ++++++++++--
> 1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
> index 910ae69cae77..782728d8a393 100644
> --- a/kernel/dma/ops_helpers.c
> +++ b/kernel/dma/ops_helpers.c
> @@ -5,6 +5,14 @@
> */
> #include <linux/dma-map-ops.h>
>
> +static struct page *cpu_addr_to_page(void *cpu_addr)
> +{
> + if (is_vmalloc_addr(cpu_addr))
> + return vmalloc_to_page(cpu_addr);
> + else
> + return virt_to_page(cpu_addr);
> +}
We have the same thing in xen_swiotlb_free_coherent. Can we find a way
to call cpu_addr_to_page() from xen_swiotlb_free_coherent? Maybe we can
make cpu_addr_to_page non-static? No problem if it is too much trouble.
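(For context, the duplicated logic in xen_swiotlb_free_coherent() is the same
open-coded conversion, roughly along these lines; this is a paraphrase, not an
exact quote of drivers/xen/swiotlb-xen.c, and the variable names are approximate:)

	/* open-coded in xen_swiotlb_free_coherent(), same pattern as the new helper */
	if (is_vmalloc_addr(vaddr))
		page = vmalloc_to_page(vaddr);
	else
		page = virt_to_page(vaddr);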
> /*
> * Create scatter-list for the already allocated DMA buffer.
> */
> @@ -12,7 +20,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
> void *cpu_addr, dma_addr_t dma_addr, size_t size,
> unsigned long attrs)
> {
> - struct page *page = virt_to_page(cpu_addr);
> + struct page *page = cpu_addr_to_page(cpu_addr);
> int ret;
>
> ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
> @@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
> return -ENXIO;
>
> return remap_pfn_range(vma, vma->vm_start,
> - page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
> + page_to_pfn(cpu_addr_to_page(cpu_addr)) + vma->vm_pgoff,
> user_count << PAGE_SHIFT, vma->vm_page_prot);
> #else
> return -ENXIO;
> --
> 2.25.1
>
Hi, Stefano!
> We have the same thing in xen_swiotlb_free_coherent. Can we find a way
> to call cpu_addr_to_page() from xen_swiotlb_free_coherent?
> Maybe we can make cpu_addr_to_page non-static?
Yes, right. If we want to reuse this helper instead of the duplicated code
block in xen_swiotlb_free_coherent(), we need to make cpu_addr_to_page()
global and add a new declaration for it in include/linux/dma-map-ops.h.
What do you think?
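For illustration only, here is a rough sketch of that idea; nothing below is
final, the helper keeps the cpu_addr_to_page() name from this patch, and the
exact placement in the header is only a guess:

	/* include/linux/dma-map-ops.h: new declaration for the shared helper */
	struct page *cpu_addr_to_page(void *cpu_addr);

	/* kernel/dma/ops_helpers.c: same body as in this patch, just no longer static */
	struct page *cpu_addr_to_page(void *cpu_addr)
	{
		if (is_vmalloc_addr(cpu_addr))
			return vmalloc_to_page(cpu_addr);
		else
			return virt_to_page(cpu_addr);
	}

	/* drivers/xen/swiotlb-xen.c: xen_swiotlb_free_coherent() could then reuse
	 * the helper instead of open-coding the is_vmalloc_addr() check */
	page = cpu_addr_to_page(vaddr);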
Cheers!
On Wed, 14 Jul 2021 at 04:23, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Tue, 22 Jun 2021, Roman Skakun wrote:
> > This commit fixes the incorrect conversion from cpu_addr to a
> > page address in cases where we get a virtual address that was
> > allocated in the vmalloc range.
> > As a result, virt_to_page() cannot convert such an address
> > properly and returns an incorrect page address.
> >
> > We need to detect these cases and obtain the page address using
> > vmalloc_to_page() instead.
> >
> > Signed-off-by: Roman Skakun <roman_skakun@epam.com>
> > Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
> > ---
> > Hey!
> > Thanks for the suggestions, Christoph!
> > I updated the patch according to your advice.
> > But I'm surprised that nobody has caught this problem
> > in the common code before. It looks a bit strange to me.
> >
> >
> > kernel/dma/ops_helpers.c | 12 ++++++++++--
> > 1 file changed, 10 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
> > index 910ae69cae77..782728d8a393 100644
> > --- a/kernel/dma/ops_helpers.c
> > +++ b/kernel/dma/ops_helpers.c
> > @@ -5,6 +5,14 @@
> > */
> > #include <linux/dma-map-ops.h>
> >
> > +static struct page *cpu_addr_to_page(void *cpu_addr)
> > +{
> > + if (is_vmalloc_addr(cpu_addr))
> > + return vmalloc_to_page(cpu_addr);
> > + else
> > + return virt_to_page(cpu_addr);
> > +}
>
> We have the same thing in xen_swiotlb_free_coherent. Can we find a way
> to call cpu_addr_to_page() from xen_swiotlb_free_coherent? Maybe we can
> make cpu_addr_to_page non-static? No problem if it is too much trouble.
>
>
> > /*
> > * Create scatter-list for the already allocated DMA buffer.
> > */
> > @@ -12,7 +20,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
> > void *cpu_addr, dma_addr_t dma_addr, size_t size,
> > unsigned long attrs)
> > {
> > - struct page *page = virt_to_page(cpu_addr);
> > + struct page *page = cpu_addr_to_page(cpu_addr);
> > int ret;
> >
> > ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
> > @@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
> > return -ENXIO;
> >
> > return remap_pfn_range(vma, vma->vm_start,
> > - page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
> > + page_to_pfn(cpu_addr_to_page(cpu_addr)) + vma->vm_pgoff,
> > user_count << PAGE_SHIFT, vma->vm_page_prot);
> > #else
> > return -ENXIO;
> > --
> > 2.25.1
> >
--
Best Regards, Roman.