When checking whether a memory buffer is consecutive in machine memory,
its alignment needs to be checked, too. Failing to do so might result
in DMA memory not being aligned according to its requested size,
leading to error messages like:
4xxx 0000:2b:00.0: enabling device (0140 -> 0142)
4xxx 0000:2b:00.0: Ring address not aligned
4xxx 0000:2b:00.0: Failed to initialise service qat_crypto
4xxx 0000:2b:00.0: Resetting device qat_dev0
4xxx: probe of 0000:2b:00.0 failed with error -14
Fixes: 9435cce87950 ("xen/swiotlb: Add support for 64KB page granularity")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- use 1ULL for creating align mask in order to cover ARM32 LPAE
- fix case of XEN_PAGE_SIZE != PAGE_SIZE (Jan Beulich)
---
drivers/xen/swiotlb-xen.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 35155258a7e2..ddf5b1df632e 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -78,9 +78,15 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
 {
 	unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
 	unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
+	phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
 
 	next_bfn = pfn_to_bfn(xen_pfn);
 
+	/* If buffer is physically aligned, ensure DMA alignment. */
+	if (IS_ALIGNED(p, algn) &&
+	    !IS_ALIGNED(next_bfn << XEN_PAGE_SHIFT, algn))
+		return 1;
+
 	for (i = 1; i < nr_pages; i++)
 		if (pfn_to_bfn(++xen_pfn) != ++next_bfn)
 			return 1;
--
2.43.0
Hi,

On 16/09/2024 08:47, Juergen Gross wrote:
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 35155258a7e2..ddf5b1df632e 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -78,9 +78,15 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
> {
> 	unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
> 	unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
> +	phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
>
> 	next_bfn = pfn_to_bfn(xen_pfn);
>
> +	/* If buffer is physically aligned, ensure DMA alignment. */
> +	if (IS_ALIGNED(p, algn) &&
> +	    !IS_ALIGNED(next_bfn << XEN_PAGE_SHIFT, algn))
> +		return 1;

There is a regression in the mpt3sas driver because of this change. When, in a dom0, this driver creates its DMA pool at probe time and calls dma_pool_zalloc() (see [1]), the call to range_straddles_page_boundary() (from xen_swiotlb_alloc_coherent()) returns 1 because of the alignment checks added by this patch.

Then the call to xen_create_contiguous_region() in xen_swiotlb_alloc_coherent() fails because the passed size order is too big (> MAX_CONTIG_ORDER). This driver sets the pool allocation block size to 2.3+ MBytes.

From previous discussions on the v1 patch, these checks are not necessary from xen_swiotlb_alloc_coherent(), which already ensures alignment, right?

[1] https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/scsi/mpt3sas/mpt3sas_base.c?h=v6.12.1#n6227

Regards,
Thierry
On 16.09.2024 08:47, Juergen Gross wrote:
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -78,9 +78,15 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
> {
> 	unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
> 	unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
> +	phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
>
> 	next_bfn = pfn_to_bfn(xen_pfn);
>
> +	/* If buffer is physically aligned, ensure DMA alignment. */
> +	if (IS_ALIGNED(p, algn) &&
> +	    !IS_ALIGNED(next_bfn << XEN_PAGE_SHIFT, algn))

And this shift is not at risk of losing bits on Arm LPAE?

Jan
On 16.09.24 08:50, Jan Beulich wrote:
> On 16.09.2024 08:47, Juergen Gross wrote:
>> --- a/drivers/xen/swiotlb-xen.c
>> +++ b/drivers/xen/swiotlb-xen.c
>> @@ -78,9 +78,15 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
>> {
>> 	unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
>> 	unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
>> +	phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
>>
>> 	next_bfn = pfn_to_bfn(xen_pfn);
>>
>> +	/* If buffer is physically aligned, ensure DMA alignment. */
>> +	if (IS_ALIGNED(p, algn) &&
>> +	    !IS_ALIGNED(next_bfn << XEN_PAGE_SHIFT, algn))
>
> And this shift is not at risk of losing bits on Arm LPAE?

For alignment check this just doesn't matter (assuming XEN_PAGE_SIZE is
smaller than 4G).

Juergen
On 16.09.24 08:56, Juergen Gross wrote:
> On 16.09.24 08:50, Jan Beulich wrote:
>> On 16.09.2024 08:47, Juergen Gross wrote:
>>> --- a/drivers/xen/swiotlb-xen.c
>>> +++ b/drivers/xen/swiotlb-xen.c
>>> @@ -78,9 +78,15 @@ static inline int
>>> range_straddles_page_boundary(phys_addr_t p, size_t size)
>>> {
>>> 	unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
>>> 	unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
>>> +	phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
>>> 	next_bfn = pfn_to_bfn(xen_pfn);
>>> +	/* If buffer is physically aligned, ensure DMA alignment. */
>>> +	if (IS_ALIGNED(p, algn) &&
>>> +	    !IS_ALIGNED(next_bfn << XEN_PAGE_SHIFT, algn))
>>
>> And this shift is not at risk of losing bits on Arm LPAE?
>
> For alignment check this just doesn't matter (assuming XEN_PAGE_SIZE is
> smaller than 4G).

Wait, that was nonsense.

I'll change the check to:

    !IS_ALIGNED((phys_addr_t)next_bfn << XEN_PAGE_SHIFT, algn)

Juergen
On Mon, 16 Sep 2024, Juergen Gross wrote:
> On 16.09.24 08:56, Juergen Gross wrote:
> > On 16.09.24 08:50, Jan Beulich wrote:
> > > On 16.09.2024 08:47, Juergen Gross wrote:
> > > > --- a/drivers/xen/swiotlb-xen.c
> > > > +++ b/drivers/xen/swiotlb-xen.c
> > > > @@ -78,9 +78,15 @@ static inline int
> > > > range_straddles_page_boundary(phys_addr_t p, size_t size)
> > > > {
> > > > 	unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
> > > > 	unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
> > > > +	phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
> > > > 	next_bfn = pfn_to_bfn(xen_pfn);
> > > > +	/* If buffer is physically aligned, ensure DMA alignment. */
> > > > +	if (IS_ALIGNED(p, algn) &&
> > > > +	    !IS_ALIGNED(next_bfn << XEN_PAGE_SHIFT, algn))
> > >
> > > And this shift is not at risk of losing bits on Arm LPAE?
> >
> > For alignment check this just doesn't matter (assuming XEN_PAGE_SIZE is
> > smaller than 4G).
>
> Wait, that was nonsense.
>
> I'll change the check to:
>
>     !IS_ALIGNED((phys_addr_t)next_bfn << XEN_PAGE_SHIFT, algn)

With this change:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
On 16.09.2024 08:59, Juergen Gross wrote:
> On 16.09.24 08:56, Juergen Gross wrote:
>> On 16.09.24 08:50, Jan Beulich wrote:
>>> On 16.09.2024 08:47, Juergen Gross wrote:
>>>> --- a/drivers/xen/swiotlb-xen.c
>>>> +++ b/drivers/xen/swiotlb-xen.c
>>>> @@ -78,9 +78,15 @@ static inline int
>>>> range_straddles_page_boundary(phys_addr_t p, size_t size)
>>>> {
>>>> 	unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
>>>> 	unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
>>>> +	phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
>>>> 	next_bfn = pfn_to_bfn(xen_pfn);
>>>> +	/* If buffer is physically aligned, ensure DMA alignment. */
>>>> +	if (IS_ALIGNED(p, algn) &&
>>>> +	    !IS_ALIGNED(next_bfn << XEN_PAGE_SHIFT, algn))
>>>
>>> And this shift is not at risk of losing bits on Arm LPAE?
>>
>> For alignment check this just doesn't matter (assuming XEN_PAGE_SIZE is
>> smaller than 4G).
>
> Wait, that was nonsense.

I think it was quite reasonable, as long as also algn (and hence size)
isn't absurdly large.

Jan
On 16.09.24 09:01, Jan Beulich wrote:
> On 16.09.2024 08:59, Juergen Gross wrote:
>> On 16.09.24 08:56, Juergen Gross wrote:
>>> On 16.09.24 08:50, Jan Beulich wrote:
>>>> On 16.09.2024 08:47, Juergen Gross wrote:
>>>>> --- a/drivers/xen/swiotlb-xen.c
>>>>> +++ b/drivers/xen/swiotlb-xen.c
>>>>> @@ -78,9 +78,15 @@ static inline int
>>>>> range_straddles_page_boundary(phys_addr_t p, size_t size)
>>>>> {
>>>>> 	unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
>>>>> 	unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
>>>>> +	phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
>>>>> 	next_bfn = pfn_to_bfn(xen_pfn);
>>>>> +	/* If buffer is physically aligned, ensure DMA alignment. */
>>>>> +	if (IS_ALIGNED(p, algn) &&
>>>>> +	    !IS_ALIGNED(next_bfn << XEN_PAGE_SHIFT, algn))
>>>>
>>>> And this shift is not at risk of losing bits on Arm LPAE?
>>>
>>> For alignment check this just doesn't matter (assuming XEN_PAGE_SIZE is
>>> smaller than 4G).
>>
>> Wait, that was nonsense.
>
> I think it was quite reasonable, as long as also algn (and hence size)
> isn't absurdly large.

Better safe than sorry.

Juergen
On 16.09.2024 08:56, Juergen Gross wrote:
> On 16.09.24 08:50, Jan Beulich wrote:
>> On 16.09.2024 08:47, Juergen Gross wrote:
>>> --- a/drivers/xen/swiotlb-xen.c
>>> +++ b/drivers/xen/swiotlb-xen.c
>>> @@ -78,9 +78,15 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
>>> {
>>> 	unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
>>> 	unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
>>> +	phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
>>>
>>> 	next_bfn = pfn_to_bfn(xen_pfn);
>>>
>>> +	/* If buffer is physically aligned, ensure DMA alignment. */
>>> +	if (IS_ALIGNED(p, algn) &&
>>> +	    !IS_ALIGNED(next_bfn << XEN_PAGE_SHIFT, algn))
>>
>> And this shift is not at risk of losing bits on Arm LPAE?
>
> For alignment check this just doesn't matter (assuming XEN_PAGE_SIZE is
> smaller than 4G).

Oh, yes - of course. A (hypothetical?) strict no-overflow checking
mode of the kernel may take issue though, so maybe better to right-
shift "algn"?

Jan
On 16.09.24 08:59, Jan Beulich wrote:
> On 16.09.2024 08:56, Juergen Gross wrote:
>> On 16.09.24 08:50, Jan Beulich wrote:
>>> On 16.09.2024 08:47, Juergen Gross wrote:
>>>> --- a/drivers/xen/swiotlb-xen.c
>>>> +++ b/drivers/xen/swiotlb-xen.c
>>>> @@ -78,9 +78,15 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
>>>> {
>>>> 	unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
>>>> 	unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
>>>> +	phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
>>>>
>>>> 	next_bfn = pfn_to_bfn(xen_pfn);
>>>>
>>>> +	/* If buffer is physically aligned, ensure DMA alignment. */
>>>> +	if (IS_ALIGNED(p, algn) &&
>>>> +	    !IS_ALIGNED(next_bfn << XEN_PAGE_SHIFT, algn))
>>>
>>> And this shift is not at risk of losing bits on Arm LPAE?
>>
>> For alignment check this just doesn't matter (assuming XEN_PAGE_SIZE is
>> smaller than 4G).
>
> Oh, yes - of course. A (hypothetical?) strict no-overflow checking
> mode of the kernel may take issue though, so maybe better to right-
> shift "algn"?

No, this wouldn't work in case algn < XEN_PAGE_SIZE.

Juergen