MAX_ORDER is not inclusive: the maximum allocation order the buddy allocator
can deliver is MAX_ORDER-1.
Fix MAX_ORDER usage in __iommu_dma_alloc_pages().
Also use GENMASK() instead of the hard-to-read "(2U << order) - 1" magic.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
---
drivers/iommu/dma-iommu.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 99b2646cb5c7..ac996fd6bd9c 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -736,7 +736,7 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
 	struct page **pages;
 	unsigned int i = 0, nid = dev_to_node(dev);
 
-	order_mask &= (2U << MAX_ORDER) - 1;
+	order_mask &= GENMASK(MAX_ORDER - 1, 0);
 	if (!order_mask)
 		return NULL;
@@ -756,7 +756,7 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
 	 * than a necessity, hence using __GFP_NORETRY until
 	 * falling back to minimum-order allocations.
 	 */
-	for (order_mask &= (2U << __fls(count)) - 1;
+	for (order_mask &= GENMASK(__fls(count), 0);
	     order_mask; order_mask &= ~order_size) {
 		unsigned int order = __fls(order_mask);
 		gfp_t alloc_flags = gfp;
--
2.39.2
On 3/15/23 12:31, Kirill A. Shutemov wrote:
> MAX_ORDER is not inclusive: the maximum allocation order the buddy allocator
> can deliver is MAX_ORDER-1.
>
> Fix MAX_ORDER usage in __iommu_dma_alloc_pages().
>
> Also use GENMASK() instead of the hard-to-read "(2U << order) - 1" magic.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
> ---
> drivers/iommu/dma-iommu.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 99b2646cb5c7..ac996fd6bd9c 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -736,7 +736,7 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
> struct page **pages;
> unsigned int i = 0, nid = dev_to_node(dev);
>
> - order_mask &= (2U << MAX_ORDER) - 1;
> + order_mask &= GENMASK(MAX_ORDER - 1, 0);
> if (!order_mask)
> return NULL;
>
> @@ -756,7 +756,7 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
> * than a necessity, hence using __GFP_NORETRY until
> * falling back to minimum-order allocations.
> */
> - for (order_mask &= (2U << __fls(count)) - 1;
> + for (order_mask &= GENMASK(__fls(count), 0);
> order_mask; order_mask &= ~order_size) {
> unsigned int order = __fls(order_mask);
> gfp_t alloc_flags = gfp;
Hi Kirill,
On Wed, 15 Mar 2023 14:31:32 +0300, "Kirill A. Shutemov"
<kirill.shutemov@linux.intel.com> wrote:
> MAX_ORDER is not inclusive: the maximum allocation order the buddy allocator
> can deliver is MAX_ORDER-1.
>
> Fix MAX_ORDER usage in __iommu_dma_alloc_pages().
>
> Also use GENMASK() instead of the hard-to-read "(2U << order) - 1" magic.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
> ---
> drivers/iommu/dma-iommu.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 99b2646cb5c7..ac996fd6bd9c 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -736,7 +736,7 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
> 	struct page **pages;
> 	unsigned int i = 0, nid = dev_to_node(dev);
>
> - order_mask &= (2U << MAX_ORDER) - 1;
> + order_mask &= GENMASK(MAX_ORDER - 1, 0);
> if (!order_mask)
> return NULL;
>
> @@ -756,7 +756,7 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
> * than a necessity, hence using __GFP_NORETRY until
> * falling back to minimum-order allocations.
> */
> - for (order_mask &= (2U << __fls(count)) - 1;
> + for (order_mask &= GENMASK(__fls(count), 0);
> order_mask; order_mask &= ~order_size) {
> unsigned int order = __fls(order_mask);
> gfp_t alloc_flags = gfp;
Reviewed-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
(For VT-d part, there is no functional impact at all. We only have 2M and 1G
page sizes, no SZ_8M page)
Thanks,
Jacob
On 2023-03-15 11:31, Kirill A. Shutemov wrote:
> MAX_ORDER is not inclusive: the maximum allocation order the buddy allocator
> can deliver is MAX_ORDER-1.
>
> Fix MAX_ORDER usage in __iommu_dma_alloc_pages().
Technically this isn't a major issue - all it means is that if we did
happen to have a suitable page size which lined up with MAX_ORDER, we'd
unsuccessfully try the allocation once before falling back to the order
of the next-smallest page size anyway. Semantically you're correct
though, and I probably did still misunderstand MAX_ORDER 7 years ago :)
> Also use GENMASK() instead of the hard-to-read "(2U << order) - 1" magic.
ISTR that GENMASK() had a habit of generating pretty terrible code for
non-constant arguments, but a GCC9 build for arm64 looks fine - in fact
if anything it seems to be able to optimise out more of the __fls() this
way and save a couple more instructions, which is nice, so:
Acked-by: Robin Murphy <robin.murphy@arm.com>
I'm guessing you probably want to take this through the mm tree - that
should be fine since I don't expect any conflicting changes in the IOMMU
tree for now (cc'ing Joerg just as a heads-up).
Cheers,
Robin.
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
> ---
> drivers/iommu/dma-iommu.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 99b2646cb5c7..ac996fd6bd9c 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -736,7 +736,7 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
> struct page **pages;
> unsigned int i = 0, nid = dev_to_node(dev);
>
> - order_mask &= (2U << MAX_ORDER) - 1;
> + order_mask &= GENMASK(MAX_ORDER - 1, 0);
> if (!order_mask)
> return NULL;
>
> @@ -756,7 +756,7 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
> * than a necessity, hence using __GFP_NORETRY until
> * falling back to minimum-order allocations.
> */
> - for (order_mask &= (2U << __fls(count)) - 1;
> + for (order_mask &= GENMASK(__fls(count), 0);
> order_mask; order_mask &= ~order_size) {
> unsigned int order = __fls(order_mask);
> gfp_t alloc_flags = gfp;
On Wed, Mar 15, 2023 at 12:18:31PM +0000, Robin Murphy wrote:
> I'm guessing you probably want to take this through the mm tree - that
> should be fine since I don't expect any conflicting changes in the IOMMU
> tree for now (cc'ing Joerg just as a heads-up).

Yes, mm tree is fine for this:

Acked-by: Joerg Roedel <jroedel@suse.de>