[PATCH v1] swiotlb: optimize get_max_slots()

From: Petr Tesarik <petr.tesarik.ext@huawei.com>

Use a simple logical shift and increment to calculate the number of slots
taken by the DMA segment boundary.

At least GCC-13 is not able to optimize the expression, producing this
horrible assembly code on x86:

	cmpq	$-1, %rcx
	je	.L364
	addq	$2048, %rcx
	shrq	$11, %rcx
	movq	%rcx, %r13
.L331:
	// rest of the function here...

	// after function epilogue and return:
.L364:
	movabsq $9007199254740992, %r13
	jmp	.L331

After the optimization, the code looks more reasonable:

	shrq	$11, %r11
	leaq	1(%r11), %rbx

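For reference (not part of the patch), the equivalence of the old and new
expressions can be checked with a small user-space sketch. IO_TLB_SHIFT,
IO_TLB_SIZE, BITS_PER_LONG and nr_slots() are re-created here from swiotlb
purely for illustration; the loop walks every boundary mask of the form
2^k - 1, including ~0UL, which the old code had to special-case to avoid
overflowing in boundary_mask + 1:

	#include <assert.h>
	#include <limits.h>
	#include <stdio.h>

	#define IO_TLB_SHIFT	11
	#define IO_TLB_SIZE	(1UL << IO_TLB_SHIFT)
	#define BITS_PER_LONG	(CHAR_BIT * sizeof(unsigned long))

	/* old: special-case ~0UL, then round (mask + 1) up to whole slots */
	static unsigned long old_max_slots(unsigned long mask)
	{
		if (mask == ~0UL)
			return 1UL << (BITS_PER_LONG - IO_TLB_SHIFT);
		/* open-coded nr_slots(mask + 1), i.e. DIV_ROUND_UP() */
		return (mask + 1 + IO_TLB_SIZE - 1) / IO_TLB_SIZE;
	}

	/* new: one shift and one increment, no special case */
	static unsigned long new_max_slots(unsigned long mask)
	{
		return (mask >> IO_TLB_SHIFT) + 1;
	}

	int main(void)
	{
		unsigned long mask = 0;
		unsigned int k;

		for (k = 0; k <= BITS_PER_LONG; k++) {
			assert(old_max_slots(mask) == new_max_slots(mask));
			mask = (mask << 1) | 1;		/* next 2^k - 1 mask */
		}
		puts("old and new get_max_slots() agree for all 2^k - 1 masks");
		return 0;
	}

The two forms agree because a boundary mask always has the shape 2^k - 1, so
its low IO_TLB_SHIFT bits are either all ones or all zeros and the shift
already rounds down to a whole number of slots; the +1 then gives the same
count as the round-up in nr_slots(), without ever computing mask + 1.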
Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com>
---
 kernel/dma/swiotlb.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 2b83e3ad9dca..a95d2ea2ae18 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -577,9 +577,7 @@ static inline phys_addr_t slot_addr(phys_addr_t start, phys_addr_t idx)
  */
 static inline unsigned long get_max_slots(unsigned long boundary_mask)
 {
-	if (boundary_mask == ~0UL)
-		return 1UL << (BITS_PER_LONG - IO_TLB_SHIFT);
-	return nr_slots(boundary_mask + 1);
+	return (boundary_mask >> IO_TLB_SHIFT) + 1;
 }
 
 static unsigned int wrap_area_index(struct io_tlb_mem *mem, unsigned int index)
-- 
2.25.1
Re: [PATCH v1] swiotlb: optimize get_max_slots()
Posted by Christoph Hellwig 2 years, 6 months ago
Thanks, applied.