[PATCH] xsk: clear page contiguity bit when unmapping pool
Posted by Ivan Malov 3 years, 9 months ago
When an XSK pool gets mapped, xp_check_dma_contiguity() adds bit 0x1
to pages' DMA addresses that go in ascending order at a 4K stride.
The problem is that this bit does not get cleared before the unmap.

As a result, many warnings from iommu_dma_unmap_page() are seen,
suggesting mapping lookup failures at drivers/iommu/dma-iommu.c:848.

Fixes: 2b43470add8c ("xsk: Introduce AF_XDP buffer allocation API")

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 net/xdp/xsk_buff_pool.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index 87bdd71c7bb6..f70112176b7c 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -332,6 +332,7 @@ static void __xp_dma_unmap(struct xsk_dma_map *dma_map, unsigned long attrs)
 	for (i = 0; i < dma_map->dma_pages_cnt; i++) {
 		dma = &dma_map->dma_pages[i];
 		if (*dma) {
+			*dma &= ~XSK_NEXT_PG_CONTIG_MASK;
 			dma_unmap_page_attrs(dma_map->dev, *dma, PAGE_SIZE,
 					     DMA_BIDIRECTIONAL, attrs);
 			*dma = 0;
-- 
2.30.2
[PATCH net v3 1/1] xsk: clear page contiguity bit when unmapping pool
Posted by Ivan Malov 3 years, 9 months ago
When an XSK pool gets mapped, xp_check_dma_contiguity() adds bit 0x1
to pages' DMA addresses that go in ascending order at a 4K stride.
The problem is that this bit does not get cleared before the unmap.
As a result, many warnings from iommu_dma_unmap_page() show up in
dmesg, indicating that lookups by iommu_iova_to_phys() fail.

Fixes: 2b43470add8c ("xsk: Introduce AF_XDP buffer allocation API")
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 v1 -> v2: minor adjustments to resolve the "Fixes:" tag warning
 v2 -> v3: further refinements to address review notes from Magnus Karlsson

 net/xdp/xsk_buff_pool.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index 87bdd71c7bb6..f70112176b7c 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -332,6 +332,7 @@ static void __xp_dma_unmap(struct xsk_dma_map *dma_map, unsigned long attrs)
 	for (i = 0; i < dma_map->dma_pages_cnt; i++) {
 		dma = &dma_map->dma_pages[i];
 		if (*dma) {
+			*dma &= ~XSK_NEXT_PG_CONTIG_MASK;
 			dma_unmap_page_attrs(dma_map->dev, *dma, PAGE_SIZE,
 					     DMA_BIDIRECTIONAL, attrs);
 			*dma = 0;
-- 
2.30.2
Re: [PATCH net v3 1/1] xsk: clear page contiguity bit when unmapping pool
Posted by patchwork-bot+netdevbpf@kernel.org 3 years, 9 months ago
Hello:

This patch was applied to bpf/bpf.git (master)
by Daniel Borkmann <daniel@iogearbox.net>:

On Tue, 28 Jun 2022 12:18:48 +0300 you wrote:
> When an XSK pool gets mapped, xp_check_dma_contiguity() adds bit 0x1
> to pages' DMA addresses that go in ascending order at a 4K stride.
> The problem is that this bit does not get cleared before the unmap.
> As a result, many warnings from iommu_dma_unmap_page() show up in
> dmesg, indicating that lookups by iommu_iova_to_phys() fail.
> 
> Fixes: 2b43470add8c ("xsk: Introduce AF_XDP buffer allocation API")
> Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
> 
> [...]

Here is the summary with links:
  - [net,v3,1/1] xsk: clear page contiguity bit when unmapping pool
    https://git.kernel.org/bpf/bpf/c/512d1999b8e9

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
Re: [PATCH net v3 1/1] xsk: clear page contiguity bit when unmapping pool
Posted by Magnus Karlsson 3 years, 9 months ago
On Tue, Jun 28, 2022 at 11:25 AM Ivan Malov <ivan.malov@oktetlabs.ru> wrote:
>
> When an XSK pool gets mapped, xp_check_dma_contiguity() adds bit 0x1
> to pages' DMA addresses that go in ascending order at a 4K stride.
> The problem is that this bit does not get cleared before the unmap.
> As a result, many warnings from iommu_dma_unmap_page() show up in
> dmesg, indicating that lookups by iommu_iova_to_phys() fail.
>
> Fixes: 2b43470add8c ("xsk: Introduce AF_XDP buffer allocation API")
> Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>

Thanks Ivan.

Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>

> ---
>  v1 -> v2: minor adjustments to resolve the "Fixes:" tag warning
>  v2 -> v3: further refinements to address review notes from Magnus Karlsson
>
>  net/xdp/xsk_buff_pool.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
> index 87bdd71c7bb6..f70112176b7c 100644
> --- a/net/xdp/xsk_buff_pool.c
> +++ b/net/xdp/xsk_buff_pool.c
> @@ -332,6 +332,7 @@ static void __xp_dma_unmap(struct xsk_dma_map *dma_map, unsigned long attrs)
>         for (i = 0; i < dma_map->dma_pages_cnt; i++) {
>                 dma = &dma_map->dma_pages[i];
>                 if (*dma) {
> +                       *dma &= ~XSK_NEXT_PG_CONTIG_MASK;
>                         dma_unmap_page_attrs(dma_map->dev, *dma, PAGE_SIZE,
>                                              DMA_BIDIRECTIONAL, attrs);
>                         *dma = 0;
> --
> 2.30.2
>
[PATCH v2 1/1] xsk: clear page contiguity bit when unmapping pool
Posted by Ivan Malov 3 years, 9 months ago
When an XSK pool gets mapped, xp_check_dma_contiguity() adds bit 0x1
to pages' DMA addresses that go in ascending order at a 4K stride.
The problem is that this bit does not get cleared before the unmap.
As a result, many warnings from iommu_dma_unmap_page() are seen,
suggesting mapping lookup failures at drivers/iommu/dma-iommu.c:848.

Fixes: 2b43470add8c ("xsk: Introduce AF_XDP buffer allocation API")
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 v1 -> v2: minor adjustments to resolve the "Fixes:" tag warning

 net/xdp/xsk_buff_pool.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index 87bdd71c7bb6..f70112176b7c 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -332,6 +332,7 @@ static void __xp_dma_unmap(struct xsk_dma_map *dma_map, unsigned long attrs)
 	for (i = 0; i < dma_map->dma_pages_cnt; i++) {
 		dma = &dma_map->dma_pages[i];
 		if (*dma) {
+			*dma &= ~XSK_NEXT_PG_CONTIG_MASK;
 			dma_unmap_page_attrs(dma_map->dev, *dma, PAGE_SIZE,
 					     DMA_BIDIRECTIONAL, attrs);
 			*dma = 0;
-- 
2.30.2
Re: [PATCH v2 1/1] xsk: clear page contiguity bit when unmapping pool
Posted by Magnus Karlsson 3 years, 9 months ago
On Tue, Jun 28, 2022 at 2:18 AM Ivan Malov <ivan.malov@oktetlabs.ru> wrote:
>
> When an XSK pool gets mapped, xp_check_dma_contiguity() adds bit 0x1
> to pages' DMA addresses that go in ascending order at a 4K stride.
> The problem is that this bit does not get cleared before the unmap.
> As a result, many warnings from iommu_dma_unmap_page() are seen,
> suggesting mapping lookup failures at drivers/iommu/dma-iommu.c:848.

Thanks Ivan for spotting this. Please indicate whether this patch is
for bpf or net in your subject line, e.g. [PATCH net].

Also, I cannot find a warning at drivers/iommu/dma-iommu.c:848. For
net and bpf I have a WARN() at line 679 in __iommu_dma_unmap(). Maybe
it would be better to just refer to __iommu_dma_unmap() and the
warning in that function. Line numbers tend to change.

> Fixes: 2b43470add8c ("xsk: Introduce AF_XDP buffer allocation API")
> Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
> ---
>  v1 -> v2: minor adjustments to resolve the "Fixes:" tag warning
>
>  net/xdp/xsk_buff_pool.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
> index 87bdd71c7bb6..f70112176b7c 100644
> --- a/net/xdp/xsk_buff_pool.c
> +++ b/net/xdp/xsk_buff_pool.c
> @@ -332,6 +332,7 @@ static void __xp_dma_unmap(struct xsk_dma_map *dma_map, unsigned long attrs)
>         for (i = 0; i < dma_map->dma_pages_cnt; i++) {
>                 dma = &dma_map->dma_pages[i];
>                 if (*dma) {
> +                       *dma &= ~XSK_NEXT_PG_CONTIG_MASK;
>                         dma_unmap_page_attrs(dma_map->dev, *dma, PAGE_SIZE,
>                                              DMA_BIDIRECTIONAL, attrs);
>                         *dma = 0;
> --
> 2.30.2
>