From: Petr Tesarik <petr.tesarik.ext@huawei.com>

While reworking the dynamic SWIOTLB implementation, I ran into some
locking issues with the current implementation. I believe the bugs
are serious enough to be fixed separately.

Petr Tesarik (2):
  swiotlb: Always set the number of areas before allocating the pool
  swiotlb: Reduce the number of areas to match actual memory pool size

 kernel/dma/swiotlb.c | 46 +++++++++++++++++++++++++++++++++-----------
 1 file changed, 35 insertions(+), 11 deletions(-)

-- 
2.25.1
Thanks, I've applied this to the dma-mapping for-next tree with a trivial cleanup.
On Mon, 2023-06-26 at 15:01 +0200, Petr Tesarik wrote:
> From: Petr Tesarik <petr.tesarik.ext@huawei.com>
>
> While reworking the dynamic SWIOTLB implementation, I ran into some
> locking issues with the current implementation. I believe the bugs
> are serious enough to be fixed separately.

Thanks, excellent work!

Reviewed-by: Roberto Sassu <roberto.sassu@huawei.com>

Roberto

> Petr Tesarik (2):
>   swiotlb: Always set the number of areas before allocating the pool
>   swiotlb: Reduce the number of areas to match actual memory pool size
>
>  kernel/dma/swiotlb.c | 46 +++++++++++++++++++++++++++++++++-----------
>  1 file changed, 35 insertions(+), 11 deletions(-)
On Mon, 26 Jun 2023 15:01:02 +0200
Petr Tesarik <petrtesarik@huaweicloud.com> wrote:

> From: Petr Tesarik <petr.tesarik.ext@huawei.com>
>
> While reworking the dynamic SWIOTLB implementation, I ran into some
> locking issues with the current implementation. I believe the bugs
> are serious enough to be fixed separately.

As an aside (and not directly related to the bugfixes themselves), I
wonder why the area size cannot always be equal to IO_TLB_SEGSIZE. Of
course, we would (usually) end up with more areas, but that should be
a good thing, shouldn't it? The area structure is quite small, so it
cannot be because of memory consumption concerns. The overhead of
taking an uncontended spinlock should also be negligible.

Am I missing something important here?

Petr T
On Mon, Jun 26, 2023 at 04:07:25PM +0200, Petr Tesařík wrote:
> As an aside (and not directly related to the bugfixes themselves), I
> wonder why the area size cannot always be equal to IO_TLB_SEGSIZE. Of
> course, we would (usually) end up with more areas, but that should be
> a good thing, shouldn't it? The area structure is quite small, so it
> cannot be because of memory consumption concerns. The overhead of
> taking an uncontended spinlock should also be negligible.

What would be the benefit of this?
On Thu, 29 Jun 2023 07:12:38 +0200
Christoph Hellwig <hch@lst.de> wrote:

> On Mon, Jun 26, 2023 at 04:07:25PM +0200, Petr Tesařík wrote:
> > As an aside (and not directly related to the bugfixes themselves), I
> > wonder why the area size cannot always be equal to IO_TLB_SEGSIZE. Of
> > course, we would (usually) end up with more areas, but that should be
> > a good thing, shouldn't it? The area structure is quite small, so it
> > cannot be because of memory consumption concerns. The overhead of
> > taking an uncontended spinlock should also be negligible.
>
> What would be the benefit of this?

For me, as a newcomer to this code, the existence of areas _and_
segments was confusing, especially since segments are not represented
by a data structure. It took me more than one day to realize that
slots are grouped into segments for allocation, but changes to the
slot metadata are protected by area spinlocks. In my case, a segment
crossed an area boundary, so the area spinlock protected only half of
the allocated slots.

I could also ask differently: What is the benefit of grouping slots
into segments, and then again grouping segments into areas?

Petr T