On Thu, Apr 24, 2025 at 08:21:08AM +0000, Tian, Kevin wrote:
> > From: Nicolin Chen <nicolinc@nvidia.com>
> > Sent: Friday, April 11, 2025 2:38 PM
> >
> [...]
> > This is a big
> > improvement since there is no VM Exit during an invalidation, compared to
> > the traditional invalidation pathway by trapping a guest-own invalidation
> > queue and forwarding those commands/requests to the host kernel that will
> > eventually fill a HW-owned queue to execute those commands.
> >
> any data to show how big the improvements could be in major
> IOMMU usages (kernel dma, user dma and sva)?

I thought I had mentioned the percentage of the gain somewhere, but seemingly not in this series... Will add.

Thanks!
Nicolin