> From: Jean-Philippe Brucker <jean-philippe@linaro.org>
> Sent: Thursday, March 5, 2020 12:47 AM
>
[...]
> > >
> > > * We can't use DVM in nested mode unless the VMID is shared with the
> > >   CPU. For that we'll need the host SMMU driver to hook into the KVM VMID
> > >   allocator, just like we do for the ASID allocator. I haven't yet
> > >   investigated how to do that. It's possible to do vSVA without DVM
> > >   though, by sending all TLB invalidations through the SMMU command queue.
> > "
>
> Hm we're already mandating DVM for host SVA, so I'd say mandate it for
> vSVA as well. We'd avoid a ton of context switches, especially for the zip
> accelerator which doesn't require ATC invalidations. The host needs to pin
> the VMID allocated by KVM and write it in the endpoint's STE.
>
Curious... what is DVM and how is it related to SVA? Is it SMMU specific?
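A rough sketch of the VMID-sharing hook mentioned above, for illustration only: the thread notes that the interface between the host SMMU driver and the KVM VMID allocator had not yet been investigated, so kvm_arch_pin_vmid() and arm_smmu_write_ste_s2vmid() below are hypothetical names, not existing kernel functions.

struct arm_smmu_nested_domain {
	struct kvm	*kvm;	/* VM that owns the assigned endpoint */
	u16		vmid;	/* stage-2 VMID pinned from the KVM allocator */
};

/*
 * Hypothetical sketch: ask KVM to keep the guest's VMID stable while the
 * SMMU references it, then program that VMID into the endpoint's Stream
 * Table Entry, so that broadcast (DVM) invalidations tagged with this VMID
 * also hit the SMMU's stage-2 TLB entries.
 */
static int arm_smmu_share_kvm_vmid(struct arm_smmu_nested_domain *dom)
{
	int vmid;

	vmid = kvm_arch_pin_vmid(dom->kvm);	/* hypothetical KVM hook */
	if (vmid < 0)
		return vmid;

	dom->vmid = vmid;
	arm_smmu_write_ste_s2vmid(dom, dom->vmid);	/* hypothetical STE update */
	return 0;
}

Pinning matters because KVM's VMID allocator normally recycles VMIDs on generation rollover, which would otherwise leave the STE tagged with a stale value.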
On Thu, Mar 05, 2020 at 02:56:20AM +0000, Tian, Kevin wrote:
> > From: Jean-Philippe Brucker <jean-philippe@linaro.org>
> > Sent: Thursday, March 5, 2020 12:47 AM
> >
> [...]
> > > >
> > > > * We can't use DVM in nested mode unless the VMID is shared with the
> > > >   CPU. For that we'll need the host SMMU driver to hook into the KVM VMID
> > > >   allocator, just like we do for the ASID allocator. I haven't yet
> > > >   investigated how to do that. It's possible to do vSVA without DVM
> > > >   though, by sending all TLB invalidations through the SMMU command queue.
> > > "
> >
> > Hm we're already mandating DVM for host SVA, so I'd say mandate it for
> > vSVA as well. We'd avoid a ton of context switches, especially for the zip
> > accelerator which doesn't require ATC invalidations. The host needs to pin
> > the VMID allocated by KVM and write it in the endpoint's STE.
>
> Curious... what is DVM and how is it related to SVA? Is it SMMU specific?

Yes it stands for "Distributed Virtual Memory", an Arm interconnect
protocol. When sharing a process address space, TLB invalidations from the
CPU are broadcasted to the SMMU, so we don't have to send commands through
the SMMU queue to invalidate IOTLBs. However ATCs from PCIe endpoints do
not participate in DVM and still have to be invalidated by hand.

Thanks,
Jean
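To illustrate: a minimal sketch of the invalidation path for a shared (SVA) address space when DVM is in use, loosely based on the arm-smmu-v3 driver. The helper name arm_smmu_atc_inv_domain() is taken from that driver, but the exact signature and surrounding code may differ.

static void sva_mm_invalidate_range(struct arm_smmu_domain *smmu_domain,
				    int ssid, unsigned long iova, size_t size)
{
	/*
	 * Nothing to do for the SMMU's own TLB: the CPU's broadcast TLBI
	 * for this range already reached the SMMU over DVM.
	 */

	/*
	 * Endpoint ATCs do not listen to DVM, so send CMD_ATC_INV through
	 * the command queue for every ATS-enabled device in the domain.
	 */
	arm_smmu_atc_inv_domain(smmu_domain, ssid, iova, size);
}

Without DVM (the non-shared-VMID case discussed above), the same path would also have to queue CMDQ_OP_TLBI_NH_VA commands to invalidate the SMMU's IOTLB by hand.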
> From: Jean-Philippe Brucker <jean-philippe@linaro.org>
> Sent: Thursday, March 5, 2020 3:34 PM
>
> On Thu, Mar 05, 2020 at 02:56:20AM +0000, Tian, Kevin wrote:
> > > From: Jean-Philippe Brucker <jean-philippe@linaro.org>
> > > Sent: Thursday, March 5, 2020 12:47 AM
> > >
> > [...]
> > > > >
> > > > > * We can't use DVM in nested mode unless the VMID is shared with the
> > > > >   CPU. For that we'll need the host SMMU driver to hook into the KVM VMID
> > > > >   allocator, just like we do for the ASID allocator. I haven't yet
> > > > >   investigated how to do that. It's possible to do vSVA without DVM
> > > > >   though, by sending all TLB invalidations through the SMMU command queue.
> > > > "
> > >
> > > Hm we're already mandating DVM for host SVA, so I'd say mandate it for
> > > vSVA as well. We'd avoid a ton of context switches, especially for the zip
> > > accelerator which doesn't require ATC invalidations. The host needs to pin
> > > the VMID allocated by KVM and write it in the endpoint's STE.
> >
> > Curious... what is DVM and how is it related to SVA? Is it SMMU specific?
>
> Yes it stands for "Distributed Virtual Memory", an Arm interconnect
> protocol. When sharing a process address space, TLB invalidations from the
> CPU are broadcasted to the SMMU, so we don't have to send commands through
> the SMMU queue to invalidate IOTLBs. However ATCs from PCIe endpoints do
> not participate in DVM and still have to be invalidated by hand.

ah, got it. Thanks for explanation!