> -----Original Message-----
> From: Eric Auger <eric.auger@redhat.com>
> Sent: 12 January 2026 15:44
> To: Shameer Kolothum <skolothumtho@nvidia.com>; qemu-arm@nongnu.org;
> qemu-devel@nongnu.org
> Cc: peter.maydell@linaro.org; Nicolin Chen <nicolinc@nvidia.com>;
> Nathan Chen <nathanc@nvidia.com>; Matt Ochs <mochs@nvidia.com>;
> Jason Gunthorpe <jgg@nvidia.com>; jonathan.cameron@huawei.com;
> zhangfei.gao@linaro.org; zhenzhong.duan@intel.com;
> Krishnakant Jaju <kjaju@nvidia.com>
> Subject: Re: [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV
> support for accelerated SMMUv3
>
> External email: Use caution opening links or attachments
>
>
> On 12/10/25 2:37 PM, Shameer Kolothum wrote:
> > Hi,
> >
> > This RFC series adds initial support for NVIDIA Tegra241 CMDQV
> > (Command Queue Virtualisation), an extension to ARM SMMUv3 that
> > provides hardware-accelerated virtual command queues (VCMDQs) for
> > guests. CMDQV allows guests to issue SMMU invalidation commands
> > directly to hardware without VM exits, significantly reducing TLBI
> > overhead.
> >
> > Thanks to Nicolin for the initial patches and testing on which this
> > RFC is based.
> >
> > This is based on v6[0] of the SMMUv3 accel series, which is still
> > under review, though nearing convergence. It is sent as an RFC to
> > gather early feedback on the CMDQV design and its integration with
> > the SMMUv3 acceleration path.
> >
> > Background:
> >
> > Tegra241 CMDQV extends SMMUv3 by allocating per-VM "virtual
> > interfaces" (VINTFs), each hosting up to 128 VCMDQs.
> Can you add a reference to some specification document please?
> >
> > Each VINTF exposes two 64KB MMIO pages:
> > - Page0 – guest-owned control and status registers (directly
> >   mapped into the VM)
> > - Page1 – queue configuration registers (trapped/emulated by QEMU)
> >
> > Unlike the standard SMMU CMDQ, a guest-owned Tegra241 VCMDQ does
> > not support the full command set. Only a subset, primarily
> > invalidation-related commands, is accepted by the CMDQV hardware.
> > For this reason, a distinct CMDQV device must be exposed to the
> > guest, and the guest OS must include a Tegra241 CMDQV-aware driver
> > to take advantage of the hardware acceleration.
> Do I understand correctly that this Tegra241 CMDQV-aware driver is
> enabled by CONFIG_TEGRA241_CMDQV on the guest? Is it fully supported
> upstream?

Yes. With CONFIG_TEGRA241_CMDQV enabled, it should work.
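To illustrate the Page0/Page1 split for anyone less familiar with the
QEMU memory API: conceptually, Page0 can be a ram_device region backed
by the host VINTF page, so guest accesses go straight to hardware,
while Page1 is a trapped MMIO region emulated by QEMU. The sketch
below is only illustrative and is not code from this series;
Tegra241CMDQVState, vintf_page0_hva and the page1 ops/handlers are
placeholder names:

#include "exec/memory.h"

/* Illustrative state struct; not the series' actual definition. */
typedef struct Tegra241CMDQVState {
    MemoryRegion container;
    MemoryRegion page0;
    MemoryRegion page1;
    void *vintf_page0_hva;   /* host mapping of the VINTF Page0 */
} Tegra241CMDQVState;

static uint64_t cmdqv_page1_read(void *opaque, hwaddr addr,
                                 unsigned size)
{
    /* Placeholder: return emulated queue-configuration state. */
    return 0;
}

static void cmdqv_page1_write(void *opaque, hwaddr addr,
                              uint64_t val, unsigned size)
{
    /* Placeholder: validate/forward queue-configuration writes. */
}

static const MemoryRegionOps tegra241_cmdqv_page1_ops = {
    .read = cmdqv_page1_read,
    .write = cmdqv_page1_write,
    .endianness = DEVICE_LITTLE_ENDIAN,
};

static void cmdqv_init_vintf_mmio(Tegra241CMDQVState *s, Object *owner)
{
    /* Page0: guest-owned control/status registers, direct-mapped. */
    memory_region_init_ram_device_ptr(&s->page0, owner,
                                      "tegra241-cmdqv-page0",
                                      0x10000, s->vintf_page0_hva);

    /* Page1: queue configuration registers, trapped and emulated. */
    memory_region_init_io(&s->page1, owner, &tegra241_cmdqv_page1_ops,
                          s, "tegra241-cmdqv-page1", 0x10000);

    memory_region_add_subregion(&s->container, 0x00000, &s->page0);
    memory_region_add_subregion(&s->container, 0x10000, &s->page1);
}

In the real device, the Page0 host mapping would presumably come from
the host CMDQV driver (via iommufd in this series' context), and the
Page1 handlers would validate the guest's queue configuration before
programming the hardware.

Thanks,
Shameer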