Hi,

I'm not supposed to work on this, but I couldn't sleep, so I kept
wondering about this problem the whole night and eventually woke up
to write this quickly, so comments are scarce, sorry.

The first part is obvious anyway: simply pass a MemTxAttrs argument.

The main patch is:
"exec/memattrs: Introduce MemTxAttrs::direct_access field".
This way we can restrict accesses to ROM/RAM by setting the
'direct_access' field. Illegal accesses return MEMTX_BUS_ERROR.

The next patch restricts PCI DMA accesses by setting the
direct_access field.

Finally we add an assertion for any DMA write access to indirect
memory, to kill a class of bug recently found by Alexander while
fuzzing.

Regards,

Phil.

Klaus Jensen (1):
  pci: pass along the return value of dma_memory_rw

Philippe Mathieu-Daudé (11):
  dma: Let dma_memory_valid() take MemTxAttrs argument
  dma: Let dma_memory_set() take MemTxAttrs argument
  dma: Let dma_memory_rw_relaxed() take MemTxAttrs argument
  dma: Let dma_memory_rw() take MemTxAttrs argument
  dma: Let dma_memory_read/write() take MemTxAttrs argument
  dma: Let dma_memory_map() take MemTxAttrs argument
  docs/devel/loads-stores: Add regexp for DMA functions
  dma: Let load/store DMA functions take MemTxAttrs argument
  exec/memattrs: Introduce MemTxAttrs::direct_access field
  hw/pci: Only allow PCI slave devices to write to direct memory
  dma: Assert when device writes to indirect memory (such MMIO regions)

 docs/devel/loads-stores.rst   |  2 ++
 include/exec/memattrs.h       |  3 ++
 include/hw/pci/pci.h          | 21 ++++++++++---
 include/hw/ppc/spapr_vio.h    | 26 +++++++++------
 include/sysemu/dma.h          | 59 +++++++++++++++++++++--------------
 dma-helpers.c                 | 12 ++++---
 exec.c                        |  8 +++++
 hw/arm/musicpal.c             | 13 ++++----
 hw/arm/smmu-common.c          |  3 +-
 hw/arm/smmuv3.c               | 14 ++++++---
 hw/core/generic-loader.c      |  3 +-
 hw/display/virtio-gpu.c       |  8 +++--
 hw/dma/pl330.c                | 12 ++++---
 hw/dma/sparc32_dma.c          | 16 ++++++----
 hw/dma/xlnx-zynq-devcfg.c     |  6 ++--
 hw/dma/xlnx_dpdma.c           | 10 +++---
 hw/hyperv/vmbus.c             |  8 +++--
 hw/i386/amd_iommu.c           | 16 +++++-----
 hw/i386/intel_iommu.c         | 28 ++++++++++-------
 hw/ide/ahci.c                 |  9 ++++--
 hw/ide/macio.c                |  2 +-
 hw/intc/pnv_xive.c            |  7 +++--
 hw/intc/spapr_xive.c          |  3 +-
 hw/intc/xive.c                |  7 +++--
 hw/misc/bcm2835_property.c    |  3 +-
 hw/misc/macio/mac_dbdma.c     | 10 +++---
 hw/net/allwinner-sun8i-emac.c | 21 ++++++++-----
 hw/net/ftgmac100.c            | 25 +++++++++------
 hw/net/imx_fec.c              | 32 ++++++++++++-------
 hw/nvram/fw_cfg.c             | 16 ++++++----
 hw/pci-host/pnv_phb3.c        |  5 +--
 hw/pci-host/pnv_phb3_msi.c    |  9 ++++--
 hw/pci-host/pnv_phb4.c        |  7 +++--
 hw/sd/allwinner-sdhost.c      | 14 +++++---
 hw/sd/sdhci.c                 | 35 +++++++++++++--------
 hw/usb/hcd-dwc2.c             |  8 ++---
 hw/usb/hcd-ehci.c             |  6 ++--
 hw/usb/hcd-ohci.c             | 28 ++++++++++-------
 hw/usb/libhw.c                |  3 +-
 hw/virtio/virtio.c            |  6 ++--
 trace-events                  |  1 +
 41 files changed, 334 insertions(+), 191 deletions(-)

-- 
2.26.2
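The core of the direct_access idea can be sketched outside QEMU in a few lines of C. Everything below, the trimmed-down MemTxAttrs, MemRegion and dma_memory_write_sketch(), is a hypothetical standalone model of what the series describes, not QEMU's actual types or code:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Simplified model: a transaction attribute marking DMA that may only
 * touch direct (RAM/ROM) memory; anything else is a bus error. */
typedef enum { MEMTX_OK = 0, MEMTX_BUS_ERROR = 2 } MemTxResult;

typedef struct {
    unsigned int direct_access : 1; /* restrict target to RAM/ROM */
} MemTxAttrs;

typedef struct {
    bool is_direct;   /* true for RAM/ROM, false for MMIO */
    uint8_t ram[16];  /* backing storage when is_direct */
} MemRegion;

/* A DMA write honouring the direct_access restriction. */
static MemTxResult dma_memory_write_sketch(MemRegion *mr, const void *buf,
                                           size_t len, MemTxAttrs attrs)
{
    if (attrs.direct_access && !mr->is_direct) {
        return MEMTX_BUS_ERROR; /* illegal DMA to indirect (MMIO) memory */
    }
    if (mr->is_direct && len <= sizeof(mr->ram)) {
        memcpy(mr->ram, buf, len);
    }
    /* MMIO dispatch elided in this sketch */
    return MEMTX_OK;
}
```

A PCI DMA helper would then only have to set `.direct_access = 1` in the attributes it passes down, which is essentially what the "restrict PCI DMA accesses" patch amounts to in this model.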
Hi Phil,

On 09/03/20 13:08, Philippe Mathieu-Daudé wrote:
> Hi,
>
> I'm not supposed to work on this, but I couldn't sleep, so I kept
> wondering about this problem the whole night and eventually woke up
> to write this quickly, so comments are scarce, sorry.
>
> The first part is obvious anyway: simply pass a MemTxAttrs argument.
>
> The main patch is:
> "exec/memattrs: Introduce MemTxAttrs::direct_access field".
> This way we can restrict accesses to ROM/RAM by setting the
> 'direct_access' field. Illegal accesses return MEMTX_BUS_ERROR.
>
> The next patch restricts PCI DMA accesses by setting the
> direct_access field.
>
> Finally we add an assertion for any DMA write access to indirect
> memory, to kill a class of bug recently found by Alexander while
> fuzzing.

I've briefly checked LP#1886362 and LP#1888606, and as much as I
understand them, they seem tricky.

It's not clear how we can recognize long chains of DMA-to-MMIO
transfers, and interrupt them. Peter mentions an approach at the end of
<https://bugs.launchpad.net/qemu/+bug/1886362/comments/5> that I believe
I understand, but -- according to him -- it seems too much work.

And, I'm not too familiar with the QEMU memory model, so I don't have
comments on your solution.

Maybe we can have some kind of "depth counter" for such recursive
DMA-to-MMIO calls (even across multiple device models), and put an
artificial limit on them, such as 5 or 10. This could be controlled on
the QEMU command line, perhaps? I don't think such chains work to
arbitrary depths on physical hardware either.

Sorry that I don't have any sensible comments here. I hope I didn't
misunderstand the problem at least.

Laszlo
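The depth-counter idea above is easy to prototype. The sketch below is a standalone, hypothetical model (none of these names exist in QEMU): every MMIO dispatch bumps a counter, and once a chain of nested DMA-to-MMIO calls exceeds a limit, the dispatch is refused instead of recursing further:

```c
/* Standalone sketch of a recursion limit on DMA-to-MMIO chains.
 * mmio_depth_limit could plausibly come from the command line. */
static int mmio_depth;
static const int mmio_depth_limit = 5;

typedef int (*MMIOHandler)(void);

static int mmio_dispatch(MMIOHandler handler)
{
    int ret;

    if (mmio_depth >= mmio_depth_limit) {
        return -1; /* chain too deep: refuse instead of recursing */
    }
    mmio_depth++;
    ret = handler();
    mmio_depth--;
    return ret;
}

/* A pathological device whose MMIO write handler DMAs back to itself. */
static int bounce_count;
static int bouncing_handler(void)
{
    bounce_count++;
    (void)mmio_dispatch(bouncing_handler); /* DMA-to-self, re-enters */
    return 0;
}
```

With the limit at 5, the self-DMA handler runs exactly five times and the sixth nested dispatch is refused, turning an unbounded recursion (and eventual stack overflow) into a bounded, detectable failure.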
On Thu, 3 Sep 2020 at 14:37, Laszlo Ersek <lersek@redhat.com> wrote:
> Peter mentions an approach at the end of
> <https://bugs.launchpad.net/qemu/+bug/1886362/comments/5> that I believe
> I understand, but -- according to him -- it seems too much work.

It also would only be effective for MMIO, not for qemu_irq lines...

> I don't think such chains work to arbitrary depths on physical
> hardware either.

Real hardware by and large doesn't get designed with this kind of
DMA-to-self as a consideration either, but unfortunately it's not
really very useful as a model to base QEMU's behaviour on:

(1) real hardware is usually massively parallel, so the logic that
handles incoming MMIO is decoupled anyway from the logic that does
outgoing DMA. (Arguably the "do all DMA in a bottom-half" idea is
kind of following the hardware design.) Similarly, simple "raise this
outbound signal" logic just works as an instantaneous action that
causes the device on the other end to change its state/do something
in parallel, whereas for QEMU we need to actually call some code in
the device on the other end, and so we serialize this stuff,
sandwiching a bit of "device B code" in the middle of a run of
"device A code". So a lot more of this stuff "just happens to work"
on h/w than we get with QEMU.

(2) if software running on real h/w does do something silly with
programming a device to DMA to itself, then the worst case is
generally that they manage to wedge that device (or the whole
machine, if you're really unlucky), in which case the response is
"don't do that then". There isn't the same "guest code can escape
the VM" security boundary that QEMU needs to guard against [*].

[*] I do wonder about hardware-device-passthrough setups; I don't
think I would care to pass through an arbitrary device to an
untrusted guest...

thanks
-- PMM
On Thu, Sep 03, 2020 at 02:58:19PM +0100, Peter Maydell wrote:
> On Thu, 3 Sep 2020 at 14:37, Laszlo Ersek <lersek@redhat.com> wrote:
> > Peter mentions an approach at the end of
> > <https://bugs.launchpad.net/qemu/+bug/1886362/comments/5> that I believe
> > I understand, but -- according to him -- it seems too much work.
>
> It also would only be effective for MMIO, not for qemu_irq lines...
>
> > I don't think such chains work to arbitrary depths on physical
> > hardware either.
>
> Real hardware by and large doesn't get designed with this kind of
> DMA-to-self as a consideration either, but unfortunately it's not
> really very useful as a model to base QEMU's behaviour on:
>
> (1) real hardware is usually massively parallel, so the logic that
> handles incoming MMIO is decoupled anyway from the logic that does
> outgoing DMA. (Arguably the "do all DMA in a bottom-half" idea is
> kind of following the hardware design.) Similarly, simple "raise this
> outbound signal" logic just works as an instantaneous action that
> causes the device on the other end to change its state/do something
> in parallel, whereas for QEMU we need to actually call some code in
> the device on the other end, and so we serialize this stuff,
> sandwiching a bit of "device B code" in the middle of a run of
> "device A code". So a lot more of this stuff "just happens to work"
> on h/w than we get with QEMU.
>
> (2) if software running on real h/w does do something silly with
> programming a device to DMA to itself, then the worst case is
> generally that they manage to wedge that device (or the whole
> machine, if you're really unlucky), in which case the response is
> "don't do that then". There isn't the same "guest code can escape
> the VM" security boundary that QEMU needs to guard against [*].
>
> [*] I do wonder about hardware-device-passthrough setups; I don't
> think I would care to pass through an arbitrary device to an
> untrusted guest...

Hmm, I guess it would make sense to have a configurable option in KVM
to isolate passthrough devices so they only can DMA to guest RAM...

Cheers,
Edgar
On 03/09/20 16:24, Edgar E. Iglesias wrote:
>> [*] I do wonder about hardware-device-passthrough setups; I don't
>> think I would care to pass through an arbitrary device to an
>> untrusted guest...
>
> Hmm, I guess it would make sense to have a configurable option in KVM
> to isolate passthrough devices so they only can DMA to guest RAM...

Passthrough devices are always protected by the IOMMU, anything else
would be obviously insane^H^H^Hecure. :)

Paolo
On Thu, Sep 03, 2020 at 05:46:39PM +0200, Paolo Bonzini wrote:
> On 03/09/20 16:24, Edgar E. Iglesias wrote:
> >> [*] I do wonder about hardware-device-passthrough setups; I don't
> >> think I would care to pass through an arbitrary device to an
> >> untrusted guest...
> > Hmm, I guess it would make sense to have a configurable option in KVM
> > to isolate passthrough devices so they only can DMA to guest RAM...
>
> Passthrough devices are always protected by the IOMMU, anything else
> would be obviously insane^H^H^Hecure. :)

Really? To always do that blindly seems wrong.

I'm referring to the passthrough device not being able to reach
registers of other passthrough devices within the same guest.

Obviously the IOMMU should be set up so that passthrough devices don't
reach other guests or the host.

Cheers,
Edgar
On 03/09/20 17:50, Edgar E. Iglesias wrote:
>>> Hmm, I guess it would make sense to have a configurable option in KVM
>>> to isolate passthrough devices so they only can DMA to guest RAM...
>>
>> Passthrough devices are always protected by the IOMMU, anything else
>> would be obviously insane^H^H^Hecure. :)
>
> Really? To always do that blindly seems wrong.
>
> I'm referring to the passthrough device not being able to reach
> registers of other passthrough devices within the same guest.

Ah okay; sorry, I misunderstood. That makes more sense now!

Multiple devices are put in the same IOMMU "container" (page table,
basically), and that takes care of reaching registers of other
passthrough devices.

Paolo

> Obviously the IOMMU should be set up so that passthrough devices don't
> reach other guests or the host.
On Thu, Sep 03, 2020 at 07:53:33PM +0200, Paolo Bonzini wrote:
> On 03/09/20 17:50, Edgar E. Iglesias wrote:
> >>> Hmm, I guess it would make sense to have a configurable option in KVM
> >>> to isolate passthrough devices so they only can DMA to guest RAM...
> >>
> >> Passthrough devices are always protected by the IOMMU, anything else
> >> would be obviously insane^H^H^Hecure. :)
> >
> > Really? To always do that blindly seems wrong.
> >
> > I'm referring to the passthrough device not being able to reach
> > registers of other passthrough devices within the same guest.
>
> Ah okay; sorry, I misunderstood. That makes more sense now!
>
> Multiple devices are put in the same IOMMU "container" (page table,
> basically), and that takes care of reaching registers of other
> passthrough devices.

Thanks, yes, that's a sane default. What I was trying to say before is
that it may make sense to allow the user to "harden" the setup by
selectively putting certain passthrough devs in a separate group that
can *only* DMA access guest RAM (not other device regs).

Some devs need access to other devices' regs, but many passthrough devs
don't need DMA access to anything else but RAM (e.g. an Ethernet MAC).

That could mitigate the damage caused by wild DMA pointers...

Cheers,
Edgar
On 2020/9/4 3:46 AM, Edgar E. Iglesias wrote:
> On Thu, Sep 03, 2020 at 07:53:33PM +0200, Paolo Bonzini wrote:
>> On 03/09/20 17:50, Edgar E. Iglesias wrote:
>>>>> Hmm, I guess it would make sense to have a configurable option in KVM
>>>>> to isolate passthrough devices so they only can DMA to guest RAM...
>>>> Passthrough devices are always protected by the IOMMU, anything else
>>>> would be obviously insane^H^H^Hecure. :)
>>> Really? To always do that blindly seems wrong.
>>>
>>> I'm referring to the passthrough device not being able to reach
>>> registers of other passthrough devices within the same guest.
>> Ah okay; sorry, I misunderstood. That makes more sense now!
>>
>> Multiple devices are put in the same IOMMU "container" (page table,
>> basically), and that takes care of reaching registers of other
>> passthrough devices.
> Thanks, yes, that's a sane default. What I was trying to say before is
> that it may make sense to allow the user to "harden" the setup by
> selectively putting certain passthrough devs in a separate group that
> can *only* DMA access guest RAM (not other device regs).

This makes sense, but it requires knowledge from the management layer of
whether P2P is needed, which is probably not easy.

Thanks

> Some devs need access to other devices' regs, but many passthrough devs
> don't need DMA access to anything else but RAM (e.g. an Ethernet MAC).
>
> That could mitigate the damage caused by wild DMA pointers...
>
> Cheers,
> Edgar
On Thu, Sep 03, 2020 at 01:08:19PM +0200, Philippe Mathieu-Daudé wrote:
> The main patch is:
> "exec/memattrs: Introduce MemTxAttrs::direct_access field".
> This way we can restrict accesses to ROM/RAM by setting the
> 'direct_access' field. Illegal accesses return MEMTX_BUS_ERROR.

QEMU needs to simulate the behavior of real hardware. What is the
behavior of real hardware?

Stefan
On Wed, 9 Sep 2020 at 10:12, Stefan Hajnoczi <stefanha@gmail.com> wrote:
>
> On Thu, Sep 03, 2020 at 01:08:19PM +0200, Philippe Mathieu-Daudé wrote:
> > The main patch is:
> > "exec/memattrs: Introduce MemTxAttrs::direct_access field".
> > This way we can restrict accesses to ROM/RAM by setting the
> > 'direct_access' field. Illegal accesses return MEMTX_BUS_ERROR.
>
> QEMU needs to simulate the behavior of real hardware. What is the
> behavior of real hardware?

It varies, depending on the hardware. The most common thing is probably
"happens to work by luck", which is OK for hardware but doesn't help us
much, since our software implementation is naturally more serialized
than hardware is and since we don't want to allow guests to make QEMU
crash or misbehave.

thanks
-- PMM
On Wed, Sep 9, 2020 at 2:23 PM Peter Maydell <peter.maydell@linaro.org> wrote:
> On Wed, 9 Sep 2020 at 10:12, Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > On Thu, Sep 03, 2020 at 01:08:19PM +0200, Philippe Mathieu-Daudé wrote:
> > > The main patch is:
> > > "exec/memattrs: Introduce MemTxAttrs::direct_access field".
> > > This way we can restrict accesses to ROM/RAM by setting the
> > > 'direct_access' field. Illegal accesses return MEMTX_BUS_ERROR.
> >
> > QEMU needs to simulate the behavior of real hardware. What is the
> > behavior of real hardware?
>
> It varies, depending on the hardware. The most common thing
> is probably "happens to work by luck", which is OK for hardware
> but doesn't help us much since our software implementation is
> naturally more serialized than hardware is and since we don't
> want to allow guests to make QEMU crash or misbehave.

The memory API bounce buffer mechanism is evidence that some board(s)
need or needed it. At a minimum we need to find out the reason for the
bounce buffer mechanism to avoid breaking guests.

Stefan
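For readers unfamiliar with the mechanism Stefan refers to: when a caller asks to map a region that is not plain RAM, the memory API cannot hand out a direct pointer, so it reads the data through the device into a temporary "bounce buffer" and returns that instead. The standalone sketch below models just that fallback; the names and structure are illustrative, not QEMU's actual implementation:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative model of a map() call with a bounce-buffer fallback. */
typedef struct {
    bool is_ram;
    uint8_t ram[8];                   /* backing storage when is_ram */
    uint8_t (*mmio_read)(size_t off); /* device callback otherwise */
} Region;

static uint8_t bounce[8];
static bool bounce_in_use;

/* Return a plain pointer the caller can use either way: direct for
 * RAM, a filled bounce buffer for MMIO. */
static uint8_t *map_sketch(Region *r, size_t len)
{
    if (r->is_ram) {
        return r->ram;                /* zero-copy fast path */
    }
    if (bounce_in_use || len > sizeof(bounce)) {
        return NULL;                  /* only one bounce buffer exists */
    }
    bounce_in_use = true;
    for (size_t i = 0; i < len; i++) {
        bounce[i] = r->mmio_read(i);  /* go through the device */
    }
    return bounce;
}

static void unmap_sketch(Region *r, uint8_t *p)
{
    (void)r;
    if (p == bounce) {
        bounce_in_use = false;        /* write-back elided in this sketch */
    }
}

/* A toy MMIO device for exercising the fallback path. */
static uint8_t id_read(size_t off)
{
    return (uint8_t)(0xA0 + off);
}
```

The single shared bounce buffer mirrors the fact that such a fallback is a scarce resource: a second concurrent mapping of indirect memory simply fails, which is one reason boards relying on it are fragile.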
Philippe Mathieu-Daudé <philmd@redhat.com> wrote on Thu, Sep 3, 2020 at 7:09 PM:
>
> Hi,
>
> I'm not supposed to work on this, but I couldn't sleep, so I kept
> wondering about this problem the whole night and eventually woke up
> to write this quickly, so comments are scarce, sorry.
>
> The first part is obvious anyway: simply pass a MemTxAttrs argument.
>
> The main patch is:
> "exec/memattrs: Introduce MemTxAttrs::direct_access field".
> This way we can restrict accesses to ROM/RAM by setting the
> 'direct_access' field. Illegal accesses return MEMTX_BUS_ERROR.
>
> The next patch restricts PCI DMA accesses by setting the
> direct_access field.
>
> Finally we add an assertion for any DMA write access to indirect
> memory, to kill a class of bug recently found by Alexander while
> fuzzing.
>

Hi Philippe,

I have reviewed your patches. Your patches just deny DMA writes to MMIO
for PCI devices.

1. DMA writes to MMIO are allowed for P2P, so denying them
unconditionally is not right, I think. Maybe we can add a flag to the
device as a property, so the device can indicate whether it supports
DMA to MMIO. But for this method we need to decide whether to apply the
restriction to the initiator or the target of the DMA-to-MMIO access.
If the target, we need to find the target PCI device.

2. I think MMIO reads may also suffer from the reentrancy issue, if the
MMIO read handler does complicated work.

3. Your patches just consider the PCI case. This reentrancy is quite
complicated if we consider the non-PCI and qemu_irq cases. I agree with
addressing the PCI case first.

Thanks,
Li Qiang

> Regards,
>
> Phil.
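Li Qiang's first point, a per-device opt-in flag for DMA-to-MMIO, can be sketched on top of the direct_access idea. Everything here (DeviceSketch, dma_attrs_for, the allow_dma_to_mmio property) is hypothetical standalone code, not an existing QEMU API:

```c
#include <stdbool.h>

/* Hypothetical attribute and device models for the sketch. */
typedef struct {
    unsigned int direct_access : 1; /* 1: may only target RAM/ROM */
} MemTxAttrs;

typedef struct {
    const char *name;
    bool allow_dma_to_mmio; /* imagined qdev-style property, default off */
} DeviceSketch;

/* Attributes a DMA initiator would pass to the dma_memory_*() helpers:
 * only devices that opted in (e.g. for P2P) may reach indirect memory. */
static MemTxAttrs dma_attrs_for(const DeviceSketch *dev)
{
    MemTxAttrs attrs = { .direct_access = !dev->allow_dma_to_mmio };
    return attrs;
}
```

This places the restriction on the initiator side, which sidesteps the harder problem Li Qiang notes of identifying the target PCI device of a DMA-to-MMIO access.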