[RFC 0/3] try to solve the DMA to MMIO issue
Posted by Li Qiang 5 years, 2 months ago
The QEMU device fuzzer has found several DMA-to-MMIO issues.
These issues are caused by the guest driver programming a DMA
address that points at MMIO: the device's MMIO handler triggers
the DMA, and because the DMA address is MMIO this triggers
another dispatch and re-enters an MMIO handler. However, most
devices are not reentrant.

What a DMA-to-MMIO access causes depends on the device
emulation; mostly it crashes QEMU. The following are three
classic DMA-to-MMIO issues.

e1000e: https://bugs.launchpad.net/qemu/+bug/1886362
xhci: https://bugs.launchpad.net/qemu/+bug/1891354
virtio-gpu: https://bugs.launchpad.net/qemu/+bug/1888606

I think the DMA-to-MMIO issues can be classified as follows:
1. a device DMAs to its own MMIO region
2. device A DMAs to device B, which then accesses device C
3. device A DMAs to device B, which then accesses device A

The first case of course should not be allowed.
The second case I think is OK, since a device's IO handler makes
no assumption about where the IO data comes from, whether from
one device or another; this is what P2P DMA relies on.
The third case I think should also not be allowed.

So the issue reduces to one rule: do not allow a device's IO
handler to re-enter itself.

Paolo suggested that we could refactor the device emulation
with BHs; however, that is a lot of work.
I have considered several proposals to address this, and have
also discussed it with Jason Wang in private email.

The issue can be solved either in the core framework or in each
specific device. After trying several methods I chose to address
it per device, for the following reasons:
1. If we address it in the core framework we have to record and
check the device or MR info in the MR dispatch write path.
Unfortunately we do not have this info in the core framework.
2. Performance would also degrade significantly.
3. Only the device itself knows its IO.

Most device emulation is protected by the BQL, so only one piece
of a device's emulation code can run at a time. We can therefore
add a flag to indicate that IO is in progress. The first two
patches do this. For simplicity, at the RFC stage I just set the
flag on entering the IO callback and clear it on leaving it; it
should really be checked/set/cleared according to each device's
IO emulation.
The second issue additionally suffers from a race condition, so
for it I use an atomic.
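
For illustration only, a minimal sketch of such a per-device guard in an
MMIO write handler; the struct, field and function names below are
placeholders, not the contents of the actual patches:

/* Sketch: drop a DMA-triggered re-entry instead of recursing.
 * MyDevState, io_in_progress and mydev_* are hypothetical names. */
typedef struct MyDevState {
    bool io_in_progress;
    /* ... real device state ... */
} MyDevState;

static void mydev_mmio_write(void *opaque, hwaddr addr,
                             uint64_t val, unsigned size)
{
    MyDevState *s = opaque;

    if (s->io_in_progress) {
        /* Re-entered via DMA to our own MMIO region: ignore the access. */
        return;
    }
    s->io_in_progress = true;
    mydev_handle_write(s, addr, val, size);   /* may start DMA */
    s->io_in_progress = false;
}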




Li Qiang (3):
  e1000e: make the IO handler reentrant
  xhci: make the IO handler reentrant
  virtio-gpu: make the IO handler reentrant

 hw/display/virtio-gpu.c        | 10 ++++++
 hw/net/e1000e.c                | 35 +++++++++++++++++++-
 hw/usb/hcd-xhci.c              | 60 ++++++++++++++++++++++++++++++++++
 hw/usb/hcd-xhci.h              |  1 +
 include/hw/virtio/virtio-gpu.h |  1 +
 5 files changed, 106 insertions(+), 1 deletion(-)

-- 
2.17.1


Re: [RFC 0/3] try to solve the DMA to MMIO issue
Posted by Jason Wang 5 years, 2 months ago
On 2020/9/3 12:22 AM, Li Qiang wrote:
> The qemu device fuzzer has found several DMA to MMIO issue.
> These issues is caused by the guest driver programs the DMA
> address, then in the device MMIO handler it trigger the DMA
> and as the DMA address is MMIO it will trigger another dispatch
> and reenter the MMIO handler again. However most of the device
> is not reentrant.
>
> DMA to MMIO will cause issues depend by the device emulator,
> mostly it will crash the qemu. Following is three classic
> DMA to MMIO issue.
>
> e1000e: https://bugs.launchpad.net/qemu/+bug/1886362
> xhci: https://bugs.launchpad.net/qemu/+bug/1891354
> virtio-gpu: https://bugs.launchpad.net/qemu/+bug/1888606
>
> The DMA to MMIO issue I think can be classified as following:
> 1. DMA to the device itself
> 2. device A DMA to device B and to device C
> 3. device A DMA to device B and to device A
>
> The first case of course should not be allowed.
> The second case I think it ok as the device IO handler has no
> assumption about the IO data came from no matter it come from
> device or other device. This is for P2P DMA.
> The third case I think it also should not be allowed.
>
> So our issue has been reduced by one case: not allowed the
> device's IO handler reenter.
>
> Paolo suggested that we can refactor the device emulation with
> BH. However it is a lot of work.
> I have thought several propose to address this, also discuss
> this with Jason Wang in private email.
>
> I have can solve this issue in core framework or in specific device.
> After try several methods I choose address it in per-device for
> following reason:
> 1. If we address it in core framwork we have to recored and check the
> device or MR info in MR dispatch write function. Unfortunally we have
> no these info in core framework.
> 2. The performance will also be decrease largely
> 3. Only the device itself know its IO


I think we still need to seek a way to address this issue completely.

How about adding a flag in MemoryRegionOps and detecting the reentrancy
through that flag?
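
For illustration, roughly what that might look like; none of these fields
or helpers exist in QEMU, they only sketch the proposal:

#include <stdbool.h>

/* Hypothetical sketch of the MemoryRegionOps flag idea. */
typedef struct SketchMemoryRegionOps {
    /* ... read/write callbacks as in the real MemoryRegionOps ... */
    bool not_reentrant;   /* device declares its handlers non-reentrant */
} SketchMemoryRegionOps;

typedef struct SketchMemoryRegion {
    const SketchMemoryRegionOps *ops;
    bool in_dispatch;     /* set while a handler for this MR is running */
} SketchMemoryRegion;

/* Checked in the dispatch path before invoking the write callback. */
static bool sketch_dispatch_allowed(SketchMemoryRegion *mr)
{
    if (mr->ops->not_reentrant && mr->in_dispatch) {
        return false;     /* DMA-to-MMIO re-entry: refuse the access */
    }
    return true;
}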

Thanks



Re: [RFC 0/3] try to solve the DMA to MMIO issue
Posted by Alexander Bulekov 5 years, 2 months ago
On 200903 1154, Jason Wang wrote:
> 
> On 2020/9/3 上午12:22, Li Qiang wrote:
> > The qemu device fuzzer has found several DMA to MMIO issue.
> > These issues is caused by the guest driver programs the DMA
> > address, then in the device MMIO handler it trigger the DMA
> > and as the DMA address is MMIO it will trigger another dispatch
> > and reenter the MMIO handler again. However most of the device
> > is not reentrant.
> > 
> > DMA to MMIO will cause issues depend by the device emulator,
> > mostly it will crash the qemu. Following is three classic
> > DMA to MMIO issue.
> > 
> > e1000e: https://bugs.launchpad.net/qemu/+bug/1886362
> > xhci: https://bugs.launchpad.net/qemu/+bug/1891354
> > virtio-gpu: https://bugs.launchpad.net/qemu/+bug/1888606
> > 
> > The DMA to MMIO issue I think can be classified as following:
> > 1. DMA to the device itself
> > 2. device A DMA to device B and to device C
> > 3. device A DMA to device B and to device A
> > 
> > The first case of course should not be allowed.
> > The second case I think it ok as the device IO handler has no
> > assumption about the IO data came from no matter it come from
> > device or other device. This is for P2P DMA.
> > The third case I think it also should not be allowed.
> > 
> > So our issue has been reduced by one case: not allowed the
> > device's IO handler reenter.
> > 
> > Paolo suggested that we can refactor the device emulation with
> > BH. However it is a lot of work.
> > I have thought several propose to address this, also discuss
> > this with Jason Wang in private email.
> > 
> > I have can solve this issue in core framework or in specific device.
> > After try several methods I choose address it in per-device for
> > following reason:
> > 1. If we address it in core framwork we have to recored and check the
> > device or MR info in MR dispatch write function. Unfortunally we have
> > no these info in core framework.
> > 2. The performance will also be decrease largely
> > 3. Only the device itself know its IO
> 
> 
> I think we still need to seek a way to address this issue completely.
> 
> How about adding a flag in MemoryRegionOps and detect the reentrancy through
> that flag?

What happens for devices with multiple MemoryRegions? Make all the
MemoryRegionOps share the same flag?

What about the virtio-gpu bug, where the problem happens in a bh->mmio
access rather than an mmio->mmio access?

-Alex


Re: [RFC 0/3] try to solve the DMA to MMIO issue
Posted by Jason Wang 5 years, 2 months ago
On 2020/9/3 12:06 PM, Alexander Bulekov wrote:
> On 200903 1154, Jason Wang wrote:
>> On 2020/9/3 上午12:22, Li Qiang wrote:
>>> The qemu device fuzzer has found several DMA to MMIO issue.
>>> These issues is caused by the guest driver programs the DMA
>>> address, then in the device MMIO handler it trigger the DMA
>>> and as the DMA address is MMIO it will trigger another dispatch
>>> and reenter the MMIO handler again. However most of the device
>>> is not reentrant.
>>>
>>> DMA to MMIO will cause issues depend by the device emulator,
>>> mostly it will crash the qemu. Following is three classic
>>> DMA to MMIO issue.
>>>
>>> e1000e: https://bugs.launchpad.net/qemu/+bug/1886362
>>> xhci: https://bugs.launchpad.net/qemu/+bug/1891354
>>> virtio-gpu: https://bugs.launchpad.net/qemu/+bug/1888606
>>>
>>> The DMA to MMIO issue I think can be classified as following:
>>> 1. DMA to the device itself
>>> 2. device A DMA to device B and to device C
>>> 3. device A DMA to device B and to device A
>>>
>>> The first case of course should not be allowed.
>>> The second case I think it ok as the device IO handler has no
>>> assumption about the IO data came from no matter it come from
>>> device or other device. This is for P2P DMA.
>>> The third case I think it also should not be allowed.
>>>
>>> So our issue has been reduced by one case: not allowed the
>>> device's IO handler reenter.
>>>
>>> Paolo suggested that we can refactor the device emulation with
>>> BH. However it is a lot of work.
>>> I have thought several propose to address this, also discuss
>>> this with Jason Wang in private email.
>>>
>>> I have can solve this issue in core framework or in specific device.
>>> After try several methods I choose address it in per-device for
>>> following reason:
>>> 1. If we address it in core framwork we have to recored and check the
>>> device or MR info in MR dispatch write function. Unfortunally we have
>>> no these info in core framework.
>>> 2. The performance will also be decrease largely
>>> 3. Only the device itself know its IO
>>
>> I think we still need to seek a way to address this issue completely.
>>
>> How about adding a flag in MemoryRegionOps and detect the reentrancy through
>> that flag?
> What happens for devices with multiple MemoryRegions? Make all the
> MemoryRegionOps share the same flag?


I think there could be two approaches:

1) record the device in the MR, as Qiang mentioned
2) only forbid reentrancy in the MMIO handler and depend on the device
to solve the multiple-MemoryRegion issue; if the regions want to access
the same data, the device needs to synchronize that internally

But the point is still to try to solve this in the memory region
layer. Otherwise we may still hit similar issues.


>
> What about the virtio-gpu bug, where the problem happens in a bh->mmio
> access rather than an mmio->mmio access?


Yes, it needs more thought, but as a first step we can try to fix the
MMIO handler issue and do the BH fix on top.

Thanks




Re: [RFC 0/3] try to solve the DMA to MMIO issue
Posted by Li Qiang 5 years, 2 months ago
Jason Wang <jasowang@redhat.com> wrote on Thu, 3 Sep 2020 at 12:24 PM:
>
>
> On 2020/9/3 下午12:06, Alexander Bulekov wrote:
> > On 200903 1154, Jason Wang wrote:
> >> On 2020/9/3 上午12:22, Li Qiang wrote:
> >>> The qemu device fuzzer has found several DMA to MMIO issue.
> >>> These issues is caused by the guest driver programs the DMA
> >>> address, then in the device MMIO handler it trigger the DMA
> >>> and as the DMA address is MMIO it will trigger another dispatch
> >>> and reenter the MMIO handler again. However most of the device
> >>> is not reentrant.
> >>>
> >>> DMA to MMIO will cause issues depend by the device emulator,
> >>> mostly it will crash the qemu. Following is three classic
> >>> DMA to MMIO issue.
> >>>
> >>> e1000e: https://bugs.launchpad.net/qemu/+bug/1886362
> >>> xhci: https://bugs.launchpad.net/qemu/+bug/1891354
> >>> virtio-gpu: https://bugs.launchpad.net/qemu/+bug/1888606
> >>>
> >>> The DMA to MMIO issue I think can be classified as following:
> >>> 1. DMA to the device itself
> >>> 2. device A DMA to device B and to device C
> >>> 3. device A DMA to device B and to device A
> >>>
> >>> The first case of course should not be allowed.
> >>> The second case I think it ok as the device IO handler has no
> >>> assumption about the IO data came from no matter it come from
> >>> device or other device. This is for P2P DMA.
> >>> The third case I think it also should not be allowed.
> >>>
> >>> So our issue has been reduced by one case: not allowed the
> >>> device's IO handler reenter.
> >>>
> >>> Paolo suggested that we can refactor the device emulation with
> >>> BH. However it is a lot of work.
> >>> I have thought several propose to address this, also discuss
> >>> this with Jason Wang in private email.
> >>>
> >>> I have can solve this issue in core framework or in specific device.
> >>> After try several methods I choose address it in per-device for
> >>> following reason:
> >>> 1. If we address it in core framwork we have to recored and check the
> >>> device or MR info in MR dispatch write function. Unfortunally we have
> >>> no these info in core framework.
> >>> 2. The performance will also be decrease largely
> >>> 3. Only the device itself know its IO
> >>
> >> I think we still need to seek a way to address this issue completely.
> >>
> >> How about adding a flag in MemoryRegionOps and detect the reentrancy through
> >> that flag?
> > What happens for devices with multiple MemoryRegions? Make all the
> > MemoryRegionOps share the same flag?
>
>
> I think there could be two approaches:
>
> 1) record the device in MR as Qiang mentioned

I have tried this, as we discussed, but have the following concerns:
1. Performance: we need to check/record/clean the MR in an array/hashtable.

2. Handling multiple MRs and alias MRs in the memory layer: it is
complicated and costs performance.
So if we leave the MR issue to the device itself, it is just what this
patch does: let the device address the reentrancy issue.

Another solution: connect an MR with its corresponding device. Today a
device is usually only tied to its MRs through the 'opaque' field,
which is just passed to the MR callbacks. We could instead add a flag
in the device, which requires modifying the MR registration interface.

So in the memory layer we can check/record/clean MR->device->flag.
But this can't address the DMA-(in BH)-to-MMIO issue, as the BH runs
in the main thread.

Thanks,
Li Qiang




Re: [RFC 0/3] try to solve the DMA to MMIO issue
Posted by Jason Wang 5 years, 2 months ago
On 2020/9/3 12:50 PM, Li Qiang wrote:
> Jason Wang <jasowang@redhat.com> 于2020年9月3日周四 下午12:24写道:
>>
>> On 2020/9/3 下午12:06, Alexander Bulekov wrote:
>>> On 200903 1154, Jason Wang wrote:
>>>> On 2020/9/3 上午12:22, Li Qiang wrote:
>>>>> The qemu device fuzzer has found several DMA to MMIO issue.
>>>>> These issues is caused by the guest driver programs the DMA
>>>>> address, then in the device MMIO handler it trigger the DMA
>>>>> and as the DMA address is MMIO it will trigger another dispatch
>>>>> and reenter the MMIO handler again. However most of the device
>>>>> is not reentrant.
>>>>>
>>>>> DMA to MMIO will cause issues depend by the device emulator,
>>>>> mostly it will crash the qemu. Following is three classic
>>>>> DMA to MMIO issue.
>>>>>
>>>>> e1000e: https://bugs.launchpad.net/qemu/+bug/1886362
>>>>> xhci: https://bugs.launchpad.net/qemu/+bug/1891354
>>>>> virtio-gpu: https://bugs.launchpad.net/qemu/+bug/1888606
>>>>>
>>>>> The DMA to MMIO issue I think can be classified as following:
>>>>> 1. DMA to the device itself
>>>>> 2. device A DMA to device B and to device C
>>>>> 3. device A DMA to device B and to device A
>>>>>
>>>>> The first case of course should not be allowed.
>>>>> The second case I think it ok as the device IO handler has no
>>>>> assumption about the IO data came from no matter it come from
>>>>> device or other device. This is for P2P DMA.
>>>>> The third case I think it also should not be allowed.
>>>>>
>>>>> So our issue has been reduced by one case: not allowed the
>>>>> device's IO handler reenter.
>>>>>
>>>>> Paolo suggested that we can refactor the device emulation with
>>>>> BH. However it is a lot of work.
>>>>> I have thought several propose to address this, also discuss
>>>>> this with Jason Wang in private email.
>>>>>
>>>>> I have can solve this issue in core framework or in specific device.
>>>>> After try several methods I choose address it in per-device for
>>>>> following reason:
>>>>> 1. If we address it in core framwork we have to recored and check the
>>>>> device or MR info in MR dispatch write function. Unfortunally we have
>>>>> no these info in core framework.
>>>>> 2. The performance will also be decrease largely
>>>>> 3. Only the device itself know its IO
>>>> I think we still need to seek a way to address this issue completely.
>>>>
>>>> How about adding a flag in MemoryRegionOps and detect the reentrancy through
>>>> that flag?
>>> What happens for devices with multiple MemoryRegions? Make all the
>>> MemoryRegionOps share the same flag?
>>
>> I think there could be two approaches:
>>
>> 1) record the device in MR as Qiang mentioned
> I have tried this as we discussed. But has following consideration:
> 1. The performance, we need to check/record/clean the MR in an array/hashtable.
>
> 2. The multiple MR and alias MR process in the memory layer. It is
> complicated and performance effective.
> So If we let the MR issue to the device itself, it is just as this
> patch does-let the device address the reentrancy issue.f
>
> Another solution. We connects a MR with the corresponding device. Now
> the device often tight MR with an 'opaque' field.
> Just uses it in the calling of MR callback. Then we add a flag in the
> device and needs to modify the MR register interface.
>
> So in the memory layer we can check/record/clean the MR->device->flag.
> But this is can't address the DMA (in BH) to MMIO issue as the BH runs
> in main thread.


This is probably good enough to start with. From my point of view, there
are two different issues:

1) a re-entrant MMIO handler
2) an MMIO handler that needs to sync with a BH

For 1), we'd better solve it in the core; for 2), it can only be solved
in the device.
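
For the second case, a guard inside the device, shared by the MMIO path
and the BH, could look roughly like this (illustrative only, using plain
C11 atomics; the names are placeholders and this is not what the actual
patches do):

#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical per-device guard shared by the MMIO handler and the BH. */
typedef struct SketchDev {
    atomic_bool busy;
} SketchDev;

/* Returns true if we won the guard and may run; the caller must call
 * sketch_io_end() when done.  A nested or racing entry simply bails out. */
static bool sketch_io_begin(SketchDev *s)
{
    bool expected = false;
    return atomic_compare_exchange_strong(&s->busy, &expected, true);
}

static void sketch_io_end(SketchDev *s)
{
    atomic_store(&s->busy, false);
}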

Thanks




Re: [RFC 0/3] try to solve the DMA to MMIO issue
Posted by Li Qiang 5 years, 2 months ago
Jason Wang <jasowang@redhat.com> wrote on Thu, 3 Sep 2020 at 2:16 PM:
>
>
> On 2020/9/3 下午12:50, Li Qiang wrote:
> > Jason Wang <jasowang@redhat.com> 于2020年9月3日周四 下午12:24写道:
> >>
> >> On 2020/9/3 下午12:06, Alexander Bulekov wrote:
> >>> On 200903 1154, Jason Wang wrote:
> >>>> On 2020/9/3 上午12:22, Li Qiang wrote:
> >>>>> The qemu device fuzzer has found several DMA to MMIO issue.
> >>>>> These issues is caused by the guest driver programs the DMA
> >>>>> address, then in the device MMIO handler it trigger the DMA
> >>>>> and as the DMA address is MMIO it will trigger another dispatch
> >>>>> and reenter the MMIO handler again. However most of the device
> >>>>> is not reentrant.
> >>>>>
> >>>>> DMA to MMIO will cause issues depend by the device emulator,
> >>>>> mostly it will crash the qemu. Following is three classic
> >>>>> DMA to MMIO issue.
> >>>>>
> >>>>> e1000e: https://bugs.launchpad.net/qemu/+bug/1886362
> >>>>> xhci: https://bugs.launchpad.net/qemu/+bug/1891354
> >>>>> virtio-gpu: https://bugs.launchpad.net/qemu/+bug/1888606
> >>>>>
> >>>>> The DMA to MMIO issue I think can be classified as following:
> >>>>> 1. DMA to the device itself
> >>>>> 2. device A DMA to device B and to device C
> >>>>> 3. device A DMA to device B and to device A
> >>>>>
> >>>>> The first case of course should not be allowed.
> >>>>> The second case I think it ok as the device IO handler has no
> >>>>> assumption about the IO data came from no matter it come from
> >>>>> device or other device. This is for P2P DMA.
> >>>>> The third case I think it also should not be allowed.
> >>>>>
> >>>>> So our issue has been reduced by one case: not allowed the
> >>>>> device's IO handler reenter.
> >>>>>
> >>>>> Paolo suggested that we can refactor the device emulation with
> >>>>> BH. However it is a lot of work.
> >>>>> I have thought several propose to address this, also discuss
> >>>>> this with Jason Wang in private email.
> >>>>>
> >>>>> I have can solve this issue in core framework or in specific device.
> >>>>> After try several methods I choose address it in per-device for
> >>>>> following reason:
> >>>>> 1. If we address it in core framwork we have to recored and check the
> >>>>> device or MR info in MR dispatch write function. Unfortunally we have
> >>>>> no these info in core framework.
> >>>>> 2. The performance will also be decrease largely
> >>>>> 3. Only the device itself know its IO
> >>>> I think we still need to seek a way to address this issue completely.
> >>>>
> >>>> How about adding a flag in MemoryRegionOps and detect the reentrancy through
> >>>> that flag?
> >>> What happens for devices with multiple MemoryRegions? Make all the
> >>> MemoryRegionOps share the same flag?
> >>
> >> I think there could be two approaches:
> >>
> >> 1) record the device in MR as Qiang mentioned
> > I have tried this as we discussed. But has following consideration:
> > 1. The performance, we need to check/record/clean the MR in an array/hashtable.
> >
> > 2. The multiple MR and alias MR process in the memory layer. It is
> > complicated and performance effective.
> > So If we let the MR issue to the device itself, it is just as this
> > patch does-let the device address the reentrancy issue.f
> >
> > Another solution. We connects a MR with the corresponding device. Now
> > the device often tight MR with an 'opaque' field.
> > Just uses it in the calling of MR callback. Then we add a flag in the
> > device and needs to modify the MR register interface.
> >
> > So in the memory layer we can check/record/clean the MR->device->flag.
> > But this is can't address the DMA (in BH) to MMIO issue as the BH runs
> > in main thread.
>
>
> This is probably good enough to start. To my point of view, there're two
> different issues:
>
> 1) re-entrant MMIO handler
> 2) MMIO hanlder sync with BH
>

Agree. Here I wanted to address these two kinds of issue in one manner,
by just leaving it to the device itself.
I will try to add a new memory-region registration function,
memory_region_init_io_with_device, to connect the MR and the device,
and solve it in the memory layer.
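
For illustration, the kind of thing such a helper and the corresponding
dispatch-side check might look like; memory_region_init_io_with_device
does not exist in QEMU, so everything below is a hypothetical sketch:

/* Like memory_region_init_io(), but also records the owning device so
 * the memory layer can find its reentrancy flag.  Hypothetical API. */
void memory_region_init_io_with_device(MemoryRegion *mr,
                                       Object *owner,
                                       const MemoryRegionOps *ops,
                                       void *opaque,
                                       const char *name,
                                       uint64_t size,
                                       DeviceState *dev);

/* Conceptually, the dispatch path would then bracket the callback:
 *
 *     if (mr->dev && mr->dev->mmio_in_progress) {
 *         return MEMTX_ERROR;            // refuse DMA-to-MMIO re-entry
 *     }
 *     mr->dev->mmio_in_progress = true;
 *     ... invoke the read/write callback ...
 *     mr->dev->mmio_in_progress = false;
 *
 * where 'dev' and 'mmio_in_progress' are likewise hypothetical fields. */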


Thanks,
Li Qiang


Re: [RFC 0/3] try to solve the DMA to MMIO issue
Posted by Peter Maydell 5 years, 2 months ago
On Thu, 3 Sep 2020 at 04:55, Jason Wang <jasowang@redhat.com> wrote:
> I think we still need to seek a way to address this issue completely.
>
> How about adding a flag in MemoryRegionOps and detect the reentrancy
> through that flag?

This won't catch everything. Consider this situation:
  Device A makes DMA access to device B
  Device B's write-handling causes it to raise an
   outbound qemu_irq signal
  The qemu_irq signal is connected to device A
  Now we have reentered into device A's code

That is to say, the problem is general to "device A does
something that affects device B" links of all kinds, which
can form loops. Self-DMA is just an easy way to find one
category of these with the fuzzer.
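
To make the loop concrete, a sketch of such a wiring (the device names
and handlers are hypothetical; qemu_set_irq() and the qemu_irq handler
signature are the real primitives):

/* Device B: reached here via device A's DMA write into B's region.
 * Handling the write makes B raise an outbound line. */
static void devb_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
{
    DevBState *b = opaque;
    /* ... update B's state ... */
    qemu_set_irq(b->out_line, 1);    /* signal whatever is wired to us */
}

/* Device A: handler for an inbound line that the board wired to B's
 * out_line.  Running it means we have re-entered A's code while A's
 * original MMIO handling (which started the DMA) is still on the stack. */
static void deva_line_handler(void *opaque, int n, int level)
{
    DevAState *a = opaque;
    if (level) {
        deva_kick(a);                /* may touch A's half-updated state */
    }
}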

thanks
-- PMM

Re: [RFC 0/3] try to solve the DMA to MMIO issue
Posted by Li Qiang 5 years, 2 months ago
Peter Maydell <peter.maydell@linaro.org> wrote on Thu, 3 Sep 2020 at 6:53 PM:
>
> On Thu, 3 Sep 2020 at 04:55, Jason Wang <jasowang@redhat.com> wrote:
> > I think we still need to seek a way to address this issue completely.
> >
> > How about adding a flag in MemoryRegionOps and detect the reentrancy
> > through that flag?
>
> This won't catch everything. Consider this situation:
>   Device A makes DMA access to device B
>   Device B's write-handling causes it to raise an
>    outbound qemu_irq signal
>   The qemu_irq signal is connected to device A

Here you mean device A is an interrupt controller?
This is a special case, I think.

>   Now we have reentered into device A's code
>
> That is to say, the problem is general to "device A does
> something that affects device B" links of all kinds, which

As P2P is normal behavior, we can't just prevent it.

Thanks,
Li Qiang
> can form loops. Self-DMA is just an easy way to find one
> category of these with the fuzzer.
>
> thanks
> -- PMM

Re: [RFC 0/3] try to solve the DMA to MMIO issue
Posted by Peter Maydell 5 years, 2 months ago
On Thu, 3 Sep 2020 at 12:11, Li Qiang <liq3ea@gmail.com> wrote:
>
> Peter Maydell <peter.maydell@linaro.org> 于2020年9月3日周四 下午6:53写道:
> >
> > On Thu, 3 Sep 2020 at 04:55, Jason Wang <jasowang@redhat.com> wrote:
> > > I think we still need to seek a way to address this issue completely.
> > >
> > > How about adding a flag in MemoryRegionOps and detect the reentrancy
> > > through that flag?
> >
> > This won't catch everything. Consider this situation:
> >   Device A makes DMA access to device B
> >   Device B's write-handling causes it to raise an
> >    outbound qemu_irq signal
> >   The qemu_irq signal is connected to device A
>
> Here mean device A is an interrupt controller?

No. Any device can have an inbound or outbound qemu_irq line.
We use them not just for actual IRQ lines but for any
situation where we need to pass an on-or-off signal from
one device to another.

> This is special case I think.

It's an example of why looking purely at MMIO is not
sufficient. We should prefer to see if we can come up with
a design principle that works for all between-device
coordination before we implement something that is specific
to MMIO.

thanks
-- PMM

Re: [RFC 0/3] try to solve the DMA to MMIO issue
Posted by Li Qiang 5 years, 2 months ago
Peter Maydell <peter.maydell@linaro.org> wrote on Thu, 3 Sep 2020 at 7:19 PM:
>
> On Thu, 3 Sep 2020 at 12:11, Li Qiang <liq3ea@gmail.com> wrote:
> >
> > Peter Maydell <peter.maydell@linaro.org> 于2020年9月3日周四 下午6:53写道:
> > >
> > > On Thu, 3 Sep 2020 at 04:55, Jason Wang <jasowang@redhat.com> wrote:
> > > > I think we still need to seek a way to address this issue completely.
> > > >
> > > > How about adding a flag in MemoryRegionOps and detect the reentrancy
> > > > through that flag?
> > >
> > > This won't catch everything. Consider this situation:
> > >   Device A makes DMA access to device B
> > >   Device B's write-handling causes it to raise an
> > >    outbound qemu_irq signal
> > >   The qemu_irq signal is connected to device A
> >
> > Here mean device A is an interrupt controller?
>
> No. Any device can have an inbound or outbound qemu_irq line.
> We use them not just for actual IRQ lines but for any
> situation where we need to pass an on-or-off signal from
> one device to another.

Could you please provide some examples? I haven't noticed this before. Thanks.

>
> > This is special case I think.
>
> It's an example of why looking purely at MMIO is not
> sufficient. We should prefer to see if we can come up with
> a design principle that works for all between-device
> coordination before we implement something that is specific
> to MMIO.

So perhaps we should first define a clean boundary/interface for
device-to-device coordination?

Thanks,
Li Qiang

>
> thanks
> -- PMM

Re: [RFC 0/3] try to solve the DMA to MMIO issue
Posted by Peter Maydell 5 years, 2 months ago
On Thu, 3 Sep 2020 at 12:24, Li Qiang <liq3ea@gmail.com> wrote:
> Peter Maydell <peter.maydell@linaro.org> 于2020年9月3日周四 下午7:19写道:
> > No. Any device can have an inbound or outbound qemu_irq line.
> > We use them not just for actual IRQ lines but for any
> > situation where we need to pass an on-or-off signal from
> > one device to another.
>
> Could you please provide some example, I haven't noticed this before.

Look at any device that calls qdev_init_gpio_in() or
qdev_init_gpio_in_named() for an example of inbound signals.
Outbound signals might be created via qdev_init_gpio_out(),
qdev_init_gpio_out_named() or sysbus_init_irq().
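
For instance (sketch; MyState, MYDEV() and the handler are illustrative,
the qdev_* calls are the real API):

/* Inbound line: 'mydev_gpio_in' runs whenever whatever is wired to this
 * line changes its level via qemu_set_irq(). */
static void mydev_gpio_in(void *opaque, int n, int level)
{
    MyState *s = MYDEV(opaque);
    /* ... react to the level change ... */
}

static void mydev_instance_init(Object *obj)
{
    MyState *s = MYDEV(obj);

    /* one inbound on-or-off signal line */
    qdev_init_gpio_in(DEVICE(obj), mydev_gpio_in, 1);
    /* one outbound line, later driven with qemu_set_irq(s->out_line, ...) */
    qdev_init_gpio_out(DEVICE(obj), &s->out_line, 1);
}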

thanks
-- PMM

Re: [RFC 0/3] try to solve the DMA to MMIO issue
Posted by Philippe Mathieu-Daudé 5 years, 2 months ago
On 9/3/20 1:28 PM, Peter Maydell wrote:
> On Thu, 3 Sep 2020 at 12:24, Li Qiang <liq3ea@gmail.com> wrote:
>> Peter Maydell <peter.maydell@linaro.org> 于2020年9月3日周四 下午7:19写道:
>>> No. Any device can have an inbound or outbound qemu_irq line.
>>> We use them not just for actual IRQ lines but for any
>>> situation where we need to pass an on-or-off signal from
>>> one device to another.
>>
>> Could you please provide some example, I haven't noticed this before.
> 
> Look at any device that calls qdev_init_gpio_in() or
> qdev_init_gpio_in_named() for an example of inbound signals.
> Outbound signals might be created via qdev_init_gpio_out(),
> qdev_init_gpio_out_named() or sysbus_init_irq().

Not sure if this is a valid example, but when adding:

-- >8 --
diff --git a/hw/intc/ioapic.c b/hw/intc/ioapic.c
index bca71b5934b..b8b4ba362b1 100644
--- a/hw/intc/ioapic.c
+++ b/hw/intc/ioapic.c
@@ -96,6 +96,8 @@ static void ioapic_service(IOAPICCommonState *s)
     uint32_t mask;
     uint64_t entry;

+    assert(!resettable_is_in_reset(OBJECT(s)));
+
     for (i = 0; i < IOAPIC_NUM_PINS; i++) {
         mask = 1 << i;
         if (s->irr & mask) {
---

I get an MMIO write triggered from an IRQ:

(gdb) bt
#3  0x0000555558e44a12 in memory_region_write_accessor
(mr=0x61600001ab10, addr=0, value=0x7fffffffaa10, size=4, shift=0,
mask=4294967295, attrs=...) at softmmu/memory.c:482
#4  0x0000555558e4453b in access_with_adjusted_size (addr=0,
value=0x7fffffffaa10, size=4, access_size_min=1, access_size_max=4,
access_fn=
    0x555558e44600 <memory_region_write_accessor>, mr=0x61600001ab10,
attrs=...) at softmmu/memory.c:545
#5  0x0000555558e42c56 in memory_region_dispatch_write
(mr=0x61600001ab10, addr=0, data=0, op=MO_32, attrs=...) at
softmmu/memory.c:1466
#6  0x0000555558f322b3 in address_space_stl_internal (as=0x55555c0120e0
<address_space_memory>, addr=4276092928, val=0, attrs=..., result=0x0,
endian=DEVICE_LITTLE_ENDIAN)
    at memory_ldst.c.inc:315
#7  0x0000555558f32802 in address_space_stl_le (as=0x55555c0120e0
<address_space_memory>, addr=4276092928, val=0, attrs=..., result=0x0)
at memory_ldst.c.inc:353
#8  0x0000555558be2e22 in stl_le_phys (as=0x55555c0120e0
<address_space_memory>, addr=4276092928, val=0) at
/home/phil/source/qemu/include/exec/memory_ldst_phys.h.inc:103
#9  0x0000555558be0e14 in ioapic_service (s=0x61b000002a80) at
hw/intc/ioapic.c:138
#10 0x0000555558be4901 in ioapic_set_irq (opaque=0x61b000002a80,
vector=2, level=1) at hw/intc/ioapic.c:186
#11 0x00005555598769f6 in qemu_set_irq (irq=0x606000040f40, level=1) at
hw/core/irq.c:44
#12 0x00005555585fc097 in gsi_handler (opaque=0x61200000b8c0, n=0,
level=1) at hw/i386/x86.c:336
#13 0x00005555598769f6 in qemu_set_irq (irq=0x60600003db80, level=1) at
hw/core/irq.c:44
#14 0x0000555557653047 in hpet_handle_legacy_irq (opaque=0x61f000000080,
n=0, level=1) at hw/timer/hpet.c:707
#15 0x00005555598769f6 in qemu_set_irq (irq=0x606000042500, level=1) at
hw/core/irq.c:44
#16 0x00005555571c0686 in pit_irq_timer_update (s=0x616000032018,
current_time=0) at hw/timer/i8254.c:262
#17 0x00005555571c01c9 in pit_irq_control (opaque=0x616000031e80, n=0,
enable=1) at hw/timer/i8254.c:304
#18 0x00005555598769f6 in qemu_set_irq (irq=0x6060000435e0, level=1) at
hw/core/irq.c:44
#19 0x00005555576518cb in hpet_reset (d=0x61f000000080) at
hw/timer/hpet.c:690
#20 0x000055555986dfbe in device_transitional_reset (obj=0x61f000000080)
at hw/core/qdev.c:1114
#21 0x0000555559870e8e in resettable_phase_hold (obj=0x61f000000080,
opaque=0x0, type=RESET_TYPE_COLD) at hw/core/resettable.c:182
#22 0x0000555559846add in bus_reset_child_foreach (obj=0x60c00002e000,
cb=0x5555598707e0 <resettable_phase_hold>, opaque=0x0,
type=RESET_TYPE_COLD) at hw/core/bus.c:94
#23 0x0000555559873c29 in resettable_child_foreach (rc=0x60e00003e160,
obj=0x60c00002e000, cb=0x5555598707e0 <resettable_phase_hold>,
opaque=0x0, type=RESET_TYPE_COLD)
    at hw/core/resettable.c:96
#24 0x0000555559870b01 in resettable_phase_hold (obj=0x60c00002e000,
opaque=0x0, type=RESET_TYPE_COLD) at hw/core/resettable.c:173
#25 0x000055555986fbc3 in resettable_assert_reset (obj=0x60c00002e000,
type=RESET_TYPE_COLD) at hw/core/resettable.c:60
#26 0x000055555986fa6a in resettable_reset (obj=0x60c00002e000,
type=RESET_TYPE_COLD) at hw/core/resettable.c:45
#27 0x00005555598725ba in resettable_cold_reset_fn
(opaque=0x60c00002e000) at hw/core/resettable.c:269
#28 0x000055555986f9e9 in qemu_devices_reset () at hw/core/reset.c:69
#29 0x000055555865d711 in pc_machine_reset (machine=0x615000020100) at
hw/i386/pc.c:1901
#30 0x00005555589ea197 in qemu_system_reset (reason=SHUTDOWN_CAUSE_NONE)
at softmmu/vl.c:1403
#31 0x00005555589f7738 in qemu_init (argc=16, argv=0x7fffffffd278,
envp=0x7fffffffd300) at softmmu/vl.c:4458
#32 0x00005555571615fa in main (argc=16, argv=0x7fffffffd278,
envp=0x7fffffffd300) at softmmu/main.c:49


Re: [RFC 0/3] try to solve the DMA to MMIO issue
Posted by Peter Maydell 5 years, 2 months ago
On Thu, 3 Sep 2020 at 14:36, Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> Not sure if this is a valid example, but when adding:
>
> -- >8 --
> diff --git a/hw/intc/ioapic.c b/hw/intc/ioapic.c
> index bca71b5934b..b8b4ba362b1 100644
> --- a/hw/intc/ioapic.c
> +++ b/hw/intc/ioapic.c
> @@ -96,6 +96,8 @@ static void ioapic_service(IOAPICCommonState *s)
>      uint32_t mask;
>      uint64_t entry;
>
> +    assert(!resettable_is_in_reset(OBJECT(s)));
> +
>      for (i = 0; i < IOAPIC_NUM_PINS; i++) {
>          mask = 1 << i;
>          if (s->irr & mask) {
> ---
>
> I get a MMIO write triggered from an IRQ:

Yeah, IRQs can trigger MMIO writes. In this case one underlying
problem is that the hpet_reset() code is asserting a qemu_irq
in a reset phase that it should not, because it's an old-style
reset function and not a new-style 3-phase one (which would
do the assertion of the IRQ only in the 3rd phase). I don't
think this is a case of ending up with a recursive re-entry
into the code of the original device, though.

thanks
-- PMM

Re: [RFC 0/3] try to solve the DMA to MMIO issue
Posted by Jason Wang 5 years, 2 months ago
On 2020/9/3 7:19 PM, Peter Maydell wrote:
> On Thu, 3 Sep 2020 at 12:11, Li Qiang <liq3ea@gmail.com> wrote:
>> Peter Maydell <peter.maydell@linaro.org> 于2020年9月3日周四 下午6:53写道:
>>> On Thu, 3 Sep 2020 at 04:55, Jason Wang <jasowang@redhat.com> wrote:
>>>> I think we still need to seek a way to address this issue completely.
>>>>
>>>> How about adding a flag in MemoryRegionOps and detect the reentrancy
>>>> through that flag?
>>> This won't catch everything. Consider this situation:
>>>    Device A makes DMA access to device B
>>>    Device B's write-handling causes it to raise an
>>>     outbound qemu_irq signal
>>>    The qemu_irq signal is connected to device A
>> Here mean device A is an interrupt controller?
> No. Any device can have an inbound or outbound qemu_irq line.
> We use them not just for actual IRQ lines but for any
> situation where we need to pass an on-or-off signal from
> one device to another.
>
>> This is special case I think.
> It's an example of why looking purely at MMIO is not
> sufficient. We should prefer to see if we can come up with
> a design principle that works for all between-device
> coordination before we implement something that is specific
> to MMIO.


As discussed, maybe we can track the pending operations in the device
itself and check them at every possible entry point into the device's
code (IRQ, MMIO or whatever else). This may also be easier to backport
to stable.
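
Roughly (sketch only; the names are placeholders):

/* One per-device flag consulted at every entry point into the device's
 * code -- MMIO handler, inbound qemu_irq handler, BH -- rather than only
 * in the memory layer. */
static bool mydev_try_enter(MyDevState *s)
{
    if (s->busy) {
        return false;    /* already inside this device's code: bail out */
    }
    s->busy = true;
    return true;
}

/* ...and at the top of each handler:
 *     if (!mydev_try_enter(s)) {
 *         return;
 *     }
 *     ... do the work ...
 *     s->busy = false;
 */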

Thanks


>
> thanks
> -- PMM
>