[Qemu-devel] [PATCH RFC 0/4] intel_iommu: Do sanity check of vfio-pci earlier

Peter Xu posted 4 patches 1 week ago
Test FreeBSD passed
Test docker-mingw@fedora passed
Test asan passed
Test docker-clang@ubuntu passed
Test checkpatch passed
Test s390x failed
Patches applied successfully (tree, apply log)
git fetch https://github.com/patchew-project/qemu tags/patchew/20190812074531.28970-1-peterx@redhat.com
Maintainers: Eduardo Habkost <ehabkost@redhat.com>, "Daniel P. Berrangé" <berrange@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, Richard Henderson <rth@twiddle.net>, Paolo Bonzini <pbonzini@redhat.com>
hw/core/qdev.c         | 17 +++++++++++++++++
hw/i386/intel_iommu.c  | 40 ++++++++++++++++++++++++++++++++++------
hw/i386/pc.c           | 21 +++++++++++++++++++++
include/hw/boards.h    |  9 +++++++++
include/hw/qdev-core.h |  1 +
qdev-monitor.c         |  7 +++++++
6 files changed, 89 insertions(+), 6 deletions(-)

[Qemu-devel] [PATCH RFC 0/4] intel_iommu: Do sanity check of vfio-pci earlier

Posted by Peter Xu 1 week ago
This is an RFC series.

The VT-d code has some defects; one of them is that we cannot detect
the misuse of vIOMMU and vfio-pci early enough.

For example, logically this is not allowed:

  -device intel-iommu,caching-mode=off \
  -device vfio-pci,host=05:00.0

Because caching mode is required to make vfio-pci devices functional:
with caching mode (CM=1) the guest must issue invalidation requests
even for newly created (not-present to present) mappings, which is what
lets QEMU shadow the guest's DMA mappings into VFIO.

Previously we did this sanity check in vtd_iommu_notify_flag_changed(),
which is called when the memory regions change their attributes.
However, that is too late in most cases, because the memory region
layout only changes after the IOMMU is enabled, which normally happens
while the guest OS boots.  So when the configuration is wrong, we only
bail out during guest boot rather than simply telling the user before
QEMU starts.

The same problem happens on device hotplug, say, when we have this:

  -device intel-iommu,caching-mode=off

Then we do something like:

  (HMP) device_add vfio-pci,host=05:00.0,bus=pcie.1

If at that time the vIOMMU is already enabled in the guest, then the
QEMU process will simply quit due to this hotplug event.  This is a bit
insane...

This series tries to solve the above two problems by introducing two
sanity checks at these points respectively:

  - machine done
  - hotplug device

This is a bit awkward, but I hope it is better than before.  There are
of course other solutions, such as hard-coding the check into vfio-pci,
but I find that even uglier.  I couldn't think of a better way to do
this; if there is one, please kindly shout out.
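
To make the idea concrete, here is a rough sketch of how the hotplug
side could be wired up.  It is illustrative only and not the actual
patch: only the hook name hotplug_allowed comes from the patch titles,
while the hook signature, the function name pc_hotplug_allowed and the
error message are assumptions; x86_iommu_get_default(),
object_dynamic_cast() and error_setg() are existing QEMU helpers.

  /* include/hw/boards.h: a new hook added to MachineClass (sketch) */
  bool (*hotplug_allowed)(MachineState *ms, DeviceState *dev, Error **errp);

  /* hw/i386/pc.c: refuse vfio-pci when VT-d runs with caching-mode=off.
   * Patch 1 would do a similar check once at machine init done for the
   * cold-plugged devices. */
  static bool pc_hotplug_allowed(MachineState *ms, DeviceState *dev,
                                 Error **errp)
  {
      X86IOMMUState *iommu = x86_iommu_get_default();

      if (iommu &&
          object_dynamic_cast(OBJECT(dev), "vfio-pci") &&
          object_dynamic_cast(OBJECT(iommu), TYPE_INTEL_IOMMU_DEVICE) &&
          !INTEL_IOMMU_DEVICE(iommu)->caching_mode) {
          error_setg(errp, "vfio-pci hotplug needs the Intel IOMMU to "
                     "have caching-mode=on");
          return false;
      }
      return true;
  }

  /* qdev-monitor.c: called from qdev_device_add() before the device is
   * realized, so a bad hotplug is rejected instead of killing the VM. */
  if (mc->hotplug_allowed &&
      !mc->hotplug_allowed(machine, dev, errp)) {
      return NULL;
  }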

Please have a look to see whether this would be acceptable, thanks.

Peter Xu (4):
  intel_iommu: Sanity check vfio-pci config on machine init done
  qdev/machine: Introduce hotplug_allowed hook
  pc/q35: Disallow vfio-pci hotplug without VT-d caching mode
  intel_iommu: Remove the caching-mode check during flag change

 hw/core/qdev.c         | 17 +++++++++++++++++
 hw/i386/intel_iommu.c  | 40 ++++++++++++++++++++++++++++++++++------
 hw/i386/pc.c           | 21 +++++++++++++++++++++
 include/hw/boards.h    |  9 +++++++++
 include/hw/qdev-core.h |  1 +
 qdev-monitor.c         |  7 +++++++
 6 files changed, 89 insertions(+), 6 deletions(-)

-- 
2.21.0


Re: [Qemu-devel] [PATCH RFC 0/4] intel_iommu: Do sanity check of vfio-pci earlier

Posted by Jason Wang 1 week ago
On 2019/8/12 3:45 PM, Peter Xu wrote:
> This is a RFC series.
>
> The VT-d code has some defects, one of them is that we cannot detect
> the misuse of vIOMMU and vfio-pci early enough.
>
> For example, logically this is not allowed:
>
>    -device intel-iommu,caching-mode=off \
>    -device vfio-pci,host=05:00.0
>
> Because the caching mode is required to make vfio-pci devices
> functional.
>
> Previously we did this sanity check in vtd_iommu_notify_flag_changed()
> as when the memory regions change their attributes.  However that's
> too late in most cases!  Because the memory region layouts will only
> change after IOMMU is enabled, and that's in most cases during the
> guest OS boots.  So when the configuration is wrong, we will only bail
> out during the guest boots rather than simply telling the user before
> QEMU starts.
>
> The same problem happens on device hotplug, say, when we have this:
>
>    -device intel-iommu,caching-mode=off
>
> Then we do something like:
>
>    (HMP) device_add vfio-pci,host=05:00.0,bus=pcie.1
>
> If at that time the vIOMMU is enabled in the guest then the QEMU
> process will simply quit directly due to this hotplug event.  This is
> a bit insane...
>
> This series tries to solve above two problems by introducing two
> sanity checks upon these places separately:
>
>    - machine done
>    - hotplug device
>
> This is a bit awkward but I hope this could be better than before.
> There is of course other solutions like hard-code the check into
> vfio-pci but I feel it even more unpretty.  I didn't think out any
> better way to do this, if there is please kindly shout out.
>
> Please have a look to see whether this would be acceptable, thanks.
>
> Peter Xu (4):
>    intel_iommu: Sanity check vfio-pci config on machine init done
>    qdev/machine: Introduce hotplug_allowed hook
>    pc/q35: Disallow vfio-pci hotplug without VT-d caching mode
>    intel_iommu: Remove the caching-mode check during flag change
>
>   hw/core/qdev.c         | 17 +++++++++++++++++
>   hw/i386/intel_iommu.c  | 40 ++++++++++++++++++++++++++++++++++------
>   hw/i386/pc.c           | 21 +++++++++++++++++++++
>   include/hw/boards.h    |  9 +++++++++
>   include/hw/qdev-core.h |  1 +
>   qdev-monitor.c         |  7 +++++++
>   6 files changed, 89 insertions(+), 6 deletions(-)
>

Do we need a generic solution other than an Intel specific one?

Thanks


Re: [Qemu-devel] [PATCH RFC 0/4] intel_iommu: Do sanity check of vfio-pci earlier

Posted by Peter Xu 1 week ago
On Tue, Aug 13, 2019 at 04:41:49PM +0800, Jason Wang wrote:
> Do we need a generic solution other than an Intel specific one?

I assume you're asking about ARM not AMD, right? :)

Yes, I think we should have a generic solution.  Though I'd like to see
whether this idea can be accepted first; then we can expand to ARM or
other IOMMUs.  After all, we already have some duplication between at
least the x86 and ARM vIOMMU code, so we can do more consolidation like
that when the time comes.

Thanks,

-- 
Peter Xu

Re: [Qemu-devel] [PATCH RFC 0/4] intel_iommu: Do sanity check of vfio-pci earlier

Posted by Alex Williamson 1 week ago
On Mon, 12 Aug 2019 09:45:27 +0200
Peter Xu <peterx@redhat.com> wrote:

> This is a RFC series.
> 
> The VT-d code has some defects, one of them is that we cannot detect
> the misuse of vIOMMU and vfio-pci early enough.
> 
> For example, logically this is not allowed:
> 
>   -device intel-iommu,caching-mode=off \
>   -device vfio-pci,host=05:00.0

Do we require intel-iommu with intremap=on in order to get x2apic for
large vCPU count guests?  If so, wouldn't it be a valid configuration
for the user to specify:

   -device intel-iommu,caching-mode=off,intremap=on \
   -device vfio-pci,host=05:00.0

so long as they never have any intention of the guest enabling DMA
translation?  Would there be any advantage to this config versus
caching-mode=on?  I suspect the overhead of CM=1 when only using
interrupt remapping is small to non-existent, but are there other
reasons for running with CM=0, perhaps guest drivers not supporting it?

I like the idea of being able to nak an incompatible hot-add rather
than kill the VM; we could narrow that even further to look at not only
whether caching-mode support is enabled, but also whether translation
is enabled on the vIOMMU.  Ideally we might disallow the guest from
enabling translation in such a configuration, but the Linux code is not
written with the expectation that the hardware can refuse to enable
translation and there are no capability bits to remove the DMA
translation capability of the IOMMU.  Still, we might want to think
about which is the better user experience, to have the guest panic when
DMA_GSTS_TES never becomes set (as it seems Linux would do) or to have
QEMU exit, or as proposed here, prevent all configurations where this
might occur.  Thanks,

Alex

> Because the caching mode is required to make vfio-pci devices
> functional.
> 
> Previously we did this sanity check in vtd_iommu_notify_flag_changed()
> as when the memory regions change their attributes.  However that's
> too late in most cases!  Because the memory region layouts will only
> change after IOMMU is enabled, and that's in most cases during the
> guest OS boots.  So when the configuration is wrong, we will only bail
> out during the guest boots rather than simply telling the user before
> QEMU starts.
> 
> The same problem happens on device hotplug, say, when we have this:
> 
>   -device intel-iommu,caching-mode=off
> 
> Then we do something like:
> 
>   (HMP) device_add vfio-pci,host=05:00.0,bus=pcie.1
> 
> If at that time the vIOMMU is enabled in the guest then the QEMU
> process will simply quit directly due to this hotplug event.  This is
> a bit insane...
> 
> This series tries to solve above two problems by introducing two
> sanity checks upon these places separately:
> 
>   - machine done
>   - hotplug device
> 
> This is a bit awkward but I hope this could be better than before.
> There is of course other solutions like hard-code the check into
> vfio-pci but I feel it even more unpretty.  I didn't think out any
> better way to do this, if there is please kindly shout out.
> 
> Please have a look to see whether this would be acceptable, thanks.
> 
> Peter Xu (4):
>   intel_iommu: Sanity check vfio-pci config on machine init done
>   qdev/machine: Introduce hotplug_allowed hook
>   pc/q35: Disallow vfio-pci hotplug without VT-d caching mode
>   intel_iommu: Remove the caching-mode check during flag change
> 
>  hw/core/qdev.c         | 17 +++++++++++++++++
>  hw/i386/intel_iommu.c  | 40 ++++++++++++++++++++++++++++++++++------
>  hw/i386/pc.c           | 21 +++++++++++++++++++++
>  include/hw/boards.h    |  9 +++++++++
>  include/hw/qdev-core.h |  1 +
>  qdev-monitor.c         |  7 +++++++
>  6 files changed, 89 insertions(+), 6 deletions(-)
> 


Re: [Qemu-devel] [PATCH RFC 0/4] intel_iommu: Do sanity check of vfio-pci earlier

Posted by Peter Xu 1 week ago
On Mon, Aug 12, 2019 at 10:24:53AM -0600, Alex Williamson wrote:
> On Mon, 12 Aug 2019 09:45:27 +0200
> Peter Xu <peterx@redhat.com> wrote:
> 
> > This is a RFC series.
> > 
> > The VT-d code has some defects, one of them is that we cannot detect
> > the misuse of vIOMMU and vfio-pci early enough.
> > 
> > For example, logically this is not allowed:
> > 
> >   -device intel-iommu,caching-mode=off \
> >   -device vfio-pci,host=05:00.0
> 
> Do we require intel-iommu with intremap=on in order to get x2apic for
> large vCPU count guests?  If so, wouldn't it be a valid configuration
> for the user to specify:
> 
>    -device intel-iommu,caching-mode=off,intremap=on \
>    -device vfio-pci,host=05:00.0
> 
> so long as they never have any intention of the guest enabling DMA
> translation?  Would there be any advantage to this config versus
> caching-mode=on?  I suspect the overhead of CM=1 when only using
> interrupt remapping is small to non-existent, but are there other
> reasons for running with CM=0, perhaps guest drivers not supporting it?

AFAIU the major users of the vIOMMU should be guest DPDK apps and
nested device assignment.  For these users I would be so bold as to
guess they are mostly using Linux, so the majority should be safe.

For the minority, I do agree that the above question is valid.  IMHO
the hard part is to find those users and get them to join the
discussion; then we would know how many will be affected and how.  One
way to achieve that could be to merge a patchset like this and see
whether anyone starts to complain. :) I'm not sure whether that's the
best way to go, but I think it is still a serious option considering
that it could fix a more severe issue (unexpected QEMU quits), and also
that reverting a patchset like this one would be easy when really
necessary (e.g., it does not bring machine state changes that might
cause migration issues, and so on).

> 
> I like the idea of being able to nak an incompatible hot-add rather
> than kill the VM, we could narrow that even further to look at not only
> whether caching-mode support is enabled, but also whether translation
> is enabled on the vIOMMU.  Ideally we might disallow the guest from
> enabling translation in such a configuration, but the Linux code is not
> written with the expectation that the hardware can refuse to enable
> translation and there are no capability bits to remove the DMA
> translation capability of the IOMMU.

This is an interesting view, at least to me, though I'm not sure we
should allow that even for emulation.  I'm imagining such a patch for
the Linux kernel to allow failures when enabling DMAR - it would exist
only for the sake of QEMU emulation, and I'm not sure whether upstream
would like such a patch.  After all, we are emulating the hardware, and
real hardware will always succeed in enabling DMAR, AFAICT.  For
Windows and other OSes it could be even harder.  Without support for
all of these, we would simply trade one risk for another: a hanging
guest while the driver busy-waits for the DMAR status bit to be set.

> Still, we might want to think
> about which is the better user experience, to have the guest panic when
> DMA_GSTS_TES never becomes set (as it seems Linux would do) or to have
> QEMU exit, or as proposed here, prevent all configurations where this
> might occur.  Thanks,

Agreed.  So far, a stricter rule looks a bit better to me than a
hanging guest, though that could be very subjective.
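
(For reference, if we ever go for the narrower gate you describe - only
refusing the hot-add when it would break right now - the check inside
the hotplug_allowed hook from this series could roughly become the
fragment below.  This is only a sketch and not part of these patches;
it relies on the dmar_enabled flag that IntelIOMMUState already keeps
for the guest's translation-enable state.)

  IntelIOMMUState *s = INTEL_IOMMU_DEVICE(iommu);

  /* Refuse the hot-add only if it would break immediately: caching-mode
   * is off *and* the guest has already enabled DMA translation. */
  if (!s->caching_mode && s->dmar_enabled) {
      error_setg(errp, "vfio-pci hotplug not allowed: VT-d DMA "
                 "translation is enabled but caching-mode is off");
      return false;
  }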

Thanks!

-- 
Peter Xu

Re: [Qemu-devel] [PATCH RFC 0/4] intel_iommu: Do sanity check of vfio-pci earlier

Posted by Peter Xu 1 day ago
On Mon, Aug 12, 2019 at 09:45:27AM +0200, Peter Xu wrote:
> This is a RFC series.
> 
> The VT-d code has some defects, one of them is that we cannot detect
> the misuse of vIOMMU and vfio-pci early enough.
> 
> For example, logically this is not allowed:
> 
>   -device intel-iommu,caching-mode=off \
>   -device vfio-pci,host=05:00.0
> 
> Because the caching mode is required to make vfio-pci devices
> functional.
> 
> Previously we did this sanity check in vtd_iommu_notify_flag_changed()
> as when the memory regions change their attributes.  However that's
> too late in most cases!  Because the memory region layouts will only
> change after IOMMU is enabled, and that's in most cases during the
> guest OS boots.  So when the configuration is wrong, we will only bail
> out during the guest boots rather than simply telling the user before
> QEMU starts.
> 
> The same problem happens on device hotplug, say, when we have this:
> 
>   -device intel-iommu,caching-mode=off
> 
> Then we do something like:
> 
>   (HMP) device_add vfio-pci,host=05:00.0,bus=pcie.1
> 
> If at that time the vIOMMU is enabled in the guest then the QEMU
> process will simply quit directly due to this hotplug event.  This is
> a bit insane...
> 
> This series tries to solve above two problems by introducing two
> sanity checks upon these places separately:
> 
>   - machine done
>   - hotplug device
> 
> This is a bit awkward but I hope this could be better than before.
> There is of course other solutions like hard-code the check into
> vfio-pci but I feel it even more unpretty.  I didn't think out any
> better way to do this, if there is please kindly shout out.
> 
> Please have a look to see whether this would be acceptable, thanks.

Any more comment on this?

Thanks,

-- 
Peter Xu

Re: [Qemu-devel] [PATCH RFC 0/4] intel_iommu: Do sanity check of vfio-pci earlier

Posted by Paolo Bonzini 1 day ago
On 20/08/19 07:22, Peter Xu wrote:
> On Mon, Aug 12, 2019 at 09:45:27AM +0200, Peter Xu wrote:
>> This is a RFC series.
>>
>> The VT-d code has some defects, one of them is that we cannot detect
>> the misuse of vIOMMU and vfio-pci early enough.
>>
>> For example, logically this is not allowed:
>>
>>   -device intel-iommu,caching-mode=off \
>>   -device vfio-pci,host=05:00.0
>>
>> Because the caching mode is required to make vfio-pci devices
>> functional.
>>
>> Previously we did this sanity check in vtd_iommu_notify_flag_changed()
>> as when the memory regions change their attributes.  However that's
>> too late in most cases!  Because the memory region layouts will only
>> change after IOMMU is enabled, and that's in most cases during the
>> guest OS boots.  So when the configuration is wrong, we will only bail
>> out during the guest boots rather than simply telling the user before
>> QEMU starts.
>>
>> The same problem happens on device hotplug, say, when we have this:
>>
>>   -device intel-iommu,caching-mode=off
>>
>> Then we do something like:
>>
>>   (HMP) device_add vfio-pci,host=05:00.0,bus=pcie.1
>>
>> If at that time the vIOMMU is enabled in the guest then the QEMU
>> process will simply quit directly due to this hotplug event.  This is
>> a bit insane...
>>
>> This series tries to solve above two problems by introducing two
>> sanity checks upon these places separately:
>>
>>   - machine done
>>   - hotplug device
>>
>> This is a bit awkward but I hope this could be better than before.
>> There is of course other solutions like hard-code the check into
>> vfio-pci but I feel it even more unpretty.  I didn't think out any
>> better way to do this, if there is please kindly shout out.
>>
>> Please have a look to see whether this would be acceptable, thanks.
> 
> Any more comment on this?

No problem from me, but I wouldn't mind if someone else merged it. :)

Paolo

Re: [Qemu-devel] [PATCH RFC 0/4] intel_iommu: Do sanity check of vfio-pci earlier

Posted by Peter Xu 13 hours ago
On Tue, Aug 20, 2019 at 08:24:49AM +0200, Paolo Bonzini wrote:
> On 20/08/19 07:22, Peter Xu wrote:
> > On Mon, Aug 12, 2019 at 09:45:27AM +0200, Peter Xu wrote:
> >> This is a RFC series.
> >>
> >> The VT-d code has some defects, one of them is that we cannot detect
> >> the misuse of vIOMMU and vfio-pci early enough.
> >>
> >> For example, logically this is not allowed:
> >>
> >>   -device intel-iommu,caching-mode=off \
> >>   -device vfio-pci,host=05:00.0
> >>
> >> Because the caching mode is required to make vfio-pci devices
> >> functional.
> >>
> >> Previously we did this sanity check in vtd_iommu_notify_flag_changed()
> >> as when the memory regions change their attributes.  However that's
> >> too late in most cases!  Because the memory region layouts will only
> >> change after IOMMU is enabled, and that's in most cases during the
> >> guest OS boots.  So when the configuration is wrong, we will only bail
> >> out during the guest boots rather than simply telling the user before
> >> QEMU starts.
> >>
> >> The same problem happens on device hotplug, say, when we have this:
> >>
> >>   -device intel-iommu,caching-mode=off
> >>
> >> Then we do something like:
> >>
> >>   (HMP) device_add vfio-pci,host=05:00.0,bus=pcie.1
> >>
> >> If at that time the vIOMMU is enabled in the guest then the QEMU
> >> process will simply quit directly due to this hotplug event.  This is
> >> a bit insane...
> >>
> >> This series tries to solve above two problems by introducing two
> >> sanity checks upon these places separately:
> >>
> >>   - machine done
> >>   - hotplug device
> >>
> >> This is a bit awkward but I hope this could be better than before.
> >> There is of course other solutions like hard-code the check into
> >> vfio-pci but I feel it even more unpretty.  I didn't think out any
> >> better way to do this, if there is please kindly shout out.
> >>
> >> Please have a look to see whether this would be acceptable, thanks.
> > 
> > Any more comment on this?
> 
> No problem from me, but I wouldn't mind if someone else merged it. :)

Can I read this as an "acked-by"? :)

Michael, should this be for your tree?  What do you think about the
series?  Please let me know what I need to do to move this forward.  I
can repost a non-RFC series if needed, but it'll be exactly the same
content.

Regards,

-- 
Peter Xu

Re: [Qemu-devel] [PATCH RFC 0/4] intel_iommu: Do sanity check of vfio-pci earlier

Posted by Paolo Bonzini 10 hours ago
On 21/08/19 07:03, Peter Xu wrote:
> On Tue, Aug 20, 2019 at 08:24:49AM +0200, Paolo Bonzini wrote:
>> On 20/08/19 07:22, Peter Xu wrote:
>>> On Mon, Aug 12, 2019 at 09:45:27AM +0200, Peter Xu wrote:
>>>> This is a RFC series.
>>>>
>>>> The VT-d code has some defects, one of them is that we cannot detect
>>>> the misuse of vIOMMU and vfio-pci early enough.
>>>>
>>>> For example, logically this is not allowed:
>>>>
>>>>   -device intel-iommu,caching-mode=off \
>>>>   -device vfio-pci,host=05:00.0
>>>>
>>>> Because the caching mode is required to make vfio-pci devices
>>>> functional.
>>>>
>>>> Previously we did this sanity check in vtd_iommu_notify_flag_changed()
>>>> as when the memory regions change their attributes.  However that's
>>>> too late in most cases!  Because the memory region layouts will only
>>>> change after IOMMU is enabled, and that's in most cases during the
>>>> guest OS boots.  So when the configuration is wrong, we will only bail
>>>> out during the guest boots rather than simply telling the user before
>>>> QEMU starts.
>>>>
>>>> The same problem happens on device hotplug, say, when we have this:
>>>>
>>>>   -device intel-iommu,caching-mode=off
>>>>
>>>> Then we do something like:
>>>>
>>>>   (HMP) device_add vfio-pci,host=05:00.0,bus=pcie.1
>>>>
>>>> If at that time the vIOMMU is enabled in the guest then the QEMU
>>>> process will simply quit directly due to this hotplug event.  This is
>>>> a bit insane...
>>>>
>>>> This series tries to solve above two problems by introducing two
>>>> sanity checks upon these places separately:
>>>>
>>>>   - machine done
>>>>   - hotplug device
>>>>
>>>> This is a bit awkward but I hope this could be better than before.
>>>> There is of course other solutions like hard-code the check into
>>>> vfio-pci but I feel it even more unpretty.  I didn't think out any
>>>> better way to do this, if there is please kindly shout out.
>>>>
>>>> Please have a look to see whether this would be acceptable, thanks.
>>>
>>> Any more comment on this?
>>
>> No problem from me, but I wouldn't mind if someone else merged it. :)
> 
> Can I read this as an "acked-by"? :)

Yes, it shouldn't even need Acked-by since there are other maintainers
that handle this part of the tree:

Paolo Bonzini <pbonzini@redhat.com> (maintainer:X86 TCG CPUs)
Richard Henderson <rth@twiddle.net> (maintainer:X86 TCG CPUs)
Eduardo Habkost <ehabkost@redhat.com> (maintainer:X86 TCG CPUs)
"Michael S. Tsirkin" <mst@redhat.com> (supporter:PC)
Marcel Apfelbaum <marcel.apfelbaum@gmail.com> (supporter:PC)

> Michael, should this be for your tree?  What do you think about the
> series?  Please let me know what I need to do to move this forward.  I
> can repost a non-rfc series if needed, but it'll be exactly the same
> content.
> 
> Regards,
>