It makes sense to accept pvpanic-pci even when no PCI address is
specified, and to assign one automatically if possible.
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
---
src/qemu/qemu_domain_address.c | 34 +++++++++++++++++++++++++++++++++-
1 file changed, 33 insertions(+), 1 deletion(-)
diff --git a/src/qemu/qemu_domain_address.c b/src/qemu/qemu_domain_address.c
index 20b648354b..e80d543ce8 100644
--- a/src/qemu/qemu_domain_address.c
+++ b/src/qemu/qemu_domain_address.c
@@ -1062,10 +1062,24 @@ qemuDomainDeviceCalculatePCIConnectFlags(virDomainDeviceDef *dev,
}
break;
+ case VIR_DOMAIN_DEVICE_PANIC:
+ switch ((virDomainPanicModel) dev->data.panic->model) {
+ case VIR_DOMAIN_PANIC_MODEL_PVPANIC:
+ return pciFlags;
+
+ case VIR_DOMAIN_PANIC_MODEL_DEFAULT:
+ case VIR_DOMAIN_PANIC_MODEL_ISA:
+ case VIR_DOMAIN_PANIC_MODEL_PSERIES:
+ case VIR_DOMAIN_PANIC_MODEL_HYPERV:
+ case VIR_DOMAIN_PANIC_MODEL_S390:
+ case VIR_DOMAIN_PANIC_MODEL_LAST:
+ return 0;
+ }
+ break;
+
/* These devices don't ever connect with PCI */
case VIR_DOMAIN_DEVICE_NVRAM:
case VIR_DOMAIN_DEVICE_TPM:
- case VIR_DOMAIN_DEVICE_PANIC:
case VIR_DOMAIN_DEVICE_HUB:
case VIR_DOMAIN_DEVICE_REDIRDEV:
case VIR_DOMAIN_DEVICE_SMARTCARD:
@@ -2454,6 +2468,24 @@ qemuDomainAssignDevicePCISlots(virDomainDef *def,
return -1;
}
+ for (i = 0; i < def->npanics; i++) {
+ virDomainPanicDef *panic = def->panics[i];
+
+ switch (panic->model) {
+ case VIR_DOMAIN_PANIC_MODEL_PVPANIC:
+ if (virDeviceInfoPCIAddressIsWanted(&panic->info) &&
+ qemuDomainPCIAddressReserveNextAddr(addrs, &panic->info) < 0)
+ return -1;
+ break;
+ case VIR_DOMAIN_PANIC_MODEL_DEFAULT:
+ case VIR_DOMAIN_PANIC_MODEL_ISA:
+ case VIR_DOMAIN_PANIC_MODEL_PSERIES:
+ case VIR_DOMAIN_PANIC_MODEL_HYPERV:
+ case VIR_DOMAIN_PANIC_MODEL_S390:
+ case VIR_DOMAIN_PANIC_MODEL_LAST:
+ break;
+ }
+ }
return 0;
}
--
2.39.1
On Wed, Feb 08, 2023 at 12:49:03PM +0100, Kristina Hanicova wrote:
> +++ b/src/qemu/qemu_domain_address.c
> @@ -1062,10 +1062,24 @@ qemuDomainDeviceCalculatePCIConnectFlags(virDomainDeviceDef *dev,
> }
> break;
>
> + case VIR_DOMAIN_DEVICE_PANIC:
> + switch ((virDomainPanicModel) dev->data.panic->model) {
> + case VIR_DOMAIN_PANIC_MODEL_PVPANIC:
> + return pciFlags;
So, this is correct, because pvpanic-pci is indeed a conventional PCI
device (as opposed to a PCI Express device).
However, when used with a PCIe-based machine such as aarch64/virt,
these flags result in the device being plugged into a
pcie-pci-bridge, which in turn is plugged into a pcie-root-port,
itself plugged into pcie.0.
While this seems to work fine, it's also kind of a waste of PCI
controllers. For a "system" device such as pvpanic-pci, I think
plugging directly into pcie.0 might be more appropriate. This is
particularly true of x86/q35, where with the current implementation
switching from ISA pvpanic to pvpanic-pci would result in adding two
extra PCI controllers.
So I think this would be a good fit for the
VIR_PCI_CONNECT_INTEGRATED flag that I added a while ago for use
with virtio-iommu. While this would technically result in libvirt
being more restrictive than QEMU in what PCI addresses it accepts for
pvpanic-pci, I don't think this would limit users in practice, and in
the default case (libvirt automatically picking the address) the
resulting configuration would be preferable.
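Concretely, I'm thinking of turning the pvpanic arm above into
something along these lines (untested sketch, and still subject to
the pciFlags caveat below):

    case VIR_DOMAIN_PANIC_MODEL_PVPANIC:
        /* pvpanic-pci is a conventional PCI device, but as a
         * "system" device we'd rather have it land directly on
         * pcie.0 than behind a pcie-pci-bridge */
        return pciFlags | VIR_PCI_CONNECT_INTEGRATED;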
We can always decide to relax this restriction later down the line,
if it turns out to really be a limiting factor.
Eric, what do you think about this idea?
(There's one caveat: VIR_PCI_CONNECT_INTEGRATED doesn't seem to work
with pciFlags at the moment O:-) It works fine with pcieFlags and
virtioFlags. I'll try to figure out why that's the case.)
--
Andrea Bolognani / Red Hat / Virtualization
Hi Andrea, Kristina,
On 2/9/23 17:33, Andrea Bolognani wrote:
> On Wed, Feb 08, 2023 at 12:49:03PM +0100, Kristina Hanicova wrote:
>> +++ b/src/qemu/qemu_domain_address.c
>> @@ -1062,10 +1062,24 @@ qemuDomainDeviceCalculatePCIConnectFlags(virDomainDeviceDef *dev,
>> }
>> break;
>>
>> + case VIR_DOMAIN_DEVICE_PANIC:
>> + switch ((virDomainPanicModel) dev->data.panic->model) {
>> + case VIR_DOMAIN_PANIC_MODEL_PVPANIC:
>> + return pciFlags;
> So, this is correct, because pvpanic-pci is indeed a conventional PCI
> device (as opposed to a PCI Express device).
>
> However, when used with a PCIe-based machine such as aarch64/virt,
> these flags result in the device being plugged into a
> pcie-pci-bridge, which in turn is plugged into a pcie-root-port,
> itself plugged into pcie.0.
>
> While this seems to work fine, it's also kind of a waste of PCI
> controllers. For a "system" device such as pvpanic-pci, I think
> plugging directly into pcie.0 might be more appropriate. This is
> particularly true of x86/q35, where with the current implementation
> switching from ISA pvpanic to pvpanic-pci would result in adding two
> extra PCI controllers.
>
> So I think this would be a good fit for the
> VIR_PCI_CONNECT_INTEGRATED flag that I added a while ago for use
> with virtio-iommu. While this would technically result in libvirt
> being more restrictive than QEMU in what PCI addresses it accepts for
> pvpanic-pci, I don't think this would limit users in practice, and in
> the default case (libvirt automatically picking the address) the
> resulting configuration would be preferable.
>
> We can always decide to relax this restriction later down the line,
> if it turns out to really be a limiting factor.
>
> Eric, what do you think about this idea?
I do agree. If we can avoid putting that device downstream of a
pcie-to-pci bridge, that's much better. Putting it onto pcie.0 looks
more appropriate to me indeed.
Thanks
Eric
On Thu, Feb 09, 2023 at 06:26:14PM +0100, Eric Auger wrote:
> On 2/9/23 17:33, Andrea Bolognani wrote:
> > [...]
>
> I do agree. If we can avoid putting that device downstream of a
> pcie-to-pci bridge, that's much better. Putting it onto pcie.0 looks
> more appropriate to me indeed.
Thanks for the input! Let's go with this approach then, unless
someone raises concerns with it.
> > (There's one caveat: VIR_PCI_CONNECT_INTEGRATED doesn't seem to work
> > with pciFlags at the moment O:-) It works fine with pcieFlags and
> > virtioFlags. I'll try to figure out why that's the case.)
Fixed by this patch:
https://listman.redhat.com/archives/libvir-list/2023-February/237688.html
--
Andrea Bolognani / Red Hat / Virtualization