[Qemu-devel] [RFC v5 0/6] pci_expander_bridge: support separate pci domain for pxb-pcie

Zihan Yang posted 6 patches 7 years, 1 month ago
[Qemu-devel] [RFC v5 0/6] pci_expander_bridge: support separate pci domain for pxb-pcie
Posted by Zihan Yang 7 years, 1 month ago
Hi all

Here is a minimal working version of support for multiple pci domains.
The next few paragraphs illustrate the purpose and give a usage example.
Current issues and limitations are in the last two paragraphs, followed
by the changelog of each version.

Currently only the q35 host bridge is allocated an item in the MCFG table;
all pxb-pcie host bridges stay within pci domain 0. This series of patches
allows each pxb-pcie to be put in a separate pci domain, allocating a new
MCFG table item for it.

Users can configure whether to put a pxb host bridge into a separate domain
by specifying the 'domain_nr' property of the pxb-pcie device. The 'bus_nr'
property indicates the Base Bus Number (BBN) of the pxb-pcie host bridge.
Another property, 'max_bus', specifies the highest desired bus number, to
reduce MCFG space cost. An example command is

    -device pxb-pcie,id=bridge3,bus="pcie.0",domain_nr=1,max_bus=15

Then this pxb-pcie host bridge is placed in pci domain 1 and reserves only
(15+1)=16 buses, far fewer than the default 256 buses.

Compared with the previous version, this version is much simpler because
the MCFG of an extra domain now has a relatively fixed address, as suggested
by Marcel and Gerd. Putting the extra MMCONFIG above 4G and letting seabios
leave it for the guest os is planned for the next version. The range is
[0x80000000, 0xb0000000), which allows us to hold 4x as many buses as
before.

A complete command line for testing follows; you need to replace GUEST_IMAGE,
DATA_IMAGE and SEABIOS_BIN with the proper environment variables:

./x86_64-softmmu/qemu-system-x86_64 \
    -machine q35,accel=kvm -smp 2 -m 2048 \
    -drive file=${GUEST_IMAGE}  -netdev user,id=realnet0 \
    -device e1000e,netdev=realnet0,mac=52:54:00:12:34:56 \
    -device pxb-pcie,id=bridge3,bus="pcie.0",domain_nr=1 \
    -device pcie-root-port,id=rp1,bus=bridge3,addr=1c.0,port=8,chassis=8 \
    -drive if=none,id=drive0,file=${DATA_IMAGE} \
    -device virtio-scsi-pci,id=scsi,bus=rp1,addr=00.0 \
    -bios ${SEABIOS_BIN}

There are a few limitations, though:
1. Legacy interrupt routing is not handled yet; only devices using
   MSI/MSI-X are supported
2. Only 4x as many devices are supported, so you need to be careful not to
   overuse the space
3. I have not fully tested the functionality of devices under a separate
   domain yet, but Linux can recognize them when running `lspci`

Current issue:
* The SCSI storage device is recognized twice, once in domain 0 as 0000:01.0
  and once in domain 1 as 0001:01.0. I will try to fix this in the next version

v5 <- v4:
- Refactor the design and place pxb-pcie's mcfg in [0x80000000, 0xb0000000)
- QEMU only decides the desired mcfg_size and leaves mcfg_base to seabios
- PXBDev and PXBPCIEHost are no longer connected with a link property, but
  through the pcibus under them, which makes the code simpler

v4 <- v3:
- Fix bug in setting mcfg table
- bus_nr is not used when pxb-pcie is in a new pci domain

v3 <- v2:
- Replace duplicate properties in pxb pcie host with link property to PXBDev
- Allow seabios to access config space and data space of expander bridge
  through a different ioport, because 0xcf8 is attached only to sysbus.
- Add a new property start_bus to indicate the BBN of pxb host bridge. The
  bus_nr property is used as the bus number of pxb-pcie device on pcie.0 bus

v2 <- v1:
- Allow the user to configure whether to put pxb-pcie into a separate domain
- Add AML description part of each host bridge
- Modify the location of MCFG space to between RAM hotplug and pci hole64

Many thanks to 
Please let me know if you have any suggestions.

Zihan Yang (6):
  pci_expander_bridge: add type TYPE_PXB_PCIE_HOST
  pci_expander_bridge: add domain_nr and max_bus property for pxb-pcie
  acpi-build: allocate mcfg for pxb-pcie host bridges
  i386/acpi-build: describe new pci domain in AML
  pci_expander_bridge: add config_write callback for pxb-pcie
  pci_expander_bridge: inform seabios of desired mcfg size via hidden
    bar

 hw/i386/acpi-build.c                        | 162 ++++++++++++++++++--------
 hw/pci-bridge/pci_expander_bridge.c         | 172 +++++++++++++++++++++++++++-
 hw/pci/pci.c                                |  30 ++++-
 include/hw/pci-bridge/pci_expander_bridge.h |  25 ++++
 include/hw/pci/pci.h                        |   2 +
 include/hw/pci/pci_bus.h                    |   2 +
 include/hw/pci/pci_host.h                   |   2 +-
 7 files changed, 336 insertions(+), 59 deletions(-)
 create mode 100644 include/hw/pci-bridge/pci_expander_bridge.h

-- 
2.7.4


Re: [Qemu-devel] [RFC v5 0/6] pci_expander_bridge: support separate pci domain for pxb-pcie
Posted by Michael S. Tsirkin 7 years, 1 month ago
Cc Laine, Eric for an opinion about the management interface.

On Mon, Sep 17, 2018 at 10:57:31PM +0800, Zihan Yang wrote:
> [...]
> 
> There are a few limitations, though
> 1. Legacy interrupt routing is not dealt with yet. There is only support for
>    devices using MSI/MSIX

That's probably a must have. What makes it hard to add?

> 2. Only 4x devices is supported, you need to be careful not to overuse

Could you elaborate on this please? What happens if you are not careful?
How does management know what the limits are?

> [...]

Re: [Qemu-devel] [RFC v5 0/6] pci_expander_bridge: support separate pci domain for pxb-pcie
Posted by Marcel Apfelbaum 7 years, 1 month ago
  Hi Zihan,

On 09/18/2018 04:41 PM, Michael S. Tsirkin wrote:
> Cc Laine, Eric for an opinion about the management interface.
>
> On Mon, Sep 17, 2018 at 10:57:31PM +0800, Zihan Yang wrote:
>> [...]
>>
>> There are a few limitations, though
>> 1. Legacy interrupt routing is not dealt with yet. There is only support for
>>     devices using MSI/MSIX
> That's probably a must have. What makes it hard to add?

Zihan, can you please elaborate on what the exact problem is?
pxb-pcie devices placed in PCI domain 0 do support legacy
interrupts; the question is what is different for multiple
PCI domains.

Thanks,
Marcel

>

[...]

Re: [Qemu-devel] [RFC v5 0/6] pci_expander_bridge: support separate pci domain for pxb-pcie
Posted by Zihan Yang 7 years, 1 month ago
Hi Marcel,

Marcel Apfelbaum <marcel.apfelbaum@gmail.com> 于2018年9月20日周四 下午2:39写道:
>
>   Hi Zihan,
>
> On 09/18/2018 04:41 PM, Michael S. Tsirkin wrote:
> > Cc Laine, Eric for an opinion about the management interface.
> >
> > On Mon, Sep 17, 2018 at 10:57:31PM +0800, Zihan Yang wrote:
> >> [...]
> >>
> >>
> >> There are a few limitations, though
> >> 1. Legacy interrupt routing is not dealt with yet. There is only support for
> >>     devices using MSI/MSIX
> > That's probably a must have. What makes it hard to add?
>
> Zihan, can you please elaborate on what the exact problem is?
> pxb-pcie devices placed in PCI domain 0 do support legacy
> interrupts; the question is what is different for multiple
> PCI domains.

Sorry for the delay. One problem I know of with interrupts is that pxb-pcie
in domain 0 directly maps irqs to the pci root of the q35 host. In multiple
domains they are under different pci roots, so we need to change the routing
rule and modify the _PRT table of the pxb-pcie host to route interrupts to
the ich9. But pxb does not have ISA under it; I'm not sure whether this
would be a problem.

It seems to require some engineering effort, but first I'm trying to figure
out another issue: why the same device under domain 1 also appears in domain
0. I found there are many resource allocation failures for devices under
domain 1; dmesg shows

[    0.179664] pci 0001:00:00.0: BAR 14: no space for [mem size 0x00100000]
[    0.179667] pci 0001:00:00.0: BAR 14: failed to assign [mem size 0x00100000]
[    0.179668] pci 0001:00:00.0: BAR 13: no space for [io  size 0x1000]
[    0.179670] pci 0001:00:00.0: BAR 13: failed to assign [io  size 0x1000]
[    0.179672] pci 0001:00:00.0: BAR 0: no space for [mem size 0x00000100 64bit]
[    0.179673] pci 0001:00:00.0: BAR 0: failed to assign [mem size 0x00000100 64bit]
[    0.179676] pci 0001:01:10.0: BAR 6: no space for [mem size 0x00040000 pref]
[    0.179678] pci 0001:01:10.0: BAR 6: failed to assign [mem size 0x00040000 pref]
[    0.179680] pci 0001:01:10.0: BAR 0: no space for [mem size 0x00020000]
[    0.179681] pci 0001:01:10.0: BAR 0: failed to assign [mem size 0x00020000]

A humble guess is that the pxb host bus is a child bus of pxb-pcie, which
might be scanned twice during initialization of the q35 and pxb host buses
and cause the potential resource conflict. But that is just a guess for now;
I'm not sure whether it is an issue in qemu or seabios. I will try to find
out.


Re: [Qemu-devel] [RFC v5 0/6] pci_expander_bridge: support separate pci domain for pxb-pcie
Posted by Zihan Yang 7 years, 1 month ago
Michael S. Tsirkin <mst@redhat.com> 于2018年9月18日周二 下午9:41写道:
>
> Cc Laine, Eric for an opinion about the management interface.
>
> On Mon, Sep 17, 2018 at 10:57:31PM +0800, Zihan Yang wrote:
> > [...]
> >
> >
> > There are a few limitations, though
> > 1. Legacy interrupt routing is not dealt with yet. There is only support for
> >    devices using MSI/MSIX
>
> That's probably a must have. What makes it hard to add?

I will try to support it in the next one or two versions. In the previous
version, I was trying to support legacy interrupts, but I was also trying
to figure out a way for seabios to configure pxb-pcie devices whose
MMCONFIG is above 4g. That made it a little complicated for me to tell
whether a problem was caused by incorrect AML, wrong interrupt routing, or
incomplete support in seabios. Therefore, I just left legacy interrupts
aside for the moment to make it work first, as suggested by Marcel. Now
that it is indeed working, I will support legacy interrupts in future
versions.

> > 2. Only 4x devices is supported, you need to be careful not to overuse
>
> Could you elaborate on this please? What happens if you are not careful?
> How does management know what the limits are?

It means the user might use more than 768MB of space for MMCONFIG, which
is [0x80000000, 0xb0000000). If that happens, seabios will ignore the
extra pxb-pcie devices and not add e820 entries for them, which makes all
the devices under them unusable.

As for the management, would some checks when adding the MCFG be enough?
Or I can maintain a variable indicating how much space has been consumed
and warn the user if they exceed the threshold. The latter allows us to do
the check while the pxb-pcie is initializing.

In future versions, I will try to put such 'extra' MMCONFIG above 4g and
completely let the guest os configure it, as suggested by Gerd. By then
the bus number limits may no longer be a problem, but you might want to
put some limits on it for now.

> [...]

Re: [Qemu-devel] [RFC v5 0/6] pci_expander_bridge: support separate pci domain for pxb-pcie
Posted by Gerd Hoffmann 7 years, 1 month ago
> > > 2. Only 4x devices is supported, you need to be careful not to overuse
> >
> > Could you elaborate on this please? What happens if you are not careful?
> > How does management know what the limits are?
> 
> It means the user might use more than 768MB of space for MMCONFIG,
> which is [0x80000000, 0xb0000000). If that happens, seabios
> will ignore the extra pxb-pcie devices and not add e820 entries for them,
> which makes all the devices under them unusable.

You should clearly note that this is a limitation of the seabios support
patches then, not an issue of the qemu patches.

> As for the management, will some checks when adding mcfg be enough for
> management? Or I can maintain a variable to indicate how many space
> have been consumed and warn the user if they exceed the threshold?
> The latter allows us to do the check when the pxb-pcie is initializing.

I think qemu should not apply any restrictions here.

cheers,
  Gerd


Re: [Qemu-devel] [RFC v5 0/6] pci_expander_bridge: support separate pci domain for pxb-pcie
Posted by Zihan Yang 7 years, 1 month ago
Gerd Hoffmann <kraxel@redhat.com> 于2018年9月19日周三 下午12:26写道:
>
> > > > 2. Only 4x devices is supported, you need to be careful not to overuse
> > >
> > > Could you elaborate on this please? What happens if you are not careful?
> > > How does management know what the limits are?
> >
> > It means the user might use more than 768MB of space for MMCONFIG,
> > which is [0x80000000, 0xb0000000). If that happens, seabios
> > will ignore the extra pxb-pcie devices and not add e820 entries for them,
> > which makes all the devices under them unusable.
>
> You should clearly note that this is a limitation of the seabios support
> patches then, not an issue of the qemu patches.

OK, description will be clearer in next version.

> > As for the management, will some checks when adding mcfg be enough for
> > management? Or I can maintain a variable to indicate how many space
> > have been consumed and warn the user if they exceed the threshold?
> > The latter allows us to do the check when the pxb-pcie is initializing.
>
> I think qemu should not apply any restrictions here.

But won't that confuse users when their device is not listed in the guest os
while qemu does not throw any error or warning?


Thanks,
Zihan

Re: [Qemu-devel] [RFC v5 0/6] pci_expander_bridge: support separate pci domain for pxb-pcie
Posted by Gerd Hoffmann 7 years, 1 month ago
> > > As for the management, will some checks when adding mcfg be enough for
> > > management? Or I can maintain a variable to indicate how many space
> > > have been consumed and warn the user if they exceed the threshold?
> > > The latter allows us to do the check when the pxb-pcie is initializing.
> >
> > I think qemu should not apply any restrictions here.
> 
> But won't that confuse users when their device is not listed in the guest os
> while qemu does not throw any error or warning?

Well, that can happen anyway.  For example when using an old seabios
version without pci domain support, or other firmware without pci domain
support (coreboot, ovmf).  And there is no easy way for qemu to figure
this out beforehand.

You can detect it later, when generating the acpi tables: there
are expander bridges whose hidden pci bar wasn't configured by the
firmware.  Logging a warning in that case - pointing out the missing
firmware support - is probably a good idea.

cheers,
  Gerd


Re: [Qemu-devel] [RFC v5 0/6] pci_expander_bridge: support separate pci domain for pxb-pcie
Posted by Marcel Apfelbaum 7 years, 1 month ago
Hi Zihan, Gerd

On 09/20/2018 09:09 AM, Gerd Hoffmann wrote:
>>>> As for the management, will some checks when adding mcfg be enough for
>>>> management? Or I can maintain a variable to indicate how many space
>>>> have been consumed and warn the user if they exceed the threshold?
>>>> The latter allows us to do the check when the pxb-pcie is initializing.
>>> I think qemu should not apply any restrictions here.
>> But will that confuse users when their device is not listed in guest os
>> while qemu does not throw any error/warning?
> Well, that can happen anyway.  For example when using an old seabios
> version without pci domain support, or other firmware without pci domain
> support (coreboot, ovmf).  And there is no easy way for qemu to figure
> this beforehand.
>
> You can detect this later, when generating the acpi tables, that there
> are expander bridges where the hidden pci bar wasn't configured by the
> firmware.  Logging a warning in that case - pointing out the missing
> firmware support - is probably a good idea.

Logging a warning if the pxb-pcie was not configured is enough, I think.
It will also help in case QEMU uses other firmware that does not support
multiple PCI domains at all, e.g. OVMF (for now), or an older SeaBIOS.


Thanks,
Marcel



Re: [Qemu-devel] [RFC v5 0/6] pci_expander_bridge: support separate pci domain for pxb-pcie
Posted by Zihan Yang 7 years, 1 month ago
Marcel Apfelbaum <marcel.apfelbaum@gmail.com> 于2018年9月20日周四 下午2:41写道:
>
> Hi Zihan, Gerd
>
> On 09/20/2018 09:09 AM, Gerd Hoffmann wrote:
> > [...]
>
> Logging a warning if the pxb-pci was not configured is enough I think.
> It will also help in case QEMU uses other firmware not supporting
> multiple PCI domains at all, e.g. OVMF  (for now), or an older SeaBIOS.

OK, will be added in next version.
