Hello,
This patchset has been rebased on the latest master, and the 3rd patch
has been replaced with one that allocates a dynamic array for the
secondary controller list based on the maximum number of VFs
(sriov_max_vfs) rather than a static array with a fixed size, as Klaus
suggested.  The rest of the patchset is the same as the previous one.
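
For illustration only, here is a minimal sketch of the dynamic
allocation approach.  The entry type is simplified and the helper name
(nvme_alloc_sec_ctrl_list) is hypothetical, not the exact code in the
patch:

    #include <stdint.h>
    #include <glib.h>

    /* Simplified stand-in for the secondary controller entry used in
     * hw/nvme; the real structure carries more fields. */
    typedef struct NvmeSecCtrlEntry {
        uint16_t scid;
        uint16_t pcid;
        uint16_t vfn;
    } NvmeSecCtrlEntry;

    /* Allocate one entry per possible VF (sriov_max_vfs) instead of
     * relying on a fixed-size static array. */
    static NvmeSecCtrlEntry *nvme_alloc_sec_ctrl_list(uint32_t sriov_max_vfs)
    {
        return g_new0(NvmeSecCtrlEntry, sriov_max_vfs);
    }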
This patchset has been tested with more than 127 VFs, using the
following device options and simple script.
-device nvme-subsys,id=subsys0 \
-device ioh3420,id=rp2,multifunction=on,chassis=12 \
-device nvme,serial=foo,id=nvme0,bus=rp2,subsys=subsys0,mdts=9,msix_qsize=130,max_ioqpairs=260,sriov_max_vfs=129,sriov_vq_flexible=258,sriov_vi_flexible=129 \
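
Note that the flexible resource counts above match the per-VF
assignments made by the script below (2 VQ and 1 VI resources per VF):

    sriov_vq_flexible = 2 * sriov_max_vfs = 2 * 129 = 258
    sriov_vi_flexible = 1 * sriov_max_vfs = 1 * 129 = 129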
$ cat nvme-enable-vfs.sh
#!/bin/bash

nr_vfs=129

# Assign flexible resources to each secondary controller: 2 VQ and 1 VI
for (( i=1; i<=$nr_vfs; i++ ))
do
    nvme virt-mgmt /dev/nvme0 -c $i -r 0 -a 8 -n 2
    nvme virt-mgmt /dev/nvme0 -c $i -r 1 -a 8 -n 1
done

bdf="0000:01:00.0"
sysfs="/sys/bus/pci/devices/$bdf"
nvme="/sys/bus/pci/drivers/nvme"

# Create the VFs without auto-probing a driver for them
echo 0 > $sysfs/sriov_drivers_autoprobe
echo $nr_vfs > $sysfs/sriov_numvfs

# Bring each secondary controller online and bind the nvme driver
for (( i=1; i<=$nr_vfs; i++ ))
do
    nvme virt-mgmt /dev/nvme0 -c $i -a 9
    echo "nvme" > $sysfs/virtfn$(($i-1))/driver_override
    bdf="$(basename $(readlink $sysfs/virtfn$(($i-1))))"
    echo $bdf > $nvme/bind
done
Thanks,
v3:
- Replace the [3/4] patch with one allocating a dynamic array for the
  secondary controller list rather than a static array with a fixed
  size for the maximum number of VFs to support (suggested by Klaus).
v2:
- Added [2/4] commit to fix a crash due to entry overflow
Minwoo Im (4):
hw/nvme: add Identify Endurance Group List
hw/nvme: separate identify data for sec. ctrl list
hw/nvme: Allocate sec-ctrl-list as a dynamic array
hw/nvme: Expand VI/VQ resource to uint32
hw/nvme/ctrl.c | 59 +++++++++++++++++++++++++++-----------------
hw/nvme/nvme.h | 19 +++++++-------
hw/nvme/subsys.c | 10 +++++---
include/block/nvme.h | 1 +
4 files changed, 54 insertions(+), 35 deletions(-)
--
2.34.1