[PATCH v1 0/1] Add support for namespace attach/detach during runtime

Daniel Wagner posted 1 patch 1 year ago
Patches applied successfully
git fetch https://github.com/patchew-project/qemu tags/patchew/20230505121216.30740-1-dwagner@suse.de
Maintainers: Keith Busch <kbusch@kernel.org>, Klaus Jensen <its@irrelevant.dk>
[PATCH v1 0/1] Add support for namespace attach/detach during runtime
Posted by Daniel Wagner 1 year ago
This is a follow-up on a very old thread[1]. My aim is to attach/detach nvme
devices during runtime and test the Linux nvme subsystem in the guest.

To get this working, I first had to add a hotpluggable PCI bus and the
nvme-subsystem. The nvme-subsystem can't be instantiated at runtime so far,
so it goes into the libvirt config:

  <qemu:commandline>
    <qemu:arg value='-device'/>
    <qemu:arg value='pcie-root-port,id=root,bus=pcie.0,addr=0x6'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='nvme-subsys,id=nvme-subsys-1,nqn=nvme-1'/>
  </qemu:commandline>
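
For reference, outside libvirt the same setup should be expressible directly
on the QEMU command line, roughly like this (assuming a q35 machine, which
provides the pcie.0 root bus; the elided arguments stand for the rest of the
machine config):

  qemu-system-x86_64 -M q35 ... \
    -device pcie-root-port,id=root,bus=pcie.0,addr=0x6 \
    -device nvme-subsys,id=nvme-subsys-1,nqn=nvme-1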

and then hotplug everything with a bunch of virsh monitor commands ('tw' is
the domain name):

  qemu-monitor-command tw --hmp device_add nvme,id=nvme1,serial=nvme-1,subsys=nvme-subsys-1,bus=root
  qemu-monitor-command tw --hmp drive_add 0 if=none,file=/var/lib/libvirt/images/tw-nvme1.img,format=raw,id=nvme1
  qemu-monitor-command tw --hmp device_add nvme-ns,drive=nvme1,shared=on

  qemu-monitor-command tw --hmp info block

  qemu-monitor-command tw --hmp device_del nvme1
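
Note that the nvme-ns device above is added without an id, so only the whole
controller can be deleted again. Giving the namespace an id should allow
detaching just the namespace instead; an untested variant (the id 'nvmens1'
is made up):

  qemu-monitor-command tw --hmp device_add nvme-ns,id=nvmens1,drive=nvme1,shared=on
  qemu-monitor-command tw --hmp device_del nvmens1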

I see the nvme device pop up in the guest:

  [  187.047693] pci 0000:0f:00.0: [1b36:0010] type 00 class 0x010802
  [  187.050324] pci 0000:0f:00.0: reg 0x10: [mem 0x00000000-0x00003fff 64bit]
  [  187.098823] pci 0000:0f:00.0: BAR 0: assigned [mem 0xc0800000-0xc0803fff 64bit]
  [  187.102173] nvme nvme1: pci function 0000:0f:00.0
  [  187.103084] nvme 0000:0f:00.0: enabling device (0000 -> 0002)
  [  187.131154] nvme nvme1: 4/0/0 default/read/poll queues
  [  187.133460] nvme nvme1: Ignoring bogus Namespace Identifiers

and it looks pretty reasonable:

  # nvme list -v
  Subsystem        Subsystem-NQN                                                                                    Controllers
  ---------------- ------------------------------------------------------------------------------------------------ ----------------
  nvme-subsys1     nqn.2019-08.org.qemu:nvme-1                                                                      nvme1
  nvme-subsys0     nqn.2019-08.org.qemu:nvme-0                                                                      nvme0
  
  Device   SN                   MN                                       FR       TxPort Address        Subsystem    Namespaces      
  -------- -------------------- ---------------------------------------- -------- ------ -------------- ------------ ----------------
  nvme1    nvme-1               QEMU NVMe Ctrl                           8.0.0    pcie   0000:0f:00.0   nvme-subsys1 nvme1n1
  nvme0    nvme-0               QEMU NVMe Ctrl                           8.0.0    pcie   0000:00:05.0   nvme-subsys0 nvme0n1
  
  Device       Generic      NSID       Usage                      Format           Controllers     
  ------------ ------------ ---------- -------------------------- ---------------- ----------------
  /dev/nvme1n1 /dev/ng1n1   0x1          1.07  GB /   1.07  GB    512   B +  0 B   nvme1
  /dev/nvme0n1 /dev/ng0n1   0x1          1.07  GB /   1.07  GB    512   B +  0 B   nvme0


[1] https://lore.kernel.org/all/Y195nENga028PvT9@cormorant.local/

Daniel Wagner (1):
  hw/nvme: Add hotplug handler to nvme bus

 hw/nvme/ctrl.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)
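
For readers without the patch at hand: the usual way to let a qdev bus accept
hotplug in QEMU is to have the bus implement the hotplug-handler interface.
A minimal sketch of that shape, not necessarily identical to the actual patch
(TYPE_HOTPLUG_HANDLER and qdev_simple_device_unplug_cb are existing qdev core
symbols; the rest mirrors the nvme bus type already registered in
hw/nvme/ctrl.c):

  static void nvme_bus_class_init(ObjectClass *oc, void *data)
  {
      HotplugHandlerClass *hc = HOTPLUG_HANDLER_CLASS(oc);

      /* generic qdev callback: unparent (and finalize) the device on
       * unplug */
      hc->unplug = qdev_simple_device_unplug_cb;
  }

  static const TypeInfo nvme_bus_info = {
      .name          = TYPE_NVME_BUS,
      .parent        = TYPE_BUS,
      .instance_size = sizeof(NvmeBus),
      .class_init    = nvme_bus_class_init,
      .interfaces    = (InterfaceInfo[]) {
          { TYPE_HOTPLUG_HANDLER },
          { }
      },
  };

plus a qbus_set_bus_hotplug_handler(BUS(&n->bus)) call in the controller's
realize function, so the bus acts as its own hotplug handler.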

-- 
2.40.0
Re: [PATCH v1 0/1] Add support for namespace attach/detach during runtime
Posted by Klaus Jensen 12 months ago
On May  5 14:12, Daniel Wagner wrote:
> This is a follow-up on a very old thread[1]. My aim is to attach/detach nvme
> devices during runtime and test the Linux nvme subsystem in the guest.
> 
> [...]

Hi Daniel,

This looks reasonable enough. One gap: if the pci bus is rescanned after
plugging the controller but before plugging the namespace, the namespace
doesn't show up, because the controller does not send an AER in that case.
It should be possible to fix this with a 'plug' callback, but we need to
handle sending the event on all controllers the namespace gets attached to
(if shared) or just the owning one (for shared=off).
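
The guest side of the rescan race can be reproduced with a plain
'echo 1 > /sys/bus/pci/rescan' between the two device_add calls. For the
event itself, something along these lines (completely untested sketch in
hw/nvme/ctrl.c; nvme_bus_plug is a made-up name, while nvme_enqueue_event()
and the NVME_AER_*/NVME_LOG_CHANGED_NSLIST constants are what ctrl.c already
uses for the namespace attachment command):

  static void nvme_bus_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
                            Error **errp)
  {
      NvmeNamespace *ns = NVME_NS(dev);
      /* assuming the bus is its own hotplug handler, its parent device
       * is the controller the namespace was plugged into */
      NvmeCtrl *n = NVME(BUS(hotplug_dev)->parent);

      if (ns->params.shared && n->subsys) {
          /* shared namespace: raise Namespace Attribute Changed on
           * every controller in the subsystem */
          for (int i = 0; i < ARRAY_SIZE(n->subsys->ctrls); i++) {
              NvmeCtrl *ctrl = n->subsys->ctrls[i];

              if (ctrl) {
                  nvme_enqueue_event(ctrl, NVME_AER_TYPE_NOTICE,
                                     NVME_AER_INFO_NOTICE_NS_ATTR_CHANGED,
                                     NVME_LOG_CHANGED_NSLIST);
              }
          }
      } else {
          /* private namespace: only the owning controller needs the
           * event */
          nvme_enqueue_event(n, NVME_AER_TYPE_NOTICE,
                             NVME_AER_INFO_NOTICE_NS_ATTR_CHANGED,
                             NVME_LOG_CHANGED_NSLIST);
      }
  }

wired up via hc->plug = nvme_bus_plug next to the unplug callback in the bus
class init.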