RE: [PATCH v1 0/5] target/arm: Handle psci calls in userspace

Posted by Salil Mehta via 9 months, 4 weeks ago
Hi Shaoqin,
Just saw this. Apologies, I missed replying to this earlier as I was a bit
disconnected for the last few days.


> From: Shaoqin Huang <shahuang@redhat.com>
> Sent: Tuesday, June 27, 2023 3:35 AM

> Hi Salil,
> 
> On 6/26/23 21:42, Salil Mehta wrote:
> >> From: Shaoqin Huang <shahuang@redhat.com>
> >> Sent: Monday, June 26, 2023 7:49 AM
> >> To: qemu-devel@nongnu.org; qemu-arm@nongnu.org
> >> Cc: oliver.upton@linux.dev; Salil Mehta <salil.mehta@huawei.com>;
> >> james.morse@arm.com; gshan@redhat.com; Shaoqin Huang <shahuang@redhat.com>;
> >> Cornelia Huck <cohuck@redhat.com>; kvm@vger.kernel.org; Michael S. Tsirkin
> >> <mst@redhat.com>; Paolo Bonzini <pbonzini@redhat.com>; Peter Maydell <peter.maydell@linaro.org>
> >> Subject: [PATCH v1 0/5] target/arm: Handle psci calls in userspace
> >>
> >> The userspace SMCCC call filtering[1] provides the ability to forward SMCCC
> >> calls to userspace. vCPU hotplug[2] would be the first legitimate use case
> >> for handling PSCI calls in userspace, so vCPU hotplug can deny the
> >> PSCI_ON call if the vCPU is not currently present.
> >>
> >> This series tries to enable the userspace SMCCC call filtering, so SMCCC
> >> calls can be handled in userspace. The first SMCCC calls enabled are the PSCI
> >> calls; using the newly added option 'user-smccc', we can enable handling PSCI calls in userspace.
> >>
> >> qemu-system-aarch64 -machine virt,user-smccc=on
> >>
> >> This series reuses the QEMU implementation of the PSCI handling, so the
> >> handling process is very simple. But when handling PSCI in userspace with
> >> KVM, the vCPU reset process needs to be taken care of; the details are in
> >> patch 05.
> >
> > This change is intended for vCPU hotplug and duplicates the code we
> > are working on. Unless this change is also intended for some other
> > feature, I would request you to defer it.
> 
> Thanks for sharing the information with me. I'm not intending to merge this
> series, but to discuss something about vCPU hotplug, since I'm also
> following the vCPU hotplug work.

Sure. I am not against this work in any way, but there was a bit of an overlap
and I was trying to avoid that.

> 
> Just curious, what is your plan to update a new version of VCPU Hotplug
> which is based on the userspace SMCCC filtering?

We have already incorporated this. We have not tested it properly yet, though,
and there are some migration-related issues we are fixing.

I did mention this in the KVM Forum 2023 presentation as well.

Latest QEMU prototype (pre-RFC V2; not in the final shape of the patches)
https://github.com/salil-mehta/qemu.git   virt-cpuhp-armv8/rfc-v1-port11052023.dev-1


It should work against the kernel changes below, as confirmed by James:

Latest Kernel Prototype (Pre RFC V2 = RFC V1 + Fixes) 
https://git.gitlab.arm.com/linux-arm/linux-jm.git   virtual_cpu_hotplug/rfc/v2  


We have not added support for user configurability, which your patch-set does.


Many thanks
Salil.


Re: [PATCH v1 0/5] target/arm: Handle psci calls in userspace
Posted by Gavin Shan 9 months, 2 weeks ago
Hi Salil,

On 7/4/23 19:58, Salil Mehta wrote:

> 
> Latest Qemu Prototype (Pre RFC V2) (Not in the final shape of the patches)
> https://github.com/salil-mehta/qemu.git   virt-cpuhp-armv8/rfc-v1-port11052023.dev-1
> 
> 
> should work against below kernel changes as confirmed by James,
> 
> Latest Kernel Prototype (Pre RFC V2 = RFC V1 + Fixes)
> https://git.gitlab.arm.com/linux-arm/linux-jm.git   virtual_cpu_hotplug/rfc/v2
> 

I think it'd be better to have the discussions on the mailing list. The threads and all
follow-up replies can be archived somewhere to avoid being lost. Besides, other people may
be interested in the same points and can join the discussion directly.

I got a chance to give the RFC patchsets some tests. Not all cases work
as expected. I know the patchset is being polished; I'll summarize the issues below:

(1) A coredump is triggered when the topology is out of range. It's the issue we
     discussed in private. Here I'm just recapping in case other people are also blocked
     by the issue.

     (a) start VM with the following command lines
      /home/gavin/sandbox/qemu.main/build/qemu-system-aarch64       \
      -accel kvm -machine virt,gic-version=host,nvdimm=on -cpu host \
      -smp cpus=1,maxcpus=2,sockets=1,clusters=1,cores=1,threads=2  \
      -m 512M,slots=16,maxmem=64G                                   \
      -object memory-backend-ram,id=mem0,size=512M                  \
      -numa node,nodeid=0,cpus=0-1,memdev=mem0                      \

     (b) hot add CPU whose topology is out of range
     (qemu) device_add driver=host-arm-cpu,id=cpu1,core-id=1


     It's actually caused by a typo in hw/arm/virt.c::virt_cpu_pre_plug(), where
     'ms->possible_cpus->len' needs to be replaced with 'ms->smp.cores'. With this,
     the out-of-range hot-added CPU object will be rejected.

(2) I don't think TCG has been tested, since it doesn't seem to work at all.

     (a) start VM with the following command lines
     /home/gshan/sandbox/src/qemu/main/build/qemu-system-aarch64     \
     -machine virt,gic-version=3 -cpu max -m 1024                    \
     -smp maxcpus=2,cpus=1,sockets=1,clusters=1,cores=1,threads=2    \

     (b) failure while hot-adding CPU
     (qemu) device_add driver=max-arm-cpu,id=cpu1,thread-id=1
     Error: cpu(id1=0:0:0:1) with arch-id 1 exists

     The error message is printed by hw/arm/virt.c::virt_cpu_pre_plug() when the
     specific CPU is already present. In the KVM case, the disabled CPUs are detached
     from 'ms->possible_cpus->cpus[1].cpu' and destroyed. I think we need to do a similar
     thing for the TCG case in hw/arm/virt.c::virt_cpu_post_init(). I was able to add a CPU
     with the following hunk of changes.

--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -2122,6 +2122,18 @@ static void virt_cpu_post_init(VirtMachineState *vms, MemoryRegion *sysmem)
                  exit(1);
              }
          }
+
+#if 1
+        for (n = 0; n < possible_cpus->len; n++) {
+            cpu = qemu_get_possible_cpu(n);
+            if (!qemu_enabled_cpu(cpu)) {
+                CPUArchId *cpu_slot;
+                cpu_slot = virt_find_cpu_slot(ms, cpu->cpu_index);
+                cpu_slot->cpu = NULL;
+                object_unref(OBJECT(cpu));
+            }
+        }
+#endif
      }
  }

(3) Assertion on following the sequence of hot-add, hot-remove and hot-add when TCG mode is enabled.

     (a) Include the hack from (2) and start VM with the following command lines
     /home/gshan/sandbox/src/qemu/main/build/qemu-system-aarch64     \
     -machine virt,gic-version=3 -cpu max -m 1024                    \
     -smp maxcpus=2,cpus=1,sockets=1,clusters=1,cores=1,threads=2    \

     (b) assertion on the sequence of hot-add, hot-remove and hot-add
     (qemu) device_add driver=max-arm-cpu,id=cpu1,thread-id=1
     (qemu) device_del cpu1
     (qemu) device_add driver=max-arm-cpu,id=cpu1,thread-id=1
     **
     ERROR:../tcg/tcg.c:669:tcg_register_thread: assertion failed: (n < tcg_max_ctxs)
     Bail out! ERROR:../tcg/tcg.c:669:tcg_register_thread: assertion failed: (n < tcg_max_ctxs)
     Aborted (core dumped)

     I'm not sure if x86 has a similar issue. It seems the management of TCG contexts,
     corresponding to the variables @tcg_max_ctxs and @tcg_ctxs, needs some improvement so that
     TCG context registration and unregistration accommodate CPU hotplug.


Apart from what has been found in the tests, I've started to look into the code changes. I may
reply with more specific comments. However, it would be ideal to comment on the specific changes
after the patchset is posted for review. Salil, the plan may have been mentioned by you somewhere.
As I understand it, the QEMU patchset will be posted after James's RFCv2 kernel series is posted.
Please let me know if my understanding is correct. Again, thanks for your efforts to get vCPU
hotplug supported :)

Thanks,
Gavin