Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=1512
Repo:     https://github.com/lersek/edk2.git
Branch:   vcpu_hotplug_smm_bz_1512

This series implements VCPU hotplug with SMM for OVMF, i.e., when OVMF
is built with "-D SMM_REQUIRE".

SEV support and hot-unplug support are out of scope for now.

Patch#13 ("OvmfPkg/CpuHotplugSmm: complete root MMI handler for CPU
hotplug") describes tests and results in the Notes section.

Obviously this is not being proposed for the edk2-stable202002 tag
(which is in hard feature freeze).

QEMU needs patches for this feature, too. I've done the development
against a QEMU patch that Igor hacked up quickly at my request. It was
never posted (it needs some polish for upstreaming), but it has allowed
me to write and test this feature.

The key parts of the QEMU commit message are:

> x68:acpi: trigger SMI before scanning for added/removed CPUs
>
> Firmware should only scan for new CPUs and not modify events in CPU
> hotplug registers.

Igor's patch is based on upstream QEMU commit 418fa86dd465.
Until he decides to post or otherwise share the patch, its effect can be
expressed with a diff, taken in the Linux guest, between decompiled
before/after versions of the QEMU-generated DSDT:

> @@ -81,6 +81,27 @@
>             Return (Arg3)
>         }
>     }
> +
> +   Device (SMI0)
> +   {
> +       Name (_HID, "PNP0A06" /* Generic Container Device */)  // _HID: Hardware ID
> +       Name (_UID, "SMI resources")  // _UID: Unique ID
> +       Name (_STA, 0x0B)  // _STA: Status
> +       Name (_CRS, ResourceTemplate ()  // _CRS: Current Resource Settings
> +       {
> +           IO (Decode16,
> +               0x00B2,             // Range Minimum
> +               0x00B2,             // Range Maximum
> +               0x01,               // Alignment
> +               0x01,               // Length
> +               )
> +       })
> +       OperationRegion (SMIR, SystemIO, 0xB2, One)
> +       Field (SMIR, ByteAcc, NoLock, WriteAsZeros)
> +       {
> +           SMIC,   8
> +       }
> +   }
> }
>
> Scope (_SB)
> @@ -3016,6 +3037,7 @@
>     Method (CSCN, 0, Serialized)
>     {
>         Acquire (\_SB.PCI0.PRES.CPLK, 0xFFFF)
> +       \_SB.SMI0.SMIC = 0x04
>         Local0 = One
>         While ((Local0 == One))
>         {

where the CSCN ("CPU scan") method is the _E02 GPE ("CPU hotplug") event
handler:

> Method (\_GPE._E02, 0, NotSerialized)  // _Exx: Edge-Triggered GPE, xx=0x00-0xFF
> {
>     \_SB.CPUS.CSCN ()
> }

If you'd like to test this series, please ask Igor for the QEMU patch.
:)

The series has been formatted for review with the following options:

  --stat=1000 --stat-graph-width=20 \
  --unified=22 \
  --find-copies=43 --find-copies-harder \
  --base=master

At every stage in the series:
- the tree builds,
- "PatchCheck.py" is happy,
- and OVMF works without regressions.

(Hotplug is made functional at patch#13, and "S3 after hotplug" is
completed at patch#16. So those actions should not be attempted before
said respective patches.)
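For reference, the decompiled-DSDT comparison described above can be reproduced in a Linux guest with the acpica-tools package (`acpidump` and `iasl`). The sketch below is a dry run that only prints the commands rather than executing them; the `dsdt-before.dsl` file name and the `run` helper are illustrative, not part of the series.

```shell
# Dry-run sketch: prints (rather than executes) the guest-side commands
# that produce a decompiled before/after DSDT diff with acpica-tools.
# To actually run it in a guest, replace 'echo "$@"' with '"$@"'.
# File names follow the acpica defaults; the layout is illustrative.
LOG=""
run() { LOG="$LOG$*;"; echo "$@"; }

run acpidump -n DSDT -b             # dumps the DSDT table to dsdt.dat
run iasl -d dsdt.dat                # decompiles dsdt.dat to dsdt.dsl
run mv dsdt.dsl dsdt-before.dsl     # keep the pre-patch version
# ...switch to the patched QEMU, reboot the guest, re-dump and
# re-decompile the DSDT, then compare the two decompilations...
run diff -u dsdt-before.dsl dsdt.dsl
```

In the "after" output, the `\_SB.SMI0.SMIC = 0x04` store is the line that makes the GPE handler raise the SMI before scanning for CPUs.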
Thanks,
Laszlo

Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Hao A Wu <hao.a.wu@intel.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Philippe Mathieu-Daudé <philmd@redhat.com>
Cc: Ray Ni <ray.ni@intel.com>

Thanks
Laszlo

Laszlo Ersek (16):
  MdeModulePkg/PiSmmCore: log SMM image start failure
  UefiCpuPkg/PiSmmCpuDxeSmm: fix S3 Resume for CPU hotplug
  OvmfPkg: clone SmmCpuPlatformHookLib from UefiCpuPkg
  OvmfPkg: enable SMM Monarch Election in PiSmmCpuDxeSmm
  OvmfPkg: enable CPU hotplug support in PiSmmCpuDxeSmm
  OvmfPkg/CpuHotplugSmm: introduce skeleton for CPU Hotplug SMM driver
  OvmfPkg/CpuHotplugSmm: add hotplug register block helper functions
  OvmfPkg/CpuHotplugSmm: define the QEMU_CPUHP_CMD_GET_ARCH_ID macro
  OvmfPkg/CpuHotplugSmm: add function for collecting CPUs with events
  OvmfPkg/CpuHotplugSmm: collect CPUs with events
  OvmfPkg/CpuHotplugSmm: introduce Post-SMM Pen for hot-added CPUs
  OvmfPkg/CpuHotplugSmm: introduce First SMI Handler for hot-added CPUs
  OvmfPkg/CpuHotplugSmm: complete root MMI handler for CPU hotplug
  OvmfPkg: clone CpuS3DataDxe from UefiCpuPkg
  OvmfPkg/CpuS3DataDxe: superficial cleanups
  OvmfPkg/CpuS3DataDxe: enable S3 resume after CPU hotplug

 MdeModulePkg/Core/PiSmmCore/Dispatcher.c              |   6 +
 OvmfPkg/CpuHotplugSmm/ApicId.h                        |  23 ++
 OvmfPkg/CpuHotplugSmm/CpuHotplug.c                    | 426 ++++++++++++++++++++
 OvmfPkg/CpuHotplugSmm/CpuHotplugSmm.inf               |  64 +++
 OvmfPkg/CpuHotplugSmm/FirstSmiHandler.nasm            | 149 +++++++
 OvmfPkg/CpuHotplugSmm/FirstSmiHandlerContext.h        |  41 ++
 OvmfPkg/CpuHotplugSmm/PostSmmPen.nasm                 | 137 +++++++
 OvmfPkg/CpuHotplugSmm/QemuCpuhp.c                     | 301 ++++++++++++++
 OvmfPkg/CpuHotplugSmm/QemuCpuhp.h                     |  61 +++
 OvmfPkg/CpuHotplugSmm/Smbase.c                        | 252 ++++++++++++
 OvmfPkg/CpuHotplugSmm/Smbase.h                        |  46 +++
 OvmfPkg/Include/IndustryStandard/Q35MchIch9.h         |   5 +-
 OvmfPkg/Include/IndustryStandard/QemuCpuHotplug.h     |   3 +
 OvmfPkg/OvmfPkgIa32.dsc                               |   7 +-
 OvmfPkg/OvmfPkgIa32.fdf                               |   3 +-
 OvmfPkg/OvmfPkgIa32X64.dsc                            |   7 +-
 OvmfPkg/OvmfPkgIa32X64.fdf                            |   3 +-
 OvmfPkg/OvmfPkgX64.dsc                                |   7 +-
 OvmfPkg/OvmfPkgX64.fdf                                |   3 +-
 UefiCpuPkg/Library/SmmCpuPlatformHookLibNull/SmmCpuPlatformHookLibNull.c => OvmfPkg/Library/SmmCpuPlatformHookLibQemu/SmmCpuPlatformHookLibQemu.c | 45 ++-
 UefiCpuPkg/Library/SmmCpuPlatformHookLibNull/SmmCpuPlatformHookLibNull.inf => OvmfPkg/Library/SmmCpuPlatformHookLibQemu/SmmCpuPlatformHookLibQemu.inf | 24 +-
 UefiCpuPkg/PiSmmCpuDxeSmm/CpuS3.c                     |  14 +-
 {UefiCpuPkg => OvmfPkg}/CpuS3DataDxe/CpuS3Data.c      |  99 +++--
 {UefiCpuPkg => OvmfPkg}/CpuS3DataDxe/CpuS3DataDxe.inf |  30 +-
 24 files changed, 1667 insertions(+), 89 deletions(-)
 copy UefiCpuPkg/Library/SmmCpuPlatformHookLibNull/SmmCpuPlatformHookLibNull.c => OvmfPkg/Library/SmmCpuPlatformHookLibQemu/SmmCpuPlatformHookLibQemu.c (61%)
 copy UefiCpuPkg/Library/SmmCpuPlatformHookLibNull/SmmCpuPlatformHookLibNull.inf => OvmfPkg/Library/SmmCpuPlatformHookLibQemu/SmmCpuPlatformHookLibQemu.inf (43%)
 copy {UefiCpuPkg => OvmfPkg}/CpuS3DataDxe/CpuS3Data.c (77%)
 copy {UefiCpuPkg => OvmfPkg}/CpuS3DataDxe/CpuS3DataDxe.inf (69%)
 create mode 100644 OvmfPkg/CpuHotplugSmm/ApicId.h
 create mode 100644 OvmfPkg/CpuHotplugSmm/CpuHotplug.c
 create mode 100644 OvmfPkg/CpuHotplugSmm/CpuHotplugSmm.inf
 create mode 100644 OvmfPkg/CpuHotplugSmm/FirstSmiHandler.nasm
 create mode 100644 OvmfPkg/CpuHotplugSmm/FirstSmiHandlerContext.h
 create mode 100644 OvmfPkg/CpuHotplugSmm/PostSmmPen.nasm
 create mode 100644 OvmfPkg/CpuHotplugSmm/QemuCpuhp.c
 create mode 100644 OvmfPkg/CpuHotplugSmm/QemuCpuhp.h
 create mode 100644 OvmfPkg/CpuHotplugSmm/Smbase.c
 create mode 100644 OvmfPkg/CpuHotplugSmm/Smbase.h

base-commit: 1d3215fd24f47eaa4877542a59b4bbf5afc0cfe8
--
2.19.1.3.g30247aa5d201


-=-=-=-=-=-=-=-=-=-=-=-
Groups.io Links: You receive all messages sent to this group.
View/Reply Online (#54734): https://edk2.groups.io/g/devel/message/54734
Mute This Topic: https://groups.io/mt/71494209/1787277
Group Owner: devel+owner@edk2.groups.io
Unsubscribe: https://edk2.groups.io/g/devel/unsub [importer@patchew.org]
-=-=-=-=-=-=-=-=-=-=-=-
Hi Laszlo,

Looks OVMF supports the CPU hotplug with those series patches.

Could you provide some guide how to enable the OVMF CPU hotplug
verification? Is there any general work flow introduction how it works?
For example, how to do the hot add CPU initialization (e.g. Register
setting / Microcode update, etc.)? I'm very interested in this feature
on OVMF.

Thank you very much.
Jiaxin

> -----Original Message-----
> From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of Laszlo
> Ersek
> Sent: Monday, February 24, 2020 1:25 AM
> To: edk2-devel-groups-io <devel@edk2.groups.io>
> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>; Dong, Eric
> <eric.dong@intel.com>; Wu, Hao A <hao.a.wu@intel.com>; Igor Mammedov
> <imammedo@redhat.com>; Wang, Jian J <jian.j.wang@intel.com>; Yao,
> Jiewen <jiewen.yao@intel.com>; Justen, Jordan L
> <jordan.l.justen@intel.com>; Kinney, Michael D
> <michael.d.kinney@intel.com>; Philippe Mathieu-Daudé
> <philmd@redhat.com>; Ni, Ray <ray.ni@intel.com>
> Subject: [edk2-devel] [PATCH 00/16] OvmfPkg: support VCPU hotplug with
> -D SMM_REQUIRE
>
> Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=1512
> Repo: https://github.com/lersek/edk2.git
> Branch: vcpu_hotplug_smm_bz_1512
>
> This series implements VCPU hotplug with SMM for OVMF, i.e., when OVMF
> is built with "-D SMM_REQUIRE".
>
> [...]


-=-=-=-=-=-=-=-=-=-=-=-
Groups.io Links: You receive all messages sent to this group.
View/Reply Online (#63226): https://edk2.groups.io/g/devel/message/63226
Mute This Topic: https://groups.io/mt/71494209/1787277
Group Owner: devel+owner@edk2.groups.io
Unsubscribe: https://edk2.groups.io/g/devel/unsub [importer@patchew.org]
-=-=-=-=-=-=-=-=-=-=-=-
On 07/24/20 08:26, Wu, Jiaxin wrote:
> Hi Laszlo,
>
> Looks OVMF supports the CPU hotplug with those series patches.
>
> Could you provide some guide how to enable the OVMF CPU hotplug
> verification? Is there any general work flow introduction how it
> works? For example, how to do the hot add CPU initialization (e.g.
> Register setting / Microcode update, etc.)? I'm very interested in
> this feature on OVMF.

Long version:
-------------

(1) There are three pieces missing:

(1a) The QEMU side changes for the ACPI (DSDT) content that QEMU
generates for the OS.

The ACPI GPE handler for CPU hotplug is being modified by my colleague
Igor Mammedov to raise the SMI (command value 4) on CPU hotplug.

For developing the OVMF series for TianoCore#1512 (which is now merged),
I used a prototype QEMU patch from Igor. But that patch is not suitable
for upstreaming to QEMU. So Igor is now developing the real patches for
QEMU's ACPI generator.

(1b) The related feature negotiation patches in QEMU.

In order for "CPU hotplug with SMM" to work, both OVMF and QEMU need to
perform specific things. In order to deal with cross-version
compatibility problems, the "CPU hotplug with SMI" feature is
dynamically negotiated between OVMF and QEMU. For this negotiation, both
QEMU and OVMF need additional patches. These patches are not related to
the actual plugging activities; instead they control whether plugging is
permitted at all, or not.

Igor's QEMU series covers both purposes (1a) and (1b). It's work in
progress.
The first posting was an RFC series:

(1b1) [RFC 0/3] x86: fix cpu hotplug with secure boot
https://lists.gnu.org/archive/html/qemu-devel/2020-07/msg03746.html
http://mid.mail-archive.com/20200710161704.309824-1-imammedo@redhat.com

The latest posting has been a PATCH series:

(1b2) [qemu-devel] [PATCH 0/6] x86: fix cpu hotplug with secure boot
https://lists.gnu.org/archive/html/qemu-devel/2020-07/msg05850.html
http://mid.mail-archive.com/20200720141610.574308-1-imammedo@redhat.com

(1c) The feature negotiation patch for OVMF is here:

* [edk2-devel] [PATCH] OvmfPkg/SmmControl2Dxe: negotiate
  ICH9_LPC_SMI_F_CPU_HOTPLUG
  https://edk2.groups.io/g/devel/message/62561
  http://mid.mail-archive.com/20200714184305.9814-1-lersek@redhat.com

(2) Special register setting and microcode stuff are not needed.

(3) As I mentioned before, I strongly suggest using QEMU and OVMF with
libvirt. I had written an article about that here:

https://github.com/tianocore/tianocore.github.io/wiki/Testing-SMM-with-QEMU,-KVM-and-libvirt

I wrote this article specifically for "Windows-based" developers. The
article is written from such a perspective that you don't need a
personal Linux workstation, only a single Linux workstation *per team*.
So you can continue using a Windows workstation; just set up one Linux
box for your team (if you don't yet have one).

This article remains relevant.

(3a) In order to set up a guest for VCPU hotplug, simply go through the
article, initially.
(3b) Once you're done with that, power down the guest, and modify the
domain XML as follows:

  virsh edit <DOMAIN_NAME>

(3b1) replace the "pc-q35-2.9" machine type with "pc-q35-5.1"

(3b2) replace the following stanza:

  <vcpu placement='static'>4</vcpu>

with:

  <vcpu placement='static' current='2'>4</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no'/>
    <vcpu id='1' enabled='no' hotpluggable='yes'/>
    <vcpu id='2' enabled='yes' hotpluggable='yes'/>
    <vcpu id='3' enabled='no' hotpluggable='yes'/>
  </vcpus>

This will create a VCPU topology where:

- CPU#0 is present up-front, and is not hot-pluggable (this is a QEMU
  requirement),

- CPU#1, CPU#2, and CPU#3 are hot-pluggable,

- CPU#2 is present up-front ("cold-plugged"), while CPU#1 and CPU#3 are
  absent initially.

(4) Boot the guest. Once you have a root prompt in the guest, you can
use one of two libvirt commands for hot-plugging a CPU:

(4a) the singular "virsh setvcpu" command:

  virsh setvcpu <DOMAIN_NAME> <PROCESSOR_ID> --enable --live

where you can pass in 1 or 3 for <PROCESSOR_ID>.

This command lets you specify the precise ID of the processor to be
hot-plugged; IOW, the command lets you control topology.

(4b) the plural "virsh setvcpus" command:

  virsh setvcpus <DOMAIN_NAME> <PROCESSOR_COUNT> --live

This command lets you specify the desired number of total active CPUs.
It does not let you control topology. (My understanding is that it keeps
the topology populated at the "front".)

Regarding the current QEMU status, we need to do more work for
supporting (4b). The RFC series (1b1) enables (4a) to work. The PATCH
series (1b2) intended to make (4b) work, but unfortunately it broke even
(4a). So now we need at least one more version of the QEMU series (I've
given my feedback to Igor already, on qemu-devel).
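Taken together, steps (3b) through (4a) amount to a short host-side session. The sketch below is a dry run that only prints the virsh commands instead of executing them; the domain name "ovmf-smm-guest" and the `run` helper are placeholders, not anything prescribed by the series.

```shell
# Dry-run sketch of the host-side hot-plug flow from (3b)-(4a): prints
# the virsh commands instead of executing them (replace 'echo "$@"'
# with '"$@"' on a real host). 'ovmf-smm-guest' is a placeholder name.
LOG=""
DOMAIN=ovmf-smm-guest
run() { LOG="$LOG$*;"; echo "$@"; }

run virsh edit "$DOMAIN"                       # (3b) machine type + <vcpus> stanza
run virsh start "$DOMAIN"                      # boot with CPU#0 and CPU#2 enabled
run virsh setvcpu "$DOMAIN" 1 --enable --live  # (4a) hot-plug the VCPU with id 1
run virsh setvcpu "$DOMAIN" 3 --enable --live  # (4a) hot-plug the VCPU with id 3
run virsh vcpucount "$DOMAIN"                  # check the live VCPU count
```

On a real host, the final `virsh vcpucount` should report four live VCPUs after both hot-plugs; depending on the guest distribution, the new VCPUs may additionally need the (4c) onlining step inside the guest.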
(4c) Dependent on guest OS configuration, you might have to manually
online the newly plugged CPUs in the guest:

  echo 1 > /sys/devices/system/cpu/cpu2/online
  echo 1 > /sys/devices/system/cpu/cpu3/online

Note that the "cpuN" identifiers seen here are *neither* APIC IDs *nor*
the same IDs as seen in the libvirt domain XML. Instead, these IDs are
assigned in the order the Linux kernel learns about the CPUs (if I
understand correctly).

Short version:
--------------

- apply (1b1) on top of latest QEMU master from git, and build and
  install it,

- apply (1c) on latest edk2, and build OVMF with "-D SMM_REQUIRE",

- install a Linux guest on a Linux host (using KVM!) as described in my
  Wiki article (3),

- modify the domain XML for the guest as described in (3b),

- use the singular "virsh setvcpu" command (4a) for hot-plugging VCPU#1
  and/or VCPU#3,

- if necessary, use (4c) in the guest.

You can do the same with Windows Server guests too, although I'm not
exactly sure what versions support CPU hotplug. For testing I've used
Windows Server 2012 R2. The Wiki article at (3) has a section dedicated
to installing Windows guests too.

Thanks,
Laszlo


-=-=-=-=-=-=-=-=-=-=-=-
Groups.io Links: You receive all messages sent to this group.
View/Reply Online (#63248): https://edk2.groups.io/g/devel/message/63248
Mute This Topic: https://groups.io/mt/71494209/1787277
Group Owner: devel+owner@edk2.groups.io
Unsubscribe: https://edk2.groups.io/g/devel/unsub [importer@patchew.org]
-=-=-=-=-=-=-=-=-=-=-=-
Hi Laszlo,

Appreciate your feedback! Thank you very much.

Jiaxin

> -----Original Message-----
> From: Laszlo Ersek <lersek@redhat.com>
> Sent: Saturday, July 25, 2020 12:02 AM
> To: Wu, Jiaxin <jiaxin.wu@intel.com>
> Cc: devel@edk2.groups.io
> Subject: Re: [edk2-devel] [PATCH 00/16] OvmfPkg: support VCPU hotplug
> with -D SMM_REQUIRE
>
> On 07/24/20 08:26, Wu, Jiaxin wrote:
> > Hi Laszlo,
> >
> > Looks OVMF supports the CPU hotplug with those series patches.
> [...]


-=-=-=-=-=-=-=-=-=-=-=-
Groups.io Links: You receive all messages sent to this group.
View/Reply Online (#63436): https://edk2.groups.io/g/devel/message/63436
Mute This Topic: https://groups.io/mt/71494209/1787277
Group Owner: devel+owner@edk2.groups.io
Unsubscribe: https://edk2.groups.io/g/devel/unsub [importer@patchew.org]
-=-=-=-=-=-=-=-=-=-=-=-
Hello Jiaxin,

On 07/29/20 10:37, Wu, Jiaxin wrote:
> Hi Laszlo,
>
> Appreciate your feedback! Thank you very much.

if you are still interested in using this feature, I have some good
news.

As of current edk2 master (to be entirely precise: as of edk2 commit
cbccf995920a), and as of upstream QEMU commit 6e2e2e8a4220, the "VCPU
hotplug with SMM" feature is considered complete.

Both the singular "virsh setvcpu" command (which gives you precise NUMA
affinity for the VCPU being hot-plugged), and the plural "virsh
setvcpus" command, are expected to work. (For plugging.)

I'm repeating a short summary here about the necessary configuration on
your end:

* install a Linux guest as described in my Wiki article:
  https://github.com/tianocore/tianocore.github.io/wiki/Testing-SMM-with-QEMU,-KVM-and-libvirt

* modify the domain XML for the guest:

  - replace the "pc-q35-2.9" machine type with "pc-q35-5.2"

  - replace the following stanza:

      <vcpu placement='static'>4</vcpu>

    with:

      <vcpu placement='static' current='2'>4</vcpu>
      <vcpus>
        <vcpu id='0' enabled='yes' hotpluggable='no'/>
        <vcpu id='1' enabled='no' hotpluggable='yes'/>
        <vcpu id='2' enabled='yes' hotpluggable='yes'/>
        <vcpu id='3' enabled='no' hotpluggable='yes'/>
      </vcpus>

* With Linux guests, you may have to online the just-plugged VCPUs in
  the guest as a separate step (dependent on your guest Linux
  distribution). For this, you can either use (in the guest) commands
  like

    echo 1 > /sys/devices/system/cpu/cpu2/online
    echo 1 > /sys/devices/system/cpu/cpu3/online

  (note that the numbering is in order of the Linux guest learning of
  the processors). Alternatively, in case you used the plural "virsh
  setvcpus" command, *and* you have the guest agent running in your
  guest, you can follow up with

    virsh setvcpus ... --guest

  on the host side (the "--guest" option being the important one). This
  will ask the QEMU guest agent to perform the same onlining as the
  explicit "echo" commands above.
* Windows guests do not need separate onlining of the hot-added CPUs.
  The oldest Windows release that (to my knowledge) supports CPU
  hotplug is Windows Server 2008 R2 *Datacenter* edition.

Thanks
Laszlo
On Sun, 23 Feb 2020 at 18:25, Laszlo Ersek <lersek@redhat.com> wrote:
>
> Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=1512
> Repo:     https://github.com/lersek/edk2.git
> Branch:   vcpu_hotplug_smm_bz_1512
>
> This series implements VCPU hotplug with SMM for OVMF, i.e., when OVMF
> is built with "-D SMM_REQUIRE".
>
> SEV support and hot-unplug support are out of scope for now.
>
> Patch#13 ("OvmfPkg/CpuHotplugSmm: complete root MMI handler for CPU
> hotplug") describes tests and results in the Notes section.
>
> Obviously this is not being proposed for the edk2-stable202002 tag
> (which is in hard feature freeze).
>
> QEMU needs patches for this feature, too. I've done the development
> against a QEMU patch that Igor hacked up quickly at my request. It was
> never posted (it needs some polish for upstreaming), but it has
> allowed me to write and test this feature.
>
> The key parts of the QEMU commit message are:
>
> > x68:acpi: trigger SMI before scanning for added/removed CPUs
> >
> > Firmware should only scan for new CPUs and not modify events in CPU
> > hotplug registers.
>
> Igor's patch is based on upstream QEMU commit 418fa86dd465.
> Until he decides to post or otherwise share the patch, its effect can
> be expressed with a diff, taken in the Linux guest, between decompiled
> before/after versions of the QEMU-generated DSDT:
>
> > @@ -81,6 +81,27 @@
> >                  Return (Arg3)
> >              }
> >          }
> > +
> > +        Device (SMI0)
> > +        {
> > +            Name (_HID, "PNP0A06" /* Generic Container Device */)  // _HID: Hardware ID
> > +            Name (_UID, "SMI resources")  // _UID: Unique ID
> > +            Name (_STA, 0x0B)  // _STA: Status
> > +            Name (_CRS, ResourceTemplate ()  // _CRS: Current Resource Settings
> > +            {
> > +                IO (Decode16,
> > +                    0x00B2,             // Range Minimum
> > +                    0x00B2,             // Range Maximum
> > +                    0x01,               // Alignment
> > +                    0x01,               // Length
> > +                    )
> > +            })
> > +            OperationRegion (SMIR, SystemIO, 0xB2, One)
> > +            Field (SMIR, ByteAcc, NoLock, WriteAsZeros)
> > +            {
> > +                SMIC,   8
> > +            }
> > +        }
> >      }
> >
> >      Scope (_SB)
> > @@ -3016,6 +3037,7 @@
> >          Method (CSCN, 0, Serialized)
> >          {
> >              Acquire (\_SB.PCI0.PRES.CPLK, 0xFFFF)
> > +            \_SB.SMI0.SMIC = 0x04
> >              Local0 = One
> >              While ((Local0 == One))
> >              {
>
> where the CSCN ("CPU scan") method is the _E02 GPE ("CPU hotplug")
> event handler:
>
> > Method (\_GPE._E02, 0, NotSerialized)  // _Exx: Edge-Triggered GPE, xx=0x00-0xFF
> > {
> >     \_SB.CPUS.CSCN ()
> > }
>
> If you'd like to test this series, please ask Igor for the QEMU patch.
> :)
>
> The series has been formatted for review with the following options:
>
>   --stat=1000 --stat-graph-width=20 \
>   --unified=22 \
>   --find-copies=43 --find-copies-harder \
>   --base=master \
>
> At every stage in the series:
> - the tree builds,
> - "PatchCheck.py" is happy,
> - and OVMF works without regressions.
>
> (Hotplug is made functional at patch#13, and "S3 after hotplug" is
> completed at patch#16. So those actions should not be attempted before
> said respective patches.)
>

I skimmed these patches, and it all looks reasonable to me, but I am by
no means knowledgeable on x86 SMM internals.
So provided the @intel.com folks on cc are happy with these changes,
and ack the series,

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Hao A Wu <hao.a.wu@intel.com>
> Cc: Igor Mammedov <imammedo@redhat.com>
> Cc: Jian J Wang <jian.j.wang@intel.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Jordan Justen <jordan.l.justen@intel.com>
> Cc: Michael Kinney <michael.d.kinney@intel.com>
> Cc: Philippe Mathieu-Daudé <philmd@redhat.com>
> Cc: Ray Ni <ray.ni@intel.com>
>
> Thanks
> Laszlo
>
> Laszlo Ersek (16):
>   MdeModulePkg/PiSmmCore: log SMM image start failure
>   UefiCpuPkg/PiSmmCpuDxeSmm: fix S3 Resume for CPU hotplug
>   OvmfPkg: clone SmmCpuPlatformHookLib from UefiCpuPkg
>   OvmfPkg: enable SMM Monarch Election in PiSmmCpuDxeSmm
>   OvmfPkg: enable CPU hotplug support in PiSmmCpuDxeSmm
>   OvmfPkg/CpuHotplugSmm: introduce skeleton for CPU Hotplug SMM driver
>   OvmfPkg/CpuHotplugSmm: add hotplug register block helper functions
>   OvmfPkg/CpuHotplugSmm: define the QEMU_CPUHP_CMD_GET_ARCH_ID macro
>   OvmfPkg/CpuHotplugSmm: add function for collecting CPUs with events
>   OvmfPkg/CpuHotplugSmm: collect CPUs with events
>   OvmfPkg/CpuHotplugSmm: introduce Post-SMM Pen for hot-added CPUs
>   OvmfPkg/CpuHotplugSmm: introduce First SMI Handler for hot-added CPUs
>   OvmfPkg/CpuHotplugSmm: complete root MMI handler for CPU hotplug
>   OvmfPkg: clone CpuS3DataDxe from UefiCpuPkg
>   OvmfPkg/CpuS3DataDxe: superficial cleanups
>   OvmfPkg/CpuS3DataDxe: enable S3 resume after CPU hotplug
>
>  MdeModulePkg/Core/PiSmmCore/Dispatcher.c          |   6 +
>  OvmfPkg/CpuHotplugSmm/ApicId.h                    |  23 ++
>  OvmfPkg/CpuHotplugSmm/CpuHotplug.c                | 426 ++++++++++++++++++++
>  OvmfPkg/CpuHotplugSmm/CpuHotplugSmm.inf           |  64 +++
>  OvmfPkg/CpuHotplugSmm/FirstSmiHandler.nasm        | 149 +++++++
>  OvmfPkg/CpuHotplugSmm/FirstSmiHandlerContext.h    |  41 ++
>  OvmfPkg/CpuHotplugSmm/PostSmmPen.nasm             | 137 +++++++
>  OvmfPkg/CpuHotplugSmm/QemuCpuhp.c                 | 301 ++++++++++++++
>  OvmfPkg/CpuHotplugSmm/QemuCpuhp.h                 |  61 +++
>  OvmfPkg/CpuHotplugSmm/Smbase.c                    | 252 ++++++++++++
>  OvmfPkg/CpuHotplugSmm/Smbase.h                    |  46 +++
>  OvmfPkg/Include/IndustryStandard/Q35MchIch9.h     |   5 +-
>  OvmfPkg/Include/IndustryStandard/QemuCpuHotplug.h |   3 +
>  OvmfPkg/OvmfPkgIa32.dsc                           |   7 +-
>  OvmfPkg/OvmfPkgIa32.fdf                           |   3 +-
>  OvmfPkg/OvmfPkgIa32X64.dsc                        |   7 +-
>  OvmfPkg/OvmfPkgIa32X64.fdf                        |   3 +-
>  OvmfPkg/OvmfPkgX64.dsc                            |   7 +-
>  OvmfPkg/OvmfPkgX64.fdf                            |   3 +-
>  UefiCpuPkg/Library/SmmCpuPlatformHookLibNull/SmmCpuPlatformHookLibNull.c => OvmfPkg/Library/SmmCpuPlatformHookLibQemu/SmmCpuPlatformHookLibQemu.c |  45 ++-
>  UefiCpuPkg/Library/SmmCpuPlatformHookLibNull/SmmCpuPlatformHookLibNull.inf => OvmfPkg/Library/SmmCpuPlatformHookLibQemu/SmmCpuPlatformHookLibQemu.inf |  24 +-
>  UefiCpuPkg/PiSmmCpuDxeSmm/CpuS3.c                 |  14 +-
>  {UefiCpuPkg => OvmfPkg}/CpuS3DataDxe/CpuS3Data.c  |  99 +++--
>  {UefiCpuPkg => OvmfPkg}/CpuS3DataDxe/CpuS3DataDxe.inf |  30 +-
>  24 files changed, 1667 insertions(+), 89 deletions(-)
>  copy UefiCpuPkg/Library/SmmCpuPlatformHookLibNull/SmmCpuPlatformHookLibNull.c => OvmfPkg/Library/SmmCpuPlatformHookLibQemu/SmmCpuPlatformHookLibQemu.c (61%)
>  copy UefiCpuPkg/Library/SmmCpuPlatformHookLibNull/SmmCpuPlatformHookLibNull.inf => OvmfPkg/Library/SmmCpuPlatformHookLibQemu/SmmCpuPlatformHookLibQemu.inf (43%)
>  copy {UefiCpuPkg => OvmfPkg}/CpuS3DataDxe/CpuS3Data.c (77%)
>  copy {UefiCpuPkg => OvmfPkg}/CpuS3DataDxe/CpuS3DataDxe.inf (69%)
>  create mode 100644 OvmfPkg/CpuHotplugSmm/ApicId.h
>  create mode 100644 OvmfPkg/CpuHotplugSmm/CpuHotplug.c
>  create mode 100644 OvmfPkg/CpuHotplugSmm/CpuHotplugSmm.inf
>  create mode 100644 OvmfPkg/CpuHotplugSmm/FirstSmiHandler.nasm
>  create mode 100644 OvmfPkg/CpuHotplugSmm/FirstSmiHandlerContext.h
>  create mode 100644 OvmfPkg/CpuHotplugSmm/PostSmmPen.nasm
>  create mode 100644 OvmfPkg/CpuHotplugSmm/QemuCpuhp.c
>  create mode 100644 OvmfPkg/CpuHotplugSmm/QemuCpuhp.h
>  create mode 100644 OvmfPkg/CpuHotplugSmm/Smbase.c
>  create mode 100644 OvmfPkg/CpuHotplugSmm/Smbase.h
>
>
> base-commit: 1d3215fd24f47eaa4877542a59b4bbf5afc0cfe8
> --
> 2.19.1.3.g30247aa5d201