[edk2-devel] [PATCH wave 1 00/10] support QEMU's "SMRAM at default SMBASE" feature

Laszlo Ersek posted 10 patches 4 years, 7 months ago
[edk2-devel] [PATCH wave 1 00/10] support QEMU's "SMRAM at default SMBASE" feature
Posted by Laszlo Ersek 4 years, 7 months ago
Ref:    https://bugzilla.tianocore.org/show_bug.cgi?id=1512
Repo:   https://github.com/lersek/edk2.git
Branch: smram_at_default_smbase_bz_1512_wave_1

This is the firmware-side patch series for the QEMU feature that Igor is
proposing in

  [Qemu-devel] [PATCH 0/2]
  q35: mch: allow to lock down 128K RAM at default SMBASE address

posted at

  http://mid.mail-archive.com/20190917130708.10281-1-imammedo@redhat.com
  https://lists.gnu.org/archive/html/qemu-devel/2019-09/msg03538.html
  https://edk2.groups.io/g/devel/message/47364

This OVMF patch series is supposed to be the first "wave" of patch sets,
working towards VCPU hotplug with SMM enabled.

We want the hot-plugged VCPU to execute its very first instruction from
SMRAM, running in SMM, for protection from OS and PCI DMA interference.
In the (long!) discussion at

  http://mid.mail-archive.com/20190905174503.2acaa46a@redhat.com
  https://edk2.groups.io/g/devel/message/46910
  https://bugzilla.tianocore.org/show_bug.cgi?id=1512#c9

we considered diverging from the specs and changing the default SMBASE
to an address different from 0x3_0000, so that it would point into
SMRAM. However, Igor had the idea to keep the default SMBASE intact, and
replace the underlying system RAM (128 KB of it) with lockable SMRAM.
This conforms to the language of the Intel SDM, namely, "[t]he actual
physical location of the SMRAM can be in system memory or in a separate
RAM memory". When firmware locks this QEMU-specific SMRAM, the latter
completely disappears from the address space of code that runs outside
of SMM (including PCI devices doing DMA). We call the remaining MMIO
area a "black hole".

The present patch set detects the QEMU feature, locks the SMRAM at the
right times, and informs firmware and OS components to stay away from
the SMRAM at the default SMBASE.

I've tested the set extensively:

  http://mid.mail-archive.com/c18f1ada-3eca-d5f1-da4f-e74181c5a862@redhat.com
  https://edk2.groups.io/g/devel/message/47864

Going forward, if I understand correctly, the plan is to populate the
new SMRAM with the hotplug SMI handler. (This would likely be put in
place by OvmfPkg/PlatformPei, from NASM source code. The
SmmRelocateBases() function in UefiCpuPkg/PiSmmCpuDxeSmm backs up the
original contents before, and restores them after, the initial SMBASE
relocation of the cold-plugged VCPUs. Thus the hotplug SMI handler would
survive the initial relocation of the cold-plugged VCPUs.)
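
For orientation, the backup/restore pattern is roughly the following (a
simplified C sketch; the identifiers and sizes are mine, not the actual
UefiCpuPkg code):

  //
  // Simplified sketch of the backup/restore around the initial SMBASE
  // relocation; identifiers and sizes are illustrative, not the actual
  // UefiCpuPkg code.
  //
  UINT8  Backup[BACKUP_SIZE];                      // BACKUP_SIZE: assumed
  UINT8  *DefaultSmbase;

  DefaultSmbase = (UINT8 *)(UINTN)0x30000;         // default SMBASE
  CopyMem (Backup, DefaultSmbase, sizeof Backup);  // stash hotplug handler
  //
  // ... relocate the SMBASE of every cold-plugged VCPU, using a
  // relocation handler placed at the default SMBASE ...
  //
  CopyMem (DefaultSmbase, Backup, sizeof Backup);  // hotplug handler back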

Further, when a VCPU is hot-plugged, we seem to intend to raise an SMI
for it (perhaps internally to QEMU?), which will remain pending until
the VCPU is launched with INIT-SIPI-SIPI. Once the INIT-SIPI-SIPI is
sent, the VCPU will immediately enter SMM, and run the SMI handler in
the locked SMRAM at the default SMBASE. At RSM, it may continue at the
vector requested in the INIT-SIPI-SIPI.

Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Joao M Martins <joao.m.martins@oracle.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Phillip Goerl <phillip.goerl@oracle.com>
Cc: Yingwen Chen <yingwen.chen@intel.com>

Thanks
Laszlo

Laszlo Ersek (10):
  OvmfPkg: introduce PcdQ35SmramAtDefaultSmbase
  OvmfPkg/IndustryStandard: increase vertical whitespace in Q35 macro
    defs
  OvmfPkg/IndustryStandard: add MCH_DEFAULT_SMBASE* register macros
  OvmfPkg/PlatformPei: factor out Q35BoardVerification()
  OvmfPkg/PlatformPei: detect SMRAM at default SMBASE (skeleton)
  OvmfPkg/PlatformPei: assert there's no permanent PEI RAM at default
    SMBASE
  OvmfPkg/PlatformPei: reserve the SMRAM at the default SMBASE, if it
    exists
  OvmfPkg/SEV: don't manage the lifecycle of the SMRAM at the default
    SMBASE
  OvmfPkg/SmmAccess: close and lock SMRAM at default SMBASE
  OvmfPkg/PlatformPei: detect SMRAM at default SMBASE (for real)

 OvmfPkg/Include/IndustryStandard/Q35MchIch9.h           | 106 +++++++++++---------
 OvmfPkg/Library/SmmCpuFeaturesLib/SmmCpuFeaturesLib.c   |  21 +++-
 OvmfPkg/Library/SmmCpuFeaturesLib/SmmCpuFeaturesLib.inf |   4 +
 OvmfPkg/OvmfPkg.dec                                     |   6 ++
 OvmfPkg/OvmfPkgIa32.dsc                                 |   1 +
 OvmfPkg/OvmfPkgIa32X64.dsc                              |   1 +
 OvmfPkg/OvmfPkgX64.dsc                                  |   1 +
 OvmfPkg/PlatformPei/AmdSev.c                            |  24 ++++-
 OvmfPkg/PlatformPei/MemDetect.c                         |  87 +++++++++++++---
 OvmfPkg/PlatformPei/Platform.c                          |  24 +++++
 OvmfPkg/PlatformPei/Platform.h                          |   7 ++
 OvmfPkg/PlatformPei/PlatformPei.inf                     |   1 +
 OvmfPkg/SmmAccess/SmmAccess2Dxe.c                       |   7 ++
 OvmfPkg/SmmAccess/SmmAccess2Dxe.inf                     |   1 +
 OvmfPkg/SmmAccess/SmmAccessPei.c                        |   6 ++
 OvmfPkg/SmmAccess/SmmAccessPei.inf                      |   1 +
 OvmfPkg/SmmAccess/SmramInternal.c                       |  25 +++++
 OvmfPkg/SmmAccess/SmramInternal.h                       |   8 ++
 18 files changed, 260 insertions(+), 71 deletions(-)

-- 
2.19.1.3.g30247aa5d201



Re: [edk2-devel] [PATCH wave 1 00/10] support QEMU's "SMRAM at default SMBASE" feature
Posted by Yao, Jiewen 4 years, 7 months ago
Hi Laszlo
May I know if you want to support legacy BIOS?

The legacy BIOS memory map is reported in E820, which is different from the UEFI memory map.

In OvmfPkg\Csm\LegacyBiosDxe\LegacyBootSupport.c, LegacyBiosBuildE820() hardcodes the range below 640K as available for OS use, without consulting the UEFI memory map.

  //
  // First entry is 0 to (640k - EBDA)
  //
  ACCESS_PAGE0_CODE (
    E820Table[0].BaseAddr  = 0;
    E820Table[0].Length    = (UINT64) ((*(UINT16 *) (UINTN)0x40E) << 4);
    E820Table[0].Type      = EfiAcpiAddressRangeMemory;
  );

If we want to support this feature in legacy BIOS, we also need to reserve the black hole here.

Thank you
Yao Jiewen

Re: [edk2-devel] [PATCH wave 1 00/10] support QEMU's "SMRAM at default SMBASE" feature
Posted by Laszlo Ersek 4 years, 7 months ago
On 09/26/19 10:46, Yao, Jiewen wrote:
> Hi Laszlo
> May I know if you want to support legacy BIOS?

No, thanks.

The end-goal is to implement VCPU hotplug at OS runtime, when the OS is
booted with OVMF, Secure Boot is enabled, and the firmware is protected
with SMM.

In other words, the goal is to support VCPU hotplug when there is a
privilege barrier between the firmware and the OS. (The whole complexity
of the feature arises because a hotplugged VCPU is a risk for that barrier.)

With legacy BIOS, there is no such barrier, and VCPU hotplug already
works with SeaBIOS.

Furthermore, if you build OVMF *without* "-D SMM_REQUIRE", then there is
no such barrier either, and VCPU hotplug already works well in that case
too. (I had tested it extensively, in RHBZ#1454803.)

In effect, I consider "-D SMM_REQUIRE" and "-D CSM_ENABLE" mutually
exclusive, for OVMF. And the present work is only relevant for "-D
SMM_REQUIRE".

We could perhaps add an "!error" directive to the OVMF DSC files,
conditional on both flags being enabled (SMM_REQUIRE and CSM_ENABLE). I
might propose such a patch in the future, but until it becomes a
practical problem, I don't want to spend cycles on it.

Thanks
Laszlo

Re: [edk2-devel] [PATCH wave 1 00/10] support QEMU's "SMRAM at default SMBASE" feature
Posted by Yao, Jiewen 4 years, 7 months ago
Sounds good.

Maybe you can write that info in the commit message. A simple sentence such as:
No CSM support is needed, because legacy BIOS boot doesn't use SMM.

So other people won't have the same question in the future.

With the commit message change, the series is Reviewed-by: jiewen.yao@intel.com

Thank you
Yao Jiewen

Re: [edk2-devel] [PATCH wave 1 00/10] support QEMU's "SMRAM at default SMBASE" feature
Posted by Laszlo Ersek 4 years, 6 months ago
(+David)

On 09/27/19 03:14, Yao, Jiewen wrote:
> Sounds good.
> 
> Maybe you can write that info in the commit message. A simple sentence such as:
> No CSM support is needed, because legacy BIOS boot doesn't use SMM.
> 
> So other people won't have the same question in the future.

I've given this more thought. I can't decide what the best approach is
without David's input.

David, here's the problem statement, stripped down: with a new QEMU
feature that's dynamically detected and enabled by the firmware, the
128KB memory area at 0x3_0000 becomes inaccessible to the OS. (This
subfeature is necessary for enabling the larger "VCPU hotplug with SMM"
feature.) Right now, this patch series communicates that fact to the UEFI
OS through the UEFI memory map, but it's not told to a legacy OS through
the E820 memmap.

Which option do you prefer?

(1) The feature depends on building OVMF with -D SMM_REQUIRE. We can say
that -D SMM_REQUIRE and -D CSM_ENABLE are mutually exclusive, and add an
!error directive to the DSC files to catch a build that is passed both
-D flags.
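
Something like this in each OvmfPkg*.dsc (assuming both macros keep
defaulting to FALSE, as they do today; the message text is just a
suggestion):

  !if $(SMM_REQUIRE) == TRUE && $(CSM_ENABLE) == TRUE
    !error "SMM_REQUIRE and CSM_ENABLE are mutually exclusive"
  !endif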

This option is the most convenient for me, but it's likely not flexible
enough for users that have no access to hypervisor configuration (such
as QEMU cmdline options and/or firmware binary choice). They might want
to pick UEFI boot vs. CSM boot "unpredictably".


(2) Jiewen suggested a patch for LegacyBiosBuildE820()
[OvmfPkg/Csm/LegacyBiosDxe/LegacyBootSupport.c], punching a hole into
E820Table[0].
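
For concreteness, the hole-punching might look like this (an
illustrative sketch only -- NOT code I'm proposing; 0x30000 is the
default SMBASE, the locked SMRAM covers 128 KB, and the subsequent
entries would have to shift as well):

  ACCESS_PAGE0_CODE (
    E820Table[0].BaseAddr  = 0;                   // 0 to default SMBASE
    E820Table[0].Length    = 0x30000;
    E820Table[0].Type      = EfiAcpiAddressRangeMemory;

    E820Table[1].BaseAddr  = 0x30000;             // the 128 KB black hole
    E820Table[1].Length    = 0x20000;
    E820Table[1].Type      = EfiAcpiAddressRangeReserved;

    E820Table[2].BaseAddr  = 0x50000;             // rest, up to (640K - EBDA)
    E820Table[2].Length    = (UINT64) ((*(UINT16 *) (UINTN)0x40E) << 4) - 0x50000;
    E820Table[2].Type      = EfiAcpiAddressRangeMemory;
  );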

While this would make the CSM honest about the memory map toward legacy
OSes, I strongly doubt legacy OSes would actually *work* with such a
memory map (i.e. with 128 KB missing in the middle of the <=640KB RAM).
It would require a bunch of testing anyway, and it could break with any
newly tried legacy OS.


(3) Modify OvmfPkg/SmmAccess/SmmAccess2Dxe to set up an event
notification function for the "gEfiEventLegacyBootGuid" event group, in
case the feature has been detected in PlatformPei. In the notification
function, log an error message, call ASSERT (FALSE), and call
CpuDeadLoop ().
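
A minimal sketch of the notification setup (assuming the boolean PCD
introduced in this series carries the detection result to DXE; the
function name is mine):

  STATIC
  VOID
  EFIAPI
  AbortLegacyBoot (
    IN EFI_EVENT  Event,
    IN VOID       *Context
    )
  {
    DEBUG ((DEBUG_ERROR, "%a: SMRAM at default SMBASE is incompatible "
      "with legacy boot; disable the feature on the QEMU command line\n",
      __FUNCTION__));
    ASSERT (FALSE);
    CpuDeadLoop ();
  }

  //
  // in the SmmAccess2Dxe entry point -- Status and LegacyBootEvent
  // declarations omitted:
  //
  if (PcdGetBool (PcdQ35SmramAtDefaultSmbase)) {
    Status = gBS->CreateEventEx (EVT_NOTIFY_SIGNAL, TPL_CALLBACK,
                    AbortLegacyBoot, NULL, &gEfiEventLegacyBootGuid,
                    &LegacyBootEvent);
    ASSERT_EFI_ERROR (Status);
  }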

Advantages:
- when OVMF is built without -D SMM_REQUIRE, SmmAccess2Dxe is not included

- when OVMF is built without -D CSM_ENABLE, then the event group is
never signaled -- the only EfiSignalEventLegacyBoot() call is in
"OvmfPkg/Csm/LegacyBiosDxe/LegacyBootSupport.c".

- when OVMF is built with both options, but no legacy boot is performed,
the callback does nothing

- the callback catches the invalid constellation and prevents the boot
cleanly. The error message in the OVMF debug log would instruct users to
disable the dynamically detectable feature on the QEMU command line, and
to boot again.

Disadvantages:

- users have to look at the OVMF log

- users may not have the power to change the QEMU configuration -- if
they had that power, they'd simply use SeaBIOS for booting the legacy
OS, not OVMF+CSM.


(4) Identify a CSM build in time for the feature detection (in
PlatformPei), and force-disable the feature (don't even attempt
detecting it from QEMU) if this is a CSM build.

In the DXE phase (and especially in the BDS phase), we can detect a CSM
build, from the presence of gEfiLegacyBiosProtocolGuid in the UEFI
protocol database. However, the protocol is not available in the PEI
phase, which is when this particular QEMU feature has to be enabled.

Therefore we could introduce a dedicated FeaturePCD, which would reflect
-D CSM_ENABLE. When set, the PCD would short-circuit the dynamic
detection in PlatformPei, and pretend that the feature doesn't exist.
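
Sketched in C ("PcdCsmEnable" is a made-up name for that FeaturePCD):

  //
  // At the top of the feature detection in PlatformPei; "PcdCsmEnable"
  // is a made-up name for the proposed FeaturePCD reflecting
  // -D CSM_ENABLE.
  //
  if (FeaturePcdGet (PcdCsmEnable)) {
    //
    // CSM build: pretend QEMU doesn't offer the feature at all.
    //
    return;
  }
  // ... otherwise, dynamically probe QEMU for the feature ...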

This option would allow -D SMM_REQUIRE and -D CSM_ENABLE to co-exist
(like they do now); the latter would only disable the new feature
(regardless of QEMU offering it), not all of SMM. With -D CSM_ENABLE,
users would not lose the protection they get from SMM_REQUIRE when
booting a UEFI OS; they'd only lose VCPU hotplug capability.


My preference is (1), but I never use -D CSM_ENABLE.

David, do you see (or do you foresee) OVMF binaries that are built with
*both* of -D CSM_ENABLE and -D SMM_REQUIRE?

> With the commit message change, the series is Reviewed-by: jiewen.yao@intel.com

Thank you Jiewen -- it's possible that I'll post a v2, based on David's
answer, and then I'll pick up your R-b for unchanged patches from v1.

Thanks
Laszlo

Re: [edk2-devel] [PATCH wave 1 00/10] support QEMU's "SMRAM at default SMBASE" feature
Posted by Igor Mammedov 4 years, 7 months ago
On Tue, 24 Sep 2019 13:34:55 +0200
"Laszlo Ersek" <lersek@redhat.com> wrote:

> [...]
> Going forward, if I understand correctly, the plan is to populate the
> new SMRAM with the hotplug SMI handler. (This would likely be put in
> place by OvmfPkg/PlatformPei, from NASM source code. The
> SmmRelocateBases() function in UefiCpuPkg/PiSmmCpuDxeSmm backs up the
> original contents before, and restores them after, the initial SMBASE
> relocation of the cold-plugged VCPUs. Thus the hotplug SMI handler would
> survive the initial relocation of the cold-plugged VCPUs.)
that's the tricky part: can we safely detect (in SmmRelocateBases) the
race condition where several hotplugged CPUs execute the SMI relocation
handler at the same time? (crashing the system in case that happens is
good enough)

> Further, when a VCPU is hot-plugged, we seem to intend to raise an SMI
> for it (perhaps internally to QEMU?),
I'd rather raise the SMI from the guest side (using an IO port write
from the ACPI CPU hotplug handler), which in the typical case
(QEMU-negotiated SMI broadcast) will trigger a pending SMI on the
hotplugged CPU as well. (If that's acceptable, I can prepare the
corresponding QEMU patches.)

In case of unicast SMIs, the CPU handling the guest's ACPI-triggered
SMI can send an SMI to the hotplugged CPU as an additional step.

> which will remain pending until
> the VCPU is launched with INIT-SIPI-SIPI. Once the INIT-SIPI-SIPI is
> sent, the VCPU will immediately enter SMM, and run the SMI handler in
> the locked SMRAM at the default SMBASE. At RSM, it may continue at the
> vector requested in the INIT-SIPI-SIPI.
Firmware can arbitrate/send INIT-SIPI-SIPI to the hotplugged CPUs.
Enumerating hotplugged CPUs can be done by reusing the CPU hotplug
interface described in QEMU/docs/specs/acpi_cpu_hotplug.txt
(preferably in read-only mode).


Re: [edk2-devel] [PATCH wave 1 00/10] support QEMU's "SMRAM at default SMBASE" feature
Posted by Laszlo Ersek 4 years, 6 months ago
On 09/27/19 13:35, Igor Mammedov wrote:
> On Tue, 24 Sep 2019 13:34:55 +0200
> "Laszlo Ersek" <lersek@redhat.com> wrote:

>> Going forward, if I understand correctly, the plan is to populate the
>> new SMRAM with the hotplug SMI handler. (This would likely be put in
>> place by OvmfPkg/PlatformPei, from NASM source code. The
>> SmmRelocateBases() function in UefiCpuPkg/PiSmmCpuDxeSmm backs up the
>> original contents before, and restores them after, the initial SMBASE
>> relocation of the cold-plugged VCPUs. Thus the hotplug SMI handler would
>> survive the initial relocation of the cold-plugged VCPUs.)

> that's the tricky part: can we safely detect (in SmmRelocateBases())
> a race condition in case several hotplugged CPUs are executing the SMI
> relocation handler at the same time? (crashing the system in case that
> happens is good enough)

Wait, there are *two* initial SMI handlers here, one for cold-plugged
CPUs, and another for hot-plugged CPUs. Let's just call them coldplug
and hotplug handlers, for simplicity.

In chronological order, during boot:

(1) OvmfPkg/PlatformPei would put the hotplug handler in place, at the
default SMBASE. This code would lie dormant for a long time.

(2) UefiCpuPkg/PiSmmCpuDxeSmm backs up the current contents of the
default SMBASE area, and installs the coldplug handler. This handler is
then actually used, for relocating SMBASE for the present (cold-plugged)
CPUs. Finally, UefiCpuPkg/PiSmmCpuDxeSmm restores the original contents
of the default SMBASE area. This would in effect re-establish the
hotplug handler from (1).

(3) At OS runtime, a CPU is hotplugged, and then the hotplug handler
runs, at the default SMBASE.

The function SmmRelocateBases() is strictly limited to step (2), and has
nothing to do with hotplug. Therefore it need not deal with any race
conditions related to multi-hotplug.

This is just to clarify that your question applies to the initial SMI
handler (the hotplug one) that is put in place in step (1), and then
used in step (3). The question does not apply to SmmRelocateBases().
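
(For concreteness, the backup/restore around step (2) amounts to something
like the sketch below. CopyMem() is edk2's BaseMemoryLib API, while the
backup size, buffer, and function name are illustrative assumptions, not
the actual PiSmmCpuDxeSmm code:)

  #include <Library/BaseMemoryLib.h>  // CopyMem()

  #define SMM_DEFAULT_SMBASE  0x30000
  #define BACKUP_REGION_SIZE  SIZE_32KB  // assumed size of the saved area

  STATIC UINT8  mBackup[BACKUP_REGION_SIZE];

  VOID
  RelocateColdPluggedCpus (
    VOID
    )
  {
    // save whatever PlatformPei placed at the default SMBASE
    CopyMem (mBackup, (VOID *)(UINTN)SMM_DEFAULT_SMBASE, sizeof mBackup);

    // ... install the coldplug handler, relocate cold-plugged CPUs ...

    // restore the original contents, re-establishing the hotplug handler
    CopyMem ((VOID *)(UINTN)SMM_DEFAULT_SMBASE, mBackup, sizeof mBackup);
  }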


With that out of the way, I do think we should be able to detect this,
with a simple -- haha, famous last words! -- compare-and-exchange (LOCK
CMPXCHG); a kind of semaphore.

The initial value of an integer variable in SMRAM should be 1. At the
start of the hotplug initial SMI handler, the CPU should try to swap
that with 0. If it succeeds, proceed. If it fails (the variable's value
is not 1), force a platform reset. Finally, right before the RSM,
restore 1 (swap it back).

(It does not matter if another hotplug CPU starts the relocation in SMM
while the earlier one is left with *only* the RSM instruction in SMM,
immediately after the swap-back.)

Alternatively, if it's friendlier or simpler, we can spin on the initial
LOCK CMPXCHG until it succeeds, executing PAUSE in the loop body. That
should be friendly with KVM too (due to pause loop exiting).
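
In C-like terms (even though the real handler would be 16-bit NASM at the
default SMBASE), the idea is roughly the following; the variable and
function names are invented, while InterlockedCompareExchange32() and
CpuPause() are the edk2 SynchronizationLib / BaseLib primitives:

  #include <Library/BaseLib.h>             // CpuPause()
  #include <Library/SynchronizationLib.h>  // InterlockedCompareExchange32()

  STATIC volatile UINT32  mRelocationSemaphore = 1;  // lives in SMRAM

  VOID
  HotplugInitialSmiHandler (
    VOID
    )
  {
    //
    // Spin variant: try to swap 1 -> 0; on failure, PAUSE and retry
    // (KVM-friendly due to pause loop exiting). The try-once variant
    // would force a platform reset instead of looping.
    //
    while (InterlockedCompareExchange32 (&mRelocationSemaphore, 1, 0) != 1) {
      CpuPause ();
    }

    // ... set the new SMBASE in the save state area ...

    //
    // Swap 0 -> 1 right before RSM.
    //
    InterlockedCompareExchange32 (&mRelocationSemaphore, 0, 1);
  }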


>> Further, when a VCPU is hot-plugged, we seem to intend to raise an SMI
>> for it (perhaps internally to QEMU?),

> I'd rather raise the SMI from the guest side (using an IO port write from
> the ACPI cpu hotplug handler). In the typical case (QEMU-negotiated
> SMI broadcast) this will trigger a pending SMI on the hotplugged CPU as
> well. (If it's acceptable, I can prepare corresponding QEMU patches.)

What if the OS does not send the SMI (from the GPE handler) at once, and
boots the new CPU instead?

Previously, you wrote that the new CPU "won't get access to privileged
SMRAM so OS can't subvert firmware. The next time SMI broadcast is sent
the CPU will use SMI handler at default 30000 SMBASE. It's up to us to
define behavior here (for example relocation handler can put such CPU in
shutdown state)."

How can we discover in the hotplug initial SMI handler at 0x3_0000 that
the CPU executing the code should have been relocated *earlier*, and the
OS just didn't obey the ACPI GPE code? I think a platform reset would be
a proper response, but I don't know how to tell, "this CPU should not be
here".


> In the case of unicast SMIs, the CPU handling the guest's ACPI-triggered
> SMI can send an SMI to the hotplugged CPU as an additional step.

I think it's OK if we restrict this feature to "broadcast SMI" only.

> 
>> which will remain pending until
>> the VCPU is launched with INIT-SIPI-SIPI. Once the INIT-SIPI-SIPI is
>> sent, the VCPU will immediately enter SMM, and run the SMI handler in
>> the locked SMRAM at the default SMBASE. At RSM, it may continue at the
>> vector requested in the INIT-SIPI-SIPI.
> Firmware can arbitrate/send INIT-SIPI-SIPI to hotplugged CPUs.
> Enumerating hotplugged CPUs can be done by reusing the CPU hotplug
> interface described in QEMU/docs/specs/acpi_cpu_hotplug.txt
> (preferably in read-only mode).

... I'll have to look at that later :)

Thanks
Laszlo


Re: [edk2-devel] [PATCH wave 1 00/10] support QEMU's "SMRAM at default SMBASE" feature
Posted by Igor Mammedov 4 years, 6 months ago
On Tue, 1 Oct 2019 17:31:17 +0200
"Laszlo Ersek" <lersek@redhat.com> wrote:

> On 09/27/19 13:35, Igor Mammedov wrote:
> > On Tue, 24 Sep 2019 13:34:55 +0200
> > "Laszlo Ersek" <lersek@redhat.com> wrote:  
> 
> >> Going forward, if I understand correctly, the plan is to populate the
> >> new SMRAM with the hotplug SMI handler. (This would likely be put in
> >> place by OvmfPkg/PlatformPei, from NASM source code. The
> >> SmmRelocateBases() function in UefiCpuPkg/PiSmmCpuDxeSmm backs up the
> >> original contents before, and restores them after, the initial SMBASE
> >> relocation of the cold-plugged VCPUs. Thus the hotplug SMI handler would
> >> survive the initial relocation of the cold-plugged VCPUs.)  
> 
> > that's the tricky part: can we safely detect (in SmmRelocateBases())
> > a race condition in case several hotplugged CPUs are executing the SMI
> > relocation handler at the same time? (crashing the system in case that
> > happens is good enough)
> 
> Wait, there are *two* initial SMI handlers here, one for cold-plugged
> CPUs, and another for hot-plugged CPUs. Let's just call them coldplug
> and hotplug handlers, for simplicity.
> 
> In chronological order, during boot:
> 
> (1) OvmfPkg/PlatformPei would put the hotplug handler in place, at the
> default SMBASE. This code would lie dormant for a long time.
> 
> (2) UefiCpuPkg/PiSmmCpuDxeSmm backs up the current contents of the
> default SMBASE area, and installs the coldplug handler. This handler is
> then actually used, for relocating SMBASE for the present (cold-plugged)
> CPUs. Finally, UefiCpuPkg/PiSmmCpuDxeSmm restores the original contents
> of the default SMBASE area. This would in effect re-establish the
> hotplug handler from (1).
> 
> (3) At OS runtime, a CPU is hotplugged, and then the hotplug handler
> runs, at the default SMBASE.
> 
> The function SmmRelocateBases() is strictly limited to step (2), and has
> nothing to do with hotplug. Therefore it need not deal with any race
> conditions related to multi-hotplug.
> 
> This is just to clarify that your question applies to the initial SMI
> handler (the hotplug one) that is put in place in step (1), and then
> used in step (3). The question does not apply to SmmRelocateBases().
> 
> 
> With that out of the way, I do think we should be able to detect this,
> with a simple -- haha, famous last words! -- compare-and-exchange (LOCK
> CMPXCHG); a kind of semaphore.
> 
> The initial value of an integer variable in SMRAM should be 1. At the
> start of the hotplug initial SMI handler, the CPU should try to swap
> that with 0. If it succeeds, proceed. If it fails (the variable's value
> is not 1), force a platform reset. Finally, right before the RSM,
> restore 1 (swap it back).
> 
> (It does not matter if another hotplug CPU starts the relocation in SMM
> while the earlier one is left with *only* the RSM instruction in SMM,
> immediately after the swap-back.)
I don't see why it doesn't matter; let me illustrate the case
I'm talking about.

Assume there are 2 hotplugged CPUs, each with a pending SMI, and a
stray INIT-SIPI-SIPI in flight to CPU2.

    HP CPU1                                  HP CPU2
    save state at 0x30000                    -
    start executing hotplug SMI handler      -
    set new SMBASE                           -
    -                                        save state at 0x30000
                                             (overwrites the SMBASE that
                                              CPU1 just set with the
                                              default one)
    -                                        .....
            ----------- CPU1 stalls for a while -----------
    RSM
    restores CPU1 with CPU2's saved state,
     incl. CPU2's SMBASE = either the default
     or already the new CPU2-specific one

The semaphore is irrelevant in this case: at best we will lose the CPU1
relocation (assuming the hotplug handler does nothing else besides the
relocation), and if the hotplug handler does do something else, this
case will probably leave the FW in an inconsistent state.


Safest way to deal with it would be:

  1. wait till all host CPUs are brought into SMM (with hotplug SMI handler = check me in)

  2. wait till all hotplugged CPUs are put in a well-known state
     (a host CPU should send INIT-SIPI-SIPI with a NOP vector to wake them up)

  3. a host CPU will serialize the hotplugged CPUs' relocation by sending a
     unicast SMI + INIT-SIPI-SIPI NOP vector to wake each one up a second time
     (with hotplug SMI handler = relocate_me)

Alternatively, we could skip step 3 and do the following:

 hotplug_SMI_handler()
     hp_cpu_check_in_map[apic_id] = 1;
     /* wait till another CPU tells me to continue */
     while (hp_cpu_check_in_map[apic_id])
         ;
     ...

 host_cpu_hotplug_all_cpus()
     wait till all hotplugged CPUs are in hp_cpu_check_in_map;
     for (i = 0; i < max_cpus; i++) {
        if (hp_cpu_check_in_map[i]) {
            /* relocate this CPU */
            hp_cpu_check_in_map[i] = 0;

            tell QEMU that FW pulled the CPU in;
            /* we could add a flag to QEMU's cpu hotplug MMIO interface for
               FW to do this, so that a legitimate GPE handler would tell
               the OS to online only firmware-vetted CPUs */
        }
     }

> Alternatively, if it's friendlier or simpler, we can spin on the initial
> LOCK CMPXCHG until it succeeds, executing PAUSE in the loop body. That
> should be friendly with KVM too (due to pause loop exiting).
> 
> 
> >> Further, when a VCPU is hot-plugged, we seem to intend to raise an SMI
> >> for it (perhaps internally to QEMU?),  
> 
> > I'd rather raise the SMI from the guest side (using an IO port write from
> > the ACPI cpu hotplug handler). In the typical case (QEMU-negotiated
> > SMI broadcast) this will trigger a pending SMI on the hotplugged CPU as
> > well. (If it's acceptable, I can prepare corresponding QEMU patches.)
> 
> What if the OS does not send the SMI (from the GPE handler) at once, and
> boots the new CPU instead?
> 
> Previously, you wrote that the new CPU "won't get access to privileged
> SMRAM so OS can't subvert firmware. The next time SMI broadcast is sent
> the CPU will use SMI handler at default 30000 SMBASE. It's up to us to
> define behavior here (for example relocation handler can put such CPU in
> shutdown state)."
> 
> How can we discover in the hotplug initial SMI handler at 0x3_0000 that
> the CPU executing the code should have been relocated *earlier*, and the
> OS just didn't obey the ACPI GPE code? I think a platform reset would be
> a proper response, but I don't know how to tell, "this CPU should not be
> here".

one can ask QEMU about the present CPUs (via the cpu hotplug interface)
and compare that with the present-CPU state in the FW (the OS can hide
the insert event there, but not the presence).

another way is to keep such a CPU pinned in the relocation SMI handler
until another CPU tells it to continue with the relocation.

In the absence of an SMI, such a CPU will continue to run wild, but do
we really care about it?
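
The first check could be as simple as this untested sketch (reusing the
invented CPUHP_* register macros from the enumeration sketch earlier in
the thread; mFwKnownCpus is a made-up piece of FW-side bookkeeping, and
ResetCold() from edk2's ResetSystemLib stands in for the "platform reset"
response):

  #include <Library/IoLib.h>
  #include <Library/ResetSystemLib.h>  // ResetCold()

  STATIC UINT32  mFwKnownCpus;  // updated by FW as it relocates CPUs

  VOID
  VerifyNoUnknownCpus (
    IN UINT32  MaxCpus
    )
  {
    UINT32  Selector;
    UINT32  QemuEnabledCpus;

    QemuEnabledCpus = 0;
    for (Selector = 0; Selector < MaxCpus; Selector++) {
      IoWrite32 (CPUHP_W_CPU_SELECTOR, Selector);
      if ((IoRead8 (CPUHP_R_CPU_STATUS) & CPUHP_STATUS_ENABLED) != 0) {
        QemuEnabledCpus++;
      }
    }

    if (QemuEnabledCpus != mFwKnownCpus) {
      //
      // a CPU is present that FW never relocated -- the OS hid the
      // insert event; treat it as hostile and reset the platform
      //
      ResetCold ();
    }
  }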

> > In the case of unicast SMIs, the CPU handling the guest's ACPI-triggered
> > SMI can send an SMI to the hotplugged CPU as an additional step.
> 
> I think it's OK if we restrict this feature to "broadcast SMI" only.
> 
> >   
> >> which will remain pending until
> >> the VCPU is launched with INIT-SIPI-SIPI. Once the INIT-SIPI-SIPI is
> >> sent, the VCPU will immediately enter SMM, and run the SMI handler in
> >> the locked SMRAM at the default SMBASE. At RSM, it may continue at the
> >> vector requested in the INIT-SIPI-SIPI.  
> > Firmware can arbitrate/send INIT-SIPI-SIPI to hotplugged CPUs.
> > Enumerating hotplugged CPUs can be done by reusing the CPU hotplug
> > interface described in QEMU/docs/specs/acpi_cpu_hotplug.txt
> > (preferably in read-only mode).
> 
> ... I'll have to look at that later :)
> 
> Thanks
> Laszlo
> 
> 
> 



Re: [edk2-devel] [PATCH wave 1 00/10] support QEMU's "SMRAM at default SMBASE" feature
Posted by Laszlo Ersek 4 years, 6 months ago
On 10/04/19 16:09, Igor Mammedov wrote:
> On Tue, 1 Oct 2019 17:31:17 +0200
> "Laszlo Ersek" <lersek@redhat.com> wrote:

>> (It does not matter if another hotplug CPU starts the relocation in SMM
>> while the earlier one is left with *only* the RSM instruction in SMM,
>> immediately after the swap-back.)
> I don't see why it doesn't matter; let me illustrate the case
> I'm talking about.

Right, I'm sorry -- I completely forgot that the RSM instruction is what
makes the newly set SMBASE permanent!

[...]

> Safest way to deal with it would be:
> 
>   1. wait till all host CPUs are brought into SMM (with hotplug SMI handler = check me in)
> 
>   2. wait till all hotplugged CPUs are put in a well-known state
>      (a host CPU should send INIT-SIPI-SIPI with a NOP vector to wake them up)
> 
>   3. a host CPU will serialize the hotplugged CPUs' relocation by sending a
>      unicast SMI + INIT-SIPI-SIPI NOP vector to wake each one up a second time
>      (with hotplug SMI handler = relocate_me)
> 
> Alternatively, we could skip step 3 and do the following:
> 
>  hotplug_SMI_handler()
>      hp_cpu_check_in_map[apic_id] = 1;
>      /* wait till another CPU tells me to continue */
>      while (hp_cpu_check_in_map[apic_id])
>          ;
>      ...
> 
>  host_cpu_hotplug_all_cpus()
>      wait till all hotplugged CPUs are in hp_cpu_check_in_map;
>      for (i = 0; i < max_cpus; i++) {
>         if (hp_cpu_check_in_map[i]) {
>             /* relocate this CPU */
>             hp_cpu_check_in_map[i] = 0;
> 
>             tell QEMU that FW pulled the CPU in;
>             /* we could add a flag to QEMU's cpu hotplug MMIO interface for
>                FW to do this, so that a legitimate GPE handler would tell
>                the OS to online only firmware-vetted CPUs */
>         }
>      }

[...]

>> How can we discover in the hotplug initial SMI handler at 0x3_0000 that
>> the CPU executing the code should have been relocated *earlier*, and the
>> OS just didn't obey the ACPI GPE code? I think a platform reset would be
>> a proper response, but I don't know how to tell, "this CPU should not be
>> here".
> 
> one can ask QEMU about the present CPUs (via the cpu hotplug interface)
> and compare that with the present-CPU state in the FW (the OS can hide
> the insert event there, but not the presence).
> 
> another way is to keep such a CPU pinned in the relocation SMI handler
> until another CPU tells it to continue with the relocation.
> 
> In the absence of an SMI, such a CPU will continue to run wild, but do
> we really care about it?

Thanks.

It's clear that, for me, too much hangs in the air right now. I'll have
to get my hands dirty and experiment. I need to learn a lot about this
area of edk2 as well, and experimenting looks like one way.

Let's return to these questions once we reach the following state:

- ACPI code generated by QEMU raises a broadcast SMI on VCPU hotplug
- in OVMF, a cold-plugged CPU executes platform code, in a custom SMI
handler
- in OVMF, a hot-plugged CPU executes platform code, in the initial
(default) SMI handler.

Once we get to that point, we can look into further details.

I'll answer under your other email too:
<http://mid.mail-archive.com/20191004133052.20373155@redhat.com>.

Thanks
Laszlo
