[edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
Posted by Thomas Barrett 3 months, 2 weeks ago
This series adds support for cloud-hypervisor guests with >1TiB
of RAM to Ovmf. This bug was solved for Qemu back in 2017
https://bugzilla.redhat.com/show_bug.cgi?id=1468526. The bug is
still occurring for CloudHv guests because the PlatformScanE820
utility is not currently supported for CloudHv.
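
For illustration only -- this is not part of the posted patches -- here
is a minimal, standalone C sketch of the kind of calculation
PlatformAddressWidthInitialization has to perform once an E820-style
memory map is available: find the highest guest-physical RAM address
and derive how many physical address bits the firmware must cover. All
names, and the 36-bit floor and 52-bit cap below, are illustrative
assumptions, not the edk2 code.

  #include <stdint.h>
  #include <stdio.h>

  typedef struct {
    uint64_t Base;    /* start of a RAM range */
    uint64_t Length;  /* length of the range in bytes */
  } RamRange;

  /* Number of physical address bits needed to reach the highest RAM byte. */
  static unsigned
  RequiredPhysBits (const RamRange *Ranges, unsigned Count)
  {
    uint64_t Top = 0;
    unsigned Bits = 36;                  /* illustrative legacy floor */

    for (unsigned i = 0; i < Count; i++) {
      uint64_t End = Ranges[i].Base + Ranges[i].Length;
      if (End > Top) {
        Top = End;
      }
    }
    while (Bits < 52 && ((Top - 1) >> Bits) != 0) {
      Bits++;                            /* grow until Top-1 fits in Bits bits */
    }
    return Bits;
  }

  int
  main (void)
  {
    /* RAM split around the 4 GiB hole, topping out above 1 TiB. */
    RamRange Map[] = {
      { 0x0000000000000000ULL, 0x00000000C0000000ULL },  /* 3 GiB low RAM  */
      { 0x0000000100000000ULL, 0x0000010000000000ULL },  /* 1 TiB above 4G */
    };

    printf ("physical address bits needed: %u\n",
            RequiredPhysBits (Map, sizeof Map / sizeof Map[0]));
    return 0;
  }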

My working branch for these changes can be found here:
https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram

Cc: Anatol Belski <anbelski@linux.microsoft.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jianyong Wu <jianyong.wu@arm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rob Bradford <rbradford@rivosinc.com>

Thomas Barrett (3):
  OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
  OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
  OvmfPkg: CloudHv: Enable PcdUse1GPageTable

 OvmfPkg/CloudHv/CloudHvX64.dsc              |   2 +
 OvmfPkg/Library/PlatformInitLib/MemDetect.c | 107 ++++++++++++++------
 2 files changed, 79 insertions(+), 30 deletions(-)

-- 
2.34.1



Re: [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
Posted by Laszlo Ersek 3 months, 2 weeks ago
On 1/12/24 19:31, Thomas Barrett wrote:
> This series adds support for cloud-hypervisor guests with >1TiB
> of RAM to Ovmf. This bug was solved for Qemu back in 2017
> https://bugzilla.redhat.com/show_bug.cgi?id=1468526. The bug is
> still occurring for CloudHv guests because the PlatformScanE820
> utility is not currently supported for CloudHv.
> 
> My working branch for these changes can be found here:
> https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram
> 
> Cc: Anatol Belski <anbelski@linux.microsoft.com>
> Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: Jianyong Wu <jianyong.wu@arm.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Cc: Rob Bradford <rbradford@rivosinc.com>
> 
> Thomas Barrett (3):
>   OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
>   OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
>   OvmfPkg: CloudHv: Enable PcdUse1GPageTable
> 
>  OvmfPkg/CloudHv/CloudHvX64.dsc              |   2 +
>  OvmfPkg/Library/PlatformInitLib/MemDetect.c | 107 ++++++++++++++------
>  2 files changed, 79 insertions(+), 30 deletions(-)
> 

Thanks for posting v3, this one looks well-formed, so I can make some
comments.

TBH, I'm super uncomfortable with a new bunch of CLOUDHV_DEVICE_ID
branches introduced to PlatformInitLib.

OVMF supports multiple hypervisors, and the general idea has been that a
single module (or even a single platform DSC) should preferably not
attempt to support multiple hypervisors. That's why we have a separate
Xen platform, and a big number of Xen-customized, standalone modules.

The idea with this is that any particular developer is very unlikely to
develop for, and test on, multiple hypervisors. Therefore unification (=
elimination of all possible code duplication) between distinct
hypervisor code snippets is actually detrimental for maintenance,
because it technically enables a developer to regress a platform that
they never actively work with.

I believe bhyve is similarly separated out (just like Xen).

It's one thing to collect support for multiple board types (machine
types) for QEMU into a given module; that's still the same hypervisor --
testing only requires a different "-M" option on the qemu command line.

But fusing Cloud Hypervisor code with QEMU code makes me fidget in my seat.

I've now grepped the OvmfPkg directory tree for existent instances of
CLOUDHV_DEVICE_ID, and I'm very much not liking the result list. It has
seeped into a whole bunch of modules that should only be QEMU. If you
need those modules customized for CloudHv, it'd be best to detach them
for CloudHv, and eliminate the dead code (such as anything that depends
on QEMU fw_cfg), and *enforce* that the underlying platform is CloudHv.
Both communities will benefit. Again, this is all based on the empirical
fact that QEMU and CloudHv developers don't tend to cross-develop.

I understand that it's always just a small addition; in each specific
case, it's just one or two more "if"s in common code. But the end result
is terrible to maintain in the long term.

Of course this all depends on having a separate platform DSC for
CloudHv, but that one already exists: "CloudHv/CloudHvX64.dsc".

So I'd suggest (a) isolating current CloudHv logic to new library
instances, drivers, etc, (b) adding this enhancement to CloudHv's own
instance of PlatformInitLib.
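
(Purely as an illustration of (a)+(b), and with a hypothetical instance
path that does not exist today: at the DSC level the split boils down
to a different library class resolution in CloudHv/CloudHvX64.dsc, e.g.

  [LibraryClasses]
    # hypothetical CloudHv-only instance replacing the shared one
    PlatformInitLib|OvmfPkg/CloudHv/Library/PlatformInitLib/CloudHvPlatformInitLib.inf

after which the shared OvmfPkg/Library/PlatformInitLib would no longer
need any CLOUDHV_DEVICE_ID branches.)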

Counter-arguments, objections etc welcome -- feel free to prove me wrong
(e.g. whatever I'm saying about prior art / status quo).

Also I'm not necessarily blocking this patch set; if others don't mind
this, they can ACK it (and I won't NACK).

BR
Laszlo



Re: [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
Posted by Ard Biesheuvel 3 months, 2 weeks ago
On Mon, 15 Jan 2024 at 18:40, Laszlo Ersek <lersek@redhat.com> wrote:
>
> On 1/12/24 19:31, Thomas Barrett wrote:
> > This series adds support for cloud-hypervisor guests with >1TiB
> > of RAM to Ovmf. This bug was solved for Qemu back in 2017
> > https://bugzilla.redhat.com/show_bug.cgi?id=1468526. The bug is
> > still occurring for CloudHv guests because the PlatformScanE820
> > utility is not currently supported for CloudHv.
> >
> > My working branch for these changes can be found here:
> > https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram
> >
> > Cc: Anatol Belski <anbelski@linux.microsoft.com>
> > Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
> > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > Cc: Jianyong Wu <jianyong.wu@arm.com>
> > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > Cc: Laszlo Ersek <lersek@redhat.com>
> > Cc: Rob Bradford <rbradford@rivosinc.com>
> >
> > Thomas Barrett (3):
> >   OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
> >   OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
> >   OvmfPkg: CloudHv: Enable PcdUse1GPageTable
> >
> >  OvmfPkg/CloudHv/CloudHvX64.dsc              |   2 +
> >  OvmfPkg/Library/PlatformInitLib/MemDetect.c | 107 ++++++++++++++------
> >  2 files changed, 79 insertions(+), 30 deletions(-)
> >
>
> Thanks for posting v3, this one looks well-formed, so I can make some
> comments.
>
> TBH, I'm super uncomfortable with a new bunch of CLOUDHV_DEVICE_ID
> branches introduced to PlatformInitLib.
>
> OVMF supports multiple hypervisors, and the general idea has been that a
> single module (or even a single platform DSC) should preferably not
> attempt to support multiple hypervisors. That's why we have a separate
> Xen platform, and a big number of Xen-customized, standalone modules.
>
> The idea with this is that any particular developer is very unlikely to
> develop for, and test on, multiple hypervisors. Therefore unification (=
> elimination of all possible code duplication) between distinct
> hypervisor code snippets is actually detrimental for maintenance,
> because it technically enables a developer to regress a platform that
> they never actively work with.
>
> I believe bhyve is similarly separated out (just like Xen).
>
> It's one thing to collect support for multiple board types (machine
> types) for QEMU into a given module; that's still the same hypervisor --
> testing only requires a different "-M" option on the qemu command line.
>
> But fusing Cloud Hypervisor code with QEMU code makes me fidget in my seat.
>
> I've now grepped the OvmfPkg directory tree for existent instances of
> CLOUDHV_DEVICE_ID, and I'm very much not liking the result list. It has
> seeped into a whole bunch of modules that should only be QEMU. If you
> need those modules customized for CloudHv, it'd be best to detach them
> for CloudHv, and eliminate the dead code (such as anything that depends
> on QEMU fw_cfg), and *enforce* that the underlying platform is CloudHv.
> Both communities will benefit. Again, this is all based on the empirical
> fact that QEMU and CloudHv developers don't tend to cross-develop.
>
> I understand that it's always just a small addition; in each specific
> case, it's just one or two more "if"s in common code. But the end result
> is terrible to maintain in the long term.
>
> Of course this all depends on having a separate platform DSC for
> CloudHv, but that one already exists: "CloudHv/CloudHvX64.dsc".
>
> So I'd suggest (a) isolating current CloudHv logic to new library
> instances, drivers, etc, (b) adding this enhancement to CloudHv's own
> instance of PlatformInitLib.
>
> Counter-arguments, objections etc welcome -- feel free to prove me wrong
> (e.g. whatever I'm saying about prior art / status quo).
>
> Also I'm not necessarily blocking this patch set; if others don't mind
> this, they can ACK it (and I won't NACK).
>

Thanks Laszlo.

I'm afraid I seem to have made your last point moot :-)

But I agree with your points above: EDK2 makes the fork&tweak approach
very easy, so CloudHv can keep its own versions of many of the QEMU
specific glue libraries. It does, however, require a certain degree of
hygiene on the part of the developer to introduce abstractions where
possible, to avoid forking huge drivers or libraries for a 2 line
delta between QEMU and CloudHv.
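
As a purely illustrative sketch of that kind of abstraction (all names
below are made up, not existing edk2 APIs): the shared code calls one
small per-platform hook, and each platform build links exactly one
implementation of it, so a two-line QEMU/CloudHv delta never has to
become another in-line hypervisor check inside a large common driver.

  #include <stdint.h>
  #include <stdio.h>

  /* The "library class": one declaration, many single-file instances. */
  uint64_t PlatformLowMemoryTop (void);

  /* A QEMU-flavoured instance; a CloudHv build would link a different
     file carrying its own short implementation instead. */
  uint64_t
  PlatformLowMemoryTop (void)
  {
    return 0x80000000ULL;   /* e.g. 2 GiB of low RAM in this example */
  }

  /* Shared driver code stays free of hypervisor checks. */
  int
  main (void)
  {
    printf ("low RAM ends below 0x%llx\n",
            (unsigned long long) PlatformLowMemoryTop ());
    return 0;
  }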

So let's decree that future CloudHv contributions that follow this old
pattern will not be considered until/unless there is some confidence
on our part that there is a long term plan in motion that cleans this
all up and repays the accumulated technical debt.


Re: [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
Posted by Thomas Barrett 3 months, 2 weeks ago
Hey all,

Yeah, I don’t disagree, Laszlo. While developing these patches, it was
pretty confusing to determine which parts of PlatformInitLib were
compatible with CloudHv and which parts weren’t.

I have no idea how much work would be involved in splitting out
cloud-hypervisor code. It would be good to hear input from the
cloud-hypervisor maintainers on this.

Best,
Thomas

On Mon, Jan 15, 2024 at 9:52 AM Ard Biesheuvel <ardb@kernel.org> wrote:

> On Mon, 15 Jan 2024 at 18:40, Laszlo Ersek <lersek@redhat.com> wrote:
> >
> > On 1/12/24 19:31, Thomas Barrett wrote:
> > > This series adds support for cloud-hypervisor guests with >1TiB
> > > of RAM to Ovmf. This bug was solved for Qemu back in 2017
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1468526. The bug is
> > > still occurring for CloudHv guests because the PlatformScanE820
> > > utility is not currently supported for CloudHv.
> > >
> > > My working branch for these changes can be found here:
> > > https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram
> > >
> > > Cc: Anatol Belski <anbelski@linux.microsoft.com>
> > > Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
> > > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > > Cc: Jianyong Wu <jianyong.wu@arm.com>
> > > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > > Cc: Laszlo Ersek <lersek@redhat.com>
> > > Cc: Rob Bradford <rbradford@rivosinc.com>
> > >
> > > Thomas Barrett (3):
> > >   OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
> > >   OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
> > >   OvmfPkg: CloudHv: Enable PcdUse1GPageTable
> > >
> > >  OvmfPkg/CloudHv/CloudHvX64.dsc              |   2 +
> > >  OvmfPkg/Library/PlatformInitLib/MemDetect.c | 107 ++++++++++++++------
> > >  2 files changed, 79 insertions(+), 30 deletions(-)
> > >
> >
> > Thanks for posting v3, this one looks well-formed, so I can make some
> > comments.
> >
> > TBH, I'm super uncomfortable with a new bunch of CLOUDHV_DEVICE_ID
> > branches introduced to PlatformInitLib.
> >
> > OVMF supports multiple hypervisors, and the general idea has been that a
> > single module (or even a single platform DSC) should preferably not
> > attempt to support multiple hypervisors. That's why we have a separate
> > Xen platform, and a big number of Xen-customized, standalone modules.
> >
> > The idea with this is that any particular developer is very unlikely to
> > develop for, and test on, multiple hypervisors. Therefore unification (=
> > elimination of all possible code duplication) between distinct
> > hypervisor code snippets is actually detrimental for maintenance,
> > because it technically enables a developer to regress a platform that
> > they never actively work with.
> >
> > I believe bhyve is similarly separated out (just like Xen).
> >
> > It's one thing to collect support for multiple board types (machine
> > types) for QEMU into a given module; that's still the same hypervisor --
> > testing only requires a different "-M" option on the qemu command line.
> >
> > But fusing Cloud Hypervisor code with QEMU code makes me fidget in my
> seat.
> >
> > I've now grepped the OvmfPkg directory tree for existent instances of
> > CLOUDHV_DEVICE_ID, and I'm very much not liking the result list. It has
> > seeped into a whole bunch of modules that should only be QEMU. If you
> > need those modules customized for CloudHv, it'd be best to detach them
> > for CloudHv, and eliminate the dead code (such as anything that depends
> > on QEMU fw_cfg), and *enforce* that the underlying platform is CloudHv.
> > Both communities will benefit. Again, this is all based on the empirical
> > fact that QEMU and CloudHv developers don't tend to cross-develop.
> >
> > I understand that it's always just a small addition; in each specific
> > case, it's just one or two more "if"s in common code. But the end result
> > is terrible to maintain in the long term.
> >
> > Of course this all depends on having a separate platform DSC for
> > CloudHv, but that one already exists: "CloudHv/CloudHvX64.dsc".
> >
> > So I'd suggest (a) isolating current CloudHv logic to new library
> > instances, drivers, etc, (b) adding this enhancement to CloudHv's own
> > instance of PlatformInitLib.
> >
> > Counter-arguments, objections etc welcome -- feel free to prove me wrong
> > (e.g. whatever I'm saying about prior art / status quo).
> >
> > Also I'm not necessarily blocking this patch set; if others don't mind
> > this, they can ACK it (and I won't NACK).
> >
>
> Thanks Laszlo.
>
> I'm afraid I seem to have made your last point moot :-)
>
> But I agree with your points above: EDK2 makes the fork&tweak approach
> very easy, so CloudHv can keep its own versions of many of the QEMU
> specific glue libraries. It does, however, require a certain degree of
> hygiene on the part of the developer to introduce abstractions where
> possible, to avoid forking huge drivers or libraries for a 2 line
> delta between QEMU and CloudHv.
>
> So let's decree that future CloudHv contributions that follow this old
> pattern will not be considered until/unless there is some confidence
> on our part that there is a long term plan in motion that cleans this
> all up and repays the accumulated technical debt.
>


Re: [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
Posted by Anatol Belski 3 months, 2 weeks ago
Hi,

I was late to the party as well. At least I have now done a manual build
and some tests with Cloud Hypervisor, and it looks good.

On Mon, 2024-01-15 at 10:26 -0800, Thomas Barrett wrote:
> Hey all,
> 
> Yeah, I don’t disagree, Laszlo. While developing these patches, it was
> pretty confusing to determine which parts of PlatformInitLib were
> compatible with CloudHv and which parts weren’t.
> 
> I have no idea how much work would be involved in splitting out
> cloud-hypervisor code. It would be good to hear input from the cloud-
> hypervisor maintainers on this.
> 
Such a refactoring certainly appears worthwhile. We should create an
issue on the Cloud Hypervisor side so this topic isn't lost and so we
can track it there, too.

Thanks!

Anatol


> Best,
> Thomas
> 
> On Mon, Jan 15, 2024 at 9:52 AM Ard Biesheuvel <ardb@kernel.org>
> wrote:
> > On Mon, 15 Jan 2024 at 18:40, Laszlo Ersek <lersek@redhat.com>
> > wrote:
> > >
> > > On 1/12/24 19:31, Thomas Barrett wrote:
> > > > This series adds support for cloud-hypervisor guests with >1TiB
> > > > of RAM to Ovmf. This bug was solved for Qemu back in 2017
> > > > https://bugzilla.redhat.com/show_bug.cgi?id=1468526. The bug is
> > > > still occurring for CloudHv guests because the PlatformScanE820
> > > > utility is not currently supported for CloudHv.
> > > >
> > > > My working branch for these changes can be found here:
> > > > https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram
> > > >
> > > > Cc: Anatol Belski <anbelski@linux.microsoft.com>
> > > > Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
> > > > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > > > Cc: Jianyong Wu <jianyong.wu@arm.com>
> > > > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > > > Cc: Laszlo Ersek <lersek@redhat.com>
> > > > Cc: Rob Bradford <rbradford@rivosinc.com>
> > > >
> > > > Thomas Barrett (3):
> > > >   OvmfPkg: Add CloudHv support to PlatformScanE820 utility
> > function.
> > > >   OvmfPkg: Update PlatformAddressWidthInitialization for
> > CloudHv
> > > >   OvmfPkg: CloudHv: Enable PcdUse1GPageTable
> > > >
> > > >  OvmfPkg/CloudHv/CloudHvX64.dsc              |   2 +
> > > >  OvmfPkg/Library/PlatformInitLib/MemDetect.c | 107
> > ++++++++++++++------
> > > >  2 files changed, 79 insertions(+), 30 deletions(-)
> > > >
> > >
> > > Thanks for posting v3, this one looks well-formed, so I can make
> > some
> > > comments.
> > >
> > > TBH, I'm super uncomfortable with a new bunch of
> > CLOUDHV_DEVICE_ID
> > > branches introduced to PlatformInitLib.
> > >
> > > OVMF supports multiple hypervisors, and the general idea has been
> > that a
> > > single module (or even a single platform DSC) should preferably
> > not
> > > attempt to support multiple hypervisors. That's why we have a
> > separate
> > > Xen platform, and a big number of Xen-customized, standalone
> > modules.
> > >
> > > The idea with this is that any particular developer is very
> > unlikely to
> > > develop for, and test on, multiple hypervisors. Therefore
> > unification (=
> > > elimination of all possible code duplication) between distinct
> > > hypervisor code snippets is actually detrimental for maintenance,
> > > because it technically enables a developer to regress a platform
> > that
> > > they never actively work with.
> > >
> > > I believe bhyve is similarly separated out (just like Xen).
> > >
> > > It's one thing to collect support for multiple board types
> > (machine
> > > types) for QEMU into a given module; that's still the same
> > hypervisor --
> > > testing only requires a different "-M" option on the qemu command
> > line.
> > >
> > > But fusing Cloud Hypervisor code with QEMU code makes me fidget
> > in my seat.
> > >
> > > I've now grepped the OvmfPkg directory tree for existent
> > instances of
> > > CLOUDHV_DEVICE_ID, and I'm very much not liking the result list.
> > It has
> > > seeped into a whole bunch of modules that should only be QEMU. If
> > you
> > > need those modules customized for CloudHv, it'd be best to detach
> > them
> > > for CloudHv, and eliminate the dead code (such as anything that
> > depends
> > > on QEMU fw_cfg), and *enforce* that the underlying platform is
> > CloudHv.
> > > Both communities will benefit. Again, this is all based on the
> > empirical
> > > fact that QEMU and CloudHv developers don't tend to cross-
> > develop.
> > >
> > > I understand that it's always just a small addition; in each
> > specific
> > > case, it's just one or two more "if"s in common code. But the end
> > result
> > > is terrible to maintain in the long term.
> > >
> > > Of course this all depends on having a separate platform DSC for
> > > CloudHv, but that one already exists: "CloudHv/CloudHvX64.dsc".
> > >
> > > So I'd suggest (a) isolating current CloudHv logic to new library
> > > instances, drivers, etc, (b) adding this enhancement to CloudHv's
> > own
> > > instance of PlatformInitLib.
> > >
> > > Counter-arguments, objections etc welcome -- feel free to prove
> > me wrong
> > > (e.g. whatever I'm saying about prior art / status quo).
> > >
> > > Also I'm not necessarily blocking this patch set; if others don't
> > mind
> > > this, they can ACK it (and I won't NACK).
> > >
> > 
> > Thanks Laszlo.
> > 
> > I'm afraid I seem to have made your last point moot :-)
> > 
> > But I agree with your points above: EDK2 makes the fork&tweak
> > approach
> > very easy, so CloudHv can keep its own versions of many of the QEMU
> > specific glue libraries. It does, however, require a certain degree
> > of
> > hygiene on the part of the developer to introduce abstractions
> > where
> > possible, to avoid forking huge drivers or libraries for a 2 line
> > delta between QEMU and CloudHv.
> > 
> > So let's decree that future CloudHv contributions that follow this
> > old
> > pattern will not be considered until/unless there is some
> > confidence
> > on our part that there is a long term plan in motion that cleans
> > this
> > all up and repays the accumulated technical debt.
> 





Re: [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
Posted by Gerd Hoffmann 3 months, 2 weeks ago
On Fri, Jan 12, 2024 at 06:31:41PM +0000, Thomas Barrett wrote:
> This series adds support for cloud-hypervisor guests with >1TiB
> of RAM to Ovmf. This bug was solved for Qemu back in 2017
> https://bugzilla.redhat.com/show_bug.cgi?id=1468526. The bug is
> still occurring for CloudHv guests because the PlatformScanE820
> utility is not currently supported for CloudHv.
> 
> My working branch for these changes can be found here:
> https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram
> 
> Cc: Anatol Belski <anbelski@linux.microsoft.com>
> Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: Jianyong Wu <jianyong.wu@arm.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Cc: Rob Bradford <rbradford@rivosinc.com>
> 
> Thomas Barrett (3):
>   OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
>   OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
>   OvmfPkg: CloudHv: Enable PcdUse1GPageTable

Series:
Acked-by: Gerd Hoffmann <kraxel@redhat.com>

take care,
  Gerd



Re: [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
Posted by Ard Biesheuvel 3 months, 2 weeks ago
On Mon, 15 Jan 2024 at 16:27, Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> On Fri, Jan 12, 2024 at 06:31:41PM +0000, Thomas Barrett wrote:
> > This series adds support for cloud-hypervisor guests with >1TiB
> > of RAM to Ovmf. This bug was solved for Qemu back in 2017
> > https://bugzilla.redhat.com/show_bug.cgi?id=1468526. The bug is
> > still occurring for CloudHv guests because the PlatformScanE820
> > utility is not currently supported for CloudHv.
> >
> > My working branch for these changes can be found here:
> > https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram
> >
> > Cc: Anatol Belski <anbelski@linux.microsoft.com>
> > Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
> > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > Cc: Jianyong Wu <jianyong.wu@arm.com>
> > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > Cc: Laszlo Ersek <lersek@redhat.com>
> > Cc: Rob Bradford <rbradford@rivosinc.com>
> >
> > Thomas Barrett (3):
> >   OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
> >   OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
> >   OvmfPkg: CloudHv: Enable PcdUse1GPageTable
>
> Series:
> Acked-by: Gerd Hoffmann <kraxel@redhat.com>
>

Thanks, I've queued this up.


Re: [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
Posted by Ard Biesheuvel 3 months, 2 weeks ago
On Mon, 15 Jan 2024 at 16:32, Ard Biesheuvel <ardb@kernel.org> wrote:
>
> On Mon, 15 Jan 2024 at 16:27, Gerd Hoffmann <kraxel@redhat.com> wrote:
> >
> > On Fri, Jan 12, 2024 at 06:31:41PM +0000, Thomas Barrett wrote:
> > > This series adds support for cloud-hypervisor guests with >1TiB
> > > of RAM to Ovmf. This bug was solved for Qemu back in 2017
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1468526. The bug is
> > > still occurring for CloudHv guests because the PlatformScanE820
> > > utility is not currently supported for CloudHv.
> > >
> > > My working branch for these changes can be found here:
> > > https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram
> > >
> > > Cc: Anatol Belski <anbelski@linux.microsoft.com>
> > > Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
> > > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > > Cc: Jianyong Wu <jianyong.wu@arm.com>
> > > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > > Cc: Laszlo Ersek <lersek@redhat.com>
> > > Cc: Rob Bradford <rbradford@rivosinc.com>
> > >
> > > Thomas Barrett (3):
> > >   OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
> > >   OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
> > >   OvmfPkg: CloudHv: Enable PcdUse1GPageTable
> >
> > Series:
> > Acked-by: Gerd Hoffmann <kraxel@redhat.com>
> >
>
> Thanks, I've queued this up.

Merged as #5262


Re: [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
Posted by Laszlo Ersek 3 months, 2 weeks ago
On 1/15/24 17:13, Ard Biesheuvel wrote:
> On Mon, 15 Jan 2024 at 16:32, Ard Biesheuvel <ardb@kernel.org> wrote:
>>
>> On Mon, 15 Jan 2024 at 16:27, Gerd Hoffmann <kraxel@redhat.com> wrote:
>>>
>>> On Fri, Jan 12, 2024 at 06:31:41PM +0000, Thomas Barrett wrote:
>>>> This series adds support for cloud-hypervisor guests with >1TiB
>>>> of RAM to Ovmf. This bug was solved for Qemu back in 2017
>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1468526. The bug is
>>>> still occurring for CloudHv guests because the PlatformScanE820
>>>> utility is not currently supported for CloudHv.
>>>>
>>>> My working branch for these changes can be found here:
>>>> https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram
>>>>
>>>> Cc: Anatol Belski <anbelski@linux.microsoft.com>
>>>> Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
>>>> Cc: Gerd Hoffmann <kraxel@redhat.com>
>>>> Cc: Jianyong Wu <jianyong.wu@arm.com>
>>>> Cc: Jiewen Yao <jiewen.yao@intel.com>
>>>> Cc: Laszlo Ersek <lersek@redhat.com>
>>>> Cc: Rob Bradford <rbradford@rivosinc.com>
>>>>
>>>> Thomas Barrett (3):
>>>>   OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
>>>>   OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
>>>>   OvmfPkg: CloudHv: Enable PcdUse1GPageTable
>>>
>>> Series:
>>> Acked-by: Gerd Hoffmann <kraxel@redhat.com>
>>>
>>
>> Thanks, I've queued this up.
> 
> Merged as #5262
> 

I was late to the party, but that's not a problem; as I said I would
be, I am fine with this being approved by other reviewers.

Thanks
Laszlo


