When running out of memory, get a chunk of memory from ZoneTmpHigh to
expand ZoneHigh.  Drop the similar logic from the pmm code because it's
not needed any more.

This fixes some scalability problems, for example with lots of vcpus,
where seabios runs out of memory due to large smbios/acpi tables.

Gerd Hoffmann (2):
  malloc: add on demand ZoneHigh expansion support
  Revert "pmm: use tmp zone on oom"

 src/malloc.c | 16 ++++++++++++++++
 src/pmm.c    | 13 -------------
 2 files changed, 16 insertions(+), 13 deletions(-)

-- 
2.35.1
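For illustration, here is a minimal C sketch of the on-demand expansion
idea described above.  It is not the code from the patch: the zone
layout, the names (zone, EXPAND_CHUNK, zone_alloc) and the grow step
are assumptions, and the real SeaBIOS malloc code tracks multiple free
regions per zone rather than a single cursor:

    /* Sketch only: assumed names and sizes, not the actual patch. */
    #include <stddef.h>
    #include <stdint.h>

    #define EXPAND_CHUNK (64 * 1024)   /* assumed grow step */

    struct zone {
        uintptr_t cur;   /* next free byte */
        uintptr_t end;   /* one past the last byte of the zone */
    };

    static struct zone ZoneHigh, ZoneTmpHigh;

    static void *zone_alloc(struct zone *z, size_t size)
    {
        if (z->end - z->cur >= size) {
            void *p = (void *)z->cur;    /* fits in the current region */
            z->cur += size;
            return p;
        }
        /* Out of space: expand by stealing a chunk from ZoneTmpHigh.
         * Real code would keep the old tail on a free list; this
         * sketch abandons it for brevity. */
        size_t chunk = size > EXPAND_CHUNK ? size : EXPAND_CHUNK;
        if (ZoneTmpHigh.end - ZoneTmpHigh.cur < chunk)
            return NULL;                 /* donor zone exhausted too */
        ZoneTmpHigh.end -= chunk;        /* shrink the donor from the top */
        z->cur = ZoneTmpHigh.end;        /* graft the chunk onto the zone */
        z->end = ZoneTmpHigh.end + chunk;
        void *p = (void *)z->cur;
        z->cur += size;
        return p;
    }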
On Thu, Apr 21, 2022 at 11:33:24AM +0200, Gerd Hoffmann wrote:
> When running out of memory, get a chunk of memory from ZoneTmpHigh to
> expand ZoneHigh.  Drop the similar logic from the pmm code because
> it's not needed any more.
>
> This fixes some scalability problems, for example with lots of vcpus,
> where seabios runs out of memory due to large smbios/acpi tables.

I'm not sure this is a good idea, because it could cause subtle issues
with reproducibility.  SeaBIOS does not have a deterministic ordering
to memory allocations because of its implementation of "threads"
(coroutines).  Should permanent allocations need to spill over to
ZoneTmpHigh, it will likely result in a fragmented e820 memory map.
In that case, there is a good chance that different bootups will have
different e820 maps, which may result in different OS behavior.

The goal of ZoneHigh was to be the maximum amount of space needed.
Unused space gets returned to the e820 map before boot, so there is
generally not much harm in increasing it.  Order of allocations in the
ZoneHigh region is less important because we generally don't free
allocations in that zone.

IIRC, the pmm ZoneTmpHigh hack was primarily intended for ridiculously
large allocations (like framebuffers) where allocating from the e820
map was the only feasible solution.

What's using the ZoneHigh region that is so large that we need to
expand it?

-Kevin
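To make the reproducibility concern concrete, a hypothetical example
(all addresses invented) of how a spill into ZoneTmpHigh could leave
different e820 maps on two otherwise identical boots, depending on
which coroutine's allocation happened to overflow first:

    Boot A:
      0x00000000-0x7fee0000  RAM
      0x7fee0000-0x7ff00000  RESERVED   <- spilled permanent allocations
      0x7ff00000-0x7ff80000  RAM
      0x7ff80000-0x80000000  RESERVED   <- ZoneHigh

    Boot B (different coroutine interleaving):
      0x00000000-0x7fed0000  RAM
      0x7fed0000-0x7fef0000  RESERVED   <- spill at a different offset/size
      0x7fef0000-0x7ff80000  RAM
      0x7ff80000-0x80000000  RESERVED   <- ZoneHigh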
Hi,

> Unused space gets returned to the e820 map before boot, so there is
> generally not much harm in increasing it.

Ah, missed that detail ...  So simply bumping it to 1M or so is fine
and does not waste memory?

> What's using the ZoneHigh region that is so large that we need to
> expand it?

zonehigh is 256k.  The largest allocations usually are the acpi tables
(128k for a typical q35 config), and that is the one which typically
fails when the other allocations sum up to more than 128k, leaving
less than 128k of free space.

Lots of vcpus (~800 IIRC) leading to large smbios tables are one way
to trigger this.  I've also seen allocation failures with many disk
devices, although I'm not sure whether that was zonehigh or zonelow.
The change to only initialize bootable disks should help with that one
too.

take care,
  Gerd
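The arithmetic behind that failure mode, with an assumed (purely
illustrative) breakdown for a large-vcpu guest:

    ZoneHigh total:                       256 KiB
    smbios tables + misc (~800 vcpus):   ~140 KiB   (assumed figure)
    free space remaining:                ~116 KiB
    acpi tables request:                  128 KiB   -> allocation fails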
On Thu, 21 Apr 2022 11:33:24 +0200
Gerd Hoffmann <kraxel@redhat.com> wrote:

> When running out of memory, get a chunk of memory from ZoneTmpHigh to
> expand ZoneHigh.  Drop the similar logic from the pmm code because
> it's not needed any more.
>
> This fixes some scalability problems, for example with lots of vcpus,
> where seabios runs out of memory due to large smbios/acpi tables.

"lots of vcpus" is a bit vague; it would be nice to have a reproducer
CLI mentioned here, or even better in a commit message.

> Gerd Hoffmann (2):
>   malloc: add on demand ZoneHigh expansion support
>   Revert "pmm: use tmp zone on oom"
>
>  src/malloc.c | 16 ++++++++++++++++
>  src/pmm.c    | 13 -------------
>  2 files changed, 16 insertions(+), 13 deletions(-)
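An untested sketch of what such a reproducer might look like: a
large-vcpu q35 guest, based on the ~800 vcpu figure mentioned above
(splitting the irqchip and enabling EIM on the IOMMU is typically
needed for more than 255 vcpus; exact options may vary):

    qemu-system-x86_64 -machine q35,kernel-irqchip=split \
        -device intel-iommu,intremap=on,eim=on \
        -smp 800 -m 8G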