With the current limit set to match the max spec size (~2 PiB),
Windows fails to parse type 17 records when DIMM size reaches 4 TiB+.
The failure happens in the GetPhysicallyInstalledSystemMemory() function,
and it fails the "Check SMBIOS System Memory Tables" SVVP test.
Though not fatal, it might cause issues for userspace apps,
something like [1].

Let's cap the default DIMM size at 2 TiB for now, until MS fixes it.
1) https://issues.redhat.com/browse/RHEL-81999?focusedId=27731200&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-27731200
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
PS: It's obviously 32-bit integer overflow math somewhere in Windows;
MS admitted that it's a Windows bug and is in the process of fixing it.
However, it's unclear if W10 and earlier will get the fix.
So much as I dislike changing defaults, we need to work around
the issue (it looks like a QEMU regression while not being one).
Hopefully the 2 TiB/DIMM split will last until VM memory sizes
become large enough to cause the too-many-type-17-records issue
again.
PS2:
Alternatively, instead of messing with defaults, I can create
a dedicated knob to ask for the desired DIMM size cap explicitly
on the CLI. That would let users enable the workaround when they
hit this corner case. The downside is that the knob has to be
propagated up the whole mgmt stack, which might not be desirable.
---
hw/arm/virt.c | 1 +
hw/core/machine.c | 5 ++++-
hw/i386/pc_piix.c | 1 +
hw/i386/pc_q35.c | 1 +
4 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 9326cfc895..4100c4ff1e 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -3463,6 +3463,7 @@ DEFINE_VIRT_MACHINE_AS_LATEST(10, 2)
static void virt_machine_10_1_options(MachineClass *mc)
{
virt_machine_10_2_options(mc);
+ mc->smbios_memory_device_size = 2047 * TiB;
compat_props_add(mc->compat_props, hw_compat_10_1, hw_compat_10_1_len);
}
DEFINE_VIRT_MACHINE(10, 1)
diff --git a/hw/core/machine.c b/hw/core/machine.c
index 38c949c4f2..ac00e72127 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -1115,8 +1115,11 @@ static void machine_class_init(ObjectClass *oc, const void *data)
* SMBIOS 3.1.0 7.18.5 Memory Device — Extended Size
* use max possible value that could be encoded into
* 'Extended Size' field (2047Tb).
+ *
+ * Unfortunately, (current) Windows Server 2025 and earlier do not handle
+ * DIMM sizes of 4 TiB and larger.
*/
- mc->smbios_memory_device_size = 2047 * TiB;
+ mc->smbios_memory_device_size = 2 * TiB;
/* numa node memory size aligned on 8MB by default.
* On Linux, each node's border has to be 8MB aligned
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index d165ac72ed..eafa081825 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -514,6 +514,7 @@ DEFINE_I440FX_MACHINE_AS_LATEST(10, 2);
static void pc_i440fx_machine_10_1_options(MachineClass *m)
{
pc_i440fx_machine_10_2_options(m);
+ m->smbios_memory_device_size = 2047 * TiB;
compat_props_add(m->compat_props, hw_compat_10_1, hw_compat_10_1_len);
compat_props_add(m->compat_props, pc_compat_10_1, pc_compat_10_1_len);
}
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index e89951285e..6015e639d7 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -384,6 +384,7 @@ DEFINE_Q35_MACHINE_AS_LATEST(10, 2);
static void pc_q35_machine_10_1_options(MachineClass *m)
{
pc_q35_machine_10_2_options(m);
+ m->smbios_memory_device_size = 2047 * TiB;
compat_props_add(m->compat_props, hw_compat_10_1, hw_compat_10_1_len);
compat_props_add(m->compat_props, pc_compat_10_1, pc_compat_10_1_len);
}
--
2.47.1
On Mon, Sep 01, 2025 at 10:49:15AM +0200, Igor Mammedov wrote:
> With the current limit set to match the max spec size (~2 PiB),
> Windows fails to parse type 17 records when DIMM size reaches 4 TiB+.
> The failure happens in the GetPhysicallyInstalledSystemMemory() function,
> and it fails the "Check SMBIOS System Memory Tables" SVVP test.
> Though not fatal, it might cause issues for userspace apps,
> something like [1].
>
> Let's cap the default DIMM size at 2 TiB for now, until MS fixes it.
>
> 1) https://issues.redhat.com/browse/RHEL-81999?focusedId=27731200&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-27731200

Why link to a comment that refers to a link to Adobe photoshop
and doesn't mention KVM ???

Also should have:

Fixes: 62f182c97b31445012d37181005a83ff8453edaa

> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> ---
> PS: It's obviously 32-bit integer overflow math somewhere in Windows;
> MS admitted that it's a Windows bug and is in the process of fixing it.
> However, it's unclear if W10 and earlier will get the fix.
> So much as I dislike changing defaults, we need to work around
> the issue (it looks like a QEMU regression while not being one).
> Hopefully the 2 TiB/DIMM split will last until VM memory sizes
> become large enough to cause the too-many-type-17-records issue
> again.
> PS2:
> Alternatively, instead of messing with defaults, I can create
> a dedicated knob to ask for the desired DIMM size cap explicitly
> on the CLI. That would let users enable the workaround when they
> hit this corner case. The downside is that the knob has to be
> propagated up the whole mgmt stack, which might not be desirable.

How many type 17 records can we get before hitting the Linux limits
which was the motivation for the previous fix
62f182c97b31445012d37181005a83ff8453edaa?

ie, with this 2 TiB DIMM size, what is our effective maximum
RAM size?
With regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
On Mon, 1 Sep 2025 10:08:31 +0100
Daniel P. Berrangé <berrange@redhat.com> wrote:

> On Mon, Sep 01, 2025 at 10:49:15AM +0200, Igor Mammedov wrote:
> > With the current limit set to match the max spec size (~2 PiB),
> > Windows fails to parse type 17 records when DIMM size reaches 4 TiB+.
> > The failure happens in the GetPhysicallyInstalledSystemMemory() function,
> > and it fails the "Check SMBIOS System Memory Tables" SVVP test.
> > Though not fatal, it might cause issues for userspace apps,
> > something like [1].
> >
> > Let's cap the default DIMM size at 2 TiB for now, until MS fixes it.
> >
> > 1) https://issues.redhat.com/browse/RHEL-81999?focusedId=27731200&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-27731200
>
> Why link to a comment that refers to a link to Adobe photoshop
> and doesn't mention KVM ???

It's just to demo that GetPhysicallyInstalledSystemMemory()
can cause issues (I doubt very much it's our case, since it's
unlikely that there are physical DIMMs of 4 TiB+ size in the wild).
Let's drop it.

I'd expect issues in various inventory software, and honestly I'd kick
this very niche problem in MS's direction. I resisted long enough, until
things got moving towards a resolution on the MS side, but folks insist
on a workaround in QEMU. If W10 isn't going to get the fix, then QEMU is
the only option to work around the bug.

PS: I forgot another workaround (QEMU): configure initial RAM to be less
than 4 TiB and add all additional RAM as DIMMs on the QEMU CLI.
(However, that's a job to be done by mgmt, which would know the Windows
version and the total amount of RAM.)

PS2: I'm fine with dropping the idea of patching QEMU defaults in favor
of yet another way to work around the issue.

> Also should have:
>
> Fixes: 62f182c97b31445012d37181005a83ff8453edaa
>
> > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> > ---
> > PS: It's obviously 32-bit integer overflow math somewhere in Windows;
> > MS admitted that it's a Windows bug and is in the process of fixing it.
> > However, it's unclear if W10 and earlier will get the fix.
> > So much as I dislike changing defaults, we need to work around
> > the issue (it looks like a QEMU regression while not being one).
> > Hopefully the 2 TiB/DIMM split will last until VM memory sizes
> > become large enough to cause the too-many-type-17-records issue
> > again.
> > PS2:
> > Alternatively, instead of messing with defaults, I can create
> > a dedicated knob to ask for the desired DIMM size cap explicitly
> > on the CLI. That would let users enable the workaround when they
> > hit this corner case. The downside is that the knob has to be
> > propagated up the whole mgmt stack, which might not be desirable.
>
> How many type 17 records can we get before hitting the Linux limits
> which was the motivation for the previous fix
> 62f182c97b31445012d37181005a83ff8453edaa?
>
> ie, with this 2 TiB DIMM size, what is our effective maximum
> RAM size?

It would be around 2 PiB (assuming ~1000 records; the exact number
varies with configuration, since the rest of the SMBIOS tables take
up part of that 64K buffer).

> With regards,
> Daniel