[PATCH] x86/cpu: Handle SMM mode in x86_cpu_dump_state for softmmu

Kirill Martynov posted 1 patch 5 months, 3 weeks ago
Patches applied successfully (tree, apply log)
git fetch https://github.com/patchew-project/qemu tags/patchew/20250523154431.506993-1-stdcalllevi@yandex-team.ru
Maintainers: Paolo Bonzini <pbonzini@redhat.com>, Zhao Liu <zhao1.liu@intel.com>
target/i386/cpu.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
[PATCH] x86/cpu: Handle SMM mode in x86_cpu_dump_state for softmmu
Posted by Kirill Martynov 5 months, 3 weeks ago
Certain error conditions can trigger x86_cpu_dump_state() to output CPU state
debug information, e.g. a KVM emulation failure due to a misbehaving guest.
However, if the CPU is in System Management Mode (SMM) at that point, the
assertion in cpu_asidx_from_attrs() fails because:

1. In SMM mode (smm=1), the CPU must use multiple address spaces
   with a dedicated SMM address space
2. On machine types with softmmu, address spaces are hardcoded to 1
   (no multiple address spaces available)

The assertion occurs in cpu_asidx_from_attrs() when trying to
access memory in SMM mode with insufficient address spaces.
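
For reference, the failing check looks roughly like this (paraphrased from
hw/core/cpu-sysemu.c; the exact form may differ between versions):

int cpu_asidx_from_attrs(CPUState *cpu, MemTxAttrs attrs)
{
    CPUClass *cc = CPU_GET_CLASS(cpu);
    int ret = 0;

    if (cc->sysemu_ops->asidx_from_attrs) {
        ret = cc->sysemu_ops->asidx_from_attrs(cpu, attrs);
        /* Fires here: the x86 hook returns 1 for SMM, but with KVM
         * cpu->num_ases is 1. */
        assert(ret < cpu->num_ases && ret >= 0);
    }
    return ret;
}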

Fix this by:
1. If the number of address spaces is 1, always use index 0
2. Otherwise, use attrs.secure to identify the proper index

This prevents the assertion while still providing useful debug
output during VM shutdown errors.

Stack trace of the original issue:
#0  ... in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1  ... in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2  ... in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#3  ... in __assert_fail () from /lib/x86_64-linux-gnu/libc.so.6
#4  ... in cpu_asidx_from_attrs (cpu=cpu@entry=0x5578ca2eb340, attrs=...)
   at ../hw/core/cpu-sysemu.c:76
#5  ... in cpu_memory_rw_debug (cpu=cpu@entry=0x5578ca2eb340,
   addr=addr@entry=2147258348, ptr=ptr@entry=0x7f5341ca373c, len=len@entry=1,
    is_write=is_write@entry=false) at ../softmmu/physmem.c:3529
#6  ... in x86_cpu_dump_state (cs=0x5578ca2eb340,
   f=0x7f53434065c0 <_IO_2_1_stderr_>, flags=<optimized out>)
   at ../target/i386/cpu-dump.c:560
#7  ... in kvm_cpu_exec (cpu=cpu@entry=0x5578ca2eb340)
   at ../accel/kvm/kvm-all.c:3000
#8  ... in kvm_vcpu_thread_fn (arg=arg@entry=0x5578ca2eb340)
   at ../accel/kvm/kvm-accel-ops.c:51
#9  ... in qemu_thread_start (args=<optimized out>)
   at ../util/qemu-thread-posix.c:505
#10 ... in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#11 ... in clone () from /lib/x86_64-linux-gnu/libc.so.6

Signed-off-by: Kirill Martynov <stdcalllevi@yandex-team.ru>
---
 target/i386/cpu.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index c51e0a43d0..2616a61c87 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -2507,7 +2507,7 @@ void cpu_sync_avx_hflag(CPUX86State *env);
 #ifndef CONFIG_USER_ONLY
 static inline int x86_asidx_from_attrs(CPUState *cs, MemTxAttrs attrs)
 {
-    return !!attrs.secure;
+    return cs->num_ases == 1 ? 0 : (!!attrs.secure);
 }
 
 static inline AddressSpace *cpu_addressspace(CPUState *cs, MemTxAttrs attrs)
-- 
2.43.0
Re: [PATCH] x86/cpu: Handle SMM mode in x86_cpu_dump_state for softmmu
Posted by Xiaoyao Li 4 months, 2 weeks ago
On 5/23/2025 11:44 PM, Kirill Martynov wrote:
> Certain error conditions can trigger x86_cpu_dump_state() to output CPU state
> debug information e.g. KVM emulation failure due to misbehaving guest.
> However, if the CPU is in System Management Mode (SMM) when the assertion
> in cpu_asidx_from_attrs failure happens because:
> 
> 1. In SMM mode (smm=1), the CPU must use multiple address spaces
>     with a dedicated SMM address space
> 2. On machine types with softmmu, address spaces are hardcoded to 1
>     (no multiple address spaces available)
> 
> The assertion occurs in cpu_asidx_from_attrs() when trying to
> access memory in SMM mode with insufficient address spaces.
> 
> Fix this by:
> 1. If number of address spaces is 1 always use index 0
> 2. In other cases use attr.secure for identified proper index
> 
> This prevents the assertion while still providing useful debug
> output during VM shutdown errors.

To me, it's just a workaround to avoid the assertion.

When attrs.secure is 1, it means it's in SMM mode. As you describe above,

 > 1. In SMM mode (smm=1), the CPU must use multiple address spaces
 >     with a dedicated SMM address space

So I think we need to first figure out why attrs.secure ends up as 1
when there is only 1 address space.
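
(For reference, on x86 attrs.secure appears to be derived from the SMM hflag;
roughly, paraphrasing target/i386/cpu.h:

static inline MemTxAttrs cpu_get_mem_attrs(CPUX86State *env)
{
    /* attrs.secure mirrors whether the vCPU is currently in SMM */
    return ((MemTxAttrs) { .secure = (env->hflags & HF_SMM_MASK) != 0 });
}

so a vCPU that happens to be in SMM when the dump runs will always ask for
address space index 1, regardless of how many address spaces were created.)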

> Stack trace of the original issue:
> #0  ... in raise () from /lib/x86_64-linux-gnu/libc.so.6
> #1  ... in abort () from /lib/x86_64-linux-gnu/libc.so.6
> #2  ... in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> #3  ... in __assert_fail () from /lib/x86_64-linux-gnu/libc.so.6
> #4  ... in cpu_asidx_from_attrs (cpu=cpu@entry=0x5578ca2eb340, attrs=...)
>     at ../hw/core/cpu-sysemu.c:76
> #5  ... in cpu_memory_rw_debug (cpu=cpu@entry=0x5578ca2eb340,
>     addr=addr@entry=2147258348, ptr=ptr@entry=0x7f5341ca373c, len=len@entry=1,
>      is_write=is_write@entry=false) at ../softmmu/physmem.c:3529
> #6  ... in x86_cpu_dump_state (cs=0x5578ca2eb340,
>     f=0x7f53434065c0 <_IO_2_1_stderr_>, flags=<optimized out>)
>     at ../target/i386/cpu-dump.c:560
> #7  ... in kvm_cpu_exec (cpu=cpu@entry=0x5578ca2eb340)
>     at ../accel/kvm/kvm-all.c:3000
> #8  ... in kvm_vcpu_thread_fn (arg=arg@entry=0x5578ca2eb340)
>     at ../accel/kvm/kvm-accel-ops.c:51
> #9  ... in qemu_thread_start (args=<optimized out>)
>     at ../util/qemu-thread-posix.c:505
> #10 ... in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
> #11 ... in clone () from /lib/x86_64-linux-gnu/libc.so.6
> 
> Signed-off-by: Kirill Martynov <stdcalllevi@yandex-team.ru>
> ---
>   target/i386/cpu.h | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/target/i386/cpu.h b/target/i386/cpu.h
> index c51e0a43d0..2616a61c87 100644
> --- a/target/i386/cpu.h
> +++ b/target/i386/cpu.h
> @@ -2507,7 +2507,7 @@ void cpu_sync_avx_hflag(CPUX86State *env);
>   #ifndef CONFIG_USER_ONLY
>   static inline int x86_asidx_from_attrs(CPUState *cs, MemTxAttrs attrs)
>   {
> -    return !!attrs.secure;
> +    return cs->num_ases == 1 ? 0 : (!!attrs.secure);
>   }
>   
>   static inline AddressSpace *cpu_addressspace(CPUState *cs, MemTxAttrs attrs)
Re: [PATCH] x86/cpu: Handle SMM mode in x86_cpu_dump_state for softmmu
Posted by Xiaoyao Li 4 months, 2 weeks ago
On 7/2/2025 10:16 PM, Xiaoyao Li wrote:
> On 5/23/2025 11:44 PM, Kirill Martynov wrote:
>> Certain error conditions can trigger x86_cpu_dump_state() to output 
>> CPU state
>> debug information e.g. KVM emulation failure due to misbehaving guest.
>> However, if the CPU is in System Management Mode (SMM) when the assertion
>> in cpu_asidx_from_attrs failure happens because:
>>
>> 1. In SMM mode (smm=1), the CPU must use multiple address spaces
>>     with a dedicated SMM address space
>> 2. On machine types with softmmu, address spaces are hardcoded to 1
>>     (no multiple address spaces available)
>>
>> The assertion occurs in cpu_asidx_from_attrs() when trying to
>> access memory in SMM mode with insufficient address spaces.
>>
>> Fix this by:
>> 1. If number of address spaces is 1 always use index 0
>> 2. In other cases use attr.secure for identified proper index
>>
>> This prevents the assertion while still providing useful debug
>> output during VM shutdown errors.
> 
> To me,  it's just a workaround to avoid the assertion.
> 
> When attrs.secure is 1, it means it's in SMM mode. As you describe above,
> 
>  > 1. In SMM mode (smm=1), the CPU must use multiple address spaces
>  >     with a dedicated SMM address space
> 
> So I think we need to first figure out why it gets attrs.secure as 1 
> when there is only 1 address space.

Ah, with KVM, QEMU can only support 1 address space.

It's not that

   2. On machine types with softmmu, address spaces are hardcoded to 1
      (no multiple address spaces available)

because TCG does support 2 address spaces. So please update the commit
message in a v2.
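
For comparison, on the TCG path the x86 realize code sets up two CPU address
spaces, roughly like this (a sketch paraphrasing target/i386/cpu.c; details
may differ between versions):

/* TCG-only part of x86_cpu_realizefn(): a per-CPU root region with an
 * SMRAM overlay, exposed as a second CPU address space. */
cs->num_ases = 2;
cpu_address_space_init(cs, 0, "cpu-memory", cs->memory);
cpu_address_space_init(cs, 1, "cpu-smm", cpu->cpu_as_root);

With KVM, qemu_init_vcpu() instead falls back to the single default
"cpu-memory" address space, which is why the assertion is only seen there.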

>> Stack trace of the original issue:
>> #0  ... in raise () from /lib/x86_64-linux-gnu/libc.so.6
>> #1  ... in abort () from /lib/x86_64-linux-gnu/libc.so.6
>> #2  ... in ?? () from /lib/x86_64-linux-gnu/libc.so.6
>> #3  ... in __assert_fail () from /lib/x86_64-linux-gnu/libc.so.6
>> #4  ... in cpu_asidx_from_attrs (cpu=cpu@entry=0x5578ca2eb340, attrs=...)
>>     at ../hw/core/cpu-sysemu.c:76
>> #5  ... in cpu_memory_rw_debug (cpu=cpu@entry=0x5578ca2eb340,
>>     addr=addr@entry=2147258348, ptr=ptr@entry=0x7f5341ca373c, 
>> len=len@entry=1,
>>      is_write=is_write@entry=false) at ../softmmu/physmem.c:3529
>> #6  ... in x86_cpu_dump_state (cs=0x5578ca2eb340,
>>     f=0x7f53434065c0 <_IO_2_1_stderr_>, flags=<optimized out>)
>>     at ../target/i386/cpu-dump.c:560
>> #7  ... in kvm_cpu_exec (cpu=cpu@entry=0x5578ca2eb340)
>>     at ../accel/kvm/kvm-all.c:3000
>> #8  ... in kvm_vcpu_thread_fn (arg=arg@entry=0x5578ca2eb340)
>>     at ../accel/kvm/kvm-accel-ops.c:51
>> #9  ... in qemu_thread_start (args=<optimized out>)
>>     at ../util/qemu-thread-posix.c:505
>> #10 ... in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
>> #11 ... in clone () from /lib/x86_64-linux-gnu/libc.so.6
>>
>> Signed-off-by: Kirill Martynov <stdcalllevi@yandex-team.ru>
>> ---
>>   target/i386/cpu.h | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/target/i386/cpu.h b/target/i386/cpu.h
>> index c51e0a43d0..2616a61c87 100644
>> --- a/target/i386/cpu.h
>> +++ b/target/i386/cpu.h
>> @@ -2507,7 +2507,7 @@ void cpu_sync_avx_hflag(CPUX86State *env);
>>   #ifndef CONFIG_USER_ONLY
>>   static inline int x86_asidx_from_attrs(CPUState *cs, MemTxAttrs attrs)
>>   {
>> -    return !!attrs.secure;
>> +    return cs->num_ases == 1 ? 0 : (!!attrs.secure);

I would suggest a comment like:

	/*
	 * In some cases, only one address space is supported,
	 * e.g., with KVM.
	 */

With that and the commit message rectified,

Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>

>>   }
>>   static inline AddressSpace *cpu_addressspace(CPUState *cs, 
>> MemTxAttrs attrs)
> 



Re: [PATCH] x86/cpu: Handle SMM mode in x86_cpu_dump_state for softmmu
Posted by Xiaoyao Li 4 months, 2 weeks ago
On 7/2/2025 11:10 PM, Xiaoyao Li wrote:
> On 7/2/2025 10:16 PM, Xiaoyao Li wrote:
>> On 5/23/2025 11:44 PM, Kirill Martynov wrote:
>>> Certain error conditions can trigger x86_cpu_dump_state() to output 
>>> CPU state
>>> debug information e.g. KVM emulation failure due to misbehaving guest.
>>> However, if the CPU is in System Management Mode (SMM) when the 
>>> assertion
>>> in cpu_asidx_from_attrs failure happens because:
>>>
>>> 1. In SMM mode (smm=1), the CPU must use multiple address spaces
>>>     with a dedicated SMM address space
>>> 2. On machine types with softmmu, address spaces are hardcoded to 1
>>>     (no multiple address spaces available)
>>>
>>> The assertion occurs in cpu_asidx_from_attrs() when trying to
>>> access memory in SMM mode with insufficient address spaces.
>>>
>>> Fix this by:
>>> 1. If number of address spaces is 1 always use index 0
>>> 2. In other cases use attr.secure for identified proper index
>>>
>>> This prevents the assertion while still providing useful debug
>>> output during VM shutdown errors.
>>
>> To me,  it's just a workaround to avoid the assertion.
>>
>> When attrs.secure is 1, it means it's in SMM mode. As you describe above,
>>
>>  > 1. In SMM mode (smm=1), the CPU must use multiple address spaces
>>  >     with a dedicated SMM address space
>>
>> So I think we need to first figure out why it gets attrs.secure as 1 
>> when there is only 1 address space.
> 
> Ah, with KVM, QEMU can only support 1 address space.

In fact, KVM does support a different address space for SMM mode. There is
KVM_CAP_MULTI_ADDRESS_SPACE to report how many address spaces are supported
by KVM.

(It turns out my memory of KVM is correct. I was misled by the QEMU code and
comments.)

QEMU allocates a separate KVM address space for SMM in
register_smram_listener(). But that address space isn't associated with the
CPU's address spaces.

I think this patch only avoids the assertion in your case, when the vCPU is
in SMM mode with KVM. But with this patch, do you get the correct SMM-mode
info dumped? I guess not, since the info comes from address space 0, not the
SMM address space.

If there is no reason why KVM's address space cannot be associated with the
CPU's address spaces, I think the right fix is to establish that association.


> Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>

Based on above, I need to withdraw my Reviewed-by.


>>>   }
>>>   static inline AddressSpace *cpu_addressspace(CPUState *cs, 
>>> MemTxAttrs attrs)
>>
> 
> 


Re: [PATCH] x86/cpu: Handle SMM mode in x86_cpu_dump_state for softmmu
Posted by Kirill Martynov 4 months, 2 weeks ago
Hi, Xiaoyao!
Hi, Zhao!
Thank you for your feedback. 
You wrote:
> QEMU allocates separate KVM address space for SMM in register_smram_listener(). But the address space doesn't associated with cpu's address space.

The address space allocated in register_smram_listener() is stored in
KVMState::KVMAs::as.

However, cpu_asidx_from_attrs() returns an index that is used to reference
CPUState::cpu_ases. These are different arrays used to store address spaces.
In the softmmu setup, the CPU initialisation function qemu_init_vcpu()
hardcodes the number of address spaces to 1:

if (!cpu->as) {
    /* If the target cpu hasn't set up any address spaces itself,
     * give it the default one.
     */
    cpu->num_ases = 1;
    cpu_address_space_init(cpu, 0, "cpu-memory", cpu->memory);
}

Do I understand your concern correctly?
The number of KVM address spaces is allocated correctly (2 address spaces),
but the QEMU CPUState only gets 1, so the correct fix would be to
associate/map the KVM address spaces with the QEMU CPUState address spaces?

> On 2 Jul 2025, at 19:24, Xiaoyao Li <xiaoyao.li@intel.com> wrote:
> 
> On 7/2/2025 11:10 PM, Xiaoyao Li wrote:
>> On 7/2/2025 10:16 PM, Xiaoyao Li wrote:
>>> On 5/23/2025 11:44 PM, Kirill Martynov wrote:
>>>> Certain error conditions can trigger x86_cpu_dump_state() to output CPU state
>>>> debug information e.g. KVM emulation failure due to misbehaving guest.
>>>> However, if the CPU is in System Management Mode (SMM) when the assertion
>>>> in cpu_asidx_from_attrs failure happens because:
>>>> 
>>>> 1. In SMM mode (smm=1), the CPU must use multiple address spaces
>>>>     with a dedicated SMM address space
>>>> 2. On machine types with softmmu, address spaces are hardcoded to 1
>>>>     (no multiple address spaces available)
>>>> 
>>>> The assertion occurs in cpu_asidx_from_attrs() when trying to
>>>> access memory in SMM mode with insufficient address spaces.
>>>> 
>>>> Fix this by:
>>>> 1. If number of address spaces is 1 always use index 0
>>>> 2. In other cases use attr.secure for identified proper index
>>>> 
>>>> This prevents the assertion while still providing useful debug
>>>> output during VM shutdown errors.
>>> 
>>> To me,  it's just a workaround to avoid the assertion.
>>> 
>>> When attrs.secure is 1, it means it's in SMM mode. As you describe above,
>>> 
>>>  > 1. In SMM mode (smm=1), the CPU must use multiple address spaces
>>>  >     with a dedicated SMM address space
>>> 
>>> So I think we need to first figure out why it gets attrs.secure as 1 when there is only 1 address space.
>> Ah, with KVM, QEMU can only support 1 address space.
> 
> In fact, KVM does support different address space for supporting SMM mode. There is KVM_CAP_MULTI_ADDRESS_SPACE to report how many address space is supported by KVM.
> 
> (It turns out my memory on KVM is correct. I was misled by QEMU code and comment)
> 
> QEMU allocates separate KVM address space for SMM in register_smram_listener(). But the address space doesn't associated with cpu's address space.
> 
> I think this patch can only avoid the assertion in you case when vcpu is in SMM mode with KVM. But with this patch, do you get the correct info of SMM mode dumped? I guess no, since the info is of address space 0, not the SMM address space.
> 
> If there is no reason of cannot associate KVM's address space with cpu's address space, I think the right fix is to enable the association with them.
> 
> 
>> Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com <mailto:xiaoyao.li@intel.com>>
> 
> Based on above, I need to withdraw my Reviewed-by.
> 
> 
>>>>   }
>>>>   static inline AddressSpace *cpu_addressspace(CPUState *cs, MemTxAttrs attrs)

Re: [PATCH] x86/cpu: Handle SMM mode in x86_cpu_dump_state for softmmu
Posted by Xiaoyao Li 4 months, 2 weeks ago
On 7/3/2025 5:25 PM, Kirill Martynov wrote:
> Hi, Xiaoyao!
> Hi, Zhao!
> Thank you for your feedback.
> You wrote:
>> QEMU allocates separate KVM address space for SMM in register_smram_listener(). But the address space doesn't associated with cpu's address space.
> 
> The address space allocated in register_sm_ram_listener() is  stored in KVMState::KVMAs::as
> 
> However, function cpu_asidx_from_attrs() returns index which is used to reference CPUState::cpu_ases
> These are different array used to store address spaces. In softmmu setup there is a function called for cpu initialisation qemu_init_vcpu() which has hardcoded number of address spaces used to 1
> 
> if (!cpu->as) {
>      /* If the target cpu hasn't set up any address spaces itself,
>       * give it the default one.
>       */
>      cpu->num_ases = 1;
>      cpu_address_space_init(cpu, 0, "cpu-memory", cpu->memory);
> }
> 
> Do I understand your concern correctly?
> The number of address spaces from KVM is allocated correctly (2 address spaces) however in QEMU CPUState is allocated only 1, so the correct fix would be to associate/map KVM allocated address spaces with
> QEMU CPUState address spaces ?

The address spaces are all allocated by QEMU. So they are not allocated from
KVM, but for KVM.

Yes, QEMU supports a separate address space for SMM mode with KVM. It's
just that QEMU doesn't connect it with the CPU address spaces.

I cooked up a draft below, which passes the "make check" test. Could you
help test whether it resolves your issue? QEMU initializes
smram_address_space later, at the machine-done notifier, so the code has to
iterate over the CPUs to add the SMRAM address space to each CPU's address
spaces. I will try to see if it is possible to make that happen earlier, so
that all the address spaces are already there when kvm_cpu_realizefn() runs.


-------8<---------
diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index a68485547d50..7d6f4a86d802 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -130,6 +130,8 @@ void cpu_address_space_init(CPUState *cpu, int asidx,
   */
  void cpu_address_space_destroy(CPUState *cpu, int asidx);

+void cpu_address_space_add(CPUState *cpu, AddressSpace *as);
+
  void cpu_physical_memory_rw(hwaddr addr, void *buf,
                              hwaddr len, bool is_write);
  static inline void cpu_physical_memory_read(hwaddr addr,
diff --git a/system/physmem.c b/system/physmem.c
index ff0ca40222d3..289c06c2af77 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -814,6 +814,31 @@ void cpu_address_space_init(CPUState *cpu, int asidx,
      }
  }

+void cpu_address_space_add(CPUState *cpu, AddressSpace *as)
+{
+    CPUAddressSpace *newas;
+    int asidx = cpu->num_ases;
+
+    cpu->num_ases++;
+
+    if (asidx == 0) {
+        /* address space 0 gets the convenience alias */
+        cpu->as = as;
+    }
+
+    if (!cpu->cpu_ases) {
+        cpu->cpu_ases = g_new0(CPUAddressSpace, cpu->num_ases);
+        cpu->cpu_ases_count = cpu->num_ases;
+    } else {
+        cpu->cpu_ases = g_renew(CPUAddressSpace, cpu->cpu_ases,
+                                                cpu->num_ases);
+    }
+
+    newas = &cpu->cpu_ases[asidx];
+    newas->cpu = cpu;
+    newas->as = as;
+}
+
  void cpu_address_space_destroy(CPUState *cpu, int asidx)
  {
      CPUAddressSpace *cpuas;
diff --git a/target/i386/kvm/kvm-cpu.c b/target/i386/kvm/kvm-cpu.c
index 16bde4de01e5..7b89326e34ca 100644
--- a/target/i386/kvm/kvm-cpu.c
+++ b/target/i386/kvm/kvm-cpu.c
@@ -12,6 +12,7 @@
  #include "host-cpu.h"
  #include "qapi/error.h"
  #include "system/system.h"
+#include "system/kvm_int.h"
  #include "hw/boards.h"

  #include "kvm_i386.h"
@@ -90,6 +91,12 @@ static bool kvm_cpu_realizefn(CPUState *cs, Error **errp)
          kvm_set_guest_phys_bits(cs);
      }

+    for (int i = 0; i < kvm_state->nr_as; i++) {
+        if (kvm_state->as[i].as) {
+            cpu_address_space_add(cs, kvm_state->as[i].as);
+        }
+    }
+
      return true;
  }

diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 234878c613f6..3ba7b26e5a74 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -2700,6 +2700,7 @@ static MemoryRegion smram_as_mem;

  static void register_smram_listener(Notifier *n, void *unused)
  {
+    CPUState *cpu;
      MemoryRegion *smram =
          (MemoryRegion *) object_resolve_path("/machine/smram", NULL);

@@ -2724,6 +2725,9 @@ static void register_smram_listener(Notifier *n, 
void *unused)
      address_space_init(&smram_address_space, &smram_as_root, "KVM-SMRAM");
      kvm_memory_listener_register(kvm_state, &smram_listener,
                                   &smram_address_space, 1, "kvm-smram");
+    CPU_FOREACH(cpu) {
+        cpu_address_space_add(cpu, &smram_address_space);
+    }
  }

  static void *kvm_msr_energy_thread(void *data)
Re: [PATCH] x86/cpu: Handle SMM mode in x86_cpu_dump_state for softmmu
Posted by Zhao Liu 4 months, 2 weeks ago
> yes, QEMU supports separate address space for SMM mode with KVM. It's just
> that QEMU doesn't connect it with the CPU address space.

Yes, you're right.

(But initially, it might have been intentional, as KVM's SMM address
space is global. It is consistent with the current KVM/memory interface.
Creating a separate CPUAddressSpace for each CPU would involve a lot of
redundant work.)

> diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
> index a68485547d50..7d6f4a86d802 100644
> --- a/include/exec/cpu-common.h
> +++ b/include/exec/cpu-common.h
> @@ -130,6 +130,8 @@ void cpu_address_space_init(CPUState *cpu, int asidx,
>   */
>  void cpu_address_space_destroy(CPUState *cpu, int asidx);
> 
> +void cpu_address_space_add(CPUState *cpu, AddressSpace *as);
> +

Instead of introducing a "redundant" interface, it's better to lift the
KVM restriction in cpu_address_space_init() and reuse it. Moreover,
not explicitly setting asidx can be risky.

diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index a68485547d50..e3c70ccb1ea0 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -106,6 +106,8 @@ size_t qemu_ram_pagesize_largest(void);
  * @asidx: integer index of this address space
  * @prefix: prefix to be used as name of address space
  * @mr: the root memory region of address space
+ * @as: the pre-created AddressSpace. If have, no need to
+ *      specify @mr.
  *
  * Add the specified address space to the CPU's cpu_ases list.
  * The address space added with @asidx 0 is the one used for the
@@ -117,10 +119,10 @@ size_t qemu_ram_pagesize_largest(void);
  * cpu->num_ases to the total number of address spaces it needs
  * to support.
  *
- * Note that with KVM only one address space is supported.
  */
 void cpu_address_space_init(CPUState *cpu, int asidx,
-                            const char *prefix, MemoryRegion *mr);
+                            const char *prefix, MemoryRegion *mr,
+                            AddressSpace *as);
 /**
  * cpu_address_space_destroy:
  * @cpu: CPU for which address space needs to be destroyed
diff --git a/system/physmem.c b/system/physmem.c
index ff0ca40222d3..15aedfb58055 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -776,16 +776,23 @@ hwaddr memory_region_section_get_iotlb(CPUState *cpu,
 #endif /* CONFIG_TCG */

 void cpu_address_space_init(CPUState *cpu, int asidx,
-                            const char *prefix, MemoryRegion *mr)
+                            const char *prefix, MemoryRegion *mr,
+                            AddressSpace *as)
 {
     CPUAddressSpace *newas;
-    AddressSpace *as = g_new0(AddressSpace, 1);
-    char *as_name;
+    AddressSpace *cpu_as;

-    assert(mr);
-    as_name = g_strdup_printf("%s-%d", prefix, cpu->cpu_index);
-    address_space_init(as, mr, as_name);
-    g_free(as_name);
+    if (!as) {
+        cpu_as = g_new0(AddressSpace, 1);
+        char *as_name;
+
+        assert(mr);
+        as_name = g_strdup_printf("%s-%d", prefix, cpu->cpu_index);
+        address_space_init(cpu_as, mr, as_name);
+        g_free(as_name);
+    } else {
+        cpu_as = as;
+    }

     /* Target code should have set num_ases before calling us */
     assert(asidx < cpu->num_ases);

     if (asidx == 0) {
         /* address space 0 gets the convenience alias */
-        cpu->as = as;
+        cpu->as = cpu_as;
     }

-    /* KVM cannot currently support multiple address spaces. */
-    assert(asidx == 0 || !kvm_enabled());
-
     if (!cpu->cpu_ases) {
         cpu->cpu_ases = g_new0(CPUAddressSpace, cpu->num_ases);
         cpu->cpu_ases_count = cpu->num_ases;
@@ -805,12 +809,12 @@ void cpu_address_space_init(CPUState *cpu, int asidx,

     newas = &cpu->cpu_ases[asidx];
     newas->cpu = cpu;
-    newas->as = as;
+    newas->as = cpu_as;
     if (tcg_enabled()) {
         newas->tcg_as_listener.log_global_after_sync = tcg_log_global_after_sync;
         newas->tcg_as_listener.commit = tcg_commit;
         newas->tcg_as_listener.name = "tcg";
-        memory_listener_register(&newas->tcg_as_listener, as);
+        memory_listener_register(&newas->tcg_as_listener, cpu_as);
     }
 }

---

In this interface, whether to reuse the existing address space (@as
argument) or create a new one for the CPU depends on the maintainer's
final opinion anyway. If the choice is to reuse it, as in the code above,
the other callers need to be adjusted. If so, I suggest Kirill handle this
in a separate patch.

>  #include "kvm_i386.h"
> @@ -90,6 +91,12 @@ static bool kvm_cpu_realizefn(CPUState *cs, Error **errp)
>          kvm_set_guest_phys_bits(cs);
>      }
> 
> +    for (int i = 0; i < kvm_state->nr_as; i++) {
> +        if (kvm_state->as[i].as) {
> +            cpu_address_space_add(cs, kvm_state->as[i].as);
> +        }
> +    }
> +

This will add smram twice, together with the cpu_address_space_add() below.

Why are all KVM address spaces unconditionally added to each CPU?

Another issue is that the SMM AS index would be "unknown"...

>      return true;
>  }
> 
> diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
> index 234878c613f6..3ba7b26e5a74 100644
> --- a/target/i386/kvm/kvm.c
> +++ b/target/i386/kvm/kvm.c
> @@ -2700,6 +2700,7 @@ static MemoryRegion smram_as_mem;
> 
>  static void register_smram_listener(Notifier *n, void *unused)
>  {
> +    CPUState *cpu;
>      MemoryRegion *smram =
>          (MemoryRegion *) object_resolve_path("/machine/smram", NULL);
> 
> @@ -2724,6 +2725,9 @@ static void register_smram_listener(Notifier *n, void
> *unused)
>      address_space_init(&smram_address_space, &smram_as_root, "KVM-SMRAM");
>      kvm_memory_listener_register(kvm_state, &smram_listener,
>                                   &smram_address_space, 1, "kvm-smram");
> +    CPU_FOREACH(cpu) {
> +        cpu_address_space_add(cpu, &smram_address_space);
> +    }
>  }

With the adjusted cpu_address_space_init(), this could be:

CPU_FOREACH(cpu) {
	/* Ensure SMM Address Space has the index 1.  */
	assert(cpu->num_ases == 1);
	cpu->num_ases = 2;
	cpu_address_space_init(cpu, 1, NULL, NULL, &smram_address_space);
}
Re: [PATCH] x86/cpu: Handle SMM mode in x86_cpu_dump_state for softmmu
Posted by Xiaoyao Li 4 months, 2 weeks ago
On 7/4/2025 4:20 PM, Zhao Liu wrote:
>> yes, QEMU supports separate address space for SMM mode with KVM. It's just
>> that QEMU doesn't connect it with the CPU address space.
> 
> Yes, you're right.
> 
> (But initially, it might have been intentional, as KVM's SMM address
> space is global. It is consistent with the current KVM/memory interface.
> Creating a separate CPUAddressSpace for each CPU would involve a lot of
> redundant work.)

I think Paolo can answer why KVM SMM support wasn't associated with the
CPU address spaces, since Paolo enabled KVM SMM support in QEMU. I
guess maybe it was just overlooked.

>> diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
>> index a68485547d50..7d6f4a86d802 100644
>> --- a/include/exec/cpu-common.h
>> +++ b/include/exec/cpu-common.h
>> @@ -130,6 +130,8 @@ void cpu_address_space_init(CPUState *cpu, int asidx,
>>    */
>>   void cpu_address_space_destroy(CPUState *cpu, int asidx);
>>
>> +void cpu_address_space_add(CPUState *cpu, AddressSpace *as);
>> +
> 
> Instead of introducing a "redundant" interfaces, it's better to lift the
> restrictions of cpu_address_space_init() on KVM and reuse it.  Moreover,
> not explicitly setting asidx can be risky.

I thought about it. I just wanted to provide a PoC implementation for
Kirill to try, so I didn't touch the existing interface on purpose.

Meanwhile, I also had the idea of just calling the existing
cpu_address_space_init() instead of adjusting it, but it would need to be
called for SMRAM as well to cover SMM. This way, it would still create a
redundant address space (two, when counting the SMRAM one) for each CPU.
But that is how it behaves today with KVM: each CPU has a separate address
space for system memory.

And I'm still investigating whether switching to reusing an existing address
space instead of creating a new one in cpu_address_space_init() would cause
compatibility problems or not. This is also the reason why I just provided a
draft PoC diff instead of a formal patch.

> diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
> index a68485547d50..e3c70ccb1ea0 100644
> --- a/include/exec/cpu-common.h
> +++ b/include/exec/cpu-common.h
> @@ -106,6 +106,8 @@ size_t qemu_ram_pagesize_largest(void);
>    * @asidx: integer index of this address space
>    * @prefix: prefix to be used as name of address space
>    * @mr: the root memory region of address space
> + * @as: the pre-created AddressSpace. If have, no need to
> + *      specify @mr.
>    *
>    * Add the specified address space to the CPU's cpu_ases list.
>    * The address space added with @asidx 0 is the one used for the
> @@ -117,10 +119,10 @@ size_t qemu_ram_pagesize_largest(void);
>    * cpu->num_ases to the total number of address spaces it needs
>    * to support.
>    *
> - * Note that with KVM only one address space is supported.
>    */
>   void cpu_address_space_init(CPUState *cpu, int asidx,
> -                            const char *prefix, MemoryRegion *mr);
> +                            const char *prefix, MemoryRegion *mr,
> +                            AddressSpace *as);
>   /**
>    * cpu_address_space_destroy:
>    * @cpu: CPU for which address space needs to be destroyed
> diff --git a/system/physmem.c b/system/physmem.c
> index ff0ca40222d3..15aedfb58055 100644
> --- a/system/physmem.c
> +++ b/system/physmem.c
> @@ -776,16 +776,23 @@ hwaddr memory_region_section_get_iotlb(CPUState *cpu,
>   #endif /* CONFIG_TCG */
> 
>   void cpu_address_space_init(CPUState *cpu, int asidx,
> -                            const char *prefix, MemoryRegion *mr)
> +                            const char *prefix, MemoryRegion *mr,
> +                            AddressSpace *as)
>   {
>       CPUAddressSpace *newas;
> -    AddressSpace *as = g_new0(AddressSpace, 1);
> -    char *as_name;
> +    AddressSpace *cpu_as;
> 
> -    assert(mr);
> -    as_name = g_strdup_printf("%s-%d", prefix, cpu->cpu_index);
> -    address_space_init(as, mr, as_name);
> -    g_free(as_name);
> +    if (!as) {
> +        cpu_as = g_new0(AddressSpace, 1);
> +        char *as_name;
> +
> +        assert(mr);
> +        as_name = g_strdup_printf("%s-%d", prefix, cpu->cpu_index);
> +        address_space_init(cpu_as, mr, as_name);
> +        g_free(as_name);
> +    } else {
> +        cpu_as = as;
> +    }
> 
>       /* Target code should have set num_ases before calling us */
>       assert(asidx < cpu->num_ases);
> 
>       if (asidx == 0) {
>           /* address space 0 gets the convenience alias */
> -        cpu->as = as;
> +        cpu->as = cpu_as;
>       }
> 
> -    /* KVM cannot currently support multiple address spaces. */
> -    assert(asidx == 0 || !kvm_enabled());
> -
>       if (!cpu->cpu_ases) {
>           cpu->cpu_ases = g_new0(CPUAddressSpace, cpu->num_ases);
>           cpu->cpu_ases_count = cpu->num_ases;
> @@ -805,12 +809,12 @@ void cpu_address_space_init(CPUState *cpu, int asidx,
> 
>       newas = &cpu->cpu_ases[asidx];
>       newas->cpu = cpu;
> -    newas->as = as;
> +    newas->as = cpu_as;
>       if (tcg_enabled()) {
>           newas->tcg_as_listener.log_global_after_sync = tcg_log_global_after_sync;
>           newas->tcg_as_listener.commit = tcg_commit;
>           newas->tcg_as_listener.name = "tcg";
> -        memory_listener_register(&newas->tcg_as_listener, as);
> +        memory_listener_register(&newas->tcg_as_listener, cpu_as);
>       }
>   }
> 
> ---
> 
> In this interface, whether to reuse the existing address space (@as
> argument) or create a new one for the CPU depends on the maintainer's
> final opinion anyway. If the choice is to reuse, as in the code above,
> need to adjust other calling cases. If so, I suggest Kirill handle this
> in a separate patch.
> 
>>   #include "kvm_i386.h"
>> @@ -90,6 +91,12 @@ static bool kvm_cpu_realizefn(CPUState *cs, Error **errp)
>>           kvm_set_guest_phys_bits(cs);
>>       }
>>
>> +    for (int i = 0; i < kvm_state->nr_as; i++) {
>> +        if (kvm_state->as[i].as) {
>> +            cpu_address_space_add(cs, kvm_state->as[i].as);
>> +        }
>> +    }
>> +
> 
> This will add smram twice, with the following cpu_address_space_add().

No, SMRAM is initialized at the machine-init-done notifier, which runs after this.

> Why are all KVM as unconditionally added to each CPU?
> 
> Another issue is the SMM AS index would be "unknown"...
> 
>>       return true;
>>   }
>>
>> diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
>> index 234878c613f6..3ba7b26e5a74 100644
>> --- a/target/i386/kvm/kvm.c
>> +++ b/target/i386/kvm/kvm.c
>> @@ -2700,6 +2700,7 @@ static MemoryRegion smram_as_mem;
>>
>>   static void register_smram_listener(Notifier *n, void *unused)
>>   {
>> +    CPUState *cpu;
>>       MemoryRegion *smram =
>>           (MemoryRegion *) object_resolve_path("/machine/smram", NULL);
>>
>> @@ -2724,6 +2725,9 @@ static void register_smram_listener(Notifier *n, void
>> *unused)
>>       address_space_init(&smram_address_space, &smram_as_root, "KVM-SMRAM");
>>       kvm_memory_listener_register(kvm_state, &smram_listener,
>>                                    &smram_address_space, 1, "kvm-smram");
>> +    CPU_FOREACH(cpu) {
>> +        cpu_address_space_add(cpu, &smram_address_space);
>> +    }
>>   }
> 
> With the cpu_address_space_init(), here could be:
> 
> CPU_FOREACH(cpu) {
> 	/* Ensure SMM Address Space has the index 1.  */
> 	assert(cpu->num_ases == 1);
> 	cpu->num_ases = 2;
> 	cpu_address_space_init(cpu, 1, NULL, NULL, &smram_address_space);
> }
> 
> 
>
Re: [PATCH] x86/cpu: Handle SMM mode in x86_cpu_dump_state for softmmu
Posted by Kirill Martynov 3 months, 2 weeks ago
Hi Xiaoyao!
Hi Zhao!

Xiaoyao,
I tested the patch you provided; it works smoothly and is easy to apply. Nothing to complain about.

Zhao, 
I also tried your approach (extending cpu_address_space_init() with an AddressSpace parameter).
First, it crashed in malloc with the error:
malloc(): unaligned tcache chunk detected
After a little investigation I resized the cpu->cpu_ases array so it can fit a second element, and
it started working. However, it looks like cpu_address_space_destroy() needs
some adjustment: it currently treats cpu->cpu_ases elements as dynamically allocated and
destroys them with g_free(), so passing &smram_address_space to cpu_address_space_init()
in register_smram_listener() could lead to a problem since it is statically allocated in the binary.
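
(For illustration only, a rough sketch of the teardown path I mean, assuming
the current behaviour that cpu_address_space_init() heap-allocates the
AddressSpace it hands to the CPU; the real code may free via RCU:

void cpu_address_space_destroy(CPUState *cpu, int asidx)
{
    CPUAddressSpace *cpuas = &cpu->cpu_ases[asidx];

    address_space_destroy(cpuas->as);
    g_free(cpuas->as);   /* invalid if cpuas->as points at the static
                            smram_address_space */
    /* ... bookkeeping for cpu->cpu_ases elided ... */
}

so reusing a statically allocated AddressSpace would need either a flag or a
separate teardown path.)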

So, my question now is: what should I do?
Should I send Xiaoyao's patch as a new separate patch for review?
Or continue polishing Zhao's approach?
Any comments from Paolo?

> On 4 Jul 2025, at 16:50, Xiaoyao Li <xiaoyao.li@intel.com> wrote:
> 
> On 7/4/2025 4:20 PM, Zhao Liu wrote:
>>> yes, QEMU supports separate address space for SMM mode with KVM. It's just
>>> that QEMU doesn't connect it with the CPU address space.
>> Yes, you're right.
>> (But initially, it might have been intentional, as KVM's SMM address
>> space is global. It is consistent with the current KVM/memory interface.
>> Creating a separate CPUAddressSpace for each CPU would involve a lot of
>> redundant work.)
> 
> I think Paolo can answer why didn't associate SMM support with KVM with CPU addresspace, since Paolo enabled the KVM smm support in QEMU. I guess maybe it's just overlooked.
> 
>>> diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
>>> index a68485547d50..7d6f4a86d802 100644
>>> --- a/include/exec/cpu-common.h
>>> +++ b/include/exec/cpu-common.h
>>> @@ -130,6 +130,8 @@ void cpu_address_space_init(CPUState *cpu, int asidx,
>>>   */
>>>  void cpu_address_space_destroy(CPUState *cpu, int asidx);
>>> 
>>> +void cpu_address_space_add(CPUState *cpu, AddressSpace *as);
>>> +
>> Instead of introducing a "redundant" interfaces, it's better to lift the
>> restrictions of cpu_address_space_init() on KVM and reuse it.  Moreover,
>> not explicitly setting asidx can be risky.
> 
> I thought about it. I just wanted to provide a poc implementation for Kirill to try, so I didn't touch the existing interface by purpose.
> 
> Meanwhile, I also had the idea of just calling existing cpu_address_space_init() instead of adjusting it, but it needs to be called for SMRAM as well to cover SMM. This way, it would still create a (when counting the smram, then two) redundant address space for each CPU. But it is how it behaves today that with KVM, each CPU has a separate address space for system memory.
> 
> And I'm still investigating if switching to reuse the existing address space instead of creating a new one in cpu_address_space_init() would cause incompatible problem or not. And this is also the reason why I just provided a draft POC diff instead of formal patch.
> 
>> diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
>> index a68485547d50..e3c70ccb1ea0 100644
>> --- a/include/exec/cpu-common.h
>> +++ b/include/exec/cpu-common.h
>> @@ -106,6 +106,8 @@ size_t qemu_ram_pagesize_largest(void);
>>   * @asidx: integer index of this address space
>>   * @prefix: prefix to be used as name of address space
>>   * @mr: the root memory region of address space
>> + * @as: the pre-created AddressSpace. If have, no need to
>> + *      specify @mr.
>>   *
>>   * Add the specified address space to the CPU's cpu_ases list.
>>   * The address space added with @asidx 0 is the one used for the
>> @@ -117,10 +119,10 @@ size_t qemu_ram_pagesize_largest(void);
>>   * cpu->num_ases to the total number of address spaces it needs
>>   * to support.
>>   *
>> - * Note that with KVM only one address space is supported.
>>   */
>>  void cpu_address_space_init(CPUState *cpu, int asidx,
>> -                            const char *prefix, MemoryRegion *mr);
>> +                            const char *prefix, MemoryRegion *mr,
>> +                            AddressSpace *as);
>>  /**
>>   * cpu_address_space_destroy:
>>   * @cpu: CPU for which address space needs to be destroyed
>> diff --git a/system/physmem.c b/system/physmem.c
>> index ff0ca40222d3..15aedfb58055 100644
>> --- a/system/physmem.c
>> +++ b/system/physmem.c
>> @@ -776,16 +776,23 @@ hwaddr memory_region_section_get_iotlb(CPUState *cpu,
>>  #endif /* CONFIG_TCG */
>>  void cpu_address_space_init(CPUState *cpu, int asidx,
>> -                            const char *prefix, MemoryRegion *mr)
>> +                            const char *prefix, MemoryRegion *mr,
>> +                            AddressSpace *as)
>>  {
>>      CPUAddressSpace *newas;
>> -    AddressSpace *as = g_new0(AddressSpace, 1);
>> -    char *as_name;
>> +    AddressSpace *cpu_as;
>> -    assert(mr);
>> -    as_name = g_strdup_printf("%s-%d", prefix, cpu->cpu_index);
>> -    address_space_init(as, mr, as_name);
>> -    g_free(as_name);
>> +    if (!as) {
>> +        cpu_as = g_new0(AddressSpace, 1);
>> +        char *as_name;
>> +
>> +        assert(mr);
>> +        as_name = g_strdup_printf("%s-%d", prefix, cpu->cpu_index);
>> +        address_space_init(cpu_as, mr, as_name);
>> +        g_free(as_name);
>> +    } else {
>> +        cpu_as = as;
>> +    }
>>      /* Target code should have set num_ases before calling us */
>>      assert(asidx < cpu->num_ases);
>>      if (asidx == 0) {
>>          /* address space 0 gets the convenience alias */
>> -        cpu->as = as;
>> +        cpu->as = cpu_as;
>>      }
>> -    /* KVM cannot currently support multiple address spaces. */
>> -    assert(asidx == 0 || !kvm_enabled());
>> -
>>      if (!cpu->cpu_ases) {
>>          cpu->cpu_ases = g_new0(CPUAddressSpace, cpu->num_ases);
>>          cpu->cpu_ases_count = cpu->num_ases;
>> @@ -805,12 +809,12 @@ void cpu_address_space_init(CPUState *cpu, int asidx,
>>      newas = &cpu->cpu_ases[asidx];
>>      newas->cpu = cpu;
>> -    newas->as = as;
>> +    newas->as = cpu_as;
>>      if (tcg_enabled()) {
>>          newas->tcg_as_listener.log_global_after_sync = tcg_log_global_after_sync;
>>          newas->tcg_as_listener.commit = tcg_commit;
>>          newas->tcg_as_listener.name = "tcg";
>> -        memory_listener_register(&newas->tcg_as_listener, as);
>> +        memory_listener_register(&newas->tcg_as_listener, cpu_as);
>>      }
>>  }
>> ---
>> In this interface, whether to reuse the existing address space (@as
>> argument) or create a new one for the CPU depends on the maintainer's
>> final opinion anyway. If the choice is to reuse, as in the code above,
>> need to adjust other calling cases. If so, I suggest Kirill handle this
>> in a separate patch.
>>>  #include "kvm_i386.h"
>>> @@ -90,6 +91,12 @@ static bool kvm_cpu_realizefn(CPUState *cs, Error **errp)
>>>          kvm_set_guest_phys_bits(cs);
>>>      }
>>> 
>>> +    for (int i = 0; i < kvm_state->nr_as; i++) {
>>> +        if (kvm_state->as[i].as) {
>>> +            cpu_address_space_add(cs, kvm_state->as[i].as);
>>> +        }
>>> +    }
>>> +
>> This will add smram twice, with the following cpu_address_space_add().
> 
> No, SMRAM is initialize at machine init done notifier, which is after this.
> 
>> Why are all KVM as unconditionally added to each CPU?
>> Another issue is the SMM AS index would be "unknown"...
>>>      return true;
>>>  }
>>> 
>>> diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
>>> index 234878c613f6..3ba7b26e5a74 100644
>>> --- a/target/i386/kvm/kvm.c
>>> +++ b/target/i386/kvm/kvm.c
>>> @@ -2700,6 +2700,7 @@ static MemoryRegion smram_as_mem;
>>> 
>>>  static void register_smram_listener(Notifier *n, void *unused)
>>>  {
>>> +    CPUState *cpu;
>>>      MemoryRegion *smram =
>>>          (MemoryRegion *) object_resolve_path("/machine/smram", NULL);
>>> 
>>> @@ -2724,6 +2725,9 @@ static void register_smram_listener(Notifier *n, void
>>> *unused)
>>>      address_space_init(&smram_address_space, &smram_as_root, "KVM-SMRAM");
>>>      kvm_memory_listener_register(kvm_state, &smram_listener,
>>>                                   &smram_address_space, 1, "kvm-smram");
>>> +    CPU_FOREACH(cpu) {
>>> +        cpu_address_space_add(cpu, &smram_address_space);
>>> +    }
>>>  }
>> With the cpu_address_space_init(), here could be:
>> CPU_FOREACH(cpu) {
>> 	/* Ensure SMM Address Space has the index 1.  */
>> 	assert(cpu->num_ases == 1);
>> 	cpu->num_ases = 2;
>> 	cpu_address_space_init(cpu, 1, NULL, NULL, &smram_address_space);
>> }

Re: [PATCH] x86/cpu: Handle SMM mode in x86_cpu_dump_state for softmmu
Posted by Zhao Liu 3 months, 2 weeks ago
Hi Kirill,

On Mon, Jul 28, 2025 at 05:44:25PM +0300, Kirill Martynov wrote:
> Date: Mon, 28 Jul 2025 17:44:25 +0300
> From: Kirill Martynov <stdcalllevi@yandex-team.ru>
> Subject: Re: [PATCH] x86/cpu: Handle SMM mode in x86_cpu_dump_state for
>  softmmu
> X-Mailer: Apple Mail (2.3826.600.51.1.1)
> 
> Hi Xiaoyao!
> Hi Zhao!
> 
> Xiaoyao,
> I tested the patch you provided, it works smoothly, easy to apply. Nothing to complain about.
> 
> Zhao, 
> I also tried your approach (extend cpu_address_space_init with AddressSpace parameter)
> First, it crashed in malloc with error:
> malloc(): unaligned tcache chunk detected
> After a little investigation I resized cpu->cpu_ases array, so it can fit second element and
> it started working. However, it looks like that function cpu_address_space_destroy needs
> some adjustment, because now it treats cpu->cpu_ases elements as dynamically allocated and
> destroys them with g_free() and passing &smram_address_space to cpu_address_space_init()
> in register_smram_listener() could lead to a problem since it is statically allocated in binary.

Thanks for testing. Yes, resize-related details are needed, which I
missed. These 2 patches are essentially about adding an SMM CPU
address space for KVM, like TCG does.

> So, my question now, what should I do?

I still believe we should update cpu_address_space_init() and remove its
outdated assumptions about KVM first.

Moreover, users should have control over the added address spaces (I
think this is why num_ases should be set before
cpu_address_space_init()), and quietly updating num_ases is not a good
idea.

The question of whether to reuse smram_address_space for the CPU is
flexible. At least TCG doesn't reuse the same SMM space, and there's
already cpu_as_root (and cpu_as_mem!) in X86CPU. There are also some
cleanup items worth considering, such as how to better handle the TCG
memory listener in cpu_address_space_init() - KVM also has similar
logic. If possible, I can help you further refine this fix and clean up
other related stuff in one go as well.

Thanks,
Zhao
Re: [PATCH] x86/cpu: Handle SMM mode in x86_cpu_dump_state for softmmu
Posted by Xiaoyao Li 3 months, 2 weeks ago
On 7/29/2025 12:19 AM, Zhao Liu wrote:
> Hi Kirill,
> 
> On Mon, Jul 28, 2025 at 05:44:25PM +0300, Kirill Martynov wrote:
>> Date: Mon, 28 Jul 2025 17:44:25 +0300
>> From: Kirill Martynov <stdcalllevi@yandex-team.ru>
>> Subject: Re: [PATCH] x86/cpu: Handle SMM mode in x86_cpu_dump_state for
>>   softmmu
>> X-Mailer: Apple Mail (2.3826.600.51.1.1)
>>
>> Hi Xiaoyao!
>> Hi Zhao!
>>
>> Xiaoyao,
>> I tested the patch you provided, it works smoothly, easy to apply. Nothing to complain about.
>>
>> Zhao,
>> I also tried your approach (extend cpu_address_space_init with AddressSpace parameter)
>> First, it crashed in malloc with error:
>> malloc(): unaligned tcache chunk detected
>> After a little investigation I resized cpu->cpu_ases array, so it can fit second element and
>> it started working. However, it looks like that function cpu_address_space_destroy needs
>> some adjustment, because now it treats cpu->cpu_ases elements as dynamically allocated and
>> destroys them with g_free() and passing &smram_address_space to cpu_address_space_init()
>> in register_smram_listener() could lead to a problem since it is statically allocated in binary.
> 
> Thanks for testing. Yes, resize related details are needed, which were
> I missed. These 2 patches essentially are all about adding SMM CPU
> address space for KVM, like TCG did.
> 
>> So, my question now, what should I do?

I just sent the formal version [*]; could you please help verify whether it
resolves your problem?

(If you can share the steps to reproduce the original problem, I can
test it myself)

[*] 
https://lore.kernel.org/all/20250729054023.1668443-2-xiaoyao.li@intel.com/

> I still believe we should update cpu_address_space_init() and remove its
> outdated assumptions about KVM first.
> 
> Moreover, users should have control over the added address spaces (I
> think this is why num_ases should be set before
> cpu_address_space_init()), and quietly updating num_ases is not a good
> idea.
> 
> The question of whether to reuse smram_address_space for the CPU is
> flexible. At least TCG doesn't reuse the same SMM space, and there's
> already cpu_as_root (and cpu_as_mem!) in X86CPU. 

For i386 TCG, it allocates 3 MemoryRegions per CPU: cpu_as_root,
cpu_as_mem and smram for SMM. For i386 KVM, it allocates global
MemoryRegions, smram_as_root and smram_as_mem, and gets smram by
resolving "/machine/smram".

Yeah, this seems like something we could clean up if there is no specific
reason for TCG to have different MemoryRegions per CPU. I don't have the
bandwidth to investigate it further.
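
(Roughly, the per-CPU TCG setup looks like the sketch below - paraphrased
from target/i386/cpu.c from memory, so the exact sizes and priorities are
best checked in the tree:

/* At machine-done time, overlay the chipset's SMRAM region on top of the
 * per-CPU cpu_as_root so it only shows through in the SMM view. */
cpu->smram = g_new(MemoryRegion, 1);
memory_region_init_alias(cpu->smram, OBJECT(cpu), "smram", smram, 0, 4 * GiB);
memory_region_set_enabled(cpu->smram, true);
memory_region_add_subregion_overlap(cpu->cpu_as_root, 0, cpu->smram, 1);

whereas the KVM path builds one global smram_as_root/smram_as_mem pair in
register_smram_listener() and never attaches it to any CPU.)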

> There are also some
> cleanup things worth considering, such as how to better handle the TCG
> memory listener in cpu_address_space_init() - KVM also has the similar
> logic. If possible, I can help you further refine this fix and clean up
> other related stuff in one goes as well.
> 
> Thanks,
> Zhao
>
Re: [PATCH] x86/cpu: Handle SMM mode in x86_cpu_dump_state for softmmu
Posted by Kirill Martynov 3 months, 2 weeks ago
Hi Xiaoyao,
Sure, I can share how I reproduce this issue.
1. First, I modified hmp_info_registers:
diff --git a/monitor/hmp-cmds-target.c b/monitor/hmp-cmds-target.c
index 8eaf70d9c9..a4bb3d715b 100644
--- a/monitor/hmp-cmds-target.c
+++ b/monitor/hmp-cmds-target.c
@@ -102,7 +102,7 @@ void hmp_info_registers(Monitor *mon, const QDict *qdict)
     if (all_cpus) {
         CPU_FOREACH(cs) {
             monitor_printf(mon, "\nCPU#%d\n", cs->cpu_index);
-            cpu_dump_state(cs, NULL, CPU_DUMP_FPU);
+            cpu_dump_state(cs, NULL, CPU_DUMP_CODE);
         }
     } else {
         cs = vcpu >= 0 ? qemu_get_cpu(vcpu) : mon_get_cpu(mon);
@@ -117,7 +117,7 @@ void hmp_info_registers(Monitor *mon, const QDict *qdict)
         }
 
         monitor_printf(mon, "\nCPU#%d\n", cs->cpu_index);
-        cpu_dump_state(cs, NULL, CPU_DUMP_FPU);
+        cpu_dump_state(cs, NULL, CPU_DUMP_CODE);
     }
 }

2. Run this in cmd line:
# yes "info registers" | sudo ./qemu-system-x86_64 -accel kvm -monitor stdio -global driver=cfi.pflash01,property=secure,value=on -blockdev "{'driver': 'file', 'filename': '/usr/share/OVMF/OVMF_CODE_4M.secboot.fd', 'node-name': 'ovmf-code', 'read-only': true}" -blockdev "{'driver': 'file', 'filename': '/usr/share/OVMF/OVMF_VARS_4M.fd', 'node-name': 'ovmf-vars', 'read-only': true}" -machine q35,smm=on,pflash0=ovmf-code,pflash1=ovmf-vars -m 2G -nodefaults


The assert should reproduce within 10-15 seconds.
Not sure if it is an important detail or not, but I run this QEMU command inside a QEMU-based virtual machine with nested virtualization enabled.


> On 29 Jul 2025, at 09:01, Xiaoyao Li <xiaoyao.li@intel.com> wrote:
> 
> On 7/29/2025 12:19 AM, Zhao Liu wrote:
>> Hi Kirill,
>> On Mon, Jul 28, 2025 at 05:44:25PM +0300, Kirill Martynov wrote:
>>> Date: Mon, 28 Jul 2025 17:44:25 +0300
>>> From: Kirill Martynov <stdcalllevi@yandex-team.ru>
>>> Subject: Re: [PATCH] x86/cpu: Handle SMM mode in x86_cpu_dump_state for
>>>  softmmu
>>> X-Mailer: Apple Mail (2.3826.600.51.1.1)
>>> 
>>> Hi Xiaoyao!
>>> Hi Zhao!
>>> 
>>> Xiaoyao,
>>> I tested the patch you provided, it works smoothly, easy to apply. Nothing to complain about.
>>> 
>>> Zhao,
>>> I also tried your approach (extend cpu_address_space_init with AddressSpace parameter)
>>> First, it crashed in malloc with error:
>>> malloc(): unaligned tcache chunk detected
>>> After a little investigation I resized cpu->cpu_ases array, so it can fit second element and
>>> it started working. However, it looks like that function cpu_address_space_destroy needs
>>> some adjustment, because now it treats cpu->cpu_ases elements as dynamically allocated and
>>> destroys them with g_free() and passing &smram_address_space to cpu_address_space_init()
>>> in register_smram_listener() could lead to a problem since it is statically allocated in binary.
>> Thanks for testing. Yes, resize related details are needed, which were
>> I missed. These 2 patches essentially are all about adding SMM CPU
>> address space for KVM, like TCG did.
>>> So, my question now, what should I do?
> 
> I just sent the formal version [*], could you please help verify if it resolve your problem?
> 
> (If you can share the step how to reproduce the original problem, I can test myself)
> 
> [*] https://lore.kernel.org/all/20250729054023.1668443-2-xiaoyao.li@intel.com/
> 
>> I still believe we should update cpu_address_space_init() and remove its
>> outdated assumptions about KVM first.
>> Moreover, users should have control over the added address spaces (I
>> think this is why num_ases should be set before
>> cpu_address_space_init()), and quietly updating num_ases is not a good
>> idea.
>> The question of whether to reuse smram_address_space for the CPU is
>> flexible. At least TCG doesn't reuse the same SMM space, and there's
>> already cpu_as_root (and cpu_as_mem!) in X86CPU. 
> 
> For i386 tcg, it allocates each CPU 3 MemoryRegions: cpu_as_root, cpu_as_mem and smram for SMM. While for i386 kvm, it allocates global MemoryRegions: smram_as_root and smram_as_mem and get smram from resolving "/machine/smram".
> 
> yeah, this seems something we can cleanup if there were not specific reason for TCG to have different MemoryRegion each CPU. I don't have bandwidth to investigate it further.
> 
>> There are also some
>> cleanup things worth considering, such as how to better handle the TCG
>> memory listener in cpu_address_space_init() - KVM also has the similar
>> logic. If possible, I can help you further refine this fix and clean up
>> other related stuff in one goes as well.
>> Thanks,
>> Zhao

Re: [PATCH] x86/cpu: Handle SMM mode in x86_cpu_dump_state for softmmu
Posted by Zhao Liu 4 months, 2 weeks ago
On Fri, May 23, 2025 at 03:44:31PM +0000, Kirill Martynov wrote:
> Date: Fri, 23 May 2025 15:44:31 +0000
> From: Kirill Martynov <stdcalllevi@yandex-team.ru>
> Subject: [PATCH] x86/cpu: Handle SMM mode in x86_cpu_dump_state for softmmu
> X-Mailer: git-send-email 2.43.0
> 
> Certain error conditions can trigger x86_cpu_dump_state() to output CPU state
> debug information e.g. KVM emulation failure due to misbehaving guest.
> However, if the CPU is in System Management Mode (SMM) when the assertion
> in cpu_asidx_from_attrs failure happens because:
> 
> 1. In SMM mode (smm=1), the CPU must use multiple address spaces
>    with a dedicated SMM address space
> 2. On machine types with softmmu, address spaces are hardcoded to 1
>    (no multiple address spaces available)
> 
> The assertion occurs in cpu_asidx_from_attrs() when trying to
> access memory in SMM mode with insufficient address spaces.
> 
> Fix this by:
> 1. If number of address spaces is 1 always use index 0
> 2. In other cases use attr.secure for identified proper index
> 
> This prevents the assertion while still providing useful debug
> output during VM shutdown errors.
> 
> Stack trace of the original issue:
> #0  ... in raise () from /lib/x86_64-linux-gnu/libc.so.6
> #1  ... in abort () from /lib/x86_64-linux-gnu/libc.so.6
> #2  ... in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> #3  ... in __assert_fail () from /lib/x86_64-linux-gnu/libc.so.6
> #4  ... in cpu_asidx_from_attrs (cpu=cpu@entry=0x5578ca2eb340, attrs=...)
>    at ../hw/core/cpu-sysemu.c:76
> #5  ... in cpu_memory_rw_debug (cpu=cpu@entry=0x5578ca2eb340,
>    addr=addr@entry=2147258348, ptr=ptr@entry=0x7f5341ca373c, len=len@entry=1,
>     is_write=is_write@entry=false) at ../softmmu/physmem.c:3529
> #6  ... in x86_cpu_dump_state (cs=0x5578ca2eb340,
>    f=0x7f53434065c0 <_IO_2_1_stderr_>, flags=<optimized out>)
>    at ../target/i386/cpu-dump.c:560
> #7  ... in kvm_cpu_exec (cpu=cpu@entry=0x5578ca2eb340)
>    at ../accel/kvm/kvm-all.c:3000
> #8  ... in kvm_vcpu_thread_fn (arg=arg@entry=0x5578ca2eb340)
>    at ../accel/kvm/kvm-accel-ops.c:51
> #9  ... in qemu_thread_start (args=<optimized out>)
>    at ../util/qemu-thread-posix.c:505
> #10 ... in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
> #11 ... in clone () from /lib/x86_64-linux-gnu/libc.so.6
> 
> Signed-off-by: Kirill Martynov <stdcalllevi@yandex-team.ru>
> ---
>  target/i386/cpu.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Sorry for the delay. This fix looks good to me,

Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Re: [PATCH] x86/cpu: Handle SMM mode in x86_cpu_dump_state for softmmu
Posted by Kirill Martynov 5 months ago
Hello there, would you be so kind as to give some feedback on this patch?

> On 23 May 2025, at 18:44, Kirill Martynov <stdcalllevi@yandex-team.ru> wrote:
> 
> Certain error conditions can trigger x86_cpu_dump_state() to output CPU state
> debug information e.g. KVM emulation failure due to misbehaving guest.
> However, if the CPU is in System Management Mode (SMM) when the assertion
> in cpu_asidx_from_attrs failure happens because:
> 
> 1. In SMM mode (smm=1), the CPU must use multiple address spaces
>   with a dedicated SMM address space
> 2. On machine types with softmmu, address spaces are hardcoded to 1
>   (no multiple address spaces available)
> 
> The assertion occurs in cpu_asidx_from_attrs() when trying to
> access memory in SMM mode with insufficient address spaces.
> 
> Fix this by:
> 1. If number of address spaces is 1 always use index 0
> 2. In other cases use attr.secure for identified proper index
> 
> This prevents the assertion while still providing useful debug
> output during VM shutdown errors.
> 
> Stack trace of the original issue:
> #0  ... in raise () from /lib/x86_64-linux-gnu/libc.so.6
> #1  ... in abort () from /lib/x86_64-linux-gnu/libc.so.6
> #2  ... in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> #3  ... in __assert_fail () from /lib/x86_64-linux-gnu/libc.so.6
> #4  ... in cpu_asidx_from_attrs (cpu=cpu@entry=0x5578ca2eb340, attrs=...)
>   at ../hw/core/cpu-sysemu.c:76
> #5  ... in cpu_memory_rw_debug (cpu=cpu@entry=0x5578ca2eb340,
>   addr=addr@entry=2147258348, ptr=ptr@entry=0x7f5341ca373c, len=len@entry=1,
>    is_write=is_write@entry=false) at ../softmmu/physmem.c:3529
> #6  ... in x86_cpu_dump_state (cs=0x5578ca2eb340,
>   f=0x7f53434065c0 <_IO_2_1_stderr_>, flags=<optimized out>)
>   at ../target/i386/cpu-dump.c:560
> #7  ... in kvm_cpu_exec (cpu=cpu@entry=0x5578ca2eb340)
>   at ../accel/kvm/kvm-all.c:3000
> #8  ... in kvm_vcpu_thread_fn (arg=arg@entry=0x5578ca2eb340)
>   at ../accel/kvm/kvm-accel-ops.c:51
> #9  ... in qemu_thread_start (args=<optimized out>)
>   at ../util/qemu-thread-posix.c:505
> #10 ... in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
> #11 ... in clone () from /lib/x86_64-linux-gnu/libc.so.6
> 
> Signed-off-by: Kirill Martynov <stdcalllevi@yandex-team.ru>
> ---
> target/i386/cpu.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/target/i386/cpu.h b/target/i386/cpu.h
> index c51e0a43d0..2616a61c87 100644
> --- a/target/i386/cpu.h
> +++ b/target/i386/cpu.h
> @@ -2507,7 +2507,7 @@ void cpu_sync_avx_hflag(CPUX86State *env);
> #ifndef CONFIG_USER_ONLY
> static inline int x86_asidx_from_attrs(CPUState *cs, MemTxAttrs attrs)
> {
> -    return !!attrs.secure;
> +    return cs->num_ases == 1 ? 0 : (!!attrs.secure);
> }
> 
> static inline AddressSpace *cpu_addressspace(CPUState *cs, MemTxAttrs attrs)
> -- 
> 2.43.0
>