From: Paolo Bonzini <pbonzini@redhat.com>
The address translation logic in get_physical_address() currently
truncates physical addresses to 32 bits unless long mode is enabled.
This is incorrect when using physical address extensions (PAE) outside
of long mode, with the result that a 32-bit operating system using PAE
to access memory above 4G will experience undefined behaviour.
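
As an illustration (a standalone sketch with a made-up address, not
part of this patch), a PAE page-table walk can yield a physical address
above 4G, which a blind 32-bit cast silently maps onto a different
page:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* hypothetical physical address produced by a PAE page-table walk */
        uint64_t paddr = 0x123456000ULL;       /* above the 4G boundary */
        uint32_t truncated = (uint32_t)paddr;  /* 0x23456000: a different page */
        printf("0x%" PRIx64 " -> 0x%" PRIx32 "\n", paddr, truncated);
        return 0;
    }
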
The truncation code was originally introduced in commit 33dfdb5 ("x86:
only allow real mode to access 32bit without LMA"), where it applied
only to translations performed while paging was disabled (and so could
not affect guests using PAE).

Commit 9828198 ("target/i386: Add MMU_PHYS_IDX and MMU_NESTED_IDX")
rearranged the code such that the truncation also applied to the use
of MMU_PHYS_IDX and MMU_NESTED_IDX. Commit 4a1e9d4 ("target/i386: Use
atomic operations for pte updates") brought this truncation into scope
for page table entry accesses, and is the first commit for which a
Windows 10 32-bit guest will reliably fail to boot if memory above 4G
is present.

The truncation code, however, is not completely redundant. Even though
the maximum address size for any executed instruction is 32 bits,
helpers for operations such as BOUND, FSAVE or XSAVE may ask
get_physical_address() to translate an address outside of the 32-bit
range, if invoked with an argument that is close to the 4G boundary.
Likewise for processor accesses, for example TSS or IDT accesses, when
EFER.LMA==0.
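
For instance (a standalone sketch with hypothetical values, not part of
this patch), a 32-bit operand address just below 4G must wrap within
the 32-bit linear address space rather than spill past it, which is
what the retained truncation provides for the 32-bit MMU indexes:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* hypothetical FSAVE-style operand just below the 4G boundary */
        uint32_t base = 0xffffffd0u;
        uint64_t widened = (uint64_t)base + 0x40;  /* 0x100000010 without truncation */
        uint32_t wrapped = (uint32_t)widened;      /* 0x10: wraps within 32 bits */
        printf("0x%" PRIx64 " wraps to 0x%" PRIx32 "\n", widened, wrapped);
        return 0;
    }
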
So, move the address truncation in get_physical_address() so that it
applies to 32-bit MMU indexes, but not to MMU_PHYS_IDX and MMU_NESTED_IDX.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2040
Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Cc: qemu-stable@nongnu.org
Co-developed-by: Michael Brown <mcb30@ipxe.org>
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit b1661801c184119a10ad6cbc3b80330fc22e7b2c)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(Mjt: drop unrelated change in target/i386/cpu.c)
---
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index f67cee477a..7be047ce33 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -2195,6 +2195,12 @@ static inline int cpu_mmu_index(CPUX86State *env, bool ifetch)
         ? MMU_KNOSMAP_IDX : MMU_KSMAP_IDX;
 }
 
+static inline bool is_mmu_index_32(int mmu_index)
+{
+    assert(mmu_index < MMU_PHYS_IDX);
+    return mmu_index & 1;
+}
+
 static inline int cpu_mmu_index_kernel(CPUX86State *env)
 {
     return !(env->hflags & HF_SMAP_MASK) ? MMU_KNOSMAP_IDX :
diff --git a/target/i386/tcg/sysemu/excp_helper.c b/target/i386/tcg/sysemu/excp_helper.c
index c253226b3c..389a0d6912 100644
--- a/target/i386/tcg/sysemu/excp_helper.c
+++ b/target/i386/tcg/sysemu/excp_helper.c
@@ -556,6 +556,10 @@ static bool get_physical_address(CPUX86State *env, vaddr addr,
         break;
 
     default:
+        if (is_mmu_index_32(mmu_idx)) {
+            addr = (uint32_t)addr;
+        }
+
         if (likely(env->cr[0] & CR0_PG_MASK)) {
             in.cr3 = env->cr[3];
             in.mmu_idx = mmu_idx;
@@ -579,14 +583,8 @@ static bool get_physical_address(CPUX86State *env, vaddr addr,
         break;
     }
 
-    /* Translation disabled. */
+    /* No translation needed. */
     out->paddr = addr & x86_get_a20_mask(env);
-#ifdef TARGET_X86_64
-    if (!(env->hflags & HF_LMA_MASK)) {
-        /* Without long mode we can only address 32bits in real mode */
-        out->paddr = (uint32_t)out->paddr;
-    }
-#endif
     out->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
     out->page_size = TARGET_PAGE_SIZE;
     return true;
--
2.39.2