From: Jan Kiszka
To: qemu-devel, Paolo Bonzini, Richard Henderson, Eduardo Habkost
Cc: Valentine Sinitsyn
Date: Tue, 3 Apr 2018 17:36:14 +0200
Subject: [Qemu-devel] [PATCH v2 4/4] target-i386: Add NPT support
From: Jan Kiszka

This implements NPT support for SVM by hooking into
x86_cpu_handle_mmu_fault, where the stage-1 page table is read. Whether
this second-stage translation needs to be performed, and how, is decided
during vmrun and stored in hflags as well as in nested_cr3 and
nested_pg_mode.

As get_hphys performs a direct cpu_vmexit in case of NPT faults, we need
retaddr in that function. To avoid changing the signature of
cpu_handle_mmu_fault, the value is passed from tlb_fill to get_hphys via
the CPU state.

This was tested successfully with the Jailhouse hypervisor.

Signed-off-by: Jan Kiszka
---
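As background for review, here is a minimal, self-contained sketch of
the translation scheme the patch implements. This is illustration only,
not QEMU code: the types and helpers below (gva_t, stage2(),
translate()) are invented, and stage2() merely stands in for the real
get_hphys(), which walks the tables rooted at env->nested_cr3. The point
it demonstrates is that with NPT enabled, both the addresses of the
guest's stage-1 page-table entries and the final guest-physical address
must pass through the nested (stage-2) translation:

/* sketch.c - build with: cc -o sketch sketch.c */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t gva_t;  /* guest-virtual address */
typedef uint64_t gpa_t;  /* guest-physical address */
typedef uint64_t hpa_t;  /* host-physical address */

/* Stand-in for the patch's get_hphys(): stage-2 translation of a
 * guest-physical address. Here just a fixed offset so the example
 * runs. */
static hpa_t stage2(gpa_t gpa)
{
    return gpa + 0x100000000ULL;
}

/* One-level toy stage-1 walk. The point mirrored from the patch: the
 * page-table entry itself lives in guest-physical memory, so its
 * address goes through stage 2 before the entry can be read, and the
 * final data address goes through stage 2 once more. */
static hpa_t translate(gva_t va, gpa_t stage1_root)
{
    gpa_t pte_gpa = stage1_root + ((va >> 12) << 3);
    hpa_t pte_hpa = stage2(pte_gpa); /* nested walk for the walk itself */
    (void)pte_hpa;                   /* a real walker would load the PTE here */

    gpa_t data_gpa = va;             /* pretend an identity stage-1 mapping */
    return stage2(data_gpa);
}

int main(void)
{
    printf("hpa = 0x%llx\n",
           (unsigned long long)translate(0x1000, 0x2000));
    return 0;
}

This is also why get_hphys() below reports SVM_NPTEXIT_GPT when prot is
NULL (the NPT fault hit a guest page-table access) and SVM_NPTEXIT_GPA
otherwise (it hit the final guest-physical address).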
 target/i386/cpu.c         |   2 +-
 target/i386/cpu.h         |   6 ++
 target/i386/excp_helper.c | 216 +++++++++++++++++++++++++++++++++++++++++++-
 target/i386/mem_helper.c  |   6 +-
 target/i386/svm.h         |  14 +++
 target/i386/svm_helper.c  |  22 +++++
 6 files changed, 260 insertions(+), 6 deletions(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 555ae79d29..0d14254ca1 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -261,7 +261,7 @@ static void x86_cpu_vendor_words2str(char *dst, uint32_t vendor1,
 #define TCG_EXT3_FEATURES (CPUID_EXT3_LAHF_LM | CPUID_EXT3_SVM | \
           CPUID_EXT3_CR8LEG | CPUID_EXT3_ABM | CPUID_EXT3_SSE4A)
 #define TCG_EXT4_FEATURES 0
-#define TCG_SVM_FEATURES 0
+#define TCG_SVM_FEATURES CPUID_SVM_NPT
 #define TCG_KVM_FEATURES 0
 #define TCG_7_0_EBX_FEATURES (CPUID_7_0_EBX_SMEP | CPUID_7_0_EBX_SMAP | \
           CPUID_7_0_EBX_BMI1 | CPUID_7_0_EBX_BMI2 | CPUID_7_0_EBX_ADX | \

diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index d711634c2f..106a70e1cf 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -177,6 +177,7 @@ typedef enum X86Seg {
 #define HF_IOBPT_SHIFT      24 /* an io breakpoint enabled */
 #define HF_MPX_EN_SHIFT     25 /* MPX Enabled (CR4+XCR0+BNDCFGx) */
 #define HF_MPX_IU_SHIFT     26 /* BND registers in-use */
+#define HF_NPT_SHIFT        27 /* Nested Paging enabled */
 
 #define HF_CPL_MASK          (3 << HF_CPL_SHIFT)
 #define HF_INHIBIT_IRQ_MASK  (1 << HF_INHIBIT_IRQ_SHIFT)
@@ -202,6 +203,7 @@ typedef enum X86Seg {
 #define HF_IOBPT_MASK        (1 << HF_IOBPT_SHIFT)
 #define HF_MPX_EN_MASK       (1 << HF_MPX_EN_SHIFT)
 #define HF_MPX_IU_MASK       (1 << HF_MPX_IU_SHIFT)
+#define HF_NPT_MASK          (1 << HF_NPT_SHIFT)
 
 /* hflags2 */
 
@@ -1201,12 +1203,16 @@ typedef struct CPUX86State {
     uint16_t intercept_dr_read;
     uint16_t intercept_dr_write;
     uint32_t intercept_exceptions;
+    uint64_t nested_cr3;
+    uint32_t nested_pg_mode;
     uint8_t v_tpr;
 
     /* KVM states, automatically cleared on reset */
     uint8_t nmi_injected;
     uint8_t nmi_pending;
 
+    uintptr_t retaddr;
+
     /* Fields up to this point are cleared by a CPU reset */
     struct {} end_reset_fields;
 

diff --git a/target/i386/excp_helper.c b/target/i386/excp_helper.c
index cb4d1b7d33..e3bb59284f 100644
--- a/target/i386/excp_helper.c
+++ b/target/i386/excp_helper.c
@@ -157,6 +157,209 @@ int x86_cpu_handle_mmu_fault(CPUState *cs, vaddr addr, int size,
 
 #else
 
+static hwaddr get_hphys(CPUState *cs, hwaddr gphys, MMUAccessType access_type,
+                        int *prot)
+{
+    CPUX86State *env = &X86_CPU(cs)->env;
+    uint64_t rsvd_mask = PG_HI_RSVD_MASK;
+    uint64_t ptep, pte;
+    uint64_t exit_info_1 = 0;
+    target_ulong pde_addr, pte_addr;
+    uint32_t page_offset;
+    int page_size;
+
+    if (likely(!(env->hflags & HF_NPT_MASK))) {
+        return gphys;
+    }
+
+    if (!(env->nested_pg_mode & SVM_NPT_NXE)) {
+        rsvd_mask |= PG_NX_MASK;
+    }
+
+    if (env->nested_pg_mode & SVM_NPT_PAE) {
+        uint64_t pde, pdpe;
+        target_ulong pdpe_addr;
+
+#ifdef TARGET_X86_64
+        if (env->nested_pg_mode & SVM_NPT_LMA) {
+            uint64_t pml5e;
+            uint64_t pml4e_addr, pml4e;
+
+            pml5e = env->nested_cr3;
+            ptep = PG_NX_MASK | PG_USER_MASK | PG_RW_MASK;
+
+            pml4e_addr = (pml5e & PG_ADDRESS_MASK) +
+                    (((gphys >> 39) & 0x1ff) << 3);
+            pml4e = x86_ldq_phys(cs, pml4e_addr);
+            if (!(pml4e & PG_PRESENT_MASK)) {
+                goto do_fault;
+            }
+            if (pml4e & (rsvd_mask | PG_PSE_MASK)) {
+                goto do_fault_rsvd;
+            }
+            if (!(pml4e & PG_ACCESSED_MASK)) {
+                pml4e |= PG_ACCESSED_MASK;
+                x86_stl_phys_notdirty(cs, pml4e_addr, pml4e);
+            }
+            ptep &= pml4e ^ PG_NX_MASK;
+            pdpe_addr = (pml4e & PG_ADDRESS_MASK) +
+                    (((gphys >> 30) & 0x1ff) << 3);
+            pdpe = x86_ldq_phys(cs, pdpe_addr);
+            if (!(pdpe & PG_PRESENT_MASK)) {
+                goto do_fault;
+            }
+            if (pdpe & rsvd_mask) {
+                goto do_fault_rsvd;
+            }
+            ptep &= pdpe ^ PG_NX_MASK;
+            if (!(pdpe & PG_ACCESSED_MASK)) {
+                pdpe |= PG_ACCESSED_MASK;
+                x86_stl_phys_notdirty(cs, pdpe_addr, pdpe);
+            }
+            if (pdpe & PG_PSE_MASK) {
+                /* 1 GB page */
+                page_size = 1024 * 1024 * 1024;
+                pte_addr = pdpe_addr;
+                pte = pdpe;
+                goto do_check_protect;
+            }
+        } else
+#endif
+        {
+            pdpe_addr = (env->nested_cr3 & ~0x1f) + ((gphys >> 27) & 0x18);
+            pdpe = x86_ldq_phys(cs, pdpe_addr);
+            if (!(pdpe & PG_PRESENT_MASK)) {
+                goto do_fault;
+            }
+            rsvd_mask |= PG_HI_USER_MASK;
+            if (pdpe & (rsvd_mask | PG_NX_MASK)) {
+                goto do_fault_rsvd;
+            }
+            ptep = PG_NX_MASK | PG_USER_MASK | PG_RW_MASK;
+        }
+
+        pde_addr = (pdpe & PG_ADDRESS_MASK) + (((gphys >> 21) & 0x1ff) << 3);
+        pde = x86_ldq_phys(cs, pde_addr);
+        if (!(pde & PG_PRESENT_MASK)) {
+            goto do_fault;
+        }
+        if (pde & rsvd_mask) {
+            goto do_fault_rsvd;
+        }
+        ptep &= pde ^ PG_NX_MASK;
+        if (pde & PG_PSE_MASK) {
+            /* 2 MB page */
+            page_size = 2048 * 1024;
+            pte_addr = pde_addr;
+            pte = pde;
+            goto do_check_protect;
+        }
+        /* 4 KB page */
+        if (!(pde & PG_ACCESSED_MASK)) {
+            pde |= PG_ACCESSED_MASK;
+            x86_stl_phys_notdirty(cs, pde_addr, pde);
+        }
+        pte_addr = (pde & PG_ADDRESS_MASK) + (((gphys >> 12) & 0x1ff) << 3);
+        pte = x86_ldq_phys(cs, pte_addr);
+        if (!(pte & PG_PRESENT_MASK)) {
+            goto do_fault;
+        }
+        if (pte & rsvd_mask) {
+            goto do_fault_rsvd;
+        }
+        /* combine pde and pte nx, user and rw protections */
+        ptep &= pte ^ PG_NX_MASK;
+        page_size = 4096;
+    } else {
+        uint32_t pde;
+
+        /* page directory entry */
+        pde_addr = (env->nested_cr3 & ~0xfff) + ((gphys >> 20) & 0xffc);
+        pde = x86_ldl_phys(cs, pde_addr);
+        if (!(pde & PG_PRESENT_MASK)) {
+            goto do_fault;
+        }
+        ptep = pde | PG_NX_MASK;
+
+        /* if PSE bit is set, then we use a 4MB page */
+        if ((pde & PG_PSE_MASK) && (env->cr[4] & CR4_PSE_MASK)) {
+            page_size = 4096 * 1024;
+            pte_addr = pde_addr;
+
+            /* Bits 20-13 provide bits 39-32 of the address, bit 21 is reserved.
+             * Leave bits 20-13 in place for setting accessed/dirty bits below.
+             */
+            pte = pde | ((pde & 0x1fe000LL) << (32 - 13));
+            rsvd_mask = 0x200000;
+            goto do_check_protect_pse36;
+        }
+
+        if (!(pde & PG_ACCESSED_MASK)) {
+            pde |= PG_ACCESSED_MASK;
+            x86_stl_phys_notdirty(cs, pde_addr, pde);
+        }
+
+        /* page table entry */
+        pte_addr = (pde & ~0xfff) + ((gphys >> 10) & 0xffc);
+        pte = x86_ldl_phys(cs, pte_addr);
+        if (!(pte & PG_PRESENT_MASK)) {
+            goto do_fault;
+        }
+        /* combine pde and pte user and rw protections */
+        ptep &= pte | PG_NX_MASK;
+        page_size = 4096;
+        rsvd_mask = 0;
+    }
+
+ do_check_protect:
+    rsvd_mask |= (page_size - 1) & PG_ADDRESS_MASK & ~PG_PSE_PAT_MASK;
+ do_check_protect_pse36:
+    if (pte & rsvd_mask) {
+        goto do_fault_rsvd;
+    }
+    ptep ^= PG_NX_MASK;
+
+    if (!(ptep & PG_USER_MASK)) {
+        goto do_fault_protect;
+    }
+    if (ptep & PG_NX_MASK) {
+        if (access_type == MMU_INST_FETCH) {
+            goto do_fault_protect;
+        }
+        *prot &= ~PAGE_EXEC;
+    }
+    if (!(ptep & PG_RW_MASK)) {
+        if (access_type == MMU_DATA_STORE) {
+            goto do_fault_protect;
+        }
+        *prot &= ~PAGE_WRITE;
+    }
+
+    pte &= PG_ADDRESS_MASK & ~(page_size - 1);
+    page_offset = gphys & (page_size - 1);
+    return pte + page_offset;
+
+ do_fault_rsvd:
+    exit_info_1 |= SVM_NPTEXIT_RSVD;
+ do_fault_protect:
+    exit_info_1 |= SVM_NPTEXIT_P;
+ do_fault:
+    x86_stq_phys(cs, env->vm_vmcb + offsetof(struct vmcb, control.exit_info_2),
+                 gphys);
+    exit_info_1 |= SVM_NPTEXIT_US;
+    if (access_type == MMU_DATA_STORE) {
+        exit_info_1 |= SVM_NPTEXIT_RW;
+    } else if (access_type == MMU_INST_FETCH) {
+        exit_info_1 |= SVM_NPTEXIT_ID;
+    }
+    if (prot) {
+        exit_info_1 |= SVM_NPTEXIT_GPA;
+    } else { /* page table access */
+        exit_info_1 |= SVM_NPTEXIT_GPT;
+    }
+    cpu_vmexit(env, SVM_EXIT_NPF, exit_info_1, env->retaddr);
+}
+
 /* return value:
  * -1 = cannot handle fault
  * 0  = nothing more to do
@@ -224,6 +427,7 @@ int x86_cpu_handle_mmu_fault(CPUState *cs, vaddr addr, int size,
             if (la57) {
                 pml5e_addr = ((env->cr[3] & ~0xfff) +
                         (((addr >> 48) & 0x1ff) << 3)) & a20_mask;
+                pml5e_addr = get_hphys(cs, pml5e_addr, MMU_DATA_STORE, NULL);
                 pml5e = x86_ldq_phys(cs, pml5e_addr);
                 if (!(pml5e & PG_PRESENT_MASK)) {
                     goto do_fault;
@@ -243,6 +447,7 @@ int x86_cpu_handle_mmu_fault(CPUState *cs, vaddr addr, int size,
 
             pml4e_addr = ((pml5e & PG_ADDRESS_MASK) +
                     (((addr >> 39) & 0x1ff) << 3)) & a20_mask;
+            pml4e_addr = get_hphys(cs, pml4e_addr, MMU_DATA_STORE, NULL);
             pml4e = x86_ldq_phys(cs, pml4e_addr);
             if (!(pml4e & PG_PRESENT_MASK)) {
                 goto do_fault;
@@ -257,6 +462,7 @@ int x86_cpu_handle_mmu_fault(CPUState *cs, vaddr addr, int size,
             ptep &= pml4e ^ PG_NX_MASK;
             pdpe_addr = ((pml4e & PG_ADDRESS_MASK) +
                     (((addr >> 30) & 0x1ff) << 3)) & a20_mask;
+            pdpe_addr = get_hphys(cs, pdpe_addr, MMU_DATA_STORE, NULL);
             pdpe = x86_ldq_phys(cs, pdpe_addr);
             if (!(pdpe & PG_PRESENT_MASK)) {
                 goto do_fault;
@@ -282,6 +488,7 @@ int x86_cpu_handle_mmu_fault(CPUState *cs, vaddr addr, int size,
             /* XXX: load them when cr3 is loaded ? */
             pdpe_addr = ((env->cr[3] & ~0x1f) + ((addr >> 27) & 0x18)) &
                 a20_mask;
+            pdpe_addr = get_hphys(cs, pdpe_addr, MMU_DATA_STORE, NULL);
             pdpe = x86_ldq_phys(cs, pdpe_addr);
             if (!(pdpe & PG_PRESENT_MASK)) {
                 goto do_fault;
@@ -295,6 +502,7 @@ int x86_cpu_handle_mmu_fault(CPUState *cs, vaddr addr, int size,
 
             pde_addr = ((pdpe & PG_ADDRESS_MASK) +
                     (((addr >> 21) & 0x1ff) << 3)) & a20_mask;
+            pde_addr = get_hphys(cs, pde_addr, MMU_DATA_STORE, NULL);
             pde = x86_ldq_phys(cs, pde_addr);
             if (!(pde & PG_PRESENT_MASK)) {
                 goto do_fault;
@@ -317,6 +525,7 @@ int x86_cpu_handle_mmu_fault(CPUState *cs, vaddr addr, int size,
             }
             pte_addr = ((pde & PG_ADDRESS_MASK) +
                     (((addr >> 12) & 0x1ff) << 3)) & a20_mask;
+            pte_addr = get_hphys(cs, pte_addr, MMU_DATA_STORE, NULL);
             pte = x86_ldq_phys(cs, pte_addr);
             if (!(pte & PG_PRESENT_MASK)) {
                 goto do_fault;
@@ -333,6 +542,7 @@ int x86_cpu_handle_mmu_fault(CPUState *cs, vaddr addr, int size,
         /* page directory entry */
         pde_addr = ((env->cr[3] & ~0xfff) + ((addr >> 20) & 0xffc)) &
             a20_mask;
+        pde_addr = get_hphys(cs, pde_addr, MMU_DATA_STORE, NULL);
         pde = x86_ldl_phys(cs, pde_addr);
         if (!(pde & PG_PRESENT_MASK)) {
             goto do_fault;
@@ -360,6 +570,7 @@ int x86_cpu_handle_mmu_fault(CPUState *cs, vaddr addr, int size,
         /* page directory entry */
         pte_addr = ((pde & ~0xfff) + ((addr >> 10) & 0xffc)) &
             a20_mask;
+        pte_addr = get_hphys(cs, pte_addr, MMU_DATA_STORE, NULL);
         pte = x86_ldl_phys(cs, pte_addr);
         if (!(pte & PG_PRESENT_MASK)) {
             goto do_fault;
@@ -442,12 +653,13 @@ do_check_protect_pse36:
 
     /* align to page_size */
     pte &= PG_ADDRESS_MASK & ~(page_size - 1);
+    page_offset = addr & (page_size - 1);
+    paddr = get_hphys(cs, pte + page_offset, is_write1, &prot);
 
     /* Even if 4MB pages, we map only one 4KB page in the cache to
        avoid filling it too fast */
     vaddr = addr & TARGET_PAGE_MASK;
-    page_offset = vaddr & (page_size - 1);
-    paddr = pte + page_offset;
+    paddr &= TARGET_PAGE_MASK;
 
     assert(prot & (1 << is_write1));
     tlb_set_page_with_attrs(cs, vaddr, paddr, cpu_get_mem_attrs(env),

diff --git a/target/i386/mem_helper.c b/target/i386/mem_helper.c
index a8ae694a9c..30c26b9d9c 100644
--- a/target/i386/mem_helper.c
+++ b/target/i386/mem_helper.c
@@ -202,13 +202,13 @@ void helper_boundl(CPUX86State *env, target_ulong a0, int v)
 void tlb_fill(CPUState *cs, target_ulong addr, int size,
               MMUAccessType access_type, int mmu_idx, uintptr_t retaddr)
 {
+    X86CPU *cpu = X86_CPU(cs);
+    CPUX86State *env = &cpu->env;
     int ret;
 
+    env->retaddr = retaddr;
     ret = x86_cpu_handle_mmu_fault(cs, addr, size, access_type, mmu_idx);
     if (ret) {
-        X86CPU *cpu = X86_CPU(cs);
-        CPUX86State *env = &cpu->env;
-
         raise_exception_err_ra(env, cs->exception_index, env->error_code, retaddr);
     }
 }

diff --git a/target/i386/svm.h b/target/i386/svm.h
index 922c8fd39c..23a3a040b8 100644
--- a/target/i386/svm.h
+++ b/target/i386/svm.h
@@ -130,6 +130,20 @@
 
 #define SVM_CR0_SELECTIVE_MASK (1 << 3 | 1) /* TS and MP */
 
+#define SVM_NPT_ENABLED     (1 << 0)
+
+#define SVM_NPT_PAE         (1 << 0)
+#define SVM_NPT_LMA         (1 << 1)
+#define SVM_NPT_NXE         (1 << 2)
+
+#define SVM_NPTEXIT_P       (1ULL << 0)
+#define SVM_NPTEXIT_RW      (1ULL << 1)
+#define SVM_NPTEXIT_US      (1ULL << 2)
+#define SVM_NPTEXIT_RSVD    (1ULL << 3)
+#define SVM_NPTEXIT_ID      (1ULL << 4)
+#define SVM_NPTEXIT_GPA     (1ULL << 32)
+#define SVM_NPTEXIT_GPT     (1ULL << 33)
+
 struct QEMU_PACKED vmcb_control_area {
     uint16_t intercept_cr_read;
     uint16_t intercept_cr_write;

diff --git a/target/i386/svm_helper.c b/target/i386/svm_helper.c
index e3288955f1..209881cf16 100644
--- a/target/i386/svm_helper.c
+++ b/target/i386/svm_helper.c
@@ -124,6 +124,7 @@ void helper_vmrun(CPUX86State *env, int aflag, int next_eip_addend)
 {
     CPUState *cs = CPU(x86_env_get_cpu(env));
     target_ulong addr;
+    uint64_t nested_ctl;
     uint32_t event_inj;
     uint32_t int_ctl;
 
@@ -206,6 +207,26 @@ void helper_vmrun(CPUX86State *env, int aflag, int next_eip_addend)
                                                 control.intercept_exceptions
                                                 ));
 
+    nested_ctl = x86_ldq_phys(cs, env->vm_vmcb + offsetof(struct vmcb,
+                                                          control.nested_ctl));
+    if (nested_ctl & SVM_NPT_ENABLED) {
+        env->nested_cr3 = x86_ldq_phys(cs,
+                                env->vm_vmcb + offsetof(struct vmcb,
+                                                        control.nested_cr3));
+        env->hflags |= HF_NPT_MASK;
+
+        env->nested_pg_mode = 0;
+        if (env->cr[4] & CR4_PAE_MASK) {
+            env->nested_pg_mode |= SVM_NPT_PAE;
+        }
+        if (env->hflags & HF_LMA_MASK) {
+            env->nested_pg_mode |= SVM_NPT_LMA;
+        }
+        if (env->efer & MSR_EFER_NXE) {
+            env->nested_pg_mode |= SVM_NPT_NXE;
+        }
+    }
+
     /* enable intercepts */
     env->hflags |= HF_SVMI_MASK;
 
@@ -616,6 +637,7 @@ void do_vmexit(CPUX86State *env, uint32_t exit_code, uint64_t exit_info_1)
         x86_stl_phys(cs,
                  env->vm_vmcb + offsetof(struct vmcb, control.int_state), 0);
     }
+    env->hflags &= ~HF_NPT_MASK;
 
     /* Save the VM state in the vmcb */
     svm_save_seg(env, env->vm_vmcb + offsetof(struct vmcb, save.es),
-- 
2.13.6