From: Philippe Mathieu-Daudé <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Eduardo Habkost, Cornelia Huck, Richard Henderson, Sunil Muthuswamy, Marcelo Tosatti, David Gibson, Marcel Apfelbaum, kvm@vger.kernel.org, Wenchao Wang, Thomas Huth, Cameron Esfahani, Paolo Bonzini, David Hildenbrand, Roman Bolshakov, Peter Maydell, Greg Kurz, qemu-arm@nongnu.org, Halil Pasic, Colin Xu, Claudio Fontana, qemu-ppc@nongnu.org, Christian Borntraeger, qemu-s390x@nongnu.org, haxm-team@intel.com
Subject: [RFC PATCH 17/19] accel/hvf: Reduce deref by declaring 'hv_vcpuid_t hvf_fd' on stack
Date: Wed, 3 Mar 2021 19:22:17 +0100
Message-Id: <20210303182219.1631042-18-philmd@redhat.com>
In-Reply-To: <20210303182219.1631042-1-philmd@redhat.com>
References: <20210303182219.1631042-1-philmd@redhat.com>

In order to make the next commits easier to review, declare 'hvf_fd'
on the stack when it is used in various places in a function.
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 target/i386/hvf/hvf.c       |  95 ++++++++--------
 target/i386/hvf/x86_descr.c |  19 ++--
 target/i386/hvf/x86hvf.c    | 209 ++++++++++++++++++------------------
 3 files changed, 166 insertions(+), 157 deletions(-)

diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index 3c5c9c8197e..effee39ee9b 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -504,6 +504,7 @@ int hvf_init_vcpu(CPUState *cpu)
 
     X86CPU *x86cpu = X86_CPU(cpu);
     CPUX86State *env = &x86cpu->env;
+    hv_vcpuid_t hvf_fd;
     int r;
 
     /* init cpu signals */
@@ -532,9 +533,10 @@ int hvf_init_vcpu(CPUState *cpu)
         }
     }
 
-    r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf_fd, HV_VCPU_DEFAULT);
+    r = hv_vcpu_create(&hvf_fd, HV_VCPU_DEFAULT);
     cpu->vcpu_dirty = true;
     assert_hvf_ok(r);
+    cpu->hvf_fd = (int)hvf_fd;
 
     if (hv_vmx_read_capability(HV_VMX_CAP_PINBASED,
         &hvf_state->hvf_caps->vmx_cap_pinbased)) {
@@ -554,43 +556,43 @@ int hvf_init_vcpu(CPUState *cpu)
     }
 
     /* set VMCS control fields */
-    wvmcs(cpu->hvf_fd, VMCS_PIN_BASED_CTLS,
+    wvmcs(hvf_fd, VMCS_PIN_BASED_CTLS,
           cap2ctrl(hvf_state->hvf_caps->vmx_cap_pinbased,
                    VMCS_PIN_BASED_CTLS_EXTINT |
                    VMCS_PIN_BASED_CTLS_NMI |
                    VMCS_PIN_BASED_CTLS_VNMI));
-    wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS,
+    wvmcs(hvf_fd, VMCS_PRI_PROC_BASED_CTLS,
          cap2ctrl(hvf_state->hvf_caps->vmx_cap_procbased,
                   VMCS_PRI_PROC_BASED_CTLS_HLT |
                   VMCS_PRI_PROC_BASED_CTLS_MWAIT |
                   VMCS_PRI_PROC_BASED_CTLS_TSC_OFFSET |
                   VMCS_PRI_PROC_BASED_CTLS_TPR_SHADOW) |
          VMCS_PRI_PROC_BASED_CTLS_SEC_CONTROL);
-    wvmcs(cpu->hvf_fd, VMCS_SEC_PROC_BASED_CTLS,
+    wvmcs(hvf_fd, VMCS_SEC_PROC_BASED_CTLS,
           cap2ctrl(hvf_state->hvf_caps->vmx_cap_procbased2,
                    VMCS_PRI_PROC_BASED2_CTLS_APIC_ACCESSES));
 
-    wvmcs(cpu->hvf_fd, VMCS_ENTRY_CTLS, cap2ctrl(hvf_state->hvf_caps->vmx_cap_entry,
+    wvmcs(hvf_fd, VMCS_ENTRY_CTLS, cap2ctrl(hvf_state->hvf_caps->vmx_cap_entry,
           0));
-    wvmcs(cpu->hvf_fd, VMCS_EXCEPTION_BITMAP, 0); /* Double fault */
+    wvmcs(hvf_fd, VMCS_EXCEPTION_BITMAP, 0); /* Double fault */
 
-    wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, 0);
+    wvmcs(hvf_fd, VMCS_TPR_THRESHOLD, 0);
 
     x86cpu = X86_CPU(cpu);
     x86cpu->env.xsave_buf = qemu_memalign(4096, 4096);
 
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_STAR, 1);
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_LSTAR, 1);
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_CSTAR, 1);
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_FMASK, 1);
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_FSBASE, 1);
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_GSBASE, 1);
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_KERNELGSBASE, 1);
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_TSC_AUX, 1);
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_TSC, 1);
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_CS, 1);
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_EIP, 1);
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_ESP, 1);
+    hv_vcpu_enable_native_msr(hvf_fd, MSR_STAR, 1);
+    hv_vcpu_enable_native_msr(hvf_fd, MSR_LSTAR, 1);
+    hv_vcpu_enable_native_msr(hvf_fd, MSR_CSTAR, 1);
+    hv_vcpu_enable_native_msr(hvf_fd, MSR_FMASK, 1);
+    hv_vcpu_enable_native_msr(hvf_fd, MSR_FSBASE, 1);
+    hv_vcpu_enable_native_msr(hvf_fd, MSR_GSBASE, 1);
+    hv_vcpu_enable_native_msr(hvf_fd, MSR_KERNELGSBASE, 1);
+    hv_vcpu_enable_native_msr(hvf_fd, MSR_TSC_AUX, 1);
+    hv_vcpu_enable_native_msr(hvf_fd, MSR_IA32_TSC, 1);
+    hv_vcpu_enable_native_msr(hvf_fd, MSR_IA32_SYSENTER_CS, 1);
+    hv_vcpu_enable_native_msr(hvf_fd, MSR_IA32_SYSENTER_EIP, 1);
+    hv_vcpu_enable_native_msr(hvf_fd, MSR_IA32_SYSENTER_ESP, 1);
 
     return 0;
 }
@@ -695,6 +697,7 @@ int hvf_vcpu_exec(CPUState *cpu)
 {
     X86CPU *x86_cpu = X86_CPU(cpu);
     CPUX86State *env = &x86_cpu->env;
+    hv_vcpuid_t hvf_fd = (hv_vcpuid_t)cpu->hvf_fd;
     int ret = 0;
     uint64_t rip = 0;
 
@@ -719,20 +722,20 @@ int hvf_vcpu_exec(CPUState *cpu)
             return EXCP_HLT;
         }
 
-        hv_return_t r = hv_vcpu_run(cpu->hvf_fd);
+        hv_return_t r = hv_vcpu_run(hvf_fd);
         assert_hvf_ok(r);
 
         /* handle VMEXIT */
-        uint64_t exit_reason = rvmcs(cpu->hvf_fd, VMCS_EXIT_REASON);
-        uint64_t exit_qual = rvmcs(cpu->hvf_fd, VMCS_EXIT_QUALIFICATION);
-        uint32_t ins_len = (uint32_t)rvmcs(cpu->hvf_fd,
+        uint64_t exit_reason = rvmcs(hvf_fd, VMCS_EXIT_REASON);
+        uint64_t exit_qual = rvmcs(hvf_fd, VMCS_EXIT_QUALIFICATION);
+        uint32_t ins_len = (uint32_t)rvmcs(hvf_fd,
                                            VMCS_EXIT_INSTRUCTION_LENGTH);
 
-        uint64_t idtvec_info = rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_INFO);
+        uint64_t idtvec_info = rvmcs(hvf_fd, VMCS_IDT_VECTORING_INFO);
 
         hvf_store_events(cpu, ins_len, idtvec_info);
-        rip = rreg(cpu->hvf_fd, HV_X86_RIP);
-        env->eflags = rreg(cpu->hvf_fd, HV_X86_RFLAGS);
+        rip = rreg(hvf_fd, HV_X86_RIP);
+        env->eflags = rreg(hvf_fd, HV_X86_RFLAGS);
 
         qemu_mutex_lock_iothread();
 
@@ -762,7 +765,7 @@ int hvf_vcpu_exec(CPUState *cpu)
         case EXIT_REASON_EPT_FAULT:
         {
             hvf_slot *slot;
-            uint64_t gpa = rvmcs(cpu->hvf_fd, VMCS_GUEST_PHYSICAL_ADDRESS);
+            uint64_t gpa = rvmcs(hvf_fd, VMCS_GUEST_PHYSICAL_ADDRESS);
 
             if (((idtvec_info & VMCS_IDT_VEC_VALID) == 0) &&
                 ((exit_qual & EXIT_QUAL_NMIUDTI) != 0)) {
@@ -807,7 +810,7 @@ int hvf_vcpu_exec(CPUState *cpu)
                     store_regs(cpu);
                     break;
                 } else if (!string && !in) {
-                    RAX(env) = rreg(cpu->hvf_fd, HV_X86_RAX);
+                    RAX(env) = rreg(hvf_fd, HV_X86_RAX);
                     hvf_handle_io(env, port, &RAX(env), 1, size, 1);
                     macvm_set_rip(cpu, rip + ins_len);
                     break;
@@ -823,21 +826,21 @@ int hvf_vcpu_exec(CPUState *cpu)
             break;
         }
         case EXIT_REASON_CPUID: {
-            uint32_t rax = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RAX);
-            uint32_t rbx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RBX);
-            uint32_t rcx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RCX);
-            uint32_t rdx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RDX);
+            uint32_t rax = (uint32_t)rreg(hvf_fd, HV_X86_RAX);
+            uint32_t rbx = (uint32_t)rreg(hvf_fd, HV_X86_RBX);
+            uint32_t rcx = (uint32_t)rreg(hvf_fd, HV_X86_RCX);
+            uint32_t rdx = (uint32_t)rreg(hvf_fd, HV_X86_RDX);
 
             if (rax == 1) {
                 /* CPUID1.ecx.OSXSAVE needs to know CR4 */
-                env->cr[4] = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR4);
+                env->cr[4] = rvmcs(hvf_fd, VMCS_GUEST_CR4);
             }
             hvf_cpu_x86_cpuid(env, rax, rcx, &rax, &rbx, &rcx, &rdx);
 
-            wreg(cpu->hvf_fd, HV_X86_RAX, rax);
-            wreg(cpu->hvf_fd, HV_X86_RBX, rbx);
-            wreg(cpu->hvf_fd, HV_X86_RCX, rcx);
-            wreg(cpu->hvf_fd, HV_X86_RDX, rdx);
+            wreg(hvf_fd, HV_X86_RAX, rax);
+            wreg(hvf_fd, HV_X86_RBX, rbx);
+            wreg(hvf_fd, HV_X86_RCX, rcx);
+            wreg(hvf_fd, HV_X86_RDX, rdx);
 
             macvm_set_rip(cpu, rip + ins_len);
             break;
@@ -845,16 +848,16 @@ int hvf_vcpu_exec(CPUState *cpu)
        case EXIT_REASON_XSETBV: {
            X86CPU *x86_cpu = X86_CPU(cpu);
            CPUX86State *env = &x86_cpu->env;
-           uint32_t eax = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RAX);
-           uint32_t ecx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RCX);
-           uint32_t edx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RDX);
+           uint32_t eax = (uint32_t)rreg(hvf_fd, HV_X86_RAX);
+           uint32_t ecx = (uint32_t)rreg(hvf_fd, HV_X86_RCX);
+           uint32_t edx = (uint32_t)rreg(hvf_fd, HV_X86_RDX);
 
            if (ecx) {
                macvm_set_rip(cpu, rip + ins_len);
                break;
            }
            env->xcr0 = ((uint64_t)edx << 32) | eax;
-           wreg(cpu->hvf_fd, HV_X86_XCR0, env->xcr0 | 1);
+           wreg(hvf_fd, HV_X86_XCR0, env->xcr0 | 1);
            macvm_set_rip(cpu, rip + ins_len);
            break;
        }
@@ -893,11 +896,11 @@ int hvf_vcpu_exec(CPUState *cpu)
 
             switch (cr) {
             case 0x0: {
-                macvm_set_cr0(cpu->hvf_fd, RRX(env, reg));
+                macvm_set_cr0(hvf_fd, RRX(env, reg));
                 break;
             }
             case 4: {
-                macvm_set_cr4(cpu->hvf_fd, RRX(env, reg));
+                macvm_set_cr4(hvf_fd, RRX(env, reg));
                 break;
             }
             case 8: {
@@ -933,7 +936,7 @@ int hvf_vcpu_exec(CPUState *cpu)
            break;
        }
        case EXIT_REASON_TASK_SWITCH: {
-           uint64_t vinfo = rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_INFO);
+           uint64_t vinfo = rvmcs(hvf_fd, VMCS_IDT_VECTORING_INFO);
            x68_segment_selector sel = {.sel = exit_qual & 0xffff};
            vmx_handle_task_switch(cpu, sel, (exit_qual >> 30) & 0x3,
             vinfo & VMCS_INTR_VALID, vinfo & VECTORING_INFO_VECTOR_MASK, vinfo
@@ -946,8 +949,8 @@ int hvf_vcpu_exec(CPUState *cpu)
            break;
        }
        case EXIT_REASON_RDPMC:
-           wreg(cpu->hvf_fd, HV_X86_RAX, 0);
-           wreg(cpu->hvf_fd, HV_X86_RDX, 0);
+           wreg(hvf_fd, HV_X86_RAX, 0);
+           wreg(hvf_fd, HV_X86_RDX, 0);
            macvm_set_rip(cpu, rip + ins_len);
            break;
        case VMX_REASON_VMCALL:
diff --git a/target/i386/hvf/x86_descr.c b/target/i386/hvf/x86_descr.c
index 9f539e73f6d..1c6220baa0d 100644
--- a/target/i386/hvf/x86_descr.c
+++ b/target/i386/hvf/x86_descr.c
@@ -75,20 +75,23 @@ void vmx_write_segment_selector(struct CPUState *cpu, x68_segment_selector selec
 
 void vmx_read_segment_descriptor(struct CPUState *cpu, struct vmx_segment *desc, X86Seg seg)
 {
-    desc->sel = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].selector);
-    desc->base = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].base);
-    desc->limit = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].limit);
-    desc->ar = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].ar_bytes);
+    hv_vcpuid_t hvf_fd = (hv_vcpuid_t)cpu->hvf_fd;
+
+    desc->sel = rvmcs(hvf_fd, vmx_segment_fields[seg].selector);
+    desc->base = rvmcs(hvf_fd, vmx_segment_fields[seg].base);
+    desc->limit = rvmcs(hvf_fd, vmx_segment_fields[seg].limit);
+    desc->ar = rvmcs(hvf_fd, vmx_segment_fields[seg].ar_bytes);
 }
 
 void vmx_write_segment_descriptor(CPUState *cpu, struct vmx_segment *desc, X86Seg seg)
 {
     const struct vmx_segment_field *sf = &vmx_segment_fields[seg];
+    hv_vcpuid_t hvf_fd = (hv_vcpuid_t)cpu->hvf_fd;
 
-    wvmcs(cpu->hvf_fd, sf->base, desc->base);
-    wvmcs(cpu->hvf_fd, sf->limit, desc->limit);
-    wvmcs(cpu->hvf_fd, sf->selector, desc->sel);
-    wvmcs(cpu->hvf_fd, sf->ar_bytes, desc->ar);
+    wvmcs(hvf_fd, sf->base, desc->base);
+    wvmcs(hvf_fd, sf->limit, desc->limit);
+    wvmcs(hvf_fd, sf->selector, desc->sel);
+    wvmcs(hvf_fd, sf->ar_bytes, desc->ar);
 }
 
 void x86_segment_descriptor_to_vmx(struct CPUState *cpu, x68_segment_selector selector, struct x86_segment_descriptor *desc, struct vmx_segment *vmx_desc)
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index 0d7533742eb..2f291f2ad53 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -89,21 +89,22 @@ void hvf_put_xsave(CPUState *cpu_state)
 void hvf_put_segments(CPUState *cpu_state)
 {
     CPUX86State *env = &X86_CPU(cpu_state)->env;
+    hv_vcpuid_t hvf_fd = (hv_vcpuid_t)cpu_state->hvf_fd;
     struct vmx_segment seg;
 
-    wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_LIMIT, env->idt.limit);
-    wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_BASE, env->idt.base);
+    wvmcs(hvf_fd, VMCS_GUEST_IDTR_LIMIT, env->idt.limit);
+    wvmcs(hvf_fd, VMCS_GUEST_IDTR_BASE, env->idt.base);
 
-    wvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_LIMIT, env->gdt.limit);
-    wvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_BASE, env->gdt.base);
+    wvmcs(hvf_fd, VMCS_GUEST_GDTR_LIMIT, env->gdt.limit);
+    wvmcs(hvf_fd, VMCS_GUEST_GDTR_BASE, env->gdt.base);
 
-    /* wvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR2, env->cr[2]); */
-    wvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR3, env->cr[3]);
+    /* wvmcs(hvf_fd, VMCS_GUEST_CR2, env->cr[2]); */
+    wvmcs(hvf_fd, VMCS_GUEST_CR3, env->cr[3]);
     vmx_update_tpr(cpu_state);
-    wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IA32_EFER, env->efer);
+    wvmcs(hvf_fd, VMCS_GUEST_IA32_EFER, env->efer);
 
-    macvm_set_cr4(cpu_state->hvf_fd, env->cr[4]);
-    macvm_set_cr0(cpu_state->hvf_fd, env->cr[0]);
+    macvm_set_cr4(hvf_fd, env->cr[4]);
+    macvm_set_cr0(hvf_fd, env->cr[0]);
 
     hvf_set_segment(cpu_state, &seg, &env->segs[R_CS], false);
     vmx_write_segment_descriptor(cpu_state, &seg, R_CS);
@@ -129,31 +130,29 @@ void hvf_put_segments(CPUState *cpu_state)
     hvf_set_segment(cpu_state, &seg, &env->ldt, false);
     vmx_write_segment_descriptor(cpu_state, &seg, R_LDTR);
 
-    hv_vcpu_flush(cpu_state->hvf_fd);
+    hv_vcpu_flush(hvf_fd);
 }
 
 void hvf_put_msrs(CPUState *cpu_state)
 {
     CPUX86State *env = &X86_CPU(cpu_state)->env;
+    hv_vcpuid_t hvf_fd = (hv_vcpuid_t)cpu_state->hvf_fd;
 
-    hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_CS,
-                      env->sysenter_cs);
-    hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_ESP,
-                      env->sysenter_esp);
-    hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_EIP,
-                      env->sysenter_eip);
+    hv_vcpu_write_msr(hvf_fd, MSR_IA32_SYSENTER_CS, env->sysenter_cs);
+    hv_vcpu_write_msr(hvf_fd, MSR_IA32_SYSENTER_ESP, env->sysenter_esp);
+    hv_vcpu_write_msr(hvf_fd, MSR_IA32_SYSENTER_EIP, env->sysenter_eip);
 
-    hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_STAR, env->star);
+    hv_vcpu_write_msr(hvf_fd, MSR_STAR, env->star);
 
 #ifdef TARGET_X86_64
-    hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_CSTAR, env->cstar);
-    hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_KERNELGSBASE, env->kernelgsbase);
-    hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_FMASK, env->fmask);
-    hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_LSTAR, env->lstar);
+    hv_vcpu_write_msr(hvf_fd, MSR_CSTAR, env->cstar);
+    hv_vcpu_write_msr(hvf_fd, MSR_KERNELGSBASE, env->kernelgsbase);
+    hv_vcpu_write_msr(hvf_fd, MSR_FMASK, env->fmask);
+    hv_vcpu_write_msr(hvf_fd, MSR_LSTAR, env->lstar);
 #endif
 
-    hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_GSBASE, env->segs[R_GS].base);
-    hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_FSBASE, env->segs[R_FS].base);
+    hv_vcpu_write_msr(hvf_fd, MSR_GSBASE, env->segs[R_GS].base);
+    hv_vcpu_write_msr(hvf_fd, MSR_FSBASE, env->segs[R_FS].base);
 }
 
 
@@ -173,7 +172,7 @@ void hvf_get_xsave(CPUState *cpu_state)
 void hvf_get_segments(CPUState *cpu_state)
 {
     CPUX86State *env = &X86_CPU(cpu_state)->env;
-
+    hv_vcpuid_t hvf_fd = (hv_vcpuid_t)cpu_state->hvf_fd;
     struct vmx_segment seg;
 
     env->interrupt_injected = -1;
@@ -202,72 +201,74 @@ void hvf_get_segments(CPUState *cpu_state)
     vmx_read_segment_descriptor(cpu_state, &seg, R_LDTR);
     hvf_get_segment(&env->ldt, &seg);
 
-    env->idt.limit = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_LIMIT);
-    env->idt.base = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_BASE);
-    env->gdt.limit = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_LIMIT);
-    env->gdt.base = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_BASE);
+    env->idt.limit = rvmcs(hvf_fd, VMCS_GUEST_IDTR_LIMIT);
+    env->idt.base = rvmcs(hvf_fd, VMCS_GUEST_IDTR_BASE);
+    env->gdt.limit = rvmcs(hvf_fd, VMCS_GUEST_GDTR_LIMIT);
+    env->gdt.base = rvmcs(hvf_fd, VMCS_GUEST_GDTR_BASE);
 
-    env->cr[0] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR0);
+    env->cr[0] = rvmcs(hvf_fd, VMCS_GUEST_CR0);
     env->cr[2] = 0;
-    env->cr[3] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR3);
-    env->cr[4] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR4);
+    env->cr[3] = rvmcs(hvf_fd, VMCS_GUEST_CR3);
+    env->cr[4] = rvmcs(hvf_fd, VMCS_GUEST_CR4);
 
-    env->efer = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IA32_EFER);
+    env->efer = rvmcs(hvf_fd, VMCS_GUEST_IA32_EFER);
 }
 
 void hvf_get_msrs(CPUState *cpu_state)
 {
     CPUX86State *env = &X86_CPU(cpu_state)->env;
+    hv_vcpuid_t hvf_fd = (hv_vcpuid_t)cpu_state->hvf_fd;
     uint64_t tmp;
 
-    hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_CS, &tmp);
+    hv_vcpu_read_msr(hvf_fd, MSR_IA32_SYSENTER_CS, &tmp);
     env->sysenter_cs = tmp;
 
-    hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_ESP, &tmp);
+    hv_vcpu_read_msr(hvf_fd, MSR_IA32_SYSENTER_ESP, &tmp);
    env->sysenter_esp = tmp;
 
-    hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_EIP, &tmp);
+    hv_vcpu_read_msr(hvf_fd, MSR_IA32_SYSENTER_EIP, &tmp);
     env->sysenter_eip = tmp;
 
-    hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_STAR, &env->star);
+    hv_vcpu_read_msr(hvf_fd, MSR_STAR, &env->star);
 
 #ifdef TARGET_X86_64
-    hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_CSTAR, &env->cstar);
-    hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_KERNELGSBASE, &env->kernelgsbase);
-    hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_FMASK, &env->fmask);
-    hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_LSTAR, &env->lstar);
+    hv_vcpu_read_msr(hvf_fd, MSR_CSTAR, &env->cstar);
+    hv_vcpu_read_msr(hvf_fd, MSR_KERNELGSBASE, &env->kernelgsbase);
+    hv_vcpu_read_msr(hvf_fd, MSR_FMASK, &env->fmask);
+    hv_vcpu_read_msr(hvf_fd, MSR_LSTAR, &env->lstar);
 #endif
 
-    hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_APICBASE, &tmp);
+    hv_vcpu_read_msr(hvf_fd, MSR_IA32_APICBASE, &tmp);
 
-    env->tsc = rdtscp() + rvmcs(cpu_state->hvf_fd, VMCS_TSC_OFFSET);
+    env->tsc = rdtscp() + rvmcs(hvf_fd, VMCS_TSC_OFFSET);
 }
 
 int hvf_put_registers(CPUState *cpu_state)
 {
     X86CPU *x86cpu = X86_CPU(cpu_state);
     CPUX86State *env = &x86cpu->env;
+    hv_vcpuid_t hvf_fd = (hv_vcpuid_t)cpu_state->hvf_fd;
 
-    wreg(cpu_state->hvf_fd, HV_X86_RAX, env->regs[R_EAX]);
-    wreg(cpu_state->hvf_fd, HV_X86_RBX, env->regs[R_EBX]);
-    wreg(cpu_state->hvf_fd, HV_X86_RCX, env->regs[R_ECX]);
-    wreg(cpu_state->hvf_fd, HV_X86_RDX, env->regs[R_EDX]);
-    wreg(cpu_state->hvf_fd, HV_X86_RBP, env->regs[R_EBP]);
-    wreg(cpu_state->hvf_fd, HV_X86_RSP, env->regs[R_ESP]);
-    wreg(cpu_state->hvf_fd, HV_X86_RSI, env->regs[R_ESI]);
-    wreg(cpu_state->hvf_fd, HV_X86_RDI, env->regs[R_EDI]);
-    wreg(cpu_state->hvf_fd, HV_X86_R8, env->regs[8]);
-    wreg(cpu_state->hvf_fd, HV_X86_R9, env->regs[9]);
-    wreg(cpu_state->hvf_fd, HV_X86_R10, env->regs[10]);
-    wreg(cpu_state->hvf_fd, HV_X86_R11, env->regs[11]);
-    wreg(cpu_state->hvf_fd, HV_X86_R12, env->regs[12]);
-    wreg(cpu_state->hvf_fd, HV_X86_R13, env->regs[13]);
-    wreg(cpu_state->hvf_fd, HV_X86_R14, env->regs[14]);
-    wreg(cpu_state->hvf_fd, HV_X86_R15, env->regs[15]);
-    wreg(cpu_state->hvf_fd, HV_X86_RFLAGS, env->eflags);
-    wreg(cpu_state->hvf_fd, HV_X86_RIP, env->eip);
+    wreg(hvf_fd, HV_X86_RAX, env->regs[R_EAX]);
+    wreg(hvf_fd, HV_X86_RBX, env->regs[R_EBX]);
+    wreg(hvf_fd, HV_X86_RCX, env->regs[R_ECX]);
+    wreg(hvf_fd, HV_X86_RDX, env->regs[R_EDX]);
+    wreg(hvf_fd, HV_X86_RBP, env->regs[R_EBP]);
+    wreg(hvf_fd, HV_X86_RSP, env->regs[R_ESP]);
+    wreg(hvf_fd, HV_X86_RSI, env->regs[R_ESI]);
+    wreg(hvf_fd, HV_X86_RDI, env->regs[R_EDI]);
+    wreg(hvf_fd, HV_X86_R8, env->regs[8]);
+    wreg(hvf_fd, HV_X86_R9, env->regs[9]);
+    wreg(hvf_fd, HV_X86_R10, env->regs[10]);
+    wreg(hvf_fd, HV_X86_R11, env->regs[11]);
+    wreg(hvf_fd, HV_X86_R12, env->regs[12]);
+    wreg(hvf_fd, HV_X86_R13, env->regs[13]);
+    wreg(hvf_fd, HV_X86_R14, env->regs[14]);
+    wreg(hvf_fd, HV_X86_R15, env->regs[15]);
+    wreg(hvf_fd, HV_X86_RFLAGS, env->eflags);
+    wreg(hvf_fd, HV_X86_RIP, env->eip);
 
-    wreg(cpu_state->hvf_fd, HV_X86_XCR0, env->xcr0);
+    wreg(hvf_fd, HV_X86_XCR0, env->xcr0);
 
     hvf_put_xsave(cpu_state);
 
@@ -275,14 +276,14 @@ int hvf_put_registers(CPUState *cpu_state)
 
     hvf_put_msrs(cpu_state);
 
-    wreg(cpu_state->hvf_fd, HV_X86_DR0, env->dr[0]);
-    wreg(cpu_state->hvf_fd, HV_X86_DR1, env->dr[1]);
-    wreg(cpu_state->hvf_fd, HV_X86_DR2, env->dr[2]);
-    wreg(cpu_state->hvf_fd, HV_X86_DR3, env->dr[3]);
-    wreg(cpu_state->hvf_fd, HV_X86_DR4, env->dr[4]);
-    wreg(cpu_state->hvf_fd, HV_X86_DR5, env->dr[5]);
-    wreg(cpu_state->hvf_fd, HV_X86_DR6, env->dr[6]);
-    wreg(cpu_state->hvf_fd, HV_X86_DR7, env->dr[7]);
+    wreg(hvf_fd, HV_X86_DR0, env->dr[0]);
+    wreg(hvf_fd, HV_X86_DR1, env->dr[1]);
+    wreg(hvf_fd, HV_X86_DR2, env->dr[2]);
+    wreg(hvf_fd, HV_X86_DR3, env->dr[3]);
+    wreg(hvf_fd, HV_X86_DR4, env->dr[4]);
+    wreg(hvf_fd, HV_X86_DR5, env->dr[5]);
+    wreg(hvf_fd, HV_X86_DR6, env->dr[6]);
+    wreg(hvf_fd, HV_X86_DR7, env->dr[7]);
 
     return 0;
 }
@@ -291,41 +292,42 @@ int hvf_get_registers(CPUState *cpu_state)
 {
     X86CPU *x86cpu = X86_CPU(cpu_state);
     CPUX86State *env = &x86cpu->env;
+    hv_vcpuid_t hvf_fd = (hv_vcpuid_t)cpu_state->hvf_fd;
 
-    env->regs[R_EAX] = rreg(cpu_state->hvf_fd, HV_X86_RAX);
-    env->regs[R_EBX] = rreg(cpu_state->hvf_fd, HV_X86_RBX);
-    env->regs[R_ECX] = rreg(cpu_state->hvf_fd, HV_X86_RCX);
-    env->regs[R_EDX] = rreg(cpu_state->hvf_fd, HV_X86_RDX);
-    env->regs[R_EBP] = rreg(cpu_state->hvf_fd, HV_X86_RBP);
-    env->regs[R_ESP] = rreg(cpu_state->hvf_fd, HV_X86_RSP);
-    env->regs[R_ESI] = rreg(cpu_state->hvf_fd, HV_X86_RSI);
-    env->regs[R_EDI] = rreg(cpu_state->hvf_fd, HV_X86_RDI);
-    env->regs[8] = rreg(cpu_state->hvf_fd, HV_X86_R8);
-    env->regs[9] = rreg(cpu_state->hvf_fd, HV_X86_R9);
-    env->regs[10] = rreg(cpu_state->hvf_fd, HV_X86_R10);
-    env->regs[11] = rreg(cpu_state->hvf_fd, HV_X86_R11);
-    env->regs[12] = rreg(cpu_state->hvf_fd, HV_X86_R12);
-    env->regs[13] = rreg(cpu_state->hvf_fd, HV_X86_R13);
-    env->regs[14] = rreg(cpu_state->hvf_fd, HV_X86_R14);
-    env->regs[15] = rreg(cpu_state->hvf_fd, HV_X86_R15);
+    env->regs[R_EAX] = rreg(hvf_fd, HV_X86_RAX);
+    env->regs[R_EBX] = rreg(hvf_fd, HV_X86_RBX);
+    env->regs[R_ECX] = rreg(hvf_fd, HV_X86_RCX);
+    env->regs[R_EDX] = rreg(hvf_fd, HV_X86_RDX);
+    env->regs[R_EBP] = rreg(hvf_fd, HV_X86_RBP);
+    env->regs[R_ESP] = rreg(hvf_fd, HV_X86_RSP);
+    env->regs[R_ESI] = rreg(hvf_fd, HV_X86_RSI);
+    env->regs[R_EDI] = rreg(hvf_fd, HV_X86_RDI);
+    env->regs[8] = rreg(hvf_fd, HV_X86_R8);
+    env->regs[9] = rreg(hvf_fd, HV_X86_R9);
+    env->regs[10] = rreg(hvf_fd, HV_X86_R10);
+    env->regs[11] = rreg(hvf_fd, HV_X86_R11);
+    env->regs[12] = rreg(hvf_fd, HV_X86_R12);
+    env->regs[13] = rreg(hvf_fd, HV_X86_R13);
+    env->regs[14] = rreg(hvf_fd, HV_X86_R14);
+    env->regs[15] = rreg(hvf_fd, HV_X86_R15);
 
-    env->eflags = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS);
-    env->eip = rreg(cpu_state->hvf_fd, HV_X86_RIP);
+    env->eflags = rreg(hvf_fd, HV_X86_RFLAGS);
+    env->eip = rreg(hvf_fd, HV_X86_RIP);
 
     hvf_get_xsave(cpu_state);
-    env->xcr0 = rreg(cpu_state->hvf_fd, HV_X86_XCR0);
+    env->xcr0 = rreg(hvf_fd, HV_X86_XCR0);
 
     hvf_get_segments(cpu_state);
     hvf_get_msrs(cpu_state);
 
-    env->dr[0] = rreg(cpu_state->hvf_fd, HV_X86_DR0);
-    env->dr[1] = rreg(cpu_state->hvf_fd, HV_X86_DR1);
-    env->dr[2] = rreg(cpu_state->hvf_fd, HV_X86_DR2);
-    env->dr[3] = rreg(cpu_state->hvf_fd, HV_X86_DR3);
-    env->dr[4] = rreg(cpu_state->hvf_fd, HV_X86_DR4);
-    env->dr[5] = rreg(cpu_state->hvf_fd, HV_X86_DR5);
-    env->dr[6] = rreg(cpu_state->hvf_fd, HV_X86_DR6);
-    env->dr[7] = rreg(cpu_state->hvf_fd, HV_X86_DR7);
+    env->dr[0] = rreg(hvf_fd, HV_X86_DR0);
+    env->dr[1] = rreg(hvf_fd, HV_X86_DR1);
+    env->dr[2] = rreg(hvf_fd, HV_X86_DR2);
+    env->dr[3] = rreg(hvf_fd, HV_X86_DR3);
+    env->dr[4] = rreg(hvf_fd, HV_X86_DR4);
+    env->dr[5] = rreg(hvf_fd, HV_X86_DR5);
+    env->dr[6] = rreg(hvf_fd, HV_X86_DR6);
+    env->dr[7] = rreg(hvf_fd, HV_X86_DR7);
 
     x86_update_hflags(env);
     return 0;
@@ -351,6 +353,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
 {
     X86CPU *x86cpu = X86_CPU(cpu_state);
     CPUX86State *env = &x86cpu->env;
+    hv_vcpuid_t hvf_fd = (hv_vcpuid_t)cpu_state->hvf_fd;
 
     uint8_t vector;
     uint64_t intr_type;
@@ -379,7 +382,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
     uint64_t info = 0;
     if (have_event) {
         info = vector | intr_type | VMCS_INTR_VALID;
-        uint64_t reason = rvmcs(cpu_state->hvf_fd, VMCS_EXIT_REASON);
+        uint64_t reason = rvmcs(hvf_fd, VMCS_EXIT_REASON);
         if (env->nmi_injected && reason != EXIT_REASON_TASK_SWITCH) {
             vmx_clear_nmi_blocking(cpu_state);
         }
@@ -388,17 +391,17 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
             info &= ~(1 << 12); /* clear undefined bit */
             if (intr_type == VMCS_INTR_T_SWINTR ||
                 intr_type == VMCS_INTR_T_SWEXCEPTION) {
-                wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INST_LENGTH, env->ins_len);
+                wvmcs(hvf_fd, VMCS_ENTRY_INST_LENGTH, env->ins_len);
             }
 
             if (env->has_error_code) {
-                wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_EXCEPTION_ERROR,
+                wvmcs(hvf_fd, VMCS_ENTRY_EXCEPTION_ERROR,
                       env->error_code);
                 /* Indicate that VMCS_ENTRY_EXCEPTION_ERROR is valid */
                 info |= VMCS_INTR_DEL_ERRCODE;
             }
             /*printf("reinject  %lx err %d\n", info, err);*/
-            wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, info);
+            wvmcs(hvf_fd, VMCS_ENTRY_INTR_INFO, info);
         };
     }
 
@@ -406,7 +409,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
         if (!(env->hflags2 & HF2_NMI_MASK) && !(info & VMCS_INTR_VALID)) {
             cpu_state->interrupt_request &= ~CPU_INTERRUPT_NMI;
             info = VMCS_INTR_VALID | VMCS_INTR_T_NMI | EXCP02_NMI;
-            wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, info);
+            wvmcs(hvf_fd, VMCS_ENTRY_INTR_INFO, info);
         } else {
             vmx_set_nmi_window_exiting(cpu_state);
         }
@@ -418,8 +421,8 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
         int line = cpu_get_pic_interrupt(&x86cpu->env);
         cpu_state->interrupt_request &= ~CPU_INTERRUPT_HARD;
         if (line >= 0) {
-            wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, line |
-                  VMCS_INTR_VALID | VMCS_INTR_T_HWINTR);
+            wvmcs(hvf_fd, VMCS_ENTRY_INTR_INFO,
+                  line | VMCS_INTR_VALID | VMCS_INTR_T_HWINTR);
         }
     }
     if (cpu_state->interrupt_request & CPU_INTERRUPT_HARD) {
-- 
2.26.2