From: Philippe Mathieu-Daudé <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Eduardo Habkost, Cornelia Huck, Richard Henderson, Sunil Muthuswamy,
    Marcelo Tosatti, David Gibson, Marcel Apfelbaum, kvm@vger.kernel.org,
    Wenchao Wang, Thomas Huth, Cameron Esfahani, Paolo Bonzini,
    David Hildenbrand, Roman Bolshakov, Peter Maydell, Greg Kurz,
    qemu-arm@nongnu.org, Halil Pasic, Colin Xu, Claudio Fontana,
    qemu-ppc@nongnu.org, Christian Borntraeger, qemu-s390x@nongnu.org,
    haxm-team@intel.com, Philippe Mathieu-Daudé
Subject: [RFC PATCH 19/19] accel/hvf: Move the 'hvf_fd' field to AccelvCPUState
Date: Wed, 3 Mar 2021 19:22:19 +0100
Message-Id: <20210303182219.1631042-20-philmd@redhat.com>
In-Reply-To: <20210303182219.1631042-1-philmd@redhat.com>
References: <20210303182219.1631042-1-philmd@redhat.com>

Move the 'hvf_fd' field from CPUState to AccelvCPUState, and declare it
with its correct type: hv_vcpuid_t.
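
For illustration, a minimal before/after sketch of how a call site
changes (both lines are taken from the macvm_set_rip() hunk below; the
surrounding context is abridged here):

    /* before: the fd lives directly on CPUState, stored as an int */
    wreg(cpu->hvf_fd, HV_X86_RIP, rip);

    /* after: the fd lives on the accelerator-specific vCPU state and
     * keeps its native hv_vcpuid_t type, so no cast is needed */
    wreg(cpu->accel_vcpu->hvf_fd, HV_X86_RIP, rip);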
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 include/hw/core/cpu.h       |  1 -
 target/i386/hvf/hvf-i386.h  |  1 +
 target/i386/hvf/vmx.h       | 28 +++++++++--------
 target/i386/hvf/hvf.c       | 23 +++++++-------
 target/i386/hvf/x86.c       | 28 ++++++++---------
 target/i386/hvf/x86_descr.c | 17 +++++-----
 target/i386/hvf/x86_emu.c   | 62 ++++++++++++++++++-------------------
 target/i386/hvf/x86_mmu.c   |  4 +--
 target/i386/hvf/x86_task.c  | 14 +++++----
 target/i386/hvf/x86hvf.c    | 32 ++++++++++---------
 10 files changed, 110 insertions(+), 100 deletions(-)

diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 3268f1393f1..69a456415c0 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -415,7 +415,6 @@ struct CPUState {
 
     /* Accelerator-specific fields. */
     struct AccelvCPUState *accel_vcpu;
-    int hvf_fd;
     /* shared by kvm, hax and hvf */
     bool vcpu_dirty;
 
diff --git a/target/i386/hvf/hvf-i386.h b/target/i386/hvf/hvf-i386.h
index 1f12eb647a0..e17f9f42c0e 100644
--- a/target/i386/hvf/hvf-i386.h
+++ b/target/i386/hvf/hvf-i386.h
@@ -52,6 +52,7 @@ struct HVFState {
 extern HVFState *hvf_state;
 
 struct AccelvCPUState {
+    hv_vcpuid_t hvf_fd;
 };
 
 void hvf_set_phys_mem(MemoryRegionSection *, bool);
diff --git a/target/i386/hvf/vmx.h b/target/i386/hvf/vmx.h
index 24c4cdf0be0..bed94856268 100644
--- a/target/i386/hvf/vmx.h
+++ b/target/i386/hvf/vmx.h
@@ -179,15 +179,15 @@ static inline void macvm_set_rip(CPUState *cpu, uint64_t rip)
     uint64_t val;
 
     /* BUG, should take considering overlap.. */
-    wreg(cpu->hvf_fd, HV_X86_RIP, rip);
+    wreg(cpu->accel_vcpu->hvf_fd, HV_X86_RIP, rip);
     env->eip = rip;
 
     /* after moving forward in rip, we need to clean INTERRUPTABILITY */
-    val = rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY);
+    val = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY);
     if (val & (VMCS_INTERRUPTIBILITY_STI_BLOCKING |
                VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING)) {
         env->hflags &= ~HF_INHIBIT_IRQ_MASK;
-        wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY,
+        wvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY,
               val & ~(VMCS_INTERRUPTIBILITY_STI_BLOCKING |
                       VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING));
     }
@@ -199,9 +199,10 @@ static inline void vmx_clear_nmi_blocking(CPUState *cpu)
     CPUX86State *env = &x86_cpu->env;
 
     env->hflags2 &= ~HF2_NMI_MASK;
-    uint32_t gi = (uint32_t) rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY);
+    uint32_t gi = (uint32_t) rvmcs(cpu->accel_vcpu->hvf_fd,
+                                   VMCS_GUEST_INTERRUPTIBILITY);
     gi &= ~VMCS_INTERRUPTIBILITY_NMI_BLOCKING;
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
+    wvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
 }
 
 static inline void vmx_set_nmi_blocking(CPUState *cpu)
@@ -210,17 +211,18 @@ static inline void vmx_set_nmi_blocking(CPUState *cpu)
     CPUX86State *env = &x86_cpu->env;
 
     env->hflags2 |= HF2_NMI_MASK;
-    uint32_t gi = (uint32_t)rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY);
+    uint32_t gi = (uint32_t)rvmcs(cpu->accel_vcpu->hvf_fd,
+                                  VMCS_GUEST_INTERRUPTIBILITY);
     gi |= VMCS_INTERRUPTIBILITY_NMI_BLOCKING;
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
+    wvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
 }
 
 static inline void vmx_set_nmi_window_exiting(CPUState *cpu)
 {
     uint64_t val;
-    val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
-    wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val |
-          VMCS_PRI_PROC_BASED_CTLS_NMI_WINDOW_EXITING);
+    val = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
+    wvmcs(cpu->accel_vcpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS,
+          val | VMCS_PRI_PROC_BASED_CTLS_NMI_WINDOW_EXITING);
 
 }
 
@@ -228,9 +230,9 @@ static inline void vmx_clear_nmi_window_exiting(CPUState *cpu)
 {
 
     uint64_t val;
-    val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
-    wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val &
-          ~VMCS_PRI_PROC_BASED_CTLS_NMI_WINDOW_EXITING);
+    val = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
+    wvmcs(cpu->accel_vcpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS,
+          val & ~VMCS_PRI_PROC_BASED_CTLS_NMI_WINDOW_EXITING);
 }
 
 #endif
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index 342659f1e15..022975d093e 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -245,19 +245,19 @@ void vmx_update_tpr(CPUState *cpu)
     int tpr = cpu_get_apic_tpr(x86_cpu->apic_state) << 4;
     int irr = apic_get_highest_priority_irr(x86_cpu->apic_state);
 
-    wreg(cpu->hvf_fd, HV_X86_TPR, tpr);
+    wreg(cpu->accel_vcpu->hvf_fd, HV_X86_TPR, tpr);
     if (irr == -1) {
-        wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, 0);
+        wvmcs(cpu->accel_vcpu->hvf_fd, VMCS_TPR_THRESHOLD, 0);
     } else {
-        wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, (irr > tpr) ? tpr >> 4 :
-              irr >> 4);
+        wvmcs(cpu->accel_vcpu->hvf_fd, VMCS_TPR_THRESHOLD,
+              (irr > tpr) ? tpr >> 4 : irr >> 4);
     }
 }
 
 static void update_apic_tpr(CPUState *cpu)
 {
     X86CPU *x86_cpu = X86_CPU(cpu);
-    int tpr = rreg(cpu->hvf_fd, HV_X86_TPR) >> 4;
+    int tpr = rreg(cpu->accel_vcpu->hvf_fd, HV_X86_TPR) >> 4;
     cpu_set_apic_tpr(x86_cpu->apic_state, tpr);
 }
 
@@ -448,7 +448,7 @@ void hvf_vcpu_destroy(CPUState *cpu)
     X86CPU *x86_cpu = X86_CPU(cpu);
     CPUX86State *env = &x86_cpu->env;
 
-    hv_return_t ret = hv_vcpu_destroy((hv_vcpuid_t)cpu->hvf_fd);
+    hv_return_t ret = hv_vcpu_destroy(cpu->accel_vcpu->hvf_fd);
     g_free(env->hvf_mmio_buf);
     assert_hvf_ok(ret);
     g_free(cpu->accel_vcpu);
@@ -537,7 +537,7 @@ int hvf_init_vcpu(CPUState *cpu)
     r = hv_vcpu_create(&hvf_fd, HV_VCPU_DEFAULT);
     assert_hvf_ok(r);
     cpu->accel_vcpu = g_new(struct AccelvCPUState, 1);
-    cpu->hvf_fd = (int)hvf_fd;
+    cpu->accel_vcpu->hvf_fd = hvf_fd;
     cpu->vcpu_dirty = true;
 
     if (hv_vmx_read_capability(HV_VMX_CAP_PINBASED,
@@ -635,16 +635,17 @@ static void hvf_store_events(CPUState *cpu, uint32_t ins_len, uint64_t idtvec_in
         }
         if (idtvec_info & VMCS_IDT_VEC_ERRCODE_VALID) {
             env->has_error_code = true;
-            env->error_code = rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_ERROR);
+            env->error_code = rvmcs(cpu->accel_vcpu->hvf_fd,
+                                    VMCS_IDT_VECTORING_ERROR);
         }
     }
-    if ((rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY) &
+    if ((rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY) &
         VMCS_INTERRUPTIBILITY_NMI_BLOCKING)) {
         env->hflags2 |= HF2_NMI_MASK;
     } else {
         env->hflags2 &= ~HF2_NMI_MASK;
     }
-    if (rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY) &
+    if (rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY) &
          (VMCS_INTERRUPTIBILITY_STI_BLOCKING |
          VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING)) {
         env->hflags |= HF_INHIBIT_IRQ_MASK;
@@ -699,7 +700,7 @@ int hvf_vcpu_exec(CPUState *cpu)
 {
     X86CPU *x86_cpu = X86_CPU(cpu);
     CPUX86State *env = &x86_cpu->env;
-    hv_vcpuid_t hvf_fd = (hv_vcpuid_t)cpu_state->hvf_fd;
+    hv_vcpuid_t hvf_fd = cpu_state->accel_vcpu->hvf_fd;
     int ret = 0;
     uint64_t rip = 0;
 
diff --git a/target/i386/hvf/x86.c b/target/i386/hvf/x86.c
index cd045183a81..23fbdb91eb0 100644
--- a/target/i386/hvf/x86.c
+++ b/target/i386/hvf/x86.c
@@ -62,11 +62,11 @@ bool x86_read_segment_descriptor(struct CPUState *cpu,
     }
 
     if (GDT_SEL == sel.ti) {
-        base = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_BASE);
-        limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_LIMIT);
+        base = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_GDTR_BASE);
+        limit = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_GDTR_LIMIT);
     } else {
-        base = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_BASE);
-        limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_LIMIT);
+        base = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_LDTR_BASE);
+        limit = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_LDTR_LIMIT);
     }
 
     if (sel.index * 8 >= limit) {
@@ -85,11 +85,11 @@ bool x86_write_segment_descriptor(struct CPUState *cpu,
     uint32_t limit;
 
     if (GDT_SEL == sel.ti) {
-        base = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_BASE);
-        limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_LIMIT);
+        base = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_GDTR_BASE);
+        limit = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_GDTR_LIMIT);
     } else {
-        base = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_BASE);
-        limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_LIMIT);
+        base = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_LDTR_BASE);
+        limit = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_LDTR_LIMIT);
     }
 
     if (sel.index * 8 >= limit) {
@@ -103,8 +103,8 @@ bool x86_write_segment_descriptor(struct CPUState *cpu,
 bool x86_read_call_gate(struct CPUState *cpu, struct x86_call_gate *idt_desc,
                         int gate)
 {
-    target_ulong base = rvmcs(cpu->hvf_fd, VMCS_GUEST_IDTR_BASE);
-    uint32_t limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_IDTR_LIMIT);
+    target_ulong base = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_IDTR_BASE);
+    uint32_t limit = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_IDTR_LIMIT);
 
     memset(idt_desc, 0, sizeof(*idt_desc));
     if (gate * 8 >= limit) {
@@ -118,7 +118,7 @@ bool x86_read_call_gate(struct CPUState *cpu, struct x86_call_gate *idt_desc,
 
 bool x86_is_protected(struct CPUState *cpu)
 {
-    uint64_t cr0 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0);
+    uint64_t cr0 = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_CR0);
     return cr0 & CR0_PE;
 }
 
@@ -136,7 +136,7 @@ bool x86_is_v8086(struct CPUState *cpu)
 
 bool x86_is_long_mode(struct CPUState *cpu)
 {
-    return rvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER) & MSR_EFER_LMA;
+    return rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_IA32_EFER) & MSR_EFER_LMA;
 }
 
 bool x86_is_long64_mode(struct CPUState *cpu)
@@ -149,13 +149,13 @@ bool x86_is_long64_mode(struct CPUState *cpu)
 
 bool x86_is_paging_mode(struct CPUState *cpu)
 {
-    uint64_t cr0 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0);
+    uint64_t cr0 = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_CR0);
     return cr0 & CR0_PG;
 }
 
 bool x86_is_pae_enabled(struct CPUState *cpu)
 {
-    uint64_t cr4 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR4);
+    uint64_t cr4 = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_CR4);
     return cr4 & CR4_PAE;
 }
 
diff --git a/target/i386/hvf/x86_descr.c b/target/i386/hvf/x86_descr.c
index 1c6220baa0d..4f716cc5942 100644
--- a/target/i386/hvf/x86_descr.c
+++ b/target/i386/hvf/x86_descr.c
@@ -48,34 +48,37 @@ static const struct vmx_segment_field {
 
 uint32_t vmx_read_segment_limit(CPUState *cpu, X86Seg seg)
 {
-    return (uint32_t)rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].limit);
+    return (uint32_t)rvmcs(cpu->accel_vcpu->hvf_fd,
+                           vmx_segment_fields[seg].limit);
 }
 
 uint32_t vmx_read_segment_ar(CPUState *cpu, X86Seg seg)
 {
-    return (uint32_t)rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].ar_bytes);
+    return (uint32_t)rvmcs(cpu->accel_vcpu->hvf_fd,
+                           vmx_segment_fields[seg].ar_bytes);
 }
 
 uint64_t vmx_read_segment_base(CPUState *cpu, X86Seg seg)
 {
-    return rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].base);
+    return rvmcs(cpu->accel_vcpu->hvf_fd, vmx_segment_fields[seg].base);
 }
 
 x68_segment_selector vmx_read_segment_selector(CPUState *cpu, X86Seg seg)
 {
     x68_segment_selector sel;
-    sel.sel = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].selector);
+    sel.sel = rvmcs(cpu->accel_vcpu->hvf_fd, vmx_segment_fields[seg].selector);
     return sel;
 }
 
 void vmx_write_segment_selector(struct CPUState *cpu, x68_segment_selector selector, X86Seg seg)
 {
-    wvmcs(cpu->hvf_fd, vmx_segment_fields[seg].selector, selector.sel);
+    wvmcs(cpu->accel_vcpu->hvf_fd, vmx_segment_fields[seg].selector,
+          selector.sel);
 }
 
 void vmx_read_segment_descriptor(struct CPUState *cpu, struct vmx_segment *desc, X86Seg seg)
 {
-    hv_vcpuid_t hvf_fd = (hv_vcpuid_t)cpu_state->hvf_fd;
+    hv_vcpuid_t hvf_fd = cpu_state->accel_vcpu->hvf_fd;
 
     desc->sel = rvmcs(hvf_fd, vmx_segment_fields[seg].selector);
     desc->base = rvmcs(hvf_fd, vmx_segment_fields[seg].base);
@@ -86,7 +89,7 @@ void vmx_read_segment_descriptor(struct CPUState *cpu, struct vmx_segment *desc,
 void vmx_write_segment_descriptor(CPUState *cpu, struct vmx_segment *desc, X86Seg seg)
 {
     const struct vmx_segment_field *sf = &vmx_segment_fields[seg];
-    hv_vcpuid_t hvf_fd = (hv_vcpuid_t)cpu_state->hvf_fd;
+    hv_vcpuid_t hvf_fd = cpu_state->accel_vcpu->hvf_fd;
 
     wvmcs(hvf_fd, sf->base, desc->base);
     wvmcs(hvf_fd, sf->limit, desc->limit);
diff --git a/target/i386/hvf/x86_emu.c b/target/i386/hvf/x86_emu.c
index e52c39ddb1f..dd7dee6f880 100644
--- a/target/i386/hvf/x86_emu.c
+++ b/target/i386/hvf/x86_emu.c
@@ -674,7 +674,7 @@ void simulate_rdmsr(struct CPUState *cpu)
 
     switch (msr) {
     case MSR_IA32_TSC:
-        val = rdtscp() + rvmcs(cpu->hvf_fd, VMCS_TSC_OFFSET);
+        val = rdtscp() + rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_TSC_OFFSET);
         break;
     case MSR_IA32_APICBASE:
         val = cpu_get_apic_base(X86_CPU(cpu)->apic_state);
@@ -683,16 +683,16 @@ void simulate_rdmsr(struct CPUState *cpu)
         val = x86_cpu->ucode_rev;
         break;
     case MSR_EFER:
-        val = rvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER);
+        val = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_IA32_EFER);
         break;
     case MSR_FSBASE:
-        val = rvmcs(cpu->hvf_fd, VMCS_GUEST_FS_BASE);
+        val = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_FS_BASE);
         break;
     case MSR_GSBASE:
-        val = rvmcs(cpu->hvf_fd, VMCS_GUEST_GS_BASE);
+        val = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_GS_BASE);
         break;
     case MSR_KERNELGSBASE:
-        val = rvmcs(cpu->hvf_fd, VMCS_HOST_FS_BASE);
+        val = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_HOST_FS_BASE);
         break;
     case MSR_STAR:
         abort();
@@ -780,13 +780,13 @@ void simulate_wrmsr(struct CPUState *cpu)
         cpu_set_apic_base(X86_CPU(cpu)->apic_state, data);
         break;
     case MSR_FSBASE:
-        wvmcs(cpu->hvf_fd, VMCS_GUEST_FS_BASE, data);
+        wvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_FS_BASE, data);
         break;
     case MSR_GSBASE:
-        wvmcs(cpu->hvf_fd, VMCS_GUEST_GS_BASE, data);
+        wvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_GS_BASE, data);
         break;
     case MSR_KERNELGSBASE:
-        wvmcs(cpu->hvf_fd, VMCS_HOST_FS_BASE, data);
+        wvmcs(cpu->accel_vcpu->hvf_fd, VMCS_HOST_FS_BASE, data);
         break;
     case MSR_STAR:
         abort();
@@ -799,9 +799,9 @@ void simulate_wrmsr(struct CPUState *cpu)
         break;
     case MSR_EFER:
         /*printf("new efer %llx\n", EFER(cpu));*/
-        wvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER, data);
+        wvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_IA32_EFER, data);
         if (data & MSR_EFER_NXE) {
-            hv_vcpu_invalidate_tlb(cpu->hvf_fd);
+            hv_vcpu_invalidate_tlb(cpu->accel_vcpu->hvf_fd);
         }
         break;
     case MSR_MTRRphysBase(0):
@@ -1425,21 +1425,21 @@ void load_regs(struct CPUState *cpu)
     CPUX86State *env = &x86_cpu->env;
 
     int i = 0;
-    RRX(env, R_EAX) = rreg(cpu->hvf_fd, HV_X86_RAX);
-    RRX(env, R_EBX) = rreg(cpu->hvf_fd, HV_X86_RBX);
-    RRX(env, R_ECX) = rreg(cpu->hvf_fd, HV_X86_RCX);
-    RRX(env, R_EDX) = rreg(cpu->hvf_fd, HV_X86_RDX);
-    RRX(env, R_ESI) = rreg(cpu->hvf_fd, HV_X86_RSI);
-    RRX(env, R_EDI) = rreg(cpu->hvf_fd, HV_X86_RDI);
-    RRX(env, R_ESP) = rreg(cpu->hvf_fd, HV_X86_RSP);
-    RRX(env, R_EBP) = rreg(cpu->hvf_fd, HV_X86_RBP);
+    RRX(env, R_EAX) = rreg(cpu->accel_vcpu->hvf_fd, HV_X86_RAX);
+    RRX(env, R_EBX) = rreg(cpu->accel_vcpu->hvf_fd, HV_X86_RBX);
+    RRX(env, R_ECX) = rreg(cpu->accel_vcpu->hvf_fd, HV_X86_RCX);
+    RRX(env, R_EDX) = rreg(cpu->accel_vcpu->hvf_fd, HV_X86_RDX);
+    RRX(env, R_ESI) = rreg(cpu->accel_vcpu->hvf_fd, HV_X86_RSI);
+    RRX(env, R_EDI) = rreg(cpu->accel_vcpu->hvf_fd, HV_X86_RDI);
+    RRX(env, R_ESP) = rreg(cpu->accel_vcpu->hvf_fd, HV_X86_RSP);
+    RRX(env, R_EBP) = rreg(cpu->accel_vcpu->hvf_fd, HV_X86_RBP);
     for (i = 8; i < 16; i++) {
-        RRX(env, i) = rreg(cpu->hvf_fd, HV_X86_RAX + i);
+        RRX(env, i) = rreg(cpu->accel_vcpu->hvf_fd, HV_X86_RAX + i);
     }
 
-    env->eflags = rreg(cpu->hvf_fd, HV_X86_RFLAGS);
+    env->eflags = rreg(cpu->accel_vcpu->hvf_fd, HV_X86_RFLAGS);
     rflags_to_lflags(env);
-    env->eip = rreg(cpu->hvf_fd, HV_X86_RIP);
+    env->eip = rreg(cpu->accel_vcpu->hvf_fd, HV_X86_RIP);
 }
 
 void store_regs(struct CPUState *cpu)
@@ -1448,20 +1448,20 @@ void store_regs(struct CPUState *cpu)
     CPUX86State *env = &x86_cpu->env;
 
     int i = 0;
-    wreg(cpu->hvf_fd, HV_X86_RAX, RAX(env));
-    wreg(cpu->hvf_fd, HV_X86_RBX, RBX(env));
-    wreg(cpu->hvf_fd, HV_X86_RCX, RCX(env));
-    wreg(cpu->hvf_fd, HV_X86_RDX, RDX(env));
-    wreg(cpu->hvf_fd, HV_X86_RSI, RSI(env));
-    wreg(cpu->hvf_fd, HV_X86_RDI, RDI(env));
-    wreg(cpu->hvf_fd, HV_X86_RBP, RBP(env));
-    wreg(cpu->hvf_fd, HV_X86_RSP, RSP(env));
+    wreg(cpu->accel_vcpu->hvf_fd, HV_X86_RAX, RAX(env));
+    wreg(cpu->accel_vcpu->hvf_fd, HV_X86_RBX, RBX(env));
+    wreg(cpu->accel_vcpu->hvf_fd, HV_X86_RCX, RCX(env));
+    wreg(cpu->accel_vcpu->hvf_fd, HV_X86_RDX, RDX(env));
+    wreg(cpu->accel_vcpu->hvf_fd, HV_X86_RSI, RSI(env));
+    wreg(cpu->accel_vcpu->hvf_fd, HV_X86_RDI, RDI(env));
+    wreg(cpu->accel_vcpu->hvf_fd, HV_X86_RBP, RBP(env));
+    wreg(cpu->accel_vcpu->hvf_fd, HV_X86_RSP, RSP(env));
     for (i = 8; i < 16; i++) {
-        wreg(cpu->hvf_fd, HV_X86_RAX + i, RRX(env, i));
+        wreg(cpu->accel_vcpu->hvf_fd, HV_X86_RAX + i, RRX(env, i));
     }
 
     lflags_to_rflags(env);
-    wreg(cpu->hvf_fd, HV_X86_RFLAGS, env->eflags);
+    wreg(cpu->accel_vcpu->hvf_fd, HV_X86_RFLAGS, env->eflags);
     macvm_set_rip(cpu, env->eip);
 }
 
diff --git a/target/i386/hvf/x86_mmu.c b/target/i386/hvf/x86_mmu.c
index 882a6237eea..deb3608f2be 100644
--- a/target/i386/hvf/x86_mmu.c
+++ b/target/i386/hvf/x86_mmu.c
@@ -128,7 +128,7 @@ static bool test_pt_entry(struct CPUState *cpu, struct gpt_translation *pt,
         pt->err_code |= MMU_PAGE_PT;
     }
 
-    uint32_t cr0 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0);
+    uint32_t cr0 = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_CR0);
     /* check protection */
     if (cr0 & CR0_WP) {
         if (pt->write_access && !pte_write_access(pte)) {
@@ -173,7 +173,7 @@ static bool walk_gpt(struct CPUState *cpu, target_ulong addr, int err_code,
 {
     int top_level, level;
     bool is_large = false;
-    target_ulong cr3 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR3);
+    target_ulong cr3 = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_CR3);
     uint64_t page_mask = pae ? PAE_PTE_PAGE_MASK : LEGACY_PTE_PAGE_MASK;
 
     memset(pt, 0, sizeof(*pt));
diff --git a/target/i386/hvf/x86_task.c b/target/i386/hvf/x86_task.c
index d66dfd76690..baa4c5ca87e 100644
--- a/target/i386/hvf/x86_task.c
+++ b/target/i386/hvf/x86_task.c
@@ -62,7 +62,7 @@ static void load_state_from_tss32(CPUState *cpu, struct x86_tss_segment32 *tss)
     X86CPU *x86_cpu = X86_CPU(cpu);
     CPUX86State *env = &x86_cpu->env;
 
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_CR3, tss->cr3);
+    wvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_CR3, tss->cr3);
 
     env->eip = tss->eip;
     env->eflags = tss->eflags | 2;
@@ -111,11 +111,12 @@ static int task_switch_32(CPUState *cpu, x68_segment_selector tss_sel, x68_segme
 
 void vmx_handle_task_switch(CPUState *cpu, x68_segment_selector tss_sel, int reason, bool gate_valid, uint8_t gate, uint64_t gate_type)
 {
-    uint64_t rip = rreg(cpu->hvf_fd, HV_X86_RIP);
+    uint64_t rip = rreg(cpu->accel_vcpu->hvf_fd, HV_X86_RIP);
     if (!gate_valid || (gate_type != VMCS_INTR_T_HWEXCEPTION &&
                         gate_type != VMCS_INTR_T_HWINTR &&
                         gate_type != VMCS_INTR_T_NMI)) {
-        int ins_len = rvmcs(cpu->hvf_fd, VMCS_EXIT_INSTRUCTION_LENGTH);
+        int ins_len = rvmcs(cpu->accel_vcpu->hvf_fd,
+                            VMCS_EXIT_INSTRUCTION_LENGTH);
         macvm_set_rip(cpu, rip + ins_len);
         return;
     }
@@ -174,12 +175,13 @@ void vmx_handle_task_switch(CPUState *cpu, x68_segment_selector tss_sel, int rea
         //ret = task_switch_16(cpu, tss_sel, old_tss_sel, old_tss_base, &next_tss_desc);
         VM_PANIC("task_switch_16");
 
-    macvm_set_cr0(cpu->hvf_fd, rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0) | CR0_TS);
+    macvm_set_cr0(cpu->accel_vcpu->hvf_fd,
+                  rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_GUEST_CR0) | CR0_TS);
     x86_segment_descriptor_to_vmx(cpu, tss_sel, &next_tss_desc, &vmx_seg);
     vmx_write_segment_descriptor(cpu, &vmx_seg, R_TR);
 
     store_regs(cpu);
 
-    hv_vcpu_invalidate_tlb(cpu->hvf_fd);
-    hv_vcpu_flush(cpu->hvf_fd);
+    hv_vcpu_invalidate_tlb(cpu->accel_vcpu->hvf_fd);
+    hv_vcpu_flush(cpu->accel_vcpu->hvf_fd);
 }
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index 2f291f2ad53..c68400b9729 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -81,7 +81,8 @@ void hvf_put_xsave(CPUState *cpu_state)
 
     x86_cpu_xsave_all_areas(X86_CPU(cpu_state), xsave);
 
-    if (hv_vcpu_write_fpstate(cpu_state->hvf_fd, (void*)xsave, 4096)) {
+    if (hv_vcpu_write_fpstate(cpu_state->accel_vcpu->hvf_fd,
+                              (void *)xsave, 4096)) {
         abort();
     }
 }
@@ -89,7 +90,7 @@ void hvf_put_xsave(CPUState *cpu_state)
 void hvf_put_segments(CPUState *cpu_state)
 {
     CPUX86State *env = &X86_CPU(cpu_state)->env;
-    hv_vcpuid_t hvf_fd = (hv_vcpuid_t)cpu_state->hvf_fd;
+    hv_vcpuid_t hvf_fd = cpu_state->accel_vcpu->hvf_fd;
     struct vmx_segment seg;
 
     wvmcs(hvf_fd, VMCS_GUEST_IDTR_LIMIT, env->idt.limit);
@@ -136,7 +137,7 @@ void hvf_put_segments(CPUState *cpu_state)
 void hvf_put_msrs(CPUState *cpu_state)
 {
     CPUX86State *env = &X86_CPU(cpu_state)->env;
-    hv_vcpuid_t hvf_fd = (hv_vcpuid_t)cpu_state->hvf_fd;
+    hv_vcpuid_t hvf_fd = cpu_state->accel_vcpu->hvf_fd;
 
     hv_vcpu_write_msr(hvf_fd, MSR_IA32_SYSENTER_CS, env->sysenter_cs);
     hv_vcpu_write_msr(hvf_fd, MSR_IA32_SYSENTER_ESP, env->sysenter_esp);
@@ -162,7 +163,8 @@ void hvf_get_xsave(CPUState *cpu_state)
 
     xsave = X86_CPU(cpu_state)->env.xsave_buf;
 
-    if (hv_vcpu_read_fpstate(cpu_state->hvf_fd, (void*)xsave, 4096)) {
+    if (hv_vcpu_read_fpstate(cpu_state->accel_vcpu->hvf_fd,
+                             (void *)xsave, 4096)) {
         abort();
     }
 
@@ -172,7 +174,7 @@ void
 hvf_get_segments(CPUState *cpu_state)
 {
     CPUX86State *env = &X86_CPU(cpu_state)->env;
-    hv_vcpuid_t hvf_fd = (hv_vcpuid_t)cpu_state->hvf_fd;
+    hv_vcpuid_t hvf_fd = cpu_state->accel_vcpu->hvf_fd;
     struct vmx_segment seg;
 
     env->interrupt_injected = -1;
@@ -217,7 +219,7 @@ void hvf_get_segments(CPUState *cpu_state)
 void hvf_get_msrs(CPUState *cpu_state)
 {
     CPUX86State *env = &X86_CPU(cpu_state)->env;
-    hv_vcpuid_t hvf_fd = (hv_vcpuid_t)cpu_state->hvf_fd;
+    hv_vcpuid_t hvf_fd = cpu_state->accel_vcpu->hvf_fd;
     uint64_t tmp;
 
     hv_vcpu_read_msr(hvf_fd, MSR_IA32_SYSENTER_CS, &tmp);
@@ -247,7 +249,7 @@ int hvf_put_registers(CPUState *cpu_state)
 {
     X86CPU *x86cpu = X86_CPU(cpu_state);
     CPUX86State *env = &x86cpu->env;
-    hv_vcpuid_t hvf_fd = (hv_vcpuid_t)cpu_state->hvf_fd;
+    hv_vcpuid_t hvf_fd = cpu_state->accel_vcpu->hvf_fd;
 
     wreg(hvf_fd, HV_X86_RAX, env->regs[R_EAX]);
     wreg(hvf_fd, HV_X86_RBX, env->regs[R_EBX]);
@@ -292,7 +294,7 @@ int hvf_get_registers(CPUState *cpu_state)
 {
     X86CPU *x86cpu = X86_CPU(cpu_state);
     CPUX86State *env = &x86cpu->env;
-    hv_vcpuid_t hvf_fd = (hv_vcpuid_t)cpu_state->hvf_fd;
+    hv_vcpuid_t hvf_fd = cpu_state->accel_vcpu->hvf_fd;
 
     env->regs[R_EAX] = rreg(hvf_fd, HV_X86_RAX);
     env->regs[R_EBX] = rreg(hvf_fd, HV_X86_RBX);
@@ -336,24 +338,24 @@ int hvf_get_registers(CPUState *cpu_state)
 static void vmx_set_int_window_exiting(CPUState *cpu)
 {
     uint64_t val;
-    val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
-    wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val |
+    val = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
+    wvmcs(cpu->accel_vcpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val |
           VMCS_PRI_PROC_BASED_CTLS_INT_WINDOW_EXITING);
 }
 
 void vmx_clear_int_window_exiting(CPUState *cpu)
 {
     uint64_t val;
-    val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
-    wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val &
-          ~VMCS_PRI_PROC_BASED_CTLS_INT_WINDOW_EXITING);
+    val = rvmcs(cpu->accel_vcpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
+    wvmcs(cpu->accel_vcpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS,
+          val & ~VMCS_PRI_PROC_BASED_CTLS_INT_WINDOW_EXITING);
 }
 
 bool hvf_inject_interrupts(CPUState *cpu_state)
 {
     X86CPU *x86cpu = X86_CPU(cpu_state);
     CPUX86State *env = &x86cpu->env;
-    hv_vcpuid_t hvf_fd = (hv_vcpuid_t)cpu_state->hvf_fd;
+    hv_vcpuid_t hvf_fd = cpu_state->accel_vcpu->hvf_fd;
 
     uint8_t vector;
     uint64_t intr_type;
@@ -437,7 +439,7 @@ int hvf_process_events(CPUState *cpu_state)
     X86CPU *cpu = X86_CPU(cpu_state);
     CPUX86State *env = &cpu->env;
 
-    env->eflags = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS);
+    env->eflags = rreg(cpu_state->accel_vcpu->hvf_fd, HV_X86_RFLAGS);
 
     if (cpu_state->interrupt_request & CPU_INTERRUPT_INIT) {
         hvf_cpu_synchronize_state(cpu_state);
-- 
2.26.2