From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: Wei Liu, Philippe Mathieu-Daudé
Subject: [PULL 14/34] target/i386/hvf: fix a typo in a type name
Date: Thu, 27 Feb 2025 15:19:32 +0100
Message-ID: <20250227141952.811410-15-pbonzini@redhat.com>
In-Reply-To: <20250227141952.811410-1-pbonzini@redhat.com>
References: <20250227141952.811410-1-pbonzini@redhat.com>
From: Wei Liu

The prefix x68 is wrong. Change it to x86.

Signed-off-by: Wei Liu
Reviewed-by: Philippe Mathieu-Daudé
Link: https://lore.kernel.org/r/1740126987-8483-2-git-send-email-liuwe@linux.microsoft.com
Signed-off-by: Paolo Bonzini
---
 target/i386/hvf/x86.h       |  8 ++++----
 target/i386/hvf/x86_descr.h |  6 +++---
 target/i386/hvf/x86_task.h  |  2 +-
 target/i386/hvf/hvf.c       |  2 +-
 target/i386/hvf/x86.c       |  4 ++--
 target/i386/hvf/x86_descr.c |  8 ++++----
 target/i386/hvf/x86_task.c  | 22 +++++++++++-----------
 7 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/target/i386/hvf/x86.h b/target/i386/hvf/x86.h
index 3570f29aa9d..063cd0b83ec 100644
--- a/target/i386/hvf/x86.h
+++ b/target/i386/hvf/x86.h
@@ -183,7 +183,7 @@ static inline uint32_t x86_call_gate_offset(x86_call_gate *gate)
 #define GDT_SEL 0
 #define LDT_SEL 1
 
-typedef struct x68_segment_selector {
+typedef struct x86_segment_selector {
     union {
         uint16_t sel;
         struct {
@@ -192,7 +192,7 @@ typedef struct x68_segment_selector {
             uint16_t index:13;
         };
     };
-} __attribute__ ((__packed__)) x68_segment_selector;
+} __attribute__ ((__packed__)) x86_segment_selector;
 
 /* useful register access macros */
 #define x86_reg(cpu, reg) ((x86_register *) &cpu->regs[reg])
@@ -250,10 +250,10 @@ typedef struct x68_segment_selector {
 /* deal with GDT/LDT descriptors in memory */
 bool x86_read_segment_descriptor(CPUState *cpu,
                                  struct x86_segment_descriptor *desc,
-                                 x68_segment_selector sel);
+                                 x86_segment_selector sel);
 bool x86_write_segment_descriptor(CPUState *cpu,
                                   struct x86_segment_descriptor *desc,
-                                  x68_segment_selector sel);
+                                  x86_segment_selector sel);
 
 bool x86_read_call_gate(CPUState *cpu, struct x86_call_gate *idt_desc,
                         int gate);
diff --git a/target/i386/hvf/x86_descr.h b/target/i386/hvf/x86_descr.h
index 9f06014b56a..ce5de983497 100644
--- a/target/i386/hvf/x86_descr.h
+++ b/target/i386/hvf/x86_descr.h
@@ -34,10 +34,10 @@ void vmx_read_segment_descriptor(CPUState *cpu,
 void vmx_write_segment_descriptor(CPUState *cpu, struct vmx_segment *desc,
                                   enum X86Seg seg);
 
-x68_segment_selector vmx_read_segment_selector(CPUState *cpu,
+x86_segment_selector vmx_read_segment_selector(CPUState *cpu,
                                                enum X86Seg seg);
 void vmx_write_segment_selector(CPUState *cpu,
-                                x68_segment_selector selector,
+                                x86_segment_selector selector,
                                 enum X86Seg seg);
 
 uint64_t vmx_read_segment_base(CPUState *cpu, enum X86Seg seg);
@@ -45,7 +45,7 @@ void vmx_write_segment_base(CPUState *cpu, enum X86Seg seg, uint64_t base);
 
 void x86_segment_descriptor_to_vmx(CPUState *cpu,
-                                   x68_segment_selector selector,
+                                   x86_segment_selector selector,
                                    struct x86_segment_descriptor *desc,
                                    struct vmx_segment *vmx_desc);
 
diff --git a/target/i386/hvf/x86_task.h b/target/i386/hvf/x86_task.h
index 4eaa61a7dee..b9afac6a47b 100644
--- a/target/i386/hvf/x86_task.h
+++ b/target/i386/hvf/x86_task.h
@@ -15,6 +15,6 @@
 #ifndef HVF_X86_TASK_H
 #define HVF_X86_TASK_H
 
-void vmx_handle_task_switch(CPUState *cpu, x68_segment_selector tss_sel,
+void vmx_handle_task_switch(CPUState *cpu, x86_segment_selector tss_sel,
                             int reason, bool gate_valid, uint8_t gate, uint64_t gate_type);
 #endif
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index ca08f0753f0..353549fa779 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -674,7 +674,7 @@ int hvf_vcpu_exec(CPUState *cpu)
         }
         case EXIT_REASON_TASK_SWITCH: {
             uint64_t vinfo = rvmcs(cpu->accel->fd, VMCS_IDT_VECTORING_INFO);
-            x68_segment_selector sel = {.sel = exit_qual & 0xffff};
+            x86_segment_selector sel = {.sel = exit_qual & 0xffff};
             vmx_handle_task_switch(cpu, sel, (exit_qual >> 30) & 0x3,
                                    vinfo & VMCS_INTR_VALID, vinfo & VECTORING_INFO_VECTOR_MASK, vinfo
                                    & VMCS_INTR_T_MASK);
diff --git a/target/i386/hvf/x86.c b/target/i386/hvf/x86.c
index 80e36136d04..a0ede138865 100644
--- a/target/i386/hvf/x86.c
+++ b/target/i386/hvf/x86.c
@@ -48,7 +48,7 @@
 
 bool x86_read_segment_descriptor(CPUState *cpu,
                                  struct x86_segment_descriptor *desc,
-                                 x68_segment_selector sel)
+                                 x86_segment_selector sel)
 {
     target_ulong base;
     uint32_t limit;
@@ -78,7 +78,7 @@ bool x86_read_segment_descriptor(CPUState *cpu,
 
 bool x86_write_segment_descriptor(CPUState *cpu,
                                   struct x86_segment_descriptor *desc,
-                                  x68_segment_selector sel)
+                                  x86_segment_selector sel)
 {
     target_ulong base;
     uint32_t limit;
diff --git a/target/i386/hvf/x86_descr.c b/target/i386/hvf/x86_descr.c
index f33836d6cba..7b599c90377 100644
--- a/target/i386/hvf/x86_descr.c
+++ b/target/i386/hvf/x86_descr.c
@@ -60,14 +60,14 @@ uint64_t vmx_read_segment_base(CPUState *cpu, X86Seg seg)
     return rvmcs(cpu->accel->fd, vmx_segment_fields[seg].base);
 }
 
-x68_segment_selector vmx_read_segment_selector(CPUState *cpu, X86Seg seg)
+x86_segment_selector vmx_read_segment_selector(CPUState *cpu, X86Seg seg)
 {
-    x68_segment_selector sel;
+    x86_segment_selector sel;
     sel.sel = rvmcs(cpu->accel->fd, vmx_segment_fields[seg].selector);
     return sel;
 }
 
-void vmx_write_segment_selector(CPUState *cpu, x68_segment_selector selector, X86Seg seg)
+void vmx_write_segment_selector(CPUState *cpu, x86_segment_selector selector, X86Seg seg)
 {
     wvmcs(cpu->accel->fd, vmx_segment_fields[seg].selector, selector.sel);
 }
@@ -90,7 +90,7 @@ void vmx_write_segment_descriptor(CPUState *cpu, struct vmx_segment *desc, X86Se
     wvmcs(cpu->accel->fd, sf->ar_bytes, desc->ar);
 }
 
-void x86_segment_descriptor_to_vmx(CPUState *cpu, x68_segment_selector selector,
+void x86_segment_descriptor_to_vmx(CPUState *cpu, x86_segment_selector selector,
                                    struct x86_segment_descriptor *desc,
                                    struct vmx_segment *vmx_desc)
 {
diff --git a/target/i386/hvf/x86_task.c b/target/i386/hvf/x86_task.c
index bcd844cff60..287fe11cf70 100644
--- a/target/i386/hvf/x86_task.c
+++ b/target/i386/hvf/x86_task.c
@@ -76,16 +76,16 @@ static void load_state_from_tss32(CPUState *cpu, struct x86_tss_segment32 *tss)
     RSI(env) = tss->esi;
     RDI(env) = tss->edi;
 
-    vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->ldt}}, R_LDTR);
-    vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->es}}, R_ES);
-    vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->cs}}, R_CS);
-    vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->ss}}, R_SS);
-    vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->ds}}, R_DS);
-    vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->fs}}, R_FS);
-    vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->gs}}, R_GS);
+    vmx_write_segment_selector(cpu, (x86_segment_selector){{tss->ldt}}, R_LDTR);
+    vmx_write_segment_selector(cpu, (x86_segment_selector){{tss->es}}, R_ES);
+    vmx_write_segment_selector(cpu, (x86_segment_selector){{tss->cs}}, R_CS);
+    vmx_write_segment_selector(cpu, (x86_segment_selector){{tss->ss}}, R_SS);
+    vmx_write_segment_selector(cpu, (x86_segment_selector){{tss->ds}}, R_DS);
+    vmx_write_segment_selector(cpu, (x86_segment_selector){{tss->fs}}, R_FS);
+    vmx_write_segment_selector(cpu, (x86_segment_selector){{tss->gs}}, R_GS);
 }
 
-static int task_switch_32(CPUState *cpu, x68_segment_selector tss_sel, x68_segment_selector old_tss_sel,
+static int task_switch_32(CPUState *cpu, x86_segment_selector tss_sel, x86_segment_selector old_tss_sel,
                           uint64_t old_tss_base, struct x86_segment_descriptor *new_desc)
 {
     struct x86_tss_segment32 tss_seg;
@@ -108,7 +108,7 @@ static int task_switch_32(CPUState *cpu, x68_segment_selector tss_sel, x68_segme
     return 0;
 }
 
-void vmx_handle_task_switch(CPUState *cpu, x68_segment_selector tss_sel, int reason, bool gate_valid, uint8_t gate, uint64_t gate_type)
+void vmx_handle_task_switch(CPUState *cpu, x86_segment_selector tss_sel, int reason, bool gate_valid, uint8_t gate, uint64_t gate_type)
 {
     uint64_t rip = rreg(cpu->accel->fd, HV_X86_RIP);
     if (!gate_valid || (gate_type != VMCS_INTR_T_HWEXCEPTION &&
@@ -122,7 +122,7 @@ void vmx_handle_task_switch(CPUState *cpu, x68_segment_selector tss_sel, int rea
     load_regs(cpu);
 
     struct x86_segment_descriptor curr_tss_desc, next_tss_desc;
-    x68_segment_selector old_tss_sel = vmx_read_segment_selector(cpu, R_TR);
+    x86_segment_selector old_tss_sel = vmx_read_segment_selector(cpu, R_TR);
     uint64_t old_tss_base = vmx_read_segment_base(cpu, R_TR);
     uint32_t desc_limit;
     struct x86_call_gate task_gate_desc;
@@ -140,7 +140,7 @@ void vmx_handle_task_switch(CPUState *cpu, x68_segment_selector tss_sel, int rea
     x86_read_call_gate(cpu, &task_gate_desc, gate);
 
     dpl = task_gate_desc.dpl;
-    x68_segment_selector cs = vmx_read_segment_selector(cpu, R_CS);
+    x86_segment_selector cs = vmx_read_segment_selector(cpu, R_CS);
     if (tss_sel.rpl > dpl || cs.rpl > dpl)
         ;//DPRINTF("emulate_gp");
 }
-- 
2.48.1