From: David Edmondson <david.edmondson@oracle.com>
To: qemu-devel@nongnu.org
Cc: Eduardo Habkost, kvm@vger.kernel.org, Marcelo Tosatti, Richard Henderson, David Edmondson, Babu Moger, Paolo Bonzini
Subject: [RFC PATCH 4/7] target/i386: Prepare for per-vendor X86XSaveArea layout
Date: Thu, 20 May 2021 15:56:44 +0100
Message-Id: <20210520145647.3483809-5-david.edmondson@oracle.com>
In-Reply-To: <20210520145647.3483809-1-david.edmondson@oracle.com>
References: <20210520145647.3483809-1-david.edmondson@oracle.com>

Move the Intel-specific components of X86XSaveArea into a sub-union.
Signed-off-by: David Edmondson <david.edmondson@oracle.com>
---
 target/i386/cpu.c            | 12 ++++----
 target/i386/cpu.h            | 55 +++++++++++++++++++++---------------
 target/i386/kvm/kvm.c        | 12 ++++----
 target/i386/tcg/fpu_helper.c | 12 ++++----
 target/i386/xsave_helper.c   | 24 ++++++++--------
 5 files changed, 63 insertions(+), 52 deletions(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index c496bfa1c2..4f481691b4 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -1418,27 +1418,27 @@ static const ExtSaveArea x86_ext_save_areas[] = {
             .size = sizeof(XSaveAVX) },
     [XSTATE_BNDREGS_BIT] =
           { .feature = FEAT_7_0_EBX, .bits = CPUID_7_0_EBX_MPX,
-            .offset = offsetof(X86XSaveArea, bndreg_state),
+            .offset = offsetof(X86XSaveArea, intel.bndreg_state),
             .size = sizeof(XSaveBNDREG) },
     [XSTATE_BNDCSR_BIT] =
           { .feature = FEAT_7_0_EBX, .bits = CPUID_7_0_EBX_MPX,
-            .offset = offsetof(X86XSaveArea, bndcsr_state),
+            .offset = offsetof(X86XSaveArea, intel.bndcsr_state),
             .size = sizeof(XSaveBNDCSR) },
     [XSTATE_OPMASK_BIT] =
           { .feature = FEAT_7_0_EBX, .bits = CPUID_7_0_EBX_AVX512F,
-            .offset = offsetof(X86XSaveArea, opmask_state),
+            .offset = offsetof(X86XSaveArea, intel.opmask_state),
             .size = sizeof(XSaveOpmask) },
     [XSTATE_ZMM_Hi256_BIT] =
           { .feature = FEAT_7_0_EBX, .bits = CPUID_7_0_EBX_AVX512F,
-            .offset = offsetof(X86XSaveArea, zmm_hi256_state),
+            .offset = offsetof(X86XSaveArea, intel.zmm_hi256_state),
             .size = sizeof(XSaveZMM_Hi256) },
     [XSTATE_Hi16_ZMM_BIT] =
           { .feature = FEAT_7_0_EBX, .bits = CPUID_7_0_EBX_AVX512F,
-            .offset = offsetof(X86XSaveArea, hi16_zmm_state),
+            .offset = offsetof(X86XSaveArea, intel.hi16_zmm_state),
             .size = sizeof(XSaveHi16_ZMM) },
     [XSTATE_PKRU_BIT] =
           { .feature = FEAT_7_0_ECX, .bits = CPUID_7_0_ECX_PKU,
-            .offset = offsetof(X86XSaveArea, pkru_state),
+            .offset = offsetof(X86XSaveArea, intel.pkru_state),
             .size = sizeof(XSavePKRU) },
 };
 
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 0bb365bddf..f1ce4e3008 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1330,36 +1330,47 @@ typedef struct X86XSaveArea {
     /* AVX State: */
     XSaveAVX avx_state;
 
-    /* Ensure that XSaveBNDREG is properly aligned. */
-    uint8_t padding[XSAVE_BNDREG_OFFSET
-                    - sizeof(X86LegacyXSaveArea)
-                    - sizeof(X86XSaveHeader)
-                    - sizeof(XSaveAVX)];
-
-    /* MPX State: */
-    XSaveBNDREG bndreg_state;
-    XSaveBNDCSR bndcsr_state;
-    /* AVX-512 State: */
-    XSaveOpmask opmask_state;
-    XSaveZMM_Hi256 zmm_hi256_state;
-    XSaveHi16_ZMM hi16_zmm_state;
-    /* PKRU State: */
-    XSavePKRU pkru_state;
+    union {
+        struct {
+            /* Ensure that XSaveBNDREG is properly aligned. */
+            uint8_t padding[XSAVE_BNDREG_OFFSET
+                            - sizeof(X86LegacyXSaveArea)
+                            - sizeof(X86XSaveHeader)
+                            - sizeof(XSaveAVX)];
+
+            /* MPX State: */
+            XSaveBNDREG bndreg_state;
+            XSaveBNDCSR bndcsr_state;
+            /* AVX-512 State: */
+            XSaveOpmask opmask_state;
+            XSaveZMM_Hi256 zmm_hi256_state;
+            XSaveHi16_ZMM hi16_zmm_state;
+            /* PKRU State: */
+            XSavePKRU pkru_state;
+        } intel;
+    };
 } X86XSaveArea;
 
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, avx_state) != XSAVE_AVX_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, avx_state)
+                  != XSAVE_AVX_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveAVX) != 0x100);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndreg_state) != XSAVE_BNDREG_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, intel.bndreg_state)
+                  != XSAVE_BNDREG_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveBNDREG) != 0x40);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndcsr_state) != XSAVE_BNDCSR_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, intel.bndcsr_state)
+                  != XSAVE_BNDCSR_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveBNDCSR) != 0x40);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, opmask_state) != XSAVE_OPMASK_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, intel.opmask_state)
+                  != XSAVE_OPMASK_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveOpmask) != 0x40);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != XSAVE_ZMM_HI256_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, intel.zmm_hi256_state)
+                  != XSAVE_ZMM_HI256_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveZMM_Hi256) != 0x200);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != XSAVE_HI16_ZMM_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, intel.hi16_zmm_state)
+                  != XSAVE_HI16_ZMM_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveHi16_ZMM) != 0x400);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != XSAVE_PKRU_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, intel.pkru_state)
+                  != XSAVE_PKRU_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSavePKRU) != 0x8);
 
 typedef enum TPRAccess {
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index aff0774fef..417776a635 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -2409,12 +2409,12 @@ ASSERT_OFFSET(XSAVE_ST_SPACE_OFFSET, legacy.fpregs);
 ASSERT_OFFSET(XSAVE_XMM_SPACE_OFFSET, legacy.xmm_regs);
 ASSERT_OFFSET(XSAVE_XSTATE_BV_OFFSET, header.xstate_bv);
 ASSERT_OFFSET(XSAVE_AVX_OFFSET, avx_state);
-ASSERT_OFFSET(XSAVE_BNDREG_OFFSET, bndreg_state);
-ASSERT_OFFSET(XSAVE_BNDCSR_OFFSET, bndcsr_state);
-ASSERT_OFFSET(XSAVE_OPMASK_OFFSET, opmask_state);
-ASSERT_OFFSET(XSAVE_ZMM_HI256_OFFSET, zmm_hi256_state);
-ASSERT_OFFSET(XSAVE_HI16_ZMM_OFFSET, hi16_zmm_state);
-ASSERT_OFFSET(XSAVE_PKRU_OFFSET, pkru_state);
+ASSERT_OFFSET(XSAVE_BNDREG_OFFSET, intel.bndreg_state);
+ASSERT_OFFSET(XSAVE_BNDCSR_OFFSET, intel.bndcsr_state);
+ASSERT_OFFSET(XSAVE_OPMASK_OFFSET, intel.opmask_state);
+ASSERT_OFFSET(XSAVE_ZMM_HI256_OFFSET, intel.zmm_hi256_state);
+ASSERT_OFFSET(XSAVE_HI16_ZMM_OFFSET, intel.hi16_zmm_state);
+ASSERT_OFFSET(XSAVE_PKRU_OFFSET, intel.pkru_state);
 
 static int kvm_put_xsave(X86CPU *cpu)
 {
diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c
index 1b30f1bb73..fba2de5b04 100644
--- a/target/i386/tcg/fpu_helper.c
+++ b/target/i386/tcg/fpu_helper.c
@@ -2637,13 +2637,13 @@ static void do_xsave(CPUX86State *env, target_ulong ptr, uint64_t rfbm,
         do_xsave_sse(env, ptr, ra);
     }
     if (opt & XSTATE_BNDREGS_MASK) {
-        do_xsave_bndregs(env, ptr + XO(bndreg_state), ra);
+        do_xsave_bndregs(env, ptr + XO(intel.bndreg_state), ra);
     }
     if (opt & XSTATE_BNDCSR_MASK) {
-        do_xsave_bndcsr(env, ptr + XO(bndcsr_state), ra);
+        do_xsave_bndcsr(env, ptr + XO(intel.bndcsr_state), ra);
     }
     if (opt & XSTATE_PKRU_MASK) {
-        do_xsave_pkru(env, ptr + XO(pkru_state), ra);
+        do_xsave_pkru(env, ptr + XO(intel.pkru_state), ra);
     }
 
     /* Update the XSTATE_BV field. */
@@ -2836,7 +2836,7 @@ void helper_xrstor(CPUX86State *env, target_ulong ptr, uint64_t rfbm)
     }
     if (rfbm & XSTATE_BNDREGS_MASK) {
         if (xstate_bv & XSTATE_BNDREGS_MASK) {
-            do_xrstor_bndregs(env, ptr + XO(bndreg_state), ra);
+            do_xrstor_bndregs(env, ptr + XO(intel.bndreg_state), ra);
             env->hflags |= HF_MPX_IU_MASK;
         } else {
             memset(env->bnd_regs, 0, sizeof(env->bnd_regs));
@@ -2845,7 +2845,7 @@ void helper_xrstor(CPUX86State *env, target_ulong ptr, uint64_t rfbm)
     }
     if (rfbm & XSTATE_BNDCSR_MASK) {
         if (xstate_bv & XSTATE_BNDCSR_MASK) {
-            do_xrstor_bndcsr(env, ptr + XO(bndcsr_state), ra);
+            do_xrstor_bndcsr(env, ptr + XO(intel.bndcsr_state), ra);
         } else {
             memset(&env->bndcs_regs, 0, sizeof(env->bndcs_regs));
         }
@@ -2854,7 +2854,7 @@ void helper_xrstor(CPUX86State *env, target_ulong ptr, uint64_t rfbm)
     if (rfbm & XSTATE_PKRU_MASK) {
         uint64_t old_pkru = env->pkru;
         if (xstate_bv & XSTATE_PKRU_MASK) {
-            do_xrstor_pkru(env, ptr + XO(pkru_state), ra);
+            do_xrstor_pkru(env, ptr + XO(intel.pkru_state), ra);
         } else {
             env->pkru = 0;
         }
diff --git a/target/i386/xsave_helper.c b/target/i386/xsave_helper.c
index 818115e7d2..97dbab85d1 100644
--- a/target/i386/xsave_helper.c
+++ b/target/i386/xsave_helper.c
@@ -31,16 +31,16 @@ void x86_cpu_xsave_all_areas(X86CPU *cpu, X86XSaveArea *buf)
            sizeof env->fpregs);
     xsave->legacy.mxcsr = env->mxcsr;
     xsave->header.xstate_bv = env->xstate_bv;
-    memcpy(&xsave->bndreg_state.bnd_regs, env->bnd_regs,
+    memcpy(&xsave->intel.bndreg_state.bnd_regs, env->bnd_regs,
            sizeof env->bnd_regs);
-    xsave->bndcsr_state.bndcsr = env->bndcs_regs;
-    memcpy(&xsave->opmask_state.opmask_regs, env->opmask_regs,
+    xsave->intel.bndcsr_state.bndcsr = env->bndcs_regs;
+    memcpy(&xsave->intel.opmask_state.opmask_regs, env->opmask_regs,
            sizeof env->opmask_regs);
 
     for (i = 0; i < CPU_NB_REGS; i++) {
         uint8_t *xmm = xsave->legacy.xmm_regs[i];
         uint8_t *ymmh = xsave->avx_state.ymmh[i];
-        uint8_t *zmmh = xsave->zmm_hi256_state.zmm_hi256[i];
+        uint8_t *zmmh = xsave->intel.zmm_hi256_state.zmm_hi256[i];
         stq_p(xmm, env->xmm_regs[i].ZMM_Q(0));
         stq_p(xmm+8, env->xmm_regs[i].ZMM_Q(1));
         stq_p(ymmh, env->xmm_regs[i].ZMM_Q(2));
@@ -52,9 +52,9 @@ void x86_cpu_xsave_all_areas(X86CPU *cpu, X86XSaveArea *buf)
     }
 
 #ifdef TARGET_X86_64
-    memcpy(&xsave->hi16_zmm_state.hi16_zmm, &env->xmm_regs[16],
+    memcpy(&xsave->intel.hi16_zmm_state.hi16_zmm, &env->xmm_regs[16],
            16 * sizeof env->xmm_regs[16]);
-    memcpy(&xsave->pkru_state, &env->pkru, sizeof env->pkru);
+    memcpy(&xsave->intel.pkru_state, &env->pkru, sizeof env->pkru);
 #endif
 
 }
@@ -83,16 +83,16 @@ void x86_cpu_xrstor_all_areas(X86CPU *cpu, const X86XSaveArea *buf)
     memcpy(env->fpregs, &xsave->legacy.fpregs,
            sizeof env->fpregs);
     env->xstate_bv = xsave->header.xstate_bv;
-    memcpy(env->bnd_regs, &xsave->bndreg_state.bnd_regs,
+    memcpy(env->bnd_regs, &xsave->intel.bndreg_state.bnd_regs,
            sizeof env->bnd_regs);
-    env->bndcs_regs = xsave->bndcsr_state.bndcsr;
-    memcpy(env->opmask_regs, &xsave->opmask_state.opmask_regs,
+    env->bndcs_regs = xsave->intel.bndcsr_state.bndcsr;
+    memcpy(env->opmask_regs, &xsave->intel.opmask_state.opmask_regs,
            sizeof env->opmask_regs);
 
     for (i = 0; i < CPU_NB_REGS; i++) {
         const uint8_t *xmm = xsave->legacy.xmm_regs[i];
         const uint8_t *ymmh = xsave->avx_state.ymmh[i];
-        const uint8_t *zmmh = xsave->zmm_hi256_state.zmm_hi256[i];
+        const uint8_t *zmmh = xsave->intel.zmm_hi256_state.zmm_hi256[i];
         env->xmm_regs[i].ZMM_Q(0) = ldq_p(xmm);
         env->xmm_regs[i].ZMM_Q(1) = ldq_p(xmm+8);
         env->xmm_regs[i].ZMM_Q(2) = ldq_p(ymmh);
@@ -104,9 +104,9 @@ void x86_cpu_xrstor_all_areas(X86CPU *cpu, const X86XSaveArea *buf)
     }
 
 #ifdef TARGET_X86_64
-    memcpy(&env->xmm_regs[16], &xsave->hi16_zmm_state.hi16_zmm,
+    memcpy(&env->xmm_regs[16], &xsave->intel.hi16_zmm_state.hi16_zmm,
            16 * sizeof env->xmm_regs[16]);
-    memcpy(&env->pkru, &xsave->pkru_state, sizeof env->pkru);
+    memcpy(&env->pkru, &xsave->intel.pkru_state, sizeof env->pkru);
 #endif
 
 }
-- 
2.30.2