From: Jan Beulich <jbeulich@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Roger Pau Monné, Teddy Astie, Oleksii Kurochko,
 Community Manager
Subject: [PATCH v4 03/16] x86emul: support AVX10.1
Date: Thu, 9 Apr 2026 17:03:48 +0200
In-Reply-To: <1e2ff012-b07a-43be-9ba6-290b38d3c97c@suse.com>
References: <1e2ff012-b07a-43be-9ba6-290b38d3c97c@suse.com>

This requires relaxing various pre-existing AVX512* checks, as AVX10.1
covers all AVX512* except PF, ER, 4FMAPS, 4VNNIW (support for all of
which was removed meanwhile anyway), and VP2INTERSECT. At the same time
all vector widths are always supported.

Note that visa_check(), replacing host_and_vcpu_must_have() uses, checks
only the guest capability: We wouldn't expose AVX512* (nor AVX10)
without the hardware supporting it.
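The guest-visible predicate that the new visa_check() implements can be
sketched as follows. This is a simplified, hypothetical model (the
struct and function names here are illustrative, not the emulator's
actual data structures): an AVX512 sub-feature check is satisfied either
by the legacy AVX512 CPUID bit or by AVX10.

```c
#include <stdbool.h>

/*
 * Hypothetical, minimal model of the guest CPUID policy ("cp" in the
 * patch); only the flags needed for this sketch are included.
 */
struct cpu_policy {
    bool avx512f, avx512bw, avx512dq, avx10;
};

/*
 * visa_check(subfeat) in the patch raises #UD unless the guest has
 * either the named AVX512 sub-feature or AVX10; modeled here as a
 * plain predicate for the AVX512F case.
 */
static bool visa_ok_avx512f(const struct cpu_policy *cp)
{
    return cp->avx512f || cp->avx10;
}
```

An AVX10-only policy (legacy AVX512F bit clear) thus passes the same
check an AVX512F guest does, which is why the host-side half of
host_and_vcpu_must_have() can be dropped here.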
Similarly in avx512_vlen_check() the original host_and_vcpu_must_have()
is reduced to just vcpu_must_have().

This also simplifies (resulting) code in the test and fuzzing harnesses,
as there the XCR0 checks that are part of cpu_has_avx512* are only
needed in local code, not in the emulator itself (where respective
checking occurs elsewhere anyway, utilizing emul_test_read_xcr()).

While in most cases the changes to x86_emulate() are entirely
mechanical, for opmask insns earlier unconditional AVX512F checks are
converted into "else" clauses to existing if/else-if ones.

To be certain that no uses remain, also drop respective cpu_has_avx512*
(except in the test harness) and vcpu_has_avx512*().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Probably avx512_vlen_check() should have the avx512_ prefix dropped, now
that it also covers AVX10. But if so, that wants to be either a prereq
or a follow-on patch.

visa_check() won't cover AVX10.2 and higher, but probably we will want
independent checking logic for that anyway.

Spec version 2 still leaves unclear what the xstate components are which
would need enabling for AVX10/256. x86emul_get_fpu() is therefore
untouched for now.

Since it'll be reducing code size, we may want to further convert
host_and_vcpu_must_have() to just vcpu_must_have() where appropriate
(should be [almost?] everywhere).
---
v4: Switch to using fallthrough pseudo-keyword. Drop new _vlen_check()
    helper again. Re-base.
v3: Add ChangeLog entry.
v2: Drop use of vsz128 field. Re-base, in particular over dropping of
    Xeon Phi support.

--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -16,6 +16,7 @@ The format is based on [Keep a Changelog
       mitigate (by rate-limiting) the system wide impact of an HVM guest
       misusing atomic instructions.
    - Support for CPIO microcode in discrete multiboot modules.
+   - Support for AVX10.1. (Experimental)
 
  - On Arm:
    - Support for guest suspend and resume to/from RAM via vPSCI.
--- a/xen/arch/x86/include/asm/cpufeature.h +++ b/xen/arch/x86/include/asm/cpufeature.h @@ -200,29 +200,18 @@ static inline bool boot_cpu_has(unsigned #define cpu_has_pqe boot_cpu_has(X86_FEATURE_PQE) #define cpu_has_fpu_sel (!boot_cpu_has(X86_FEATURE_NO_FPU_SEL)) #define cpu_has_mpx boot_cpu_has(X86_FEATURE_MPX) -#define cpu_has_avx512f boot_cpu_has(X86_FEATURE_AVX512F) -#define cpu_has_avx512dq boot_cpu_has(X86_FEATURE_AVX512DQ) #define cpu_has_rdseed boot_cpu_has(X86_FEATURE_RDSEED) #define cpu_has_smap boot_cpu_has(X86_FEATURE_SMAP) -#define cpu_has_avx512_ifma boot_cpu_has(X86_FEATURE_AVX512_IFMA) #define cpu_has_clflushopt boot_cpu_has(X86_FEATURE_CLFLUSHOPT) #define cpu_has_clwb boot_cpu_has(X86_FEATURE_CLWB) -#define cpu_has_avx512cd boot_cpu_has(X86_FEATURE_AVX512CD) #define cpu_has_proc_trace boot_cpu_has(X86_FEATURE_PROC_TRACE) #define cpu_has_sha boot_cpu_has(X86_FEATURE_SHA) -#define cpu_has_avx512bw boot_cpu_has(X86_FEATURE_AVX512BW) -#define cpu_has_avx512vl boot_cpu_has(X86_FEATURE_AVX512VL) =20 /* CPUID level 0x00000007:0.ecx */ -#define cpu_has_avx512_vbmi boot_cpu_has(X86_FEATURE_AVX512_VBMI) #define cpu_has_pku boot_cpu_has(X86_FEATURE_PKU) -#define cpu_has_avx512_vbmi2 boot_cpu_has(X86_FEATURE_AVX512_VBMI2) #define cpu_has_gfni boot_cpu_has(X86_FEATURE_GFNI) #define cpu_has_vaes boot_cpu_has(X86_FEATURE_VAES) #define cpu_has_vpclmulqdq boot_cpu_has(X86_FEATURE_VPCLMULQDQ) -#define cpu_has_avx512_vnni boot_cpu_has(X86_FEATURE_AVX512_VNNI) -#define cpu_has_avx512_bitalg boot_cpu_has(X86_FEATURE_AVX512_BITALG) -#define cpu_has_avx512_vpopcntdq boot_cpu_has(X86_FEATURE_AVX512_VPOPCNTDQ) #define cpu_has_rdpid boot_cpu_has(X86_FEATURE_RDPID) #define cpu_has_movdiri boot_cpu_has(X86_FEATURE_MOVDIRI) #define cpu_has_movdir64b boot_cpu_has(X86_FEATURE_MOVDIR64B) @@ -248,7 +237,6 @@ static inline bool boot_cpu_has(unsigned #define cpu_has_tsx_force_abort boot_cpu_has(X86_FEATURE_TSX_FORCE_ABORT) #define cpu_has_serialize 
boot_cpu_has(X86_FEATURE_SERIALIZE) #define cpu_has_hybrid boot_cpu_has(X86_FEATURE_HYBRID) -#define cpu_has_avx512_fp16 boot_cpu_has(X86_FEATURE_AVX512_FP16) #define cpu_has_arch_caps boot_cpu_has(X86_FEATURE_ARCH_CAPS) =20 /* CPUID level 0x00000007:1.eax */ @@ -256,7 +244,6 @@ static inline bool boot_cpu_has(unsigned #define cpu_has_sm3 boot_cpu_has(X86_FEATURE_SM3) #define cpu_has_sm4 boot_cpu_has(X86_FEATURE_SM4) #define cpu_has_avx_vnni boot_cpu_has(X86_FEATURE_AVX_VNNI) -#define cpu_has_avx512_bf16 boot_cpu_has(X86_FEATURE_AVX512_BF16) #define cpu_has_cmpccxadd boot_cpu_has(X86_FEATURE_CMPCCXADD) #define cpu_has_fred boot_cpu_has(X86_FEATURE_FRED) #define cpu_has_lkgs boot_cpu_has(X86_FEATURE_LKGS) --- a/xen/arch/x86/x86_emulate/private.h +++ b/xen/arch/x86/x86_emulate/private.h @@ -577,26 +577,15 @@ amd_like(const struct x86_emulate_ctxt * #define vcpu_has_invpcid() (ctxt->cpuid->feat.invpcid) #define vcpu_has_rtm() (ctxt->cpuid->feat.rtm) #define vcpu_has_mpx() (ctxt->cpuid->feat.mpx) -#define vcpu_has_avx512f() (ctxt->cpuid->feat.avx512f) -#define vcpu_has_avx512dq() (ctxt->cpuid->feat.avx512dq) #define vcpu_has_rdseed() (ctxt->cpuid->feat.rdseed) #define vcpu_has_adx() (ctxt->cpuid->feat.adx) #define vcpu_has_smap() (ctxt->cpuid->feat.smap) -#define vcpu_has_avx512_ifma() (ctxt->cpuid->feat.avx512_ifma) #define vcpu_has_clflushopt() (ctxt->cpuid->feat.clflushopt) #define vcpu_has_clwb() (ctxt->cpuid->feat.clwb) -#define vcpu_has_avx512cd() (ctxt->cpuid->feat.avx512cd) #define vcpu_has_sha() (ctxt->cpuid->feat.sha) -#define vcpu_has_avx512bw() (ctxt->cpuid->feat.avx512bw) -#define vcpu_has_avx512vl() (ctxt->cpuid->feat.avx512vl) -#define vcpu_has_avx512_vbmi() (ctxt->cpuid->feat.avx512_vbmi) -#define vcpu_has_avx512_vbmi2() (ctxt->cpuid->feat.avx512_vbmi2) #define vcpu_has_gfni() (ctxt->cpuid->feat.gfni) #define vcpu_has_vaes() (ctxt->cpuid->feat.vaes) #define vcpu_has_vpclmulqdq() (ctxt->cpuid->feat.vpclmulqdq) -#define vcpu_has_avx512_vnni() 
(ctxt->cpuid->feat.avx512_vnni) -#define vcpu_has_avx512_bitalg() (ctxt->cpuid->feat.avx512_bitalg) -#define vcpu_has_avx512_vpopcntdq() (ctxt->cpuid->feat.avx512_vpopcntdq) #define vcpu_has_rdpid() (ctxt->cpuid->feat.rdpid) #define vcpu_has_movdiri() (ctxt->cpuid->feat.movdiri) #define vcpu_has_movdir64b() (ctxt->cpuid->feat.movdir64b) @@ -604,12 +593,10 @@ amd_like(const struct x86_emulate_ctxt * #define vcpu_has_avx512_vp2intersect() (ctxt->cpuid->feat.avx512_vp2inters= ect) #define vcpu_has_serialize() (ctxt->cpuid->feat.serialize) #define vcpu_has_tsxldtrk() (ctxt->cpuid->feat.tsxldtrk) -#define vcpu_has_avx512_fp16() (ctxt->cpuid->feat.avx512_fp16) #define vcpu_has_sha512() (ctxt->cpuid->feat.sha512) #define vcpu_has_sm3() (ctxt->cpuid->feat.sm3) #define vcpu_has_sm4() (ctxt->cpuid->feat.sm4) #define vcpu_has_avx_vnni() (ctxt->cpuid->feat.avx_vnni) -#define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16) #define vcpu_has_cmpccxadd() (ctxt->cpuid->feat.cmpccxadd) #define vcpu_has_lkgs() (ctxt->cpuid->feat.lkgs) #define vcpu_has_wrmsrns() (ctxt->cpuid->feat.wrmsrns) --- a/xen/arch/x86/x86_emulate/x86_emulate.c +++ b/xen/arch/x86/x86_emulate/x86_emulate.c @@ -1152,6 +1152,10 @@ static unsigned long *decode_vex_gpr( return decode_gpr(regs, ~vex_reg & (mode_64bit() ? 0xf : 7)); } =20 +#define visa_check(subfeat) \ + generate_exception_if(!cp->feat.avx512 ## subfeat && !cp->feat.avx10, \ + X86_EXC_UD) + #define avx512_vlen_check(lig) do { \ switch ( evex.lr ) \ { \ @@ -1161,7 +1165,7 @@ static unsigned long *decode_vex_gpr( break; \ case 0: case 1: \ if ( !(lig) ) \ - host_and_vcpu_must_have(avx512vl); \ + visa_check(vl); \ break; \ } \ } while ( false ) @@ -1396,7 +1400,9 @@ x86_emulate( /* KMOV{W,Q} %k, (%rax) */ stb[0] =3D 0xc4; stb[1] =3D 0xe1; - stb[2] =3D cpu_has_avx512bw ? 0xf8 : 0x78; + stb[2] =3D cp->feat.avx512bw || cp->feat.avx10 + ? 
0xf8 /* L0.NP.W1 - kmovq */ + : 0x78 /* L0.NP.W0 - kmovw */; stb[3] =3D 0x91; stb[4] =3D evex.opmsk << 3; insn_bytes =3D 5; @@ -3423,7 +3429,7 @@ x86_emulate( (ea.type !=3D OP_REG && evex.brs && (evex.pfx & VEX_PREFIX_SCALAR_MASK))), X86_EXC_UD); - host_and_vcpu_must_have(avx512f); + visa_check(f); if ( ea.type !=3D OP_REG || !evex.brs ) avx512_vlen_check(evex.pfx & VEX_PREFIX_SCALAR_MASK); simd_zmm: @@ -3479,7 +3485,7 @@ x86_emulate( generate_exception_if((evex.lr || evex.opmsk || evex.brs || evex.w !=3D (evex.pfx & VEX_PREFIX_DOUBLE_M= ASK)), X86_EXC_UD); - host_and_vcpu_must_have(avx512f); + visa_check(f); if ( (d & DstMask) !=3D DstMem ) d &=3D ~TwoOp; op_bytes =3D 8; @@ -3506,7 +3512,7 @@ x86_emulate( generate_exception_if((evex.brs || evex.w !=3D (evex.pfx & VEX_PREFIX_DOUBLE_M= ASK)), X86_EXC_UD); - host_and_vcpu_must_have(avx512f); + visa_check(f); avx512_vlen_check(false); d |=3D TwoOp; op_bytes =3D !(evex.pfx & VEX_PREFIX_DOUBLE_MASK) || evex.lr @@ -3543,7 +3549,7 @@ x86_emulate( case X86EMUL_OPC_EVEX_66(0x0f38, 0x64): /* vpblendm{d,q} [xyz]mm/mem,[= xyz]mm,[xyz]mm{k} */ case X86EMUL_OPC_EVEX_66(0x0f38, 0x65): /* vblendmp{s,d} [xyz]mm/mem,[= xyz]mm,[xyz]mm{k} */ avx512f_no_sae: - host_and_vcpu_must_have(avx512f); + visa_check(f); generate_exception_if(ea.type !=3D OP_MEM && evex.brs, X86_EXC_UD); avx512_vlen_check(false); goto simd_zmm; @@ -3623,13 +3629,13 @@ x86_emulate( =20 case X86EMUL_OPC_EVEX_F3(5, 0x2a): /* vcvtsi2sh r/m,xmm,xmm */ case X86EMUL_OPC_EVEX_F3(5, 0x7b): /* vcvtusi2sh r/m,xmm,xmm */ - host_and_vcpu_must_have(avx512_fp16); + visa_check(_fp16); /* fall through */ CASE_SIMD_SCALAR_FP(_EVEX, 0x0f, 0x2a): /* vcvtsi2s{s,d} r/m,xmm,xmm */ CASE_SIMD_SCALAR_FP(_EVEX, 0x0f, 0x7b): /* vcvtusi2s{s,d} r/m,xmm,xmm = */ generate_exception_if(evex.opmsk || (ea.type !=3D OP_REG && evex.b= rs), X86_EXC_UD); - host_and_vcpu_must_have(avx512f); + visa_check(f); if ( !evex.brs ) avx512_vlen_check(true); get_fpu(X86EMUL_FPU_zmm); @@ -3742,7 +3748,7 @@ 
x86_emulate( case X86EMUL_OPC_EVEX_F3(5, 0x2d): /* vcvtsh2si xmm/mem,reg */ case X86EMUL_OPC_EVEX_F3(5, 0x78): /* vcvttsh2usi xmm/mem,reg */ case X86EMUL_OPC_EVEX_F3(5, 0x79): /* vcvtsh2usi xmm/mem,reg */ - host_and_vcpu_must_have(avx512_fp16); + visa_check(_fp16); /* fall through */ CASE_SIMD_SCALAR_FP(_EVEX, 0x0f, 0x2c): /* vcvtts{s,d}2si xmm/mem,reg = */ CASE_SIMD_SCALAR_FP(_EVEX, 0x0f, 0x2d): /* vcvts{s,d}2si xmm/mem,reg */ @@ -3752,7 +3758,7 @@ x86_emulate( evex.opmsk || (ea.type !=3D OP_REG && evex.brs)), X86_EXC_UD); - host_and_vcpu_must_have(avx512f); + visa_check(f); if ( !evex.brs ) avx512_vlen_check(true); get_fpu(X86EMUL_FPU_zmm); @@ -3818,7 +3824,7 @@ x86_emulate( =20 case X86EMUL_OPC_EVEX(5, 0x2e): /* vucomish xmm/m16,xmm */ case X86EMUL_OPC_EVEX(5, 0x2f): /* vcomish xmm/m16,xmm */ - host_and_vcpu_must_have(avx512_fp16); + visa_check(_fp16); generate_exception_if(evex.w, X86_EXC_UD); /* fall through */ CASE_SIMD_PACKED_FP(_EVEX, 0x0f, 0x2e): /* vucomis{s,d} xmm/mem,xmm */ @@ -3827,7 +3833,7 @@ x86_emulate( (ea.type !=3D OP_REG && evex.brs) || evex.w !=3D evex.pfx), X86_EXC_UD); - host_and_vcpu_must_have(avx512f); + visa_check(f); if ( !evex.brs ) avx512_vlen_check(true); get_fpu(X86EMUL_FPU_zmm); @@ -3971,7 +3977,7 @@ x86_emulate( =20 case X86EMUL_OPC_VEX(0x0f, 0x4a): /* kadd{w,q} k,k,k */ if ( !vex.w ) - host_and_vcpu_must_have(avx512dq); + visa_check(dq); /* fall through */ case X86EMUL_OPC_VEX(0x0f, 0x41): /* kand{w,q} k,k,k */ case X86EMUL_OPC_VEX_66(0x0f, 0x41): /* kand{b,d} k,k,k */ @@ -3987,11 +3993,12 @@ x86_emulate( generate_exception_if(!vex.l, X86_EXC_UD); opmask_basic: if ( vex.w ) - host_and_vcpu_must_have(avx512bw); + visa_check(bw); else if ( vex.pfx ) - host_and_vcpu_must_have(avx512dq); + visa_check(dq); + else + visa_check(f); opmask_common: - host_and_vcpu_must_have(avx512f); generate_exception_if(!vex.r || (mode_64bit() && !(vex.reg & 8)) || ea.type !=3D OP_REG, X86_EXC_UD); =20 @@ -4014,13 +4021,14 @@ x86_emulate( 
generate_exception_if(vex.l || vex.reg !=3D 0xf, X86_EXC_UD); goto opmask_basic; =20 - case X86EMUL_OPC_VEX(0x0f, 0x4b): /* kunpck{w,d}{d,q} k,k,k */ + case X86EMUL_OPC_VEX(0x0f, 0x4b): /* kunpck{wd,dq} k,k,k */ generate_exception_if(!vex.l, X86_EXC_UD); - host_and_vcpu_must_have(avx512bw); + visa_check(bw); goto opmask_common; =20 case X86EMUL_OPC_VEX_66(0x0f, 0x4b): /* kunpckbw k,k,k */ generate_exception_if(!vex.l || vex.w, X86_EXC_UD); + visa_check(f); goto opmask_common; =20 #endif /* X86EMUL_NO_SIMD */ @@ -4088,7 +4096,7 @@ x86_emulate( generate_exception_if((evex.w !=3D (evex.pfx & VEX_PREFIX_DOUBLE_M= ASK) || (ea.type !=3D OP_MEM && evex.brs)), X86_EXC_UD); - host_and_vcpu_must_have(avx512dq); + visa_check(dq); avx512_vlen_check(false); goto simd_zmm; =20 @@ -4127,12 +4135,12 @@ x86_emulate( case X86EMUL_OPC_EVEX_F2(0x0f, 0x7a): /* vcvtudq2ps [xyz]mm/mem,[xyz]m= m{k} */ /* vcvtuqq2ps [xyz]mm/mem,{x,y}m= m{k} */ if ( evex.w ) - host_and_vcpu_must_have(avx512dq); + visa_check(dq); else { case X86EMUL_OPC_EVEX(0x0f, 0x78): /* vcvttp{s,d}2udq [xyz]mm/mem,[= xyz]mm{k} */ case X86EMUL_OPC_EVEX(0x0f, 0x79): /* vcvtp{s,d}2udq [xyz]mm/mem,[x= yz]mm{k} */ - host_and_vcpu_must_have(avx512f); + visa_check(f); } if ( ea.type !=3D OP_REG || !evex.brs ) avx512_vlen_check(false); @@ -4349,7 +4357,7 @@ x86_emulate( case X86EMUL_OPC_EVEX_66(0x0f38, 0x0b): /* vpmulhrsw [xyz]mm/mem,[xyz]= mm,[xyz]mm{k} */ case X86EMUL_OPC_EVEX_66(0x0f38, 0x1c): /* vpabsb [xyz]mm/mem,[xyz]mm{= k} */ case X86EMUL_OPC_EVEX_66(0x0f38, 0x1d): /* vpabsw [xyz]mm/mem,[xyz]mm{= k} */ - host_and_vcpu_must_have(avx512bw); + visa_check(bw); generate_exception_if(evex.brs, X86_EXC_UD); elem_bytes =3D 1 << (b & 1); goto avx512f_no_sae; @@ -4381,7 +4389,7 @@ x86_emulate( generate_exception_if(b !=3D 0x27 && evex.w !=3D (b & 1), X86_= EXC_UD); goto avx512f_no_sae; } - host_and_vcpu_must_have(avx512bw); + visa_check(bw); generate_exception_if(evex.brs, X86_EXC_UD); elem_bytes =3D 1 << (ext =3D=3D ext_0f ? 
b & 1 : evex.w); avx512_vlen_check(false); @@ -4454,7 +4462,7 @@ x86_emulate( dst.bytes =3D 2; /* fall through */ case X86EMUL_OPC_EVEX_66(5, 0x6e): /* vmovw r/m16,xmm */ - host_and_vcpu_must_have(avx512_fp16); + visa_check(_fp16); generate_exception_if(evex.w, X86_EXC_UD); /* fall through */ case X86EMUL_OPC_EVEX_66(0x0f, 0x6e): /* vmov{d,q} r/m,xmm */ @@ -4462,7 +4470,7 @@ x86_emulate( generate_exception_if((evex.lr || evex.opmsk || evex.brs || evex.reg !=3D 0xf || !evex.RX), X86_EXC_UD); - host_and_vcpu_must_have(avx512f); + visa_check(f); get_fpu(X86EMUL_FPU_zmm); =20 opc =3D init_evex(stub); @@ -4520,7 +4528,7 @@ x86_emulate( =20 case X86EMUL_OPC_EVEX_F2(0x0f, 0x6f): /* vmovdqu{8,16} [xyz]mm/mem,[xy= z]mm{k} */ case X86EMUL_OPC_EVEX_F2(0x0f, 0x7f): /* vmovdqu{8,16} [xyz]mm,[xyz]mm= /mem{k} */ - host_and_vcpu_must_have(avx512bw); + visa_check(bw); elem_bytes =3D 1 << evex.w; goto vmovdqa; =20 @@ -4613,7 +4621,7 @@ x86_emulate( generate_exception_if(evex.w, X86_EXC_UD); else { - host_and_vcpu_must_have(avx512bw); + visa_check(bw); generate_exception_if(evex.brs, X86_EXC_UD); } d =3D (d & ~SrcMask) | SrcMem | TwoOp; @@ -4861,7 +4869,7 @@ x86_emulate( case X86EMUL_OPC_EVEX_F3(0x0f, 0xe6): /* vcvtdq2pd {x,y}mm/mem,[xyz]= mm{k} */ /* vcvtqq2pd [xyz]mm/mem,[xyz]= mm{k} */ if ( evex.pfx !=3D vex_f3 ) - host_and_vcpu_must_have(avx512f); + visa_check(f); else if ( evex.w ) { case X86EMUL_OPC_EVEX_66(0x0f, 0x78): /* vcvttps2uqq {x,y}mm/mem,[xy= z]mm{k} */ @@ -4872,11 +4880,11 @@ x86_emulate( /* vcvttpd2qq [xyz]mm/mem,[xyz= ]mm{k} */ case X86EMUL_OPC_EVEX_66(0x0f, 0x7b): /* vcvtps2qq {x,y}mm/mem,[xyz]= mm{k} */ /* vcvtpd2qq [xyz]mm/mem,[xyz]= mm{k} */ - host_and_vcpu_must_have(avx512dq); + visa_check(dq); } else { - host_and_vcpu_must_have(avx512f); + visa_check(f); /* * While SDM version 085 has explicit wording towards embedded * rounding being ignored, it's still not entirely unambiguous= with @@ -4923,7 +4931,7 @@ x86_emulate( case X86EMUL_OPC_EVEX_66(0x0f, 0xd6): /* 
vmovq xmm,xmm/m64 */ generate_exception_if(evex.lr || !evex.w || evex.opmsk || evex.brs, X86_EXC_UD); - host_and_vcpu_must_have(avx512f); + visa_check(f); d |=3D TwoOp; op_bytes =3D 8; goto simd_zmm; @@ -4949,19 +4957,21 @@ x86_emulate( case X86EMUL_OPC_VEX(0x0f, 0x90): /* kmov{w,q} k/mem,k */ case X86EMUL_OPC_VEX_66(0x0f, 0x90): /* kmov{b,d} k/mem,k */ generate_exception_if(vex.l || !vex.r, X86_EXC_UD); - host_and_vcpu_must_have(avx512f); if ( vex.w ) { - host_and_vcpu_must_have(avx512bw); + visa_check(bw); op_bytes =3D 4 << !vex.pfx; } else if ( vex.pfx ) { - host_and_vcpu_must_have(avx512dq); + visa_check(dq); op_bytes =3D 1; } else + { + visa_check(f); op_bytes =3D 2; + } =20 get_fpu(X86EMUL_FPU_opmask); =20 @@ -4983,14 +4993,15 @@ x86_emulate( generate_exception_if(vex.l || !vex.r || vex.reg !=3D 0xf || ea.type !=3D OP_REG, X86_EXC_UD); =20 - host_and_vcpu_must_have(avx512f); if ( vex.pfx =3D=3D vex_f2 ) - host_and_vcpu_must_have(avx512bw); + visa_check(bw); else { generate_exception_if(vex.w, X86_EXC_UD); if ( vex.pfx ) - host_and_vcpu_must_have(avx512dq); + visa_check(dq); + else + visa_check(f); } =20 get_fpu(X86EMUL_FPU_opmask); @@ -5022,10 +5033,9 @@ x86_emulate( dst =3D ea; dst.reg =3D decode_gpr(&_regs, modrm_reg); =20 - host_and_vcpu_must_have(avx512f); if ( vex.pfx =3D=3D vex_f2 ) { - host_and_vcpu_must_have(avx512bw); + visa_check(bw); dst.bytes =3D 4 << (mode_64bit() && vex.w); } else @@ -5033,7 +5043,9 @@ x86_emulate( generate_exception_if(vex.w, X86_EXC_UD); dst.bytes =3D 4; if ( vex.pfx ) - host_and_vcpu_must_have(avx512dq); + visa_check(dq); + else + visa_check(f); } =20 get_fpu(X86EMUL_FPU_opmask); @@ -5055,20 +5067,18 @@ x86_emulate( ASSERT(!state->simd_size); break; =20 - case X86EMUL_OPC_VEX(0x0f, 0x99): /* ktest{w,q} k,k */ - if ( !vex.w ) - host_and_vcpu_must_have(avx512dq); - /* fall through */ case X86EMUL_OPC_VEX(0x0f, 0x98): /* kortest{w,q} k,k */ case X86EMUL_OPC_VEX_66(0x0f, 0x98): /* kortest{b,d} k,k */ + case X86EMUL_OPC_VEX(0x0f, 
0x99): /* ktest{w,q} k,k */ case X86EMUL_OPC_VEX_66(0x0f, 0x99): /* ktest{b,d} k,k */ generate_exception_if(vex.l || !vex.r || vex.reg !=3D 0xf || ea.type !=3D OP_REG, X86_EXC_UD); - host_and_vcpu_must_have(avx512f); if ( vex.w ) - host_and_vcpu_must_have(avx512bw); - else if ( vex.pfx ) - host_and_vcpu_must_have(avx512dq); + visa_check(bw); + else if ( vex.pfx || (b & 1) ) + visa_check(dq); + else + visa_check(f); =20 get_fpu(X86EMUL_FPU_opmask); =20 @@ -5355,7 +5365,7 @@ x86_emulate( (evex.pfx & VEX_PREFIX_SCALAR_MASK)) || !evex.r || !evex.R || evex.z), X86_EXC_UD); - host_and_vcpu_must_have(avx512f); + visa_check(f); if ( ea.type !=3D OP_REG || !evex.brs ) avx512_vlen_check(evex.pfx & VEX_PREFIX_SCALAR_MASK); simd_imm8_zmm: @@ -5399,9 +5409,9 @@ x86_emulate( case X86EMUL_OPC_EVEX_66(0x0f3a, 0x22): /* vpinsr{d,q} $imm8,r/m,xmm,x= mm */ generate_exception_if(evex.lr || evex.opmsk || evex.brs, X86_EXC_U= D); if ( b & 2 ) - host_and_vcpu_must_have(avx512dq); + visa_check(dq); else - host_and_vcpu_must_have(avx512bw); + visa_check(bw); if ( !mode_64bit() ) evex.w =3D 0; memcpy(mmvalp, &src.val, src.bytes); @@ -5438,7 +5448,7 @@ x86_emulate( /* fall through */ case X86EMUL_OPC_EVEX_66(0x0f3a, 0x25): /* vpternlog{d,q} $imm8,[xyz]m= m/mem,[xyz]mm,[xyz]mm{k} */ avx512f_imm8_no_sae: - host_and_vcpu_must_have(avx512f); + visa_check(f); generate_exception_if(ea.type !=3D OP_MEM && evex.brs, X86_EXC_UD); avx512_vlen_check(false); goto simd_imm8_zmm; @@ -5537,7 +5547,7 @@ x86_emulate( case X86EMUL_OPC_EVEX_66(0x0f, 0xe4): /* vpmulhuw [xyz]mm/mem,[xyz]mm,= [xyz]mm{k} */ case X86EMUL_OPC_EVEX_66(0x0f, 0xea): /* vpminsw [xyz]mm/mem,[xyz]mm,[= xyz]mm{k} */ case X86EMUL_OPC_EVEX_66(0x0f, 0xee): /* vpmaxsw [xyz]mm/mem,[xyz]mm,[= xyz]mm{k} */ - host_and_vcpu_must_have(avx512bw); + visa_check(bw); generate_exception_if(evex.brs, X86_EXC_UD); elem_bytes =3D b & 0x10 ? 
1 : 2; goto avx512f_no_sae; @@ -5762,7 +5772,7 @@ x86_emulate( case X86EMUL_OPC_EVEX_66(0x0f38, 0x10): /* vpsrlvw [xyz]mm/mem,[xyz]mm= ,[xyz]mm{k} */ case X86EMUL_OPC_EVEX_66(0x0f38, 0x11): /* vpsravw [xyz]mm/mem,[xyz]mm= ,[xyz]mm{k} */ case X86EMUL_OPC_EVEX_66(0x0f38, 0x12): /* vpsllvw [xyz]mm/mem,[xyz]mm= ,[xyz]mm{k} */ - host_and_vcpu_must_have(avx512bw); + visa_check(bw); generate_exception_if(!evex.w || evex.brs, X86_EXC_UD); elem_bytes =3D 2; goto avx512f_no_sae; @@ -5772,7 +5782,7 @@ x86_emulate( case X86EMUL_OPC_EVEX_F3(0x0f38, 0x20): /* vpmovswb [xyz]mm,{x,y}mm/me= m{k} */ case X86EMUL_OPC_EVEX_66(0x0f38, 0x30): /* vpmovzxbw {x,y}mm/mem,[xyz]= mm{k} */ case X86EMUL_OPC_EVEX_F3(0x0f38, 0x30): /* vpmovwb [xyz]mm,{x,y}mm/mem= {k} */ - host_and_vcpu_must_have(avx512bw); + visa_check(bw); if ( evex.pfx !=3D vex_f3 ) { case X86EMUL_OPC_EVEX_66(0x0f38, 0x21): /* vpmovsxbd xmm/mem,[xyz]mm{k= } */ @@ -5820,7 +5830,7 @@ x86_emulate( =20 case X86EMUL_OPC_EVEX_66(0x0f38, 0x13): /* vcvtph2ps {x,y}mm/mem,[xyz]= mm{k} */ generate_exception_if(evex.w || (ea.type !=3D OP_REG && evex.brs),= X86_EXC_UD); - host_and_vcpu_must_have(avx512f); + visa_check(f); if ( !evex.brs ) avx512_vlen_check(false); op_bytes =3D 8 << evex.lr; @@ -5874,7 +5884,7 @@ x86_emulate( op_bytes =3D 8; generate_exception_if(evex.brs, X86_EXC_UD); if ( !evex.w ) - host_and_vcpu_must_have(avx512dq); + visa_check(dq); goto avx512_broadcast; =20 case X86EMUL_OPC_EVEX_66(0x0f38, 0x1a): /* vbroadcastf32x4 m128,{y,z}m= m{k} */ @@ -5884,7 +5894,7 @@ x86_emulate( generate_exception_if(ea.type !=3D OP_MEM || !evex.lr || evex.brs, X86_EXC_UD); if ( evex.w ) - host_and_vcpu_must_have(avx512dq); + visa_check(dq); goto avx512_broadcast; =20 case X86EMUL_OPC_VEX_66(0x0f38, 0x20): /* vpmovsxbw xmm/mem,{x,y}mm */ @@ -5909,9 +5919,9 @@ x86_emulate( case X86EMUL_OPC_EVEX_F3(0x0f38, 0x28): /* vpmovm2{b,w} k,[xyz]mm */ case X86EMUL_OPC_EVEX_F3(0x0f38, 0x38): /* vpmovm2{d,q} k,[xyz]mm */ if ( b & 0x10 ) - 
host_and_vcpu_must_have(avx512dq); + visa_check(dq); else - host_and_vcpu_must_have(avx512bw); + visa_check(bw); generate_exception_if(evex.opmsk || ea.type !=3D OP_REG, X86_EXC_U= D); d |=3D TwoOp; op_bytes =3D 16 << evex.lr; @@ -5954,7 +5964,7 @@ x86_emulate( fault_suppression =3D false; /* fall through */ case X86EMUL_OPC_EVEX_66(0x0f38, 0x44): /* vplzcnt{d,q} [xyz]mm/mem,[x= yz]mm{k} */ - host_and_vcpu_must_have(avx512cd); + visa_check(cd); goto avx512f_no_sae; =20 case X86EMUL_OPC_VEX_66(0x0f38, 0x2c): /* vmaskmovps mem,{x,y}mm,{x,y}= mm */ @@ -6030,7 +6040,7 @@ x86_emulate( case X86EMUL_OPC_EVEX_66(0x0f38, 0xba): /* vfmsub231p{s,d} [xyz]mm/mem= ,[xyz]mm,[xyz]mm{k} */ case X86EMUL_OPC_EVEX_66(0x0f38, 0xbc): /* vfnmadd231p{s,d} [xyz]mm/me= m,[xyz]mm,[xyz]mm{k} */ case X86EMUL_OPC_EVEX_66(0x0f38, 0xbe): /* vfnmsub231p{s,d} [xyz]mm/me= m,[xyz]mm,[xyz]mm{k} */ - host_and_vcpu_must_have(avx512f); + visa_check(f); if ( ea.type !=3D OP_REG || !evex.brs ) avx512_vlen_check(false); goto simd_zmm; @@ -6049,7 +6059,7 @@ x86_emulate( case X86EMUL_OPC_EVEX_66(0x0f38, 0xbb): /* vfmsub231s{s,d} xmm/mem,xmm= ,xmm{k} */ case X86EMUL_OPC_EVEX_66(0x0f38, 0xbd): /* vfnmadd231s{s,d} xmm/mem,xm= m,xmm{k} */ case X86EMUL_OPC_EVEX_66(0x0f38, 0xbf): /* vfnmsub231s{s,d} xmm/mem,xm= m,xmm{k} */ - host_and_vcpu_must_have(avx512f); + visa_check(f); generate_exception_if(ea.type !=3D OP_REG && evex.brs, X86_EXC_UD); if ( !evex.brs ) avx512_vlen_check(true); @@ -6063,14 +6073,14 @@ x86_emulate( case X86EMUL_OPC_EVEX_66(0x0f38, 0x3a): /* vpminuw [xyz]mm/mem,[xyz]mm= ,[xyz]mm{k} */ case X86EMUL_OPC_EVEX_66(0x0f38, 0x3c): /* vpmaxsb [xyz]mm/mem,[xyz]mm= ,[xyz]mm{k} */ case X86EMUL_OPC_EVEX_66(0x0f38, 0x3e): /* vpmaxuw [xyz]mm/mem,[xyz]mm= ,[xyz]mm{k} */ - host_and_vcpu_must_have(avx512bw); + visa_check(bw); generate_exception_if(evex.brs, X86_EXC_UD); elem_bytes =3D b & 2 ?: 1; goto avx512f_no_sae; =20 case X86EMUL_OPC_EVEX_66(0x0f38, 0x40): /* vpmull{d,q} [xyz]mm/mem,[xy= z]mm,[xyz]mm{k} */ 
        if ( evex.w )
-            host_and_vcpu_must_have(avx512dq);
+            visa_check(dq);
         goto avx512f_no_sae;
 
     case X86EMUL_OPC_66(0x0f38, 0xdb):     /* aesimc xmm/m128,xmm */
@@ -6110,7 +6120,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x51): /* vpdpbusds [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x52): /* vpdpwssd [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x53): /* vpdpwssds [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
-        host_and_vcpu_must_have(avx512_vnni);
+        visa_check(_vnni);
         generate_exception_if(evex.w, X86_EXC_UD);
         goto avx512f_no_sae;
 
@@ -6122,7 +6132,7 @@ x86_emulate(
         d |= TwoOp;
         /* fall through */
     case X86EMUL_OPC_EVEX_F3(0x0f38, 0x52): /* vdpbf16ps [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
-        host_and_vcpu_must_have(avx512_bf16);
+        visa_check(_bf16);
         generate_exception_if(evex.w, X86_EXC_UD);
         op_bytes = 16 << evex.lr;
         goto avx512f_no_sae;
@@ -6139,7 +6149,7 @@ x86_emulate(
 
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x4d): /* vrcp14s{s,d} xmm/mem,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x4f): /* vrsqrt14s{s,d} xmm/mem,xmm,xmm{k} */
-        host_and_vcpu_must_have(avx512f);
+        visa_check(f);
         generate_exception_if(evex.brs, X86_EXC_UD);
         avx512_vlen_check(true);
         goto simd_zmm;
@@ -6148,16 +6158,16 @@ x86_emulate(
         generate_exception_if(evex.w || !evex.r || !evex.R || evex.z, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x54): /* vpopcnt{b,w} [xyz]mm/mem,[xyz]mm{k} */
-        host_and_vcpu_must_have(avx512_bitalg);
+        visa_check(_bitalg);
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x66): /* vpblendm{b,w} [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
-        host_and_vcpu_must_have(avx512bw);
+        visa_check(bw);
         generate_exception_if(evex.brs, X86_EXC_UD);
         elem_bytes = 1 << evex.w;
         goto avx512f_no_sae;
 
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x55): /* vpopcnt{d,q} [xyz]mm/mem,[xyz]mm{k} */
-        host_and_vcpu_must_have(avx512_vpopcntdq);
+        visa_check(_vpopcntdq);
         goto avx512f_no_sae;
 
     case X86EMUL_OPC_VEX_66(0x0f38, 0x5a): /* vbroadcasti128 m128,ymm */
@@ -6166,14 +6176,14 @@ x86_emulate(
 
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x62): /* vpexpand{b,w} [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x63): /* vpcompress{b,w} [xyz]mm,[xyz]mm/mem{k} */
-        host_and_vcpu_must_have(avx512_vbmi2);
+        visa_check(_vbmi2);
         elem_bytes = 1 << evex.w;
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x88): /* vexpandp{s,d} [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x89): /* vpexpand{d,q} [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x8a): /* vcompressp{s,d} [xyz]mm,[xyz]mm/mem{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x8b): /* vpcompress{d,q} [xyz]mm,[xyz]mm/mem{k} */
-        host_and_vcpu_must_have(avx512f);
+        visa_check(f);
         generate_exception_if(evex.brs, X86_EXC_UD);
         avx512_vlen_check(false);
         /*
@@ -6207,7 +6217,7 @@ x86_emulate(
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x71): /* vpshldv{d,q} [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x73): /* vpshrdv{d,q} [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
-        host_and_vcpu_must_have(avx512_vbmi2);
+        visa_check(_vbmi2);
         goto avx512f_no_sae;
 
     case X86EMUL_OPC_VEX   (0x0f38, 0xb0): /* vcvtneoph2ps mem,[xy]mm */
@@ -6227,16 +6237,16 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x7d): /* vpermt2{b,w} [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x8d): /* vperm{b,w} [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         if ( !evex.w )
-            host_and_vcpu_must_have(avx512_vbmi);
+            visa_check(_vbmi);
         else
-            host_and_vcpu_must_have(avx512bw);
+            visa_check(bw);
         generate_exception_if(evex.brs, X86_EXC_UD);
         fault_suppression = false;
         goto avx512f_no_sae;
 
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x78): /* vpbroadcastb xmm/m8,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x79): /* vpbroadcastw xmm/m16,[xyz]mm{k} */
-        host_and_vcpu_must_have(avx512bw);
+        visa_check(bw);
         generate_exception_if(evex.w || evex.brs, X86_EXC_UD);
         op_bytes = elem_bytes = 1 << (b & 1);
         /* See the comment at the avx512_broadcast label. */
@@ -6245,14 +6255,14 @@ x86_emulate(
 
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x7a): /* vpbroadcastb r32,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x7b): /* vpbroadcastw r32,[xyz]mm{k} */
-        host_and_vcpu_must_have(avx512bw);
+        visa_check(bw);
         generate_exception_if(evex.w, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x7c): /* vpbroadcast{d,q} reg,[xyz]mm{k} */
         generate_exception_if((ea.type != OP_REG || evex.brs ||
                                evex.reg != 0xf || !evex.RX),
                               X86_EXC_UD);
-        host_and_vcpu_must_have(avx512f);
+        visa_check(f);
         avx512_vlen_check(false);
         get_fpu(X86EMUL_FPU_zmm);
 
@@ -6321,7 +6331,7 @@ x86_emulate(
 
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x83): /* vpmultishiftqb [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         generate_exception_if(!evex.w, X86_EXC_UD);
-        host_and_vcpu_must_have(avx512_vbmi);
+        visa_check(_vbmi);
         fault_suppression = false;
         goto avx512f_no_sae;
 
@@ -6479,8 +6489,8 @@ x86_emulate(
                                evex.reg != 0xf ||
                                modrm_reg == state->sib_index),
                               X86_EXC_UD);
+        visa_check(f);
         avx512_vlen_check(false);
-        host_and_vcpu_must_have(avx512f);
         get_fpu(X86EMUL_FPU_zmm);
 
         /* Read destination and index registers. */
@@ -6641,8 +6651,8 @@ x86_emulate(
                                evex.reg != 0xf ||
                                modrm_reg == state->sib_index),
                               X86_EXC_UD);
+        visa_check(f);
         avx512_vlen_check(false);
-        host_and_vcpu_must_have(avx512f);
         get_fpu(X86EMUL_FPU_zmm);
 
         /* Read source and index registers. */
@@ -6758,7 +6768,7 @@ x86_emulate(
 
     case X86EMUL_OPC_EVEX_66(0x0f38, 0xb4): /* vpmadd52luq [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0xb5): /* vpmadd52huq [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
-        host_and_vcpu_must_have(avx512_ifma);
+        visa_check(_ifma);
         generate_exception_if(!evex.w, X86_EXC_UD);
         goto avx512f_no_sae;
 
@@ -7238,7 +7248,7 @@ x86_emulate(
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x08): /* vrndscaleps $imm8,[xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x09): /* vrndscalepd $imm8,[xyz]mm/mem,[xyz]mm{k} */
-        host_and_vcpu_must_have(avx512f);
+        visa_check(f);
         generate_exception_if(evex.w != (b & 1), X86_EXC_UD);
         avx512_vlen_check(b & 2);
         goto simd_imm8_zmm;
@@ -7247,7 +7257,7 @@ x86_emulate(
         generate_exception_if(ea.type != OP_REG && evex.brs, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_EVEX(0x0f3a, 0x08): /* vrndscaleph $imm8,[xyz]mm/mem,[xyz]mm{k} */
-        host_and_vcpu_must_have(avx512_fp16);
+        visa_check(_fp16);
         generate_exception_if(evex.w, X86_EXC_UD);
         avx512_vlen_check(b & 2);
         goto simd_imm8_zmm;
@@ -7360,11 +7370,11 @@ x86_emulate(
                                evex.opmsk || evex.brs),
                               X86_EXC_UD);
         if ( !(b & 2) )
-            host_and_vcpu_must_have(avx512bw);
+            visa_check(bw);
         else if ( !(b & 1) )
-            host_and_vcpu_must_have(avx512dq);
+            visa_check(dq);
         else
-            host_and_vcpu_must_have(avx512f);
+            visa_check(f);
         get_fpu(X86EMUL_FPU_zmm);
         opc = init_evex(stub);
         goto pextr;
@@ -7378,7 +7388,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x39): /* vextracti32x4 $imm8,{y,z}mm,xmm/m128{k} */
                                             /* vextracti64x2 $imm8,{y,z}mm,xmm/m128{k} */
         if ( evex.w )
-            host_and_vcpu_must_have(avx512dq);
+            visa_check(dq);
         generate_exception_if(evex.brs, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x23): /* vshuff32x4 $imm8,{y,z}mm/mem,{y,z}mm,{y,z}mm{k} */
@@ -7398,7 +7408,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x3b): /* vextracti32x8 $imm8,zmm,ymm/m256{k} */
                                             /* vextracti64x4 $imm8,zmm,ymm/m256{k} */
         if ( !evex.w )
-            host_and_vcpu_must_have(avx512dq);
+            visa_check(dq);
         generate_exception_if(evex.lr != 2 || evex.brs, X86_EXC_UD);
         fault_suppression = false;
         goto avx512f_imm8_no_sae;
@@ -7414,7 +7424,7 @@ x86_emulate(
         generate_exception_if((evex.w || evex.reg != 0xf || !evex.RX ||
                                (ea.type != OP_REG && (evex.z || evex.brs))),
                               X86_EXC_UD);
-        host_and_vcpu_must_have(avx512f);
+        visa_check(f);
         avx512_vlen_check(false);
         opc = init_evex(stub);
     }
@@ -7506,7 +7516,7 @@ x86_emulate(
         if ( !(b & 0x20) )
             goto avx512f_imm8_no_sae;
     avx512bw_imm:
-        host_and_vcpu_must_have(avx512bw);
+        visa_check(bw);
         generate_exception_if(evex.brs, X86_EXC_UD);
         elem_bytes = 1 << evex.w;
         avx512_vlen_check(false);
@@ -7545,7 +7555,7 @@ x86_emulate(
         goto simd_0f_imm8_avx;
 
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x21): /* vinsertps $imm8,xmm/m32,xmm,xmm */
-        host_and_vcpu_must_have(avx512f);
+        visa_check(f);
         generate_exception_if(evex.lr || evex.w || evex.opmsk || evex.brs,
                               X86_EXC_UD);
         op_bytes = 4;
@@ -7553,18 +7563,18 @@ x86_emulate(
 
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x50): /* vrangep{s,d} $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
    case X86EMUL_OPC_EVEX_66(0x0f3a, 0x56): /* vreducep{s,d} $imm8,[xyz]mm/mem,[xyz]mm{k} */
-        host_and_vcpu_must_have(avx512dq);
+        visa_check(dq);
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x26): /* vgetmantp{s,d} $imm8,[xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x54): /* vfixupimmp{s,d} $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
-        host_and_vcpu_must_have(avx512f);
+        visa_check(f);
         if ( ea.type != OP_REG || !evex.brs )
             avx512_vlen_check(false);
         goto simd_imm8_zmm;
 
     case X86EMUL_OPC_EVEX(0x0f3a, 0x26): /* vgetmantph $imm8,[xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX(0x0f3a, 0x56): /* vreduceph $imm8,[xyz]mm/mem,[xyz]mm{k} */
-        host_and_vcpu_must_have(avx512_fp16);
+        visa_check(_fp16);
         generate_exception_if(evex.w, X86_EXC_UD);
         if ( ea.type != OP_REG || !evex.brs )
             avx512_vlen_check(false);
@@ -7572,11 +7582,11 @@ x86_emulate(
 
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x51): /* vranges{s,d} $imm8,xmm/mem,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x57): /* vreduces{s,d} $imm8,xmm/mem,xmm,xmm{k} */
-        host_and_vcpu_must_have(avx512dq);
+        visa_check(dq);
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x27): /* vgetmants{s,d} $imm8,xmm/mem,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x55): /* vfixupimms{s,d} $imm8,xmm/mem,xmm,xmm{k} */
-        host_and_vcpu_must_have(avx512f);
+        visa_check(f);
         generate_exception_if(ea.type != OP_REG && evex.brs, X86_EXC_UD);
         if ( !evex.brs )
             avx512_vlen_check(true);
@@ -7584,7 +7594,7 @@ x86_emulate(
 
     case X86EMUL_OPC_EVEX(0x0f3a, 0x27): /* vgetmantsh $imm8,xmm/mem,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX(0x0f3a, 0x57): /* vreducesh $imm8,xmm/mem,xmm,xmm{k} */
-        host_and_vcpu_must_have(avx512_fp16);
+        visa_check(_fp16);
         generate_exception_if(evex.w, X86_EXC_UD);
         if ( !evex.brs )
             avx512_vlen_check(true);
@@ -7595,18 +7605,19 @@ x86_emulate(
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x30): /* kshiftr{b,w} $imm8,k,k */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x32): /* kshiftl{b,w} $imm8,k,k */
         if ( !vex.w )
-            host_and_vcpu_must_have(avx512dq);
+            visa_check(dq);
+        else
+            visa_check(f);
     opmask_shift_imm:
         generate_exception_if(vex.l || !vex.r || vex.reg != 0xf ||
                               ea.type != OP_REG, X86_EXC_UD);
-        host_and_vcpu_must_have(avx512f);
        get_fpu(X86EMUL_FPU_opmask);
         op_bytes = 1; /* Any non-zero value will do. */
         goto simd_0f_imm8;
 
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x31): /* kshiftr{d,q} $imm8,k,k */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x33): /* kshiftl{d,q} $imm8,k,k */
-        host_and_vcpu_must_have(avx512bw);
+        visa_check(bw);
         goto opmask_shift_imm;
 
     case X86EMUL_OPC_66(0x0f3a, 0x44):     /* pclmulqdq $imm8,xmm/m128,xmm */
@@ -7747,7 +7758,7 @@ x86_emulate(
 
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x66): /* vfpclassp{s,d} $imm8,[xyz]mm/mem,k{k} */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x67): /* vfpclasss{s,d} $imm8,xmm/mem,k{k} */
-        host_and_vcpu_must_have(avx512dq);
+        visa_check(dq);
         generate_exception_if(!evex.r || !evex.R || evex.z, X86_EXC_UD);
         if ( !(b & 1) )
             goto avx512f_imm8_no_sae;
@@ -7757,7 +7768,7 @@ x86_emulate(
 
     case X86EMUL_OPC_EVEX(0x0f3a, 0x66): /* vfpclassph $imm8,[xyz]mm/mem,k{k} */
     case X86EMUL_OPC_EVEX(0x0f3a, 0x67): /* vfpclasssh $imm8,xmm/mem,k{k} */
-        host_and_vcpu_must_have(avx512_fp16);
+        visa_check(_fp16);
         generate_exception_if(evex.w || !evex.r || !evex.R || evex.z, X86_EXC_UD);
         if ( !(b & 1) )
             goto avx512f_imm8_no_sae;
@@ -7772,14 +7783,14 @@ x86_emulate(
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x71): /* vpshld{d,q} $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x73): /* vpshrd{d,q} $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
-        host_and_vcpu_must_have(avx512_vbmi2);
+        visa_check(_vbmi2);
         goto avx512f_imm8_no_sae;
 
     case X86EMUL_OPC_EVEX_F3(0x0f3a, 0xc2): /* vcmpsh $imm8,xmm/mem,xmm,k{k} */
         generate_exception_if(ea.type != OP_REG && evex.brs, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_EVEX(0x0f3a, 0xc2): /* vcmpph $imm8,[xyz]mm/mem,[xyz]mm,k{k} */
-        host_and_vcpu_must_have(avx512_fp16);
+        visa_check(_fp16);
         generate_exception_if(evex.w || !evex.r || !evex.R || evex.z, X86_EXC_UD);
         if ( ea.type != OP_REG || !evex.brs )
             avx512_vlen_check(evex.pfx & VEX_PREFIX_SCALAR_MASK);
@@ -7855,13 +7866,13 @@ x86_emulate(
    CASE_SIMD_SINGLE_FP(_EVEX, 5, 0x5d): /* vmin{p,s}h [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     CASE_SIMD_SINGLE_FP(_EVEX, 5, 0x5e): /* vdiv{p,s}h [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     CASE_SIMD_SINGLE_FP(_EVEX, 5, 0x5f): /* vmax{p,s}h [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
-        host_and_vcpu_must_have(avx512_fp16);
+        visa_check(_fp16);
         generate_exception_if(evex.w, X86_EXC_UD);
         goto avx512f_all_fp;
 
     CASE_SIMD_ALL_FP(_EVEX, 5, 0x5a):   /* vcvtp{h,d}2p{h,d} [xyz]mm/mem,[xyz]mm{k} */
                                         /* vcvts{h,d}2s{h,d} xmm/mem,xmm,xmm{k} */
-        host_and_vcpu_must_have(avx512_fp16);
+        visa_check(_fp16);
         if ( vex.pfx & VEX_PREFIX_SCALAR_MASK )
             d &= ~TwoOp;
         op_bytes = 2 << (((evex.pfx & VEX_PREFIX_SCALAR_MASK) ? 0 : 1 + evex.lr) +
@@ -7872,7 +7883,7 @@ x86_emulate(
                                         /* vcvtqq2ph [xyz]mm/mem,xmm{k} */
     case X86EMUL_OPC_EVEX_F2(5, 0x7a):  /* vcvtudq2ph [xyz]mm/mem,[xy]mm{k} */
                                         /* vcvtuqq2ph [xyz]mm/mem,xmm{k} */
-        host_and_vcpu_must_have(avx512_fp16);
+        visa_check(_fp16);
         if ( ea.type != OP_REG || !evex.brs )
             avx512_vlen_check(false);
         op_bytes = 16 << evex.lr;
@@ -7882,7 +7893,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_F3(5, 0x5b):  /* vcvttph2dq [xy]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX   (5, 0x78):  /* vcvttph2udq [xy]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX   (5, 0x79):  /* vcvtph2udq [xy]mm/mem,[xyz]mm{k} */
-        host_and_vcpu_must_have(avx512_fp16);
+        visa_check(_fp16);
         generate_exception_if(evex.w, X86_EXC_UD);
         if ( ea.type != OP_REG || !evex.brs )
             avx512_vlen_check(false);
@@ -7904,7 +7915,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(5, 0x79):  /* vcvtph2uqq xmm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(5, 0x7a):  /* vcvttph2qq xmm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(5, 0x7b):  /* vcvtph2qq xmm/mem,[xyz]mm{k} */
-        host_and_vcpu_must_have(avx512_fp16);
+        visa_check(_fp16);
         generate_exception_if(evex.w, X86_EXC_UD);
         if ( ea.type != OP_REG || !evex.brs )
             avx512_vlen_check(false);
@@ -7941,7 +7952,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(6, 0xba):  /* vfmsub231ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0xbc):  /* vfnmadd231ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0xbe):  /* vfnmsub231ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
-        host_and_vcpu_must_have(avx512_fp16);
+        visa_check(_fp16);
         generate_exception_if(evex.w, X86_EXC_UD);
         if ( ea.type != OP_REG || !evex.brs )
             avx512_vlen_check(false);
@@ -7963,7 +7974,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(6, 0xbb):  /* vfmsub231sh xmm/m16,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0xbd):  /* vfnmadd231sh xmm/m16,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0xbf):  /* vfnmsub231sh xmm/m16,xmm,xmm{k} */
-        host_and_vcpu_must_have(avx512_fp16);
+        visa_check(_fp16);
         generate_exception_if(evex.w || (ea.type != OP_REG && evex.brs),
                               X86_EXC_UD);
         if ( !evex.brs )
@@ -7972,13 +7983,13 @@ x86_emulate(
 
     case X86EMUL_OPC_EVEX_66(6, 0x4c):  /* vrcpph [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x4e):  /* vrsqrtph [xyz]mm/mem,[xyz]mm{k} */
-        host_and_vcpu_must_have(avx512_fp16);
+        visa_check(_fp16);
         generate_exception_if(evex.w, X86_EXC_UD);
         goto avx512f_no_sae;
 
     case X86EMUL_OPC_EVEX_66(6, 0x4d):  /* vrcpsh xmm/m16,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x4f):  /* vrsqrtsh xmm/m16,xmm,xmm{k} */
-        host_and_vcpu_must_have(avx512_fp16);
+        visa_check(_fp16);
         generate_exception_if(evex.w || evex.brs, X86_EXC_UD);
         avx512_vlen_check(true);
         goto simd_zmm;
@@ -7996,7 +8007,7 @@ x86_emulate(
     {
         unsigned int src1 = ~evex.reg;
 
-        host_and_vcpu_must_have(avx512_fp16);
+        visa_check(_fp16);
         generate_exception_if(evex.w || ((b & 1) && ea.type != OP_REG && evex.brs),
                               X86_EXC_UD);
         if ( mode_64bit() )
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -814,7 +814,7 @@ static void __init noinline xstate_check
     if ( cpu_has_mpx )
         check_new_xstate(&s, X86_XCR0_BNDCSR | X86_XCR0_BNDREGS);
 
-    if ( cpu_has_avx512f )
+    if ( boot_cpu_has(X86_FEATURE_AVX512F) || boot_cpu_has(X86_FEATURE_AVX10) )
         check_new_xstate(&s, X86_XCR0_HI_ZMM | X86_XCR0_ZMM | X86_XCR0_OPMASK);
 
     /*
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -366,7 +366,7 @@ XEN_CPUFEATURE(PREFETCHI, 15*32
 XEN_CPUFEATURE(USER_MSR,     15*32+15) /*s  U{RD,WR}MSR Instructions */
 XEN_CPUFEATURE(UIRET_UIF,    15*32+17) /*   UIRET updates UIF */
 XEN_CPUFEATURE(CET_SSS,      15*32+18) /*   CET Supervisor Shadow Stacks safe to use */
-XEN_CPUFEATURE(AVX10,        15*32+19) /*   AVX10 Converged Vector ISA */
+XEN_CPUFEATURE(AVX10,        15*32+19) /*a  AVX10 Converged Vector ISA */
 XEN_CPUFEATURE(SLSM,         15*32+24) /*   Static Lockstep Mode */
 
 /* Intel-defined CPU features, MSR_ARCH_CAPS 0x10a.eax, word 16 */