Subject: [PATCH 5/7] x86emul: split off insn decoding
From: Jan Beulich
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné
Date: Wed, 11 Aug 2021 14:24:38 +0200
Message-ID: <7f324493-6088-9147-4122-4691a86129cd@suse.com>
This is a fair chunk of code and data and can easily live separate from
the main emulation function. Code moved gets slightly adjusted in a few
places, e.g. replacing EXC_* by X86_EXC_* (such that EXC_* don't need to
move as well; we want these to be phased out anyway).

Signed-off-by: Jan Beulich

---
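[Usage sketch for context, not part of the patch: x86_decode_insn() (moved
into decode.c below) returns a per-CPU state object on success and an
ERR_PTR()-encoded status otherwise — NULL in the test-harness build, where
ERR_PTR() is stubbed to NULL. The callback name, its stub buffer, and the
x86_emulate_free_state() release call are illustrative assumptions layered
on the interface shown further down.]

    /* Hypothetical fetch callback: serves bytes from a local buffer. */
    static int my_insn_fetch(enum x86_segment seg, unsigned long offset,
                             void *p_data, unsigned int bytes,
                             struct x86_emulate_ctxt *ctxt)
    {
        static const uint8_t insn[] = { 0x0f, 0xa2 }; /* cpuid */

        /* Assumes ctxt->regs->rip == 0, so 'offset' indexes the buffer. */
        if ( offset + bytes > sizeof(insn) )
            return X86EMUL_UNHANDLEABLE;
        memcpy(p_data, insn + offset, bytes);
        return X86EMUL_OKAY;
    }

    /* ... in the caller ... */
    struct x86_emulate_state *state = x86_decode_insn(ctxt, my_insn_fetch);

    if ( !state || IS_ERR(state) )
        return; /* -PTR_ERR(state) would be the X86EMUL_* status */
    /* ... query decode results, e.g. via x86_insn_length(state, ctxt) ... */
    x86_emulate_free_state(state); /* satisfies the "unreleased state" check */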
+ * + * Copyright (c) 2005-2007 Keir Fraser + * Copyright (c) 2005-2007 XenSource Inc. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; If not, see . + */ + +#include "private.h" + +#ifdef __XEN__ +# include +#else +# define ERR_PTR(val) NULL +#endif + +#define evex_encoded() (s->evex.mbs) + +struct x86_emulate_state * +x86_decode_insn( + struct x86_emulate_ctxt *ctxt, + int (*insn_fetch)( + enum x86_segment seg, unsigned long offset, + void *p_data, unsigned int bytes, + struct x86_emulate_ctxt *ctxt)) +{ + static DEFINE_PER_CPU(struct x86_emulate_state, state); + struct x86_emulate_state *s =3D &this_cpu(state); + const struct x86_emulate_ops ops =3D { + .insn_fetch =3D insn_fetch, + .read =3D x86emul_unhandleable_rw, + }; + int rc; + + init_context(ctxt); + + rc =3D x86emul_decode(s, ctxt, &ops); + if ( unlikely(rc !=3D X86EMUL_OKAY) ) + return ERR_PTR(-rc); + +#if defined(__XEN__) && !defined(NDEBUG) + /* + * While we avoid memory allocation (by use of per-CPU data) above, + * nevertheless make sure callers properly release the state structure + * for forward compatibility. + */ + if ( s->caller ) + { + printk(XENLOG_ERR "Unreleased emulation state acquired by %ps\n", + s->caller); + dump_execution_state(); + } + s->caller =3D __builtin_return_address(0); +#endif + + return s; +} + +static const opcode_desc_t opcode_table[256] =3D { + /* 0x00 - 0x07 */ + ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM, + ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM, + ByteOp|DstEax|SrcImm, DstEax|SrcImm, ImplicitOps|Mov, ImplicitOps|Mov, + /* 0x08 - 0x0F */ + ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM, + ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM, + ByteOp|DstEax|SrcImm, DstEax|SrcImm, ImplicitOps|Mov, 0, + /* 0x10 - 0x17 */ + ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM, + ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM, + ByteOp|DstEax|SrcImm, DstEax|SrcImm, ImplicitOps|Mov, ImplicitOps|Mov, + /* 0x18 - 0x1F */ + ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM, + ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM, + ByteOp|DstEax|SrcImm, DstEax|SrcImm, ImplicitOps|Mov, ImplicitOps|Mov, + /* 0x20 - 0x27 */ + ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM, + ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM, + ByteOp|DstEax|SrcImm, DstEax|SrcImm, 0, ImplicitOps, + /* 0x28 - 0x2F */ + ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM, + ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM, + ByteOp|DstEax|SrcImm, DstEax|SrcImm, 0, ImplicitOps, + /* 0x30 - 0x37 */ + ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM, + ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM, + ByteOp|DstEax|SrcImm, DstEax|SrcImm, 0, ImplicitOps, + /* 0x38 - 0x3F */ + ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM, + ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM, + ByteOp|DstEax|SrcImm, DstEax|SrcImm, 0, ImplicitOps, + /* 0x40 - 0x4F */ + ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps, + ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps, + ImplicitOps, 
ImplicitOps, ImplicitOps, ImplicitOps, + ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps, + /* 0x50 - 0x5F */ + ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, + ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, + ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, + ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, + /* 0x60 - 0x67 */ + ImplicitOps, ImplicitOps, DstReg|SrcMem|ModRM, DstReg|SrcNone|ModRM|Mo= v, + 0, 0, 0, 0, + /* 0x68 - 0x6F */ + DstImplicit|SrcImm|Mov, DstReg|SrcImm|ModRM|Mov, + DstImplicit|SrcImmByte|Mov, DstReg|SrcImmByte|ModRM|Mov, + ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, + /* 0x70 - 0x77 */ + DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, + DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, + DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, + DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, + /* 0x78 - 0x7F */ + DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, + DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, + DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, + DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, + /* 0x80 - 0x87 */ + ByteOp|DstMem|SrcImm|ModRM, DstMem|SrcImm|ModRM, + ByteOp|DstMem|SrcImm|ModRM, DstMem|SrcImmByte|ModRM, + ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM, + ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM, + /* 0x88 - 0x8F */ + ByteOp|DstMem|SrcReg|ModRM|Mov, DstMem|SrcReg|ModRM|Mov, + ByteOp|DstReg|SrcMem|ModRM|Mov, DstReg|SrcMem|ModRM|Mov, + DstMem|SrcReg|ModRM|Mov, DstReg|SrcNone|ModRM, + DstReg|SrcMem16|ModRM|Mov, DstMem|SrcNone|ModRM|Mov, + /* 0x90 - 0x97 */ + DstImplicit|SrcEax, DstImplicit|SrcEax, + DstImplicit|SrcEax, DstImplicit|SrcEax, + DstImplicit|SrcEax, DstImplicit|SrcEax, + DstImplicit|SrcEax, DstImplicit|SrcEax, + /* 0x98 - 0x9F */ + ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps, + ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps, ImplicitOps, + /* 0xA0 - 0xA7 */ + ByteOp|DstEax|SrcMem|Mov, DstEax|SrcMem|Mov, + ByteOp|DstMem|SrcEax|Mov, DstMem|SrcEax|Mov, + ByteOp|ImplicitOps|Mov, ImplicitOps|Mov, + ByteOp|ImplicitOps, ImplicitOps, + /* 0xA8 - 0xAF */ + ByteOp|DstEax|SrcImm, DstEax|SrcImm, + ByteOp|DstImplicit|SrcEax|Mov, DstImplicit|SrcEax|Mov, + ByteOp|DstEax|SrcImplicit|Mov, DstEax|SrcImplicit|Mov, + ByteOp|DstImplicit|SrcEax, DstImplicit|SrcEax, + /* 0xB0 - 0xB7 */ + ByteOp|DstReg|SrcImm|Mov, ByteOp|DstReg|SrcImm|Mov, + ByteOp|DstReg|SrcImm|Mov, ByteOp|DstReg|SrcImm|Mov, + ByteOp|DstReg|SrcImm|Mov, ByteOp|DstReg|SrcImm|Mov, + ByteOp|DstReg|SrcImm|Mov, ByteOp|DstReg|SrcImm|Mov, + /* 0xB8 - 0xBF */ + DstReg|SrcImm|Mov, DstReg|SrcImm|Mov, DstReg|SrcImm|Mov, DstReg|SrcImm= |Mov, + DstReg|SrcImm|Mov, DstReg|SrcImm|Mov, DstReg|SrcImm|Mov, DstReg|SrcImm= |Mov, + /* 0xC0 - 0xC7 */ + ByteOp|DstMem|SrcImm|ModRM, DstMem|SrcImmByte|ModRM, + DstImplicit|SrcImm16, ImplicitOps, + DstReg|SrcMem|ModRM|Mov, DstReg|SrcMem|ModRM|Mov, + ByteOp|DstMem|SrcImm|ModRM|Mov, DstMem|SrcImm|ModRM|Mov, + /* 0xC8 - 0xCF */ + DstImplicit|SrcImm16, ImplicitOps, DstImplicit|SrcImm16, ImplicitOps, + ImplicitOps, DstImplicit|SrcImmByte, ImplicitOps, ImplicitOps, + /* 0xD0 - 0xD7 */ + ByteOp|DstMem|SrcImplicit|ModRM, DstMem|SrcImplicit|ModRM, + ByteOp|DstMem|SrcImplicit|ModRM, DstMem|SrcImplicit|ModRM, + DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, ImplicitOps, ImplicitO= ps, + /* 0xD8 - 0xDF */ + ImplicitOps|ModRM, ImplicitOps|ModRM|Mov, + ImplicitOps|ModRM, ImplicitOps|ModRM|Mov, + ImplicitOps|ModRM, ImplicitOps|ModRM|Mov, + DstImplicit|SrcMem16|ModRM, 
ImplicitOps|ModRM|Mov, + /* 0xE0 - 0xE7 */ + DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, + DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, + DstEax|SrcImmByte, DstEax|SrcImmByte, + DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, + /* 0xE8 - 0xEF */ + DstImplicit|SrcImm|Mov, DstImplicit|SrcImm, + ImplicitOps, DstImplicit|SrcImmByte, + DstEax|SrcImplicit, DstEax|SrcImplicit, ImplicitOps, ImplicitOps, + /* 0xF0 - 0xF7 */ + 0, ImplicitOps, 0, 0, + ImplicitOps, ImplicitOps, ByteOp|ModRM, ModRM, + /* 0xF8 - 0xFF */ + ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps, + ImplicitOps, ImplicitOps, ByteOp|DstMem|SrcNone|ModRM, DstMem|SrcNone|= ModRM +}; + +static const struct twobyte_table { + opcode_desc_t desc; + simd_opsize_t size:4; + disp8scale_t d8s:4; +} twobyte_table[256] =3D { + [0x00] =3D { ModRM }, + [0x01] =3D { ImplicitOps|ModRM }, + [0x02] =3D { DstReg|SrcMem16|ModRM }, + [0x03] =3D { DstReg|SrcMem16|ModRM }, + [0x05] =3D { ImplicitOps }, + [0x06] =3D { ImplicitOps }, + [0x07] =3D { ImplicitOps }, + [0x08] =3D { ImplicitOps }, + [0x09] =3D { ImplicitOps }, + [0x0b] =3D { ImplicitOps }, + [0x0d] =3D { ImplicitOps|ModRM }, + [0x0e] =3D { ImplicitOps }, + [0x0f] =3D { ModRM|SrcImmByte }, + [0x10] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_any_fp, d8s_vl }, + [0x11] =3D { DstMem|SrcImplicit|ModRM|Mov, simd_any_fp, d8s_vl }, + [0x12] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_other, 3 }, + [0x13] =3D { DstMem|SrcImplicit|ModRM|Mov, simd_other, 3 }, + [0x14 ... 0x15] =3D { DstImplicit|SrcMem|ModRM, simd_packed_fp, d8s_vl= }, + [0x16] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_other, 3 }, + [0x17] =3D { DstMem|SrcImplicit|ModRM|Mov, simd_other, 3 }, + [0x18 ... 0x1f] =3D { ImplicitOps|ModRM }, + [0x20 ... 0x21] =3D { DstMem|SrcImplicit|ModRM }, + [0x22 ... 0x23] =3D { DstImplicit|SrcMem|ModRM }, + [0x28] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_packed_fp, d8s_vl }, + [0x29] =3D { DstMem|SrcImplicit|ModRM|Mov, simd_packed_fp, d8s_vl }, + [0x2a] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_other, d8s_dq64 }, + [0x2b] =3D { DstMem|SrcImplicit|ModRM|Mov, simd_any_fp, d8s_vl }, + [0x2c ... 0x2d] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_other }, + [0x2e ... 0x2f] =3D { ImplicitOps|ModRM|TwoOp, simd_none, d8s_dq }, + [0x30 ... 0x35] =3D { ImplicitOps }, + [0x37] =3D { ImplicitOps }, + [0x38] =3D { DstReg|SrcMem|ModRM }, + [0x3a] =3D { DstReg|SrcImmByte|ModRM }, + [0x40 ... 0x4f] =3D { DstReg|SrcMem|ModRM|Mov }, + [0x50] =3D { DstReg|SrcImplicit|ModRM|Mov }, + [0x51] =3D { DstImplicit|SrcMem|ModRM|TwoOp, simd_any_fp, d8s_vl }, + [0x52 ... 0x53] =3D { DstImplicit|SrcMem|ModRM|TwoOp, simd_single_fp }, + [0x54 ... 0x57] =3D { DstImplicit|SrcMem|ModRM, simd_packed_fp, d8s_vl= }, + [0x58 ... 0x59] =3D { DstImplicit|SrcMem|ModRM, simd_any_fp, d8s_vl }, + [0x5a] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_any_fp, d8s_vl }, + [0x5b] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_packed_fp, d8s_vl }, + [0x5c ... 0x5f] =3D { DstImplicit|SrcMem|ModRM, simd_any_fp, d8s_vl }, + [0x60 ... 0x62] =3D { DstImplicit|SrcMem|ModRM, simd_other, d8s_vl }, + [0x63 ... 0x67] =3D { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_v= l }, + [0x68 ... 0x6a] =3D { DstImplicit|SrcMem|ModRM, simd_other, d8s_vl }, + [0x6b ... 0x6d] =3D { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_v= l }, + [0x6e] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_none, d8s_dq64 }, + [0x6f] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_packed_int, d8s_vl }, + [0x70] =3D { SrcImmByte|ModRM|TwoOp, simd_other, d8s_vl }, + [0x71 ... 
0x73] =3D { DstImplicit|SrcImmByte|ModRM, simd_none, d8s_vl = }, + [0x74 ... 0x76] =3D { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_v= l }, + [0x77] =3D { DstImplicit|SrcNone }, + [0x78 ... 0x79] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_other, d8s_vl= }, + [0x7a] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_packed_fp, d8s_vl }, + [0x7b] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_other, d8s_dq64 }, + [0x7c ... 0x7d] =3D { DstImplicit|SrcMem|ModRM, simd_other }, + [0x7e] =3D { DstMem|SrcImplicit|ModRM|Mov, simd_none, d8s_dq64 }, + [0x7f] =3D { DstMem|SrcImplicit|ModRM|Mov, simd_packed_int, d8s_vl }, + [0x80 ... 0x8f] =3D { DstImplicit|SrcImm }, + [0x90 ... 0x9f] =3D { ByteOp|DstMem|SrcNone|ModRM|Mov }, + [0xa0 ... 0xa1] =3D { ImplicitOps|Mov }, + [0xa2] =3D { ImplicitOps }, + [0xa3] =3D { DstBitBase|SrcReg|ModRM }, + [0xa4] =3D { DstMem|SrcImmByte|ModRM }, + [0xa5] =3D { DstMem|SrcReg|ModRM }, + [0xa6 ... 0xa7] =3D { ModRM }, + [0xa8 ... 0xa9] =3D { ImplicitOps|Mov }, + [0xaa] =3D { ImplicitOps }, + [0xab] =3D { DstBitBase|SrcReg|ModRM }, + [0xac] =3D { DstMem|SrcImmByte|ModRM }, + [0xad] =3D { DstMem|SrcReg|ModRM }, + [0xae] =3D { ImplicitOps|ModRM }, + [0xaf] =3D { DstReg|SrcMem|ModRM }, + [0xb0] =3D { ByteOp|DstMem|SrcReg|ModRM }, + [0xb1] =3D { DstMem|SrcReg|ModRM }, + [0xb2] =3D { DstReg|SrcMem|ModRM|Mov }, + [0xb3] =3D { DstBitBase|SrcReg|ModRM }, + [0xb4 ... 0xb5] =3D { DstReg|SrcMem|ModRM|Mov }, + [0xb6] =3D { ByteOp|DstReg|SrcMem|ModRM|Mov }, + [0xb7] =3D { DstReg|SrcMem16|ModRM|Mov }, + [0xb8] =3D { DstReg|SrcMem|ModRM }, + [0xb9] =3D { ModRM }, + [0xba] =3D { DstBitBase|SrcImmByte|ModRM }, + [0xbb] =3D { DstBitBase|SrcReg|ModRM }, + [0xbc ... 0xbd] =3D { DstReg|SrcMem|ModRM }, + [0xbe] =3D { ByteOp|DstReg|SrcMem|ModRM|Mov }, + [0xbf] =3D { DstReg|SrcMem16|ModRM|Mov }, + [0xc0] =3D { ByteOp|DstMem|SrcReg|ModRM }, + [0xc1] =3D { DstMem|SrcReg|ModRM }, + [0xc2] =3D { DstImplicit|SrcImmByte|ModRM, simd_any_fp, d8s_vl }, + [0xc3] =3D { DstMem|SrcReg|ModRM|Mov }, + [0xc4] =3D { DstImplicit|SrcImmByte|ModRM, simd_none, 1 }, + [0xc5] =3D { DstReg|SrcImmByte|ModRM|Mov }, + [0xc6] =3D { DstImplicit|SrcImmByte|ModRM, simd_packed_fp, d8s_vl }, + [0xc7] =3D { ImplicitOps|ModRM }, + [0xc8 ... 0xcf] =3D { ImplicitOps }, + [0xd0] =3D { DstImplicit|SrcMem|ModRM, simd_other }, + [0xd1 ... 0xd3] =3D { DstImplicit|SrcMem|ModRM, simd_128, 4 }, + [0xd4 ... 0xd5] =3D { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_v= l }, + [0xd6] =3D { DstMem|SrcImplicit|ModRM|Mov, simd_other, 3 }, + [0xd7] =3D { DstReg|SrcImplicit|ModRM|Mov }, + [0xd8 ... 0xdf] =3D { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_v= l }, + [0xe0] =3D { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_vl }, + [0xe1 ... 0xe2] =3D { DstImplicit|SrcMem|ModRM, simd_128, 4 }, + [0xe3 ... 0xe5] =3D { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_v= l }, + [0xe6] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_packed_fp, d8s_vl }, + [0xe7] =3D { DstMem|SrcImplicit|ModRM|Mov, simd_packed_int, d8s_vl }, + [0xe8 ... 0xef] =3D { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_v= l }, + [0xf0] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_other }, + [0xf1 ... 0xf3] =3D { DstImplicit|SrcMem|ModRM, simd_128, 4 }, + [0xf4 ... 0xf6] =3D { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_v= l }, + [0xf7] =3D { DstMem|SrcMem|ModRM|Mov, simd_packed_int }, + [0xf8 ... 
0xfe] =3D { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_v= l }, + [0xff] =3D { ModRM } +}; + +/* + * "two_op" and "four_op" below refer to the number of register operands + * (one of which possibly also allowing to be a memory one). The named + * operand counts do not include any immediate operands. + */ +static const struct ext0f38_table { + uint8_t simd_size:5; + uint8_t to_mem:1; + uint8_t two_op:1; + uint8_t vsib:1; + disp8scale_t d8s:4; +} ext0f38_table[256] =3D { + [0x00] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, + [0x01 ... 0x03] =3D { .simd_size =3D simd_packed_int }, + [0x04] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, + [0x05 ... 0x0a] =3D { .simd_size =3D simd_packed_int }, + [0x0b] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, + [0x0c ... 0x0d] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, + [0x0e ... 0x0f] =3D { .simd_size =3D simd_packed_fp }, + [0x10 ... 0x12] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, + [0x13] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_2 }, + [0x14 ... 0x16] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, + [0x17] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, + [0x18] =3D { .simd_size =3D simd_scalar_opc, .two_op =3D 1, .d8s =3D 2= }, + [0x19] =3D { .simd_size =3D simd_scalar_opc, .two_op =3D 1, .d8s =3D 3= }, + [0x1a] =3D { .simd_size =3D simd_128, .two_op =3D 1, .d8s =3D 4 }, + [0x1b] =3D { .simd_size =3D simd_256, .two_op =3D 1, .d8s =3D d8s_vl_b= y_2 }, + [0x1c ... 0x1f] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1, .= d8s =3D d8s_vl }, + [0x20] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_2 }, + [0x21] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_4 }, + [0x22] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_8 }, + [0x23] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_2 }, + [0x24] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_4 }, + [0x25] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_2 }, + [0x26 ... 0x29] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, + [0x2a] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1, .d8s =3D d= 8s_vl }, + [0x2b] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, + [0x2c] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, + [0x2d] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_dq }, + [0x2e ... 0x2f] =3D { .simd_size =3D simd_packed_fp, .to_mem =3D 1 }, + [0x30] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_2 }, + [0x31] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_4 }, + [0x32] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_8 }, + [0x33] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_2 }, + [0x34] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_4 }, + [0x35] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_2 }, + [0x36 ... 0x3f] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, + [0x40] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, + [0x41] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, + [0x42] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d8s =3D d8= s_vl }, + [0x43] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, + [0x44] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1, .d8s =3D d= 8s_vl }, + [0x45 ... 
0x47] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, + [0x4c] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d8s =3D d8= s_vl }, + [0x4d] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, + [0x4e] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d8s =3D d8= s_vl }, + [0x4f] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, + [0x50 ... 0x53] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, + [0x54 ... 0x55] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1, .= d8s =3D d8s_vl }, + [0x58] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D 2 }, + [0x59] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D 3 }, + [0x5a] =3D { .simd_size =3D simd_128, .two_op =3D 1, .d8s =3D 4 }, + [0x5b] =3D { .simd_size =3D simd_256, .two_op =3D 1, .d8s =3D d8s_vl_b= y_2 }, + [0x62] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1, .d8s =3D d= 8s_bw }, + [0x63] =3D { .simd_size =3D simd_packed_int, .to_mem =3D 1, .two_op = =3D 1, .d8s =3D d8s_bw }, + [0x64 ... 0x66] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, + [0x68] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, + [0x70 ... 0x73] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, + [0x75 ... 0x76] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, + [0x77] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, + [0x78] =3D { .simd_size =3D simd_other, .two_op =3D 1 }, + [0x79] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D 1 }, + [0x7a ... 0x7c] =3D { .simd_size =3D simd_none, .two_op =3D 1 }, + [0x7d ... 0x7e] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, + [0x7f] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, + [0x82] =3D { .simd_size =3D simd_other }, + [0x83] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, + [0x88] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d8s =3D d8= s_dq }, + [0x89] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1, .d8s =3D d= 8s_dq }, + [0x8a] =3D { .simd_size =3D simd_packed_fp, .to_mem =3D 1, .two_op =3D= 1, .d8s =3D d8s_dq }, + [0x8b] =3D { .simd_size =3D simd_packed_int, .to_mem =3D 1, .two_op = =3D 1, .d8s =3D d8s_dq }, + [0x8c] =3D { .simd_size =3D simd_packed_int }, + [0x8d] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, + [0x8e] =3D { .simd_size =3D simd_packed_int, .to_mem =3D 1 }, + [0x8f] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, + [0x90 ... 0x93] =3D { .simd_size =3D simd_other, .vsib =3D 1, .d8s =3D= d8s_dq }, + [0x96 ... 0x98] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, + [0x99] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, + [0x9a] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, + [0x9b] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, + [0x9c] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, + [0x9d] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, + [0x9e] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, + [0x9f] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, + [0xa0 ... 0xa3] =3D { .simd_size =3D simd_other, .to_mem =3D 1, .vsib = =3D 1, .d8s =3D d8s_dq }, + [0xa6 ... 
0xa8] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, + [0xa9] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, + [0xaa] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, + [0xab] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, + [0xac] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, + [0xad] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, + [0xae] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, + [0xaf] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, + [0xb4 ... 0xb5] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, + [0xb6 ... 0xb8] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, + [0xb9] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, + [0xba] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, + [0xbb] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, + [0xbc] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, + [0xbd] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, + [0xbe] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, + [0xbf] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, + [0xc4] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1, .d8s =3D d= 8s_vl }, + [0xc6 ... 0xc7] =3D { .simd_size =3D simd_other, .vsib =3D 1, .d8s =3D= d8s_dq }, + [0xc8] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d8s =3D d8= s_vl }, + [0xc9] =3D { .simd_size =3D simd_other }, + [0xca] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d8s =3D d8= s_vl }, + [0xcb] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, + [0xcc] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d8s =3D d8= s_vl }, + [0xcd] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, + [0xcf] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, + [0xdb] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, + [0xdc ... 0xdf] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, + [0xf0] =3D { .two_op =3D 1 }, + [0xf1] =3D { .to_mem =3D 1, .two_op =3D 1 }, + [0xf2 ... 0xf3] =3D {}, + [0xf5 ... 0xf7] =3D {}, + [0xf8] =3D { .simd_size =3D simd_other }, + [0xf9] =3D { .to_mem =3D 1, .two_op =3D 1 /* Mov */ }, +}; + +static const struct ext0f3a_table { + uint8_t simd_size:5; + uint8_t to_mem:1; + uint8_t two_op:1; + uint8_t four_op:1; + disp8scale_t d8s:4; +} ext0f3a_table[256] =3D { + [0x00] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1, .d8s =3D d= 8s_vl }, + [0x01] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d8s =3D d8= s_vl }, + [0x02] =3D { .simd_size =3D simd_packed_int }, + [0x03] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, + [0x04 ... 0x05] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d= 8s =3D d8s_vl }, + [0x06] =3D { .simd_size =3D simd_packed_fp }, + [0x08 ... 0x09] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d= 8s =3D d8s_vl }, + [0x0a ... 0x0b] =3D { .simd_size =3D simd_scalar_opc, .d8s =3D d8s_dq = }, + [0x0c ... 
0x0d] =3D { .simd_size =3D simd_packed_fp }, + [0x0e] =3D { .simd_size =3D simd_packed_int }, + [0x0f] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, + [0x14] =3D { .simd_size =3D simd_none, .to_mem =3D 1, .two_op =3D 1, .= d8s =3D 0 }, + [0x15] =3D { .simd_size =3D simd_none, .to_mem =3D 1, .two_op =3D 1, .= d8s =3D 1 }, + [0x16] =3D { .simd_size =3D simd_none, .to_mem =3D 1, .two_op =3D 1, .= d8s =3D d8s_dq64 }, + [0x17] =3D { .simd_size =3D simd_none, .to_mem =3D 1, .two_op =3D 1, .= d8s =3D 2 }, + [0x18] =3D { .simd_size =3D simd_128, .d8s =3D 4 }, + [0x19] =3D { .simd_size =3D simd_128, .to_mem =3D 1, .two_op =3D 1, .d= 8s =3D 4 }, + [0x1a] =3D { .simd_size =3D simd_256, .d8s =3D d8s_vl_by_2 }, + [0x1b] =3D { .simd_size =3D simd_256, .to_mem =3D 1, .two_op =3D 1, .d= 8s =3D d8s_vl_by_2 }, + [0x1d] =3D { .simd_size =3D simd_other, .to_mem =3D 1, .two_op =3D 1, = .d8s =3D d8s_vl_by_2 }, + [0x1e ... 0x1f] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, + [0x20] =3D { .simd_size =3D simd_none, .d8s =3D 0 }, + [0x21] =3D { .simd_size =3D simd_other, .d8s =3D 2 }, + [0x22] =3D { .simd_size =3D simd_none, .d8s =3D d8s_dq64 }, + [0x23] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, + [0x25] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, + [0x26] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d8s =3D d8= s_vl }, + [0x27] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, + [0x30 ... 0x33] =3D { .simd_size =3D simd_other, .two_op =3D 1 }, + [0x38] =3D { .simd_size =3D simd_128, .d8s =3D 4 }, + [0x3a] =3D { .simd_size =3D simd_256, .d8s =3D d8s_vl_by_2 }, + [0x39] =3D { .simd_size =3D simd_128, .to_mem =3D 1, .two_op =3D 1, .d= 8s =3D 4 }, + [0x3b] =3D { .simd_size =3D simd_256, .to_mem =3D 1, .two_op =3D 1, .d= 8s =3D d8s_vl_by_2 }, + [0x3e ... 0x3f] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, + [0x40 ... 0x41] =3D { .simd_size =3D simd_packed_fp }, + [0x42 ... 0x43] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, + [0x44] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, + [0x46] =3D { .simd_size =3D simd_packed_int }, + [0x48 ... 0x49] =3D { .simd_size =3D simd_packed_fp, .four_op =3D 1 }, + [0x4a ... 0x4b] =3D { .simd_size =3D simd_packed_fp, .four_op =3D 1 }, + [0x4c] =3D { .simd_size =3D simd_packed_int, .four_op =3D 1 }, + [0x50] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, + [0x51] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, + [0x54] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, + [0x55] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, + [0x56] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d8s =3D d8= s_vl }, + [0x57] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, + [0x5c ... 0x5f] =3D { .simd_size =3D simd_packed_fp, .four_op =3D 1 }, + [0x60 ... 0x63] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, + [0x66] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d8s =3D d8= s_vl }, + [0x67] =3D { .simd_size =3D simd_scalar_vexw, .two_op =3D 1, .d8s =3D = d8s_dq }, + [0x68 ... 0x69] =3D { .simd_size =3D simd_packed_fp, .four_op =3D 1 }, + [0x6a ... 0x6b] =3D { .simd_size =3D simd_scalar_opc, .four_op =3D 1 }, + [0x6c ... 0x6d] =3D { .simd_size =3D simd_packed_fp, .four_op =3D 1 }, + [0x6e ... 0x6f] =3D { .simd_size =3D simd_scalar_opc, .four_op =3D 1 }, + [0x70 ... 0x73] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, + [0x78 ... 0x79] =3D { .simd_size =3D simd_packed_fp, .four_op =3D 1 }, + [0x7a ... 
0x7b] =3D { .simd_size =3D simd_scalar_opc, .four_op =3D 1 }, + [0x7c ... 0x7d] =3D { .simd_size =3D simd_packed_fp, .four_op =3D 1 }, + [0x7e ... 0x7f] =3D { .simd_size =3D simd_scalar_opc, .four_op =3D 1 }, + [0xcc] =3D { .simd_size =3D simd_other }, + [0xce ... 0xcf] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, + [0xdf] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, + [0xf0] =3D {}, +}; + +static const opcode_desc_t xop_table[] =3D { + DstReg|SrcImmByte|ModRM, + DstReg|SrcMem|ModRM, + DstReg|SrcImm|ModRM, +}; + +static const struct ext8f08_table { + uint8_t simd_size:5; + uint8_t two_op:1; + uint8_t four_op:1; +} ext8f08_table[256] =3D { + [0xa2] =3D { .simd_size =3D simd_packed_int, .four_op =3D 1 }, + [0x85 ... 0x87] =3D { .simd_size =3D simd_packed_int, .four_op =3D 1 }, + [0x8e ... 0x8f] =3D { .simd_size =3D simd_packed_int, .four_op =3D 1 }, + [0x95 ... 0x97] =3D { .simd_size =3D simd_packed_int, .four_op =3D 1 }, + [0x9e ... 0x9f] =3D { .simd_size =3D simd_packed_int, .four_op =3D 1 }, + [0xa3] =3D { .simd_size =3D simd_packed_int, .four_op =3D 1 }, + [0xa6] =3D { .simd_size =3D simd_packed_int, .four_op =3D 1 }, + [0xb6] =3D { .simd_size =3D simd_packed_int, .four_op =3D 1 }, + [0xc0 ... 0xc3] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, + [0xcc ... 0xcf] =3D { .simd_size =3D simd_packed_int }, + [0xec ... 0xef] =3D { .simd_size =3D simd_packed_int }, +}; + +static const struct ext8f09_table { + uint8_t simd_size:5; + uint8_t two_op:1; +} ext8f09_table[256] =3D { + [0x01 ... 0x02] =3D { .two_op =3D 1 }, + [0x80 ... 0x81] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1 }, + [0x82 ... 0x83] =3D { .simd_size =3D simd_scalar_opc, .two_op =3D 1 }, + [0x90 ... 0x9b] =3D { .simd_size =3D simd_packed_int }, + [0xc1 ... 0xc3] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, + [0xc6 ... 0xc7] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, + [0xcb] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, + [0xd1 ... 0xd3] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, + [0xd6 ... 0xd7] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, + [0xdb] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, + [0xe1 ... 0xe3] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, +}; + +static unsigned int decode_disp8scale(enum disp8scale scale, + const struct x86_emulate_state *s) +{ + switch ( scale ) + { + case d8s_bw: + return s->evex.w; + + default: + if ( scale < d8s_vl ) + return scale; + if ( s->evex.brs ) + { + case d8s_dq: + return 2 + s->evex.w; + } + break; + + case d8s_dq64: + return 2 + (s->op_bytes =3D=3D 8); + } + + switch ( s->simd_size ) + { + case simd_any_fp: + case simd_single_fp: + if ( !(s->evex.pfx & VEX_PREFIX_SCALAR_MASK) ) + break; + /* fall through */ + case simd_scalar_opc: + case simd_scalar_vexw: + return 2 + s->evex.w; + + case simd_128: + /* These should have an explicit size specified. */ + ASSERT_UNREACHABLE(); + return 4; + + default: + break; + } + + return 4 + s->evex.lr - (scale - d8s_vl); +} + +/* Fetch next part of the instruction being emulated. 
*/ +#define insn_fetch_bytes(_size) ({ \ + unsigned long _x =3D 0, _ip =3D s->ip; \ + s->ip +=3D (_size); /* real hardware doesn't truncate */ \ + generate_exception_if((uint8_t)(s->ip - \ + ctxt->regs->r(ip)) > MAX_INST_LEN, \ + X86_EXC_GP, 0); \ + rc =3D ops->insn_fetch(x86_seg_cs, _ip, &_x, _size, ctxt); \ + if ( rc ) goto done; \ + _x; \ +}) +#define insn_fetch_type(type) ((type)insn_fetch_bytes(sizeof(type))) + +static int +decode_onebyte(struct x86_emulate_state *s, + struct x86_emulate_ctxt *ctxt, + const struct x86_emulate_ops *ops) +{ + int rc =3D X86EMUL_OKAY; + + switch ( ctxt->opcode ) + { + case 0x06: /* push %%es */ + case 0x07: /* pop %%es */ + case 0x0e: /* push %%cs */ + case 0x16: /* push %%ss */ + case 0x17: /* pop %%ss */ + case 0x1e: /* push %%ds */ + case 0x1f: /* pop %%ds */ + case 0x27: /* daa */ + case 0x2f: /* das */ + case 0x37: /* aaa */ + case 0x3f: /* aas */ + case 0x60: /* pusha */ + case 0x61: /* popa */ + case 0x62: /* bound */ + case 0xc4: /* les */ + case 0xc5: /* lds */ + case 0xce: /* into */ + case 0xd4: /* aam */ + case 0xd5: /* aad */ + case 0xd6: /* salc */ + s->not_64bit =3D true; + break; + + case 0x82: /* Grp1 (x86/32 only) */ + s->not_64bit =3D true; + /* fall through */ + case 0x80: case 0x81: case 0x83: /* Grp1 */ + if ( (s->modrm_reg & 7) =3D=3D 7 ) /* cmp */ + s->desc =3D (s->desc & ByteOp) | DstNone | SrcMem; + break; + + case 0x90: /* nop / pause */ + if ( s->vex.pfx =3D=3D vex_f3 ) + ctxt->opcode |=3D X86EMUL_OPC_F3(0, 0); + break; + + case 0x9a: /* call (far, absolute) */ + case 0xea: /* jmp (far, absolute) */ + generate_exception_if(mode_64bit(), X86_EXC_UD); + + s->imm1 =3D insn_fetch_bytes(s->op_bytes); + s->imm2 =3D insn_fetch_type(uint16_t); + break; + + case 0xa0: case 0xa1: /* mov mem.offs,{%al,%ax,%eax,%rax} */ + case 0xa2: case 0xa3: /* mov {%al,%ax,%eax,%rax},mem.offs */ + /* Source EA is not encoded via ModRM. */ + s->ea.type =3D OP_MEM; + s->ea.mem.off =3D insn_fetch_bytes(s->ad_bytes); + break; + + case 0xb8 ... 0xbf: /* mov imm{16,32,64},r{16,32,64} */ + if ( s->op_bytes =3D=3D 8 ) /* Fetch more bytes to obtain imm64. */ + s->imm1 =3D ((uint32_t)s->imm1 | + ((uint64_t)insn_fetch_type(uint32_t) << 32)); + break; + + case 0xc8: /* enter imm16,imm8 */ + s->imm2 =3D insn_fetch_type(uint8_t); + break; + + case 0xf6: case 0xf7: /* Grp3 */ + if ( !(s->modrm_reg & 6) ) /* test */ + s->desc =3D (s->desc & ByteOp) | DstNone | SrcMem; + break; + + case 0xff: /* Grp5 */ + switch ( s->modrm_reg & 7 ) + { + case 2: /* call (near) */ + case 4: /* jmp (near) */ + if ( mode_64bit() && (s->op_bytes =3D=3D 4 || !amd_like(ctxt))= ) + s->op_bytes =3D 8; + s->desc =3D DstNone | SrcMem | Mov; + break; + + case 3: /* call (far, absolute indirect) */ + case 5: /* jmp (far, absolute indirect) */ + /* REX.W ignored on a vendor-dependent basis. 
*/ + if ( s->op_bytes =3D=3D 8 && amd_like(ctxt) ) + s->op_bytes =3D 4; + s->desc =3D DstNone | SrcMem | Mov; + break; + + case 6: /* push */ + if ( mode_64bit() && s->op_bytes =3D=3D 4 ) + s->op_bytes =3D 8; + s->desc =3D DstNone | SrcMem | Mov; + break; + } + break; + } + + done: + return rc; +} + +static int +decode_twobyte(struct x86_emulate_state *s, + struct x86_emulate_ctxt *ctxt, + const struct x86_emulate_ops *ops) +{ + int rc =3D X86EMUL_OKAY; + + switch ( ctxt->opcode & X86EMUL_OPC_MASK ) + { + case 0x00: /* Grp6 */ + switch ( s->modrm_reg & 6 ) + { + case 0: + s->desc |=3D DstMem | SrcImplicit | Mov; + break; + case 2: case 4: + s->desc |=3D SrcMem16; + break; + } + break; + + case 0x78: + s->desc =3D ImplicitOps; + s->simd_size =3D simd_none; + switch ( s->vex.pfx ) + { + case vex_66: /* extrq $imm8, $imm8, xmm */ + case vex_f2: /* insertq $imm8, $imm8, xmm, xmm */ + s->imm1 =3D insn_fetch_type(uint8_t); + s->imm2 =3D insn_fetch_type(uint8_t); + break; + } + /* fall through */ + case 0x10 ... 0x18: + case 0x28 ... 0x2f: + case 0x50 ... 0x77: + case 0x7a ... 0x7d: + case 0x7f: + case 0xc2 ... 0xc3: + case 0xc5 ... 0xc6: + case 0xd0 ... 0xef: + case 0xf1 ... 0xfe: + ctxt->opcode |=3D MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK); + break; + + case 0x20: case 0x22: /* mov to/from cr */ + if ( s->lock_prefix && vcpu_has_cr8_legacy() ) + { + s->modrm_reg +=3D 8; + s->lock_prefix =3D false; + } + /* fall through */ + case 0x21: case 0x23: /* mov to/from dr */ + ASSERT(s->ea.type =3D=3D OP_REG); /* Early operand adjustment ensu= res this. */ + generate_exception_if(s->lock_prefix, X86_EXC_UD); + s->op_bytes =3D mode_64bit() ? 8 : 4; + break; + + case 0x79: + s->desc =3D DstReg | SrcMem; + s->simd_size =3D simd_packed_int; + ctxt->opcode |=3D MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK); + break; + + case 0x7e: + ctxt->opcode |=3D MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK); + if ( s->vex.pfx =3D=3D vex_f3 ) /* movq xmm/m64,xmm */ + { + case X86EMUL_OPC_VEX_F3(0, 0x7e): /* vmovq xmm/m64,xmm */ + case X86EMUL_OPC_EVEX_F3(0, 0x7e): /* vmovq xmm/m64,xmm */ + s->desc =3D DstImplicit | SrcMem | TwoOp; + s->simd_size =3D simd_other; + /* Avoid the s->desc clobbering of TwoOp below. */ + return X86EMUL_OKAY; + } + break; + + case X86EMUL_OPC_VEX(0, 0x90): /* kmov{w,q} */ + case X86EMUL_OPC_VEX_66(0, 0x90): /* kmov{b,d} */ + s->desc =3D DstReg | SrcMem | Mov; + s->simd_size =3D simd_other; + break; + + case X86EMUL_OPC_VEX(0, 0x91): /* kmov{w,q} */ + case X86EMUL_OPC_VEX_66(0, 0x91): /* kmov{b,d} */ + s->desc =3D DstMem | SrcReg | Mov; + s->simd_size =3D simd_other; + break; + + case 0xae: + ctxt->opcode |=3D MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK); + /* fall through */ + case X86EMUL_OPC_VEX(0, 0xae): + switch ( s->modrm_reg & 7 ) + { + case 2: /* {,v}ldmxcsr */ + s->desc =3D DstImplicit | SrcMem | Mov; + s->op_bytes =3D 4; + break; + + case 3: /* {,v}stmxcsr */ + s->desc =3D DstMem | SrcImplicit | Mov; + s->op_bytes =3D 4; + break; + } + break; + + case 0xb2: /* lss */ + case 0xb4: /* lfs */ + case 0xb5: /* lgs */ + /* REX.W ignored on a vendor-dependent basis. */ + if ( s->op_bytes =3D=3D 8 && amd_like(ctxt) ) + s->op_bytes =3D 4; + break; + + case 0xb8: /* jmpe / popcnt */ + if ( s->vex.pfx >=3D vex_f3 ) + ctxt->opcode |=3D MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK); + break; + + /* Intentionally not handling here despite being modified by F3: + case 0xbc: bsf / tzcnt + case 0xbd: bsr / lzcnt + * They're being dealt with in the execution phase (if at all). 
+ */ + + case 0xc4: /* pinsrw */ + ctxt->opcode |=3D MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK); + /* fall through */ + case X86EMUL_OPC_VEX_66(0, 0xc4): /* vpinsrw */ + case X86EMUL_OPC_EVEX_66(0, 0xc4): /* vpinsrw */ + s->desc =3D DstImplicit | SrcMem16; + break; + + case 0xf0: + ctxt->opcode |=3D MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK); + if ( s->vex.pfx =3D=3D vex_f2 ) /* lddqu mem,xmm */ + { + /* fall through */ + case X86EMUL_OPC_VEX_F2(0, 0xf0): /* vlddqu mem,{x,y}mm */ + s->desc =3D DstImplicit | SrcMem | TwoOp; + s->simd_size =3D simd_other; + /* Avoid the s->desc clobbering of TwoOp below. */ + return X86EMUL_OKAY; + } + break; + } + + /* + * Scalar forms of most VEX-/EVEX-encoded TwoOp instructions have + * three operands. Those which do really have two operands + * should have exited earlier. + */ + if ( s->simd_size && s->vex.opcx && + (s->vex.pfx & VEX_PREFIX_SCALAR_MASK) ) + s->desc &=3D ~TwoOp; + + done: + return rc; +} + +static int +decode_0f38(struct x86_emulate_state *s, + struct x86_emulate_ctxt *ctxt, + const struct x86_emulate_ops *ops) +{ + switch ( ctxt->opcode & X86EMUL_OPC_MASK ) + { + case 0x00 ... 0xef: + case 0xf2 ... 0xf5: + case 0xf7 ... 0xf8: + case 0xfa ... 0xff: + s->op_bytes =3D 0; + /* fall through */ + case 0xf6: /* adcx / adox */ + case 0xf9: /* movdiri */ + ctxt->opcode |=3D MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK); + break; + + case X86EMUL_OPC_EVEX_66(0, 0x2d): /* vscalefs{s,d} */ + s->simd_size =3D simd_scalar_vexw; + break; + + case X86EMUL_OPC_EVEX_66(0, 0x7a): /* vpbroadcastb */ + case X86EMUL_OPC_EVEX_66(0, 0x7b): /* vpbroadcastw */ + case X86EMUL_OPC_EVEX_66(0, 0x7c): /* vpbroadcast{d,q} */ + break; + + case 0xf0: /* movbe / crc32 */ + s->desc |=3D s->vex.pfx =3D=3D vex_f2 ? ByteOp : Mov; + if ( s->vex.pfx >=3D vex_f3 ) + ctxt->opcode |=3D MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK); + break; + + case 0xf1: /* movbe / crc32 */ + if ( s->vex.pfx =3D=3D vex_f2 ) + s->desc =3D DstReg | SrcMem; + if ( s->vex.pfx >=3D vex_f3 ) + ctxt->opcode |=3D MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK); + break; + + case X86EMUL_OPC_VEX(0, 0xf2): /* andn */ + case X86EMUL_OPC_VEX(0, 0xf3): /* Grp 17 */ + case X86EMUL_OPC_VEX(0, 0xf5): /* bzhi */ + case X86EMUL_OPC_VEX_F3(0, 0xf5): /* pext */ + case X86EMUL_OPC_VEX_F2(0, 0xf5): /* pdep */ + case X86EMUL_OPC_VEX_F2(0, 0xf6): /* mulx */ + case X86EMUL_OPC_VEX(0, 0xf7): /* bextr */ + case X86EMUL_OPC_VEX_66(0, 0xf7): /* shlx */ + case X86EMUL_OPC_VEX_F3(0, 0xf7): /* sarx */ + case X86EMUL_OPC_VEX_F2(0, 0xf7): /* shrx */ + break; + + default: + s->op_bytes =3D 0; + break; + } + + return X86EMUL_OKAY; +} + +static int +decode_0f3a(struct x86_emulate_state *s, + struct x86_emulate_ctxt *ctxt, + const struct x86_emulate_ops *ops) +{ + if ( !s->vex.opcx ) + ctxt->opcode |=3D MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK); + + switch ( ctxt->opcode & X86EMUL_OPC_MASK ) + { + case X86EMUL_OPC_66(0, 0x14) + ... X86EMUL_OPC_66(0, 0x17): /* pextr*, extractps */ + case X86EMUL_OPC_VEX_66(0, 0x14) + ... X86EMUL_OPC_VEX_66(0, 0x17): /* vpextr*, vextractps */ + case X86EMUL_OPC_EVEX_66(0, 0x14) + ... 
X86EMUL_OPC_EVEX_66(0, 0x17): /* vpextr*, vextractps */ + case X86EMUL_OPC_VEX_F2(0, 0xf0): /* rorx */ + break; + + case X86EMUL_OPC_66(0, 0x20): /* pinsrb */ + case X86EMUL_OPC_VEX_66(0, 0x20): /* vpinsrb */ + case X86EMUL_OPC_EVEX_66(0, 0x20): /* vpinsrb */ + s->desc =3D DstImplicit | SrcMem; + if ( s->modrm_mod !=3D 3 ) + s->desc |=3D ByteOp; + break; + + case X86EMUL_OPC_66(0, 0x22): /* pinsr{d,q} */ + case X86EMUL_OPC_VEX_66(0, 0x22): /* vpinsr{d,q} */ + case X86EMUL_OPC_EVEX_66(0, 0x22): /* vpinsr{d,q} */ + s->desc =3D DstImplicit | SrcMem; + break; + + default: + s->op_bytes =3D 0; + break; + } + + return X86EMUL_OKAY; +} + +#define ad_bytes (s->ad_bytes) /* for truncate_ea() */ + +int x86emul_decode(struct x86_emulate_state *s, + struct x86_emulate_ctxt *ctxt, + const struct x86_emulate_ops *ops) +{ + uint8_t b, d; + unsigned int def_op_bytes, def_ad_bytes, opcode; + enum x86_segment override_seg =3D x86_seg_none; + bool pc_rel =3D false; + int rc =3D X86EMUL_OKAY; + + ASSERT(ops->insn_fetch); + + memset(s, 0, sizeof(*s)); + s->ea.type =3D OP_NONE; + s->ea.mem.seg =3D x86_seg_ds; + s->ea.reg =3D PTR_POISON; + s->regs =3D ctxt->regs; + s->ip =3D ctxt->regs->r(ip); + + s->op_bytes =3D def_op_bytes =3D ad_bytes =3D def_ad_bytes =3D + ctxt->addr_size / 8; + if ( s->op_bytes =3D=3D 8 ) + { + s->op_bytes =3D def_op_bytes =3D 4; +#ifndef __x86_64__ + return X86EMUL_UNHANDLEABLE; +#endif + } + + /* Prefix bytes. */ + for ( ; ; ) + { + switch ( b =3D insn_fetch_type(uint8_t) ) + { + case 0x66: /* operand-size override */ + s->op_bytes =3D def_op_bytes ^ 6; + if ( !s->vex.pfx ) + s->vex.pfx =3D vex_66; + break; + case 0x67: /* address-size override */ + ad_bytes =3D def_ad_bytes ^ (mode_64bit() ? 12 : 6); + break; + case 0x2e: /* CS override / ignored in 64-bit mode */ + if ( !mode_64bit() ) + override_seg =3D x86_seg_cs; + break; + case 0x3e: /* DS override / ignored in 64-bit mode */ + if ( !mode_64bit() ) + override_seg =3D x86_seg_ds; + break; + case 0x26: /* ES override / ignored in 64-bit mode */ + if ( !mode_64bit() ) + override_seg =3D x86_seg_es; + break; + case 0x64: /* FS override */ + override_seg =3D x86_seg_fs; + break; + case 0x65: /* GS override */ + override_seg =3D x86_seg_gs; + break; + case 0x36: /* SS override / ignored in 64-bit mode */ + if ( !mode_64bit() ) + override_seg =3D x86_seg_ss; + break; + case 0xf0: /* LOCK */ + s->lock_prefix =3D true; + break; + case 0xf2: /* REPNE/REPNZ */ + s->vex.pfx =3D vex_f2; + break; + case 0xf3: /* REP/REPE/REPZ */ + s->vex.pfx =3D vex_f3; + break; + case 0x40 ... 0x4f: /* REX */ + if ( !mode_64bit() ) + goto done_prefixes; + s->rex_prefix =3D b; + continue; + default: + goto done_prefixes; + } + + /* Any legacy prefix after a REX prefix nullifies its effect. */ + s->rex_prefix =3D 0; + } + done_prefixes: + + if ( s->rex_prefix & REX_W ) + s->op_bytes =3D 8; + + /* Opcode byte(s). */ + d =3D opcode_table[b]; + if ( d =3D=3D 0 && b =3D=3D 0x0f ) + { + /* Two-byte opcode. */ + b =3D insn_fetch_type(uint8_t); + d =3D twobyte_table[b].desc; + switch ( b ) + { + default: + opcode =3D b | MASK_INSR(0x0f, X86EMUL_OPC_EXT_MASK); + s->ext =3D ext_0f; + s->simd_size =3D twobyte_table[b].size; + break; + case 0x38: + b =3D insn_fetch_type(uint8_t); + opcode =3D b | MASK_INSR(0x0f38, X86EMUL_OPC_EXT_MASK); + s->ext =3D ext_0f38; + break; + case 0x3a: + b =3D insn_fetch_type(uint8_t); + opcode =3D b | MASK_INSR(0x0f3a, X86EMUL_OPC_EXT_MASK); + s->ext =3D ext_0f3a; + break; + } + } + else + opcode =3D b; + + /* ModRM and SIB bytes. 
+    /* ModRM and SIB bytes. */
+    if ( d & ModRM )
+    {
+        s->modrm = insn_fetch_type(uint8_t);
+        s->modrm_mod = (s->modrm & 0xc0) >> 6;
+
+        if ( !s->ext && ((b & ~1) == 0xc4 || (b == 0x8f && (s->modrm & 0x18)) ||
+                         b == 0x62) )
+            switch ( def_ad_bytes )
+            {
+            default:
+                BUG(); /* Shouldn't be possible. */
+            case 2:
+                if ( s->regs->eflags & X86_EFLAGS_VM )
+                    break;
+                /* fall through */
+            case 4:
+                if ( s->modrm_mod != 3 || in_realmode(ctxt, ops) )
+                    break;
+                /* fall through */
+            case 8:
+                /* VEX / XOP / EVEX */
+                generate_exception_if(s->rex_prefix || s->vex.pfx, X86_EXC_UD);
+                /*
+                 * With operand size override disallowed (see above), op_bytes
+                 * should not have changed from its default.
+                 */
+                ASSERT(s->op_bytes == def_op_bytes);
+
+                s->vex.raw[0] = s->modrm;
+                if ( b == 0xc5 )
+                {
+                    opcode = X86EMUL_OPC_VEX_;
+                    s->vex.raw[1] = s->modrm;
+                    s->vex.opcx = vex_0f;
+                    s->vex.x = 1;
+                    s->vex.b = 1;
+                    s->vex.w = 0;
+                }
+                else
+                {
+                    s->vex.raw[1] = insn_fetch_type(uint8_t);
+                    if ( mode_64bit() )
+                    {
+                        if ( !s->vex.b )
+                            s->rex_prefix |= REX_B;
+                        if ( !s->vex.x )
+                            s->rex_prefix |= REX_X;
+                        if ( s->vex.w )
+                        {
+                            s->rex_prefix |= REX_W;
+                            s->op_bytes = 8;
+                        }
+                    }
+                    else
+                    {
+                        /* Operand size fixed at 4 (no override via W bit). */
+                        s->op_bytes = 4;
+                        s->vex.b = 1;
+                    }
+                    switch ( b )
+                    {
+                    case 0x62:
+                        opcode = X86EMUL_OPC_EVEX_;
+                        s->evex.raw[0] = s->vex.raw[0];
+                        s->evex.raw[1] = s->vex.raw[1];
+                        s->evex.raw[2] = insn_fetch_type(uint8_t);
+
+                        generate_exception_if(!s->evex.mbs || s->evex.mbz, X86_EXC_UD);
+                        generate_exception_if(!s->evex.opmsk && s->evex.z, X86_EXC_UD);
+
+                        if ( !mode_64bit() )
+                            s->evex.R = 1;
+
+                        s->vex.opcx = s->evex.opcx;
+                        break;
+                    case 0xc4:
+                        opcode = X86EMUL_OPC_VEX_;
+                        break;
+                    default:
+                        opcode = 0;
+                        break;
+                    }
+                }
+                if ( !s->vex.r )
+                    s->rex_prefix |= REX_R;
+
+                s->ext = s->vex.opcx;
+                if ( b != 0x8f )
+                {
+                    b = insn_fetch_type(uint8_t);
+                    switch ( s->ext )
+                    {
+                    case vex_0f:
+                        opcode |= MASK_INSR(0x0f, X86EMUL_OPC_EXT_MASK);
+                        d = twobyte_table[b].desc;
+                        s->simd_size = twobyte_table[b].size;
+                        break;
+                    case vex_0f38:
+                        opcode |= MASK_INSR(0x0f38, X86EMUL_OPC_EXT_MASK);
+                        d = twobyte_table[0x38].desc;
+                        break;
+                    case vex_0f3a:
+                        opcode |= MASK_INSR(0x0f3a, X86EMUL_OPC_EXT_MASK);
+                        d = twobyte_table[0x3a].desc;
+                        break;
+                    default:
+                        rc = X86EMUL_UNRECOGNIZED;
+                        goto done;
+                    }
+                }
+                else if ( s->ext < ext_8f08 + ARRAY_SIZE(xop_table) )
+                {
+                    b = insn_fetch_type(uint8_t);
+                    opcode |= MASK_INSR(0x8f08 + s->ext - ext_8f08,
+                                        X86EMUL_OPC_EXT_MASK);
+                    d = array_access_nospec(xop_table, s->ext - ext_8f08);
+                }
+                else
+                {
+                    rc = X86EMUL_UNRECOGNIZED;
+                    goto done;
+                }
+
+                opcode |= b | MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK);
+
+                if ( !evex_encoded() )
+                    s->evex.lr = s->vex.l;
+
+                if ( !(d & ModRM) )
+                    break;
+
+                s->modrm = insn_fetch_type(uint8_t);
+                s->modrm_mod = (s->modrm & 0xc0) >> 6;
+
+                break;
+            }
+    }
+
+    if ( d & ModRM )
+    {
+        unsigned int disp8scale = 0;
+
+        d &= ~ModRM;
+#undef ModRM /* Only its aliases are valid to use from here on. */
+        s->modrm_reg = ((s->rex_prefix & 4) << 1) | ((s->modrm & 0x38) >> 3) |
+                       ((evex_encoded() && !s->evex.R) << 4);
+        s->modrm_rm  = s->modrm & 0x07;
+
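+        /*
+         * The (up to) 5-bit register number is assembled from three
+         * sources: bits 0-2 come from ModRM.reg, bit 3 from REX.R (or
+         * the inverted VEX/EVEX equivalent folded into rex_prefix
+         * above), and bit 4 from EVEX.R' for AVX-512's 32-register
+         * file.  E.g. REX.R set with ModRM.reg = 010 names register 10
+         * (%r10 / %xmm10).
+         */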
+        /*
+         * Early operand adjustments.  Only ones affecting further processing
+         * prior to the decode_*() calls really belong here.  That would
+         * normally be only addition/removal of SrcImm/SrcImm16, so their
+         * fetching can be taken care of by the common code below.
+         */
+        switch ( s->ext )
+        {
+        case ext_none:
+            switch ( b )
+            {
+            case 0xf6 ... 0xf7: /* Grp3 */
+                switch ( s->modrm_reg & 7 )
+                {
+                case 0 ... 1: /* test */
+                    d |= DstMem | SrcImm;
+                    break;
+                case 2: /* not */
+                case 3: /* neg */
+                    d |= DstMem;
+                    break;
+                case 4: /* mul */
+                case 5: /* imul */
+                case 6: /* div */
+                case 7: /* idiv */
+                    /*
+                     * DstEax isn't really precise for all cases; updates to
+                     * rDX get handled in an open coded manner.
+                     */
+                    d |= DstEax | SrcMem;
+                    break;
+                }
+                break;
+            }
+            break;
+
+        case ext_0f:
+            if ( evex_encoded() )
+                disp8scale = decode_disp8scale(twobyte_table[b].d8s, s);
+
+            switch ( b )
+            {
+            case 0x12: /* vmovsldup / vmovddup */
+                if ( s->evex.pfx == vex_f2 )
+                    disp8scale = s->evex.lr ? 4 + s->evex.lr : 3;
+                /* fall through */
+            case 0x16: /* vmovshdup */
+                if ( s->evex.pfx == vex_f3 )
+                    disp8scale = 4 + s->evex.lr;
+                break;
+
+            case 0x20: /* mov cr,reg */
+            case 0x21: /* mov dr,reg */
+            case 0x22: /* mov reg,cr */
+            case 0x23: /* mov reg,dr */
+                /*
+                 * Mov to/from cr/dr ignore the encoding of Mod, and behave as
+                 * if they were encoded as reg/reg instructions.  No further
+                 * disp/SIB bytes are fetched.
+                 */
+                s->modrm_mod = 3;
+                break;
+
+            case 0x78:
+            case 0x79:
+                if ( !s->evex.pfx )
+                    break;
+                /* vcvt{,t}ps2uqq need special casing */
+                if ( s->evex.pfx == vex_66 )
+                {
+                    if ( !s->evex.w && !s->evex.brs )
+                        --disp8scale;
+                    break;
+                }
+                /* vcvt{,t}s{s,d}2usi need special casing: fall through */
+            case 0x2c: /* vcvtts{s,d}2si need special casing */
+            case 0x2d: /* vcvts{s,d}2si need special casing */
+                if ( evex_encoded() )
+                    disp8scale = 2 + (s->evex.pfx & VEX_PREFIX_DOUBLE_MASK);
+                break;
+
+            case 0x5a: /* vcvtps2pd needs special casing */
+                if ( disp8scale && !s->evex.pfx && !s->evex.brs )
+                    --disp8scale;
+                break;
+
+            case 0x7a: /* vcvttps2qq and vcvtudq2pd need special casing */
+                if ( disp8scale && s->evex.pfx != vex_f2 && !s->evex.w &&
+                     !s->evex.brs )
+                    --disp8scale;
+                break;
+
+            case 0x7b: /* vcvtp{s,d}2qq need special casing */
+                if ( disp8scale && s->evex.pfx == vex_66 )
+                    disp8scale = (s->evex.brs ? 2 : 3 + s->evex.lr) + s->evex.w;
+                break;
+
+            case 0x7e: /* vmovq xmm/m64,xmm needs special casing */
+                if ( disp8scale == 2 && s->evex.pfx == vex_f3 )
+                    disp8scale = 3;
+                break;
+
+            case 0xe6: /* vcvtdq2pd needs special casing */
+                if ( disp8scale && s->evex.pfx == vex_f3 && !s->evex.w &&
+                     !s->evex.brs )
+                    --disp8scale;
+                break;
+            }
+            break;
+
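+        /*
+         * Note: disp8scale implements the EVEX Disp8*N compressed
+         * displacement scheme - a one-byte displacement fetched below is
+         * multiplied by 1 << disp8scale.  For a full 512-bit vector
+         * operand the scale is 6, so an encoded disp8 of 1 stands for a
+         * 64-byte offset.
+         */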
+        case ext_0f38:
+            d = ext0f38_table[b].to_mem ? DstMem | SrcReg
+                                        : DstReg | SrcMem;
+            if ( ext0f38_table[b].two_op )
+                d |= TwoOp;
+            if ( ext0f38_table[b].vsib )
+                d |= vSIB;
+            s->simd_size = ext0f38_table[b].simd_size;
+            if ( evex_encoded() )
+            {
+                /*
+                 * VPMOVUS* are identical to VPMOVS* Disp8-scaling-wise, but
+                 * their attributes don't match those of the vex_66 encoded
+                 * insns with the same base opcodes.  Rather than adding new
+                 * columns to the table, handle this here for now.
+                 */
+                if ( s->evex.pfx != vex_f3 || (b & 0xf8) != 0x10 )
+                    disp8scale = decode_disp8scale(ext0f38_table[b].d8s, s);
+                else
+                {
+                    disp8scale = decode_disp8scale(ext0f38_table[b ^ 0x30].d8s,
+                                                   s);
+                    s->simd_size = simd_other;
+                }
+
+                switch ( b )
+                {
+                /* vp4dpwssd{,s} need special casing */
+                case 0x52: case 0x53:
+                /* v4f{,n}madd{p,s}s need special casing */
+                case 0x9a: case 0x9b: case 0xaa: case 0xab:
+                    if ( s->evex.pfx == vex_f2 )
+                    {
+                        disp8scale = 4;
+                        s->simd_size = simd_128;
+                    }
+                    break;
+                }
+            }
+            break;
+
+        case ext_0f3a:
+            /*
+             * Cannot update d here yet, as the immediate operand still
+             * needs fetching.
+             */
+            s->simd_size = ext0f3a_table[b].simd_size;
+            if ( evex_encoded() )
+                disp8scale = decode_disp8scale(ext0f3a_table[b].d8s, s);
+            break;
+
+        case ext_8f09:
+            if ( ext8f09_table[b].two_op )
+                d |= TwoOp;
+            s->simd_size = ext8f09_table[b].simd_size;
+            break;
+
+        case ext_8f08:
+        case ext_8f0a:
+            /*
+             * Cannot update d here yet, as the immediate operand still
+             * needs fetching.
+             */
+            break;
+
+        default:
+            ASSERT_UNREACHABLE();
+            return X86EMUL_UNIMPLEMENTED;
+        }
+
+        if ( s->modrm_mod == 3 )
+        {
+            generate_exception_if(d & vSIB, X86_EXC_UD);
+            s->modrm_rm |= ((s->rex_prefix & 1) << 3) |
+                           ((evex_encoded() && !s->evex.x) << 4);
+            s->ea.type = OP_REG;
+        }
+        else if ( ad_bytes == 2 )
+        {
+            /* 16-bit ModR/M decode. */
+            generate_exception_if(d & vSIB, X86_EXC_UD);
+            s->ea.type = OP_MEM;
+            switch ( s->modrm_rm )
+            {
+            case 0:
+                s->ea.mem.off = s->regs->bx + s->regs->si;
+                break;
+            case 1:
+                s->ea.mem.off = s->regs->bx + s->regs->di;
+                break;
+            case 2:
+                s->ea.mem.seg = x86_seg_ss;
+                s->ea.mem.off = s->regs->bp + s->regs->si;
+                break;
+            case 3:
+                s->ea.mem.seg = x86_seg_ss;
+                s->ea.mem.off = s->regs->bp + s->regs->di;
+                break;
+            case 4:
+                s->ea.mem.off = s->regs->si;
+                break;
+            case 5:
+                s->ea.mem.off = s->regs->di;
+                break;
+            case 6:
+                if ( s->modrm_mod == 0 )
+                    break;
+                s->ea.mem.seg = x86_seg_ss;
+                s->ea.mem.off = s->regs->bp;
+                break;
+            case 7:
+                s->ea.mem.off = s->regs->bx;
+                break;
+            }
+            switch ( s->modrm_mod )
+            {
+            case 0:
+                if ( s->modrm_rm == 6 )
+                    s->ea.mem.off = insn_fetch_type(int16_t);
+                break;
+            case 1:
+                s->ea.mem.off += insn_fetch_type(int8_t) * (1 << disp8scale);
+                break;
+            case 2:
+                s->ea.mem.off += insn_fetch_type(int16_t);
+                break;
+            }
+        }
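+        /*
+         * The table above is the classic 16-bit addressing set: rm = 0..7
+         * select BX+SI, BX+DI, BP+SI, BP+DI, SI, DI, BP (or disp16 when
+         * mod = 0), and BX, with the BP-based forms defaulting to SS.
+         * E.g. 8b 46 08 decodes as mov ax,[bp+8], an SS-relative access.
+         */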
+        else
+        {
+            /* 32/64-bit ModR/M decode. */
+            s->ea.type = OP_MEM;
+            if ( s->modrm_rm == 4 )
+            {
+                uint8_t sib = insn_fetch_type(uint8_t);
+                uint8_t sib_base = (sib & 7) | ((s->rex_prefix << 3) & 8);
+
+                s->sib_index = ((sib >> 3) & 7) | ((s->rex_prefix << 2) & 8);
+                s->sib_scale = (sib >> 6) & 3;
+                if ( unlikely(d & vSIB) )
+                    s->sib_index |= (mode_64bit() && evex_encoded() &&
+                                     !s->evex.RX) << 4;
+                else if ( s->sib_index != 4 )
+                {
+                    s->ea.mem.off = *decode_gpr(s->regs, s->sib_index);
+                    s->ea.mem.off <<= s->sib_scale;
+                }
+                if ( (s->modrm_mod == 0) && ((sib_base & 7) == 5) )
+                    s->ea.mem.off += insn_fetch_type(int32_t);
+                else if ( sib_base == 4 )
+                {
+                    s->ea.mem.seg = x86_seg_ss;
+                    s->ea.mem.off += s->regs->r(sp);
+                    if ( !s->ext && (b == 0x8f) )
+                        /* POP computes its EA post increment. */
+                        s->ea.mem.off += ((mode_64bit() && (s->op_bytes == 4))
+                                          ? 8 : s->op_bytes);
+                }
+                else if ( sib_base == 5 )
+                {
+                    s->ea.mem.seg = x86_seg_ss;
+                    s->ea.mem.off += s->regs->r(bp);
+                }
+                else
+                    s->ea.mem.off += *decode_gpr(s->regs, sib_base);
+            }
+            else
+            {
+                generate_exception_if(d & vSIB, X86_EXC_UD);
+                s->modrm_rm |= (s->rex_prefix & 1) << 3;
+                s->ea.mem.off = *decode_gpr(s->regs, s->modrm_rm);
+                if ( (s->modrm_rm == 5) && (s->modrm_mod != 0) )
+                    s->ea.mem.seg = x86_seg_ss;
+            }
+            switch ( s->modrm_mod )
+            {
+            case 0:
+                if ( (s->modrm_rm & 7) != 5 )
+                    break;
+                s->ea.mem.off = insn_fetch_type(int32_t);
+                pc_rel = mode_64bit();
+                break;
+            case 1:
+                s->ea.mem.off += insn_fetch_type(int8_t) * (1 << disp8scale);
+                break;
+            case 2:
+                s->ea.mem.off += insn_fetch_type(int32_t);
+                break;
+            }
+        }
+    }
+    else
+    {
+        s->modrm_mod = 0xff;
+        s->modrm_reg = s->modrm_rm = s->modrm = 0;
+    }
+
+    if ( override_seg != x86_seg_none )
+        s->ea.mem.seg = override_seg;
+
+    /* Fetch the immediate operand, if present. */
+    switch ( d & SrcMask )
+    {
+        unsigned int bytes;
+
+    case SrcImm:
+        if ( !(d & ByteOp) )
+        {
+            if ( mode_64bit() && !amd_like(ctxt) &&
+                 ((s->ext == ext_none && (b | 1) == 0xe9) /* call / jmp */ ||
+                  (s->ext == ext_0f && (b | 0xf) == 0x8f) /* jcc */ ) )
+                s->op_bytes = 4;
+            bytes = s->op_bytes != 8 ? s->op_bytes : 4;
+        }
+        else
+        {
+    case SrcImmByte:
+            bytes = 1;
+        }
+        /* NB. Immediates are sign-extended as necessary. */
+        switch ( bytes )
+        {
+        case 1: s->imm1 = insn_fetch_type(int8_t);  break;
+        case 2: s->imm1 = insn_fetch_type(int16_t); break;
+        case 4: s->imm1 = insn_fetch_type(int32_t); break;
+        }
+        break;
+    case SrcImm16:
+        s->imm1 = insn_fetch_type(uint16_t);
+        break;
+    }
+
+    ctxt->opcode = opcode;
+    s->desc = d;
+
+    switch ( s->ext )
+    {
+    case ext_none:
+        rc = decode_onebyte(s, ctxt, ops);
+        break;
+
+    case ext_0f:
+        rc = decode_twobyte(s, ctxt, ops);
+        break;
+
+    case ext_0f38:
+        rc = decode_0f38(s, ctxt, ops);
+        break;
+
+    case ext_0f3a:
+        d = ext0f3a_table[b].to_mem ? DstMem | SrcReg : DstReg | SrcMem;
+        if ( ext0f3a_table[b].two_op )
+            d |= TwoOp;
+        else if ( ext0f3a_table[b].four_op && !mode_64bit() && s->vex.opcx )
+            s->imm1 &= 0x7f;
+        s->desc = d;
+        rc = decode_0f3a(s, ctxt, ops);
+        break;
+
+    case ext_8f08:
+        d = DstReg | SrcMem;
+        if ( ext8f08_table[b].two_op )
+            d |= TwoOp;
+        else if ( ext8f08_table[b].four_op && !mode_64bit() )
+            s->imm1 &= 0x7f;
+        s->desc = d;
+        s->simd_size = ext8f08_table[b].simd_size;
+        break;
+
+    case ext_8f09:
+    case ext_8f0a:
+        break;
+
+    default:
+        ASSERT_UNREACHABLE();
+        return X86EMUL_UNIMPLEMENTED;
+    }
+
+    if ( s->ea.type == OP_MEM )
+    {
+        if ( pc_rel )
+            s->ea.mem.off += s->ip;
+
+        s->ea.mem.off = truncate_ea(s->ea.mem.off);
+    }
+
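+    /*
+     * truncate_ea() (defined elsewhere; assumed here to mask the offset
+     * to the effective address size, i.e. ad_bytes) provides the
+     * architectural wrap-around: e.g. with a 0x67 prefix in 64-bit mode
+     * a computed offset of 0x100000000 truncates to 0.
+     */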
+    /*
+     * Simple op_bytes calculations.  More complicated cases produce 0
+     * and are further handled during execute.
+     */
+    switch ( s->simd_size )
+    {
+    case simd_none:
+        /*
+         * When prefix 66 has a meaning different from operand-size override,
+         * operand size defaults to 4 and can't be overridden to 2.
+         */
+        if ( s->op_bytes == 2 &&
+             (ctxt->opcode & X86EMUL_OPC_PFX_MASK) == X86EMUL_OPC_66(0, 0) )
+            s->op_bytes = 4;
+        break;
+
+#ifndef X86EMUL_NO_SIMD
+    case simd_packed_int:
+        switch ( s->vex.pfx )
+        {
+        case vex_none:
+            if ( !s->vex.opcx )
+            {
+                s->op_bytes = 8;
+                break;
+            }
+            /* fall through */
+        case vex_66:
+            s->op_bytes = 16 << s->evex.lr;
+            break;
+        default:
+            s->op_bytes = 0;
+            break;
+        }
+        break;
+
+    case simd_single_fp:
+        if ( s->vex.pfx & VEX_PREFIX_DOUBLE_MASK )
+        {
+            s->op_bytes = 0;
+            break;
+    case simd_packed_fp:
+            if ( s->vex.pfx & VEX_PREFIX_SCALAR_MASK )
+            {
+                s->op_bytes = 0;
+                break;
+            }
+        }
+        /* fall through */
+    case simd_any_fp:
+        switch ( s->vex.pfx )
+        {
+        default:
+            s->op_bytes = 16 << s->evex.lr;
+            break;
+        case vex_f3:
+            generate_exception_if(evex_encoded() && s->evex.w, X86_EXC_UD);
+            s->op_bytes = 4;
+            break;
+        case vex_f2:
+            generate_exception_if(evex_encoded() && !s->evex.w, X86_EXC_UD);
+            s->op_bytes = 8;
+            break;
+        }
+        break;
+
+    case simd_scalar_opc:
+        s->op_bytes = 4 << (ctxt->opcode & 1);
+        break;
+
+    case simd_scalar_vexw:
+        s->op_bytes = 4 << s->vex.w;
+        break;
+
+    case simd_128:
+        /* The special cases here are MMX shift insns. */
+        s->op_bytes = s->vex.opcx || s->vex.pfx ? 16 : 8;
+        break;
+
+    case simd_256:
+        s->op_bytes = 32;
+        break;
+#endif /* !X86EMUL_NO_SIMD */
+
+    default:
+        s->op_bytes = 0;
+        break;
+    }
+
+ done:
+    return rc;
+}
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -37,9 +37,11 @@
 #ifdef __i386__
 # define mode_64bit() false
 # define r(name) e ## name
+# define PTR_POISON NULL /* 32-bit builds are for user-space, so NULL is OK. */
 #else
 # define mode_64bit() (ctxt->addr_size == 64)
 # define r(name) r ## name
+# define PTR_POISON ((void *)0x8086000000008086UL) /* non-canonical */
 #endif
 
 /* Operand sizes: 8-bit operands or specified/overridden size. */
@@ -76,6 +78,23 @@
 
 typedef uint8_t opcode_desc_t;
 
+enum disp8scale {
+    /* Values 0 ... 4 are explicit sizes. */
+    d8s_bw = 5,
+    d8s_dq,
+    /* EVEX.W ignored outside of 64-bit mode */
+    d8s_dq64,
+    /*
+     * All further values must strictly be last and in the order
+     * given so that arithmetic on the values works.
+     */
+    d8s_vl,
+    d8s_vl_by_2,
+    d8s_vl_by_4,
+    d8s_vl_by_8,
+};
+typedef uint8_t disp8scale_t;
+
 /* Type, address-of, and value of an instruction's operand. */
 struct operand {
     enum { OP_REG, OP_MEM, OP_IMM, OP_NONE } type;
@@ -182,6 +201,9 @@ enum vex_pfx {
     vex_f2
 };
 
+#define VEX_PREFIX_DOUBLE_MASK 0x1
+#define VEX_PREFIX_SCALAR_MASK 0x2
+
 union vex {
     uint8_t raw[2];
     struct { /* SDM names */
@@ -706,6 +728,10 @@ do {
     if ( rc ) goto done; \
 } while (0)
 
+int x86emul_decode(struct x86_emulate_state *s,
+                   struct x86_emulate_ctxt *ctxt,
+                   const struct x86_emulate_ops *ops);
+
 int x86emul_fpu(struct x86_emulate_state *s,
                 struct cpu_user_regs *regs,
                 struct operand *dst,
@@ -735,6 +761,13 @@ int x86emul_0fc7(struct x86_emulate_stat
                  const struct x86_emulate_ops *ops,
                  mmval_t *mmvalp);
 
+/* Initialise output state in x86_emulate_ctxt */
+static inline void init_context(struct x86_emulate_ctxt *ctxt)
+{
+    ctxt->retire.raw = 0;
+    x86_emul_reset_event(ctxt);
+}
+
 static inline bool is_aligned(enum x86_segment seg, unsigned long offs,
                               unsigned int size, struct x86_emulate_ctxt *ctxt,
                               const struct x86_emulate_ops *ops)
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -22,274 +22,6 @@
 
 #include "private.h"
 
-static const opcode_desc_t opcode_table[256] = {
-    /* 0x00 - 0x07 */
-    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
-    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
-    ByteOp|DstEax|SrcImm, DstEax|SrcImm, ImplicitOps|Mov, ImplicitOps|Mov,
-    /* 0x08 - 0x0F */
-    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
-    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
-    ByteOp|DstEax|SrcImm, DstEax|SrcImm, ImplicitOps|Mov, 0,
-    /* 0x10 - 0x17 */
-    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
-    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
-    ByteOp|DstEax|SrcImm, DstEax|SrcImm, ImplicitOps|Mov, ImplicitOps|Mov,
-    /* 0x18 - 0x1F */
-    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
-    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
-    ByteOp|DstEax|SrcImm, DstEax|SrcImm, ImplicitOps|Mov, ImplicitOps|Mov,
-    /* 0x20 - 0x27 */
-    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
-    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
-    ByteOp|DstEax|SrcImm, DstEax|SrcImm, 0, ImplicitOps,
-    /* 0x28 - 0x2F */
-    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
-    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
-    ByteOp|DstEax|SrcImm, DstEax|SrcImm, 0, ImplicitOps,
-    /* 0x30 - 0x37 */
-    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
-    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
-    ByteOp|DstEax|SrcImm, DstEax|SrcImm, 0, ImplicitOps,
-    /* 0x38 - 0x3F */
-    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
-    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
-    ByteOp|DstEax|SrcImm, DstEax|SrcImm, 0, ImplicitOps,
-    /* 0x40 - 0x4F */
-    ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
-    ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
-    ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
-    ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
-    /* 0x50 - 0x5F */
-    ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov,
-    ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov,
-    ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov,
-    ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov,
-    /* 0x60 - 0x67 */
-    ImplicitOps, ImplicitOps, DstReg|SrcMem|ModRM, DstReg|SrcNone|ModRM|Mov,
-    0, 0, 0, 0,
-    /* 0x68 - 0x6F */
-    DstImplicit|SrcImm|Mov, DstReg|SrcImm|ModRM|Mov,
-    DstImplicit|SrcImmByte|Mov, DstReg|SrcImmByte|ModRM|Mov,
-    ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov,
-    /* 0x70 - 0x77 */
-    DstImplicit|SrcImmByte,
DstImplicit|SrcImmByte, - DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, - DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, - DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, - /* 0x78 - 0x7F */ - DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, - DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, - DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, - DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, - /* 0x80 - 0x87 */ - ByteOp|DstMem|SrcImm|ModRM, DstMem|SrcImm|ModRM, - ByteOp|DstMem|SrcImm|ModRM, DstMem|SrcImmByte|ModRM, - ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM, - ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM, - /* 0x88 - 0x8F */ - ByteOp|DstMem|SrcReg|ModRM|Mov, DstMem|SrcReg|ModRM|Mov, - ByteOp|DstReg|SrcMem|ModRM|Mov, DstReg|SrcMem|ModRM|Mov, - DstMem|SrcReg|ModRM|Mov, DstReg|SrcNone|ModRM, - DstReg|SrcMem16|ModRM|Mov, DstMem|SrcNone|ModRM|Mov, - /* 0x90 - 0x97 */ - DstImplicit|SrcEax, DstImplicit|SrcEax, - DstImplicit|SrcEax, DstImplicit|SrcEax, - DstImplicit|SrcEax, DstImplicit|SrcEax, - DstImplicit|SrcEax, DstImplicit|SrcEax, - /* 0x98 - 0x9F */ - ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps, - ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps, ImplicitOps, - /* 0xA0 - 0xA7 */ - ByteOp|DstEax|SrcMem|Mov, DstEax|SrcMem|Mov, - ByteOp|DstMem|SrcEax|Mov, DstMem|SrcEax|Mov, - ByteOp|ImplicitOps|Mov, ImplicitOps|Mov, - ByteOp|ImplicitOps, ImplicitOps, - /* 0xA8 - 0xAF */ - ByteOp|DstEax|SrcImm, DstEax|SrcImm, - ByteOp|DstImplicit|SrcEax|Mov, DstImplicit|SrcEax|Mov, - ByteOp|DstEax|SrcImplicit|Mov, DstEax|SrcImplicit|Mov, - ByteOp|DstImplicit|SrcEax, DstImplicit|SrcEax, - /* 0xB0 - 0xB7 */ - ByteOp|DstReg|SrcImm|Mov, ByteOp|DstReg|SrcImm|Mov, - ByteOp|DstReg|SrcImm|Mov, ByteOp|DstReg|SrcImm|Mov, - ByteOp|DstReg|SrcImm|Mov, ByteOp|DstReg|SrcImm|Mov, - ByteOp|DstReg|SrcImm|Mov, ByteOp|DstReg|SrcImm|Mov, - /* 0xB8 - 0xBF */ - DstReg|SrcImm|Mov, DstReg|SrcImm|Mov, DstReg|SrcImm|Mov, DstReg|SrcImm= |Mov, - DstReg|SrcImm|Mov, DstReg|SrcImm|Mov, DstReg|SrcImm|Mov, DstReg|SrcImm= |Mov, - /* 0xC0 - 0xC7 */ - ByteOp|DstMem|SrcImm|ModRM, DstMem|SrcImmByte|ModRM, - DstImplicit|SrcImm16, ImplicitOps, - DstReg|SrcMem|ModRM|Mov, DstReg|SrcMem|ModRM|Mov, - ByteOp|DstMem|SrcImm|ModRM|Mov, DstMem|SrcImm|ModRM|Mov, - /* 0xC8 - 0xCF */ - DstImplicit|SrcImm16, ImplicitOps, DstImplicit|SrcImm16, ImplicitOps, - ImplicitOps, DstImplicit|SrcImmByte, ImplicitOps, ImplicitOps, - /* 0xD0 - 0xD7 */ - ByteOp|DstMem|SrcImplicit|ModRM, DstMem|SrcImplicit|ModRM, - ByteOp|DstMem|SrcImplicit|ModRM, DstMem|SrcImplicit|ModRM, - DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, ImplicitOps, ImplicitO= ps, - /* 0xD8 - 0xDF */ - ImplicitOps|ModRM, ImplicitOps|ModRM|Mov, - ImplicitOps|ModRM, ImplicitOps|ModRM|Mov, - ImplicitOps|ModRM, ImplicitOps|ModRM|Mov, - DstImplicit|SrcMem16|ModRM, ImplicitOps|ModRM|Mov, - /* 0xE0 - 0xE7 */ - DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, - DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, - DstEax|SrcImmByte, DstEax|SrcImmByte, - DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, - /* 0xE8 - 0xEF */ - DstImplicit|SrcImm|Mov, DstImplicit|SrcImm, - ImplicitOps, DstImplicit|SrcImmByte, - DstEax|SrcImplicit, DstEax|SrcImplicit, ImplicitOps, ImplicitOps, - /* 0xF0 - 0xF7 */ - 0, ImplicitOps, 0, 0, - ImplicitOps, ImplicitOps, ByteOp|ModRM, ModRM, - /* 0xF8 - 0xFF */ - ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps, - ImplicitOps, ImplicitOps, ByteOp|DstMem|SrcNone|ModRM, DstMem|SrcNone|= ModRM -}; - -enum disp8scale { - /* Values 0 ... 4 are explicit sizes. 
*/ - d8s_bw =3D 5, - d8s_dq, - /* EVEX.W ignored outside of 64-bit mode */ - d8s_dq64, - /* - * All further values must strictly be last and in the order - * given so that arithmetic on the values works. - */ - d8s_vl, - d8s_vl_by_2, - d8s_vl_by_4, - d8s_vl_by_8, -}; -typedef uint8_t disp8scale_t; - -static const struct twobyte_table { - opcode_desc_t desc; - simd_opsize_t size:4; - disp8scale_t d8s:4; -} twobyte_table[256] =3D { - [0x00] =3D { ModRM }, - [0x01] =3D { ImplicitOps|ModRM }, - [0x02] =3D { DstReg|SrcMem16|ModRM }, - [0x03] =3D { DstReg|SrcMem16|ModRM }, - [0x05] =3D { ImplicitOps }, - [0x06] =3D { ImplicitOps }, - [0x07] =3D { ImplicitOps }, - [0x08] =3D { ImplicitOps }, - [0x09] =3D { ImplicitOps }, - [0x0b] =3D { ImplicitOps }, - [0x0d] =3D { ImplicitOps|ModRM }, - [0x0e] =3D { ImplicitOps }, - [0x0f] =3D { ModRM|SrcImmByte }, - [0x10] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_any_fp, d8s_vl }, - [0x11] =3D { DstMem|SrcImplicit|ModRM|Mov, simd_any_fp, d8s_vl }, - [0x12] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_other, 3 }, - [0x13] =3D { DstMem|SrcImplicit|ModRM|Mov, simd_other, 3 }, - [0x14 ... 0x15] =3D { DstImplicit|SrcMem|ModRM, simd_packed_fp, d8s_vl= }, - [0x16] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_other, 3 }, - [0x17] =3D { DstMem|SrcImplicit|ModRM|Mov, simd_other, 3 }, - [0x18 ... 0x1f] =3D { ImplicitOps|ModRM }, - [0x20 ... 0x21] =3D { DstMem|SrcImplicit|ModRM }, - [0x22 ... 0x23] =3D { DstImplicit|SrcMem|ModRM }, - [0x28] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_packed_fp, d8s_vl }, - [0x29] =3D { DstMem|SrcImplicit|ModRM|Mov, simd_packed_fp, d8s_vl }, - [0x2a] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_other, d8s_dq64 }, - [0x2b] =3D { DstMem|SrcImplicit|ModRM|Mov, simd_any_fp, d8s_vl }, - [0x2c ... 0x2d] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_other }, - [0x2e ... 0x2f] =3D { ImplicitOps|ModRM|TwoOp, simd_none, d8s_dq }, - [0x30 ... 0x35] =3D { ImplicitOps }, - [0x37] =3D { ImplicitOps }, - [0x38] =3D { DstReg|SrcMem|ModRM }, - [0x3a] =3D { DstReg|SrcImmByte|ModRM }, - [0x40 ... 0x4f] =3D { DstReg|SrcMem|ModRM|Mov }, - [0x50] =3D { DstReg|SrcImplicit|ModRM|Mov }, - [0x51] =3D { DstImplicit|SrcMem|ModRM|TwoOp, simd_any_fp, d8s_vl }, - [0x52 ... 0x53] =3D { DstImplicit|SrcMem|ModRM|TwoOp, simd_single_fp }, - [0x54 ... 0x57] =3D { DstImplicit|SrcMem|ModRM, simd_packed_fp, d8s_vl= }, - [0x58 ... 0x59] =3D { DstImplicit|SrcMem|ModRM, simd_any_fp, d8s_vl }, - [0x5a] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_any_fp, d8s_vl }, - [0x5b] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_packed_fp, d8s_vl }, - [0x5c ... 0x5f] =3D { DstImplicit|SrcMem|ModRM, simd_any_fp, d8s_vl }, - [0x60 ... 0x62] =3D { DstImplicit|SrcMem|ModRM, simd_other, d8s_vl }, - [0x63 ... 0x67] =3D { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_v= l }, - [0x68 ... 0x6a] =3D { DstImplicit|SrcMem|ModRM, simd_other, d8s_vl }, - [0x6b ... 0x6d] =3D { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_v= l }, - [0x6e] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_none, d8s_dq64 }, - [0x6f] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_packed_int, d8s_vl }, - [0x70] =3D { SrcImmByte|ModRM|TwoOp, simd_other, d8s_vl }, - [0x71 ... 0x73] =3D { DstImplicit|SrcImmByte|ModRM, simd_none, d8s_vl = }, - [0x74 ... 0x76] =3D { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_v= l }, - [0x77] =3D { DstImplicit|SrcNone }, - [0x78 ... 
0x79] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_other, d8s_vl= }, - [0x7a] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_packed_fp, d8s_vl }, - [0x7b] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_other, d8s_dq64 }, - [0x7c ... 0x7d] =3D { DstImplicit|SrcMem|ModRM, simd_other }, - [0x7e] =3D { DstMem|SrcImplicit|ModRM|Mov, simd_none, d8s_dq64 }, - [0x7f] =3D { DstMem|SrcImplicit|ModRM|Mov, simd_packed_int, d8s_vl }, - [0x80 ... 0x8f] =3D { DstImplicit|SrcImm }, - [0x90 ... 0x9f] =3D { ByteOp|DstMem|SrcNone|ModRM|Mov }, - [0xa0 ... 0xa1] =3D { ImplicitOps|Mov }, - [0xa2] =3D { ImplicitOps }, - [0xa3] =3D { DstBitBase|SrcReg|ModRM }, - [0xa4] =3D { DstMem|SrcImmByte|ModRM }, - [0xa5] =3D { DstMem|SrcReg|ModRM }, - [0xa6 ... 0xa7] =3D { ModRM }, - [0xa8 ... 0xa9] =3D { ImplicitOps|Mov }, - [0xaa] =3D { ImplicitOps }, - [0xab] =3D { DstBitBase|SrcReg|ModRM }, - [0xac] =3D { DstMem|SrcImmByte|ModRM }, - [0xad] =3D { DstMem|SrcReg|ModRM }, - [0xae] =3D { ImplicitOps|ModRM }, - [0xaf] =3D { DstReg|SrcMem|ModRM }, - [0xb0] =3D { ByteOp|DstMem|SrcReg|ModRM }, - [0xb1] =3D { DstMem|SrcReg|ModRM }, - [0xb2] =3D { DstReg|SrcMem|ModRM|Mov }, - [0xb3] =3D { DstBitBase|SrcReg|ModRM }, - [0xb4 ... 0xb5] =3D { DstReg|SrcMem|ModRM|Mov }, - [0xb6] =3D { ByteOp|DstReg|SrcMem|ModRM|Mov }, - [0xb7] =3D { DstReg|SrcMem16|ModRM|Mov }, - [0xb8] =3D { DstReg|SrcMem|ModRM }, - [0xb9] =3D { ModRM }, - [0xba] =3D { DstBitBase|SrcImmByte|ModRM }, - [0xbb] =3D { DstBitBase|SrcReg|ModRM }, - [0xbc ... 0xbd] =3D { DstReg|SrcMem|ModRM }, - [0xbe] =3D { ByteOp|DstReg|SrcMem|ModRM|Mov }, - [0xbf] =3D { DstReg|SrcMem16|ModRM|Mov }, - [0xc0] =3D { ByteOp|DstMem|SrcReg|ModRM }, - [0xc1] =3D { DstMem|SrcReg|ModRM }, - [0xc2] =3D { DstImplicit|SrcImmByte|ModRM, simd_any_fp, d8s_vl }, - [0xc3] =3D { DstMem|SrcReg|ModRM|Mov }, - [0xc4] =3D { DstImplicit|SrcImmByte|ModRM, simd_none, 1 }, - [0xc5] =3D { DstReg|SrcImmByte|ModRM|Mov }, - [0xc6] =3D { DstImplicit|SrcImmByte|ModRM, simd_packed_fp, d8s_vl }, - [0xc7] =3D { ImplicitOps|ModRM }, - [0xc8 ... 0xcf] =3D { ImplicitOps }, - [0xd0] =3D { DstImplicit|SrcMem|ModRM, simd_other }, - [0xd1 ... 0xd3] =3D { DstImplicit|SrcMem|ModRM, simd_128, 4 }, - [0xd4 ... 0xd5] =3D { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_v= l }, - [0xd6] =3D { DstMem|SrcImplicit|ModRM|Mov, simd_other, 3 }, - [0xd7] =3D { DstReg|SrcImplicit|ModRM|Mov }, - [0xd8 ... 0xdf] =3D { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_v= l }, - [0xe0] =3D { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_vl }, - [0xe1 ... 0xe2] =3D { DstImplicit|SrcMem|ModRM, simd_128, 4 }, - [0xe3 ... 0xe5] =3D { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_v= l }, - [0xe6] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_packed_fp, d8s_vl }, - [0xe7] =3D { DstMem|SrcImplicit|ModRM|Mov, simd_packed_int, d8s_vl }, - [0xe8 ... 0xef] =3D { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_v= l }, - [0xf0] =3D { DstImplicit|SrcMem|ModRM|Mov, simd_other }, - [0xf1 ... 0xf3] =3D { DstImplicit|SrcMem|ModRM, simd_128, 4 }, - [0xf4 ... 0xf6] =3D { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_v= l }, - [0xf7] =3D { DstMem|SrcMem|ModRM|Mov, simd_packed_int }, - [0xf8 ... 
0xfe] =3D { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_v= l }, - [0xff] =3D { ModRM } -}; - /* * The next two tables are indexed by high opcode extension byte (the one * that's encoded like an immediate) nibble, with each table element then @@ -325,257 +57,9 @@ static const uint16_t _3dnow_ext_table[1 [0xb] =3D (1 << 0xb) /* pswapd */, }; =20 -/* - * "two_op" and "four_op" below refer to the number of register operands - * (one of which possibly also allowing to be a memory one). The named - * operand counts do not include any immediate operands. - */ -static const struct ext0f38_table { - uint8_t simd_size:5; - uint8_t to_mem:1; - uint8_t two_op:1; - uint8_t vsib:1; - disp8scale_t d8s:4; -} ext0f38_table[256] =3D { - [0x00] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, - [0x01 ... 0x03] =3D { .simd_size =3D simd_packed_int }, - [0x04] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, - [0x05 ... 0x0a] =3D { .simd_size =3D simd_packed_int }, - [0x0b] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, - [0x0c ... 0x0d] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, - [0x0e ... 0x0f] =3D { .simd_size =3D simd_packed_fp }, - [0x10 ... 0x12] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, - [0x13] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_2 }, - [0x14 ... 0x16] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, - [0x17] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, - [0x18] =3D { .simd_size =3D simd_scalar_opc, .two_op =3D 1, .d8s =3D 2= }, - [0x19] =3D { .simd_size =3D simd_scalar_opc, .two_op =3D 1, .d8s =3D 3= }, - [0x1a] =3D { .simd_size =3D simd_128, .two_op =3D 1, .d8s =3D 4 }, - [0x1b] =3D { .simd_size =3D simd_256, .two_op =3D 1, .d8s =3D d8s_vl_b= y_2 }, - [0x1c ... 0x1f] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1, .= d8s =3D d8s_vl }, - [0x20] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_2 }, - [0x21] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_4 }, - [0x22] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_8 }, - [0x23] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_2 }, - [0x24] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_4 }, - [0x25] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_2 }, - [0x26 ... 0x29] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, - [0x2a] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1, .d8s =3D d= 8s_vl }, - [0x2b] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, - [0x2c] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, - [0x2d] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_dq }, - [0x2e ... 0x2f] =3D { .simd_size =3D simd_packed_fp, .to_mem =3D 1 }, - [0x30] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_2 }, - [0x31] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_4 }, - [0x32] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_8 }, - [0x33] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_2 }, - [0x34] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_4 }, - [0x35] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D d8s_vl= _by_2 }, - [0x36 ... 
0x3f] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, - [0x40] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, - [0x41] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, - [0x42] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d8s =3D d8= s_vl }, - [0x43] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, - [0x44] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1, .d8s =3D d= 8s_vl }, - [0x45 ... 0x47] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, - [0x4c] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d8s =3D d8= s_vl }, - [0x4d] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, - [0x4e] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d8s =3D d8= s_vl }, - [0x4f] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, - [0x50 ... 0x53] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, - [0x54 ... 0x55] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1, .= d8s =3D d8s_vl }, - [0x58] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D 2 }, - [0x59] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D 3 }, - [0x5a] =3D { .simd_size =3D simd_128, .two_op =3D 1, .d8s =3D 4 }, - [0x5b] =3D { .simd_size =3D simd_256, .two_op =3D 1, .d8s =3D d8s_vl_b= y_2 }, - [0x62] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1, .d8s =3D d= 8s_bw }, - [0x63] =3D { .simd_size =3D simd_packed_int, .to_mem =3D 1, .two_op = =3D 1, .d8s =3D d8s_bw }, - [0x64 ... 0x66] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, - [0x68] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, - [0x70 ... 0x73] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, - [0x75 ... 0x76] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, - [0x77] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, - [0x78] =3D { .simd_size =3D simd_other, .two_op =3D 1 }, - [0x79] =3D { .simd_size =3D simd_other, .two_op =3D 1, .d8s =3D 1 }, - [0x7a ... 0x7c] =3D { .simd_size =3D simd_none, .two_op =3D 1 }, - [0x7d ... 0x7e] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, - [0x7f] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, - [0x82] =3D { .simd_size =3D simd_other }, - [0x83] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, - [0x88] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d8s =3D d8= s_dq }, - [0x89] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1, .d8s =3D d= 8s_dq }, - [0x8a] =3D { .simd_size =3D simd_packed_fp, .to_mem =3D 1, .two_op =3D= 1, .d8s =3D d8s_dq }, - [0x8b] =3D { .simd_size =3D simd_packed_int, .to_mem =3D 1, .two_op = =3D 1, .d8s =3D d8s_dq }, - [0x8c] =3D { .simd_size =3D simd_packed_int }, - [0x8d] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, - [0x8e] =3D { .simd_size =3D simd_packed_int, .to_mem =3D 1 }, - [0x8f] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, - [0x90 ... 0x93] =3D { .simd_size =3D simd_other, .vsib =3D 1, .d8s =3D= d8s_dq }, - [0x96 ... 0x98] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, - [0x99] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, - [0x9a] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, - [0x9b] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, - [0x9c] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, - [0x9d] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, - [0x9e] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, - [0x9f] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, - [0xa0 ... 
0xa3] =3D { .simd_size =3D simd_other, .to_mem =3D 1, .vsib = =3D 1, .d8s =3D d8s_dq }, - [0xa6 ... 0xa8] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, - [0xa9] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, - [0xaa] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, - [0xab] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, - [0xac] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, - [0xad] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, - [0xae] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, - [0xaf] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, - [0xb4 ... 0xb5] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, - [0xb6 ... 0xb8] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, - [0xb9] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, - [0xba] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, - [0xbb] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, - [0xbc] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, - [0xbd] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, - [0xbe] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, - [0xbf] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, - [0xc4] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1, .d8s =3D d= 8s_vl }, - [0xc6 ... 0xc7] =3D { .simd_size =3D simd_other, .vsib =3D 1, .d8s =3D= d8s_dq }, - [0xc8] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d8s =3D d8= s_vl }, - [0xc9] =3D { .simd_size =3D simd_other }, - [0xca] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d8s =3D d8= s_vl }, - [0xcb] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, - [0xcc] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d8s =3D d8= s_vl }, - [0xcd] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, - [0xcf] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, - [0xdb] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, - [0xdc ... 0xdf] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, - [0xf0] =3D { .two_op =3D 1 }, - [0xf1] =3D { .to_mem =3D 1, .two_op =3D 1 }, - [0xf2 ... 0xf3] =3D {}, - [0xf5 ... 0xf7] =3D {}, - [0xf8] =3D { .simd_size =3D simd_other }, - [0xf9] =3D { .to_mem =3D 1, .two_op =3D 1 /* Mov */ }, -}; - /* Shift values between src and dst sizes of pmov{s,z}x{b,w,d}{w,d,q}. */ static const uint8_t pmov_convert_delta[] =3D { 1, 2, 3, 1, 2, 1 }; =20 -static const struct ext0f3a_table { - uint8_t simd_size:5; - uint8_t to_mem:1; - uint8_t two_op:1; - uint8_t four_op:1; - disp8scale_t d8s:4; -} ext0f3a_table[256] =3D { - [0x00] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1, .d8s =3D d= 8s_vl }, - [0x01] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d8s =3D d8= s_vl }, - [0x02] =3D { .simd_size =3D simd_packed_int }, - [0x03] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, - [0x04 ... 0x05] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d= 8s =3D d8s_vl }, - [0x06] =3D { .simd_size =3D simd_packed_fp }, - [0x08 ... 0x09] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d= 8s =3D d8s_vl }, - [0x0a ... 0x0b] =3D { .simd_size =3D simd_scalar_opc, .d8s =3D d8s_dq = }, - [0x0c ... 
0x0d] =3D { .simd_size =3D simd_packed_fp }, - [0x0e] =3D { .simd_size =3D simd_packed_int }, - [0x0f] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, - [0x14] =3D { .simd_size =3D simd_none, .to_mem =3D 1, .two_op =3D 1, .= d8s =3D 0 }, - [0x15] =3D { .simd_size =3D simd_none, .to_mem =3D 1, .two_op =3D 1, .= d8s =3D 1 }, - [0x16] =3D { .simd_size =3D simd_none, .to_mem =3D 1, .two_op =3D 1, .= d8s =3D d8s_dq64 }, - [0x17] =3D { .simd_size =3D simd_none, .to_mem =3D 1, .two_op =3D 1, .= d8s =3D 2 }, - [0x18] =3D { .simd_size =3D simd_128, .d8s =3D 4 }, - [0x19] =3D { .simd_size =3D simd_128, .to_mem =3D 1, .two_op =3D 1, .d= 8s =3D 4 }, - [0x1a] =3D { .simd_size =3D simd_256, .d8s =3D d8s_vl_by_2 }, - [0x1b] =3D { .simd_size =3D simd_256, .to_mem =3D 1, .two_op =3D 1, .d= 8s =3D d8s_vl_by_2 }, - [0x1d] =3D { .simd_size =3D simd_other, .to_mem =3D 1, .two_op =3D 1, = .d8s =3D d8s_vl_by_2 }, - [0x1e ... 0x1f] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, - [0x20] =3D { .simd_size =3D simd_none, .d8s =3D 0 }, - [0x21] =3D { .simd_size =3D simd_other, .d8s =3D 2 }, - [0x22] =3D { .simd_size =3D simd_none, .d8s =3D d8s_dq64 }, - [0x23] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, - [0x25] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, - [0x26] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d8s =3D d8= s_vl }, - [0x27] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, - [0x30 ... 0x33] =3D { .simd_size =3D simd_other, .two_op =3D 1 }, - [0x38] =3D { .simd_size =3D simd_128, .d8s =3D 4 }, - [0x3a] =3D { .simd_size =3D simd_256, .d8s =3D d8s_vl_by_2 }, - [0x39] =3D { .simd_size =3D simd_128, .to_mem =3D 1, .two_op =3D 1, .d= 8s =3D 4 }, - [0x3b] =3D { .simd_size =3D simd_256, .to_mem =3D 1, .two_op =3D 1, .d= 8s =3D d8s_vl_by_2 }, - [0x3e ... 0x3f] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, - [0x40 ... 0x41] =3D { .simd_size =3D simd_packed_fp }, - [0x42 ... 0x43] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, - [0x44] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl }, - [0x46] =3D { .simd_size =3D simd_packed_int }, - [0x48 ... 0x49] =3D { .simd_size =3D simd_packed_fp, .four_op =3D 1 }, - [0x4a ... 0x4b] =3D { .simd_size =3D simd_packed_fp, .four_op =3D 1 }, - [0x4c] =3D { .simd_size =3D simd_packed_int, .four_op =3D 1 }, - [0x50] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, - [0x51] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, - [0x54] =3D { .simd_size =3D simd_packed_fp, .d8s =3D d8s_vl }, - [0x55] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, - [0x56] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d8s =3D d8= s_vl }, - [0x57] =3D { .simd_size =3D simd_scalar_vexw, .d8s =3D d8s_dq }, - [0x5c ... 0x5f] =3D { .simd_size =3D simd_packed_fp, .four_op =3D 1 }, - [0x60 ... 0x63] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, - [0x66] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1, .d8s =3D d8= s_vl }, - [0x67] =3D { .simd_size =3D simd_scalar_vexw, .two_op =3D 1, .d8s =3D = d8s_dq }, - [0x68 ... 0x69] =3D { .simd_size =3D simd_packed_fp, .four_op =3D 1 }, - [0x6a ... 0x6b] =3D { .simd_size =3D simd_scalar_opc, .four_op =3D 1 }, - [0x6c ... 0x6d] =3D { .simd_size =3D simd_packed_fp, .four_op =3D 1 }, - [0x6e ... 0x6f] =3D { .simd_size =3D simd_scalar_opc, .four_op =3D 1 }, - [0x70 ... 0x73] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, - [0x78 ... 0x79] =3D { .simd_size =3D simd_packed_fp, .four_op =3D 1 }, - [0x7a ... 
0x7b] =3D { .simd_size =3D simd_scalar_opc, .four_op =3D 1 }, - [0x7c ... 0x7d] =3D { .simd_size =3D simd_packed_fp, .four_op =3D 1 }, - [0x7e ... 0x7f] =3D { .simd_size =3D simd_scalar_opc, .four_op =3D 1 }, - [0xcc] =3D { .simd_size =3D simd_other }, - [0xce ... 0xcf] =3D { .simd_size =3D simd_packed_int, .d8s =3D d8s_vl = }, - [0xdf] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, - [0xf0] =3D {}, -}; - -static const opcode_desc_t xop_table[] =3D { - DstReg|SrcImmByte|ModRM, - DstReg|SrcMem|ModRM, - DstReg|SrcImm|ModRM, -}; - -static const struct ext8f08_table { - uint8_t simd_size:5; - uint8_t two_op:1; - uint8_t four_op:1; -} ext8f08_table[256] =3D { - [0xa2] =3D { .simd_size =3D simd_packed_int, .four_op =3D 1 }, - [0x85 ... 0x87] =3D { .simd_size =3D simd_packed_int, .four_op =3D 1 }, - [0x8e ... 0x8f] =3D { .simd_size =3D simd_packed_int, .four_op =3D 1 }, - [0x95 ... 0x97] =3D { .simd_size =3D simd_packed_int, .four_op =3D 1 }, - [0x9e ... 0x9f] =3D { .simd_size =3D simd_packed_int, .four_op =3D 1 }, - [0xa3] =3D { .simd_size =3D simd_packed_int, .four_op =3D 1 }, - [0xa6] =3D { .simd_size =3D simd_packed_int, .four_op =3D 1 }, - [0xb6] =3D { .simd_size =3D simd_packed_int, .four_op =3D 1 }, - [0xc0 ... 0xc3] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, - [0xcc ... 0xcf] =3D { .simd_size =3D simd_packed_int }, - [0xec ... 0xef] =3D { .simd_size =3D simd_packed_int }, -}; - -static const struct ext8f09_table { - uint8_t simd_size:5; - uint8_t two_op:1; -} ext8f09_table[256] =3D { - [0x01 ... 0x02] =3D { .two_op =3D 1 }, - [0x80 ... 0x81] =3D { .simd_size =3D simd_packed_fp, .two_op =3D 1 }, - [0x82 ... 0x83] =3D { .simd_size =3D simd_scalar_opc, .two_op =3D 1 }, - [0x90 ... 0x9b] =3D { .simd_size =3D simd_packed_int }, - [0xc1 ... 0xc3] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, - [0xc6 ... 0xc7] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, - [0xcb] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, - [0xd1 ... 0xd3] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, - [0xd6 ... 0xd7] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, - [0xdb] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, - [0xe1 ... 0xe3] =3D { .simd_size =3D simd_packed_int, .two_op =3D 1 }, -}; - -#define VEX_PREFIX_DOUBLE_MASK 0x1 -#define VEX_PREFIX_SCALAR_MASK 0x2 - static const uint8_t sse_prefix[] =3D { 0x66, 0xf3, 0xf2 }; =20 #ifdef __x86_64__ @@ -637,12 +121,6 @@ static const uint8_t sse_prefix[] =3D { 0x #define repe_prefix() (vex.pfx =3D=3D vex_f3) #define repne_prefix() (vex.pfx =3D=3D vex_f2) =20 -#ifdef __x86_64__ -#define PTR_POISON ((void *)0x8086000000008086UL) /* non-canonical */ -#else -#define PTR_POISON NULL /* 32-bit builds are for user-space, so NULL is OK= . */ -#endif - /* * While proper alignment gets specified in mmval_t, this doesn't get hono= red * by the compiler for automatic variables. Use this helper to instantiate= a @@ -831,19 +309,6 @@ do{ asm volatile ( : [msk] "i" (EFLAGS_MASK), ## src); \ } while (0) =20 -/* Fetch next part of the instruction being emulated. 
*/ -#define insn_fetch_bytes(_size) \ -({ unsigned long _x =3D 0, _ip =3D state->ip; = \ - state->ip +=3D (_size); /* real hardware doesn't truncate */ \ - generate_exception_if((uint8_t)(state->ip - \ - ctxt->regs->r(ip)) > MAX_INST_LEN, \ - EXC_GP, 0); \ - rc =3D ops->insn_fetch(x86_seg_cs, _ip, &_x, (_size), ctxt); \ - if ( rc ) goto done; \ - _x; \ -}) -#define insn_fetch_type(_type) ((_type)insn_fetch_bytes(sizeof(_type))) - /* * Given byte has even parity (even number of 1s)? SDM Vol. 1 Sec. 3.4.3.1, * "Status Flags": EFLAGS.PF reflects parity of least-sig. byte of result = only. @@ -1354,13 +819,6 @@ static int ioport_access_check( return rc; } =20 -/* Initialise output state in x86_emulate_ctxt */ -static void init_context(struct x86_emulate_ctxt *ctxt) -{ - ctxt->retire.raw =3D 0; - x86_emul_reset_event(ctxt); -} - static int realmode_load_seg( enum x86_segment seg, @@ -1707,51 +1165,6 @@ static unsigned long *decode_vex_gpr( return decode_gpr(regs, ~vex_reg & (mode_64bit() ? 0xf : 7)); } =20 -static unsigned int decode_disp8scale(enum disp8scale scale, - const struct x86_emulate_state *stat= e) -{ - switch ( scale ) - { - case d8s_bw: - return state->evex.w; - - default: - if ( scale < d8s_vl ) - return scale; - if ( state->evex.brs ) - { - case d8s_dq: - return 2 + state->evex.w; - } - break; - - case d8s_dq64: - return 2 + (state->op_bytes =3D=3D 8); - } - - switch ( state->simd_size ) - { - case simd_any_fp: - case simd_single_fp: - if ( !(state->evex.pfx & VEX_PREFIX_SCALAR_MASK) ) - break; - /* fall through */ - case simd_scalar_opc: - case simd_scalar_vexw: - return 2 + state->evex.w; - - case simd_128: - /* These should have an explicit size specified. */ - ASSERT_UNREACHABLE(); - return 4; - - default: - break; - } - - return 4 + state->evex.lr - (scale - d8s_vl); -} - #define avx512_vlen_check(lig) do { \ switch ( evex.lr ) \ { \ @@ -1833,1138 +1246,6 @@ int x86emul_unhandleable_rw( #define evex_encoded() (evex.mbs) #define ea (state->ea) =20 -static int -x86_decode_onebyte( - struct x86_emulate_state *state, - struct x86_emulate_ctxt *ctxt, - const struct x86_emulate_ops *ops) -{ - int rc =3D X86EMUL_OKAY; - - switch ( ctxt->opcode ) - { - case 0x06: /* push %%es */ - case 0x07: /* pop %%es */ - case 0x0e: /* push %%cs */ - case 0x16: /* push %%ss */ - case 0x17: /* pop %%ss */ - case 0x1e: /* push %%ds */ - case 0x1f: /* pop %%ds */ - case 0x27: /* daa */ - case 0x2f: /* das */ - case 0x37: /* aaa */ - case 0x3f: /* aas */ - case 0x60: /* pusha */ - case 0x61: /* popa */ - case 0x62: /* bound */ - case 0xc4: /* les */ - case 0xc5: /* lds */ - case 0xce: /* into */ - case 0xd4: /* aam */ - case 0xd5: /* aad */ - case 0xd6: /* salc */ - state->not_64bit =3D true; - break; - - case 0x82: /* Grp1 (x86/32 only) */ - state->not_64bit =3D true; - /* fall through */ - case 0x80: case 0x81: case 0x83: /* Grp1 */ - if ( (modrm_reg & 7) =3D=3D 7 ) /* cmp */ - state->desc =3D (state->desc & ByteOp) | DstNone | SrcMem; - break; - - case 0x90: /* nop / pause */ - if ( repe_prefix() ) - ctxt->opcode |=3D X86EMUL_OPC_F3(0, 0); - break; - - case 0x9a: /* call (far, absolute) */ - case 0xea: /* jmp (far, absolute) */ - generate_exception_if(mode_64bit(), EXC_UD); - - imm1 =3D insn_fetch_bytes(op_bytes); - imm2 =3D insn_fetch_type(uint16_t); - break; - - case 0xa0: case 0xa1: /* mov mem.offs,{%al,%ax,%eax,%rax} */ - case 0xa2: case 0xa3: /* mov {%al,%ax,%eax,%rax},mem.offs */ - /* Source EA is not encoded via ModRM. 
*/ - ea.type =3D OP_MEM; - ea.mem.off =3D insn_fetch_bytes(ad_bytes); - break; - - case 0xb8 ... 0xbf: /* mov imm{16,32,64},r{16,32,64} */ - if ( op_bytes =3D=3D 8 ) /* Fetch more bytes to obtain imm64. */ - imm1 =3D ((uint32_t)imm1 | - ((uint64_t)insn_fetch_type(uint32_t) << 32)); - break; - - case 0xc8: /* enter imm16,imm8 */ - imm2 =3D insn_fetch_type(uint8_t); - break; - - case 0xf6: case 0xf7: /* Grp3 */ - if ( !(modrm_reg & 6) ) /* test */ - state->desc =3D (state->desc & ByteOp) | DstNone | SrcMem; - break; - - case 0xff: /* Grp5 */ - switch ( modrm_reg & 7 ) - { - case 2: /* call (near) */ - case 4: /* jmp (near) */ - if ( mode_64bit() && (op_bytes =3D=3D 4 || !amd_like(ctxt)) ) - op_bytes =3D 8; - state->desc =3D DstNone | SrcMem | Mov; - break; - - case 3: /* call (far, absolute indirect) */ - case 5: /* jmp (far, absolute indirect) */ - /* REX.W ignored on a vendor-dependent basis. */ - if ( op_bytes =3D=3D 8 && amd_like(ctxt) ) - op_bytes =3D 4; - state->desc =3D DstNone | SrcMem | Mov; - break; - - case 6: /* push */ - if ( mode_64bit() && op_bytes =3D=3D 4 ) - op_bytes =3D 8; - state->desc =3D DstNone | SrcMem | Mov; - break; - } - break; - } - - done: - return rc; -} - -static int -x86_decode_twobyte( - struct x86_emulate_state *state, - struct x86_emulate_ctxt *ctxt, - const struct x86_emulate_ops *ops) -{ - int rc =3D X86EMUL_OKAY; - - switch ( ctxt->opcode & X86EMUL_OPC_MASK ) - { - case 0x00: /* Grp6 */ - switch ( modrm_reg & 6 ) - { - case 0: - state->desc |=3D DstMem | SrcImplicit | Mov; - break; - case 2: case 4: - state->desc |=3D SrcMem16; - break; - } - break; - - case 0x78: - state->desc =3D ImplicitOps; - state->simd_size =3D simd_none; - switch ( vex.pfx ) - { - case vex_66: /* extrq $imm8, $imm8, xmm */ - case vex_f2: /* insertq $imm8, $imm8, xmm, xmm */ - imm1 =3D insn_fetch_type(uint8_t); - imm2 =3D insn_fetch_type(uint8_t); - break; - } - /* fall through */ - case 0x10 ... 0x18: - case 0x28 ... 0x2f: - case 0x50 ... 0x77: - case 0x7a ... 0x7d: - case 0x7f: - case 0xc2 ... 0xc3: - case 0xc5 ... 0xc6: - case 0xd0 ... 0xef: - case 0xf1 ... 0xfe: - ctxt->opcode |=3D MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK); - break; - - case 0x20: case 0x22: /* mov to/from cr */ - if ( lock_prefix && vcpu_has_cr8_legacy() ) - { - modrm_reg +=3D 8; - lock_prefix =3D false; - } - /* fall through */ - case 0x21: case 0x23: /* mov to/from dr */ - ASSERT(ea.type =3D=3D OP_REG); /* Early operand adjustment ensures= this. */ - generate_exception_if(lock_prefix, EXC_UD); - op_bytes =3D mode_64bit() ? 8 : 4; - break; - - case 0x79: - state->desc =3D DstReg | SrcMem; - state->simd_size =3D simd_packed_int; - ctxt->opcode |=3D MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK); - break; - - case 0x7e: - ctxt->opcode |=3D MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK); - if ( vex.pfx =3D=3D vex_f3 ) /* movq xmm/m64,xmm */ - { - case X86EMUL_OPC_VEX_F3(0, 0x7e): /* vmovq xmm/m64,xmm */ - case X86EMUL_OPC_EVEX_F3(0, 0x7e): /* vmovq xmm/m64,xmm */ - state->desc =3D DstImplicit | SrcMem | TwoOp; - state->simd_size =3D simd_other; - /* Avoid the state->desc clobbering of TwoOp below. 
*/ - return X86EMUL_OKAY; - } - break; - - case X86EMUL_OPC_VEX(0, 0x90): /* kmov{w,q} */ - case X86EMUL_OPC_VEX_66(0, 0x90): /* kmov{b,d} */ - state->desc =3D DstReg | SrcMem | Mov; - state->simd_size =3D simd_other; - break; - - case X86EMUL_OPC_VEX(0, 0x91): /* kmov{w,q} */ - case X86EMUL_OPC_VEX_66(0, 0x91): /* kmov{b,d} */ - state->desc =3D DstMem | SrcReg | Mov; - state->simd_size =3D simd_other; - break; - - case 0xae: - ctxt->opcode |=3D MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK); - /* fall through */ - case X86EMUL_OPC_VEX(0, 0xae): - switch ( modrm_reg & 7 ) - { - case 2: /* {,v}ldmxcsr */ - state->desc =3D DstImplicit | SrcMem | Mov; - op_bytes =3D 4; - break; - - case 3: /* {,v}stmxcsr */ - state->desc =3D DstMem | SrcImplicit | Mov; - op_bytes =3D 4; - break; - } - break; - - case 0xb2: /* lss */ - case 0xb4: /* lfs */ - case 0xb5: /* lgs */ - /* REX.W ignored on a vendor-dependent basis. */ - if ( op_bytes =3D=3D 8 && amd_like(ctxt) ) - op_bytes =3D 4; - break; - - case 0xb8: /* jmpe / popcnt */ - if ( rep_prefix() ) - ctxt->opcode |=3D MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK); - break; - - /* Intentionally not handling here despite being modified by F3: - case 0xbc: bsf / tzcnt - case 0xbd: bsr / lzcnt - * They're being dealt with in the execution phase (if at all). - */ - - case 0xc4: /* pinsrw */ - ctxt->opcode |=3D MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK); - /* fall through */ - case X86EMUL_OPC_VEX_66(0, 0xc4): /* vpinsrw */ - case X86EMUL_OPC_EVEX_66(0, 0xc4): /* vpinsrw */ - state->desc =3D DstImplicit | SrcMem16; - break; - - case 0xf0: - ctxt->opcode |=3D MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK); - if ( vex.pfx =3D=3D vex_f2 ) /* lddqu mem,xmm */ - { - /* fall through */ - case X86EMUL_OPC_VEX_F2(0, 0xf0): /* vlddqu mem,{x,y}mm */ - state->desc =3D DstImplicit | SrcMem | TwoOp; - state->simd_size =3D simd_other; - /* Avoid the state->desc clobbering of TwoOp below. */ - return X86EMUL_OKAY; - } - break; - } - - /* - * Scalar forms of most VEX-/EVEX-encoded TwoOp instructions have - * three operands. Those which do really have two operands - * should have exited earlier. - */ - if ( state->simd_size && vex.opcx && - (vex.pfx & VEX_PREFIX_SCALAR_MASK) ) - state->desc &=3D ~TwoOp; - - done: - return rc; -} - -static int -x86_decode_0f38( - struct x86_emulate_state *state, - struct x86_emulate_ctxt *ctxt, - const struct x86_emulate_ops *ops) -{ - switch ( ctxt->opcode & X86EMUL_OPC_MASK ) - { - case 0x00 ... 0xef: - case 0xf2 ... 0xf5: - case 0xf7 ... 0xf8: - case 0xfa ... 0xff: - op_bytes =3D 0; - /* fall through */ - case 0xf6: /* adcx / adox */ - case 0xf9: /* movdiri */ - ctxt->opcode |=3D MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK); - break; - - case X86EMUL_OPC_EVEX_66(0, 0x2d): /* vscalefs{s,d} */ - state->simd_size =3D simd_scalar_vexw; - break; - - case X86EMUL_OPC_EVEX_66(0, 0x7a): /* vpbroadcastb */ - case X86EMUL_OPC_EVEX_66(0, 0x7b): /* vpbroadcastw */ - case X86EMUL_OPC_EVEX_66(0, 0x7c): /* vpbroadcast{d,q} */ - break; - - case 0xf0: /* movbe / crc32 */ - state->desc |=3D repne_prefix() ? 
ByteOp : Mov; - if ( rep_prefix() ) - ctxt->opcode |=3D MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK); - break; - - case 0xf1: /* movbe / crc32 */ - if ( repne_prefix() ) - state->desc =3D DstReg | SrcMem; - if ( rep_prefix() ) - ctxt->opcode |=3D MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK); - break; - - case X86EMUL_OPC_VEX(0, 0xf2): /* andn */ - case X86EMUL_OPC_VEX(0, 0xf3): /* Grp 17 */ - case X86EMUL_OPC_VEX(0, 0xf5): /* bzhi */ - case X86EMUL_OPC_VEX_F3(0, 0xf5): /* pext */ - case X86EMUL_OPC_VEX_F2(0, 0xf5): /* pdep */ - case X86EMUL_OPC_VEX_F2(0, 0xf6): /* mulx */ - case X86EMUL_OPC_VEX(0, 0xf7): /* bextr */ - case X86EMUL_OPC_VEX_66(0, 0xf7): /* shlx */ - case X86EMUL_OPC_VEX_F3(0, 0xf7): /* sarx */ - case X86EMUL_OPC_VEX_F2(0, 0xf7): /* shrx */ - break; - - default: - op_bytes =3D 0; - break; - } - - return X86EMUL_OKAY; -} - -static int -x86_decode_0f3a( - struct x86_emulate_state *state, - struct x86_emulate_ctxt *ctxt, - const struct x86_emulate_ops *ops) -{ - if ( !vex.opcx ) - ctxt->opcode |=3D MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK); - - switch ( ctxt->opcode & X86EMUL_OPC_MASK ) - { - case X86EMUL_OPC_66(0, 0x14) - ... X86EMUL_OPC_66(0, 0x17): /* pextr*, extractps */ - case X86EMUL_OPC_VEX_66(0, 0x14) - ... X86EMUL_OPC_VEX_66(0, 0x17): /* vpextr*, vextractps */ - case X86EMUL_OPC_EVEX_66(0, 0x14) - ... X86EMUL_OPC_EVEX_66(0, 0x17): /* vpextr*, vextractps */ - case X86EMUL_OPC_VEX_F2(0, 0xf0): /* rorx */ - break; - - case X86EMUL_OPC_66(0, 0x20): /* pinsrb */ - case X86EMUL_OPC_VEX_66(0, 0x20): /* vpinsrb */ - case X86EMUL_OPC_EVEX_66(0, 0x20): /* vpinsrb */ - state->desc =3D DstImplicit | SrcMem; - if ( modrm_mod !=3D 3 ) - state->desc |=3D ByteOp; - break; - - case X86EMUL_OPC_66(0, 0x22): /* pinsr{d,q} */ - case X86EMUL_OPC_VEX_66(0, 0x22): /* vpinsr{d,q} */ - case X86EMUL_OPC_EVEX_66(0, 0x22): /* vpinsr{d,q} */ - state->desc =3D DstImplicit | SrcMem; - break; - - default: - op_bytes =3D 0; - break; - } - - return X86EMUL_OKAY; -} - -static int -x86_decode( - struct x86_emulate_state *state, - struct x86_emulate_ctxt *ctxt, - const struct x86_emulate_ops *ops) -{ - uint8_t b, d; - unsigned int def_op_bytes, def_ad_bytes, opcode; - enum x86_segment override_seg =3D x86_seg_none; - bool pc_rel =3D false; - int rc =3D X86EMUL_OKAY; - - ASSERT(ops->insn_fetch); - - memset(state, 0, sizeof(*state)); - ea.type =3D OP_NONE; - ea.mem.seg =3D x86_seg_ds; - ea.reg =3D PTR_POISON; - state->regs =3D ctxt->regs; - state->ip =3D ctxt->regs->r(ip); - - op_bytes =3D def_op_bytes =3D ad_bytes =3D def_ad_bytes =3D ctxt->addr= _size/8; - if ( op_bytes =3D=3D 8 ) - { - op_bytes =3D def_op_bytes =3D 4; -#ifndef __x86_64__ - return X86EMUL_UNHANDLEABLE; -#endif - } - - /* Prefix bytes. */ - for ( ; ; ) - { - switch ( b =3D insn_fetch_type(uint8_t) ) - { - case 0x66: /* operand-size override */ - op_bytes =3D def_op_bytes ^ 6; - if ( !vex.pfx ) - vex.pfx =3D vex_66; - break; - case 0x67: /* address-size override */ - ad_bytes =3D def_ad_bytes ^ (mode_64bit() ? 
-            break;
-        case 0x2e: /* CS override / ignored in 64-bit mode */
-            if ( !mode_64bit() )
-                override_seg = x86_seg_cs;
-            break;
-        case 0x3e: /* DS override / ignored in 64-bit mode */
-            if ( !mode_64bit() )
-                override_seg = x86_seg_ds;
-            break;
-        case 0x26: /* ES override / ignored in 64-bit mode */
-            if ( !mode_64bit() )
-                override_seg = x86_seg_es;
-            break;
-        case 0x64: /* FS override */
-            override_seg = x86_seg_fs;
-            break;
-        case 0x65: /* GS override */
-            override_seg = x86_seg_gs;
-            break;
-        case 0x36: /* SS override / ignored in 64-bit mode */
-            if ( !mode_64bit() )
-                override_seg = x86_seg_ss;
-            break;
-        case 0xf0: /* LOCK */
-            lock_prefix = 1;
-            break;
-        case 0xf2: /* REPNE/REPNZ */
-            vex.pfx = vex_f2;
-            break;
-        case 0xf3: /* REP/REPE/REPZ */
-            vex.pfx = vex_f3;
-            break;
-        case 0x40 ... 0x4f: /* REX */
-            if ( !mode_64bit() )
-                goto done_prefixes;
-            rex_prefix = b;
-            continue;
-        default:
-            goto done_prefixes;
-        }
-
-        /* Any legacy prefix after a REX prefix nullifies its effect. */
-        rex_prefix = 0;
-    }
- done_prefixes:
-
-    if ( rex_prefix & REX_W )
-        op_bytes = 8;
-
-    /* Opcode byte(s). */
-    d = opcode_table[b];
-    if ( d == 0 && b == 0x0f )
-    {
-        /* Two-byte opcode. */
-        b = insn_fetch_type(uint8_t);
-        d = twobyte_table[b].desc;
-        switch ( b )
-        {
-        default:
-            opcode = b | MASK_INSR(0x0f, X86EMUL_OPC_EXT_MASK);
-            ext = ext_0f;
-            state->simd_size = twobyte_table[b].size;
-            break;
-        case 0x38:
-            b = insn_fetch_type(uint8_t);
-            opcode = b | MASK_INSR(0x0f38, X86EMUL_OPC_EXT_MASK);
-            ext = ext_0f38;
-            break;
-        case 0x3a:
-            b = insn_fetch_type(uint8_t);
-            opcode = b | MASK_INSR(0x0f3a, X86EMUL_OPC_EXT_MASK);
-            ext = ext_0f3a;
-            break;
-        }
-    }
-    else
-        opcode = b;
-
-    /* ModRM and SIB bytes. */
-    if ( d & ModRM )
-    {
-        modrm = insn_fetch_type(uint8_t);
-        modrm_mod = (modrm & 0xc0) >> 6;
-
-        if ( !ext && ((b & ~1) == 0xc4 || (b == 0x8f && (modrm & 0x18)) ||
-                      b == 0x62) )
-            switch ( def_ad_bytes )
-            {
-            default:
-                BUG(); /* Shouldn't be possible. */
-            case 2:
-                if ( state->regs->eflags & X86_EFLAGS_VM )
-                    break;
-                /* fall through */
-            case 4:
-                if ( modrm_mod != 3 || in_realmode(ctxt, ops) )
-                    break;
-                /* fall through */
-            case 8:
-                /* VEX / XOP / EVEX */
-                generate_exception_if(rex_prefix || vex.pfx, EXC_UD);
-                /*
-                 * With operand size override disallowed (see above), op_bytes
-                 * should not have changed from its default.
-                 */
-                ASSERT(op_bytes == def_op_bytes);
-
-                vex.raw[0] = modrm;
-                if ( b == 0xc5 )
-                {
-                    opcode = X86EMUL_OPC_VEX_;
-                    vex.raw[1] = modrm;
-                    vex.opcx = vex_0f;
-                    vex.x = 1;
-                    vex.b = 1;
-                    vex.w = 0;
-                }
-                else
-                {
-                    vex.raw[1] = insn_fetch_type(uint8_t);
-                    if ( mode_64bit() )
-                    {
-                        if ( !vex.b )
-                            rex_prefix |= REX_B;
-                        if ( !vex.x )
-                            rex_prefix |= REX_X;
-                        if ( vex.w )
-                        {
-                            rex_prefix |= REX_W;
-                            op_bytes = 8;
-                        }
-                    }
-                    else
-                    {
-                        /* Operand size fixed at 4 (no override via W bit). */
-                        op_bytes = 4;
-                        vex.b = 1;
-                    }
-                    switch ( b )
-                    {
-                    case 0x62:
-                        opcode = X86EMUL_OPC_EVEX_;
-                        evex.raw[0] = vex.raw[0];
-                        evex.raw[1] = vex.raw[1];
-                        evex.raw[2] = insn_fetch_type(uint8_t);
-
-                        generate_exception_if(!evex.mbs || evex.mbz, EXC_UD);
-                        generate_exception_if(!evex.opmsk && evex.z, EXC_UD);
-
-                        if ( !mode_64bit() )
-                            evex.R = 1;
-
-                        vex.opcx = evex.opcx;
-                        break;
-                    case 0xc4:
-                        opcode = X86EMUL_OPC_VEX_;
-                        break;
-                    default:
-                        opcode = 0;
-                        break;
-                    }
-                }
-                if ( !vex.r )
-                    rex_prefix |= REX_R;
-
-                ext = vex.opcx;
-                if ( b != 0x8f )
-                {
-                    b = insn_fetch_type(uint8_t);
-                    switch ( ext )
-                    {
-                    case vex_0f:
-                        opcode |= MASK_INSR(0x0f, X86EMUL_OPC_EXT_MASK);
-                        d = twobyte_table[b].desc;
-                        state->simd_size = twobyte_table[b].size;
-                        break;
-                    case vex_0f38:
-                        opcode |= MASK_INSR(0x0f38, X86EMUL_OPC_EXT_MASK);
-                        d = twobyte_table[0x38].desc;
-                        break;
-                    case vex_0f3a:
-                        opcode |= MASK_INSR(0x0f3a, X86EMUL_OPC_EXT_MASK);
-                        d = twobyte_table[0x3a].desc;
-                        break;
-                    default:
-                        rc = X86EMUL_UNRECOGNIZED;
-                        goto done;
-                    }
-                }
-                else if ( ext < ext_8f08 + ARRAY_SIZE(xop_table) )
-                {
-                    b = insn_fetch_type(uint8_t);
-                    opcode |= MASK_INSR(0x8f08 + ext - ext_8f08,
-                                        X86EMUL_OPC_EXT_MASK);
-                    d = array_access_nospec(xop_table, ext - ext_8f08);
-                }
-                else
-                {
-                    rc = X86EMUL_UNRECOGNIZED;
-                    goto done;
-                }
-
-                opcode |= b | MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK);
-
-                if ( !evex_encoded() )
-                    evex.lr = vex.l;
-
-                if ( !(d & ModRM) )
-                    break;
-
-                modrm = insn_fetch_type(uint8_t);
-                modrm_mod = (modrm & 0xc0) >> 6;
-
-                break;
-            }
-    }
-
-    if ( d & ModRM )
-    {
-        unsigned int disp8scale = 0;
-
-        d &= ~ModRM;
-#undef ModRM /* Only its aliases are valid to use from here on. */
-        modrm_reg = ((rex_prefix & 4) << 1) | ((modrm & 0x38) >> 3) |
-                    ((evex_encoded() && !evex.R) << 4);
-        modrm_rm  = modrm & 0x07;
-
-        /*
-         * Early operand adjustments.  Only ones affecting further processing
-         * prior to the x86_decode_*() calls really belong here.  That would
-         * normally be only addition/removal of SrcImm/SrcImm16, so their
-         * fetching can be taken care of by the common code below.
-         */
-        switch ( ext )
-        {
-        case ext_none:
-            switch ( b )
-            {
-            case 0xf6 ... 0xf7: /* Grp3 */
-                switch ( modrm_reg & 7 )
-                {
-                case 0 ... 1: /* test */
-                    d |= DstMem | SrcImm;
-                    break;
-                case 2: /* not */
-                case 3: /* neg */
-                    d |= DstMem;
-                    break;
-                case 4: /* mul */
-                case 5: /* imul */
-                case 6: /* div */
-                case 7: /* idiv */
-                    /*
-                     * DstEax isn't really precise for all cases; updates to
-                     * rDX get handled in an open coded manner.
-                     */
-                    d |= DstEax | SrcMem;
-                    break;
-                }
-                break;
-            }
-            break;
-
-        case ext_0f:
-            if ( evex_encoded() )
-                disp8scale = decode_disp8scale(twobyte_table[b].d8s, state);
-
-            switch ( b )
-            {
-            case 0x12: /* vmovsldup / vmovddup */
-                if ( evex.pfx == vex_f2 )
-                    disp8scale = evex.lr ? 4 + evex.lr : 3;
-                /* fall through */
-            case 0x16: /* vmovshdup */
-                if ( evex.pfx == vex_f3 )
-                    disp8scale = 4 + evex.lr;
-                break;
-
-            case 0x20: /* mov cr,reg */
-            case 0x21: /* mov dr,reg */
-            case 0x22: /* mov reg,cr */
-            case 0x23: /* mov reg,dr */
-                /*
-                 * Mov to/from cr/dr ignore the encoding of Mod, and behave as
-                 * if they were encoded as reg/reg instructions.  No further
-                 * disp/SIB bytes are fetched.
-                 */
-                modrm_mod = 3;
-                break;
-
-            case 0x78:
-            case 0x79:
-                if ( !evex.pfx )
-                    break;
-                /* vcvt{,t}ps2uqq need special casing */
-                if ( evex.pfx == vex_66 )
-                {
-                    if ( !evex.w && !evex.brs )
-                        --disp8scale;
-                    break;
-                }
-                /* vcvt{,t}s{s,d}2usi need special casing: fall through */
-            case 0x2c: /* vcvtts{s,d}2si need special casing */
-            case 0x2d: /* vcvts{s,d}2si need special casing */
-                if ( evex_encoded() )
-                    disp8scale = 2 + (evex.pfx & VEX_PREFIX_DOUBLE_MASK);
-                break;
-
-            case 0x5a: /* vcvtps2pd needs special casing */
-                if ( disp8scale && !evex.pfx && !evex.brs )
-                    --disp8scale;
-                break;
-
-            case 0x7a: /* vcvttps2qq and vcvtudq2pd need special casing */
-                if ( disp8scale && evex.pfx != vex_f2 && !evex.w && !evex.brs )
-                    --disp8scale;
-                break;
-
-            case 0x7b: /* vcvtp{s,d}2qq need special casing */
-                if ( disp8scale && evex.pfx == vex_66 )
-                    disp8scale = (evex.brs ? 2 : 3 + evex.lr) + evex.w;
-                break;
-
-            case 0x7e: /* vmovq xmm/m64,xmm needs special casing */
-                if ( disp8scale == 2 && evex.pfx == vex_f3 )
-                    disp8scale = 3;
-                break;
-
-            case 0xe6: /* vcvtdq2pd needs special casing */
-                if ( disp8scale && evex.pfx == vex_f3 && !evex.w && !evex.brs )
-                    --disp8scale;
-                break;
-            }
-            break;
-
-        case ext_0f38:
-            d = ext0f38_table[b].to_mem ? DstMem | SrcReg
-                                        : DstReg | SrcMem;
-            if ( ext0f38_table[b].two_op )
-                d |= TwoOp;
-            if ( ext0f38_table[b].vsib )
-                d |= vSIB;
-            state->simd_size = ext0f38_table[b].simd_size;
-            if ( evex_encoded() )
-            {
-                /*
-                 * VPMOVUS* are identical to VPMOVS* Disp8-scaling-wise, but
-                 * their attributes don't match those of the vex_66 encoded
-                 * insns with the same base opcodes.  Rather than adding new
-                 * columns to the table, handle this here for now.
-                 */
-                if ( evex.pfx != vex_f3 || (b & 0xf8) != 0x10 )
-                    disp8scale = decode_disp8scale(ext0f38_table[b].d8s, state);
-                else
-                {
-                    disp8scale = decode_disp8scale(ext0f38_table[b ^ 0x30].d8s,
-                                                   state);
-                    state->simd_size = simd_other;
-                }
-
-                switch ( b )
-                {
-                /* vp4dpwssd{,s} need special casing */
-                case 0x52: case 0x53:
-                /* v4f{,n}madd{p,s}s need special casing */
-                case 0x9a: case 0x9b: case 0xaa: case 0xab:
-                    if ( evex.pfx == vex_f2 )
-                    {
-                        disp8scale = 4;
-                        state->simd_size = simd_128;
-                    }
-                    break;
-                }
-            }
-            break;
-
-        case ext_0f3a:
-            /*
-             * Cannot update d here yet, as the immediate operand still
-             * needs fetching.
-             */
-            state->simd_size = ext0f3a_table[b].simd_size;
-            if ( evex_encoded() )
-                disp8scale = decode_disp8scale(ext0f3a_table[b].d8s, state);
-            break;
-
-        case ext_8f09:
-            if ( ext8f09_table[b].two_op )
-                d |= TwoOp;
-            state->simd_size = ext8f09_table[b].simd_size;
-            break;
-
-        case ext_8f08:
-        case ext_8f0a:
-            /*
-             * Cannot update d here yet, as the immediate operand still
-             * needs fetching.
-             */
-            break;
-
-        default:
-            ASSERT_UNREACHABLE();
-            return X86EMUL_UNIMPLEMENTED;
-        }
-
-        if ( modrm_mod == 3 )
-        {
-            generate_exception_if(d & vSIB, EXC_UD);
-            modrm_rm |= ((rex_prefix & 1) << 3) |
-                        ((evex_encoded() && !evex.x) << 4);
-            ea.type = OP_REG;
-        }
-        else if ( ad_bytes == 2 )
-        {
-            /* 16-bit ModR/M decode. */
-            generate_exception_if(d & vSIB, EXC_UD);
-            ea.type = OP_MEM;
-            switch ( modrm_rm )
-            {
-            case 0:
-                ea.mem.off = state->regs->bx + state->regs->si;
-                break;
-            case 1:
-                ea.mem.off = state->regs->bx + state->regs->di;
-                break;
-            case 2:
-                ea.mem.seg = x86_seg_ss;
-                ea.mem.off = state->regs->bp + state->regs->si;
-                break;
-            case 3:
-                ea.mem.seg = x86_seg_ss;
-                ea.mem.off = state->regs->bp + state->regs->di;
-                break;
-            case 4:
-                ea.mem.off = state->regs->si;
-                break;
-            case 5:
-                ea.mem.off = state->regs->di;
-                break;
-            case 6:
-                if ( modrm_mod == 0 )
-                    break;
-                ea.mem.seg = x86_seg_ss;
-                ea.mem.off = state->regs->bp;
-                break;
-            case 7:
-                ea.mem.off = state->regs->bx;
-                break;
-            }
-            switch ( modrm_mod )
-            {
-            case 0:
-                if ( modrm_rm == 6 )
-                    ea.mem.off = insn_fetch_type(int16_t);
-                break;
-            case 1:
-                ea.mem.off += insn_fetch_type(int8_t) * (1 << disp8scale);
-                break;
-            case 2:
-                ea.mem.off += insn_fetch_type(int16_t);
-                break;
-            }
-        }
-        else
-        {
-            /* 32/64-bit ModR/M decode. */
-            ea.type = OP_MEM;
-            if ( modrm_rm == 4 )
-            {
-                uint8_t sib = insn_fetch_type(uint8_t);
-                uint8_t sib_base = (sib & 7) | ((rex_prefix << 3) & 8);
-
-                state->sib_index = ((sib >> 3) & 7) | ((rex_prefix << 2) & 8);
-                state->sib_scale = (sib >> 6) & 3;
-                if ( unlikely(d & vSIB) )
-                    state->sib_index |= (mode_64bit() && evex_encoded() &&
-                                         !evex.RX) << 4;
-                else if ( state->sib_index != 4 )
-                {
-                    ea.mem.off = *decode_gpr(state->regs, state->sib_index);
-                    ea.mem.off <<= state->sib_scale;
-                }
-                if ( (modrm_mod == 0) && ((sib_base & 7) == 5) )
-                    ea.mem.off += insn_fetch_type(int32_t);
-                else if ( sib_base == 4 )
-                {
-                    ea.mem.seg = x86_seg_ss;
-                    ea.mem.off += state->regs->r(sp);
-                    if ( !ext && (b == 0x8f) )
-                        /* POP computes its EA post increment. */
-                        ea.mem.off += ((mode_64bit() && (op_bytes == 4))
-                                       ? 8 : op_bytes);
-                }
-                else if ( sib_base == 5 )
-                {
-                    ea.mem.seg = x86_seg_ss;
-                    ea.mem.off += state->regs->r(bp);
-                }
-                else
-                    ea.mem.off += *decode_gpr(state->regs, sib_base);
-            }
-            else
-            {
-                generate_exception_if(d & vSIB, EXC_UD);
-                modrm_rm |= (rex_prefix & 1) << 3;
-                ea.mem.off = *decode_gpr(state->regs, modrm_rm);
-                if ( (modrm_rm == 5) && (modrm_mod != 0) )
-                    ea.mem.seg = x86_seg_ss;
-            }
-            switch ( modrm_mod )
-            {
-            case 0:
-                if ( (modrm_rm & 7) != 5 )
-                    break;
-                ea.mem.off = insn_fetch_type(int32_t);
-                pc_rel = mode_64bit();
-                break;
-            case 1:
-                ea.mem.off += insn_fetch_type(int8_t) * (1 << disp8scale);
-                break;
-            case 2:
-                ea.mem.off += insn_fetch_type(int32_t);
-                break;
-            }
-        }
-    }
-    else
-    {
-        modrm_mod = 0xff;
-        modrm_reg = modrm_rm = modrm = 0;
-    }
-
-    if ( override_seg != x86_seg_none )
-        ea.mem.seg = override_seg;
-
-    /* Fetch the immediate operand, if present. */
-    switch ( d & SrcMask )
-    {
-        unsigned int bytes;
-
-    case SrcImm:
-        if ( !(d & ByteOp) )
-        {
-            if ( mode_64bit() && !amd_like(ctxt) &&
-                 ((ext == ext_none && (b | 1) == 0xe9) /* call / jmp */ ||
-                  (ext == ext_0f && (b | 0xf) == 0x8f) /* jcc */ ) )
-                op_bytes = 4;
-            bytes = op_bytes != 8 ? op_bytes : 4;
-        }
-        else
-        {
-    case SrcImmByte:
-            bytes = 1;
-        }
-        /* NB. Immediates are sign-extended as necessary. */
-        switch ( bytes )
-        {
-        case 1: imm1 = insn_fetch_type(int8_t);  break;
-        case 2: imm1 = insn_fetch_type(int16_t); break;
-        case 4: imm1 = insn_fetch_type(int32_t); break;
-        }
-        break;
-    case SrcImm16:
-        imm1 = insn_fetch_type(uint16_t);
-        break;
-    }
-
-    ctxt->opcode = opcode;
-    state->desc = d;
-
-    switch ( ext )
-    {
-    case ext_none:
-        rc = x86_decode_onebyte(state, ctxt, ops);
-        break;
-
-    case ext_0f:
-        rc = x86_decode_twobyte(state, ctxt, ops);
-        break;
-
-    case ext_0f38:
-        rc = x86_decode_0f38(state, ctxt, ops);
-        break;
-
-    case ext_0f3a:
-        d = ext0f3a_table[b].to_mem ? DstMem | SrcReg : DstReg | SrcMem;
-        if ( ext0f3a_table[b].two_op )
-            d |= TwoOp;
-        else if ( ext0f3a_table[b].four_op && !mode_64bit() && vex.opcx )
-            imm1 &= 0x7f;
-        state->desc = d;
-        rc = x86_decode_0f3a(state, ctxt, ops);
-        break;
-
-    case ext_8f08:
-        d = DstReg | SrcMem;
-        if ( ext8f08_table[b].two_op )
-            d |= TwoOp;
-        else if ( ext8f08_table[b].four_op && !mode_64bit() )
-            imm1 &= 0x7f;
-        state->desc = d;
-        state->simd_size = ext8f08_table[b].simd_size;
-        break;
-
-    case ext_8f09:
-    case ext_8f0a:
-        break;
-
-    default:
-        ASSERT_UNREACHABLE();
-        return X86EMUL_UNIMPLEMENTED;
-    }
-
-    if ( ea.type == OP_MEM )
-    {
-        if ( pc_rel )
-            ea.mem.off += state->ip;
-
-        ea.mem.off = truncate_ea(ea.mem.off);
-    }
-
-    /*
-     * Simple op_bytes calculations.  More complicated cases produce 0
-     * and are further handled during execute.
-     */
-    switch ( state->simd_size )
-    {
-    case simd_none:
-        /*
-         * When prefix 66 has a meaning different from operand-size override,
-         * operand size defaults to 4 and can't be overridden to 2.
-         */
-        if ( op_bytes == 2 &&
-             (ctxt->opcode & X86EMUL_OPC_PFX_MASK) == X86EMUL_OPC_66(0, 0) )
-            op_bytes = 4;
-        break;
-
-#ifndef X86EMUL_NO_SIMD
-    case simd_packed_int:
-        switch ( vex.pfx )
-        {
-        case vex_none:
-            if ( !vex.opcx )
-            {
-                op_bytes = 8;
-                break;
-            }
-            /* fall through */
-        case vex_66:
-            op_bytes = 16 << evex.lr;
-            break;
-        default:
-            op_bytes = 0;
-            break;
-        }
-        break;
-
-    case simd_single_fp:
-        if ( vex.pfx & VEX_PREFIX_DOUBLE_MASK )
-        {
-            op_bytes = 0;
-            break;
-    case simd_packed_fp:
-            if ( vex.pfx & VEX_PREFIX_SCALAR_MASK )
-            {
-                op_bytes = 0;
-                break;
-            }
-        }
-        /* fall through */
-    case simd_any_fp:
-        switch ( vex.pfx )
-        {
-        default:
-            op_bytes = 16 << evex.lr;
-            break;
-        case vex_f3:
-            generate_exception_if(evex_encoded() && evex.w, EXC_UD);
-            op_bytes = 4;
-            break;
-        case vex_f2:
-            generate_exception_if(evex_encoded() && !evex.w, EXC_UD);
-            op_bytes = 8;
-            break;
-        }
-        break;
-
-    case simd_scalar_opc:
-        op_bytes = 4 << (ctxt->opcode & 1);
-        break;
-
-    case simd_scalar_vexw:
-        op_bytes = 4 << vex.w;
-        break;
-
-    case simd_128:
-        /* The special cases here are MMX shift insns. */
-        op_bytes = vex.opcx || vex.pfx ? 16 : 8;
-        break;
-
-    case simd_256:
-        op_bytes = 32;
-        break;
-#endif /* !X86EMUL_NO_SIMD */
-
-    default:
-        op_bytes = 0;
-        break;
-    }
-
- done:
-    return rc;
-}
-
-/* No insn fetching past this point. */
-#undef insn_fetch_bytes
-#undef insn_fetch_type
-
 /* Undo DEBUG wrapper. */
 #undef x86_emulate
 
@@ -3000,7 +1281,7 @@ x86_emulate(
                            (_regs.eflags & X86_EFLAGS_VIP)),
                           EXC_GP, 0);
 
-    rc = x86_decode(&state, ctxt, ops);
+    rc = x86emul_decode(&state, ctxt, ops);
     if ( rc != X86EMUL_OKAY )
         return rc;
 
@@ -10497,46 +8778,6 @@ int x86_emulate_wrapper(
 }
 #endif
 
-struct x86_emulate_state *
-x86_decode_insn(
-    struct x86_emulate_ctxt *ctxt,
-    int (*insn_fetch)(
-        enum x86_segment seg, unsigned long offset,
-        void *p_data, unsigned int bytes,
-        struct x86_emulate_ctxt *ctxt))
-{
-    static DEFINE_PER_CPU(struct x86_emulate_state, state);
-    struct x86_emulate_state *state = &this_cpu(state);
-    const struct x86_emulate_ops ops = {
-        .insn_fetch = insn_fetch,
-        .read = x86emul_unhandleable_rw,
-    };
-    int rc;
-
-    init_context(ctxt);
-
-    rc = x86_decode(state, ctxt, &ops);
-    if ( unlikely(rc != X86EMUL_OKAY) )
-        return ERR_PTR(-rc);
-
-#if defined(__XEN__) && !defined(NDEBUG)
-    /*
-     * While we avoid memory allocation (by use of per-CPU data) above,
-     * nevertheless make sure callers properly release the state structure
-     * for forward compatibility.
-     */
-    if ( state->caller )
-    {
-        printk(XENLOG_ERR "Unreleased emulation state acquired by %ps\n",
-               state->caller);
-        dump_execution_state();
-    }
-    state->caller = __builtin_return_address(0);
-#endif
-
-    return state;
-}
-
 static inline void check_state(const struct x86_emulate_state *state)
 {
 #if defined(__XEN__) && !defined(NDEBUG)