From: Andrew Cooper
To: Xen-devel
Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné, Oleksii Kurochko
Subject: [PATCH v4 2/7] x86/xstate: Cross-check dynamic XSTATE sizes at boot
Date: Mon, 17 Jun 2024 18:39:16 +0100
Message-Id: <20240617173921.1755439-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
References: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Right now, xstate_ctxt_size() performs a cross-check of size with CPUID for
every call.  This is expensive, being used for domain create/migrate, as
well as to service certain guest CPUID instructions.

Instead, arrange to check the sizes once at boot.  See the code comments for
details.  Right now, it just checks hardware against the algorithm's
expectations.  Later patches will add further cross-checking.

Introduce more X86_XCR0_* and X86_XSS_* constants and CPUID bits.  This is
to maximise coverage in the sanity check, even if we don't expect to
use/virtualise some of these features any time soon.  Leave HDC and HWP
alone for now; we don't have CPUID bits for them stored nicely.

Only perform the cross-checks when SELF_TESTS are active.  It's only
developers or new hardware that are liable to trip these checks, and Xen at
least tracks "maximum value ever seen in xcr0" for the lifetime of the VM,
which we don't want to be tickling in the general case.
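For context on the "dynamic" sizes being cross-checked: CPUID leaf 0xd
sub-leaf 0 reports in EBX the uncompressed XSAVE area size for the components
currently enabled in XCR0, and sub-leaf 1 reports in EBX the compacted
(XSAVES) size for XCR0 | MSR_XSS.  The snippet below is an illustrative
user-space sketch only, not part of this patch; it uses the compiler's
<cpuid.h>/<immintrin.h> helpers rather than Xen's cpuid_count_ebx()/get_xcr0().

/* Illustrative only: read the dynamic XSTATE sizes described above.
 * Build with e.g. gcc -mxsave (required for _xgetbv()). */
#include <stdio.h>
#include <stdint.h>
#include <cpuid.h>
#include <immintrin.h>

int main(void)
{
    uint32_t eax, ebx, ecx, edx;
    uint64_t xcr0 = _xgetbv(0); /* Components currently enabled in XCR0. */

    __get_cpuid_count(0xd, 0, &eax, &ebx, &ecx, &edx);
    printf("XCR0 %#llx: uncompressed size %u bytes (max for all features %u)\n",
           (unsigned long long)xcr0, ebx, ecx);

    __get_cpuid_count(0xd, 1, &eax, &ebx, &ecx, &edx);
    printf("compacted size for XCR0 | XSS: %u bytes\n", ebx);

    return 0;
}

The patch below reads the same two values from inside the hypervisor via
cpuid_count_ebx(0xd, 0) and cpuid_count_ebx(0xd, 1).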
Signed-off-by: Andrew Cooper
---
CC: Jan Beulich
CC: Roger Pau Monné
CC: Oleksii Kurochko

v3:
 * New
v4:
 * Rebase over CONFIG_SELF_TESTS
 * Swap one BUG_ON() for a WARN()

On Sapphire Rapids with the whole series including diagnostics, we get this
pattern:

(XEN) *** check_new_xstate(, 0x00000003)
(XEN) *** check_new_xstate(, 0x00000004)
(XEN) *** check_new_xstate(, 0x000000e0)
(XEN) *** check_new_xstate(, 0x00000200)
(XEN) *** check_new_xstate(, 0x00060000)
(XEN) *** check_new_xstate(, 0x00000100)
(XEN) *** check_new_xstate(, 0x00000400)
(XEN) *** check_new_xstate(, 0x00000800)
(XEN) *** check_new_xstate(, 0x00001000)
(XEN) *** check_new_xstate(, 0x00004000)
(XEN) *** check_new_xstate(, 0x00008000)

and on Genoa, this pattern:

(XEN) *** check_new_xstate(, 0x00000003)
(XEN) *** check_new_xstate(, 0x00000004)
(XEN) *** check_new_xstate(, 0x000000e0)
(XEN) *** check_new_xstate(, 0x00000200)
(XEN) *** check_new_xstate(, 0x00000800)
(XEN) *** check_new_xstate(, 0x00001000)
---
 xen/arch/x86/include/asm/x86-defns.h        |  25 +++-
 xen/arch/x86/xstate.c                       | 158 ++++++++++++++++++++
 xen/include/public/arch-x86/cpufeatureset.h |   3 +
 3 files changed, 185 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/include/asm/x86-defns.h b/xen/arch/x86/include/asm/x86-defns.h
index 48d7a3b7af45..d7602ab225c4 100644
--- a/xen/arch/x86/include/asm/x86-defns.h
+++ b/xen/arch/x86/include/asm/x86-defns.h
@@ -77,7 +77,7 @@
 #define X86_CR4_PKS        0x01000000 /* Protection Key Supervisor */
 
 /*
- * XSTATE component flags in XCR0
+ * XSTATE component flags in XCR0 | MSR_XSS
  */
 #define X86_XCR0_FP_POS           0
 #define X86_XCR0_FP               (1ULL << X86_XCR0_FP_POS)
@@ -95,11 +95,34 @@
 #define X86_XCR0_ZMM              (1ULL << X86_XCR0_ZMM_POS)
 #define X86_XCR0_HI_ZMM_POS       7
 #define X86_XCR0_HI_ZMM           (1ULL << X86_XCR0_HI_ZMM_POS)
+#define X86_XSS_PROC_TRACE        (_AC(1, ULL) <<  8)
 #define X86_XCR0_PKRU_POS         9
 #define X86_XCR0_PKRU             (1ULL << X86_XCR0_PKRU_POS)
+#define X86_XSS_PASID             (_AC(1, ULL) << 10)
+#define X86_XSS_CET_U             (_AC(1, ULL) << 11)
+#define X86_XSS_CET_S             (_AC(1, ULL) << 12)
+#define X86_XSS_HDC               (_AC(1, ULL) << 13)
+#define X86_XSS_UINTR             (_AC(1, ULL) << 14)
+#define X86_XSS_LBR               (_AC(1, ULL) << 15)
+#define X86_XSS_HWP               (_AC(1, ULL) << 16)
+#define X86_XCR0_TILE_CFG         (_AC(1, ULL) << 17)
+#define X86_XCR0_TILE_DATA        (_AC(1, ULL) << 18)
 #define X86_XCR0_LWP_POS          62
 #define X86_XCR0_LWP              (1ULL << X86_XCR0_LWP_POS)
 
+#define X86_XCR0_STATES                                                 \
+    (X86_XCR0_FP | X86_XCR0_SSE | X86_XCR0_YMM | X86_XCR0_BNDREGS |     \
+     X86_XCR0_BNDCSR | X86_XCR0_OPMASK | X86_XCR0_ZMM |                 \
+     X86_XCR0_HI_ZMM | X86_XCR0_PKRU | X86_XCR0_TILE_CFG |              \
+     X86_XCR0_TILE_DATA |                                               \
+     X86_XCR0_LWP)
+
+#define X86_XSS_STATES                                                  \
+    (X86_XSS_PROC_TRACE | X86_XSS_PASID | X86_XSS_CET_U |               \
+     X86_XSS_CET_S | X86_XSS_HDC | X86_XSS_UINTR | X86_XSS_LBR |        \
+     X86_XSS_HWP |                                                      \
+     0)
+
 /*
  * Debug status flags in DR6.
  *
diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
index 75788147966a..650206d9d2b6 100644
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -604,9 +604,164 @@ static bool valid_xcr0(uint64_t xcr0)
     if ( !(xcr0 & X86_XCR0_BNDREGS) != !(xcr0 & X86_XCR0_BNDCSR) )
         return false;
 
+    /* TILECFG and TILEDATA must be the same. */
+    if ( !(xcr0 & X86_XCR0_TILE_CFG) != !(xcr0 & X86_XCR0_TILE_DATA) )
+        return false;
+
     return true;
 }
 
+struct xcheck_state {
+    uint64_t states;
+    uint32_t uncomp_size;
+    uint32_t comp_size;
+};
+
+static void __init check_new_xstate(struct xcheck_state *s, uint64_t new)
+{
+    uint32_t hw_size;
+
+    BUILD_BUG_ON(X86_XCR0_STATES & X86_XSS_STATES);
+
+    BUG_ON(s->states & new); /* States only increase. */
+    BUG_ON(!valid_xcr0(s->states | new)); /* Xen thinks it's a good value. */
+    BUG_ON(new & ~(X86_XCR0_STATES | X86_XSS_STATES)); /* Known state. */
+    BUG_ON((new & X86_XCR0_STATES) &&
+           (new & X86_XSS_STATES)); /* User or supervisor, not both. */
+
+    s->states |= new;
+    if ( new & X86_XCR0_STATES )
+    {
+        if ( !set_xcr0(s->states & X86_XCR0_STATES) )
+            BUG();
+    }
+    else
+        set_msr_xss(s->states & X86_XSS_STATES);
+
+    /*
+     * Check the uncompressed size.  Some XSTATEs are out-of-order and fill in
+     * prior holes in the state area, so we check that the size doesn't
+     * decrease.
+     */
+    hw_size = cpuid_count_ebx(0xd, 0);
+
+    if ( hw_size < s->uncomp_size )
+        panic("XSTATE 0x%016"PRIx64", new bits {%63pbl}, uncompressed hw size %#x < prev size %#x\n",
+              s->states, &new, hw_size, s->uncomp_size);
+
+    s->uncomp_size = hw_size;
+
+    /*
+     * Check the compressed size, if available.  All components strictly
+     * appear in index order.  In principle there are no holes, but some
+     * components have their base address 64-byte aligned for efficiency
+     * reasons (e.g. AMX-TILE) and there are other components small enough to
+     * fit in the gap (e.g. PKRU) without increasing the overall length.
+     */
+    hw_size = cpuid_count_ebx(0xd, 1);
+
+    if ( cpu_has_xsavec )
+    {
+        if ( hw_size < s->comp_size )
+            panic("XSTATE 0x%016"PRIx64", new bits {%63pbl}, compressed hw size %#x < prev size %#x\n",
+                  s->states, &new, hw_size, s->comp_size);
+
+        s->comp_size = hw_size;
+    }
+    else if ( hw_size ) /* Compressed size reported, but no XSAVEC ? */
+    {
+        static bool once;
+
+        if ( !once )
+        {
+            WARN();
+            once = true;
+        }
+    }
+}
+
+/*
+ * The {un,}compressed XSTATE sizes are reported by dynamic CPUID values, based
+ * on the current %XCR0 and MSR_XSS values.  The exact layout is also feature
+ * and vendor specific.  Cross-check Xen's understanding against real hardware
+ * on boot.
+ *
+ * Testing every combination is prohibitive, so we use a partial approach.
+ * Starting with nothing active, we add new XSTATEs and check that the CPUID
+ * dynamic values never decrease.
+ */
+static void __init noinline xstate_check_sizes(void)
+{
+    uint64_t old_xcr0 = get_xcr0();
+    uint64_t old_xss = get_msr_xss();
+    struct xcheck_state s = {};
+
+    /*
+     * User XSTATEs, increasing by index.
+     *
+     * Chronologically, Intel and AMD had identical layouts for AVX (YMM).
+     * AMD introduced LWP in Fam15h, following immediately on from YMM.  Intel
+     * left an LWP-shaped hole when adding MPX (BND{CSR,REGS}) in Skylake.
+     * AMD removed LWP in Fam17h, putting PKRU in the same space, breaking
+     * layout compatibility with Intel and having a knock-on effect on all
+     * subsequent states.
+     */
+    check_new_xstate(&s, X86_XCR0_SSE | X86_XCR0_FP);
+
+    if ( cpu_has_avx )
+        check_new_xstate(&s, X86_XCR0_YMM);
+
+    if ( cpu_has_mpx )
+        check_new_xstate(&s, X86_XCR0_BNDCSR | X86_XCR0_BNDREGS);
+
+    if ( cpu_has_avx512f )
+        check_new_xstate(&s, X86_XCR0_HI_ZMM | X86_XCR0_ZMM | X86_XCR0_OPMASK);
+
+    if ( cpu_has_pku )
+        check_new_xstate(&s, X86_XCR0_PKRU);
+
+    if ( boot_cpu_has(X86_FEATURE_AMX_TILE) )
+        check_new_xstate(&s, X86_XCR0_TILE_DATA | X86_XCR0_TILE_CFG);
+
+    if ( boot_cpu_has(X86_FEATURE_LWP) )
+        check_new_xstate(&s, X86_XCR0_LWP);
+
+    /*
+     * Supervisor XSTATEs, increasing by index.
+     *
+     * Intel Broadwell has Processor Trace but no XSAVES.  There doesn't
+     * appear to have been a new enumeration when X86_XSS_PROC_TRACE was
+     * introduced in Skylake.
+     */
+    if ( cpu_has_xsaves )
+    {
+        if ( cpu_has_proc_trace )
+            check_new_xstate(&s, X86_XSS_PROC_TRACE);
+
+        if ( boot_cpu_has(X86_FEATURE_ENQCMD) )
+            check_new_xstate(&s, X86_XSS_PASID);
+
+        if ( boot_cpu_has(X86_FEATURE_CET_SS) ||
+             boot_cpu_has(X86_FEATURE_CET_IBT) )
+        {
+            check_new_xstate(&s, X86_XSS_CET_U);
+            check_new_xstate(&s, X86_XSS_CET_S);
+        }
+
+        if ( boot_cpu_has(X86_FEATURE_UINTR) )
+            check_new_xstate(&s, X86_XSS_UINTR);
+
+        if ( boot_cpu_has(X86_FEATURE_ARCH_LBR) )
+            check_new_xstate(&s, X86_XSS_LBR);
+    }
+
+    /* Restore old state now the test is done. */
+    if ( !set_xcr0(old_xcr0) )
+        BUG();
+    if ( cpu_has_xsaves )
+        set_msr_xss(old_xss);
+}
+
 /* Collect the information of processor's extended state */
 void xstate_init(struct cpuinfo_x86 *c)
 {
@@ -683,6 +838,9 @@ void xstate_init(struct cpuinfo_x86 *c)
 
     if ( setup_xstate_features(bsp) && bsp )
         BUG();
+
+    if ( IS_ENABLED(CONFIG_SELF_TESTS) && bsp )
+        xstate_check_sizes();
 }
 
 int validate_xstate(const struct domain *d, uint64_t xcr0, uint64_t xcr0_accum,
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 6627453e3985..d9eba5e9a714 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -266,6 +266,7 @@ XEN_CPUFEATURE(IBPB_RET,      8*32+30) /*A  IBPB clears RSB/RAS too. */
 XEN_CPUFEATURE(AVX512_4VNNIW, 9*32+ 2) /*A  AVX512 Neural Network Instructions */
 XEN_CPUFEATURE(AVX512_4FMAPS, 9*32+ 3) /*A  AVX512 Multiply Accumulation Single Precision */
 XEN_CPUFEATURE(FSRM,          9*32+ 4) /*A  Fast Short REP MOVS */
+XEN_CPUFEATURE(UINTR,         9*32+ 5) /*   User-mode Interrupts */
 XEN_CPUFEATURE(AVX512_VP2INTERSECT, 9*32+8) /*a  VP2INTERSECT{D,Q} insns */
 XEN_CPUFEATURE(SRBDS_CTRL,    9*32+ 9) /*   MSR_MCU_OPT_CTRL and RNGDS_MITG_DIS. */
 XEN_CPUFEATURE(MD_CLEAR,      9*32+10) /*!A| VERW clears microarchitectural buffers */
@@ -274,8 +275,10 @@ XEN_CPUFEATURE(TSX_FORCE_ABORT, 9*32+13) /*   MSR_TSX_FORCE_ABORT.RTM_ABORT */
 XEN_CPUFEATURE(SERIALIZE,     9*32+14) /*A  SERIALIZE insn */
 XEN_CPUFEATURE(HYBRID,        9*32+15) /*   Heterogeneous platform */
 XEN_CPUFEATURE(TSXLDTRK,      9*32+16) /*a  TSX load tracking suspend/resume insns */
+XEN_CPUFEATURE(ARCH_LBR,      9*32+19) /*   Architectural Last Branch Record */
 XEN_CPUFEATURE(CET_IBT,       9*32+20) /*   CET - Indirect Branch Tracking */
 XEN_CPUFEATURE(AVX512_FP16,   9*32+23) /*A  AVX512 FP16 instructions */
+XEN_CPUFEATURE(AMX_TILE,      9*32+24) /*   AMX Tile architecture */
 XEN_CPUFEATURE(IBRSB,         9*32+26) /*A  IBRS and IBPB support (used by Intel) */
 XEN_CPUFEATURE(STIBP,         9*32+27) /*A  STIBP */
 XEN_CPUFEATURE(L1D_FLUSH,     9*32+28) /*S  MSR_FLUSH_CMD and L1D flush. */
-- 
2.39.2
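The compacted-format rules spelled out in check_new_xstate()'s comment
(components appear in index order, some aligned to 64 bytes) also allow an
expected compacted size to be computed from CPUID leaf 0xd sub-leaves 2-62,
where EAX gives each component's size and ECX bit 1 its alignment
requirement.  The sketch below is illustrative only and not part of this
series; the helper name and the locally defined 576-byte
legacy-region-plus-header constant are assumptions rather than xen.git code.

/* Illustrative sketch (not Xen code): expected compacted XSAVE size for a
 * set of enabled components, following the layout rules described above. */
#include <stdint.h>
#include <cpuid.h>

/* 512-byte legacy FXSAVE region + 64-byte XSAVE header. */
#define LEGACY_PLUS_HEADER 576u

static uint32_t expected_compacted_size(uint64_t states)
{
    uint32_t size = LEGACY_PLUS_HEADER;
    unsigned int i;

    /* Components 0 (x87) and 1 (SSE) live in the legacy region. */
    for ( i = 2; i < 63; i++ )
    {
        uint32_t eax, ebx, ecx, edx;

        if ( !(states & (1ULL << i)) )
            continue;

        __get_cpuid_count(0xd, i, &eax, &ebx, &ecx, &edx);

        if ( ecx & 2 )            /* Component aligned on a 64-byte boundary? */
            size = (size + 63) & ~63u;

        size += eax;              /* EAX = size of this component's state. */
    }

    return size;
}

Comparing such a computed value against the hardware-reported CPUID.0xd[1].EBX
is the kind of further cross-checking the commit message defers to later
patches in the series.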