From: "Chang S. Bae" <chang.seok.bae@intel.com>
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, hpa@zytor.com, avagin@gmail.com,
    seanjc@google.com, chang.seok.bae@intel.com
Subject: [PATCH v2 1/4] x86/fpu: Fix the MXCSR state reshuffling between userspace and kernel buffers
Date: Thu, 22 Sep 2022 13:00:31 -0700
Message-Id: <20220922200034.23759-2-chang.seok.bae@intel.com>
In-Reply-To: <20220922200034.23759-1-chang.seok.bae@intel.com>
References: <20220922200034.23759-1-chang.seok.bae@intel.com>

== Hardware Background ==

The MXCSR state, as part of the SSE component, is treated differently by
XSAVE*/XRSTOR* depending on the XSAVE format:

- In the non-compacted format, the SSE bit in XSTATE_BV pertains to the
  XMM registers. XRSTOR restores the MXCSR state without referencing
  XSTATE_BV.

- In the compacted format, XSAVE* sets the SSE bit in XSTATE_BV when
  MXCSR is not in its init state. XRSTOR* then restores the MXCSR state
  only when the SSE bit is set in XSTATE_BV.

== Regression ==

The XSTATE copy routines between userspace and kernel buffers used to be
separate for the two XSAVE formats. Commit 43be46e89698 ("x86/fpu:
Sanitize xstateregs_set()") merged them, which introduced a regression
on XSAVES-less systems. The merged code is based on the original
conversion code from commit 91c3dba7dbc1 ("x86/fpu/xstate: Fix PTRACE
frames for XSAVES"), which has these oversights:

- It mistreats MXCSR as part of the FP state instead of the SSE
  component.

- It always takes the SSE bit in XSTATE_BV to cover all the SSE states.

== Correction ==

Update the XSTATE conversion code:

- Refactor the copy routine for legacy states.
  Treat MXCSR as part of SSE.

- When copying MXCSR, reference XSTATE_BV only with the compacted
  format.

- Also, flip the SSE bit in XSTATE_BV according to the format.

Reported-by: Andrei Vagin <avagin@gmail.com>
Fixes: 91c3dba7dbc1 ("x86/fpu/xstate: Fix PTRACE frames for XSAVES")
Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
Link: https://lore.kernel.org/lkml/CANaxB-wkcNKWjyNGFuMn6f6H2DQSGwwQjUgg1eATdUgmM-Kg+A@mail.gmail.com/
---
 arch/x86/kernel/fpu/xstate.c | 70 +++++++++++++++++++++++++-----------
 1 file changed, 49 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index c8340156bfd2..d7676cfc32eb 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -1064,6 +1064,7 @@ void __copy_xstate_to_uabi_buf(struct membuf to, struct fpstate *fpstate,
 			       u32 pkru_val, enum xstate_copy_mode copy_mode)
 {
 	const unsigned int off_mxcsr = offsetof(struct fxregs_state, mxcsr);
+	bool compacted = cpu_feature_enabled(X86_FEATURE_XCOMPACTED);
 	struct xregs_state *xinit = &init_fpstate.regs.xsave;
 	struct xregs_state *xsave = &fpstate->regs.xsave;
 	struct xstate_header header;
@@ -1093,8 +1094,13 @@ void __copy_xstate_to_uabi_buf(struct membuf to, struct fpstate *fpstate,
 	copy_feature(header.xfeatures & XFEATURE_MASK_FP, &to, &xsave->i387,
 		     &xinit->i387, off_mxcsr);
 
-	/* Copy MXCSR when SSE or YMM are set in the feature mask */
-	copy_feature(header.xfeatures & (XFEATURE_MASK_SSE | XFEATURE_MASK_YMM),
+	/*
+	 * Copy MXCSR depending on the XSAVE format. If compacted,
+	 * reference the feature mask. Otherwise, check whether any of
+	 * the related features is valid.
+	 */
+	copy_feature(compacted ? header.xfeatures & XFEATURE_MASK_SSE :
+		     fpstate->user_xfeatures & (XFEATURE_MASK_SSE | XFEATURE_MASK_YMM),
 		     &to, &xsave->i387.mxcsr, &xinit->i387.mxcsr,
 		     MXCSR_AND_FLAGS_SIZE);
 
@@ -1199,6 +1205,11 @@ static int copy_from_buffer(void *dst, unsigned int offset, unsigned int size,
 static int copy_uabi_to_xstate(struct fpstate *fpstate, const void *kbuf,
 			       const void __user *ubuf)
 {
+	const unsigned int off_stspace = offsetof(struct fxregs_state, st_space);
+	const unsigned int off_xmm = offsetof(struct fxregs_state, xmm_space);
+	const unsigned int off_mxcsr = offsetof(struct fxregs_state, mxcsr);
+	bool compacted = cpu_feature_enabled(X86_FEATURE_XCOMPACTED);
+	struct fxregs_state *fxsave = &fpstate->regs.fxsave;
 	struct xregs_state *xsave = &fpstate->regs.xsave;
 	unsigned int offset, size;
 	struct xstate_header hdr;
@@ -1212,38 +1223,48 @@ static int copy_uabi_to_xstate(struct fpstate *fpstate, const void *kbuf,
 	if (validate_user_xstate_header(&hdr, fpstate))
 		return -EINVAL;
 
-	/* Validate MXCSR when any of the related features is in use */
-	mask = XFEATURE_MASK_FP | XFEATURE_MASK_SSE | XFEATURE_MASK_YMM;
-	if (hdr.xfeatures & mask) {
+	if (hdr.xfeatures & XFEATURE_MASK_FP) {
+		if (copy_from_buffer(fxsave, 0, off_mxcsr, kbuf, ubuf))
+			return -EINVAL;
+		if (copy_from_buffer(fxsave->st_space, off_stspace,
+				     sizeof(fxsave->st_space), kbuf, ubuf))
+			return -EINVAL;
+	}
+
+	/* Validate MXCSR when any of the related features is valid. */
+	mask = XFEATURE_MASK_SSE | XFEATURE_MASK_YMM;
+	if (fpstate->user_xfeatures & mask) {
 		u32 mxcsr[2];
 
-		offset = offsetof(struct fxregs_state, mxcsr);
-		if (copy_from_buffer(mxcsr, offset, sizeof(mxcsr), kbuf, ubuf))
+		if (copy_from_buffer(mxcsr, off_mxcsr, sizeof(mxcsr), kbuf, ubuf))
 			return -EFAULT;
 
 		/* Reserved bits in MXCSR must be zero. */
 		if (mxcsr[0] & ~mxcsr_feature_mask)
 			return -EINVAL;
 
-		/* SSE and YMM require MXCSR even when FP is not in use. */
-		if (!(hdr.xfeatures & XFEATURE_MASK_FP)) {
-			xsave->i387.mxcsr = mxcsr[0];
-			xsave->i387.mxcsr_mask = mxcsr[1];
-		}
+		/*
+		 * Copy MXCSR regardless of the feature mask, as userspace
+		 * uses the non-compacted format.
+		 */
+		fxsave->mxcsr = mxcsr[0];
+		fxsave->mxcsr_mask = mxcsr[1];
 	}
 
-	for (i = 0; i < XFEATURE_MAX; i++) {
-		mask = BIT_ULL(i);
+	if (hdr.xfeatures & XFEATURE_MASK_SSE) {
+		if (copy_from_buffer(fxsave->xmm_space, off_xmm,
+				     sizeof(fxsave->xmm_space), kbuf, ubuf))
+			return -EINVAL;
+	}
 
-		if (hdr.xfeatures & mask) {
-			void *dst = __raw_xsave_addr(xsave, i);
+	for_each_extended_xfeature(i, hdr.xfeatures) {
+		void *dst = __raw_xsave_addr(xsave, i);
 
-			offset = xstate_offsets[i];
-			size = xstate_sizes[i];
+		offset = xstate_offsets[i];
+		size = xstate_sizes[i];
 
-			if (copy_from_buffer(dst, offset, size, kbuf, ubuf))
-				return -EFAULT;
-		}
+		if (copy_from_buffer(dst, offset, size, kbuf, ubuf))
+			return -EFAULT;
 	}
 
 	/*
@@ -1256,6 +1277,13 @@ static int copy_uabi_to_xstate(struct fpstate *fpstate, const void *kbuf,
 	 * Add back in the features that came in from userspace:
 	 */
 	xsave->header.xfeatures |= hdr.xfeatures;
+	/*
+	 * Convert the SSE bit in the feature mask, as its meaning differs
+	 * between the formats: it indicates the MXCSR state
+	 * if compacted; otherwise, it pertains to the XMM registers.
+	 */
+	if (compacted && fxsave->mxcsr != MXCSR_DEFAULT)
+		xsave->header.xfeatures |= XFEATURE_MASK_SSE;
 
 	return 0;
 }
-- 
2.17.1
From: "Chang S. Bae" <chang.seok.bae@intel.com>
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, hpa@zytor.com, avagin@gmail.com,
    seanjc@google.com, chang.seok.bae@intel.com, linux-kselftest@vger.kernel.org
Subject: [PATCH v2 2/4] selftests/x86/mxcsr: Test the MXCSR state write via ptrace
Date: Thu, 22 Sep 2022 13:00:32 -0700
Message-Id: <20220922200034.23759-3-chang.seok.bae@intel.com>
In-Reply-To: <20220922200034.23759-1-chang.seok.bae@intel.com>

The ptrace buffer is in the non-compacted format. The MXCSR state should
be written to the target thread when either the SSE or AVX component is
enabled.

Write an MXCSR value to the target and read it back. Then validate it
against the XRSTOR/XSAVE result on the current thread.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-kselftest@vger.kernel.org
---
If this is acceptable, I will also follow up to move some of the helper
functions from this and other test cases into a header file, because
duplicating what is shareable should be avoided.
---
 tools/testing/selftests/x86/Makefile |   2 +-
 tools/testing/selftests/x86/mxcsr.c  | 200 +++++++++++++++++++++++++++
 2 files changed, 201 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/x86/mxcsr.c

diff --git a/tools/testing/selftests/x86/Makefile b/tools/testing/selftests/x86/Makefile
index 0388c4d60af0..621c47960be3 100644
--- a/tools/testing/selftests/x86/Makefile
+++ b/tools/testing/selftests/x86/Makefile
@@ -13,7 +13,7 @@ CAN_BUILD_WITH_NOPIE := $(shell ./check_cc.sh "$(CC)" trivial_program.c -no-pie)
 TARGETS_C_BOTHBITS := single_step_syscall sysret_ss_attrs syscall_nt test_mremap_vdso \
 			check_initial_reg_state sigreturn iopl ioperm \
 			test_vsyscall mov_ss_trap \
-			syscall_arg_fault fsgsbase_restore sigaltstack
+			syscall_arg_fault fsgsbase_restore sigaltstack mxcsr
 TARGETS_C_32BIT_ONLY := entry_from_vm86 test_syscall_vdso unwind_vdso \
 			test_FCMOV test_FCOMI test_FISTTP \
 			vdso_restorer

diff --git a/tools/testing/selftests/x86/mxcsr.c b/tools/testing/selftests/x86/mxcsr.c
new file mode 100644
index 000000000000..7c318c48b4be
--- /dev/null
+++ b/tools/testing/selftests/x86/mxcsr.c
@@ -0,0 +1,200 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#define _GNU_SOURCE
+#include <elf.h>
+#include <err.h>
+#include <signal.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+
+#include <sys/ptrace.h>
+#include <sys/types.h>
+#include <sys/uio.h>
+#include <sys/wait.h>
+
+#include "../kselftest.h" /* For __cpuid_count() */
+
+#define LEGACY_STATE_SIZE	24
+#define MXCSR_SIZE		8
+#define STSTATE_SIZE		(8 * 16)
+#define XMM_SIZE		(16 * 16)
+#define PADDING_SIZE		96
+#define XSAVE_HDR_SIZE		64
+
+struct xsave_buffer {
+	uint8_t legacy_state[LEGACY_STATE_SIZE];
+	uint8_t mxcsr[MXCSR_SIZE];
+	uint8_t st_state[STSTATE_SIZE];
+	uint8_t xmm_state[XMM_SIZE];
+	uint8_t padding[PADDING_SIZE];
+	uint8_t header[XSAVE_HDR_SIZE];
+	uint8_t extended[0];
+};
+
+#ifdef __x86_64__
+#define REX_PREFIX "0x48, "
+#else
+#define REX_PREFIX
+#endif
+
+#define XSAVE	".byte " REX_PREFIX "0x0f,0xae,0x27"
+#define XRSTOR	".byte " REX_PREFIX "0x0f,0xae,0x2f"
+
+static inline uint64_t xgetbv(uint32_t index)
+{
+	uint32_t eax, edx;
+
+	asm volatile("xgetbv"
+		     : "=a" (eax), "=d" (edx)
+		     : "c" (index));
+	return eax | ((uint64_t)edx << 32);
+}
+
+static inline void xsave(struct xsave_buffer *xbuf, uint64_t rfbm)
+{
+	uint32_t rfbm_lo = rfbm;
+	uint32_t rfbm_hi = rfbm >> 32;
+
+	asm volatile(XSAVE :: "D" (xbuf), "a" (rfbm_lo), "d" (rfbm_hi) : "memory");
+}
+
+static inline void xrstor(struct xsave_buffer *xbuf, uint64_t rfbm)
+{
+	uint32_t rfbm_lo = rfbm;
+	uint32_t rfbm_hi = rfbm >> 32;
+
+	asm volatile(XRSTOR :: "D" (xbuf), "a" (rfbm_lo), "d" (rfbm_hi));
+}
+
+static inline void clear_xstate_header(struct xsave_buffer *xbuf)
+{
+	memset(&xbuf->header, 0, sizeof(xbuf->header));
+}
+
+static inline uint32_t get_mxcsr(struct xsave_buffer *xbuf)
+{
+	return *((uint32_t *)xbuf->mxcsr);
+}
+
+static inline void set_mxcsr(struct xsave_buffer *xbuf, uint32_t val)
+{
+	*((uint32_t *)xbuf->mxcsr) = val;
+}
+
+#define XFEATURE_MASK_SSE		0x2
+#define XFEATURE_MASK_YMM		0x4
+
+#define CPUID_LEAF1_ECX_XSAVE_MASK	(1 << 26)
+#define CPUID_LEAF1_ECX_OSXSAVE_MASK	(1 << 27)
+#define CPUID_LEAF_XSTATE		0xd
+#define CPUID_SUBLEAF_XSTATE_USER	0x0
+#define CPUID_SUBLEAF_XSTATE_EXT	0x1
+
+static bool xsave_availability(void)
+{
+	uint32_t eax, ebx, ecx, edx;
+
+	__cpuid_count(1, 0, eax, ebx, ecx, edx);
+	if (!(ecx & CPUID_LEAF1_ECX_XSAVE_MASK))
+		return false;
+	if (!(ecx & CPUID_LEAF1_ECX_OSXSAVE_MASK))
+		return false;
+	return true;
+}
+
+static uint32_t get_xbuf_size(void)
+{
+	uint32_t eax, ebx, ecx, edx;
+
+	__cpuid_count(CPUID_LEAF_XSTATE, CPUID_SUBLEAF_XSTATE_USER,
+		      eax, ebx, ecx, edx);
+	return ebx;
+}
+
+static void ptrace_get(pid_t pid, struct iovec *iov)
+{
+	memset(iov->iov_base, 0, iov->iov_len);
+
+	if (ptrace(PTRACE_GETREGSET, pid, (uint32_t)NT_X86_XSTATE, iov))
+		err(1, "PTRACE_GETREGSET");
+}
+
+static void ptrace_set(pid_t pid, struct iovec *iov)
+{
+	if (ptrace(PTRACE_SETREGSET, pid, (uint32_t)NT_X86_XSTATE, iov))
+		err(1, "PTRACE_SETREGSET");
+}
+
+int main(void)
+{
+	struct xsave_buffer *xbuf;
+	uint32_t xbuf_size;
+	struct iovec iov;
+	uint32_t mxcsr;
+	pid_t child;
+	int status;
+
+	if (!xsave_availability()) {
+		printf("[SKIP]\tSkip as XSAVE not available.\n");
+		return KSFT_SKIP;
+	}
+
+	xbuf_size = get_xbuf_size();
+	if (!xbuf_size) {
+		printf("[SKIP]\tSkip as the XSAVE buffer size is unknown.\n");
+		return KSFT_SKIP;
+	}
+
+	if (!(xgetbv(0) & (XFEATURE_MASK_SSE | XFEATURE_MASK_YMM))) {
+		printf("[SKIP]\tSkip as the SSE state is not available.\n");
+		return KSFT_SKIP;
+	}
+
+	xbuf = aligned_alloc(64, xbuf_size);
+	if (!xbuf)
+		err(1, "aligned_alloc()");
+
+	iov.iov_base = xbuf;
+	iov.iov_len = xbuf_size;
+
+	child = fork();
+	if (child < 0) {
+		err(1, "fork()");
+	} else if (!child) {
+		if (ptrace(PTRACE_TRACEME, 0, NULL, NULL))
+			err(1, "PTRACE_TRACEME");
+
+		raise(SIGTRAP);
+		_exit(0);
+	}
+
+	wait(&status);
+	if (WSTOPSIG(status) != SIGTRAP)
+		err(1, "raise(SIGTRAP)");
+
+	printf("[RUN]\tTest the MXCSR state write via ptrace().\n");
+
+	/* Set a benign value */
+	set_mxcsr(xbuf, 0xabc);
+	/* The MXCSR state should be loaded regardless of XSTATE_BV */
+	clear_xstate_header(xbuf);
+
+	/* Write the MXCSR state both locally and remotely. */
+	xrstor(xbuf, XFEATURE_MASK_SSE);
+	ptrace_set(child, &iov);
+
+	/* Read the MXCSR state back from both */
+	xsave(xbuf, XFEATURE_MASK_SSE);
+	mxcsr = get_mxcsr(xbuf);
+	ptrace_get(child, &iov);
+
+	/* Cross-check with each other */
+	if (mxcsr == get_mxcsr(xbuf))
+		printf("[OK]\tThe written state was read back correctly.\n");
+	else
+		printf("[FAIL]\tThe write (or read) was incorrect.\n");
+
+	ptrace(PTRACE_DETACH, child, NULL, NULL);
+	wait(&status);
+	if (!WIFEXITED(status) || WEXITSTATUS(status))
+		err(1, "PTRACE_DETACH");
+
+	free(xbuf);
+	return 0;
+}
-- 
2.17.1
From: "Chang S. Bae" <chang.seok.bae@intel.com>
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, hpa@zytor.com, avagin@gmail.com,
    seanjc@google.com, chang.seok.bae@intel.com, kvm@vger.kernel.org
Subject: [PATCH v2 3/4] x86/fpu: Disallow legacy states from fpstate_clear_xstate_component()
Date: Thu, 22 Sep 2022 13:00:33 -0700
Message-Id: <20220922200034.23759-4-chang.seok.bae@intel.com>
In-Reply-To: <20220922200034.23759-1-chang.seok.bae@intel.com>

Commit 087df48c298c ("x86/fpu: Replace KVMs xstate component clearing")
refactored the MPX state clearing code. But legacy states do not belong
in this routine:

- It presumes every state is contiguous, which is not true for the
  legacy states. While MXCSR belongs to SSE, it is located in the XSAVE
  buffer surrounded by FP state fields.

- Zeroing out legacy states is not meaningful anyway, as their init
  states are non-zero.

The code could be adjusted to support them, but there is no user that
clears legacy states yet. To keep it simple, explicitly disallow legacy
states.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
Changes from v1 (Sean Christopherson):
* Revert the name change.
* Add a warning.
* Update title/changelog.
---
 arch/x86/kernel/fpu/xstate.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index d7676cfc32eb..a3f7045d1f8e 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -1375,6 +1375,15 @@ void fpstate_clear_xstate_component(struct fpstate *fps, unsigned int xfeature)
 {
 	void *addr = get_xsave_addr(&fps->regs.xsave, xfeature);
 
+	/*
+	 * Allow extended states only, because:
+	 * (1) Each legacy state is not contiguously located in the buffer.
+	 * (2) Zeroing those states is not meaningful as their init states
+	 *     are not zero.
+	 */
+	if (WARN_ON_ONCE(xfeature <= XFEATURE_SSE))
+		return;
+
 	if (addr)
 		memset(addr, 0, xstate_sizes[xfeature]);
 }
-- 
2.17.1
From: "Chang S. Bae" <chang.seok.bae@intel.com>
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, hpa@zytor.com, avagin@gmail.com,
    seanjc@google.com, chang.seok.bae@intel.com
Subject: [PATCH v2 4/4] x86/fpu: Correct the legacy state offset and size information
Date: Thu, 22 Sep 2022 13:00:34 -0700
Message-Id: <20220922200034.23759-5-chang.seok.bae@intel.com>
In-Reply-To: <20220922200034.23759-1-chang.seok.bae@intel.com>

MXCSR is architecturally part of the SSE state, but the kernel code
presumes it to be part of the FP component.

Adjust the offset and size information for these legacy states. Notably,
each legacy component area is not contiguous, unlike the extended
components. Add a warning message for when these offsets and sizes are
referenced.
Fixes: ac73b27aea4e ("x86/fpu/xstate: Fix xstate_offsets, xstate_sizes for non-extended xstates")
Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
---
 arch/x86/kernel/fpu/xstate.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index a3f7045d1f8e..ac2ec5d6e7e4 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -143,8 +143,13 @@ static unsigned int xfeature_get_offset(u64 xcomp_bv, int xfeature)
 	 * offsets.
 	 */
 	if (!cpu_feature_enabled(X86_FEATURE_XCOMPACTED) ||
-	    xfeature <= XFEATURE_SSE)
+	    xfeature <= XFEATURE_SSE) {
+		if (xfeature <= XFEATURE_SSE)
+			pr_warn("The legacy state (%d) is discontiguously located.\n",
+				xfeature);
+
 		return xstate_offsets[xfeature];
+	}
 
 	/*
 	 * Compacted format offsets depend on the actual content of the
@@ -217,14 +222,18 @@ static void __init setup_xstate_cache(void)
 	 * The FP xstates and SSE xstates are legacy states. They are always
 	 * in the fixed offsets in the xsave area in either compacted form
 	 * or standard form.
+	 *
+	 * But, while MXCSR is part of the SSE state, it is located
+	 * in between the FP states. Note that it is erroneous to assume
+	 * that each legacy area is contiguous.
 	 */
 	xstate_offsets[XFEATURE_FP]	= 0;
-	xstate_sizes[XFEATURE_FP]	= offsetof(struct fxregs_state,
-						   xmm_space);
+	xstate_sizes[XFEATURE_FP]	= offsetof(struct fxregs_state, mxcsr) +
+					  sizeof_field(struct fxregs_state, st_space);
 
-	xstate_offsets[XFEATURE_SSE]	= xstate_sizes[XFEATURE_FP];
-	xstate_sizes[XFEATURE_SSE]	= sizeof_field(struct fxregs_state,
-						       xmm_space);
+	xstate_offsets[XFEATURE_SSE]	= offsetof(struct fxregs_state, mxcsr);
+	xstate_sizes[XFEATURE_SSE]	= MXCSR_AND_FLAGS_SIZE +
+					  sizeof_field(struct fxregs_state, xmm_space);
 
 	for_each_extended_xfeature(i, fpu_kernel_cfg.max_features) {
 		cpuid_count(XSTATE_CPUID, i, &eax, &ebx, &ecx, &edx);
-- 
2.17.1