From: Dapeng Mi
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim,
	Thomas Gleixner, Dave Hansen, Ian Rogers, Adrian Hunter, Jiri Olsa,
	Alexander Shishkin, Andi Kleen, Eranian Stephane
Cc: Mark Rutland, broonie@kernel.org, Ravi Bangoria,
	linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
	Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao, Kan Liang, Dapeng Mi
Subject: [Patch v6 13/22] perf/x86: Enable XMM sampling using sample_simd_vec_reg_* fields
Date: Mon, 9 Feb 2026 15:20:38 +0800
Message-Id: <20260209072047.2180332-14-dapeng1.mi@linux.intel.com>
In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>
References: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>

From: Kan Liang

Add support for sampling XMM registers using the sample_simd_vec_reg_*
fields. When sample_simd_regs_enabled is set, the original XMM space in
the sample_regs_* fields is treated as reserved. An EINVAL error will be
reported to user space if any bit in the original XMM space is set while
sample_simd_regs_enabled is set.

The perf_reg_value() function requires ABI information to understand the
layout of sample_regs. To accommodate this, a new abi field is introduced
in struct x86_perf_regs to carry that ABI information. Additionally, the
x86-specific perf_simd_reg_value() function is implemented to retrieve
the XMM register values.

Signed-off-by: Kan Liang
Co-developed-by: Dapeng Mi
Signed-off-by: Dapeng Mi
---
V6: Remove some unnecessary macros from perf_regs.h, but not all of them.
Macros like PERF_X86_SIMD_*_REGS and PERF_X86_*_QWORDS are still needed
by both the kernel and perf-tools, and perf_regs.h seems to be the best
place to define them.
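
Illustration (not part of the patch): a minimal user-space sketch of how
an event opts in to XMM sampling through this interface. The
sample_simd_* perf_event_attr fields come from the generic patches
earlier in this series, the PERF_X86_* constants are from the uapi
header touched below, and the surrounding event setup is abbreviated.

	#include <linux/perf_event.h>
	#include <asm/perf_regs.h>
	#include <string.h>

	static void want_xmm_regs(struct perf_event_attr *attr)
	{
		memset(attr, 0, sizeof(*attr));
		attr->size = sizeof(*attr);
		attr->type = PERF_TYPE_HARDWARE;
		attr->config = PERF_COUNT_HW_CPU_CYCLES;
		attr->sample_period = 100000;
		attr->sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_REGS_INTR;
		attr->precise_ip = 2;	/* XMM capture relies on PEBS */

		/*
		 * Opt in to the SIMD layout. The legacy XMM bits in
		 * sample_regs_intr must stay clear, otherwise
		 * perf_event_open() now fails with -EINVAL.
		 */
		attr->sample_simd_regs_enabled = 1;
		/* XMM registers are 2 qwords (128 bits) wide */
		attr->sample_simd_vec_reg_qwords = PERF_X86_XMM_QWORDS;
		/* XMM0-15 */
		attr->sample_simd_vec_reg_intr =
			(1ULL << PERF_X86_SIMD_XMM_REGS) - 1;
	}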

 arch/x86/events/core.c                | 90 +++++++++++++++++++++++++--
 arch/x86/events/intel/ds.c            |  2 +-
 arch/x86/events/perf_event.h          | 12 ++++
 arch/x86/include/asm/perf_event.h     |  1 +
 arch/x86/include/uapi/asm/perf_regs.h | 12 ++++
 arch/x86/kernel/perf_regs.c           | 51 ++++++++++++++-
 6 files changed, 161 insertions(+), 7 deletions(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 36b4bc413938..bd47127fb84d 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -704,6 +704,22 @@ int x86_pmu_hw_config(struct perf_event *event)
 		if (event_has_extended_regs(event)) {
 			if (!(event->pmu->capabilities & PERF_PMU_CAP_EXTENDED_REGS))
 				return -EINVAL;
+			if (event->attr.sample_simd_regs_enabled)
+				return -EINVAL;
+		}
+
+		if (event_has_simd_regs(event)) {
+			if (!(event->pmu->capabilities & PERF_PMU_CAP_SIMD_REGS))
+				return -EINVAL;
+			/* A vector register width is set but no registers are requested */
+			if (event->attr.sample_simd_vec_reg_qwords &&
+			    !event->attr.sample_simd_vec_reg_intr &&
+			    !event->attr.sample_simd_vec_reg_user)
+				return -EINVAL;
+			/* The requested vector register set is not supported */
+			if (event_needs_xmm(event) &&
+			    !(x86_pmu.ext_regs_mask & XFEATURE_MASK_SSE))
+				return -EINVAL;
 		}
 	}
 
@@ -1749,6 +1765,7 @@ static void x86_pmu_perf_get_regs_user(struct perf_sample_data *data,
 	struct x86_perf_regs *x86_regs_user = this_cpu_ptr(&x86_user_regs);
 	struct perf_regs regs_user;
 
+	x86_regs_user->abi = PERF_SAMPLE_REGS_ABI_NONE;
 	perf_get_regs_user(&regs_user, regs);
 	data->regs_user.abi = regs_user.abi;
 	if (regs_user.regs) {
@@ -1758,12 +1775,26 @@ static void x86_pmu_perf_get_regs_user(struct perf_sample_data *data,
 		data->regs_user.regs = NULL;
 }
 
+static inline void
+x86_pmu_update_ext_regs_size(struct perf_event_attr *attr,
+			     struct perf_sample_data *data,
+			     struct pt_regs *regs,
+			     u64 mask, u64 pred_mask)
+{
+	u16 pred_qwords = attr->sample_simd_pred_reg_qwords;
+	u16 vec_qwords = attr->sample_simd_vec_reg_qwords;
+
+	data->dyn_size += (hweight64(mask) * vec_qwords +
+			   hweight64(pred_mask) * pred_qwords) * sizeof(u64);
+}
+
 static void x86_pmu_setup_basic_regs_data(struct perf_event *event,
 					  struct perf_sample_data *data,
 					  struct pt_regs *regs)
 {
 	struct perf_event_attr *attr = &event->attr;
 	u64 sample_type = attr->sample_type;
+	struct x86_perf_regs *perf_regs;
 
 	if (sample_type & PERF_SAMPLE_REGS_USER) {
 		if (user_mode(regs)) {
@@ -1783,8 +1814,13 @@ static void x86_pmu_setup_basic_regs_data(struct perf_event *event,
 			data->regs_user.regs = NULL;
 		}
 		data->dyn_size += sizeof(u64);
-		if (data->regs_user.regs)
-			data->dyn_size += hweight64(attr->sample_regs_user) * sizeof(u64);
+		if (data->regs_user.regs) {
+			data->dyn_size +=
+				hweight64(attr->sample_regs_user) * sizeof(u64);
+			perf_regs = container_of(data->regs_user.regs,
+						 struct x86_perf_regs, regs);
+			perf_regs->abi = data->regs_user.abi;
+		}
 		data->sample_flags |= PERF_SAMPLE_REGS_USER;
 	}
 
@@ -1792,8 +1828,13 @@
 		data->regs_intr.regs = regs;
 		data->regs_intr.abi = perf_reg_abi(current);
 		data->dyn_size += sizeof(u64);
-		if (data->regs_intr.regs)
-			data->dyn_size += hweight64(attr->sample_regs_intr) * sizeof(u64);
+		if (data->regs_intr.regs) {
+			data->dyn_size +=
+				hweight64(attr->sample_regs_intr) * sizeof(u64);
+			perf_regs = container_of(data->regs_intr.regs,
+						 struct x86_perf_regs, regs);
+			perf_regs->abi = data->regs_intr.abi;
+		}
 		data->sample_flags |= PERF_SAMPLE_REGS_INTR;
 	}
 }
@@ -1885,7 +1926,7 @@ static void x86_pmu_sample_extended_regs(struct perf_event *event,
 
 	perf_regs = container_of(regs, struct x86_perf_regs, regs);
 
-	if (event_has_extended_regs(event))
+	if (event_needs_xmm(event))
 		mask |= XFEATURE_MASK_SSE;
 
 	mask &= x86_pmu.ext_regs_mask;
@@ -1909,6 +1950,44 @@
 	x86_pmu_update_ext_regs(perf_regs, xsave, intr_mask);
 }
 
+static void x86_pmu_setup_extended_regs_data(struct perf_event *event,
+					     struct perf_sample_data *data,
+					     struct pt_regs *regs)
+{
+	struct perf_event_attr *attr = &event->attr;
+	u64 sample_type = attr->sample_type;
+	struct x86_perf_regs *perf_regs;
+
+	if (!attr->sample_simd_regs_enabled)
+		return;
+
+	if (sample_type & PERF_SAMPLE_REGS_USER && data->regs_user.abi) {
+		perf_regs = container_of(data->regs_user.regs,
+					 struct x86_perf_regs, regs);
+		perf_regs->abi |= PERF_SAMPLE_REGS_ABI_SIMD;
+
+		/* num and qwords of vector and pred registers */
+		data->dyn_size += sizeof(u64);
+		data->regs_user.abi |= PERF_SAMPLE_REGS_ABI_SIMD;
+		x86_pmu_update_ext_regs_size(attr, data, data->regs_user.regs,
+					     attr->sample_simd_vec_reg_user,
+					     attr->sample_simd_pred_reg_user);
+	}
+
+	if (sample_type & PERF_SAMPLE_REGS_INTR && data->regs_intr.abi) {
+		perf_regs = container_of(data->regs_intr.regs,
+					 struct x86_perf_regs, regs);
+		perf_regs->abi |= PERF_SAMPLE_REGS_ABI_SIMD;
+
+		/* num and qwords of vector and pred registers */
+		data->dyn_size += sizeof(u64);
+		data->regs_intr.abi |= PERF_SAMPLE_REGS_ABI_SIMD;
+		x86_pmu_update_ext_regs_size(attr, data, data->regs_intr.regs,
+					     attr->sample_simd_vec_reg_intr,
+					     attr->sample_simd_pred_reg_intr);
+	}
+}
+
 void x86_pmu_setup_regs_data(struct perf_event *event,
 			     struct perf_sample_data *data,
 			     struct pt_regs *regs,
@@ -1920,6 +1999,7 @@ void x86_pmu_setup_regs_data(struct perf_event *event,
 	 * which may be unnecessary to sample again.
 	 */
 	x86_pmu_sample_extended_regs(event, data, regs, ignore_mask);
+	x86_pmu_setup_extended_regs_data(event, data, regs);
 }
 
 int x86_pmu_handle_irq(struct pt_regs *regs)
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 229dbe368b65..272725d749df 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1735,7 +1735,7 @@ static u64 pebs_update_adaptive_cfg(struct perf_event *event)
 	if (gprs || (attr->precise_ip < 2) || tsx_weight)
 		pebs_data_cfg |= PEBS_DATACFG_GP;
 
-	if (event_has_extended_regs(event))
+	if (event_needs_xmm(event))
 		pebs_data_cfg |= PEBS_DATACFG_XMMS;
 
 	if (sample_type & PERF_SAMPLE_BRANCH_STACK) {
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index a32ee4f0c891..02eea137e261 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -137,6 +137,18 @@ static inline bool is_acr_event_group(struct perf_event *event)
 	return check_leader_group(event->group_leader, PERF_X86_EVENT_ACR);
 }
 
+static inline bool event_needs_xmm(struct perf_event *event)
+{
+	if (event->attr.sample_simd_regs_enabled &&
+	    event->attr.sample_simd_vec_reg_qwords >= PERF_X86_XMM_QWORDS)
+		return true;
+
+	if (!event->attr.sample_simd_regs_enabled &&
+	    event_has_extended_regs(event))
+		return true;
+	return false;
+}
+
 struct amd_nb {
 	int nb_id;	/* NorthBridge id */
 	int refcnt;	/* reference count */
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 7baa1b0f889f..1f172740916c 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -709,6 +709,7 @@ extern void perf_events_lapic_init(void);
 struct pt_regs;
 struct x86_perf_regs {
 	struct pt_regs	regs;
+	u64		abi;
 	union {
 		u64	*xmm_regs;
 		u32	*xmm_space;	/* for xsaves */
diff --git a/arch/x86/include/uapi/asm/perf_regs.h b/arch/x86/include/uapi/asm/perf_regs.h
index 7c9d2bb3833b..342b08448138 100644
--- a/arch/x86/include/uapi/asm/perf_regs.h
+++ b/arch/x86/include/uapi/asm/perf_regs.h
@@ -55,4 +55,16 @@ enum perf_event_x86_regs {
 
 #define PERF_REG_EXTENDED_MASK	(~((1ULL << PERF_REG_X86_XMM0) - 1))
 
+enum {
+	PERF_X86_SIMD_XMM_REGS		= 16,
+	PERF_X86_SIMD_VEC_REGS_MAX	= PERF_X86_SIMD_XMM_REGS,
+};
+
+#define PERF_X86_SIMD_VEC_MASK	GENMASK_ULL(PERF_X86_SIMD_VEC_REGS_MAX - 1, 0)
+
+enum {
+	PERF_X86_XMM_QWORDS		= 2,
+	PERF_X86_SIMD_QWORDS_MAX	= PERF_X86_XMM_QWORDS,
+};
+
 #endif /* _ASM_X86_PERF_REGS_H */
diff --git a/arch/x86/kernel/perf_regs.c b/arch/x86/kernel/perf_regs.c
index 81204cb7f723..9947a6b5c260 100644
--- a/arch/x86/kernel/perf_regs.c
+++ b/arch/x86/kernel/perf_regs.c
@@ -63,6 +63,9 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
 
 	if (idx >= PERF_REG_X86_XMM0 && idx < PERF_REG_X86_XMM_MAX) {
 		perf_regs = container_of(regs, struct x86_perf_regs, regs);
+		/* SIMD registers are moved to dedicated sample_simd_vec_reg */
+		if (perf_regs->abi & PERF_SAMPLE_REGS_ABI_SIMD)
+			return 0;
 		if (!perf_regs->xmm_regs)
 			return 0;
 		return perf_regs->xmm_regs[idx - PERF_REG_X86_XMM0];
@@ -74,6 +77,51 @@
 	return regs_get_register(regs, pt_regs_offset[idx]);
 }
 
+u64 perf_simd_reg_value(struct pt_regs *regs, int idx,
+			u16 qwords_idx, bool pred)
+{
+	struct x86_perf_regs *perf_regs =
+		container_of(regs, struct x86_perf_regs, regs);
+
+	if (pred)
+		return 0;
+
+	if (WARN_ON_ONCE(idx >= PERF_X86_SIMD_VEC_REGS_MAX ||
+			 qwords_idx >= PERF_X86_SIMD_QWORDS_MAX))
+		return 0;
+
+	if (qwords_idx < PERF_X86_XMM_QWORDS) {
+		if (!perf_regs->xmm_regs)
+			return 0;
+		return perf_regs->xmm_regs[idx * PERF_X86_XMM_QWORDS +
+					   qwords_idx];
+	}
+
+	return 0;
+}
+
+int perf_simd_reg_validate(u16 vec_qwords, u64 vec_mask,
+			   u16 pred_qwords, u32 pred_mask)
+{
+	/* pred_qwords implies sample_simd_{pred,vec}_reg_* are supported */
+	if (!pred_qwords)
+		return 0;
+
+	if (!vec_qwords) {
+		if (vec_mask)
+			return -EINVAL;
+	} else {
+		if (vec_qwords != PERF_X86_XMM_QWORDS)
+			return -EINVAL;
+		if (vec_mask & ~PERF_X86_SIMD_VEC_MASK)
+			return -EINVAL;
+	}
+	if (pred_mask)
+		return -EINVAL;
+
+	return 0;
+}
+
 #define PERF_REG_X86_RESERVED	(((1ULL << PERF_REG_X86_XMM0) - 1) & \
 				 ~((1ULL << PERF_REG_X86_MAX) - 1))
 
@@ -108,7 +156,8 @@ u64 perf_reg_abi(struct task_struct *task)
 
 int perf_reg_validate(u64 mask)
 {
-	if (!mask || (mask & (REG_NOSUPPORT | PERF_REG_X86_RESERVED)))
+	/* The mask can be 0 if only SIMD registers are of interest */
+	if (mask & (REG_NOSUPPORT | PERF_REG_X86_RESERVED))
 		return -EINVAL;
 
 	return 0;
-- 
2.34.1
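
Illustration (not part of the patch): a rough consumer-side view of the
resulting PERF_SAMPLE_REGS_INTR block. Only the ordering follows this
patch: the abi word, the sampled GP registers, one extra u64 describing
the num and qwords of the vector and pred registers, then the vector
qwords. The helper and its names below are hypothetical, and
PERF_SAMPLE_REGS_ABI_SIMD comes from the generic patch earlier in this
series.

	#include <stdint.h>
	#include <linux/perf_event.h>

	/* Hypothetical parser-side helper, assuming the layout above. */
	static const uint64_t *skip_regs_intr(const uint64_t *p,
					      uint64_t gp_mask,
					      uint64_t vec_mask,
					      unsigned int vec_qwords)
	{
		uint64_t abi = *p++;		/* PERF_SAMPLE_REGS_ABI_* */

		if (abi != PERF_SAMPLE_REGS_ABI_NONE)	/* GP regs captured? */
			p += __builtin_popcountll(gp_mask);

		if (abi & PERF_SAMPLE_REGS_ABI_SIMD) {
			p++;	/* num/qwords of vector and pred registers */
			/* vec_qwords u64s per selected register; XMM => 2 */
			p += __builtin_popcountll(vec_mask) * vec_qwords;
			/* pred registers would follow; empty for XMM-only */
		}
		return p;
	}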