From nobody Tue Feb 10 14:26:01 2026 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.10]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 86D2231770B; Mon, 9 Feb 2026 07:24:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.10 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1770621895; cv=none; b=MMhyg2WkiZQNjl+2uz+GvSvpRwQN5oreuPREDPMqr3U/MhgNfhBj/2srzlKVZLPIszE+aWu1jznM01/QmUATB+CXktU3s/yb+C7M460gHTAxX7PeqCs+HA3DT6J7mTTH2Z3xzfCMOedR4FeE0FMhnjiM05AhL5tMdTS/sZV/bVc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1770621895; c=relaxed/simple; bh=Cq+Py8qfBn01PCvh++AD7LhREt2xWkW5h9nsyAnf4xw=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=qyj07UM3Kv9p2Eyow8zCCvlBYrR1DbdyzZLoEkliHaqIvuclPcAyU85WKLmhigRyqpZuFr08bUCyj2zkwxQilOQkQPcQ42N1acJr8kZOhGjGFUXbQ671mp5Tib+BKpYDcxwZ4f08Bc/SvRODf2N3Xj5x6DSxr8iGVyQXhM2J3g4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=pass smtp.mailfrom=linux.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=IRgSLOPV; arc=none smtp.client-ip=192.198.163.10 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="IRgSLOPV" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1770621896; x=1802157896; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Cq+Py8qfBn01PCvh++AD7LhREt2xWkW5h9nsyAnf4xw=; 
b=IRgSLOPVpaQZZF1kANh7y6kY7C4qipvJ4rehZOspaMncDeyyGI3rw42C hgfCRICAbdFdc9jS97B1UE+su4JEE3t2QK9xvyUMwJEi1lnvDSGWkhmP/ 3spxwksLr8xtG++GyjTlp0VtfGyKsVYuyn/prYBo/CF8QGPlpX8gZrY2w FLFHAFyeChJnG/mUkKZ8cDRg9v/dpTLLivs4GYaDDacuSqz1LNMRsV0sx VMaujgFlkx3rUDqcGKc33HKwU4Fd3NZnhl1EG/1LVtz01/RGh4nURiT8d +jxUrZvqP55cWp4J+tOkLxhPOjPzpTlKGKLIkuwlv5idc5hb0miytAeFu g==; X-CSE-ConnectionGUID: CpsDo21rSZu6IsP3cDfxZQ== X-CSE-MsgGUID: Ehc7JZZ2RQi+FfQcIh99ZA== X-IronPort-AV: E=McAfee;i="6800,10657,11695"; a="83098228" X-IronPort-AV: E=Sophos;i="6.21,281,1763452800"; d="scan'208";a="83098228" Received: from fmviesa001.fm.intel.com ([10.60.135.141]) by fmvoesa104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 08 Feb 2026 23:24:55 -0800 X-CSE-ConnectionGUID: NY5L17SkT1qze+7aWgFSkA== X-CSE-MsgGUID: g1HHoSLFSp+hdDPUEt7FKw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.21,281,1763452800"; d="scan'208";a="241694582" Received: from spr.sh.intel.com ([10.112.229.196]) by fmviesa001.fm.intel.com with ESMTP; 08 Feb 2026 23:24:51 -0800 From: Dapeng Mi To: Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Namhyung Kim , Thomas Gleixner , Dave Hansen , Ian Rogers , Adrian Hunter , Jiri Olsa , Alexander Shishkin , Andi Kleen , Eranian Stephane Cc: Mark Rutland , broonie@kernel.org, Ravi Bangoria , linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Zide Chen , Falcon Thomas , Dapeng Mi , Xudong Hao , Dapeng Mi Subject: [Patch v6 01/22] perf/x86/intel: Restrict PEBS_ENABLE writes to PEBS-capable counters Date: Mon, 9 Feb 2026 15:20:26 +0800 Message-Id: <20260209072047.2180332-2-dapeng1.mi@linux.intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com> References: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" 
Before the introduction of extended PEBS, PEBS supported only general-purpose (GP) counters. In a virtual machine (VM) environment, the PEBS_BASELINE bit in PERF_CAPABILITIES may not be set, but the PEBS format could be indicated as 4 or higher. In such cases, PEBS events might be scheduled to fixed counters, and writing the corresponding bits into the PEBS_ENABLE MSR could cause a #GP fault.

To prevent writing unsupported bits into the PEBS_ENABLE MSR, ensure cpuc->pebs_enabled aligns with x86_pmu.pebs_capable and restrict the writes to only PEBS-capable counter bits.

Signed-off-by: Dapeng Mi
---
V6: new patch.

 arch/x86/events/intel/core.c |  6 ++++--
 arch/x86/events/intel/ds.c   | 11 +++++++----
 2 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index f3ae1f8ee3cd..546ebc7e1624 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3554,8 +3554,10 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
	 * cpuc->enabled has been forced to 0 in PMI.
	 * Update the MSR if pebs_enabled is changed.
	 */
-	if (pebs_enabled != cpuc->pebs_enabled)
-		wrmsrq(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);
+	if (pebs_enabled != cpuc->pebs_enabled) {
+		wrmsrq(MSR_IA32_PEBS_ENABLE,
+		       cpuc->pebs_enabled & x86_pmu.pebs_capable);
+	}

	/*
	 * Above PEBS handler (PEBS counters snapshotting) has updated fixed
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 5027afc97b65..57805c6ba0c3 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1963,6 +1963,7 @@ void intel_pmu_pebs_disable(struct perf_event *event)
 {
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
	struct hw_perf_event *hwc = &event->hw;
+	u64 pebs_enabled;

	__intel_pmu_pebs_disable(event);

@@ -1974,16 +1975,18 @@ void intel_pmu_pebs_disable(struct perf_event *event)

	intel_pmu_pebs_via_pt_disable(event);

-	if (cpuc->enabled)
-		wrmsrq(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);
+	pebs_enabled = cpuc->pebs_enabled & x86_pmu.pebs_capable;
+	if (pebs_enabled)
+		wrmsrq(MSR_IA32_PEBS_ENABLE, pebs_enabled);
 }

 void intel_pmu_pebs_enable_all(void)
 {
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+	u64 pebs_enabled = cpuc->pebs_enabled & x86_pmu.pebs_capable;

-	if (cpuc->pebs_enabled)
-		wrmsrq(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);
+	if (pebs_enabled)
+		wrmsrq(MSR_IA32_PEBS_ENABLE, pebs_enabled);
 }

 void intel_pmu_pebs_disable_all(void)
-- 
2.34.1

From nobody Tue Feb 10 14:26:01 2026
From: Dapeng Mi
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers, Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen, Eranian Stephane
Cc: Mark Rutland, broonie@kernel.org, Ravi Bangoria, linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao, Dapeng Mi
Subject: [Patch v6 02/22] perf/x86/intel: Enable large PEBS sampling for XMMs
Date: Mon, 9 Feb 2026 15:20:27 +0800
Message-Id: <20260209072047.2180332-3-dapeng1.mi@linux.intel.com>
In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>

Modern PEBS hardware supports directly sampling XMM registers, so large PEBS can be enabled for XMM registers just like for the other GPRs.

Reported-by: Xudong Hao
Signed-off-by: Dapeng Mi
---
V6: new patch.
 arch/x86/events/intel/core.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 546ebc7e1624..5ed26b83c61d 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -4425,7 +4425,8 @@ static unsigned long intel_pmu_large_pebs_flags(struct perf_event *event)
		flags &= ~PERF_SAMPLE_REGS_USER;
	if (event->attr.sample_regs_user & ~PEBS_GP_REGS)
		flags &= ~PERF_SAMPLE_REGS_USER;
-	if (event->attr.sample_regs_intr & ~PEBS_GP_REGS)
+	if (event->attr.sample_regs_intr &
+	    ~(PEBS_GP_REGS | PERF_REG_EXTENDED_MASK))
		flags &= ~PERF_SAMPLE_REGS_INTR;
	return flags;
 }
-- 
2.34.1

From nobody Tue Feb 10 14:26:01 2026
From: Dapeng Mi
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers, Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen, Eranian Stephane
Cc: Mark Rutland, broonie@kernel.org, Ravi Bangoria, linux-kernel@vger.kernel.org,
linux-perf-users@vger.kernel.org, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao, Dapeng Mi
Subject: [Patch v6 03/22] perf/x86/intel: Convert x86_perf_regs to per-cpu variables
Date: Mon, 9 Feb 2026 15:20:28 +0800
Message-Id: <20260209072047.2180332-4-dapeng1.mi@linux.intel.com>
In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>

Currently, the intel_pmu_drain_pebs_icl() and intel_pmu_drain_arch_pebs() helpers define many temporary variables. Upcoming patches will add new fields such as *ymm_regs and *zmm_regs to the x86_perf_regs structure to support sampling of these SIMD registers. That would increase the stack size consumed by these helpers, potentially triggering the warning: "the frame size of 1048 bytes is larger than 1024 bytes [-Wframe-larger-than=]".

To eliminate this warning, convert x86_perf_regs to per-cpu variables. No functional changes are intended.

Signed-off-by: Dapeng Mi
---
V6: new patch.
 arch/x86/events/intel/ds.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 57805c6ba0c3..87bf8672f5a8 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -3174,14 +3174,16 @@ __intel_pmu_handle_last_pebs_record(struct pt_regs *iregs,
 }

+DEFINE_PER_CPU(struct x86_perf_regs, pebs_perf_regs);
+
 static void intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_data *data)
 {
	short counts[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS] = {};
	void *last[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS];
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
	struct debug_store *ds = cpuc->ds;
-	struct x86_perf_regs perf_regs;
-	struct pt_regs *regs = &perf_regs.regs;
+	struct x86_perf_regs *perf_regs = this_cpu_ptr(&pebs_perf_regs);
+	struct pt_regs *regs = &perf_regs->regs;
	struct pebs_basic *basic;
	void *base, *at, *top;
	u64 mask;
@@ -3231,8 +3233,8 @@ static void intel_pmu_drain_arch_pebs(struct pt_regs *iregs,
	void *last[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS];
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
	union arch_pebs_index index;
-	struct x86_perf_regs perf_regs;
-	struct pt_regs *regs = &perf_regs.regs;
+	struct x86_perf_regs *perf_regs = this_cpu_ptr(&pebs_perf_regs);
+	struct pt_regs *regs = &perf_regs->regs;
	void *base, *at, *top;
	u64 mask;

-- 
2.34.1

From nobody Tue Feb 10 14:26:01 2026
From: Dapeng Mi
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers, Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen, Eranian Stephane
Cc: Mark Rutland, broonie@kernel.org, Ravi Bangoria, linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao, Dapeng Mi
Subject: [Patch v6 04/22] perf: Eliminate duplicate arch-specific function definitions
Date: Mon, 9 Feb 2026 15:20:29 +0800
Message-Id: <20260209072047.2180332-5-dapeng1.mi@linux.intel.com>
In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>

Define common __weak default implementations of perf_reg_value(), perf_reg_validate(), perf_reg_abi() and perf_get_regs_user(). This eliminates the duplicated arch-specific definitions. No functional changes intended.
Signed-off-by: Dapeng Mi
---
 arch/arm/kernel/perf_regs.c       |  6 ------
 arch/arm64/kernel/perf_regs.c     |  6 ------
 arch/csky/kernel/perf_regs.c      |  6 ------
 arch/loongarch/kernel/perf_regs.c |  6 ------
 arch/mips/kernel/perf_regs.c      |  6 ------
 arch/parisc/kernel/perf_regs.c    |  6 ------
 arch/riscv/kernel/perf_regs.c     |  6 ------
 arch/x86/kernel/perf_regs.c       |  6 ------
 include/linux/perf_regs.h         | 32 ++++++-------------------------
 kernel/events/core.c              | 22 +++++++++++++++++++++
 10 files changed, 28 insertions(+), 74 deletions(-)

diff --git a/arch/arm/kernel/perf_regs.c b/arch/arm/kernel/perf_regs.c
index 0529f90395c9..d575a4c3ca56 100644
--- a/arch/arm/kernel/perf_regs.c
+++ b/arch/arm/kernel/perf_regs.c
@@ -31,9 +31,3 @@ u64 perf_reg_abi(struct task_struct *task)
	return PERF_SAMPLE_REGS_ABI_32;
 }

-void perf_get_regs_user(struct perf_regs *regs_user,
-			struct pt_regs *regs)
-{
-	regs_user->regs = task_pt_regs(current);
-	regs_user->abi = perf_reg_abi(current);
-}
diff --git a/arch/arm64/kernel/perf_regs.c b/arch/arm64/kernel/perf_regs.c
index b4eece3eb17d..70e2f13f587f 100644
--- a/arch/arm64/kernel/perf_regs.c
+++ b/arch/arm64/kernel/perf_regs.c
@@ -98,9 +98,3 @@ u64 perf_reg_abi(struct task_struct *task)
	return PERF_SAMPLE_REGS_ABI_64;
 }

-void perf_get_regs_user(struct perf_regs *regs_user,
-			struct pt_regs *regs)
-{
-	regs_user->regs = task_pt_regs(current);
-	regs_user->abi = perf_reg_abi(current);
-}
diff --git a/arch/csky/kernel/perf_regs.c b/arch/csky/kernel/perf_regs.c
index 09b7f88a2d6a..94601f37b596 100644
--- a/arch/csky/kernel/perf_regs.c
+++ b/arch/csky/kernel/perf_regs.c
@@ -31,9 +31,3 @@ u64 perf_reg_abi(struct task_struct *task)
	return PERF_SAMPLE_REGS_ABI_32;
 }

-void perf_get_regs_user(struct perf_regs *regs_user,
-			struct pt_regs *regs)
-{
-	regs_user->regs = task_pt_regs(current);
-	regs_user->abi = perf_reg_abi(current);
-}
diff --git a/arch/loongarch/kernel/perf_regs.c b/arch/loongarch/kernel/perf_regs.c
index 263ac4ab5af6..8dd604f01745 100644
--- a/arch/loongarch/kernel/perf_regs.c
+++ b/arch/loongarch/kernel/perf_regs.c
@@ -45,9 +45,3 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
	return regs->regs[idx];
 }

-void perf_get_regs_user(struct perf_regs *regs_user,
-			struct pt_regs *regs)
-{
-	regs_user->regs = task_pt_regs(current);
-	regs_user->abi = perf_reg_abi(current);
-}
diff --git a/arch/mips/kernel/perf_regs.c b/arch/mips/kernel/perf_regs.c
index e686780d1647..7736d3c5ebd2 100644
--- a/arch/mips/kernel/perf_regs.c
+++ b/arch/mips/kernel/perf_regs.c
@@ -60,9 +60,3 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
	return (s64)v; /* Sign extend if 32-bit. */
 }

-void perf_get_regs_user(struct perf_regs *regs_user,
-			struct pt_regs *regs)
-{
-	regs_user->regs = task_pt_regs(current);
-	regs_user->abi = perf_reg_abi(current);
-}
diff --git a/arch/parisc/kernel/perf_regs.c b/arch/parisc/kernel/perf_regs.c
index 10a1a5f06a18..b9fe1f2fcb9b 100644
--- a/arch/parisc/kernel/perf_regs.c
+++ b/arch/parisc/kernel/perf_regs.c
@@ -53,9 +53,3 @@ u64 perf_reg_abi(struct task_struct *task)
	return PERF_SAMPLE_REGS_ABI_64;
 }

-void perf_get_regs_user(struct perf_regs *regs_user,
-			struct pt_regs *regs)
-{
-	regs_user->regs = task_pt_regs(current);
-	regs_user->abi = perf_reg_abi(current);
-}
diff --git a/arch/riscv/kernel/perf_regs.c b/arch/riscv/kernel/perf_regs.c
index fd304a248de6..3bba8deababb 100644
--- a/arch/riscv/kernel/perf_regs.c
+++ b/arch/riscv/kernel/perf_regs.c
@@ -35,9 +35,3 @@ u64 perf_reg_abi(struct task_struct *task)
 #endif
 }

-void perf_get_regs_user(struct perf_regs *regs_user,
-			struct pt_regs *regs)
-{
-	regs_user->regs = task_pt_regs(current);
-	regs_user->abi = perf_reg_abi(current);
-}
diff --git a/arch/x86/kernel/perf_regs.c b/arch/x86/kernel/perf_regs.c
index 624703af80a1..81204cb7f723 100644
--- a/arch/x86/kernel/perf_regs.c
+++ b/arch/x86/kernel/perf_regs.c
@@ -100,12 +100,6 @@ u64 perf_reg_abi(struct task_struct *task)
	return PERF_SAMPLE_REGS_ABI_32;
 }

-void perf_get_regs_user(struct perf_regs *regs_user,
-			struct pt_regs *regs)
-{
-	regs_user->regs = task_pt_regs(current);
-	regs_user->abi = perf_reg_abi(current);
-}
 #else /* CONFIG_X86_64 */
 #define REG_NOSUPPORT ((1ULL << PERF_REG_X86_DS) | \
		       (1ULL << PERF_REG_X86_ES) | \
diff --git a/include/linux/perf_regs.h b/include/linux/perf_regs.h
index f632c5725f16..144bcc3ff19f 100644
--- a/include/linux/perf_regs.h
+++ b/include/linux/perf_regs.h
@@ -9,6 +9,12 @@ struct perf_regs {
	struct pt_regs	*regs;
 };

+u64 perf_reg_value(struct pt_regs *regs, int idx);
+int perf_reg_validate(u64 mask);
+u64 perf_reg_abi(struct task_struct *task);
+void perf_get_regs_user(struct perf_regs *regs_user,
+			struct pt_regs *regs);
+
 #ifdef CONFIG_HAVE_PERF_REGS
 #include <asm/perf_regs.h>

@@ -16,35 +22,9 @@ struct perf_regs {
 #define PERF_REG_EXTENDED_MASK	0
 #endif

-u64 perf_reg_value(struct pt_regs *regs, int idx);
-int perf_reg_validate(u64 mask);
-u64 perf_reg_abi(struct task_struct *task);
-void perf_get_regs_user(struct perf_regs *regs_user,
-			struct pt_regs *regs);
 #else

 #define PERF_REG_EXTENDED_MASK	0

-static inline u64 perf_reg_value(struct pt_regs *regs, int idx)
-{
-	return 0;
-}
-
-static inline int perf_reg_validate(u64 mask)
-{
-	return mask ? -ENOSYS : 0;
-}
-
-static inline u64 perf_reg_abi(struct task_struct *task)
-{
-	return PERF_SAMPLE_REGS_ABI_NONE;
-}
-
-static inline void perf_get_regs_user(struct perf_regs *regs_user,
-				      struct pt_regs *regs)
-{
-	regs_user->regs = task_pt_regs(current);
-	regs_user->abi = perf_reg_abi(current);
-}
 #endif /* CONFIG_HAVE_PERF_REGS */
 #endif /* _LINUX_PERF_REGS_H */
diff --git a/kernel/events/core.c b/kernel/events/core.c
index da013b9a595f..8410b1a7ef3b 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7723,6 +7723,28 @@ unsigned long perf_instruction_pointer(struct perf_event *event,
	return perf_arch_instruction_pointer(regs);
 }

+u64 __weak perf_reg_value(struct pt_regs *regs, int idx)
+{
+	return 0;
+}
+
+int __weak perf_reg_validate(u64 mask)
+{
+	return mask ? -ENOSYS : 0;
+}
+
+u64 __weak perf_reg_abi(struct task_struct *task)
+{
+	return PERF_SAMPLE_REGS_ABI_NONE;
+}
+
+void __weak perf_get_regs_user(struct perf_regs *regs_user,
+			       struct pt_regs *regs)
+{
+	regs_user->regs = task_pt_regs(current);
+	regs_user->abi = perf_reg_abi(current);
+}
+
 static void perf_output_sample_regs(struct perf_output_handle *handle,
				    struct pt_regs *regs, u64 mask)
-- 
2.34.1

From nobody Tue Feb 10 14:26:01 2026
From: Dapeng Mi
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers, Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen, Eranian Stephane
Cc: Mark Rutland, broonie@kernel.org, Ravi Bangoria, linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao, Kan Liang, Dapeng Mi
Subject: [Patch v6 05/22] perf/x86: Use x86_perf_regs in the x86 NMI handler
Date: Mon, 9 Feb 2026 15:20:30 +0800
Message-Id: <20260209072047.2180332-6-dapeng1.mi@linux.intel.com>
In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>

From: Kan Liang

More and more registers will be supported in the overflow handler, e.g. more vector registers, SSP, etc. The generic pt_regs struct cannot store all of them, so use an x86-specific x86_perf_regs instead. The struct pt_regs *regs is still passed to x86_pmu_handle_irq(), so there is no functional change for the existing code.

AMD IBS's NMI handler doesn't use the static call x86_pmu_handle_irq(), so the x86_perf_regs struct doesn't apply to AMD IBS. It can be added separately later when AMD IBS supports more registers.
Signed-off-by: Kan Liang
Signed-off-by: Dapeng Mi
---
 arch/x86/events/core.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 6df73e8398cd..8c80d22864d8 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1785,6 +1785,7 @@ EXPORT_SYMBOL_FOR_KVM(perf_put_guest_lvtpc);
 static int
 perf_event_nmi_handler(unsigned int cmd, struct pt_regs *regs)
 {
+	struct x86_perf_regs x86_regs;
 	u64 start_clock;
 	u64 finish_clock;
 	int ret;
@@ -1808,7 +1809,8 @@ perf_event_nmi_handler(unsigned int cmd, struct pt_regs *regs)
 		return NMI_DONE;
 
 	start_clock = sched_clock();
-	ret = static_call(x86_pmu_handle_irq)(regs);
+	x86_regs.regs = *regs;
+	ret = static_call(x86_pmu_handle_irq)(&x86_regs.regs);
 	finish_clock = sched_clock();
 
 	perf_sample_event_took(finish_clock - start_clock);
-- 
2.34.1

From nobody Tue Feb 10 14:26:01 2026
From: Dapeng Mi
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim,
 Thomas Gleixner, Dave Hansen, Ian Rogers, Adrian Hunter, Jiri Olsa,
 Alexander Shishkin, Andi Kleen, Eranian Stephane
Cc: Mark Rutland, broonie@kernel.org, Ravi Bangoria,
 linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
 Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao, Kan Liang, Dapeng Mi
Subject: [Patch v6 06/22] perf/x86: Introduce x86-specific x86_pmu_setup_regs_data()
Date: Mon, 9 Feb 2026 15:20:31 +0800
Message-Id: <20260209072047.2180332-7-dapeng1.mi@linux.intel.com>
In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>
References: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>

From: Kan Liang

The current perf/x86 implementation uses the generic functions
perf_sample_regs_user() and perf_sample_regs_intr() to set up the
register data for sampling records. While this approach works for
general-purpose registers, it falls short when adding sampling support
for SIMD and APX eGPR registers on x86 platforms.

To address this, introduce the x86-specific function
x86_pmu_setup_regs_data() for setting up register data on x86
platforms. At present, x86_pmu_setup_regs_data() mirrors the logic of
the generic functions perf_sample_regs_user() and
perf_sample_regs_intr(). Subsequent patches will introduce x86-specific
enhancements.
Signed-off-by: Kan Liang
Signed-off-by: Dapeng Mi
---
 arch/x86/events/core.c       | 33 +++++++++++++++++++++++++++++++++
 arch/x86/events/intel/ds.c   |  9 ++++++---
 arch/x86/events/perf_event.h |  4 ++++
 3 files changed, 43 insertions(+), 3 deletions(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 8c80d22864d8..d0753592a75b 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1699,6 +1699,39 @@ static void x86_pmu_del(struct perf_event *event, int flags)
 	static_call_cond(x86_pmu_del)(event);
 }
 
+void x86_pmu_setup_regs_data(struct perf_event *event,
+			     struct perf_sample_data *data,
+			     struct pt_regs *regs)
+{
+	struct perf_event_attr *attr = &event->attr;
+	u64 sample_type = attr->sample_type;
+
+	if (sample_type & PERF_SAMPLE_REGS_USER) {
+		if (user_mode(regs)) {
+			data->regs_user.abi = perf_reg_abi(current);
+			data->regs_user.regs = regs;
+		} else if (!(current->flags & PF_KTHREAD)) {
+			perf_get_regs_user(&data->regs_user, regs);
+		} else {
+			data->regs_user.abi = PERF_SAMPLE_REGS_ABI_NONE;
+			data->regs_user.regs = NULL;
+		}
+		data->dyn_size += sizeof(u64);
+		if (data->regs_user.regs)
+			data->dyn_size += hweight64(attr->sample_regs_user) * sizeof(u64);
+		data->sample_flags |= PERF_SAMPLE_REGS_USER;
+	}
+
+	if (sample_type & PERF_SAMPLE_REGS_INTR) {
+		data->regs_intr.regs = regs;
+		data->regs_intr.abi = perf_reg_abi(current);
+		data->dyn_size += sizeof(u64);
+		if (data->regs_intr.regs)
+			data->dyn_size += hweight64(attr->sample_regs_intr) * sizeof(u64);
+		data->sample_flags |= PERF_SAMPLE_REGS_INTR;
+	}
+}
+
 int x86_pmu_handle_irq(struct pt_regs *regs)
 {
 	struct perf_sample_data data;
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 87bf8672f5a8..07c2a670ba02 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -2445,6 +2445,7 @@ static inline void __setup_pebs_basic_group(struct perf_event *event,
 }
 
 static inline void __setup_pebs_gpr_group(struct perf_event *event,
+					  struct perf_sample_data *data,
 					  struct pt_regs *regs,
 					  struct pebs_gprs *gprs,
 					  u64 sample_type)
@@ -2454,8 +2455,10 @@ static inline void __setup_pebs_gpr_group(struct perf_event *event,
 		regs->flags &= ~PERF_EFLAGS_EXACT;
 	}
 
-	if (sample_type & (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER))
+	if (sample_type & (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER)) {
 		adaptive_pebs_save_regs(regs, gprs);
+		x86_pmu_setup_regs_data(event, data, regs);
+	}
 }
 
 static inline void __setup_pebs_meminfo_group(struct perf_event *event,
@@ -2548,7 +2551,7 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event,
 		gprs = next_record;
 		next_record = gprs + 1;
 
-		__setup_pebs_gpr_group(event, regs, gprs, sample_type);
+		__setup_pebs_gpr_group(event, data, regs, gprs, sample_type);
 	}
 
 	if (format_group & PEBS_DATACFG_MEMINFO) {
@@ -2672,7 +2675,7 @@ static void setup_arch_pebs_sample_data(struct perf_event *event,
 		gprs = next_record;
 		next_record = gprs + 1;
 
-		__setup_pebs_gpr_group(event, regs,
+		__setup_pebs_gpr_group(event, data, regs,
 				       (struct pebs_gprs *)gprs,
 				       sample_type);
 	}
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index cd337f3ffd01..d9ebea3ebee5 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -1306,6 +1306,10 @@ void x86_pmu_enable_event(struct perf_event *event);
 
 int x86_pmu_handle_irq(struct pt_regs *regs);
 
+void x86_pmu_setup_regs_data(struct perf_event *event,
+			     struct perf_sample_data *data,
+			     struct pt_regs *regs);
+
 void x86_pmu_show_pmu_cap(struct pmu *pmu);
 
 static inline int x86_pmu_num_counters(struct pmu *pmu)
-- 
2.34.1

From nobody Tue Feb 10 14:26:01 2026
From: Dapeng Mi
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim,
 Thomas Gleixner, Dave Hansen, Ian Rogers, Adrian Hunter, Jiri Olsa,
 Alexander Shishkin, Andi Kleen, Eranian Stephane
Cc: Mark Rutland, broonie@kernel.org, Ravi Bangoria,
 linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
 Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao, Kan Liang, Dapeng Mi
Subject: [Patch v6 07/22] x86/fpu/xstate: Add xsaves_nmi() helper
Date: Mon, 9 Feb 2026 15:20:32 +0800
Message-Id: <20260209072047.2180332-8-dapeng1.mi@linux.intel.com>
In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>
References: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>

From: Kan Liang

Add xsaves_nmi() to save the supported xsave states in an NMI handler.
This function is similar to xsaves(), but should only be called within
an NMI handler. It returns the actual register contents at the moment
the NMI occurred.

Currently the perf subsystem is the sole user of this helper.
It uses this function to snapshot SIMD (XMM/YMM/ZMM) and APX eGPR
registers, which will be added in subsequent patches.

Suggested-by: Dave Hansen
Signed-off-by: Kan Liang
Signed-off-by: Dapeng Mi
---
 arch/x86/include/asm/fpu/xstate.h |  1 +
 arch/x86/kernel/fpu/xstate.c      | 23 +++++++++++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/arch/x86/include/asm/fpu/xstate.h b/arch/x86/include/asm/fpu/xstate.h
index 7a7dc9d56027..38fa8ff26559 100644
--- a/arch/x86/include/asm/fpu/xstate.h
+++ b/arch/x86/include/asm/fpu/xstate.h
@@ -110,6 +110,7 @@ int xfeature_size(int xfeature_nr);
 
 void xsaves(struct xregs_state *xsave, u64 mask);
 void xrstors(struct xregs_state *xsave, u64 mask);
+void xsaves_nmi(struct xregs_state *xsave, u64 mask);
 
 int xfd_enable_feature(u64 xfd_err);
 
diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index 48113c5193aa..33e9a4562943 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -1475,6 +1475,29 @@ void xrstors(struct xregs_state *xstate, u64 mask)
 	WARN_ON_ONCE(err);
 }
 
+/**
+ * xsaves_nmi - Save selected components to a kernel xstate buffer in NMI
+ * @xstate:	Pointer to the buffer
+ * @mask:	Feature mask to select the components to save
+ *
+ * This function is similar to xsaves(), but should only be called within
+ * a NMI handler. This function returns the actual register contents at
+ * the moment the NMI occurs.
+ *
+ * Currently, the perf subsystem is the sole user of this helper. It uses
+ * the function to snapshot SIMD (XMM/YMM/ZMM) and APX eGPRs registers.
+ */
+void xsaves_nmi(struct xregs_state *xstate, u64 mask)
+{
+	int err;
+
+	if (!in_nmi())
+		return;
+
+	XSTATE_OP(XSAVES, xstate, (u32)mask, (u32)(mask >> 32), err);
+	WARN_ON_ONCE(err);
+}
+
 #if IS_ENABLED(CONFIG_KVM)
 void fpstate_clear_xstate_component(struct fpstate *fpstate, unsigned int xfeature)
 {
-- 
2.34.1

From nobody Tue Feb 10 14:26:01 2026
From: Dapeng Mi
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim,
 Thomas Gleixner, Dave Hansen, Ian Rogers, Adrian Hunter, Jiri Olsa,
 Alexander Shishkin, Andi Kleen, Eranian Stephane
Cc: Mark Rutland, broonie@kernel.org, Ravi Bangoria,
 linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
 Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao, Dapeng Mi
Subject: [Patch v6 08/22] x86/fpu: Ensure TIF_NEED_FPU_LOAD is set after saving FPU state
Date: Mon, 9 Feb 2026 15:20:33 +0800
Message-Id: <20260209072047.2180332-9-dapeng1.mi@linux.intel.com>
In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>
References: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>

Ensure that the TIF_NEED_FPU_LOAD flag is always set after saving the
FPU state. This guarantees that the user space FPU state has been saved
whenever the TIF_NEED_FPU_LOAD flag is set.

A subsequent patch will verify whether the user space FPU state can be
retrieved from the saved task FPU state in the NMI context by checking
the TIF_NEED_FPU_LOAD flag.

Suggested-by: Peter Zijlstra
Suggested-by: Dave Hansen
Signed-off-by: Dapeng Mi
---
v6: new patch.

 arch/x86/include/asm/fpu/sched.h |  2 +-
 arch/x86/kernel/fpu/core.c       | 12 ++++++++----
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/fpu/sched.h b/arch/x86/include/asm/fpu/sched.h
index 89004f4ca208..2d57a7bf5406 100644
--- a/arch/x86/include/asm/fpu/sched.h
+++ b/arch/x86/include/asm/fpu/sched.h
@@ -36,8 +36,8 @@ static inline void switch_fpu(struct task_struct *old, int cpu)
 	    !(old->flags & (PF_KTHREAD | PF_USER_WORKER))) {
 		struct fpu *old_fpu = x86_task_fpu(old);
 
-		set_tsk_thread_flag(old, TIF_NEED_FPU_LOAD);
 		save_fpregs_to_fpstate(old_fpu);
+		set_tsk_thread_flag(old, TIF_NEED_FPU_LOAD);
 		/*
 		 * The save operation preserved register state, so the
 		 * fpu_fpregs_owner_ctx is still @old_fpu. Store the
diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
index da233f20ae6f..0f91a0d7e799 100644
--- a/arch/x86/kernel/fpu/core.c
+++ b/arch/x86/kernel/fpu/core.c
@@ -359,18 +359,22 @@ int fpu_swap_kvm_fpstate(struct fpu_guest *guest_fpu, bool enter_guest)
 	struct fpstate *cur_fps = fpu->fpstate;
 
 	fpregs_lock();
-	if (!cur_fps->is_confidential && !test_thread_flag(TIF_NEED_FPU_LOAD))
+	if (!cur_fps->is_confidential && !test_thread_flag(TIF_NEED_FPU_LOAD)) {
 		save_fpregs_to_fpstate(fpu);
+		set_thread_flag(TIF_NEED_FPU_LOAD);
+	}
 
 	/* Swap fpstate */
 	if (enter_guest) {
-		fpu->__task_fpstate = cur_fps;
+		WRITE_ONCE(fpu->__task_fpstate, cur_fps);
+		barrier();
 		fpu->fpstate = guest_fps;
 		guest_fps->in_use = true;
 	} else {
 		guest_fps->in_use = false;
 		fpu->fpstate = fpu->__task_fpstate;
-		fpu->__task_fpstate = NULL;
+		barrier();
+		WRITE_ONCE(fpu->__task_fpstate, NULL);
 	}
 
 	cur_fps = fpu->fpstate;
@@ -456,8 +460,8 @@ void kernel_fpu_begin_mask(unsigned int kfpu_mask)
 
 	if (!(current->flags & (PF_KTHREAD | PF_USER_WORKER)) &&
 	    !test_thread_flag(TIF_NEED_FPU_LOAD)) {
-		set_thread_flag(TIF_NEED_FPU_LOAD);
 		save_fpregs_to_fpstate(x86_task_fpu(current));
+		set_thread_flag(TIF_NEED_FPU_LOAD);
 	}
 	__cpu_invalidate_fpregs_state();
 
-- 
2.34.1

From nobody Tue Feb 10 14:26:01 2026
From: Dapeng Mi
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim,
 Thomas Gleixner, Dave Hansen, Ian Rogers, Adrian Hunter, Jiri Olsa,
 Alexander Shishkin, Andi Kleen, Eranian Stephane
Cc: Mark Rutland, broonie@kernel.org, Ravi Bangoria,
 linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
 Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao, Kan Liang, Dapeng Mi
Subject: [Patch v6 09/22] perf: Move and rename has_extended_regs() for ARCH-specific use
Date: Mon, 9 Feb 2026 15:20:34 +0800
Message-Id: <20260209072047.2180332-10-dapeng1.mi@linux.intel.com>
In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>
References: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>

From: Kan Liang

The has_extended_regs() function will be utilized in arch-specific
code. To facilitate this, move it to the header file perf_event.h.
Additionally, rename the function to event_has_extended_regs(), which
aligns with the existing naming conventions.

No functional change intended.
Signed-off-by: Kan Liang
Signed-off-by: Dapeng Mi
---
 include/linux/perf_event.h | 8 ++++++++
 kernel/events/core.c       | 8 +-------
 2 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 82e617fad165..b8a0f77412b3 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1534,6 +1534,14 @@ perf_event__output_id_sample(struct perf_event *event,
 
 extern void perf_log_lost_samples(struct perf_event *event, u64 lost);
 
+static inline bool event_has_extended_regs(struct perf_event *event)
+{
+	struct perf_event_attr *attr = &event->attr;
+
+	return (attr->sample_regs_user & PERF_REG_EXTENDED_MASK) ||
+	       (attr->sample_regs_intr & PERF_REG_EXTENDED_MASK);
+}
+
 static inline bool event_has_any_exclude_flag(struct perf_event *event)
 {
 	struct perf_event_attr *attr = &event->attr;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 8410b1a7ef3b..d487c55a4f3e 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -12964,12 +12964,6 @@ int perf_pmu_unregister(struct pmu *pmu)
 }
 EXPORT_SYMBOL_GPL(perf_pmu_unregister);
 
-static inline bool has_extended_regs(struct perf_event *event)
-{
-	return (event->attr.sample_regs_user & PERF_REG_EXTENDED_MASK) ||
-	       (event->attr.sample_regs_intr & PERF_REG_EXTENDED_MASK);
-}
-
 static int perf_try_init_event(struct pmu *pmu, struct perf_event *event)
 {
 	struct perf_event_context *ctx = NULL;
@@ -13004,7 +12998,7 @@ static int perf_try_init_event(struct pmu *pmu, struct perf_event *event)
 		goto err_pmu;
 
 	if (!(pmu->capabilities & PERF_PMU_CAP_EXTENDED_REGS) &&
-	    has_extended_regs(event)) {
+	    event_has_extended_regs(event)) {
 		ret = -EOPNOTSUPP;
 		goto err_destroy;
 	}
-- 
2.34.1

From nobody Tue Feb 10 14:26:01 2026
From: Dapeng Mi
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim,
 Thomas Gleixner, Dave Hansen, Ian Rogers, Adrian Hunter, Jiri Olsa,
 Alexander Shishkin, Andi Kleen, Eranian Stephane
Cc: Mark Rutland, broonie@kernel.org, Ravi Bangoria,
 linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
 Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao, Dapeng Mi, Kan Liang
Subject: [Patch v6 10/22] perf/x86: Enable XMM Register Sampling for Non-PEBS Events
Date: Mon, 9 Feb 2026 15:20:35 +0800
Message-Id: <20260209072047.2180332-11-dapeng1.mi@linux.intel.com>
In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>
References: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>

Previously, XMM register sampling was only available for PEBS events,
starting from Icelake. This patch extends support to non-PEBS events by
utilizing the `xsaves` instruction, thereby completing the feature set.
To implement this, a 64-byte aligned buffer is required. A per-CPU `ext_regs_buf` is introduced to store SIMD and other registers, with an approximate size of 2K. The buffer is allocated with `kzalloc_node()`, relying on `kmalloc()`'s natural alignment for power-of-2 sizes to provide the required 64-byte alignment.

This patch supports XMM sampling for non-PEBS events in the `REGS_INTR` case. Support for `REGS_USER` will be added in a subsequent patch. For PEBS events, XMM register sampling data is directly retrieved from PEBS records.

Future support for additional vector registers (YMM/ZMM/OPMASK) is planned. An `ext_regs_mask` is added to track the supported vector register groups.

Co-developed-by: Kan Liang
Signed-off-by: Kan Liang
Signed-off-by: Dapeng Mi
---
V6: Function name refinements.

 arch/x86/events/core.c            | 147 +++++++++++++++++++++++++++---
 arch/x86/events/intel/core.c      |  29 +++++-
 arch/x86/events/intel/ds.c        |  20 ++--
 arch/x86/events/perf_event.h      |  11 ++-
 arch/x86/include/asm/fpu/xstate.h |   2 +
 arch/x86/include/asm/perf_event.h |   5 +-
 arch/x86/kernel/fpu/xstate.c      |   2 +-
 7 files changed, 191 insertions(+), 25 deletions(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index d0753592a75b..3c0987e13edc 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -410,6 +410,45 @@ set_ext_hw_attr(struct hw_perf_event *hwc, struct perf_event *event)
 	return x86_pmu_extra_regs(val, event);
 }
 
+static DEFINE_PER_CPU(struct xregs_state *, ext_regs_buf);
+
+static void release_ext_regs_buffers(void)
+{
+	int cpu;
+
+	if (!x86_pmu.ext_regs_mask)
+		return;
+
+	for_each_possible_cpu(cpu) {
+		kfree(per_cpu(ext_regs_buf, cpu));
+		per_cpu(ext_regs_buf, cpu) = NULL;
+	}
+}
+
+static void reserve_ext_regs_buffers(void)
+{
+	bool compacted = cpu_feature_enabled(X86_FEATURE_XCOMPACTED);
+	unsigned int size;
+	int cpu;
+
+	if (!x86_pmu.ext_regs_mask)
+		return;
+
+	size = xstate_calculate_size(x86_pmu.ext_regs_mask, compacted);
+
+	for_each_possible_cpu(cpu) {
+		per_cpu(ext_regs_buf, cpu) = kzalloc_node(size, GFP_KERNEL,
+							  cpu_to_node(cpu));
+		if (!per_cpu(ext_regs_buf, cpu))
+			goto err;
+	}
+
+	return;
+
+err:
+	release_ext_regs_buffers();
+}
+
 int x86_reserve_hardware(void)
 {
 	int err = 0;
@@ -422,6 +461,7 @@ int x86_reserve_hardware(void)
 		} else {
 			reserve_ds_buffers();
 			reserve_lbr_buffers();
+			reserve_ext_regs_buffers();
 		}
 	}
 	if (!err)
@@ -438,6 +478,7 @@ void x86_release_hardware(void)
 		release_pmc_hardware();
 		release_ds_buffers();
 		release_lbr_buffers();
+		release_ext_regs_buffers();
 		mutex_unlock(&pmc_reserve_mutex);
 	}
 }
@@ -655,18 +696,23 @@ int x86_pmu_hw_config(struct perf_event *event)
 			return -EINVAL;
 	}
 
-	/* sample_regs_user never support XMM registers */
-	if (unlikely(event->attr.sample_regs_user & PERF_REG_EXTENDED_MASK))
-		return -EINVAL;
-	/*
-	 * Besides the general purpose registers, XMM registers may
-	 * be collected in PEBS on some platforms, e.g. Icelake
-	 */
-	if (unlikely(event->attr.sample_regs_intr & PERF_REG_EXTENDED_MASK)) {
-		if (!(event->pmu->capabilities & PERF_PMU_CAP_EXTENDED_REGS))
-			return -EINVAL;
+	if (event->attr.sample_type & PERF_SAMPLE_REGS_INTR) {
+		/*
+		 * Besides the general purpose registers, XMM registers may
+		 * be collected as well.
+		 */
+		if (event_has_extended_regs(event)) {
+			if (!(event->pmu->capabilities & PERF_PMU_CAP_EXTENDED_REGS))
+				return -EINVAL;
+		}
+	}
 
-		if (!event->attr.precise_ip)
+	if (event->attr.sample_type & PERF_SAMPLE_REGS_USER) {
+		/*
+		 * Currently XMM registers sampling for REGS_USER is not
+		 * supported yet.
+		 */
+		if (event_has_extended_regs(event))
 			return -EINVAL;
 	}
 
@@ -1699,9 +1745,9 @@ static void x86_pmu_del(struct perf_event *event, int flags)
 	static_call_cond(x86_pmu_del)(event);
 }
 
-void x86_pmu_setup_regs_data(struct perf_event *event,
-			     struct perf_sample_data *data,
-			     struct pt_regs *regs)
+static void x86_pmu_setup_basic_regs_data(struct perf_event *event,
+					  struct perf_sample_data *data,
+					  struct pt_regs *regs)
 {
 	struct perf_event_attr *attr = &event->attr;
 	u64 sample_type = attr->sample_type;
@@ -1732,6 +1778,79 @@ void x86_pmu_setup_regs_data(struct perf_event *event,
 	}
 }
 
+inline void x86_pmu_clear_perf_regs(struct pt_regs *regs)
+{
+	struct x86_perf_regs *perf_regs = container_of(regs, struct x86_perf_regs, regs);
+
+	perf_regs->xmm_regs = NULL;
+}
+
+static inline void __x86_pmu_sample_ext_regs(u64 mask)
+{
+	struct xregs_state *xsave = per_cpu(ext_regs_buf, smp_processor_id());
+
+	if (WARN_ON_ONCE(!xsave))
+		return;
+
+	xsaves_nmi(xsave, mask);
+}
+
+static inline void x86_pmu_update_ext_regs(struct x86_perf_regs *perf_regs,
+					   struct xregs_state *xsave, u64 bitmap)
+{
+	u64 mask;
+
+	if (!xsave)
+		return;
+
+	/* Filtered by what XSAVE really gives */
+	mask = bitmap & xsave->header.xfeatures;
+
+	if (mask & XFEATURE_MASK_SSE)
+		perf_regs->xmm_space = xsave->i387.xmm_space;
+}
+
+static void x86_pmu_sample_extended_regs(struct perf_event *event,
+					 struct perf_sample_data *data,
+					 struct pt_regs *regs,
+					 u64 ignore_mask)
+{
+	u64 sample_type = event->attr.sample_type;
+	struct x86_perf_regs *perf_regs;
+	struct xregs_state *xsave;
+	u64 intr_mask = 0;
+	u64 mask = 0;
+
+	perf_regs = container_of(regs, struct x86_perf_regs, regs);
+
+	if (event_has_extended_regs(event))
+		mask |= XFEATURE_MASK_SSE;
+
+	mask &= x86_pmu.ext_regs_mask;
+
+	if (sample_type & PERF_SAMPLE_REGS_INTR)
+		intr_mask = mask & ~ignore_mask;
+
+	if (intr_mask) {
+		__x86_pmu_sample_ext_regs(intr_mask);
+		xsave = per_cpu(ext_regs_buf, smp_processor_id());
+		x86_pmu_update_ext_regs(perf_regs, xsave, intr_mask);
+	}
+}
+
+void x86_pmu_setup_regs_data(struct perf_event *event,
+			     struct perf_sample_data *data,
+			     struct pt_regs *regs,
+			     u64 ignore_mask)
+{
+	x86_pmu_setup_basic_regs_data(event, data, regs);
+	/*
+	 * ignore_mask indicates the PEBS sampled extended regs
+	 * which may be unnecessary to sample again.
+	 */
+	x86_pmu_sample_extended_regs(event, data, regs, ignore_mask);
+}
+
 int x86_pmu_handle_irq(struct pt_regs *regs)
 {
 	struct perf_sample_data data;
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 5ed26b83c61d..ae7693e586d3 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3651,6 +3651,9 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
 		if (has_branch_stack(event))
 			intel_pmu_lbr_save_brstack(&data, cpuc, event);
 
+		x86_pmu_clear_perf_regs(regs);
+		x86_pmu_setup_regs_data(event, &data, regs, 0);
+
 		perf_event_overflow(event, &data, regs);
 	}
 
@@ -5880,8 +5883,30 @@ static inline void __intel_update_large_pebs_flags(struct pmu *pmu)
 	}
 }
 
-#define counter_mask(_gp, _fixed) ((_gp) | ((u64)(_fixed) << INTEL_PMC_IDX_FIXED))
+static void intel_extended_regs_init(struct pmu *pmu)
+{
+	/*
+	 * Extend the vector registers support to non-PEBS.
+	 * The feature is limited to newer Intel machines with
+	 * PEBS V4+ or archPerfmonExt (0x23) enabled for now.
+	 * In theory, the vector registers can be retrieved as
+	 * long as the CPU supports. The support for the old
+	 * generations may be added later if there is a
+	 * requirement.
+	 * Only support the extension when XSAVES is available.
+	 */
+	if (!boot_cpu_has(X86_FEATURE_XSAVES))
+		return;
 
+	if (!boot_cpu_has(X86_FEATURE_XMM) ||
+	    !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
+		return;
+
+	x86_pmu.ext_regs_mask |= XFEATURE_MASK_SSE;
+	x86_get_pmu(smp_processor_id())->capabilities |= PERF_PMU_CAP_EXTENDED_REGS;
+}
+
+#define counter_mask(_gp, _fixed) ((_gp) | ((u64)(_fixed) << INTEL_PMC_IDX_FIXED))
 static void update_pmu_cap(struct pmu *pmu)
 {
 	unsigned int eax, ebx, ecx, edx;
@@ -5945,6 +5970,8 @@ static void update_pmu_cap(struct pmu *pmu)
 		/* Perf Metric (Bit 15) and PEBS via PT (Bit 16) are hybrid enumeration */
 		rdmsrq(MSR_IA32_PERF_CAPABILITIES, hybrid(pmu, intel_cap).capabilities);
 	}
+
+	intel_extended_regs_init(pmu);
 }
 
 static void intel_pmu_check_hybrid_pmus(struct x86_hybrid_pmu *pmu)
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 07c2a670ba02..229dbe368b65 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1735,8 +1735,7 @@ static u64 pebs_update_adaptive_cfg(struct perf_event *event)
 	if (gprs || (attr->precise_ip < 2) || tsx_weight)
 		pebs_data_cfg |= PEBS_DATACFG_GP;
 
-	if ((sample_type & PERF_SAMPLE_REGS_INTR) &&
-	    (attr->sample_regs_intr & PERF_REG_EXTENDED_MASK))
+	if (event_has_extended_regs(event))
 		pebs_data_cfg |= PEBS_DATACFG_XMMS;
 
 	if (sample_type & PERF_SAMPLE_BRANCH_STACK) {
@@ -2455,10 +2454,8 @@ static inline void __setup_pebs_gpr_group(struct perf_event *event,
 		regs->flags &= ~PERF_EFLAGS_EXACT;
 	}
 
-	if (sample_type & (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER)) {
+	if (sample_type & (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER))
 		adaptive_pebs_save_regs(regs, gprs);
-		x86_pmu_setup_regs_data(event, data, regs);
-	}
 }
 
 static inline void __setup_pebs_meminfo_group(struct perf_event *event,
@@ -2516,6 +2513,7 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event,
 	struct pebs_meminfo *meminfo = NULL;
 	struct pebs_gprs *gprs = NULL;
 	struct x86_perf_regs *perf_regs;
+	u64 ignore_mask = 0;
 	u64 format_group;
 	u16 retire;
 
@@ -2523,7 +2521,7 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event,
 		return;
 
 	perf_regs = container_of(regs, struct x86_perf_regs, regs);
-	perf_regs->xmm_regs = NULL;
+	x86_pmu_clear_perf_regs(regs);
 
 	format_group = basic->format_group;
 
@@ -2570,6 +2568,7 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event,
 	if (format_group & PEBS_DATACFG_XMMS) {
 		struct pebs_xmm *xmm = next_record;
 
+		ignore_mask |= XFEATURE_MASK_SSE;
 		next_record = xmm + 1;
 		perf_regs->xmm_regs = xmm->xmm;
 	}
@@ -2608,6 +2607,8 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event,
 		next_record += nr * sizeof(u64);
 	}
 
+	x86_pmu_setup_regs_data(event, data, regs, ignore_mask);
+
 	WARN_ONCE(next_record != __pebs + basic->format_size,
 		  "PEBS record size %u, expected %llu, config %llx\n",
 		  basic->format_size,
@@ -2633,6 +2634,7 @@ static void setup_arch_pebs_sample_data(struct perf_event *event,
 	struct arch_pebs_aux *meminfo = NULL;
 	struct arch_pebs_gprs *gprs = NULL;
 	struct x86_perf_regs *perf_regs;
+	u64 ignore_mask = 0;
 	void *next_record;
 	void *at = __pebs;
 
@@ -2640,7 +2642,7 @@ static void setup_arch_pebs_sample_data(struct perf_event *event,
 		return;
 
 	perf_regs = container_of(regs, struct x86_perf_regs, regs);
-	perf_regs->xmm_regs = NULL;
+	x86_pmu_clear_perf_regs(regs);
 
 	__setup_perf_sample_data(event, iregs, data);
 
@@ -2695,6 +2697,7 @@ static void setup_arch_pebs_sample_data(struct perf_event *event,
 
 		next_record += sizeof(struct arch_pebs_xer_header);
 
+		ignore_mask |= XFEATURE_MASK_SSE;
 		xmm = next_record;
 		perf_regs->xmm_regs = xmm->xmm;
 		next_record = xmm + 1;
@@ -2742,6 +2745,8 @@ static void setup_arch_pebs_sample_data(struct perf_event *event,
 		at = at + header->size;
 		goto again;
 	}
+
+	x86_pmu_setup_regs_data(event, data, regs, ignore_mask);
 }
 
 static inline void *
@@ -3404,6 +3409,7 @@ static void __init intel_ds_pebs_init(void)
 			x86_pmu.flags |= PMU_FL_PEBS_ALL;
 			x86_pmu.pebs_capable = ~0ULL;
 			pebs_qual = "-baseline";
+			x86_pmu.ext_regs_mask |= XFEATURE_MASK_SSE;
 			x86_get_pmu(smp_processor_id())->capabilities |= PERF_PMU_CAP_EXTENDED_REGS;
 		} else {
 			/* Only basic record supported */
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index d9ebea3ebee5..a32ee4f0c891 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -1020,6 +1020,12 @@ struct x86_pmu {
 	struct extra_reg *extra_regs;
 	unsigned int flags;
 
+	/*
+	 * Extended regs, e.g., vector registers
+	 * Utilize the same format as the XFEATURE_MASK_*
+	 */
+	u64 ext_regs_mask;
+
 	/*
 	 * Intel host/guest support (KVM)
 	 */
@@ -1306,9 +1312,12 @@ void x86_pmu_enable_event(struct perf_event *event);
 
 int x86_pmu_handle_irq(struct pt_regs *regs);
 
+void x86_pmu_clear_perf_regs(struct pt_regs *regs);
+
 void x86_pmu_setup_regs_data(struct perf_event *event,
 			     struct perf_sample_data *data,
-			     struct pt_regs *regs);
+			     struct pt_regs *regs,
+			     u64 ignore_mask);
 
 void x86_pmu_show_pmu_cap(struct pmu *pmu);
 
diff --git a/arch/x86/include/asm/fpu/xstate.h b/arch/x86/include/asm/fpu/xstate.h
index 38fa8ff26559..19dec5f0b1c7 100644
--- a/arch/x86/include/asm/fpu/xstate.h
+++ b/arch/x86/include/asm/fpu/xstate.h
@@ -112,6 +112,8 @@ void xsaves(struct xregs_state *xsave, u64 mask);
 void xrstors(struct xregs_state *xsave, u64 mask);
 void xsaves_nmi(struct xregs_state *xsave, u64 mask);
 
+unsigned int xstate_calculate_size(u64 xfeatures, bool compacted);
+
 int xfd_enable_feature(u64 xfd_err);
 
 #ifdef CONFIG_X86_64
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index ff5acb8b199b..7baa1b0f889f 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -709,7 +709,10 @@ extern void perf_events_lapic_init(void);
 struct pt_regs;
 struct x86_perf_regs {
 	struct pt_regs	regs;
-	u64		*xmm_regs;
+	union {
+		u64	*xmm_regs;
+		u32	*xmm_space;	/* for xsaves */
+	};
 };
 
 extern unsigned long perf_arch_instruction_pointer(struct pt_regs *regs);
diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index 33e9a4562943..7a98769d7ea0 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -587,7 +587,7 @@ static bool __init check_xstate_against_struct(int nr)
 	return true;
 }
 
-static unsigned int xstate_calculate_size(u64 xfeatures, bool compacted)
+unsigned int xstate_calculate_size(u64 xfeatures, bool compacted)
 {
 	unsigned int topmost = fls64(xfeatures) - 1;
 	unsigned int offset, i;
-- 
2.34.1

From nobody Tue Feb 10 14:26:01 2026
From: Dapeng Mi
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers, Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen, Eranian Stephane
Cc: Mark Rutland, broonie@kernel.org, Ravi Bangoria, linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao, Kan Liang
Subject: [Patch v6 11/22] perf/x86: Enable XMM register sampling for REGS_USER case
Date: Mon, 9 Feb 2026 15:20:36 +0800
Message-Id: <20260209072047.2180332-12-dapeng1.mi@linux.intel.com>
In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>

This patch adds support for XMM register sampling in the REGS_USER case.

To handle simultaneous sampling of XMM registers for both REGS_INTR and REGS_USER cases, a per-CPU `x86_user_regs` is introduced to store REGS_USER-specific XMM registers. This prevents REGS_USER-specific XMM register data from being overwritten by REGS_INTR-specific data if they share the same `x86_perf_regs` structure.

To sample user-space XMM registers, the `x86_pmu_update_user_ext_regs()` helper function is added. It checks if the `TIF_NEED_FPU_LOAD` flag is set. If so, the user-space XMM register data can be directly retrieved from the cached task FPU state, as the corresponding hardware registers have been cleared or switched to kernel-space data. Otherwise, the data must be read from the hardware registers using the `xsaves` instruction.

For PEBS events, `x86_pmu_update_user_ext_regs()` checks if the PEBS-sampled XMM register data belongs to user-space. If so, no further action is needed. Otherwise, the user-space XMM register data needs to be re-sampled using the same method as for non-PEBS events.

Co-developed-by: Kan Liang
Signed-off-by: Kan Liang
Signed-off-by: Dapeng Mi
---
V6: New patch, partly split from the previous patch. Fully support user-regs sampling for SIMD registers as Peter suggested.
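[Editor's note: the source-selection logic described above can be modeled as a small, self-contained sketch. The enum values and function below are hypothetical illustrations, not kernel APIs; they only encode the decision order the commit message describes.]

```c
#include <stdbool.h>

/* Hypothetical model of where user-space XMM data comes from:
 * - no user regs ABI        -> nothing to sample
 * - PEBS already sampled it -> nothing more to do
 * - TIF_NEED_FPU_LOAD set   -> read the cached task FPU state
 * - otherwise               -> read live hardware state via XSAVES */
enum xmm_source {
	SRC_NONE,
	SRC_CACHED_FPSTATE,
	SRC_HW_XSAVES,
};

static enum xmm_source pick_user_xmm_source(bool abi_none,
					    bool pebs_sampled_user,
					    bool need_fpu_load)
{
	if (abi_none)
		return SRC_NONE;
	if (pebs_sampled_user)
		return SRC_NONE;
	if (need_fpu_load)
		return SRC_CACHED_FPSTATE;
	return SRC_HW_XSAVES;
}
```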
 arch/x86/events/core.c | 99 ++++++++++++++++++++++++++++++++++++------
 1 file changed, 85 insertions(+), 14 deletions(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 3c0987e13edc..36b4bc413938 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -696,7 +696,7 @@ int x86_pmu_hw_config(struct perf_event *event)
 			return -EINVAL;
 	}
 
-	if (event->attr.sample_type & PERF_SAMPLE_REGS_INTR) {
+	if (event->attr.sample_type & (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER)) {
 		/*
 		 * Besides the general purpose registers, XMM registers may
 		 * be collected as well.
@@ -707,15 +707,6 @@ int x86_pmu_hw_config(struct perf_event *event)
 		}
 	}
 
-	if (event->attr.sample_type & PERF_SAMPLE_REGS_USER) {
-		/*
-		 * Currently XMM registers sampling for REGS_USER is not
-		 * supported yet.
-		 */
-		if (event_has_extended_regs(event))
-			return -EINVAL;
-	}
-
 	return x86_setup_perfctr(event);
 }
 
@@ -1745,6 +1736,28 @@ static void x86_pmu_del(struct perf_event *event, int flags)
 	static_call_cond(x86_pmu_del)(event);
 }
 
+/*
+ * When both PERF_SAMPLE_REGS_INTR and PERF_SAMPLE_REGS_USER are set,
+ * an additional x86_perf_regs is required to save user-space registers.
+ * Without this, user-space register data may be overwritten by kernel-space
+ * registers.
+ */
+static DEFINE_PER_CPU(struct x86_perf_regs, x86_user_regs);
+static void x86_pmu_perf_get_regs_user(struct perf_sample_data *data,
+				       struct pt_regs *regs)
+{
+	struct x86_perf_regs *x86_regs_user = this_cpu_ptr(&x86_user_regs);
+	struct perf_regs regs_user;
+
+	perf_get_regs_user(&regs_user, regs);
+	data->regs_user.abi = regs_user.abi;
+	if (regs_user.regs) {
+		x86_regs_user->regs = *regs_user.regs;
+		data->regs_user.regs = &x86_regs_user->regs;
+	} else
+		data->regs_user.regs = NULL;
+}
+
 static void x86_pmu_setup_basic_regs_data(struct perf_event *event,
 					  struct perf_sample_data *data,
 					  struct pt_regs *regs)
@@ -1757,7 +1770,14 @@ static void x86_pmu_setup_basic_regs_data(struct perf_event *event,
 			data->regs_user.abi = perf_reg_abi(current);
 			data->regs_user.regs = regs;
 		} else if (!(current->flags & PF_KTHREAD)) {
-			perf_get_regs_user(&data->regs_user, regs);
+			/*
+			 * It cannot guarantee that the kernel will never
+			 * touch the registers outside of the pt_regs,
+			 * especially when more and more registers
+			 * (e.g., SIMD, eGPR) are added. The live data
+			 * cannot be used.
+			 */
+			x86_pmu_perf_get_regs_user(data, regs);
 		} else {
 			data->regs_user.abi = PERF_SAMPLE_REGS_ABI_NONE;
 			data->regs_user.regs = NULL;
@@ -1810,6 +1830,47 @@ static inline void x86_pmu_update_ext_regs(struct x86_perf_regs *perf_regs,
 		perf_regs->xmm_space = xsave->i387.xmm_space;
 }
 
+/*
+ * This function retrieves cached user-space fpu registers (XMM/YMM/ZMM).
+ * If TIF_NEED_FPU_LOAD is set, it indicates that the user-space FPU state
+ * is saved in the task's FPU context and can be read from there.
+ * Otherwise, the data should be read directly from the hardware registers.
+ */
+static inline u64 x86_pmu_update_user_ext_regs(struct perf_sample_data *data,
+					       struct pt_regs *regs,
+					       u64 mask, u64 ignore_mask)
+{
+	struct x86_perf_regs *perf_regs;
+	struct xregs_state *xsave;
+	struct fpu *fpu;
+	struct fpstate *fps;
+	u64 sample_mask = 0;
+
+	if (data->regs_user.abi == PERF_SAMPLE_REGS_ABI_NONE)
+		return 0;
+
+	if (user_mode(regs))
+		sample_mask = mask & ~ignore_mask;
+
+	if (test_thread_flag(TIF_NEED_FPU_LOAD)) {
+		perf_regs = container_of(data->regs_user.regs,
+					 struct x86_perf_regs, regs);
+		fpu = x86_task_fpu(current);
+		/*
+		 * If __task_fpstate is set, it holds the right pointer,
+		 * otherwise fpstate will.
+		 */
+		fps = READ_ONCE(fpu->__task_fpstate);
+		if (!fps)
+			fps = fpu->fpstate;
+		xsave = &fps->regs.xsave;
+
+		x86_pmu_update_ext_regs(perf_regs, xsave, mask);
+		sample_mask = 0;
+	}
+
+	return sample_mask;
+}
+
 static void x86_pmu_sample_extended_regs(struct perf_event *event,
 					 struct perf_sample_data *data,
 					 struct pt_regs *regs,
@@ -1818,6 +1879,7 @@ static void x86_pmu_sample_extended_regs(struct perf_event *event,
 	u64 sample_type = event->attr.sample_type;
 	struct x86_perf_regs *perf_regs;
 	struct xregs_state *xsave;
+	u64 user_mask = 0;
 	u64 intr_mask = 0;
 	u64 mask = 0;
 
@@ -1827,15 +1889,24 @@ static void x86_pmu_sample_extended_regs(struct perf_event *event,
 		mask |= XFEATURE_MASK_SSE;
 
 	mask &= x86_pmu.ext_regs_mask;
+	if (sample_type & PERF_SAMPLE_REGS_USER) {
+		user_mask = x86_pmu_update_user_ext_regs(data, regs,
+							 mask, ignore_mask);
+	}
 
 	if (sample_type & PERF_SAMPLE_REGS_INTR)
 		intr_mask = mask & ~ignore_mask;
 
-	if (intr_mask) {
-		__x86_pmu_sample_ext_regs(intr_mask);
+	if (user_mask | intr_mask) {
+		__x86_pmu_sample_ext_regs(user_mask | intr_mask);
 		xsave = per_cpu(ext_regs_buf, smp_processor_id());
-		x86_pmu_update_ext_regs(perf_regs, xsave, intr_mask);
 	}
+
+	if (user_mask)
+		x86_pmu_update_ext_regs(perf_regs, xsave, user_mask);
+
+	if (intr_mask)
+		x86_pmu_update_ext_regs(perf_regs, xsave, intr_mask);
 }
 
 void x86_pmu_setup_regs_data(struct perf_event *event,
-- 
2.34.1

From nobody Tue Feb 10 14:26:01 2026
From: Dapeng Mi
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers, Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen, Eranian Stephane
Cc: Mark Rutland, broonie@kernel.org, Ravi Bangoria, linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao, Kan Liang
Subject: [Patch v6 12/22] perf: Add sampling support for SIMD registers
Date: Mon, 9 Feb 2026 15:20:37 +0800
Message-Id: <20260209072047.2180332-13-dapeng1.mi@linux.intel.com>
In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>

From: Kan Liang

Users may be interested in sampling SIMD registers during profiling. The current sample_regs_* fields do not have sufficient space for all SIMD registers. To address this, new attribute fields sample_simd_{pred,vec}_reg_* are added to struct perf_event_attr to represent the SIMD registers that are expected to be sampled.

Currently, the perf/x86 code supports XMM registers in sample_regs_*. To unify the configuration of SIMD registers and ensure a consistent method for configuring XMM and other SIMD registers, a new event attribute field, sample_simd_regs_enabled, is introduced. When sample_simd_regs_enabled is set, it indicates that all SIMD registers, including XMM, will be represented by the newly introduced sample_simd_{pred|vec}_reg_* fields. The original XMM space in sample_regs_* is reserved for future uses.

Since SIMD registers are wider than 64 bits, a new output format is introduced. The number and width of SIMD registers are dumped first, followed by the register values. The number and width are based on the user's configuration. If they differ (e.g., on ARM), an ARCH-specific perf_output_sample_simd_regs function can be implemented separately.

A new ABI, PERF_SAMPLE_REGS_ABI_SIMD, is added to indicate the new format. The enum perf_sample_regs_abi is now a bitmap. This change should not impact existing tools, as the version and bitmap remain the same for values 1 and 2.

Additionally, two new __weak functions are introduced:
- perf_simd_reg_value(): Retrieves the value of the requested SIMD register.
- perf_simd_reg_validate(): Validates the configuration of the SIMD registers.

A new flag, PERF_PMU_CAP_SIMD_REGS, is added to indicate that the PMU supports SIMD register dumping. An error is generated if sample_simd_{pred|vec}_reg_* is mistakenly set for a PMU that does not support this capability.
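[Editor's note: the record size implied by the layout above can be sketched in userspace. This is an illustrative helper, not kernel code; the function name is hypothetical, and it assumes the documented layout of four u16 header fields followed by one u64 per qword of each selected register.]

```c
#include <stdint.h>

/* Size in bytes of a PERF_SAMPLE_REGS_ABI_SIMD payload:
 * u16 nr_vectors, u16 vector_qwords, u16 nr_pred, u16 pred_qwords,
 * then u64 data[nr_vectors * vector_qwords + nr_pred * pred_qwords]. */
static uint64_t simd_payload_bytes(uint64_t vec_mask, uint16_t vec_qwords,
				   uint32_t pred_mask, uint16_t pred_qwords)
{
	uint64_t nr_vectors = (uint64_t)__builtin_popcountll(vec_mask);
	uint64_t nr_pred = (uint64_t)__builtin_popcount(pred_mask);

	return 4 * sizeof(uint16_t) +
	       (nr_vectors * vec_qwords + nr_pred * pred_qwords) *
	       sizeof(uint64_t);
}
```

For example, 16 XMM registers at 2 qwords each and no predicate registers yield an 8-byte header plus 256 bytes of data.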
Suggested-by: Peter Zijlstra (Intel) Signed-off-by: Kan Liang Co-developed-by: Dapeng Mi Signed-off-by: Dapeng Mi --- V6: Adjust newly added fields in perf_event_attr to avoid memory holes include/linux/perf_event.h | 8 +++ include/linux/perf_regs.h | 4 ++ include/uapi/linux/perf_event.h | 45 ++++++++++++++-- kernel/events/core.c | 96 +++++++++++++++++++++++++++++++-- 4 files changed, 146 insertions(+), 7 deletions(-) diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h index b8a0f77412b3..172ba199d4ff 100644 --- a/include/linux/perf_event.h +++ b/include/linux/perf_event.h @@ -306,6 +306,7 @@ struct perf_event_pmu_context; #define PERF_PMU_CAP_AUX_PAUSE 0x0200 #define PERF_PMU_CAP_AUX_PREFER_LARGE 0x0400 #define PERF_PMU_CAP_MEDIATED_VPMU 0x0800 +#define PERF_PMU_CAP_SIMD_REGS 0x1000 =20 /** * pmu::scope @@ -1534,6 +1535,13 @@ perf_event__output_id_sample(struct perf_event *even= t, extern void perf_log_lost_samples(struct perf_event *event, u64 lost); =20 +static inline bool event_has_simd_regs(struct perf_event *event) +{ + struct perf_event_attr *attr =3D &event->attr; + + return attr->sample_simd_regs_enabled !=3D 0; +} + static inline bool event_has_extended_regs(struct perf_event *event) { struct perf_event_attr *attr =3D &event->attr; diff --git a/include/linux/perf_regs.h b/include/linux/perf_regs.h index 144bcc3ff19f..518f28c6a7d4 100644 --- a/include/linux/perf_regs.h +++ b/include/linux/perf_regs.h @@ -14,6 +14,10 @@ int perf_reg_validate(u64 mask); u64 perf_reg_abi(struct task_struct *task); void perf_get_regs_user(struct perf_regs *regs_user, struct pt_regs *regs); +int perf_simd_reg_validate(u16 vec_qwords, u64 vec_mask, + u16 pred_qwords, u32 pred_mask); +u64 perf_simd_reg_value(struct pt_regs *regs, int idx, + u16 qwords_idx, bool pred); =20 #ifdef CONFIG_HAVE_PERF_REGS #include diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_even= t.h index 533393ec94d0..b41ae1b82344 100644 --- 
a/include/uapi/linux/perf_event.h +++ b/include/uapi/linux/perf_event.h @@ -314,8 +314,9 @@ enum { */ enum perf_sample_regs_abi { PERF_SAMPLE_REGS_ABI_NONE =3D 0, - PERF_SAMPLE_REGS_ABI_32 =3D 1, - PERF_SAMPLE_REGS_ABI_64 =3D 2, + PERF_SAMPLE_REGS_ABI_32 =3D (1 << 0), + PERF_SAMPLE_REGS_ABI_64 =3D (1 << 1), + PERF_SAMPLE_REGS_ABI_SIMD =3D (1 << 2), }; =20 /* @@ -383,6 +384,7 @@ enum perf_event_read_format { #define PERF_ATTR_SIZE_VER7 128 /* Add: sig_data */ #define PERF_ATTR_SIZE_VER8 136 /* Add: config3 */ #define PERF_ATTR_SIZE_VER9 144 /* add: config4 */ +#define PERF_ATTR_SIZE_VER10 176 /* Add: sample_simd_{pred,vec}_reg_* */ =20 /* * 'struct perf_event_attr' contains various attributes that define @@ -547,6 +549,25 @@ struct perf_event_attr { =20 __u64 config3; /* extension of config2 */ __u64 config4; /* extension of config3 */ + + /* + * Defines set of SIMD registers to dump on samples. + * The sample_simd_regs_enabled !=3D0 implies the + * set of SIMD registers is used to config all SIMD registers. + * If !sample_simd_regs_enabled, sample_regs_XXX may be used to + * config some SIMD registers on X86. + */ + union { + __u16 sample_simd_regs_enabled; + __u16 sample_simd_pred_reg_qwords; + }; + __u16 sample_simd_vec_reg_qwords; + __u32 __reserved_4; + + __u32 sample_simd_pred_reg_intr; + __u32 sample_simd_pred_reg_user; + __u64 sample_simd_vec_reg_intr; + __u64 sample_simd_vec_reg_user; }; =20 /* @@ -1020,7 +1041,15 @@ enum perf_event_type { * } && PERF_SAMPLE_BRANCH_STACK * * { u64 abi; # enum perf_sample_regs_abi - * u64 regs[weight(mask)]; } && PERF_SAMPLE_REGS_USER + * u64 regs[weight(mask)]; + * struct { + * u16 nr_vectors; # 0 ... weight(sample_simd_vec_reg_user) + * u16 vector_qwords; # 0 ... sample_simd_vec_reg_qwords + * u16 nr_pred; # 0 ... weight(sample_simd_pred_reg_user) + * u16 pred_qwords; # 0 ... 
sample_simd_pred_reg_qwords + * u64 data[nr_vectors * vector_qwords + nr_pred * pred_qwords]; + * } && (abi & PERF_SAMPLE_REGS_ABI_SIMD) + * } && PERF_SAMPLE_REGS_USER * * { u64 size; * char data[size]; @@ -1047,7 +1076,15 @@ enum perf_event_type { * { u64 data_src; } && PERF_SAMPLE_DATA_SRC * { u64 transaction; } && PERF_SAMPLE_TRANSACTION * { u64 abi; # enum perf_sample_regs_abi - * u64 regs[weight(mask)]; } && PERF_SAMPLE_REGS_INTR + * u64 regs[weight(mask)]; + * struct { + * u16 nr_vectors; # 0 ... weight(sample_simd_vec_reg_intr) + * u16 vector_qwords; # 0 ... sample_simd_vec_reg_qwords + * u16 nr_pred; # 0 ... weight(sample_simd_pred_reg_intr) + * u16 pred_qwords; # 0 ... sample_simd_pred_reg_qwords + * u64 data[nr_vectors * vector_qwords + nr_pred * pred_qwords]; + * } && (abi & PERF_SAMPLE_REGS_ABI_SIMD) + * } && PERF_SAMPLE_REGS_INTR * { u64 phys_addr;} && PERF_SAMPLE_PHYS_ADDR * { u64 cgroup;} && PERF_SAMPLE_CGROUP * { u64 data_page_size;} && PERF_SAMPLE_DATA_PAGE_SIZE diff --git a/kernel/events/core.c b/kernel/events/core.c index d487c55a4f3e..5742126f50cc 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -7761,6 +7761,50 @@ perf_output_sample_regs(struct perf_output_handle *h= andle, } } =20 +static void +perf_output_sample_simd_regs(struct perf_output_handle *handle, + struct perf_event *event, + struct pt_regs *regs, + u64 mask, u32 pred_mask) +{ + u16 pred_qwords =3D event->attr.sample_simd_pred_reg_qwords; + u16 vec_qwords =3D event->attr.sample_simd_vec_reg_qwords; + u16 nr_vectors; + u16 nr_pred; + int bit; + u64 val; + u16 i; + + nr_vectors =3D hweight64(mask); + nr_pred =3D hweight32(pred_mask); + + perf_output_put(handle, nr_vectors); + perf_output_put(handle, vec_qwords); + perf_output_put(handle, nr_pred); + perf_output_put(handle, pred_qwords); + + if (nr_vectors) { + for (bit =3D 0; bit < sizeof(mask) * BITS_PER_BYTE; bit++) { + if (!(BIT_ULL(bit) & mask)) + continue; + for (i =3D 0; i < vec_qwords; i++) { + val =3D 
perf_simd_reg_value(regs, bit, i, false); + perf_output_put(handle, val); + } + } + } + if (nr_pred) { + for (bit =3D 0; bit < sizeof(pred_mask) * BITS_PER_BYTE; bit++) { + if (!(BIT(bit) & pred_mask)) + continue; + for (i =3D 0; i < pred_qwords; i++) { + val =3D perf_simd_reg_value(regs, bit, i, true); + perf_output_put(handle, val); + } + } + } +} + static void perf_sample_regs_user(struct perf_regs *regs_user, struct pt_regs *regs) { @@ -7782,6 +7826,17 @@ static void perf_sample_regs_intr(struct perf_regs *= regs_intr, regs_intr->abi =3D perf_reg_abi(current); } =20 +int __weak perf_simd_reg_validate(u16 vec_qwords, u64 vec_mask, + u16 pred_qwords, u32 pred_mask) +{ + return vec_qwords || vec_mask || pred_qwords || pred_mask ? -ENOSYS : 0; +} + +u64 __weak perf_simd_reg_value(struct pt_regs *regs, int idx, + u16 qwords_idx, bool pred) +{ + return 0; +} =20 /* * Get remaining task size from user stack pointer. @@ -8312,10 +8367,17 @@ void perf_output_sample(struct perf_output_handle *= handle, perf_output_put(handle, abi); =20 if (abi) { - u64 mask =3D event->attr.sample_regs_user; + struct perf_event_attr *attr =3D &event->attr; + u64 mask =3D attr->sample_regs_user; perf_output_sample_regs(handle, data->regs_user.regs, mask); + if (abi & PERF_SAMPLE_REGS_ABI_SIMD) { + perf_output_sample_simd_regs(handle, event, + data->regs_user.regs, + attr->sample_simd_vec_reg_user, + attr->sample_simd_pred_reg_user); + } } } =20 @@ -8343,11 +8405,18 @@ void perf_output_sample(struct perf_output_handle *= handle, perf_output_put(handle, abi); =20 if (abi) { - u64 mask =3D event->attr.sample_regs_intr; + struct perf_event_attr *attr =3D &event->attr; + u64 mask =3D attr->sample_regs_intr; =20 perf_output_sample_regs(handle, data->regs_intr.regs, mask); + if (abi & PERF_SAMPLE_REGS_ABI_SIMD) { + perf_output_sample_simd_regs(handle, event, + data->regs_intr.regs, + attr->sample_simd_vec_reg_intr, + attr->sample_simd_pred_reg_intr); + } } } =20 @@ -12997,6 +13066,12 @@ static 
int perf_try_init_event(struct pmu *pmu, struct perf_event *event) if (ret) goto err_pmu; =20 + if (!(pmu->capabilities & PERF_PMU_CAP_SIMD_REGS) && + event_has_simd_regs(event)) { + ret =3D -EOPNOTSUPP; + goto err_destroy; + } + if (!(pmu->capabilities & PERF_PMU_CAP_EXTENDED_REGS) && event_has_extended_regs(event)) { ret =3D -EOPNOTSUPP; @@ -13542,6 +13617,12 @@ static int perf_copy_attr(struct perf_event_attr __user *uattr, ret =3D perf_reg_validate(attr->sample_regs_user); if (ret) return ret; + ret =3D perf_simd_reg_validate(attr->sample_simd_vec_reg_qwords, + attr->sample_simd_vec_reg_user, + attr->sample_simd_pred_reg_qwords, + attr->sample_simd_pred_reg_user); + if (ret) + return ret; } =20 if (attr->sample_type & PERF_SAMPLE_STACK_USER) { @@ -13562,8 +13643,17 @@ static int perf_copy_attr(struct perf_event_attr __user *uattr, if (!attr->sample_max_stack) attr->sample_max_stack =3D sysctl_perf_event_max_stack; =20 - if (attr->sample_type & PERF_SAMPLE_REGS_INTR) + if (attr->sample_type & PERF_SAMPLE_REGS_INTR) { ret =3D perf_reg_validate(attr->sample_regs_intr); + if (ret) + return ret; + ret =3D perf_simd_reg_validate(attr->sample_simd_vec_reg_qwords, + attr->sample_simd_vec_reg_intr, + attr->sample_simd_pred_reg_qwords, + attr->sample_simd_pred_reg_intr); + if (ret) + return ret; + } =20 #ifndef CONFIG_CGROUP_PERF if (attr->sample_type & PERF_SAMPLE_CGROUP) --=20 2.34.1 From nobody Tue Feb 10 14:26:01 2026
From: Dapeng Mi To: Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Namhyung Kim , Thomas Gleixner , Dave Hansen , Ian Rogers , Adrian Hunter , Jiri Olsa , Alexander Shishkin , Andi Kleen , Eranian Stephane Cc: Mark Rutland , broonie@kernel.org, Ravi Bangoria , linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Zide Chen , Falcon Thomas , Dapeng Mi , Xudong Hao , Kan Liang , Dapeng Mi Subject: [Patch v6 13/22] perf/x86: Enable XMM sampling using sample_simd_vec_reg_* fields Date: Mon, 9 Feb 2026 15:20:38 +0800 Message-Id: <20260209072047.2180332-14-dapeng1.mi@linux.intel.com> In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com> References: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com> From: Kan Liang This patch adds support for sampling XMM registers using the sample_simd_vec_reg_* fields. When sample_simd_regs_enabled is set, the original XMM space in the sample_regs_* fields is treated as reserved. An -EINVAL error is reported to user space if any bit is set in the original XMM space while sample_simd_regs_enabled is set. The perf_reg_value() function requires ABI information to understand the layout of sample_regs.
To accommodate this, a new abi field is introduced in struct x86_perf_regs to carry the ABI information. Additionally, the x86-specific perf_simd_reg_value() function is implemented to retrieve the XMM register values. Signed-off-by: Kan Liang Co-developed-by: Dapeng Mi Signed-off-by: Dapeng Mi --- V6: Remove some unnecessary macros from perf_regs.h, but not all of them. Macros such as PERF_X86_SIMD_*_REGS and PERF_X86_*_QWORDS are still needed by both the kernel and perf-tools, and perf_regs.h seems to be the best place to define them. arch/x86/events/core.c | 90 +++++++++++++++++++++++++-- arch/x86/events/intel/ds.c | 2 +- arch/x86/events/perf_event.h | 12 ++++ arch/x86/include/asm/perf_event.h | 1 + arch/x86/include/uapi/asm/perf_regs.h | 12 ++++ arch/x86/kernel/perf_regs.c | 51 ++++++++++++++- 6 files changed, 161 insertions(+), 7 deletions(-) diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c index 36b4bc413938..bd47127fb84d 100644 --- a/arch/x86/events/core.c +++ b/arch/x86/events/core.c @@ -704,6 +704,22 @@ int x86_pmu_hw_config(struct perf_event *event) if (event_has_extended_regs(event)) { if (!(event->pmu->capabilities & PERF_PMU_CAP_EXTENDED_REGS)) return -EINVAL; + if (event->attr.sample_simd_regs_enabled) + return -EINVAL; + } + + if (event_has_simd_regs(event)) { + if (!(event->pmu->capabilities & PERF_PMU_CAP_SIMD_REGS)) + return -EINVAL; + /* A vector register width is set but no vector registers are requested */ + if (event->attr.sample_simd_vec_reg_qwords && + !event->attr.sample_simd_vec_reg_intr && + !event->attr.sample_simd_vec_reg_user) + return -EINVAL; + /* The requested vector register set is not supported */ + if (event_needs_xmm(event) && + !(x86_pmu.ext_regs_mask & XFEATURE_MASK_SSE)) + return -EINVAL; } } =20 @@ -1749,6 +1765,7 @@ static void x86_pmu_perf_get_regs_user(struct perf_sample_data *data, struct x86_perf_regs *x86_regs_user =3D this_cpu_ptr(&x86_user_regs); struct perf_regs regs_user; =20 + x86_regs_user->abi =3D PERF_SAMPLE_REGS_ABI_NONE;
perf_get_regs_user(®s_user, regs); data->regs_user.abi =3D regs_user.abi; if (regs_user.regs) { @@ -1758,12 +1775,26 @@ static void x86_pmu_perf_get_regs_user(struct perf_= sample_data *data, data->regs_user.regs =3D NULL; } =20 +static inline void +x86_pmu_update_ext_regs_size(struct perf_event_attr *attr, + struct perf_sample_data *data, + struct pt_regs *regs, + u64 mask, u64 pred_mask) +{ + u16 pred_qwords =3D attr->sample_simd_pred_reg_qwords; + u16 vec_qwords =3D attr->sample_simd_vec_reg_qwords; + + data->dyn_size +=3D (hweight64(mask) * vec_qwords + + hweight64(pred_mask) * pred_qwords) * sizeof(u64); +} + static void x86_pmu_setup_basic_regs_data(struct perf_event *event, struct perf_sample_data *data, struct pt_regs *regs) { struct perf_event_attr *attr =3D &event->attr; u64 sample_type =3D attr->sample_type; + struct x86_perf_regs *perf_regs; =20 if (sample_type & PERF_SAMPLE_REGS_USER) { if (user_mode(regs)) { @@ -1783,8 +1814,13 @@ static void x86_pmu_setup_basic_regs_data(struct per= f_event *event, data->regs_user.regs =3D NULL; } data->dyn_size +=3D sizeof(u64); - if (data->regs_user.regs) - data->dyn_size +=3D hweight64(attr->sample_regs_user) * sizeof(u64); + if (data->regs_user.regs) { + data->dyn_size +=3D + hweight64(attr->sample_regs_user) * sizeof(u64); + perf_regs =3D container_of(data->regs_user.regs, + struct x86_perf_regs, regs); + perf_regs->abi =3D data->regs_user.abi; + } data->sample_flags |=3D PERF_SAMPLE_REGS_USER; } =20 @@ -1792,8 +1828,13 @@ static void x86_pmu_setup_basic_regs_data(struct per= f_event *event, data->regs_intr.regs =3D regs; data->regs_intr.abi =3D perf_reg_abi(current); data->dyn_size +=3D sizeof(u64); - if (data->regs_intr.regs) - data->dyn_size +=3D hweight64(attr->sample_regs_intr) * sizeof(u64); + if (data->regs_intr.regs) { + data->dyn_size +=3D + hweight64(attr->sample_regs_intr) * sizeof(u64); + perf_regs =3D container_of(data->regs_intr.regs, + struct x86_perf_regs, regs); + perf_regs->abi =3D 
data->regs_intr.abi; + } data->sample_flags |=3D PERF_SAMPLE_REGS_INTR; } } @@ -1885,7 +1926,7 @@ static void x86_pmu_sample_extended_regs(struct perf_= event *event, =20 perf_regs =3D container_of(regs, struct x86_perf_regs, regs); =20 - if (event_has_extended_regs(event)) + if (event_needs_xmm(event)) mask |=3D XFEATURE_MASK_SSE; =20 mask &=3D x86_pmu.ext_regs_mask; @@ -1909,6 +1950,44 @@ static void x86_pmu_sample_extended_regs(struct perf= _event *event, x86_pmu_update_ext_regs(perf_regs, xsave, intr_mask); } =20 +static void x86_pmu_setup_extended_regs_data(struct perf_event *event, + struct perf_sample_data *data, + struct pt_regs *regs) +{ + struct perf_event_attr *attr =3D &event->attr; + u64 sample_type =3D attr->sample_type; + struct x86_perf_regs *perf_regs; + + if (!attr->sample_simd_regs_enabled) + return; + + if (sample_type & PERF_SAMPLE_REGS_USER && data->regs_user.abi) { + perf_regs =3D container_of(data->regs_user.regs, + struct x86_perf_regs, regs); + perf_regs->abi |=3D PERF_SAMPLE_REGS_ABI_SIMD; + + /* num and qwords of vector and pred registers */ + data->dyn_size +=3D sizeof(u64); + data->regs_user.abi |=3D PERF_SAMPLE_REGS_ABI_SIMD; + x86_pmu_update_ext_regs_size(attr, data, data->regs_user.regs, + attr->sample_simd_vec_reg_user, + attr->sample_simd_pred_reg_user); + } + + if (sample_type & PERF_SAMPLE_REGS_INTR && data->regs_intr.abi) { + perf_regs =3D container_of(data->regs_intr.regs, + struct x86_perf_regs, regs); + perf_regs->abi |=3D PERF_SAMPLE_REGS_ABI_SIMD; + + /* num and qwords of vector and pred registers */ + data->dyn_size +=3D sizeof(u64); + data->regs_intr.abi |=3D PERF_SAMPLE_REGS_ABI_SIMD; + x86_pmu_update_ext_regs_size(attr, data, data->regs_intr.regs, + attr->sample_simd_vec_reg_intr, + attr->sample_simd_pred_reg_intr); + } +} + void x86_pmu_setup_regs_data(struct perf_event *event, struct perf_sample_data *data, struct pt_regs *regs, @@ -1920,6 +1999,7 @@ void x86_pmu_setup_regs_data(struct perf_event *event, * which may 
be unnecessary to sample again. */ x86_pmu_sample_extended_regs(event, data, regs, ignore_mask); + x86_pmu_setup_extended_regs_data(event, data, regs); } =20 int x86_pmu_handle_irq(struct pt_regs *regs) diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c index 229dbe368b65..272725d749df 100644 --- a/arch/x86/events/intel/ds.c +++ b/arch/x86/events/intel/ds.c @@ -1735,7 +1735,7 @@ static u64 pebs_update_adaptive_cfg(struct perf_event= *event) if (gprs || (attr->precise_ip < 2) || tsx_weight) pebs_data_cfg |=3D PEBS_DATACFG_GP; =20 - if (event_has_extended_regs(event)) + if (event_needs_xmm(event)) pebs_data_cfg |=3D PEBS_DATACFG_XMMS; =20 if (sample_type & PERF_SAMPLE_BRANCH_STACK) { diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h index a32ee4f0c891..02eea137e261 100644 --- a/arch/x86/events/perf_event.h +++ b/arch/x86/events/perf_event.h @@ -137,6 +137,18 @@ static inline bool is_acr_event_group(struct perf_even= t *event) return check_leader_group(event->group_leader, PERF_X86_EVENT_ACR); } =20 +static inline bool event_needs_xmm(struct perf_event *event) +{ + if (event->attr.sample_simd_regs_enabled && + event->attr.sample_simd_vec_reg_qwords >=3D PERF_X86_XMM_QWORDS) + return true; + + if (!event->attr.sample_simd_regs_enabled && + event_has_extended_regs(event)) + return true; + return false; +} + struct amd_nb { int nb_id; /* NorthBridge id */ int refcnt; /* reference count */ diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_= event.h index 7baa1b0f889f..1f172740916c 100644 --- a/arch/x86/include/asm/perf_event.h +++ b/arch/x86/include/asm/perf_event.h @@ -709,6 +709,7 @@ extern void perf_events_lapic_init(void); struct pt_regs; struct x86_perf_regs { struct pt_regs regs; + u64 abi; union { u64 *xmm_regs; u32 *xmm_space; /* for xsaves */ diff --git a/arch/x86/include/uapi/asm/perf_regs.h b/arch/x86/include/uapi/= asm/perf_regs.h index 7c9d2bb3833b..342b08448138 100644 --- 
a/arch/x86/include/uapi/asm/perf_regs.h +++ b/arch/x86/include/uapi/asm/perf_regs.h @@ -55,4 +55,16 @@ enum perf_event_x86_regs { =20 #define PERF_REG_EXTENDED_MASK (~((1ULL << PERF_REG_X86_XMM0) - 1)) =20 +enum { + PERF_X86_SIMD_XMM_REGS =3D 16, + PERF_X86_SIMD_VEC_REGS_MAX =3D PERF_X86_SIMD_XMM_REGS, +}; + +#define PERF_X86_SIMD_VEC_MASK GENMASK_ULL(PERF_X86_SIMD_VEC_REGS_MAX - 1,= 0) + +enum { + PERF_X86_XMM_QWORDS =3D 2, + PERF_X86_SIMD_QWORDS_MAX =3D PERF_X86_XMM_QWORDS, +}; + #endif /* _ASM_X86_PERF_REGS_H */ diff --git a/arch/x86/kernel/perf_regs.c b/arch/x86/kernel/perf_regs.c index 81204cb7f723..9947a6b5c260 100644 --- a/arch/x86/kernel/perf_regs.c +++ b/arch/x86/kernel/perf_regs.c @@ -63,6 +63,9 @@ u64 perf_reg_value(struct pt_regs *regs, int idx) =20 if (idx >=3D PERF_REG_X86_XMM0 && idx < PERF_REG_X86_XMM_MAX) { perf_regs =3D container_of(regs, struct x86_perf_regs, regs); + /* SIMD registers are moved to dedicated sample_simd_vec_reg */ + if (perf_regs->abi & PERF_SAMPLE_REGS_ABI_SIMD) + return 0; if (!perf_regs->xmm_regs) return 0; return perf_regs->xmm_regs[idx - PERF_REG_X86_XMM0]; @@ -74,6 +77,51 @@ u64 perf_reg_value(struct pt_regs *regs, int idx) return regs_get_register(regs, pt_regs_offset[idx]); } =20 +u64 perf_simd_reg_value(struct pt_regs *regs, int idx, + u16 qwords_idx, bool pred) +{ + struct x86_perf_regs *perf_regs =3D + container_of(regs, struct x86_perf_regs, regs); + + if (pred) + return 0; + + if (WARN_ON_ONCE(idx >=3D PERF_X86_SIMD_VEC_REGS_MAX || + qwords_idx >=3D PERF_X86_SIMD_QWORDS_MAX)) + return 0; + + if (qwords_idx < PERF_X86_XMM_QWORDS) { + if (!perf_regs->xmm_regs) + return 0; + return perf_regs->xmm_regs[idx * PERF_X86_XMM_QWORDS + + qwords_idx]; + } + + return 0; +} + +int perf_simd_reg_validate(u16 vec_qwords, u64 vec_mask, + u16 pred_qwords, u32 pred_mask) +{ + /* pred_qwords implies sample_simd_{pred,vec}_reg_* are supported */ + if (!pred_qwords) + return 0; + + if (!vec_qwords) { + if (vec_mask) + return -EINVAL; + } 
else { + if (vec_qwords !=3D PERF_X86_XMM_QWORDS) + return -EINVAL; + if (vec_mask & ~PERF_X86_SIMD_VEC_MASK) + return -EINVAL; + } + if (pred_mask) + return -EINVAL; + + return 0; +} + #define PERF_REG_X86_RESERVED (((1ULL << PERF_REG_X86_XMM0) - 1) & \ ~((1ULL << PERF_REG_X86_MAX) - 1)) =20 @@ -108,7 +156,8 @@ u64 perf_reg_abi(struct task_struct *task) =20 int perf_reg_validate(u64 mask) { - if (!mask || (mask & (REG_NOSUPPORT | PERF_REG_X86_RESERVED))) + /* The mask may be 0 if only SIMD registers are of interest */ + if (mask & (REG_NOSUPPORT | PERF_REG_X86_RESERVED)) return -EINVAL; =20 return 0; --=20 2.34.1 From nobody Tue Feb 10 14:26:01 2026
From: Dapeng Mi To: Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Namhyung Kim , Thomas Gleixner , Dave Hansen , Ian Rogers , Adrian Hunter , Jiri Olsa , Alexander Shishkin , Andi Kleen , Eranian Stephane Cc: Mark Rutland , broonie@kernel.org, Ravi Bangoria , linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Zide Chen , Falcon Thomas ,
Dapeng Mi , Xudong Hao , Kan Liang , Dapeng Mi Subject: [Patch v6 14/22] perf/x86: Enable YMM sampling using sample_simd_vec_reg_* fields Date: Mon, 9 Feb 2026 15:20:39 +0800 Message-Id: <20260209072047.2180332-15-dapeng1.mi@linux.intel.com> In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com> References: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com> From: Kan Liang This patch introduces support for sampling YMM registers via the sample_simd_vec_reg_* fields. Each YMM register consists of 4 u64 words, assembled from two halves: XMM (the lower 2 u64 words) and YMMH (the upper 2 u64 words). Although both XMM and YMMH data can be retrieved with a single xsaves instruction, they are stored in separate locations. The perf_simd_reg_value() function is responsible for assembling these halves into a complete YMM register for output to userspace. Additionally, sample_simd_vec_reg_qwords should be set to 4 to indicate YMM sampling.
Signed-off-by: Kan Liang Co-developed-by: Dapeng Mi Signed-off-by: Dapeng Mi --- arch/x86/events/core.c | 8 ++++++++ arch/x86/events/perf_event.h | 9 +++++++++ arch/x86/include/asm/perf_event.h | 4 ++++ arch/x86/include/uapi/asm/perf_regs.h | 6 ++++-- arch/x86/kernel/perf_regs.c | 10 +++++++++- 5 files changed, 34 insertions(+), 3 deletions(-) diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c index bd47127fb84d..e80a392e30b0 100644 --- a/arch/x86/events/core.c +++ b/arch/x86/events/core.c @@ -720,6 +720,9 @@ int x86_pmu_hw_config(struct perf_event *event) if (event_needs_xmm(event) && !(x86_pmu.ext_regs_mask & XFEATURE_MASK_SSE)) return -EINVAL; + if (event_needs_ymm(event) && + !(x86_pmu.ext_regs_mask & XFEATURE_MASK_YMM)) + return -EINVAL; } } =20 @@ -1844,6 +1847,7 @@ inline void x86_pmu_clear_perf_regs(struct pt_regs *r= egs) struct x86_perf_regs *perf_regs =3D container_of(regs, struct x86_perf_re= gs, regs); =20 perf_regs->xmm_regs =3D NULL; + perf_regs->ymmh_regs =3D NULL; } =20 static inline void __x86_pmu_sample_ext_regs(u64 mask) @@ -1869,6 +1873,8 @@ static inline void x86_pmu_update_ext_regs(struct x86= _perf_regs *perf_regs, =20 if (mask & XFEATURE_MASK_SSE) perf_regs->xmm_space =3D xsave->i387.xmm_space; + if (mask & XFEATURE_MASK_YMM) + perf_regs->ymmh =3D get_xsave_addr(xsave, XFEATURE_YMM); } =20 /* @@ -1928,6 +1934,8 @@ static void x86_pmu_sample_extended_regs(struct perf_= event *event, =20 if (event_needs_xmm(event)) mask |=3D XFEATURE_MASK_SSE; + if (event_needs_ymm(event)) + mask |=3D XFEATURE_MASK_YMM; =20 mask &=3D x86_pmu.ext_regs_mask; if (sample_type & PERF_SAMPLE_REGS_USER) { diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h index 02eea137e261..4f18ba6ef0c4 100644 --- a/arch/x86/events/perf_event.h +++ b/arch/x86/events/perf_event.h @@ -149,6 +149,15 @@ static inline bool event_needs_xmm(struct perf_event *= event) return false; } =20 +static inline bool event_needs_ymm(struct perf_event *event) +{ + if 
(event->attr.sample_simd_regs_enabled && + event->attr.sample_simd_vec_reg_qwords >=3D PERF_X86_YMM_QWORDS) + return true; + + return false; +} + struct amd_nb { int nb_id; /* NorthBridge id */ int refcnt; /* reference count */ diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_= event.h index 1f172740916c..bffe47851676 100644 --- a/arch/x86/include/asm/perf_event.h +++ b/arch/x86/include/asm/perf_event.h @@ -714,6 +714,10 @@ struct x86_perf_regs { u64 *xmm_regs; u32 *xmm_space; /* for xsaves */ }; + union { + u64 *ymmh_regs; + struct ymmh_struct *ymmh; + }; }; =20 extern unsigned long perf_arch_instruction_pointer(struct pt_regs *regs); diff --git a/arch/x86/include/uapi/asm/perf_regs.h b/arch/x86/include/uapi/= asm/perf_regs.h index 342b08448138..eac11a29fce6 100644 --- a/arch/x86/include/uapi/asm/perf_regs.h +++ b/arch/x86/include/uapi/asm/perf_regs.h @@ -57,14 +57,16 @@ enum perf_event_x86_regs { =20 enum { PERF_X86_SIMD_XMM_REGS =3D 16, - PERF_X86_SIMD_VEC_REGS_MAX =3D PERF_X86_SIMD_XMM_REGS, + PERF_X86_SIMD_YMM_REGS =3D 16, + PERF_X86_SIMD_VEC_REGS_MAX =3D PERF_X86_SIMD_YMM_REGS, }; =20 #define PERF_X86_SIMD_VEC_MASK GENMASK_ULL(PERF_X86_SIMD_VEC_REGS_MAX - 1,= 0) =20 enum { PERF_X86_XMM_QWORDS =3D 2, - PERF_X86_SIMD_QWORDS_MAX =3D PERF_X86_XMM_QWORDS, + PERF_X86_YMM_QWORDS =3D 4, + PERF_X86_SIMD_QWORDS_MAX =3D PERF_X86_YMM_QWORDS, }; =20 #endif /* _ASM_X86_PERF_REGS_H */ diff --git a/arch/x86/kernel/perf_regs.c b/arch/x86/kernel/perf_regs.c index 9947a6b5c260..4062a679cc5b 100644 --- a/arch/x86/kernel/perf_regs.c +++ b/arch/x86/kernel/perf_regs.c @@ -77,6 +77,8 @@ u64 perf_reg_value(struct pt_regs *regs, int idx) return regs_get_register(regs, pt_regs_offset[idx]); } =20 +#define PERF_X86_YMMH_QWORDS (PERF_X86_YMM_QWORDS / 2) + u64 perf_simd_reg_value(struct pt_regs *regs, int idx, u16 qwords_idx, bool pred) { @@ -95,6 +97,11 @@ u64 perf_simd_reg_value(struct pt_regs *regs, int idx, return 0; return perf_regs->xmm_regs[idx * 
PERF_X86_XMM_QWORDS + qwords_idx]; + } else if (qwords_idx < PERF_X86_YMM_QWORDS) { + if (!perf_regs->ymmh_regs) + return 0; + return perf_regs->ymmh_regs[idx * PERF_X86_YMMH_QWORDS + + qwords_idx - PERF_X86_XMM_QWORDS]; } =20 return 0; @@ -111,7 +118,8 @@ int perf_simd_reg_validate(u16 vec_qwords, u64 vec_mask, if (vec_mask) return -EINVAL; } else { - if (vec_qwords !=3D PERF_X86_XMM_QWORDS) + if (vec_qwords !=3D PERF_X86_XMM_QWORDS && + vec_qwords !=3D PERF_X86_YMM_QWORDS) return -EINVAL; if (vec_mask & ~PERF_X86_SIMD_VEC_MASK) return -EINVAL; --=20 2.34.1 From nobody Tue Feb 10 14:26:01 2026 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.10]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 193371EB5F8; Mon, 9 Feb 2026 07:26:05 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.10 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1770621965; cv=none; b=dxkXu8alyA4f5DRG+YdNJrLMCZdMwaLXsJ1++E5wAdSY/sbb0HNwSMak6I+Df1EjZjtxGBg0x1atQc5iGVdsg0T5St46anAfXuKqRCu7KzFzNitOFs8fx9eD51Attu2r9mv9zBrJ8Cw8oltpHoLy4UrNUSR7bKR4tISLJIr9mhQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1770621965; c=relaxed/simple; bh=o13rjsc6azcBrTyJhYV2gAHNmsQeWLjyELcP8IiTzzQ=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=D0hV3dx98rPj2lsjbD6wzm1cviKyst13y17+HKGS+XDdk2tmemFjfEi1cyr6vUWOl1yK+RN766edVlwlxtj9whOXf3+hcqlC1y/P/32dGwxbA2ifH8zoglwLKZJTPMGcnT+Y6vqoKIvQlKd77P29oTGQJaD+glt5sMB/Vl1nE0o= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=pass smtp.mailfrom=linux.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=Kcaa2kew; arc=none smtp.client-ip=192.198.163.10 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none 
From: Dapeng Mi To: Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Namhyung Kim , Thomas Gleixner , Dave Hansen , Ian Rogers , Adrian Hunter , Jiri Olsa , Alexander Shishkin , Andi Kleen , Eranian Stephane Cc: Mark Rutland , broonie@kernel.org, Ravi Bangoria , linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Zide Chen , Falcon Thomas , Dapeng Mi , Xudong Hao , Kan Liang , Dapeng Mi Subject: [Patch v6 15/22]
perf/x86: Enable ZMM sampling using sample_simd_vec_reg_* fields Date: Mon, 9 Feb 2026 15:20:40 +0800 Message-Id: <20260209072047.2180332-16-dapeng1.mi@linux.intel.com> In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com> References: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com> From: Kan Liang This patch adds support for sampling ZMM registers via the sample_simd_vec_reg_* fields. Each ZMM register consists of 8 u64 words, and current x86 hardware supports up to 32 ZMM registers. ZMM0 through ZMM15 are assembled from three parts: XMM (the lower 2 u64 words), YMMH (the middle 2 u64 words), and ZMMH (the upper 4 u64 words). The perf_simd_reg_value() function is responsible for assembling these three parts into a complete ZMM register for output to userspace. ZMM16 through ZMM31 can each be read as a whole and output directly to userspace. Additionally, sample_simd_vec_reg_qwords should be set to 8 to indicate ZMM sampling.
Signed-off-by: Kan Liang Co-developed-by: Dapeng Mi Signed-off-by: Dapeng Mi --- arch/x86/events/core.c | 16 ++++++++++++++++ arch/x86/events/perf_event.h | 19 +++++++++++++++++++ arch/x86/include/asm/perf_event.h | 8 ++++++++ arch/x86/include/uapi/asm/perf_regs.h | 8 ++++++-- arch/x86/kernel/perf_regs.c | 16 +++++++++++++++- 5 files changed, 64 insertions(+), 3 deletions(-) diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c index e80a392e30b0..b279dfc1c97f 100644 --- a/arch/x86/events/core.c +++ b/arch/x86/events/core.c @@ -723,6 +723,12 @@ int x86_pmu_hw_config(struct perf_event *event) if (event_needs_ymm(event) && !(x86_pmu.ext_regs_mask & XFEATURE_MASK_YMM)) return -EINVAL; + if (event_needs_low16_zmm(event) && + !(x86_pmu.ext_regs_mask & XFEATURE_MASK_ZMM_Hi256)) + return -EINVAL; + if (event_needs_high16_zmm(event) && + !(x86_pmu.ext_regs_mask & XFEATURE_MASK_Hi16_ZMM)) + return -EINVAL; } } =20 @@ -1848,6 +1854,8 @@ inline void x86_pmu_clear_perf_regs(struct pt_regs *r= egs) =20 perf_regs->xmm_regs =3D NULL; perf_regs->ymmh_regs =3D NULL; + perf_regs->zmmh_regs =3D NULL; + perf_regs->h16zmm_regs =3D NULL; } =20 static inline void __x86_pmu_sample_ext_regs(u64 mask) @@ -1875,6 +1883,10 @@ static inline void x86_pmu_update_ext_regs(struct x8= 6_perf_regs *perf_regs, perf_regs->xmm_space =3D xsave->i387.xmm_space; if (mask & XFEATURE_MASK_YMM) perf_regs->ymmh =3D get_xsave_addr(xsave, XFEATURE_YMM); + if (mask & XFEATURE_MASK_ZMM_Hi256) + perf_regs->zmmh =3D get_xsave_addr(xsave, XFEATURE_ZMM_Hi256); + if (mask & XFEATURE_MASK_Hi16_ZMM) + perf_regs->h16zmm =3D get_xsave_addr(xsave, XFEATURE_Hi16_ZMM); } =20 /* @@ -1936,6 +1948,10 @@ static void x86_pmu_sample_extended_regs(struct perf= _event *event, mask |=3D XFEATURE_MASK_SSE; if (event_needs_ymm(event)) mask |=3D XFEATURE_MASK_YMM; + if (event_needs_low16_zmm(event)) + mask |=3D XFEATURE_MASK_ZMM_Hi256; + if (event_needs_high16_zmm(event)) + mask |=3D XFEATURE_MASK_Hi16_ZMM; =20 mask &=3D 
x86_pmu.ext_regs_mask; if (sample_type & PERF_SAMPLE_REGS_USER) { diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h index 4f18ba6ef0c4..f6379adb8e83 100644 --- a/arch/x86/events/perf_event.h +++ b/arch/x86/events/perf_event.h @@ -158,6 +158,25 @@ static inline bool event_needs_ymm(struct perf_event *= event) return false; } =20 +static inline bool event_needs_low16_zmm(struct perf_event *event) +{ + if (event->attr.sample_simd_regs_enabled && + event->attr.sample_simd_vec_reg_qwords >=3D PERF_X86_ZMM_QWORDS) + return true; + + return false; +} + +static inline bool event_needs_high16_zmm(struct perf_event *event) +{ + if (event->attr.sample_simd_regs_enabled && + (fls64(event->attr.sample_simd_vec_reg_intr) > PERF_X86_H16ZMM_BASE || + fls64(event->attr.sample_simd_vec_reg_user) > PERF_X86_H16ZMM_BASE)) + return true; + + return false; +} + struct amd_nb { int nb_id; /* NorthBridge id */ int refcnt; /* reference count */ diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_= event.h index bffe47851676..a57386ae70d9 100644 --- a/arch/x86/include/asm/perf_event.h +++ b/arch/x86/include/asm/perf_event.h @@ -718,6 +718,14 @@ struct x86_perf_regs { u64 *ymmh_regs; struct ymmh_struct *ymmh; }; + union { + u64 *zmmh_regs; + struct avx_512_zmm_uppers_state *zmmh; + }; + union { + u64 *h16zmm_regs; + struct avx_512_hi16_state *h16zmm; + }; }; =20 extern unsigned long perf_arch_instruction_pointer(struct pt_regs *regs); diff --git a/arch/x86/include/uapi/asm/perf_regs.h b/arch/x86/include/uapi/= asm/perf_regs.h index eac11a29fce6..d6362bc8d125 100644 --- a/arch/x86/include/uapi/asm/perf_regs.h +++ b/arch/x86/include/uapi/asm/perf_regs.h @@ -58,15 +58,19 @@ enum perf_event_x86_regs { enum { PERF_X86_SIMD_XMM_REGS =3D 16, PERF_X86_SIMD_YMM_REGS =3D 16, - PERF_X86_SIMD_VEC_REGS_MAX =3D PERF_X86_SIMD_YMM_REGS, + PERF_X86_SIMD_ZMM_REGS =3D 32, + PERF_X86_SIMD_VEC_REGS_MAX =3D PERF_X86_SIMD_ZMM_REGS, }; =20 #define PERF_X86_SIMD_VEC_MASK 
GENMASK_ULL(PERF_X86_SIMD_VEC_REGS_MAX - 1,= 0) =20 +#define PERF_X86_H16ZMM_BASE 16 + enum { PERF_X86_XMM_QWORDS =3D 2, PERF_X86_YMM_QWORDS =3D 4, - PERF_X86_SIMD_QWORDS_MAX =3D PERF_X86_YMM_QWORDS, + PERF_X86_ZMM_QWORDS =3D 8, + PERF_X86_SIMD_QWORDS_MAX =3D PERF_X86_ZMM_QWORDS, }; =20 #endif /* _ASM_X86_PERF_REGS_H */ diff --git a/arch/x86/kernel/perf_regs.c b/arch/x86/kernel/perf_regs.c index 4062a679cc5b..fe4ff4d2de88 100644 --- a/arch/x86/kernel/perf_regs.c +++ b/arch/x86/kernel/perf_regs.c @@ -78,6 +78,7 @@ u64 perf_reg_value(struct pt_regs *regs, int idx) } =20 #define PERF_X86_YMMH_QWORDS (PERF_X86_YMM_QWORDS / 2) +#define PERF_X86_ZMMH_QWORDS (PERF_X86_ZMM_QWORDS / 2) =20 u64 perf_simd_reg_value(struct pt_regs *regs, int idx, u16 qwords_idx, bool pred) @@ -92,6 +93,13 @@ u64 perf_simd_reg_value(struct pt_regs *regs, int idx, qwords_idx >=3D PERF_X86_SIMD_QWORDS_MAX)) return 0; =20 + if (idx >=3D PERF_X86_H16ZMM_BASE) { + if (!perf_regs->h16zmm_regs) + return 0; + return perf_regs->h16zmm_regs[(idx - PERF_X86_H16ZMM_BASE) * + PERF_X86_ZMM_QWORDS + qwords_idx]; + } + if (qwords_idx < PERF_X86_XMM_QWORDS) { if (!perf_regs->xmm_regs) return 0; @@ -102,6 +110,11 @@ u64 perf_simd_reg_value(struct pt_regs *regs, int idx, return 0; return perf_regs->ymmh_regs[idx * PERF_X86_YMMH_QWORDS + qwords_idx - PERF_X86_XMM_QWORDS]; + } else if (qwords_idx < PERF_X86_ZMM_QWORDS) { + if (!perf_regs->zmmh_regs) + return 0; + return perf_regs->zmmh_regs[idx * PERF_X86_ZMMH_QWORDS + + qwords_idx - PERF_X86_YMM_QWORDS]; } =20 return 0; @@ -119,7 +132,8 @@ int perf_simd_reg_validate(u16 vec_qwords, u64 vec_mask, return -EINVAL; } else { if (vec_qwords !=3D PERF_X86_XMM_QWORDS && - vec_qwords !=3D PERF_X86_YMM_QWORDS) + vec_qwords !=3D PERF_X86_YMM_QWORDS && + vec_qwords !=3D PERF_X86_ZMM_QWORDS) return -EINVAL; if (vec_mask & ~PERF_X86_SIMD_VEC_MASK) return -EINVAL; --=20 2.34.1 From nobody Tue Feb 10 14:26:01 2026 Received: from mgamail.intel.com (mgamail.intel.com 
[192.198.163.10])
From: Dapeng Mi
To: Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Namhyung Kim , Thomas Gleixner , Dave Hansen , Ian Rogers , Adrian Hunter , Jiri Olsa , Alexander Shishkin , Andi Kleen , Eranian Stephane
Cc: Mark Rutland , broonie@kernel.org, Ravi Bangoria , linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Zide Chen , Falcon Thomas , Dapeng Mi , Xudong Hao , Kan Liang , Dapeng Mi
Subject: [Patch v6 16/22] perf/x86: Enable OPMASK sampling using sample_simd_pred_reg_* fields
Date: Mon, 9 Feb 2026 15:20:41 +0800
Message-Id: <20260209072047.2180332-17-dapeng1.mi@linux.intel.com>
In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>

From: Kan Liang

Add support for sampling OPMASK registers via the
sample_simd_pred_reg_* fields.
Each OPMASK register consists of 1 u64 word. Current x86 hardware supports 8 OPMASK registers. The perf_simd_reg_value() function is responsible for outputting OPMASK value to userspace. Additionally, sample_simd_pred_reg_qwords should be set to 1 to indicate OPMASK sampling. Signed-off-by: Kan Liang Co-developed-by: Dapeng Mi Signed-off-by: Dapeng Mi --- arch/x86/events/core.c | 8 ++++++++ arch/x86/events/perf_event.h | 10 ++++++++++ arch/x86/include/asm/perf_event.h | 4 ++++ arch/x86/include/uapi/asm/perf_regs.h | 5 +++++ arch/x86/kernel/perf_regs.c | 15 ++++++++++++--- 5 files changed, 39 insertions(+), 3 deletions(-) diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c index b279dfc1c97f..2a674436f07e 100644 --- a/arch/x86/events/core.c +++ b/arch/x86/events/core.c @@ -729,6 +729,9 @@ int x86_pmu_hw_config(struct perf_event *event) if (event_needs_high16_zmm(event) && !(x86_pmu.ext_regs_mask & XFEATURE_MASK_Hi16_ZMM)) return -EINVAL; + if (event_needs_opmask(event) && + !(x86_pmu.ext_regs_mask & XFEATURE_MASK_OPMASK)) + return -EINVAL; } } =20 @@ -1856,6 +1859,7 @@ inline void x86_pmu_clear_perf_regs(struct pt_regs *r= egs) perf_regs->ymmh_regs =3D NULL; perf_regs->zmmh_regs =3D NULL; perf_regs->h16zmm_regs =3D NULL; + perf_regs->opmask_regs =3D NULL; } =20 static inline void __x86_pmu_sample_ext_regs(u64 mask) @@ -1887,6 +1891,8 @@ static inline void x86_pmu_update_ext_regs(struct x86= _perf_regs *perf_regs, perf_regs->zmmh =3D get_xsave_addr(xsave, XFEATURE_ZMM_Hi256); if (mask & XFEATURE_MASK_Hi16_ZMM) perf_regs->h16zmm =3D get_xsave_addr(xsave, XFEATURE_Hi16_ZMM); + if (mask & XFEATURE_MASK_OPMASK) + perf_regs->opmask =3D get_xsave_addr(xsave, XFEATURE_OPMASK); } =20 /* @@ -1952,6 +1958,8 @@ static void x86_pmu_sample_extended_regs(struct perf_= event *event, mask |=3D XFEATURE_MASK_ZMM_Hi256; if (event_needs_high16_zmm(event)) mask |=3D XFEATURE_MASK_Hi16_ZMM; + if (event_needs_opmask(event)) + mask |=3D XFEATURE_MASK_OPMASK; =20 mask &=3D 
x86_pmu.ext_regs_mask; if (sample_type & PERF_SAMPLE_REGS_USER) { diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h index f6379adb8e83..c9d6379c4ddb 100644 --- a/arch/x86/events/perf_event.h +++ b/arch/x86/events/perf_event.h @@ -177,6 +177,16 @@ static inline bool event_needs_high16_zmm(struct perf_= event *event) return false; } =20 +static inline bool event_needs_opmask(struct perf_event *event) +{ + if (event->attr.sample_simd_regs_enabled && + (event->attr.sample_simd_pred_reg_intr || + event->attr.sample_simd_pred_reg_user)) + return true; + + return false; +} + struct amd_nb { int nb_id; /* NorthBridge id */ int refcnt; /* reference count */ diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_= event.h index a57386ae70d9..6c5a34e0dfc8 100644 --- a/arch/x86/include/asm/perf_event.h +++ b/arch/x86/include/asm/perf_event.h @@ -726,6 +726,10 @@ struct x86_perf_regs { u64 *h16zmm_regs; struct avx_512_hi16_state *h16zmm; }; + union { + u64 *opmask_regs; + struct avx_512_opmask_state *opmask; + }; }; =20 extern unsigned long perf_arch_instruction_pointer(struct pt_regs *regs); diff --git a/arch/x86/include/uapi/asm/perf_regs.h b/arch/x86/include/uapi/= asm/perf_regs.h index d6362bc8d125..dae39df134ec 100644 --- a/arch/x86/include/uapi/asm/perf_regs.h +++ b/arch/x86/include/uapi/asm/perf_regs.h @@ -60,13 +60,18 @@ enum { PERF_X86_SIMD_YMM_REGS =3D 16, PERF_X86_SIMD_ZMM_REGS =3D 32, PERF_X86_SIMD_VEC_REGS_MAX =3D PERF_X86_SIMD_ZMM_REGS, + + PERF_X86_SIMD_OPMASK_REGS =3D 8, + PERF_X86_SIMD_PRED_REGS_MAX =3D PERF_X86_SIMD_OPMASK_REGS, }; =20 +#define PERF_X86_SIMD_PRED_MASK GENMASK(PERF_X86_SIMD_PRED_REGS_MAX - 1, 0) #define PERF_X86_SIMD_VEC_MASK GENMASK_ULL(PERF_X86_SIMD_VEC_REGS_MAX - 1,= 0) =20 #define PERF_X86_H16ZMM_BASE 16 =20 enum { + PERF_X86_OPMASK_QWORDS =3D 1, PERF_X86_XMM_QWORDS =3D 2, PERF_X86_YMM_QWORDS =3D 4, PERF_X86_ZMM_QWORDS =3D 8, diff --git a/arch/x86/kernel/perf_regs.c b/arch/x86/kernel/perf_regs.c 
index fe4ff4d2de88..2e3c10dffb35 100644 --- a/arch/x86/kernel/perf_regs.c +++ b/arch/x86/kernel/perf_regs.c @@ -86,8 +86,14 @@ u64 perf_simd_reg_value(struct pt_regs *regs, int idx, struct x86_perf_regs *perf_regs =3D container_of(regs, struct x86_perf_regs, regs); =20 - if (pred) - return 0; + if (pred) { + if (WARN_ON_ONCE(idx >=3D PERF_X86_SIMD_PRED_REGS_MAX || + qwords_idx >=3D PERF_X86_OPMASK_QWORDS)) + return 0; + if (!perf_regs->opmask_regs) + return 0; + return perf_regs->opmask_regs[idx]; + } =20 if (WARN_ON_ONCE(idx >=3D PERF_X86_SIMD_VEC_REGS_MAX || qwords_idx >=3D PERF_X86_SIMD_QWORDS_MAX)) @@ -138,7 +144,10 @@ int perf_simd_reg_validate(u16 vec_qwords, u64 vec_mas= k, if (vec_mask & ~PERF_X86_SIMD_VEC_MASK) return -EINVAL; } - if (pred_mask) + + if (pred_qwords !=3D PERF_X86_OPMASK_QWORDS) + return -EINVAL; + if (pred_mask & ~PERF_X86_SIMD_PRED_MASK) return -EINVAL; =20 return 0; --=20 2.34.1 From nobody Tue Feb 10 14:26:01 2026 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.10]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id ABCD631DDB8; Mon, 9 Feb 2026 07:26:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.10 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1770621974; cv=none; b=d/HUAnk0ixfzmbPf46GwqovFTLJPCT5lWp8WXyDt2ZCLvvJpoX7zVbnslk+ROoSYjbitvyJA/DDjZkADUeu3k+kljmPdtgI2/gcOBdO0NrRHMTtSefYpO9PKeALHVf5dmZV7zHyHK7fXwXdKTT8/nVx4+ZNT8v8AwE9TZj4EOd8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1770621974; c=relaxed/simple; bh=hmQnHbJipSad3Ltuk+N5rXvMTH+C44IN6AhB0cqwOIM=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; 
From: Dapeng Mi
To: Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Namhyung Kim , Thomas Gleixner , Dave Hansen , Ian Rogers , Adrian Hunter , Jiri Olsa , Alexander Shishkin , Andi Kleen , Eranian Stephane
Cc: Mark Rutland , broonie@kernel.org, Ravi Bangoria , linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Zide Chen , Falcon Thomas , Dapeng Mi , Xudong Hao , Dapeng Mi
Subject: [Patch v6 17/22] perf: Enhance perf_reg_validate() with simd_enabled argument
Date: Mon, 9 Feb 2026 15:20:42 +0800
Message-Id: <20260209072047.2180332-18-dapeng1.mi@linux.intel.com>
In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>

The upcoming patch will support x86 APX eGPR sampling by using the
reclaimed XMM register space in the sample_regs_* fields to represent
eGPRs. To differentiate between XMM registers and eGPRs in the
sample_regs_* fields, introduce an additional argument, simd_enabled,
to the perf_reg_validate() helper. When simd_enabled is set, the
sample_regs_* fields carry an eGPR bitmap on x86; otherwise they
represent XMM registers.

Signed-off-by: Dapeng Mi
---
V6: new patch, split from the next patch.
arch/arm/kernel/perf_regs.c | 2 +- arch/arm64/kernel/perf_regs.c | 2 +- arch/csky/kernel/perf_regs.c | 2 +- arch/loongarch/kernel/perf_regs.c | 2 +- arch/mips/kernel/perf_regs.c | 2 +- arch/parisc/kernel/perf_regs.c | 2 +- arch/powerpc/perf/perf_regs.c | 2 +- arch/riscv/kernel/perf_regs.c | 2 +- arch/s390/kernel/perf_regs.c | 2 +- arch/x86/kernel/perf_regs.c | 4 ++-- include/linux/perf_regs.h | 2 +- kernel/events/core.c | 8 +++++--- 12 files changed, 17 insertions(+), 15 deletions(-) diff --git a/arch/arm/kernel/perf_regs.c b/arch/arm/kernel/perf_regs.c index d575a4c3ca56..838d701adf4d 100644 --- a/arch/arm/kernel/perf_regs.c +++ b/arch/arm/kernel/perf_regs.c @@ -18,7 +18,7 @@ u64 perf_reg_value(struct pt_regs *regs, int idx) =20 #define REG_RESERVED (~((1ULL << PERF_REG_ARM_MAX) - 1)) =20 -int perf_reg_validate(u64 mask) +int perf_reg_validate(u64 mask, bool simd_enabled) { if (!mask || mask & REG_RESERVED) return -EINVAL; diff --git a/arch/arm64/kernel/perf_regs.c b/arch/arm64/kernel/perf_regs.c index 70e2f13f587f..71a3e0238de4 100644 --- a/arch/arm64/kernel/perf_regs.c +++ b/arch/arm64/kernel/perf_regs.c @@ -77,7 +77,7 @@ u64 perf_reg_value(struct pt_regs *regs, int idx) =20 #define REG_RESERVED (~((1ULL << PERF_REG_ARM64_MAX) - 1)) =20 -int perf_reg_validate(u64 mask) +int perf_reg_validate(u64 mask, bool simd_enabled) { u64 reserved_mask =3D REG_RESERVED; =20 diff --git a/arch/csky/kernel/perf_regs.c b/arch/csky/kernel/perf_regs.c index 94601f37b596..c932a96afc56 100644 --- a/arch/csky/kernel/perf_regs.c +++ b/arch/csky/kernel/perf_regs.c @@ -18,7 +18,7 @@ u64 perf_reg_value(struct pt_regs *regs, int idx) =20 #define REG_RESERVED (~((1ULL << PERF_REG_CSKY_MAX) - 1)) =20 -int perf_reg_validate(u64 mask) +int perf_reg_validate(u64 mask, bool simd_enabled) { if (!mask || mask & REG_RESERVED) return -EINVAL; diff --git a/arch/loongarch/kernel/perf_regs.c b/arch/loongarch/kernel/perf= _regs.c index 8dd604f01745..164514f40ae0 100644 --- 
a/arch/loongarch/kernel/perf_regs.c +++ b/arch/loongarch/kernel/perf_regs.c @@ -25,7 +25,7 @@ u64 perf_reg_abi(struct task_struct *tsk) } #endif /* CONFIG_32BIT */ =20 -int perf_reg_validate(u64 mask) +int perf_reg_validate(u64 mask, bool simd_enabled) { if (!mask) return -EINVAL; diff --git a/arch/mips/kernel/perf_regs.c b/arch/mips/kernel/perf_regs.c index 7736d3c5ebd2..00a5201dbd5d 100644 --- a/arch/mips/kernel/perf_regs.c +++ b/arch/mips/kernel/perf_regs.c @@ -28,7 +28,7 @@ u64 perf_reg_abi(struct task_struct *tsk) } #endif /* CONFIG_32BIT */ =20 -int perf_reg_validate(u64 mask) +int perf_reg_validate(u64 mask, bool simd_enabled) { if (!mask) return -EINVAL; diff --git a/arch/parisc/kernel/perf_regs.c b/arch/parisc/kernel/perf_regs.c index b9fe1f2fcb9b..4f21aab5405c 100644 --- a/arch/parisc/kernel/perf_regs.c +++ b/arch/parisc/kernel/perf_regs.c @@ -34,7 +34,7 @@ u64 perf_reg_value(struct pt_regs *regs, int idx) =20 #define REG_RESERVED (~((1ULL << PERF_REG_PARISC_MAX) - 1)) =20 -int perf_reg_validate(u64 mask) +int perf_reg_validate(u64 mask, bool simd_enabled) { if (!mask || mask & REG_RESERVED) return -EINVAL; diff --git a/arch/powerpc/perf/perf_regs.c b/arch/powerpc/perf/perf_regs.c index 350dccb0143c..a01d8a903640 100644 --- a/arch/powerpc/perf/perf_regs.c +++ b/arch/powerpc/perf/perf_regs.c @@ -125,7 +125,7 @@ u64 perf_reg_value(struct pt_regs *regs, int idx) return regs_get_register(regs, pt_regs_offset[idx]); } =20 -int perf_reg_validate(u64 mask) +int perf_reg_validate(u64 mask, bool simd_enabled) { if (!mask || mask & REG_RESERVED) return -EINVAL; diff --git a/arch/riscv/kernel/perf_regs.c b/arch/riscv/kernel/perf_regs.c index 3bba8deababb..1ecc8760b88b 100644 --- a/arch/riscv/kernel/perf_regs.c +++ b/arch/riscv/kernel/perf_regs.c @@ -18,7 +18,7 @@ u64 perf_reg_value(struct pt_regs *regs, int idx) =20 #define REG_RESERVED (~((1ULL << PERF_REG_RISCV_MAX) - 1)) =20 -int perf_reg_validate(u64 mask) +int perf_reg_validate(u64 mask, bool simd_enabled) { if 
(!mask || mask & REG_RESERVED) return -EINVAL; diff --git a/arch/s390/kernel/perf_regs.c b/arch/s390/kernel/perf_regs.c index 7b305f1456f8..6496fd23c540 100644 --- a/arch/s390/kernel/perf_regs.c +++ b/arch/s390/kernel/perf_regs.c @@ -34,7 +34,7 @@ u64 perf_reg_value(struct pt_regs *regs, int idx) =20 #define REG_RESERVED (~((1UL << PERF_REG_S390_MAX) - 1)) =20 -int perf_reg_validate(u64 mask) +int perf_reg_validate(u64 mask, bool simd_enabled) { if (!mask || mask & REG_RESERVED) return -EINVAL; diff --git a/arch/x86/kernel/perf_regs.c b/arch/x86/kernel/perf_regs.c index 2e3c10dffb35..9b3134220b3e 100644 --- a/arch/x86/kernel/perf_regs.c +++ b/arch/x86/kernel/perf_regs.c @@ -166,7 +166,7 @@ int perf_simd_reg_validate(u16 vec_qwords, u64 vec_mask, (1ULL << PERF_REG_X86_R14) | \ (1ULL << PERF_REG_X86_R15)) =20 -int perf_reg_validate(u64 mask) +int perf_reg_validate(u64 mask, bool simd_enabled) { if (!mask || (mask & (REG_NOSUPPORT | PERF_REG_X86_RESERVED))) return -EINVAL; @@ -185,7 +185,7 @@ u64 perf_reg_abi(struct task_struct *task) (1ULL << PERF_REG_X86_FS) | \ (1ULL << PERF_REG_X86_GS)) =20 -int perf_reg_validate(u64 mask) +int perf_reg_validate(u64 mask, bool simd_enabled) { /* The mask could be 0 if only the SIMD registers are interested */ if (mask & (REG_NOSUPPORT | PERF_REG_X86_RESERVED)) diff --git a/include/linux/perf_regs.h b/include/linux/perf_regs.h index 518f28c6a7d4..09dbc2fc3859 100644 --- a/include/linux/perf_regs.h +++ b/include/linux/perf_regs.h @@ -10,7 +10,7 @@ struct perf_regs { }; =20 u64 perf_reg_value(struct pt_regs *regs, int idx); -int perf_reg_validate(u64 mask); +int perf_reg_validate(u64 mask, bool simd_enabled); u64 perf_reg_abi(struct task_struct *task); void perf_get_regs_user(struct perf_regs *regs_user, struct pt_regs *regs); diff --git a/kernel/events/core.c b/kernel/events/core.c index 5742126f50cc..8b27b4873dd0 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -7728,7 +7728,7 @@ u64 __weak perf_reg_value(struct 
pt_regs *regs, int i= dx) return 0; } =20 -int __weak perf_reg_validate(u64 mask) +int __weak perf_reg_validate(u64 mask, bool simd_enabled) { return mask ? -ENOSYS : 0; } @@ -13614,7 +13614,8 @@ static int perf_copy_attr(struct perf_event_attr __= user *uattr, } =20 if (attr->sample_type & PERF_SAMPLE_REGS_USER) { - ret =3D perf_reg_validate(attr->sample_regs_user); + ret =3D perf_reg_validate(attr->sample_regs_user, + attr->sample_simd_regs_enabled); if (ret) return ret; ret =3D perf_simd_reg_validate(attr->sample_simd_vec_reg_qwords, @@ -13644,7 +13645,8 @@ static int perf_copy_attr(struct perf_event_attr __= user *uattr, attr->sample_max_stack =3D sysctl_perf_event_max_stack; =20 if (attr->sample_type & PERF_SAMPLE_REGS_INTR) { - ret =3D perf_reg_validate(attr->sample_regs_intr); + ret =3D perf_reg_validate(attr->sample_regs_intr, + attr->sample_simd_regs_enabled); if (ret) return ret; ret =3D perf_simd_reg_validate(attr->sample_simd_vec_reg_qwords, --=20 2.34.1 From nobody Tue Feb 10 14:26:01 2026 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.10]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CB13F31DDB8; Mon, 9 Feb 2026 07:26:19 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.10 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1770621980; cv=none; b=q2lw47kPhg6Ik/GwfMthv0DCoeCVyZVlAiT37702qBnhLp5eMeGnwC1bFOqOvkT8QhgJvvtsIeATRQXMPT8g7PfEbVfCFDFZ8HCrcZK+KCB3KOBolAaEPeEI5XDbyb/X4E+KTGReosliGBheT5P0ynZgsc5a8otcQy+wJAfNfXk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1770621980; c=relaxed/simple; bh=mmAWlcFDxm7sO2Ed3RNtwxAhiK9H8IWOinGZzb5eyxY=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; 
From: Dapeng Mi
To: Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Namhyung Kim , Thomas Gleixner , Dave Hansen , Ian Rogers , Adrian Hunter , Jiri Olsa , Alexander Shishkin , Andi Kleen , Eranian Stephane
Cc: Mark Rutland , broonie@kernel.org, Ravi Bangoria , linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Zide Chen , Falcon Thomas , Dapeng Mi , Xudong Hao , Kan Liang , Dapeng Mi
Subject: [Patch v6 18/22] perf/x86: Enable eGPRs sampling using sample_regs_* fields
Date: Mon, 9 Feb 2026 15:20:43 +0800
Message-Id: <20260209072047.2180332-19-dapeng1.mi@linux.intel.com>
In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>

From: Kan Liang

Enable sampling of the APX eGPRs (R16 ~ R31) via the sample_regs_*
fields. To sample eGPRs, the sample_simd_regs_enabled field must be
set; this repurposes the spare space (reclaimed from the original XMM
space) in the sample_regs_* fields to represent eGPRs.

The perf_reg_value() function must first check whether the
PERF_SAMPLE_REGS_ABI_SIMD flag is set, and then decide whether to
output eGPRs or the legacy XMM registers to userspace.

The perf_reg_validate() function first checks the simd_enabled
argument to determine whether the sample_regs_* fields carry an eGPR
bitmap, and then validates that bitmap accordingly.

eGPR sampling is only supported on x86_64, as APX is only available
on x86_64 platforms.
Suggested-by: Peter Zijlstra (Intel) Signed-off-by: Kan Liang Co-developed-by: Dapeng Mi Signed-off-by: Dapeng Mi --- arch/x86/events/core.c | 37 ++++++++++++++++------- arch/x86/events/perf_event.h | 10 +++++++ arch/x86/include/asm/perf_event.h | 4 +++ arch/x86/include/uapi/asm/perf_regs.h | 25 ++++++++++++++++ arch/x86/kernel/perf_regs.c | 43 ++++++++++++++++----------- 5 files changed, 90 insertions(+), 29 deletions(-) diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c index 2a674436f07e..b320a58ede3f 100644 --- a/arch/x86/events/core.c +++ b/arch/x86/events/core.c @@ -697,20 +697,21 @@ int x86_pmu_hw_config(struct perf_event *event) } =20 if (event->attr.sample_type & (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_U= SER)) { - /* - * Besides the general purpose registers, XMM registers may - * be collected as well. - */ - if (event_has_extended_regs(event)) { - if (!(event->pmu->capabilities & PERF_PMU_CAP_EXTENDED_REGS)) - return -EINVAL; - if (event->attr.sample_simd_regs_enabled) - return -EINVAL; - } - if (event_has_simd_regs(event)) { + u64 reserved =3D ~GENMASK_ULL(PERF_REG_MISC_MAX - 1, 0); + if (!(event->pmu->capabilities & PERF_PMU_CAP_SIMD_REGS)) return -EINVAL; + /* + * The XMM space in the perf_event_x86_regs is reclaimed + * for eGPRs and other general registers. + */ + if (event->attr.sample_regs_user & reserved || + event->attr.sample_regs_intr & reserved) + return -EINVAL; + if (event_needs_egprs(event) && + !(x86_pmu.ext_regs_mask & XFEATURE_MASK_APX)) + return -EINVAL; /* Not require any vector registers but set width */ if (event->attr.sample_simd_vec_reg_qwords && !event->attr.sample_simd_vec_reg_intr && @@ -732,6 +733,15 @@ int x86_pmu_hw_config(struct perf_event *event) if (event_needs_opmask(event) && !(x86_pmu.ext_regs_mask & XFEATURE_MASK_OPMASK)) return -EINVAL; + } else { + /* + * Besides the general purpose registers, XMM registers may + * be collected as well. 
+ */ + if (event_has_extended_regs(event)) { + if (!(event->pmu->capabilities & PERF_PMU_CAP_EXTENDED_REGS)) + return -EINVAL; + } } } @@ -1860,6 +1870,7 @@ inline void x86_pmu_clear_perf_regs(struct pt_regs *regs) perf_regs->zmmh_regs = NULL; perf_regs->h16zmm_regs = NULL; perf_regs->opmask_regs = NULL; + perf_regs->egpr_regs = NULL; } static inline void __x86_pmu_sample_ext_regs(u64 mask) @@ -1893,6 +1904,8 @@ static inline void x86_pmu_update_ext_regs(struct x86_perf_regs *perf_regs, perf_regs->h16zmm = get_xsave_addr(xsave, XFEATURE_Hi16_ZMM); if (mask & XFEATURE_MASK_OPMASK) perf_regs->opmask = get_xsave_addr(xsave, XFEATURE_OPMASK); + if (mask & XFEATURE_MASK_APX) + perf_regs->egpr = get_xsave_addr(xsave, XFEATURE_APX); } /* @@ -1960,6 +1973,8 @@ static void x86_pmu_sample_extended_regs(struct perf_event *event, mask |= XFEATURE_MASK_Hi16_ZMM; if (event_needs_opmask(event)) mask |= XFEATURE_MASK_OPMASK; + if (event_needs_egprs(event)) + mask |= XFEATURE_MASK_APX; mask &= x86_pmu.ext_regs_mask; if (sample_type & PERF_SAMPLE_REGS_USER) { diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h index c9d6379c4ddb..33c187f9b7ab 100644 --- a/arch/x86/events/perf_event.h +++ b/arch/x86/events/perf_event.h @@ -187,6 +187,16 @@ static inline bool event_needs_opmask(struct perf_event *event) return false; } +static inline bool event_needs_egprs(struct perf_event *event) +{ + if (event->attr.sample_simd_regs_enabled && + (event->attr.sample_regs_user & PERF_X86_EGPRS_MASK || + event->attr.sample_regs_intr & PERF_X86_EGPRS_MASK)) + return true; + + return false; +} + struct amd_nb { int nb_id; /* NorthBridge id */ int refcnt; /* reference count */ diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h index 6c5a34e0dfc8..cecf1e8d002f 100644 --- a/arch/x86/include/asm/perf_event.h +++ b/arch/x86/include/asm/perf_event.h @@ -730,6 +730,10 @@ struct x86_perf_regs { u64
*opmask_regs; struct avx_512_opmask_state *opmask; }; + union { + u64 *egpr_regs; + struct apx_state *egpr; + }; }; extern unsigned long perf_arch_instruction_pointer(struct pt_regs *regs); diff --git a/arch/x86/include/uapi/asm/perf_regs.h b/arch/x86/include/uapi/asm/perf_regs.h index dae39df134ec..f9b4086085bc 100644 --- a/arch/x86/include/uapi/asm/perf_regs.h +++ b/arch/x86/include/uapi/asm/perf_regs.h @@ -27,9 +27,33 @@ enum perf_event_x86_regs { PERF_REG_X86_R13, PERF_REG_X86_R14, PERF_REG_X86_R15, + /* + * The EGPRs and XMM have overlaps. Only one can be used + * at a time. For the ABI type PERF_SAMPLE_REGS_ABI_SIMD, + * utilize EGPRs. For the other ABI type, XMM is used. + * + * Extended GPRs (EGPRs) + */ + PERF_REG_X86_R16, + PERF_REG_X86_R17, + PERF_REG_X86_R18, + PERF_REG_X86_R19, + PERF_REG_X86_R20, + PERF_REG_X86_R21, + PERF_REG_X86_R22, + PERF_REG_X86_R23, + PERF_REG_X86_R24, + PERF_REG_X86_R25, + PERF_REG_X86_R26, + PERF_REG_X86_R27, + PERF_REG_X86_R28, + PERF_REG_X86_R29, + PERF_REG_X86_R30, + PERF_REG_X86_R31, /* These are the limits for the GPRs.
*/ PERF_REG_X86_32_MAX = PERF_REG_X86_GS + 1, PERF_REG_X86_64_MAX = PERF_REG_X86_R15 + 1, + PERF_REG_MISC_MAX = PERF_REG_X86_R31 + 1, /* These all need two bits set because they are 128bit */ PERF_REG_X86_XMM0 = 32, @@ -54,6 +78,7 @@ enum perf_event_x86_regs { }; #define PERF_REG_EXTENDED_MASK (~((1ULL << PERF_REG_X86_XMM0) - 1)) +#define PERF_X86_EGPRS_MASK GENMASK_ULL(PERF_REG_X86_R31, PERF_REG_X86_R16) enum { PERF_X86_SIMD_XMM_REGS = 16, diff --git a/arch/x86/kernel/perf_regs.c b/arch/x86/kernel/perf_regs.c index 9b3134220b3e..1c2a8c2c7bf1 100644 --- a/arch/x86/kernel/perf_regs.c +++ b/arch/x86/kernel/perf_regs.c @@ -61,14 +61,22 @@ u64 perf_reg_value(struct pt_regs *regs, int idx) { struct x86_perf_regs *perf_regs; - if (idx >= PERF_REG_X86_XMM0 && idx < PERF_REG_X86_XMM_MAX) { + if (idx > PERF_REG_X86_R15) { perf_regs = container_of(regs, struct x86_perf_regs, regs); - /* SIMD registers are moved to dedicated sample_simd_vec_reg */ - if (perf_regs->abi & PERF_SAMPLE_REGS_ABI_SIMD) - return 0; - if (!perf_regs->xmm_regs) - return 0; - return perf_regs->xmm_regs[idx - PERF_REG_X86_XMM0]; + + if (perf_regs->abi & PERF_SAMPLE_REGS_ABI_SIMD) { + if (idx <= PERF_REG_X86_R31) { + if (!perf_regs->egpr_regs) + return 0; + return perf_regs->egpr_regs[idx - PERF_REG_X86_R16]; + } + } else { + if (idx >= PERF_REG_X86_XMM0 && idx < PERF_REG_X86_XMM_MAX) { + if (!perf_regs->xmm_regs) + return 0; + return perf_regs->xmm_regs[idx - PERF_REG_X86_XMM0]; + } + } } if (WARN_ON_ONCE(idx >= ARRAY_SIZE(pt_regs_offset))) @@ -153,18 +161,12 @@ int perf_simd_reg_validate(u16 vec_qwords, u64 vec_mask, return 0; } -#define PERF_REG_X86_RESERVED (((1ULL << PERF_REG_X86_XMM0) - 1) & \ - ~((1ULL << PERF_REG_X86_MAX) - 1)) +#define PERF_REG_X86_RESERVED (GENMASK_ULL(PERF_REG_X86_XMM0 - 1, PERF_REG_X86_AX) & \ + ~GENMASK_ULL(PERF_REG_X86_R15, PERF_REG_X86_AX)) +#define PERF_REG_X86_EXT_RESERVED (~GENMASK_ULL(PERF_REG_MISC_MAX - 1, PERF_REG_X86_AX)) #ifdef CONFIG_X86_32 -#define REG_NOSUPPORT ((1ULL << PERF_REG_X86_R8) | \ - (1ULL << PERF_REG_X86_R9) | \ - (1ULL << PERF_REG_X86_R10) | \ - (1ULL << PERF_REG_X86_R11) | \ - (1ULL << PERF_REG_X86_R12) | \ - (1ULL << PERF_REG_X86_R13) | \ - (1ULL << PERF_REG_X86_R14) | \ - (1ULL << PERF_REG_X86_R15)) +#define REG_NOSUPPORT GENMASK_ULL(PERF_REG_X86_R15, PERF_REG_X86_R8) int perf_reg_validate(u64 mask, bool simd_enabled) { @@ -188,7 +190,12 @@ u64 perf_reg_abi(struct task_struct *task) int perf_reg_validate(u64 mask, bool simd_enabled) { /* The mask could be 0 if only the SIMD registers are of interest */ - if (mask & (REG_NOSUPPORT | PERF_REG_X86_RESERVED)) + if (!simd_enabled && + (mask & (REG_NOSUPPORT | PERF_REG_X86_RESERVED))) + return -EINVAL; + + if (simd_enabled && + (mask & (REG_NOSUPPORT | PERF_REG_X86_EXT_RESERVED))) + return -EINVAL; return 0; -- 2.34.1 From nobody Tue Feb 10 14:26:01 2026
From: Dapeng Mi To: Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Namhyung
Kim , Thomas Gleixner , Dave Hansen , Ian Rogers , Adrian Hunter , Jiri Olsa , Alexander Shishkin , Andi Kleen , Eranian Stephane Cc: Mark Rutland , broonie@kernel.org, Ravi Bangoria , linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Zide Chen , Falcon Thomas , Dapeng Mi , Xudong Hao , Kan Liang , Dapeng Mi Subject: [Patch v6 19/22] perf/x86: Enable SSP sampling using sample_regs_* fields Date: Mon, 9 Feb 2026 15:20:44 +0800 Message-Id: <20260209072047.2180332-20-dapeng1.mi@linux.intel.com> In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com> References: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com> From: Kan Liang Enable sampling of the CET SSP register via the sample_regs_* fields. To sample SSP, the sample_simd_regs_enabled field must be set; this allows the spare space (reclaimed from the original XMM space) in the sample_regs_* fields to be used to represent SSP. Similar to eGPRs sampling, the perf_reg_value() function needs to first check whether the PERF_SAMPLE_REGS_ABI_SIMD flag is set, and then determine whether to output SSP or legacy XMM registers to userspace. Additionally, arch-PEBS supports sampling SSP, which is placed into the GPRs group; this patch also enables arch-PEBS based SSP sampling. Currently, SSP sampling is supported only on x86_64, as CET is available only on x86_64 platforms. Signed-off-by: Kan Liang Co-developed-by: Dapeng Mi Signed-off-by: Dapeng Mi --- V6: Ensure the SSP value is 0 for non-user-space sampling, since SSP is currently enabled only for user space.
arch/x86/events/core.c | 9 +++++++++ arch/x86/events/intel/ds.c | 7 +++++++ arch/x86/events/perf_event.h | 10 ++++++++++ arch/x86/include/asm/perf_event.h | 4 ++++ arch/x86/include/uapi/asm/perf_regs.h | 7 ++++--- arch/x86/kernel/perf_regs.c | 5 +++++ 6 files changed, 39 insertions(+), 3 deletions(-) diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c index b320a58ede3f..81dc23e658f2 100644 --- a/arch/x86/events/core.c +++ b/arch/x86/events/core.c @@ -712,6 +712,10 @@ int x86_pmu_hw_config(struct perf_event *event) if (event_needs_egprs(event) && !(x86_pmu.ext_regs_mask & XFEATURE_MASK_APX)) return -EINVAL; + if (event_needs_ssp(event) && + !(x86_pmu.ext_regs_mask & XFEATURE_MASK_CET_USER)) + return -EINVAL; + /* No vector registers required, but the width is set */ if (event->attr.sample_simd_vec_reg_qwords && !event->attr.sample_simd_vec_reg_intr && @@ -1871,6 +1875,7 @@ inline void x86_pmu_clear_perf_regs(struct pt_regs *regs) perf_regs->h16zmm_regs = NULL; perf_regs->opmask_regs = NULL; perf_regs->egpr_regs = NULL; + perf_regs->cet_regs = NULL; } static inline void __x86_pmu_sample_ext_regs(u64 mask) @@ -1906,6 +1911,8 @@ static inline void x86_pmu_update_ext_regs(struct x86_perf_regs *perf_regs, perf_regs->opmask = get_xsave_addr(xsave, XFEATURE_OPMASK); if (mask & XFEATURE_MASK_APX) perf_regs->egpr = get_xsave_addr(xsave, XFEATURE_APX); + if (mask & XFEATURE_MASK_CET_USER) + perf_regs->cet = get_xsave_addr(xsave, XFEATURE_CET_USER); } /* @@ -1975,6 +1982,8 @@ static void x86_pmu_sample_extended_regs(struct perf_event *event, mask |= XFEATURE_MASK_OPMASK; if (event_needs_egprs(event)) mask |= XFEATURE_MASK_APX; + if (event_needs_ssp(event)) + mask |= XFEATURE_MASK_CET_USER; mask &= x86_pmu.ext_regs_mask; if (sample_type & PERF_SAMPLE_REGS_USER) { diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c index 272725d749df..ff8707885f74 100644 --- a/arch/x86/events/intel/ds.c +++
b/arch/x86/events/intel/ds.c @@ -2680,6 +2680,13 @@ static void setup_arch_pebs_sample_data(struct perf_event *event, __setup_pebs_gpr_group(event, data, regs, (struct pebs_gprs *)gprs, sample_type); + + /* Currently only user space mode enables SSP. */ + if (user_mode(regs) && (sample_type & + (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER))) { + perf_regs->cet_regs = &gprs->r15; + ignore_mask = XFEATURE_MASK_CET_USER; + } } if (header->aux) { diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h index 33c187f9b7ab..fdfb34d7b1d2 100644 --- a/arch/x86/events/perf_event.h +++ b/arch/x86/events/perf_event.h @@ -197,6 +197,16 @@ static inline bool event_needs_egprs(struct perf_event *event) return false; } +static inline bool event_needs_ssp(struct perf_event *event) +{ + if (event->attr.sample_simd_regs_enabled && + (event->attr.sample_regs_user & BIT_ULL(PERF_REG_X86_SSP) || + event->attr.sample_regs_intr & BIT_ULL(PERF_REG_X86_SSP))) + return true; + + return false; +} + struct amd_nb { int nb_id; /* NorthBridge id */ int refcnt; /* reference count */ diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h index cecf1e8d002f..98fef9db0aa3 100644 --- a/arch/x86/include/asm/perf_event.h +++ b/arch/x86/include/asm/perf_event.h @@ -734,6 +734,10 @@ struct x86_perf_regs { u64 *egpr_regs; struct apx_state *egpr; }; + union { + u64 *cet_regs; + struct cet_user_state *cet; + }; }; extern unsigned long perf_arch_instruction_pointer(struct pt_regs *regs); diff --git a/arch/x86/include/uapi/asm/perf_regs.h b/arch/x86/include/uapi/asm/perf_regs.h index f9b4086085bc..6da63e1dbb40 100644 --- a/arch/x86/include/uapi/asm/perf_regs.h +++ b/arch/x86/include/uapi/asm/perf_regs.h @@ -28,9 +28,9 @@ enum perf_event_x86_regs { PERF_REG_X86_R14, PERF_REG_X86_R15, /* - * The EGPRs and XMM have overlaps. Only one can be used + * The EGPRs/SSP and XMM have overlaps. Only one can be used * at a time.
For the ABI type PERF_SAMPLE_REGS_ABI_SIMD, - * utilize EGPRs. For the other ABI type, XMM is used. + * utilize EGPRs/SSP. For the other ABI type, XMM is used. * * Extended GPRs (EGPRs) */ @@ -50,10 +50,11 @@ enum perf_event_x86_regs { PERF_REG_X86_R29, PERF_REG_X86_R30, PERF_REG_X86_R31, + PERF_REG_X86_SSP, /* These are the limits for the GPRs. */ PERF_REG_X86_32_MAX = PERF_REG_X86_GS + 1, PERF_REG_X86_64_MAX = PERF_REG_X86_R15 + 1, - PERF_REG_MISC_MAX = PERF_REG_X86_R31 + 1, + PERF_REG_MISC_MAX = PERF_REG_X86_SSP + 1, /* These all need two bits set because they are 128bit */ PERF_REG_X86_XMM0 = 32, diff --git a/arch/x86/kernel/perf_regs.c b/arch/x86/kernel/perf_regs.c index 1c2a8c2c7bf1..2e7d83f26cc0 100644 --- a/arch/x86/kernel/perf_regs.c +++ b/arch/x86/kernel/perf_regs.c @@ -70,6 +70,11 @@ u64 perf_reg_value(struct pt_regs *regs, int idx) return 0; return perf_regs->egpr_regs[idx - PERF_REG_X86_R16]; } + if (idx == PERF_REG_X86_SSP) { + if (!perf_regs->cet_regs) + return 0; + return perf_regs->cet_regs[1]; + } } else { if (idx >= PERF_REG_X86_XMM0 && idx < PERF_REG_X86_XMM_MAX) { if (!perf_regs->xmm_regs) -- 2.34.1 From nobody Tue Feb 10 14:26:01 2026
Received: from spr.sh.intel.com ([10.112.229.196]) by fmviesa001.fm.intel.com with ESMTP; 08 Feb 2026 23:26:25 -0800 From: Dapeng Mi To: Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Namhyung Kim , Thomas Gleixner , Dave Hansen , Ian Rogers , Adrian Hunter , Jiri Olsa , Alexander Shishkin , Andi Kleen , Eranian Stephane Cc: Mark Rutland , broonie@kernel.org, Ravi Bangoria , linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Zide Chen , Falcon Thomas , Dapeng Mi , Xudong Hao , Kan Liang , Dapeng Mi Subject: [Patch v6 20/22] perf/x86/intel: Enable PERF_PMU_CAP_SIMD_REGS capability Date: Mon, 9 Feb 2026 15:20:45 +0800 Message-Id: <20260209072047.2180332-21-dapeng1.mi@linux.intel.com> In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com> References: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com> From: Kan Liang Enable the PERF_PMU_CAP_SIMD_REGS capability if XSAVES support is available for YMM, ZMM, OPMASK, eGPRs, or SSP. Temporarily disable large PEBS sampling for these registers, as the current arch-PEBS sampling code does not support them yet; large PEBS sampling for these registers will be enabled in subsequent patches.
Signed-off-by: Kan Liang Signed-off-by: Dapeng Mi --- arch/x86/events/intel/core.c | 52 ++++++++++++++++++++++++++++++++---- 1 file changed, 47 insertions(+), 5 deletions(-) diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c index ae7693e586d3..1f063a1418fb 100644 --- a/arch/x86/events/intel/core.c +++ b/arch/x86/events/intel/core.c @@ -4426,11 +4426,33 @@ static unsigned long intel_pmu_large_pebs_flags(struct perf_event *event) flags &= ~PERF_SAMPLE_TIME; if (!event->attr.exclude_kernel) flags &= ~PERF_SAMPLE_REGS_USER; - if (event->attr.sample_regs_user & ~PEBS_GP_REGS) - flags &= ~PERF_SAMPLE_REGS_USER; - if (event->attr.sample_regs_intr & - ~(PEBS_GP_REGS | PERF_REG_EXTENDED_MASK)) - flags &= ~PERF_SAMPLE_REGS_INTR; + if (event->attr.sample_simd_regs_enabled) { + u64 nolarge = PERF_X86_EGPRS_MASK | BIT_ULL(PERF_REG_X86_SSP); + + /* + * PEBS HW can only collect the XMM0-XMM15 for now. + * Disable large PEBS for other vector registers, predicate + * registers, eGPRs, and SSP.
+ */ + if (event->attr.sample_regs_user & nolarge || + fls64(event->attr.sample_simd_vec_reg_user) > PERF_X86_H16ZMM_BASE || + event->attr.sample_simd_pred_reg_user) + flags &= ~PERF_SAMPLE_REGS_USER; + + if (event->attr.sample_regs_intr & nolarge || + fls64(event->attr.sample_simd_vec_reg_intr) > PERF_X86_H16ZMM_BASE || + event->attr.sample_simd_pred_reg_intr) + flags &= ~PERF_SAMPLE_REGS_INTR; + + if (event->attr.sample_simd_vec_reg_qwords > PERF_X86_XMM_QWORDS) + flags &= ~(PERF_SAMPLE_REGS_USER | PERF_SAMPLE_REGS_INTR); + } else { + if (event->attr.sample_regs_user & ~PEBS_GP_REGS) + flags &= ~PERF_SAMPLE_REGS_USER; + if (event->attr.sample_regs_intr & + ~(PEBS_GP_REGS | PERF_REG_EXTENDED_MASK)) + flags &= ~PERF_SAMPLE_REGS_INTR; + } return flags; } @@ -5904,6 +5926,26 @@ static void intel_extended_regs_init(struct pmu *pmu) x86_pmu.ext_regs_mask |= XFEATURE_MASK_SSE; x86_get_pmu(smp_processor_id())->capabilities |= PERF_PMU_CAP_EXTENDED_REGS; + + if (boot_cpu_has(X86_FEATURE_AVX) && + cpu_has_xfeatures(XFEATURE_MASK_YMM, NULL)) + x86_pmu.ext_regs_mask |= XFEATURE_MASK_YMM; + if (boot_cpu_has(X86_FEATURE_APX) && + cpu_has_xfeatures(XFEATURE_MASK_APX, NULL)) + x86_pmu.ext_regs_mask |= XFEATURE_MASK_APX; + if (boot_cpu_has(X86_FEATURE_AVX512F)) { + if (cpu_has_xfeatures(XFEATURE_MASK_OPMASK, NULL)) + x86_pmu.ext_regs_mask |= XFEATURE_MASK_OPMASK; + if (cpu_has_xfeatures(XFEATURE_MASK_ZMM_Hi256, NULL)) + x86_pmu.ext_regs_mask |= XFEATURE_MASK_ZMM_Hi256; + if (cpu_has_xfeatures(XFEATURE_MASK_Hi16_ZMM, NULL)) + x86_pmu.ext_regs_mask |= XFEATURE_MASK_Hi16_ZMM; + } + if (cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) + x86_pmu.ext_regs_mask |= XFEATURE_MASK_CET_USER; + + if (x86_pmu.ext_regs_mask != XFEATURE_MASK_SSE) + x86_get_pmu(smp_processor_id())->capabilities |= PERF_PMU_CAP_SIMD_REGS; } #define counter_mask(_gp, _fixed) ((_gp) | ((u64)(_fixed) << INTEL_PMC_IDX_FIXED)) -- 2.34.1 From nobody Tue Feb 10 14:26:01
2026
From: Dapeng Mi To: Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Namhyung Kim , Thomas Gleixner , Dave Hansen , Ian Rogers , Adrian Hunter , Jiri Olsa , Alexander Shishkin , Andi Kleen , Eranian Stephane Cc: Mark Rutland , broonie@kernel.org, Ravi Bangoria , linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Zide Chen , Falcon Thomas , Dapeng Mi , Xudong Hao , Dapeng Mi Subject: [Patch v6 21/22] perf/x86/intel: Enable arch-PEBS based SIMD/eGPRs/SSP sampling Date: Mon, 9 Feb 2026 15:20:46 +0800 Message-Id: <20260209072047.2180332-22-dapeng1.mi@linux.intel.com> In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com> References: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com> Enable arch-PEBS based SIMD/eGPRs/SSP register
sampling. Arch-PEBS supports sampling of these registers, with all except SSP placed into the XSAVE-Enabled Registers (XER) group with the layout described below. Field Name Registers Used Size ---------------------------------------------------------------------- XSTATE_BV XINUSE for groups 8 B ---------------------------------------------------------------------- Reserved Reserved 8 B ---------------------------------------------------------------------- SSER XMM0-XMM15 16 regs * 16 B = 256 B ---------------------------------------------------------------------- YMMHIR Upper 128 bits of YMM0-YMM15 16 regs * 16 B = 256 B ---------------------------------------------------------------------- EGPR R16-R31 16 regs * 8 B = 128 B ---------------------------------------------------------------------- OPMASKR K0-K7 8 regs * 8 B = 64 B ---------------------------------------------------------------------- ZMMHIR Upper 256 bits of ZMM0-ZMM15 16 regs * 32 B = 512 B ---------------------------------------------------------------------- Hi16ZMMR ZMM16-ZMM31 16 regs * 64 B = 1024 B ---------------------------------------------------------------------- Memory space in the output buffer is allocated for these sub-groups as long as the corresponding Format.XER[55:49] bits in the PEBS record header are set. However, the arch-PEBS hardware engine does not write a sub-group that is unused (in the INIT state); in that case, the corresponding bit in the XSTATE_BV bitmap is set to 0. The XSTATE_BV field is therefore checked to determine whether the register data was actually written for each PEBS record; if not, the register data is not output to userspace. The SSP register is sampled and placed into the GPRs group by arch-PEBS. Additionally, the MSRs IA32_PMC_{GPn|FXm}_CFG_C.[55:49] bits are used to manage which types of these registers need to be sampled.
Signed-off-by: Dapeng Mi --- arch/x86/events/intel/core.c | 75 ++++++++++++++++++++++++++++------- arch/x86/events/intel/ds.c | 77 ++++++++++++++++++++++++++++--- arch/x86/include/asm/msr-index.h | 7 +++ arch/x86/include/asm/perf_event.h | 8 +++- 4 files changed, 142 insertions(+), 25 deletions(-) diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c index 1f063a1418fb..c57a70798364 100644 --- a/arch/x86/events/intel/core.c +++ b/arch/x86/events/intel/core.c @@ -3221,6 +3221,21 @@ static void intel_pmu_enable_event_ext(struct perf_event *event) if (pebs_data_cfg & PEBS_DATACFG_XMMS) ext |= ARCH_PEBS_VECR_XMM & cap.caps; + if (pebs_data_cfg & PEBS_DATACFG_YMMHS) + ext |= ARCH_PEBS_VECR_YMMH & cap.caps; + + if (pebs_data_cfg & PEBS_DATACFG_EGPRS) + ext |= ARCH_PEBS_VECR_EGPRS & cap.caps; + + if (pebs_data_cfg & PEBS_DATACFG_OPMASKS) + ext |= ARCH_PEBS_VECR_OPMASK & cap.caps; + + if (pebs_data_cfg & PEBS_DATACFG_ZMMHS) + ext |= ARCH_PEBS_VECR_ZMMH & cap.caps; + + if (pebs_data_cfg & PEBS_DATACFG_H16ZMMS) + ext |= ARCH_PEBS_VECR_H16ZMM & cap.caps; + if (pebs_data_cfg & PEBS_DATACFG_LBRS) ext |= ARCH_PEBS_LBR & cap.caps; @@ -4418,6 +4433,34 @@ static void intel_pebs_aliases_skl(struct perf_event *event) return intel_pebs_aliases_precdist(event); } +static inline bool intel_pebs_support_regs(struct perf_event *event, u64 regs) +{ + struct arch_pebs_cap cap = hybrid(event->pmu, arch_pebs_cap); + int pebs_format = x86_pmu.intel_cap.pebs_format; + bool supported = true; + + /* SSP */ + if (regs & PEBS_DATACFG_GP) + supported &= x86_pmu.arch_pebs && (ARCH_PEBS_GPR & cap.caps); + if (regs & PEBS_DATACFG_XMMS) { + supported &= x86_pmu.arch_pebs ?
+ ARCH_PEBS_VECR_XMM & cap.caps : + pebs_format > 3 && x86_pmu.intel_cap.pebs_baseline; + } + if (regs & PEBS_DATACFG_YMMHS) + supported &= x86_pmu.arch_pebs && (ARCH_PEBS_VECR_YMMH & cap.caps); + if (regs & PEBS_DATACFG_EGPRS) + supported &= x86_pmu.arch_pebs && (ARCH_PEBS_VECR_EGPRS & cap.caps); + if (regs & PEBS_DATACFG_OPMASKS) + supported &= x86_pmu.arch_pebs && (ARCH_PEBS_VECR_OPMASK & cap.caps); + if (regs & PEBS_DATACFG_ZMMHS) + supported &= x86_pmu.arch_pebs && (ARCH_PEBS_VECR_ZMMH & cap.caps); + if (regs & PEBS_DATACFG_H16ZMMS) + supported &= x86_pmu.arch_pebs && (ARCH_PEBS_VECR_H16ZMM & cap.caps); + + return supported; +} + static unsigned long intel_pmu_large_pebs_flags(struct perf_event *event) { unsigned long flags = x86_pmu.large_pebs_flags; @@ -4427,24 +4470,20 @@ static unsigned long intel_pmu_large_pebs_flags(struct perf_event *event) if (!event->attr.exclude_kernel) flags &= ~PERF_SAMPLE_REGS_USER; if (event->attr.sample_simd_regs_enabled) { - u64 nolarge = PERF_X86_EGPRS_MASK | BIT_ULL(PERF_REG_X86_SSP); - - /* - * PEBS HW can only collect the XMM0-XMM15 for now. - * Disable large PEBS for other vector registers, predicate - * registers, eGPRs, and SSP.
-		 */
-		if (event->attr.sample_regs_user & nolarge ||
-		    fls64(event->attr.sample_simd_vec_reg_user) > PERF_X86_H16ZMM_BASE ||
-		    event->attr.sample_simd_pred_reg_user)
-			flags &= ~PERF_SAMPLE_REGS_USER;
-
-		if (event->attr.sample_regs_intr & nolarge ||
-		    fls64(event->attr.sample_simd_vec_reg_intr) > PERF_X86_H16ZMM_BASE ||
-		    event->attr.sample_simd_pred_reg_intr)
-			flags &= ~PERF_SAMPLE_REGS_INTR;
-
-		if (event->attr.sample_simd_vec_reg_qwords > PERF_X86_XMM_QWORDS)
+		if ((event_needs_ssp(event) &&
+		     !intel_pebs_support_regs(event, PEBS_DATACFG_GP)) ||
+		    (event_needs_xmm(event) &&
+		     !intel_pebs_support_regs(event, PEBS_DATACFG_XMMS)) ||
+		    (event_needs_ymm(event) &&
+		     !intel_pebs_support_regs(event, PEBS_DATACFG_YMMHS)) ||
+		    (event_needs_egprs(event) &&
+		     !intel_pebs_support_regs(event, PEBS_DATACFG_EGPRS)) ||
+		    (event_needs_opmask(event) &&
+		     !intel_pebs_support_regs(event, PEBS_DATACFG_OPMASKS)) ||
+		    (event_needs_low16_zmm(event) &&
+		     !intel_pebs_support_regs(event, PEBS_DATACFG_ZMMHS)) ||
+		    (event_needs_high16_zmm(event) &&
+		     !intel_pebs_support_regs(event, PEBS_DATACFG_H16ZMMS)))
 			flags &= ~(PERF_SAMPLE_REGS_USER | PERF_SAMPLE_REGS_INTR);
 	} else {
 		if (event->attr.sample_regs_user & ~PEBS_GP_REGS)
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index ff8707885f74..2851622fbf0f 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1732,11 +1732,22 @@ static u64 pebs_update_adaptive_cfg(struct perf_event *event)
 		((attr->config & INTEL_ARCH_EVENT_MASK) == x86_pmu.rtm_abort_event);
 
-	if (gprs || (attr->precise_ip < 2) || tsx_weight)
+	if (gprs || (attr->precise_ip < 2) ||
+	    tsx_weight || event_needs_ssp(event))
 		pebs_data_cfg |= PEBS_DATACFG_GP;
 
 	if (event_needs_xmm(event))
 		pebs_data_cfg |= PEBS_DATACFG_XMMS;
+	if (event_needs_ymm(event))
+		pebs_data_cfg |= PEBS_DATACFG_YMMHS;
+	if (event_needs_low16_zmm(event))
+		pebs_data_cfg |= PEBS_DATACFG_ZMMHS;
+	if (event_needs_high16_zmm(event))
+		
pebs_data_cfg |= PEBS_DATACFG_H16ZMMS;
+	if (event_needs_opmask(event))
+		pebs_data_cfg |= PEBS_DATACFG_OPMASKS;
+	if (event_needs_egprs(event))
+		pebs_data_cfg |= PEBS_DATACFG_EGPRS;
 
 	if (sample_type & PERF_SAMPLE_BRANCH_STACK) {
 		/*
@@ -2699,15 +2710,69 @@ static void setup_arch_pebs_sample_data(struct perf_event *event,
 				     meminfo->tsx_tuning, ax);
 	}
 
-	if (header->xmm) {
+	if (header->xmm || header->ymmh || header->egpr ||
+	    header->opmask || header->zmmh || header->h16zmm) {
+		struct arch_pebs_xer_header *xer_header = next_record;
 		struct pebs_xmm *xmm;
+		struct ymmh_struct *ymmh;
+		struct avx_512_zmm_uppers_state *zmmh;
+		struct avx_512_hi16_state *h16zmm;
+		struct avx_512_opmask_state *opmask;
+		struct apx_state *egpr;
 
 		next_record += sizeof(struct arch_pebs_xer_header);
 
-		ignore_mask |= XFEATURE_MASK_SSE;
-		xmm = next_record;
-		perf_regs->xmm_regs = xmm->xmm;
-		next_record = xmm + 1;
+		if (header->xmm) {
+			ignore_mask |= XFEATURE_MASK_SSE;
+			xmm = next_record;
+			/*
+			 * Only output XMM regs to user space when arch-PEBS
+			 * really writes data into the xstate area.
+			 */
+			if (xer_header->xstate & XFEATURE_MASK_SSE)
+				perf_regs->xmm_regs = xmm->xmm;
+			next_record = xmm + 1;
+		}
+
+		if (header->ymmh) {
+			ignore_mask |= XFEATURE_MASK_YMM;
+			ymmh = next_record;
+			if (xer_header->xstate & XFEATURE_MASK_YMM)
+				perf_regs->ymmh = ymmh;
+			next_record = ymmh + 1;
+		}
+
+		if (header->egpr) {
+			ignore_mask |= XFEATURE_MASK_APX;
+			egpr = next_record;
+			if (xer_header->xstate & XFEATURE_MASK_APX)
+				perf_regs->egpr = egpr;
+			next_record = egpr + 1;
+		}
+
+		if (header->opmask) {
+			ignore_mask |= XFEATURE_MASK_OPMASK;
+			opmask = next_record;
+			if (xer_header->xstate & XFEATURE_MASK_OPMASK)
+				perf_regs->opmask = opmask;
+			next_record = opmask + 1;
+		}
+
+		if (header->zmmh) {
+			ignore_mask |= XFEATURE_MASK_ZMM_Hi256;
+			zmmh = next_record;
+			if (xer_header->xstate & XFEATURE_MASK_ZMM_Hi256)
+				perf_regs->zmmh = zmmh;
+			next_record = zmmh + 1;
+		}
+
+		if (header->h16zmm) {
+			ignore_mask |= XFEATURE_MASK_Hi16_ZMM;
+			h16zmm = next_record;
+			if (xer_header->xstate & XFEATURE_MASK_Hi16_ZMM)
+				perf_regs->h16zmm = h16zmm;
+			next_record = h16zmm + 1;
+		}
 	}
 
 	if (header->lbr) {
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 6d1b69ea01c2..6c915781fdd3 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -350,6 +350,13 @@
 #define ARCH_PEBS_LBR_SHIFT	40
 #define ARCH_PEBS_LBR		(0x3ull << ARCH_PEBS_LBR_SHIFT)
 #define ARCH_PEBS_VECR_XMM	BIT_ULL(49)
+#define ARCH_PEBS_VECR_YMMH	BIT_ULL(50)
+#define ARCH_PEBS_VECR_EGPRS	BIT_ULL(51)
+#define ARCH_PEBS_VECR_OPMASK	BIT_ULL(53)
+#define ARCH_PEBS_VECR_ZMMH	BIT_ULL(54)
+#define ARCH_PEBS_VECR_H16ZMM	BIT_ULL(55)
+#define ARCH_PEBS_VECR_EXT_SHIFT	50
+#define ARCH_PEBS_VECR_EXT	(0x3full << ARCH_PEBS_VECR_EXT_SHIFT)
 #define ARCH_PEBS_GPR		BIT_ULL(61)
 #define ARCH_PEBS_AUX		BIT_ULL(62)
 #define ARCH_PEBS_EN		BIT_ULL(63)
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 98fef9db0aa3..3665a0a2148e 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -148,6 +148,11 @@
 #define PEBS_DATACFG_LBRS	BIT_ULL(3)
 #define PEBS_DATACFG_CNTR	BIT_ULL(4)
 #define PEBS_DATACFG_METRICS	BIT_ULL(5)
+#define PEBS_DATACFG_YMMHS	BIT_ULL(6)
+#define PEBS_DATACFG_OPMASKS	BIT_ULL(7)
+#define PEBS_DATACFG_ZMMHS	BIT_ULL(8)
+#define PEBS_DATACFG_H16ZMMS	BIT_ULL(9)
+#define PEBS_DATACFG_EGPRS	BIT_ULL(10)
 #define PEBS_DATACFG_LBR_SHIFT	24
 #define PEBS_DATACFG_CNTR_SHIFT	32
 #define PEBS_DATACFG_CNTR_MASK	GENMASK_ULL(15, 0)
@@ -545,7 +550,8 @@ struct arch_pebs_header {
 			rsvd3:7,
 			xmm:1,
 			ymmh:1,
-			rsvd4:2,
+			egpr:1,
+			rsvd4:1,
 			opmask:1,
 			zmmh:1,
 			h16zmm:1,
-- 
2.34.1

From nobody Tue Feb 10 14:26:01 2026
From: Dapeng Mi
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim,
 Thomas Gleixner, Dave Hansen, Ian Rogers, Adrian Hunter, Jiri Olsa,
 Alexander Shishkin, Andi Kleen, Eranian Stephane
Cc: Mark Rutland, broonie@kernel.org, Ravi Bangoria,
 linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
 Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao, Dapeng Mi
Subject: [Patch v6 22/22] perf/x86: Activate back-to-back NMI detection for arch-PEBS induced NMIs
Date: Mon, 9 Feb 2026 15:20:47 +0800
Message-Id: <20260209072047.2180332-23-dapeng1.mi@linux.intel.com>
In-Reply-To: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>
References: <20260209072047.2180332-1-dapeng1.mi@linux.intel.com>

When two or more identical PEBS events with the same sampling period are
programmed on a mix of PDIST and non-PDIST counters, multiple back-to-back
NMIs can be triggered. The Linux PMI handler processes the first NMI and
clears the GLOBAL_STATUS MSR. If a second NMI arrives immediately after
the first, it is flagged as a "suspicious NMI" because no bits are set in
the GLOBAL_STATUS MSR (they were already cleared by the first NMI).

This issue does not cause PEBS data corruption or loss, but it does
produce an annoying warning message.

The NMI handler already supports back-to-back NMI detection, but that
requires the PMI handler to return the count of actually processed
events, which the PEBS handlers currently do not do.

Modify the PEBS handlers to return the count of actually processed
events, thereby activating back-to-back NMI detection and avoiding the
"suspicious NMI" warning.
Suggested-by: Andi Kleen
Signed-off-by: Dapeng Mi
---
V6: Enhance b2b NMI detection in all PEBS handlers to ensure identical
    behavior across the PEBS handlers.

 arch/x86/events/intel/core.c |  6 ++----
 arch/x86/events/intel/ds.c   | 40 ++++++++++++++++++++++------------
 arch/x86/events/perf_event.h |  2 +-
 3 files changed, 30 insertions(+), 18 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index c57a70798364..387205c5d5b5 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3558,9 +3558,8 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
 	if (__test_and_clear_bit(GLOBAL_STATUS_BUFFER_OVF_BIT, (unsigned long *)&status)) {
 		u64 pebs_enabled = cpuc->pebs_enabled;
 
-		handled++;
 		x86_pmu_handle_guest_pebs(regs, &data);
-		static_call(x86_pmu_drain_pebs)(regs, &data);
+		handled += static_call(x86_pmu_drain_pebs)(regs, &data);
 
 		/*
 		 * PMI throttle may be triggered, which stops the PEBS event.
@@ -3589,8 +3588,7 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
 	 */
 	if (__test_and_clear_bit(GLOBAL_STATUS_ARCH_PEBS_THRESHOLD_BIT,
				 (unsigned long *)&status)) {
-		handled++;
-		static_call(x86_pmu_drain_pebs)(regs, &data);
+		handled += static_call(x86_pmu_drain_pebs)(regs, &data);
 
 		if (cpuc->events[INTEL_PMC_IDX_FIXED_SLOTS] &&
 		    is_pebs_counter_event_group(cpuc->events[INTEL_PMC_IDX_FIXED_SLOTS]))
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 2851622fbf0f..94ada08360f1 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -3029,7 +3029,7 @@ __intel_pmu_pebs_events(struct perf_event *event,
 	__intel_pmu_pebs_last_event(event, iregs, regs, data, at, count, setup_sample);
 }
 
-static void intel_pmu_drain_pebs_core(struct pt_regs *iregs, struct perf_sample_data *data)
+static int intel_pmu_drain_pebs_core(struct pt_regs *iregs, struct perf_sample_data *data)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
 	struct debug_store *ds = cpuc->ds;
@@ -3038,7 +3038,7 @@ static void intel_pmu_drain_pebs_core(struct pt_regs *iregs, struct perf_sample_
 	int n;
 
 	if (!x86_pmu.pebs_active)
-		return;
+		return 0;
 
 	at = (struct pebs_record_core *)(unsigned long)ds->pebs_buffer_base;
 	top = (struct pebs_record_core *)(unsigned long)ds->pebs_index;
@@ -3049,22 +3049,24 @@ static void intel_pmu_drain_pebs_core(struct pt_regs *iregs, struct perf_sample_
 	ds->pebs_index = ds->pebs_buffer_base;
 
 	if (!test_bit(0, cpuc->active_mask))
-		return;
+		return 0;
 
 	WARN_ON_ONCE(!event);
 
 	if (!event->attr.precise_ip)
-		return;
+		return 0;
 
 	n = top - at;
 	if (n <= 0) {
 		if (event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD)
 			intel_pmu_save_and_restart_reload(event, 0);
-		return;
+		return 0;
 	}
 
 	__intel_pmu_pebs_events(event, iregs, data, at, top, 0, n,
 				setup_pebs_fixed_sample_data);
+
+	return 1; /* PMC0 only */
 }
 
 static void intel_pmu_pebs_event_update_no_drain(struct cpu_hw_events *cpuc, u64 mask)
@@ -3087,7 +3089,7 @@ static void intel_pmu_pebs_event_update_no_drain(struct cpu_hw_events *cpuc, u64
 	}
 }
 
-static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_data *data)
+static int intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_data *data)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
 	struct debug_store *ds = cpuc->ds;
@@ -3096,11 +3098,12 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_d
 	short counts[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS] = {};
 	short error[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS] = {};
 	int max_pebs_events = intel_pmu_max_num_pebs(NULL);
+	u64 events_bitmap = 0;
 	int bit, i, size;
 	u64 mask;
 
 	if (!x86_pmu.pebs_active)
-		return;
+		return 0;
 
 	base = (struct pebs_record_nhm *)(unsigned long)ds->pebs_buffer_base;
 	top = (struct pebs_record_nhm *)(unsigned long)ds->pebs_index;
@@ -3116,7 +3119,7 @@ static void
intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_d
 
 	if (unlikely(base >= top)) {
 		intel_pmu_pebs_event_update_no_drain(cpuc, mask);
-		return;
+		return 0;
 	}
 
 	for (at = base; at < top; at += x86_pmu.pebs_record_size) {
@@ -3180,6 +3183,7 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_d
 		if ((counts[bit] == 0) && (error[bit] == 0))
 			continue;
 
+		events_bitmap |= BIT_ULL(bit);
 		event = cpuc->events[bit];
 		if (WARN_ON_ONCE(!event))
 			continue;
@@ -3201,6 +3205,8 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_d
 					setup_pebs_fixed_sample_data);
 		}
 	}
+
+	return hweight64(events_bitmap);
 }
 
 static __always_inline void
@@ -3256,7 +3262,7 @@ __intel_pmu_handle_last_pebs_record(struct pt_regs *iregs,
 
 DEFINE_PER_CPU(struct x86_perf_regs, pebs_perf_regs);
 
-static void intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_data *data)
+static int intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_data *data)
 {
 	short counts[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS] = {};
 	void *last[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS];
@@ -3266,10 +3272,11 @@ static void intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_d
 	struct pt_regs *regs = &perf_regs->regs;
 	struct pebs_basic *basic;
 	void *base, *at, *top;
+	u64 events_bitmap = 0;
 	u64 mask;
 
 	if (!x86_pmu.pebs_active)
-		return;
+		return 0;
 
 	base = (struct pebs_basic *)(unsigned long)ds->pebs_buffer_base;
 	top = (struct pebs_basic *)(unsigned long)ds->pebs_index;
@@ -3282,7 +3289,7 @@ static void intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_d
 	if (unlikely(base >= top)) {
 		intel_pmu_pebs_event_update_no_drain(cpuc, mask);
-		return;
+		return 0;
 	}
 
 	if (!iregs)
@@ -3297,6 +3304,7 @@ static void intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_d
 			continue;
 
 		pebs_status = mask & basic->applicable_counters;
+		
events_bitmap |= pebs_status;
 		__intel_pmu_handle_pebs_record(iregs, regs, data, at,
 					       pebs_status, counts, last,
 					       setup_pebs_adaptive_sample_data);
@@ -3304,9 +3312,11 @@ static void intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_d
 
 	__intel_pmu_handle_last_pebs_record(iregs, regs, data, mask, counts, last,
 					    setup_pebs_adaptive_sample_data);
+
+	return hweight64(events_bitmap);
 }
 
-static void intel_pmu_drain_arch_pebs(struct pt_regs *iregs,
+static int intel_pmu_drain_arch_pebs(struct pt_regs *iregs,
 				     struct perf_sample_data *data)
 {
 	short counts[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS] = {};
@@ -3316,13 +3326,14 @@ static void intel_pmu_drain_arch_pebs(struct pt_regs *iregs,
 	struct x86_perf_regs *perf_regs = this_cpu_ptr(&pebs_perf_regs);
 	struct pt_regs *regs = &perf_regs->regs;
 	void *base, *at, *top;
+	u64 events_bitmap = 0;
 	u64 mask;
 
 	rdmsrq(MSR_IA32_PEBS_INDEX, index.whole);
 
 	if (unlikely(!index.wr)) {
 		intel_pmu_pebs_event_update_no_drain(cpuc, X86_PMC_IDX_MAX);
-		return;
+		return 0;
 	}
 
 	base = cpuc->pebs_vaddr;
@@ -3361,6 +3372,7 @@ static void intel_pmu_drain_arch_pebs(struct pt_regs *iregs,
 
 		basic = at + sizeof(struct arch_pebs_header);
 		pebs_status = mask & basic->applicable_counters;
+		events_bitmap |= pebs_status;
 		__intel_pmu_handle_pebs_record(iregs, regs, data, at,
 					       pebs_status, counts, last,
 					       setup_arch_pebs_sample_data);
@@ -3380,6 +3392,8 @@ static void intel_pmu_drain_arch_pebs(struct pt_regs *iregs,
 	__intel_pmu_handle_last_pebs_record(iregs, regs, data, mask, counts, last,
 					    setup_arch_pebs_sample_data);
+
+	return hweight64(events_bitmap);
 }
 
 static void __init intel_arch_pebs_init(void)
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index fdfb34d7b1d2..0083334f2d33 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -1014,7 +1014,7 @@ struct x86_pmu {
 	int		pebs_record_size;
 	int		pebs_buffer_size;
 	u64		pebs_events_mask;
-	void		
(*drain_pebs)(struct pt_regs *regs, struct perf_sample_data *data);
+	int		(*drain_pebs)(struct pt_regs *regs, struct perf_sample_data *data);
 	struct event_constraint *pebs_constraints;
 	void		(*pebs_aliases)(struct perf_event *event);
 	u64		(*pebs_latency_data)(struct perf_event *event, u64 status);
-- 
2.34.1