From nobody Wed Nov 27 08:39:47 2024
From: kan.liang@linux.intel.com
To: peterz@infradead.org, mingo@kernel.org, acme@kernel.org,
	namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com,
	ak@linux.intel.com, linux-kernel@vger.kernel.org
Cc: eranian@google.com, thomas.falcon@intel.com, Kan Liang,
	stable@vger.kernel.org
Subject: [PATCH V2 1/3] perf/x86/intel: Fix ARCH_PERFMON_NUM_COUNTER_LEAF
Date: Thu, 10 Oct 2024 12:28:42 -0700
Message-Id: <20241010192844.1006990-2-kan.liang@linux.intel.com>
In-Reply-To: <20241010192844.1006990-1-kan.liang@linux.intel.com>
References: <20241010192844.1006990-1-kan.liang@linux.intel.com>

From: Kan Liang

The EAX of the CPUID Leaf 023H enumerates the mask of valid sub-leaves.
To tell the availability of sub-leaf 1 (which enumerates the counter
mask), perf should check bit 1 (0x2) of EAX, rather than bit 0 (0x1).

The error is not user-visible on bare metal, because sub-leaf 0 and
sub-leaf 1 are always available there. However, it may bring issues in
a virtualization environment when a VMM only enumerates sub-leaf 0.

Fixes: eb467aaac21e ("perf/x86/intel: Support Architectural PerfMon Extension leaf")
Signed-off-by: Kan Liang
Cc: stable@vger.kernel.org
---
 arch/x86/events/intel/core.c      | 4 ++--
 arch/x86/include/asm/perf_event.h | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 7ca40002a19b..2f3bf3bbbd77 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -4886,8 +4886,8 @@ static void update_pmu_cap(struct x86_hybrid_pmu *pmu)
 	if (ebx & ARCH_PERFMON_EXT_EQ)
 		pmu->config_mask |= ARCH_PERFMON_EVENTSEL_EQ;
 
-	if (sub_bitmaps & ARCH_PERFMON_NUM_COUNTER_LEAF_BIT) {
-		cpuid_count(ARCH_PERFMON_EXT_LEAF, ARCH_PERFMON_NUM_COUNTER_LEAF,
+	if (sub_bitmaps & ARCH_PERFMON_NUM_COUNTER_LEAF) {
+		cpuid_count(ARCH_PERFMON_EXT_LEAF, ARCH_PERFMON_NUM_COUNTER_LEAF_BIT,
 			    &eax, &ebx, &ecx, &edx);
 		pmu->cntr_mask64 = eax;
 		pmu->fixed_cntr_mask64 = ebx;
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 91b73571412f..41ace8431e01 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -190,7 +190,7 @@ union cpuid10_edx {
 #define ARCH_PERFMON_EXT_UMASK2			0x1
 #define ARCH_PERFMON_EXT_EQ			0x2
 #define ARCH_PERFMON_NUM_COUNTER_LEAF_BIT	0x1
-#define ARCH_PERFMON_NUM_COUNTER_LEAF		0x1
+#define ARCH_PERFMON_NUM_COUNTER_LEAF		BIT(ARCH_PERFMON_NUM_COUNTER_LEAF_BIT)
 
 /*
  * Intel Architectural LBR CPUID detection/enumeration details:
-- 
2.38.1
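For reference, a minimal user-space sketch of the sub-leaf test this patch
corrects. It uses the GCC/Clang __get_cpuid_count() helper rather than the
kernel's cpuid_count(), and the result naturally depends on what the CPU or
VMM enumerates; treat it as an illustration, not part of the patch.

	#include <stdio.h>
	#include <cpuid.h>

	int main(void)
	{
		unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

		/* CPUID.023H, sub-leaf 0: EAX is a bitmap of valid sub-leaves. */
		if (!__get_cpuid_count(0x23, 0, &eax, &ebx, &ecx, &edx)) {
			printf("CPUID leaf 0x23 not supported\n");
			return 0;
		}

		/* Sub-leaf 1 (counter masks) is valid only if bit 1 of EAX is set. */
		if (eax & (1u << 1))
			printf("counter-mask sub-leaf (1) is enumerated\n");
		else
			printf("counter-mask sub-leaf (1) is NOT enumerated\n");
		return 0;
	}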
From nobody Wed Nov 27 08:39:47 2024
From: kan.liang@linux.intel.com
To: peterz@infradead.org, mingo@kernel.org, acme@kernel.org,
	namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com,
	ak@linux.intel.com, linux-kernel@vger.kernel.org
Cc: eranian@google.com, thomas.falcon@intel.com, Kan Liang
Subject: [PATCH V2 2/3] perf/x86/intel: Add the enumeration and flag for
 the auto counter reload
Date: Thu, 10 Oct 2024 12:28:43 -0700
Message-Id: <20241010192844.1006990-3-kan.liang@linux.intel.com>
In-Reply-To: <20241010192844.1006990-1-kan.liang@linux.intel.com>
References: <20241010192844.1006990-1-kan.liang@linux.intel.com>

From: Kan Liang

The counters that support the auto counter reload feature can be
enumerated in CPUID Leaf 0x23 sub-leaf 0x2.

Add acr_cntr_mask to store the mask of counters which are reloadable.
Add acr_cntr_cause_mask to store the mask of counters which can cause a
reload. Since the e-core and the p-core may have different numbers of
counters, track the masks in struct x86_hybrid_pmu as well.

The Auto Counter Reload feature requires a dynamic constraint. Add a
PMU flag to allocate the constraint_list.

Several existing features require a dynamic constraint as well. Add a
PMU_FL_DYN_MASK that covers the flags of all such features.
Signed-off-by: Kan Liang
---
 arch/x86/events/intel/core.c      | 17 +++++++++++++++--
 arch/x86/events/perf_event.h      | 12 ++++++++++++
 arch/x86/include/asm/perf_event.h |  2 ++
 3 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 2f3bf3bbbd77..726ef13c2c81 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -4775,7 +4775,8 @@ static struct intel_excl_cntrs *allocate_excl_cntrs(int cpu)
 	return c;
 }
 
-
+#define PMU_FL_DYN_MASK	(PMU_FL_EXCL_CNTRS | PMU_FL_TFA | \
+			 PMU_FL_BR_CNTR | PMU_FL_ACR)
 int intel_cpuc_prepare(struct cpu_hw_events *cpuc, int cpu)
 {
 	cpuc->pebs_record_size = x86_pmu.pebs_record_size;
@@ -4786,7 +4787,7 @@ int intel_cpuc_prepare(struct cpu_hw_events *cpuc, int cpu)
 		goto err;
 	}
 
-	if (x86_pmu.flags & (PMU_FL_EXCL_CNTRS | PMU_FL_TFA | PMU_FL_BR_CNTR)) {
+	if (x86_pmu.flags & PMU_FL_DYN_MASK) {
 		size_t sz = X86_PMC_IDX_MAX * sizeof(struct event_constraint);
 
 		cpuc->constraint_list = kzalloc_node(sz, GFP_KERNEL, cpu_to_node(cpu));
@@ -4893,6 +4894,18 @@ static void update_pmu_cap(struct x86_hybrid_pmu *pmu)
 		pmu->fixed_cntr_mask64 = ebx;
 	}
 
+	if (sub_bitmaps & ARCH_PERFMON_ACR_LEAF) {
+		cpuid_count(ARCH_PERFMON_EXT_LEAF, ARCH_PERFMON_ACR_LEAF_BIT,
+			    &eax, &ebx, &ecx, &edx);
+		/* The mask of the counters which can be reloaded */
+		pmu->acr_cntr_mask64 = eax | ((u64)ebx << INTEL_PMC_IDX_FIXED);
+
+		/* The mask of the counters which can cause a reload of reloadable counters */
+		pmu->acr_cntr_cause_mask = ecx | ((u64)edx << INTEL_PMC_IDX_FIXED);
+
+		x86_pmu.flags |= PMU_FL_ACR;
+	}
+
 	if (!intel_pmu_broken_perf_cap()) {
 		/* Perf Metric (Bit 15) and PEBS via PT (Bit 16) are hybrid enumeration */
 		rdmsrl(MSR_IA32_PERF_CAPABILITIES, pmu->intel_cap.capabilities);
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 82c6f45ce975..1ee6d7bb10a3 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -718,6 +718,12 @@ struct x86_hybrid_pmu {
 		u64		fixed_cntr_mask64;
 		unsigned long	fixed_cntr_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
 	};
+
+	union {
+		u64		acr_cntr_mask64;
+		unsigned long	acr_cntr_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
+	};
+	u64			acr_cntr_cause_mask;
 	struct event_constraint		unconstrained;
 
 	u64				hw_cache_event_ids
@@ -815,6 +821,11 @@ struct x86_pmu {
 		u64		fixed_cntr_mask64;
 		unsigned long	fixed_cntr_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
 	};
+	union {
+		u64		acr_cntr_mask64;
+		unsigned long	acr_cntr_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
+	};
+	u64			acr_cntr_cause_mask;
 	int		cntval_bits;
 	u64		cntval_mask;
 	union {
@@ -1059,6 +1070,7 @@ do {									\
 #define PMU_FL_MEM_LOADS_AUX	0x100 /* Require an auxiliary event for the complete memory info */
 #define PMU_FL_RETIRE_LATENCY	0x200 /* Support Retire Latency in PEBS */
 #define PMU_FL_BR_CNTR		0x400 /* Support branch counter logging */
+#define PMU_FL_ACR		0x800 /* Support auto-counter reload */
 
 #define EVENT_VAR(_id)		event_attr_##_id
 #define EVENT_PTR(_id)		&event_attr_##_id.attr.attr
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 41ace8431e01..19af3d857db3 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -191,6 +191,8 @@ union cpuid10_edx {
 #define ARCH_PERFMON_EXT_EQ			0x2
 #define ARCH_PERFMON_NUM_COUNTER_LEAF_BIT	0x1
 #define ARCH_PERFMON_NUM_COUNTER_LEAF		BIT(ARCH_PERFMON_NUM_COUNTER_LEAF_BIT)
+#define ARCH_PERFMON_ACR_LEAF_BIT		0x2
+#define ARCH_PERFMON_ACR_LEAF			BIT(ARCH_PERFMON_ACR_LEAF_BIT)
 
 /*
  * Intel Architectural LBR CPUID detection/enumeration details:
-- 
2.38.1
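As an aside, a standalone sketch of how the two new masks combine the GP and
fixed-counter bits reported by the ACR sub-leaf. The register values below are
made up purely for illustration; only the fold at INTEL_PMC_IDX_FIXED (bit 32)
mirrors what update_pmu_cap() does in the patch.

	#include <stdio.h>
	#include <stdint.h>

	#define INTEL_PMC_IDX_FIXED	32	/* fixed counters start at bit 32 */

	int main(void)
	{
		/* Hypothetical CPUID.023H sub-leaf 2 output */
		uint32_t eax = 0xff;	/* reloadable GP counters                 */
		uint32_t ebx = 0x7;	/* reloadable fixed counters              */
		uint32_t ecx = 0xff;	/* GP counters that may cause a reload    */
		uint32_t edx = 0x7;	/* fixed counters that may cause a reload */

		uint64_t acr_cntr_mask       = eax | ((uint64_t)ebx << INTEL_PMC_IDX_FIXED);
		uint64_t acr_cntr_cause_mask = ecx | ((uint64_t)edx << INTEL_PMC_IDX_FIXED);

		printf("acr_cntr_mask       = 0x%016llx\n", (unsigned long long)acr_cntr_mask);
		printf("acr_cntr_cause_mask = 0x%016llx\n", (unsigned long long)acr_cntr_cause_mask);
		return 0;
	}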
From nobody Wed Nov 27 08:39:47 2024
From: kan.liang@linux.intel.com
To: peterz@infradead.org, mingo@kernel.org, acme@kernel.org,
	namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com,
	ak@linux.intel.com, linux-kernel@vger.kernel.org
Cc: eranian@google.com, thomas.falcon@intel.com, Kan Liang
Subject: [PATCH V2 3/3] perf/x86/intel: Support auto counter reload
Date: Thu, 10 Oct 2024 12:28:44 -0700
Message-Id: <20241010192844.1006990-4-kan.liang@linux.intel.com>
In-Reply-To: <20241010192844.1006990-1-kan.liang@linux.intel.com>
References: <20241010192844.1006990-1-kan.liang@linux.intel.com>

From: Kan Liang

The relative rates among two or more events are useful for performance
analysis, e.g., a high branch miss rate may indicate a performance
issue. Usually, the samples with a relative rate that exceeds some
threshold are the most useful. However, traditional sampling takes
samples of each event separately. To get the relative rates among two
or more events, a high sample rate is required, which brings high
overhead. Many samples taken in the non-hotspot area are also dropped
(useless) in the post-processing.

The auto counter reload (ACR) feature takes samples only when the
relative rate of two or more events exceeds some threshold, which
provides the fine-grained information at a low cost.

To support the feature, two sets of MSRs are introduced. For a given
counter IA32_PMC_GPn_CTR/IA32_PMC_FXm_CTR, bit fields in the
IA32_PMC_GPn_CFG_B/IA32_PMC_FXm_CFG_B MSR indicate which counter(s)
can cause a reload of that counter. The reload value is stored in
IA32_PMC_GPn_CFG_C/IA32_PMC_FXm_CFG_C. The details can be found in
"Intel Architecture Instruction Set Extensions and Future Features
(053)", 8.7 "AUTO COUNTER RELOAD".

An ACR event/group needs to be specially configured, because the
counter mask of an event has to be recalculated from both the original
mask and the reloadable counter mask. The new counter mask is stored
in a new field, dyn_mask, in struct hw_perf_event. Also, add a new
flag, PERF_X86_EVENT_ACR, to indicate an ACR group; it is set on the
group leader.

The ACR configuration MSRs are only updated in enable_event(). The
disable_event() doesn't clear the ACR CFG registers. Add
acr_cfg_b/acr_cfg_c in struct cpu_hw_events to cache the MSR values,
which avoids an MSR write when the value is unchanged.

Expose an acr_mask format to sysfs. The perf tool can utilize the new
format to configure the relation of events in the group. The bit
sequence of the acr_mask follows the enabling order of the events in
the group. The kernel converts it to the real counter order and saves
the updated order into the newly added hw.config1 every time the group
is scheduled. hw.config1 is eventually written to the ACR config MSR
(MSR_IA32_PMC_GP/FX_CFG_B) when the event is enabled.

Example:

Here is a snippet of mispredict.c. Since the array holds random
numbers, the jumps are random and often mispredicted. The
misprediction rate depends on the compared value. For Loop 1, ~11% of
all branches are mispredicted. For Loop 2, ~21% of all branches are
mispredicted.

main()
{
	...
	for (i = 0; i < N; i++)
		data[i] = rand() % 256;
	...
	/* Loop 1 */
	for (k = 0; k < 50; k++)
		for (i = 0; i < N; i++)
			if (data[i] >= 64)
				sum += data[i];
	...

	...
	/* Loop 2 */
	for (k = 0; k < 50; k++)
		for (i = 0; i < N; i++)
			if (data[i] >= 128)
				sum += data[i];
	...
}

Code with a high branch miss rate usually indicates bad performance.
To understand the branch miss rate of this code, the traditional
method is to sample both the branches and branch-misses events.
E.g.,

 perf record -e "{cpu_atom/branch-misses/ppu,
                  cpu_atom/branch-instructions/u}" -c 1000000 -- ./mispredict

 [ perf record: Woken up 4 times to write data ]
 [ perf record: Captured and wrote 0.925 MB perf.data (5106 samples) ]

The 5106 samples are from both events and are spread across both
loops. In the post-processing stage, a user can learn that Loop 2 has
a ~21% branch miss rate, and can then focus on the branch-misses
samples for Loop 2.

With this patch, the user can generate samples only when the branch
miss rate > 20%. For example,

 perf record -e "{cpu_atom/branch-misses,period=200000,acr_mask=0x2/ppu,
                  cpu_atom/branch-instructions,period=1000000,acr_mask=0x3/u}"
             -- ./mispredict

(Two different periods are applied to branch-misses and
branch-instructions. The ratio is 20%. If branch-instructions
overflows first, the branch miss rate is < 20%; no samples should be
generated and all counters should be automatically reloaded, so its
acr_mask is set to 0x3. If branch-misses overflows first, the branch
miss rate is > 20%; a sample triggered by the branch-misses event
should be generated and the branch-instructions counter should be
automatically reloaded, so its acr_mask is set to 0x2, since
branch-misses is the first event in the group.)

 [ perf record: Woken up 1 times to write data ]
 [ perf record: Captured and wrote 0.098 MB perf.data (2498 samples) ]

 $ perf report

 Percent         │154:   movl   $0x0,-0x14(%rbp)
                 │     ↓ jmp    1af
                 │       for (i = j; i < N; i++)
                 │15d:   mov    -0x10(%rbp),%eax
                 │       mov    %eax,-0x18(%rbp)
                 │     ↓ jmp    1a2
                 │       if (data[i] >= 128)
                 │165:   mov    -0x18(%rbp),%eax
                 │       cltq
                 │       lea    0x0(,%rax,4),%rdx
                 │       mov    -0x8(%rbp),%rax
                 │       add    %rdx,%rax
                 │       mov    (%rax),%eax
                 │    ┌──cmp    $0x7f,%eax
 100.00    0.00  │    ├──jle    19e
                 │    │  sum += data[i];

The 2498 samples are all from the branch-misses event in Loop 2. The
number of samples and the overhead are significantly reduced without
losing any information.

Signed-off-by: Kan Liang
---
 arch/x86/events/intel/core.c       | 241 ++++++++++++++++++++++++++++-
 arch/x86/events/perf_event.h       |   9 ++
 arch/x86/events/perf_event_flags.h |   2 +-
 arch/x86/include/asm/msr-index.h   |   4 +
 include/linux/perf_event.h         |   2 +
 5 files changed, 256 insertions(+), 2 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 726ef13c2c81..d3bdc7d18d3f 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2851,6 +2851,54 @@ static void intel_pmu_enable_fixed(struct perf_event *event)
 	cpuc->fixed_ctrl_val |= bits;
 }
 
+static void intel_pmu_config_acr(int idx, u64 mask, u32 reload)
+{
+	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+	int msr_b, msr_c;
+
+	if (!mask && !cpuc->acr_cfg_b[idx])
+		return;
+
+	if (idx < INTEL_PMC_IDX_FIXED) {
+		msr_b = MSR_IA32_PMC_V6_GP0_CFG_B;
+		msr_c = MSR_IA32_PMC_V6_GP0_CFG_C;
+	} else {
+		msr_b = MSR_IA32_PMC_V6_FX0_CFG_B;
+		msr_c = MSR_IA32_PMC_V6_FX0_CFG_C;
+		idx -= INTEL_PMC_IDX_FIXED;
+	}
+
+	if (cpuc->acr_cfg_b[idx] != mask) {
+		wrmsrl(msr_b + x86_pmu.addr_offset(idx, false), mask);
+		cpuc->acr_cfg_b[idx] = mask;
+	}
+	/* Only need to update the reload value when there is a valid config value. */
+	if (mask && cpuc->acr_cfg_c[idx] != reload) {
+		wrmsrl(msr_c + x86_pmu.addr_offset(idx, false), reload);
+		cpuc->acr_cfg_c[idx] = reload;
+	}
+}
+
+static void intel_pmu_enable_acr(struct perf_event *event)
+{
+	struct hw_perf_event *hwc = &event->hw;
+
+	/* The PMU doesn't support ACR */
+	if (!hybrid(event->pmu, acr_cntr_mask64))
+		return;
+
+	if (!is_acr_event_group(event) || !event->attr.config2) {
+		/*
+		 * The disable doesn't clear the ACR CFG register.
+		 * Check and clear the ACR CFG register.
+		 */
+		intel_pmu_config_acr(hwc->idx, 0, 0);
+		return;
+	}
+
+	intel_pmu_config_acr(hwc->idx, hwc->config1, -hwc->sample_period);
+}
+
 static void intel_pmu_enable_event(struct perf_event *event)
 {
 	u64 enable_mask = ARCH_PERFMON_EVENTSEL_ENABLE;
@@ -2866,8 +2914,11 @@ static void intel_pmu_enable_event(struct perf_event *event)
 			enable_mask |= ARCH_PERFMON_EVENTSEL_BR_CNTR;
 		intel_set_masks(event, idx);
 		__x86_pmu_enable_event(hwc, enable_mask);
+		intel_pmu_enable_acr(event);
 		break;
 	case INTEL_PMC_IDX_FIXED ... INTEL_PMC_IDX_FIXED_BTS - 1:
+		intel_pmu_enable_acr(event);
+		fallthrough;
 	case INTEL_PMC_IDX_METRIC_BASE ... INTEL_PMC_IDX_METRIC_END:
 		intel_pmu_enable_fixed(event);
 		break;
@@ -3687,6 +3738,12 @@ intel_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
 		c2->weight = hweight64(c2->idxmsk64);
 	}
 
+	if (is_acr_event_group(event)) {
+		c2 = dyn_constraint(cpuc, c2, idx);
+		c2->idxmsk64 &= event->hw.dyn_mask;
+		c2->weight = hweight64(c2->idxmsk64);
+	}
+
 	return c2;
 }
 
@@ -3945,6 +4002,78 @@ static inline bool intel_pmu_has_cap(struct perf_event *event, int idx)
 	return test_bit(idx, (unsigned long *)&intel_cap->capabilities);
 }
 
+static bool intel_pmu_is_acr_group(struct perf_event *event)
+{
+	if (!hybrid(event->pmu, acr_cntr_mask64))
+		return false;
+
+	/* The group leader has the ACR flag set */
+	if (is_acr_event_group(event))
+		return true;
+
+	/* The acr_mask is set */
+	if (event->attr.config2)
+		return true;
+
+	return false;
+}
+
+static int intel_pmu_acr_check_reloadable_event(struct perf_event *event)
+{
+	struct perf_event *sibling, *leader = event->group_leader;
+	int num = 0;
+
+	/*
+	 * The acr_mask(config2) indicates the event can be reloaded by
+	 * other events. Apply the acr_cntr_mask.
+	 */
+	if (leader->attr.config2) {
+		leader->hw.dyn_mask = hybrid(leader->pmu, acr_cntr_mask64);
+		num++;
+	} else
+		leader->hw.dyn_mask = ~0ULL;
+
+	for_each_sibling_event(sibling, leader) {
+		if (sibling->attr.config2) {
+			sibling->hw.dyn_mask = hybrid(sibling->pmu, acr_cntr_mask64);
+			num++;
+		} else
+			sibling->hw.dyn_mask = ~0ULL;
+	}
+
+	if (event->attr.config2) {
+		event->hw.dyn_mask = hybrid(event->pmu, acr_cntr_mask64);
+		num++;
+	} else
+		event->hw.dyn_mask = ~0ULL;
+
+	if (num > hweight64(hybrid(event->pmu, acr_cntr_mask64)))
+		return -EINVAL;
+
+	return 0;
+}
+
+/*
+ * Update the dyn_mask of each event to guarantee the event is scheduled
+ * on the counters which are able to cause a reload.
+ */
+static void intel_pmu_set_acr_dyn_mask(struct perf_event *event, int idx,
+				       struct perf_event *last)
+{
+	struct perf_event *sibling, *leader = event->group_leader;
+	u64 mask = hybrid(event->pmu, acr_cntr_cause_mask);
+
+	/* An event set in the acr_mask(config2) can cause a reload. */
+	if (test_bit(idx, (unsigned long *)&leader->attr.config2))
+		event->hw.dyn_mask &= mask;
+	for_each_sibling_event(sibling, leader) {
+		if (test_bit(idx, (unsigned long *)&sibling->attr.config2))
+			event->hw.dyn_mask &= mask;
+	}
+	if (test_bit(idx, (unsigned long *)&last->attr.config2))
+		event->hw.dyn_mask &= mask;
+}
+
 static int intel_pmu_hw_config(struct perf_event *event)
 {
 	int ret = x86_pmu_hw_config(event);
@@ -4056,6 +4185,50 @@ static int intel_pmu_hw_config(struct perf_event *event)
 		event->hw.flags |= PERF_X86_EVENT_PEBS_VIA_PT;
 	}
 
+	if (intel_pmu_is_acr_group(event)) {
+		struct perf_event *sibling, *leader = event->group_leader;
+		int event_idx = 0;
+
+		/* Perf metrics are not supported */
+		if (is_metric_event(event))
+			return -EINVAL;
+
+		/* Freq mode is not supported */
+		if (event->attr.freq)
+			return -EINVAL;
+
+		/* PDist is not supported */
+		if (event->attr.config2 && event->attr.precise_ip > 2)
+			return -EINVAL;
+
+		/* The reload value cannot exceed the max period */
+		if (event->attr.sample_period > x86_pmu.max_period)
+			return -EINVAL;
+		/*
+		 * It's hard to know whether the event is the last one of
+		 * the group. Reconfigure the dyn_mask of each X86 event
+		 * every time a new event is added.
+		 * It's impossible to verify whether the bits of
+		 * the event->attr.config2 exceed the group. But it's
+		 * harmless, because the invalid bits are ignored. See
+		 * intel_pmu_update_acr_mask(). The n - n0 guarantees that
+		 * only the bits in the group are used.
+		 *
+		 * Check whether there are enough reloadable counters and
+		 * initialize the dyn_mask.
+		 */
+		if (intel_pmu_acr_check_reloadable_event(event))
+			return -EINVAL;
+
+		/* Reconfigure the dyn_mask for each event */
+		intel_pmu_set_acr_dyn_mask(leader, event_idx++, event);
+		for_each_sibling_event(sibling, leader)
+			intel_pmu_set_acr_dyn_mask(sibling, event_idx++, event);
+		intel_pmu_set_acr_dyn_mask(event, event_idx, event);
+
+		leader->hw.flags |= PERF_X86_EVENT_ACR;
+	}
+
 	if ((event->attr.type == PERF_TYPE_HARDWARE) ||
 	    (event->attr.type == PERF_TYPE_HW_CACHE))
 		return 0;
@@ -4159,6 +4332,49 @@ static int intel_pmu_hw_config(struct perf_event *event)
 	return 0;
 }
 
+static void intel_pmu_update_acr_mask(struct cpu_hw_events *cpuc, int n, int *assign)
+{
+	struct perf_event *event;
+	int n0, i, off;
+
+	if (cpuc->txn_flags & PERF_PMU_TXN_ADD)
+		n0 = cpuc->n_events - cpuc->n_txn;
+	else
+		n0 = cpuc->n_events;
+
+	for (i = n0; i < n; i++) {
+		event = cpuc->event_list[i];
+		event->hw.config1 = 0;
+
+		/* Convert the group index into the counter index */
+		for_each_set_bit(off, (unsigned long *)&event->attr.config2, n - n0)
+			set_bit(assign[n0 + off], (unsigned long *)&event->hw.config1);
+	}
+}
+
+static int intel_pmu_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign)
+{
+	struct perf_event *event;
+	int ret = x86_schedule_events(cpuc, n, assign);
+
+	if (ret)
+		return ret;
+
+	if (cpuc->is_fake)
+		return ret;
+
+	event = cpuc->event_list[n - 1];
+	/*
+	 * The acr_mask(config2) is in the event-enabling order.
+	 * Update it to the counter order after the counters are assigned.
+	 */
+	if (event && is_acr_event_group(event))
+		intel_pmu_update_acr_mask(cpuc, n, assign);
+
+	return 0;
+}
+
+
 /*
  * Currently, the only caller of this function is the atomic_switch_perf_msrs().
  * The host perf context helps to prepare the values of the real hardware for
@@ -5305,7 +5521,7 @@ static __initconst const struct x86_pmu intel_pmu = {
 	.set_period		= intel_pmu_set_period,
 	.update			= intel_pmu_update,
 	.hw_config		= intel_pmu_hw_config,
-	.schedule_events	= x86_schedule_events,
+	.schedule_events	= intel_pmu_schedule_events,
 	.eventsel		= MSR_ARCH_PERFMON_EVENTSEL0,
 	.perfctr		= MSR_ARCH_PERFMON_PERFCTR0,
 	.fixedctr		= MSR_ARCH_PERFMON_FIXED_CTR0,
@@ -5909,6 +6125,21 @@ td_is_visible(struct kobject *kobj, struct attribute *attr, int i)
 	return attr->mode;
 }
 
+PMU_FORMAT_ATTR(acr_mask, "config2:0-63");
+
+static struct attribute *format_acr_attrs[] = {
+	&format_attr_acr_mask.attr,
+	NULL
+};
+
+static umode_t
+acr_is_visible(struct kobject *kobj, struct attribute *attr, int i)
+{
+	struct device *dev = kobj_to_dev(kobj);
+
+	return hybrid(dev_get_drvdata(dev), acr_cntr_mask64) ? attr->mode : 0;
+}
+
 static struct attribute_group group_events_td  = {
 	.name = "events",
 	.is_visible = td_is_visible,
@@ -5951,6 +6182,12 @@ static struct attribute_group group_format_evtsel_ext = {
 	.is_visible = evtsel_ext_is_visible,
 };
 
+static struct attribute_group group_format_acr = {
+	.name       = "format",
+	.attrs      = format_acr_attrs,
+	.is_visible = acr_is_visible,
+};
+
 static struct attribute_group group_default = {
 	.attrs = intel_pmu_attrs,
 	.is_visible = default_is_visible,
@@ -5965,6 +6202,7 @@ static const struct attribute_group *attr_update[] = {
 	&group_format_extra,
 	&group_format_extra_skl,
 	&group_format_evtsel_ext,
+	&group_format_acr,
 	&group_default,
 	NULL,
 };
@@ -6249,6 +6487,7 @@ static const struct attribute_group *hybrid_attr_update[] = {
 	&group_caps_lbr,
 	&hybrid_group_format_extra,
 	&group_format_evtsel_ext,
+	&group_format_acr,
 	&group_default,
 	&hybrid_group_cpus,
 	NULL,
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 1ee6d7bb10a3..f6f2c0e60043 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -115,6 +115,11 @@ static inline bool is_branch_counters_group(struct perf_event *event)
 	return event->group_leader->hw.flags & PERF_X86_EVENT_BRANCH_COUNTERS;
 }
 
+static inline bool is_acr_event_group(struct perf_event *event)
+{
+	return event->group_leader->hw.flags & PERF_X86_EVENT_ACR;
+}
+
 struct amd_nb {
 	int nb_id;  /* NorthBridge id */
 	int refcnt; /* reference count */
@@ -281,6 +286,10 @@ struct cpu_hw_events {
 	u64			fixed_ctrl_val;
 	u64			active_fixed_ctrl_val;
 
+	/* Intel ACR configuration */
+	u64			acr_cfg_b[X86_PMC_IDX_MAX];
+	u64			acr_cfg_c[X86_PMC_IDX_MAX];
+
 	/*
 	 * Intel LBR bits
 	 */
diff --git a/arch/x86/events/perf_event_flags.h b/arch/x86/events/perf_event_flags.h
index 6c977c19f2cd..f21d00965af6 100644
--- a/arch/x86/events/perf_event_flags.h
+++ b/arch/x86/events/perf_event_flags.h
@@ -9,7 +9,7 @@ PERF_ARCH(PEBS_LD_HSW,	0x00008) /* haswell style datala, load */
 PERF_ARCH(PEBS_NA_HSW,	0x00010) /* haswell style datala, unknown */
 PERF_ARCH(EXCL,		0x00020) /* HT exclusivity on counter */
 PERF_ARCH(DYNAMIC,	0x00040) /* dynamic alloc'd constraint */
-			/*	0x00080	*/
+PERF_ARCH(ACR,		0x00080) /* Auto counter reload */
 PERF_ARCH(EXCL_ACCT,	0x00100) /* accounted EXCL event */
 PERF_ARCH(AUTO_RELOAD,	0x00200) /* use PEBS auto-reload */
 PERF_ARCH(LARGE_PEBS,	0x00400) /* use large PEBS */
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 3ae84c3b8e6d..fdc0279c6917 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -591,7 +591,11 @@
 /* V6 PMON MSR range */
 #define MSR_IA32_PMC_V6_GP0_CTR		0x1900
 #define MSR_IA32_PMC_V6_GP0_CFG_A	0x1901
+#define MSR_IA32_PMC_V6_GP0_CFG_B	0x1902
+#define MSR_IA32_PMC_V6_GP0_CFG_C	0x1903
 #define MSR_IA32_PMC_V6_FX0_CTR		0x1980
+#define MSR_IA32_PMC_V6_FX0_CFG_B	0x1982
+#define MSR_IA32_PMC_V6_FX0_CFG_C	0x1983
 #define MSR_IA32_PMC_V6_STEP		4
 
 /* KeyID partitioning between MKTME and TDX */
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index fb908843f209..f540ab12c678 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -157,7 +157,9 @@ struct hw_perf_event {
 	union {
 		struct { /* hardware */
 			u64		config;
+			u64		config1;
 			u64		last_tag;
+			u64		dyn_mask;
 			unsigned long	config_base;
 			unsigned long	event_base;
 			int		event_base_rdpmc;
-- 
2.38.1
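To complement the perf-record example in patch 3/3, a minimal perf_event_open()
sketch of an ACR group. It mirrors the 20% branch-miss-rate setup (acr_mask
0x2/0x3 passed via config2, which the new "acr_mask" format maps to
config2:0-63), but uses generic PERF_TYPE_HARDWARE events instead of the
cpu_atom PMU of the example and omits mmap/ring-buffer handling; treat it as an
illustration of the interface, not a tested tool.

	#include <linux/perf_event.h>
	#include <sys/syscall.h>
	#include <unistd.h>
	#include <string.h>
	#include <stdio.h>

	static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
				   int cpu, int group_fd, unsigned long flags)
	{
		return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
	}

	int main(void)
	{
		struct perf_event_attr miss, br;
		int leader, sibling;

		/* Group leader: branch-misses, reloaded by event 1 (acr_mask = 0x2) */
		memset(&miss, 0, sizeof(miss));
		miss.size          = sizeof(miss);
		miss.type          = PERF_TYPE_HARDWARE;
		miss.config        = PERF_COUNT_HW_BRANCH_MISSES;
		miss.sample_period = 200000;
		miss.config2       = 0x2;	/* acr_mask */
		miss.disabled      = 1;

		/* Sibling: branch-instructions, reloaded by events 0 and 1 (acr_mask = 0x3) */
		memset(&br, 0, sizeof(br));
		br.size          = sizeof(br);
		br.type          = PERF_TYPE_HARDWARE;
		br.config        = PERF_COUNT_HW_BRANCH_INSTRUCTIONS;
		br.sample_period = 1000000;
		br.config2       = 0x3;		/* acr_mask */

		leader  = perf_event_open(&miss, 0, -1, -1, 0);
		sibling = perf_event_open(&br, 0, -1, leader, 0);
		if (leader < 0 || sibling < 0) {
			perror("perf_event_open");
			return 1;
		}
		/* ... enable the group, run the workload, read the ring buffer ... */
		return 0;
	}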