From: Ricardo Neri
To: x86@kernel.org
Cc: Andreas Herrmann, Catalin Marinas, Chen Yu, Len Brown, Radu Rendec,
    Pierre Gondois, Pu Wen, "Rafael J. Wysocki", Sudeep Holla,
    Srinivas Pandruvada, Will Deacon, Zhang Rui, Huang Ying,
    "Ravi V. Shankar", stable@vger.kernel.org, linux-kernel@vger.kernel.org,
    Ricardo Neri, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 3/4] x86/cacheinfo: Delete global num_cache_leaves
Date: Tue, 12 Dec 2023 14:25:18 -0800
Message-Id: <20231212222519.12834-4-ricardo.neri-calderon@linux.intel.com>
In-Reply-To: <20231212222519.12834-1-ricardo.neri-calderon@linux.intel.com>
References: <20231212222519.12834-1-ricardo.neri-calderon@linux.intel.com>

Linux remembers cpu_cacheinfo::num_leaves per CPU, but x86 initializes all
CPUs from the same global "num_cache_leaves". This is erroneous on systems
such as Meteor Lake, where each CPU has a distinct num_leaves value. Delete
the global "num_cache_leaves" and initialize num_leaves on each CPU.

Cc: Andreas Herrmann
Cc: Catalin Marinas
Cc: Chen Yu
Cc: Huang Ying
Cc: Len Brown
Cc: Radu Rendec
Cc: Pierre Gondois
Cc: Pu Wen
Cc: "Rafael J. Wysocki"
Cc: Sudeep Holla
Cc: Srinivas Pandruvada
Cc: Will Deacon
Cc: Zhang Rui
Cc: linux-arm-kernel@lists.infradead.org
Cc: stable@vger.kernel.org
Reviewed-by: Len Brown
Signed-off-by: Ricardo Neri
---
After this change, all CPUs will traverse CPUID leaf 0x4 when booted for
the first time. On systems with symmetric cache topologies this is useless
work. Creating a list of processor models that have asymmetric cache
topologies was considered. The burden of maintaining such a list would
outweigh the performance benefit of skipping this extra step.
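
To make that traversal concrete, the sketch below (illustrative only, not
part of the diff) counts the deterministic cache leaves of the CPU it runs
on by walking CPUID leaf 0x4 sub-leaves until the cache-type field
(EAX[4:0]) reads zero. The kernel's find_num_cache_leaves() performs
essentially the same walk, switching to leaf 0x8000001d on AMD/Hygon
parts; __get_cpuid_count() is the compiler-provided helper from <cpuid.h>.

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
        unsigned int eax, ebx, ecx, edx;
        unsigned int nr_leaves = 0;

        /* Walk sub-leaves of CPUID leaf 0x4 until the cache type (EAX[4:0]) is 0. */
        while (__get_cpuid_count(0x4, nr_leaves, &eax, &ebx, &ecx, &edx) &&
               (eax & 0x1f) != 0)
                nr_leaves++;

        printf("CPUID leaf 0x4 reports %u cache leaves on this CPU\n",
               nr_leaves);
        return 0;
}

Pinning the program to different CPUs (for example with taskset) on a
hybrid part such as Meteor Lake would show the per-CPU asymmetry described
in the commit message.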
Wysocki" Cc: Sudeep Holla Cc: Srinivas Pandruvada Cc: Will Deacon Cc: Zhang Rui Cc: linux-arm-kernel@lists.infradead.org Cc: stable@vger.kernel.org Reviewed-by: Len Brown Signed-off-by: Ricardo Neri --- After this change, all CPUs will traverse CPUID leaf 0x4 when booted for the first time. On systems with symmetric cache topologies this is useless work. Creating a list of processor models that have asymmetric cache topologies was considered. The burden of maintaining such list would outweigh the performance benefit of skipping this extra step. --- Changes since v3: * Rebased on v6.7-rc5. Changes since v2: * None Changes since v1: * Do not make num_cache_leaves a per-CPU variable. Instead, reuse the existing per-CPU ci_cpu_cacheinfo variable. (Dave Hansen) --- arch/x86/kernel/cpu/cacheinfo.c | 44 +++++++++++++++++++-------------- 1 file changed, 26 insertions(+), 18 deletions(-) diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinf= o.c index c131c412db89..4125e53a5ef7 100644 --- a/arch/x86/kernel/cpu/cacheinfo.c +++ b/arch/x86/kernel/cpu/cacheinfo.c @@ -178,7 +178,16 @@ struct _cpuid4_info_regs { struct amd_northbridge *nb; }; =20 -static unsigned short num_cache_leaves; +static inline unsigned int get_num_cache_leaves(unsigned int cpu) +{ + return get_cpu_cacheinfo(cpu)->num_leaves; +} + +static inline void +set_num_cache_leaves(unsigned int nr_leaves, unsigned int cpu) +{ + get_cpu_cacheinfo(cpu)->num_leaves =3D nr_leaves; +} =20 /* AMD doesn't have CPUID4. Emulate it here to report the same information to the user. This makes some assumptions about the machine: @@ -718,19 +727,21 @@ void cacheinfo_hygon_init_llc_id(struct cpuinfo_x86 *= c) void init_amd_cacheinfo(struct cpuinfo_x86 *c) { =20 + unsigned int cpu =3D c->cpu_index; + if (boot_cpu_has(X86_FEATURE_TOPOEXT)) { - num_cache_leaves =3D find_num_cache_leaves(c); + set_num_cache_leaves(find_num_cache_leaves(c), cpu); } else if (c->extended_cpuid_level >=3D 0x80000006) { if (cpuid_edx(0x80000006) & 0xf000) - num_cache_leaves =3D 4; + set_num_cache_leaves(4, cpu); else - num_cache_leaves =3D 3; + set_num_cache_leaves(3, cpu); } } =20 void init_hygon_cacheinfo(struct cpuinfo_x86 *c) { - num_cache_leaves =3D find_num_cache_leaves(c); + set_num_cache_leaves(find_num_cache_leaves(c), c->cpu_index); } =20 void init_intel_cacheinfo(struct cpuinfo_x86 *c) @@ -742,19 +753,19 @@ void init_intel_cacheinfo(struct cpuinfo_x86 *c) unsigned int l2_id =3D 0, l3_id =3D 0, num_threads_sharing, index_msb; =20 if (c->cpuid_level > 3) { - static int is_initialized; - - if (is_initialized =3D=3D 0) { - /* Init num_cache_leaves from boot CPU */ - num_cache_leaves =3D find_num_cache_leaves(c); - is_initialized++; - } + /* + * There should be at least one leaf. A non-zero value means + * that the number of leaves has been initialized. + */ + if (!get_num_cache_leaves(c->cpu_index)) + set_num_cache_leaves(find_num_cache_leaves(c), + c->cpu_index); =20 /* * Whenever possible use cpuid(4), deterministic cache * parameters cpuid leaf to find the cache details */ - for (i =3D 0; i < num_cache_leaves; i++) { + for (i =3D 0; i < get_num_cache_leaves(c->cpu_index); i++) { struct _cpuid4_info_regs this_leaf =3D {}; int retval; =20 @@ -790,14 +801,14 @@ void init_intel_cacheinfo(struct cpuinfo_x86 *c) * Don't use cpuid2 if cpuid4 is supported. 
---
 arch/x86/kernel/cpu/cacheinfo.c | 44 +++++++++++++++++++--------------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
index c131c412db89..4125e53a5ef7 100644
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -178,7 +178,16 @@ struct _cpuid4_info_regs {
         struct amd_northbridge *nb;
 };
 
-static unsigned short num_cache_leaves;
+static inline unsigned int get_num_cache_leaves(unsigned int cpu)
+{
+        return get_cpu_cacheinfo(cpu)->num_leaves;
+}
+
+static inline void
+set_num_cache_leaves(unsigned int nr_leaves, unsigned int cpu)
+{
+        get_cpu_cacheinfo(cpu)->num_leaves = nr_leaves;
+}
 
 /* AMD doesn't have CPUID4. Emulate it here to report the same
    information to the user. This makes some assumptions about the machine:
@@ -718,19 +727,21 @@ void cacheinfo_hygon_init_llc_id(struct cpuinfo_x86 *c)
 void init_amd_cacheinfo(struct cpuinfo_x86 *c)
 {
 
+        unsigned int cpu = c->cpu_index;
+
         if (boot_cpu_has(X86_FEATURE_TOPOEXT)) {
-                num_cache_leaves = find_num_cache_leaves(c);
+                set_num_cache_leaves(find_num_cache_leaves(c), cpu);
         } else if (c->extended_cpuid_level >= 0x80000006) {
                 if (cpuid_edx(0x80000006) & 0xf000)
-                        num_cache_leaves = 4;
+                        set_num_cache_leaves(4, cpu);
                 else
-                        num_cache_leaves = 3;
+                        set_num_cache_leaves(3, cpu);
         }
 }
 
 void init_hygon_cacheinfo(struct cpuinfo_x86 *c)
 {
-        num_cache_leaves = find_num_cache_leaves(c);
+        set_num_cache_leaves(find_num_cache_leaves(c), c->cpu_index);
 }
 
 void init_intel_cacheinfo(struct cpuinfo_x86 *c)
@@ -742,19 +753,19 @@ void init_intel_cacheinfo(struct cpuinfo_x86 *c)
         unsigned int l2_id = 0, l3_id = 0, num_threads_sharing, index_msb;
 
         if (c->cpuid_level > 3) {
-                static int is_initialized;
-
-                if (is_initialized == 0) {
-                        /* Init num_cache_leaves from boot CPU */
-                        num_cache_leaves = find_num_cache_leaves(c);
-                        is_initialized++;
-                }
+                /*
+                 * There should be at least one leaf. A non-zero value means
+                 * that the number of leaves has been initialized.
+                 */
+                if (!get_num_cache_leaves(c->cpu_index))
+                        set_num_cache_leaves(find_num_cache_leaves(c),
+                                             c->cpu_index);
 
                 /*
                  * Whenever possible use cpuid(4), deterministic cache
                  * parameters cpuid leaf to find the cache details
                  */
-                for (i = 0; i < num_cache_leaves; i++) {
+                for (i = 0; i < get_num_cache_leaves(c->cpu_index); i++) {
                         struct _cpuid4_info_regs this_leaf = {};
                         int retval;
 
@@ -790,14 +801,14 @@ void init_intel_cacheinfo(struct cpuinfo_x86 *c)
          * Don't use cpuid2 if cpuid4 is supported. For P4, we use cpuid2 for
          * trace cache
          */
-        if ((num_cache_leaves == 0 || c->x86 == 15) && c->cpuid_level > 1) {
+        if ((!get_num_cache_leaves(c->cpu_index) || c->x86 == 15) && c->cpuid_level > 1) {
                 /* supports eax=2 call */
                 int j, n;
                 unsigned int regs[4];
                 unsigned char *dp = (unsigned char *)regs;
                 int only_trace = 0;
 
-                if (num_cache_leaves != 0 && c->x86 == 15)
+                if (get_num_cache_leaves(c->cpu_index) && c->x86 == 15)
                         only_trace = 1;
 
                 /* Number of times to iterate */
@@ -993,12 +1004,9 @@ int init_cache_level(unsigned int cpu)
 {
         struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
 
-        if (!num_cache_leaves)
-                return -ENOENT;
         if (!this_cpu_ci)
                 return -EINVAL;
         this_cpu_ci->num_levels = 3;
-        this_cpu_ci->num_leaves = num_cache_leaves;
         return 0;
 }
 
-- 
2.25.1