From: "Ahmed S. Darwish"
To: Ingo Molnar, Dave Hansen, Borislav Petkov
Cc: Thomas Gleixner, Andrew Cooper, "H. Peter Anvin", John Ogness,
 x86@kernel.org, x86-cpuid@lists.linux.dev, LKML, "Ahmed S.
Darwish" Subject: [PATCH v2 24/29] x86/cacheinfo: Separate Intel leaf 0x4 handling Date: Mon, 17 Mar 2025 17:47:40 +0100 Message-ID: <20250317164745.4754-25-darwi@linutronix.de> In-Reply-To: <20250317164745.4754-1-darwi@linutronix.de> References: <20250317164745.4754-1-darwi@linutronix.de> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" init_intel_cacheinfo() was overly complex. It parsed leaf 0x4 data, leaf 0x2 data, and performed post-processing, all within one function. Parent commit moved leaf 0x2 parsing and the post-processing logic into their own functions. Continue the refactoring by extracting leaf 0x4 parsing into its own function. Initialize local L2/L3 topology ID variables to BAD_APICID by default, thus ensuring they can be used unconditionally. Suggested-by: Thomas Gleixner Signed-off-by: Ahmed S. Darwish --- arch/x86/kernel/cpu/cacheinfo.c | 110 ++++++++++++++++---------------- 1 file changed, 54 insertions(+), 56 deletions(-) diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinf= o.c index 9b29842b20db..f1db8182deeb 100644 --- a/arch/x86/kernel/cpu/cacheinfo.c +++ b/arch/x86/kernel/cpu/cacheinfo.c @@ -397,73 +397,71 @@ static void intel_cacheinfo_0x2(struct cpuinfo_x86 *c) intel_cacheinfo_done(c, l3, l2, l1i, l1d); } =20 -void init_intel_cacheinfo(struct cpuinfo_x86 *c) +static bool intel_cacheinfo_0x4(struct cpuinfo_x86 *c) { struct cpu_cacheinfo *ci =3D get_cpu_cacheinfo(c->cpu_index); - unsigned int l1i =3D 0, l1d =3D 0, l2 =3D 0, l3 =3D 0; - unsigned int l2_id =3D 0, l3_id =3D 0; - - if (c->cpuid_level > 3) { - /* - * There should be at least one leaf. A non-zero value means - * that the number of leaves has been initialized. - */ - if (!ci->num_leaves) - ci->num_leaves =3D find_num_cache_leaves(c); + unsigned int l2_id =3D BAD_APICID, l3_id =3D BAD_APICID; + unsigned int l1d =3D 0, l1i =3D 0, l2 =3D 0, l3 =3D 0; =20 - /* - * Whenever possible use cpuid(4), deterministic cache - * parameters cpuid leaf to find the cache details - */ - for (int i =3D 0; i < ci->num_leaves; i++) { - unsigned int num_threads_sharing, index_msb; - struct _cpuid4_info id4 =3D {}; - int retval; + if (c->cpuid_level < 4) + return false; =20 - retval =3D intel_fill_cpuid4_info(i, &id4); - if (retval < 0) - continue; + /* + * There should be at least one leaf. A non-zero value means + * that the number of leaves has been previously initialized. 
+	 */
+	if (!ci->num_leaves)
+		ci->num_leaves = find_num_cache_leaves(c);
 
-			switch (id4.eax.split.level) {
-			case 1:
-				if (id4.eax.split.type == CTYPE_DATA)
-					l1d = id4.size / 1024;
-				else if (id4.eax.split.type == CTYPE_INST)
-					l1i = id4.size / 1024;
-				break;
-			case 2:
-				l2 = id4.size / 1024;
-				num_threads_sharing = 1 + id4.eax.split.num_threads_sharing;
-				index_msb = get_count_order(num_threads_sharing);
-				l2_id = c->topo.apicid & ~((1 << index_msb) - 1);
-				break;
-			case 3:
-				l3 = id4.size / 1024;
-				num_threads_sharing = 1 + id4.eax.split.num_threads_sharing;
-				index_msb = get_count_order(num_threads_sharing);
-				l3_id = c->topo.apicid & ~((1 << index_msb) - 1);
-				break;
-			default:
-				break;
-			}
+	if (!ci->num_leaves)
+		return false;
+
+	for (int i = 0; i < ci->num_leaves; i++) {
+		unsigned int num_threads_sharing, index_msb;
+		struct _cpuid4_info id4 = {};
+		int ret;
+
+		ret = intel_fill_cpuid4_info(i, &id4);
+		if (ret < 0)
+			continue;
+
+		switch (id4.eax.split.level) {
+		case 1:
+			if (id4.eax.split.type == CTYPE_DATA)
+				l1d = id4.size / 1024;
+			else if (id4.eax.split.type == CTYPE_INST)
+				l1i = id4.size / 1024;
+			break;
+		case 2:
+			l2 = id4.size / 1024;
+			num_threads_sharing = 1 + id4.eax.split.num_threads_sharing;
+			index_msb = get_count_order(num_threads_sharing);
+			l2_id = c->topo.apicid & ~((1 << index_msb) - 1);
+			break;
+		case 3:
+			l3 = id4.size / 1024;
+			num_threads_sharing = 1 + id4.eax.split.num_threads_sharing;
+			index_msb = get_count_order(num_threads_sharing);
+			l3_id = c->topo.apicid & ~((1 << index_msb) - 1);
+			break;
+		default:
+			break;
 		}
 	}
 
+	c->topo.l2c_id = l2_id;
+	c->topo.llc_id = (l3_id == BAD_APICID) ? l2_id : l3_id;
+	intel_cacheinfo_done(c, l3, l2, l1i, l1d);
+	return true;
+}
+
+void init_intel_cacheinfo(struct cpuinfo_x86 *c)
+{
 	/* Don't use CPUID(2) if CPUID(4) is supported. */
-	if (!ci->num_leaves && c->cpuid_level > 1) {
-		intel_cacheinfo_0x2(c);
+	if (intel_cacheinfo_0x4(c))
 		return;
-	}
-
-	if (l2) {
-		c->topo.llc_id = l2_id;
-		c->topo.l2c_id = l2_id;
-	}
-
-	if (l3)
-		c->topo.llc_id = l3_id;
 
-	intel_cacheinfo_done(c, l3, l2, l1i, l1d);
+	intel_cacheinfo_0x2(c);
 }
 
 static int __cache_amd_cpumap_setup(unsigned int cpu, int index,
-- 
2.48.1
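
For reference, the APIC-ID masking done in the leaf 0x4 level-2/level-3
cases above can be illustrated with a small standalone userspace sketch.
The CPUID field value and the APIC IDs below are made-up example numbers,
and count_order() is only a local stand-in for the kernel's
get_count_order():

/*
 * CPUID(0x4) reports, in EAX[25:14], the maximum number of addressable
 * IDs for logical processors sharing the cache, minus one.  The code
 * above adds one, takes its get_count_order() (the number of APIC-ID
 * bits spanned by the sharing threads), and clears that many low bits
 * of the APIC ID, so every thread sharing the cache computes the same
 * L2/L3 topology ID.
 */
#include <stdio.h>

/* Userspace stand-in for get_count_order(): ceil(log2(n)), 0 for n = 1. */
static int count_order(unsigned int n)
{
	int order = 0;

	while ((1u << order) < n)
		order++;
	return order;
}

int main(void)
{
	/* Example: EAX[25:14] = 3, i.e. up to four threads share the cache. */
	unsigned int num_threads_sharing = 1 + 3;
	int index_msb = count_order(num_threads_sharing);

	/* Two hypothetical SMT siblings plus a thread on another core. */
	unsigned int apicids[] = { 0x10, 0x11, 0x14 };

	for (int i = 0; i < 3; i++) {
		unsigned int id = apicids[i] & ~((1u << index_msb) - 1);

		printf("apicid 0x%02x -> shared cache id 0x%02x\n",
		       apicids[i], id);
	}
	return 0;
}

With these example values, APIC IDs 0x10 and 0x11 both map to cache ID
0x10 (they share the cache), while 0x14 maps to 0x14.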