From: Ben Horgan
To: ben.horgan@arm.com
Cc: amitsinght@marvell.com, baisheng.gao@unisoc.com, baolin.wang@linux.alibaba.com,
	carl@os.amperecomputing.com, dave.martin@arm.com, david@kernel.org,
	dfustini@baylibre.com, fenghuay@nvidia.com, gshan@redhat.com,
	james.morse@arm.com, jonathan.cameron@huawei.com, kobak@nvidia.com,
	lcherian@marvell.com, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, peternewman@google.com,
	punit.agrawal@oss.qualcomm.com, quic_jiles@quicinc.com,
	reinette.chatre@intel.com, rohit.mathew@arm.com,
	scott@os.amperecomputing.com, sdonthineni@nvidia.com,
	tan.shaopeng@fujitsu.com, xhao@linux.alibaba.com, catalin.marinas@arm.com,
	will@kernel.org, corbet@lwn.net, maz@kernel.org, oupton@kernel.org,
	joey.gouly@arm.com, suzuki.poulose@arm.com, kvmarm@lists.linux.dev,
	zengheng4@huawei.com, linux-doc@vger.kernel.org, Shaopeng Tan
Subject: [PATCH v4 05/41] arm64: mpam: Re-initialise MPAM regs when CPU comes online
Date: Tue, 3 Feb 2026 21:43:06 +0000
Message-ID: <20260203214342.584712-6-ben.horgan@arm.com>
In-Reply-To: <20260203214342.584712-1-ben.horgan@arm.com>
References: <20260203214342.584712-1-ben.horgan@arm.com>

From: James Morse

Now that the MPAM system registers are expected to have values that
change, reprogram them based on the previous value when a CPU is
brought online.

Previously MPAM's 'default PARTID' of 0 was always used for MPAM in
kernel-space, as this is the PARTID that hardware guarantees to reset.
Because there are a limited number of PARTIDs, this value is exposed to
user-space, meaning resctrl changes to the resctrl default group would
also affect kernel threads. Instead, use the task's PARTID value for
kernel work done on behalf of user-space too. The default of 0 is kept
for both user-space and kernel-space when MPAM is not enabled.

Tested-by: Gavin Shan
Tested-by: Shaopeng Tan
Tested-by: Peter Newman
Reviewed-by: Jonathan Cameron
Signed-off-by: James Morse
Reviewed-by: Gavin Shan
Signed-off-by: Ben Horgan
Reviewed-by: Catalin Marinas
---
Changes since rfc:
 * CONFIG_MPAM -> CONFIG_ARM64_MPAM
 * Check mpam_enabled
 * Comment about relying on ERET for synchronisation
 * Update commit message

Changes since v3:
 * Always set MPAM1_EL1.MPAMEN rather than relying on it being read only
---
 arch/arm64/kernel/cpufeature.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c840a93b9ef9..343018c6159f 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -86,6 +86,7 @@
 #include
 #include
 #include
+#include <asm/mpam.h>
 #include
 #include
 #include
@@ -2483,13 +2484,17 @@ test_has_mpam(const struct arm64_cpu_capabilities *entry, int scope)
 
 static void cpu_enable_mpam(const struct arm64_cpu_capabilities *entry)
 {
-	/*
-	 * Access by the kernel (at EL1) should use the reserved PARTID
-	 * which is configured unrestricted. This avoids priority-inversion
-	 * where latency sensitive tasks have to wait for a task that has
-	 * been throttled to release the lock.
-	 */
-	write_sysreg_s(0, SYS_MPAM1_EL1);
+	int cpu = smp_processor_id();
+	u64 regval = 0;
+
+	if (IS_ENABLED(CONFIG_ARM64_MPAM) && static_branch_likely(&mpam_enabled))
+		regval = READ_ONCE(per_cpu(arm64_mpam_current, cpu));
+
+	write_sysreg_s(regval | MPAM1_EL1_MPAMEN, SYS_MPAM1_EL1);
+	isb();
+
+	/* Synchronising the EL0 write is left until the ERET to EL0 */
+	write_sysreg_s(regval, SYS_MPAM0_EL1);
 }
 
 static bool
-- 
2.43.0
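
[Editor's note, not part of the patch] For readers unfamiliar with the flow:
cpu_enable_mpam() above restores whatever value has been recorded in the
per-CPU arm64_mpam_current variable, so the rest of the series has to keep
that shadow value in sync whenever the running task's PARTID/PMG changes.
Below is a minimal, hedged sketch of what such an update path could look
like at context switch. It is an illustration only: sketch_mpam_thread_switch()
and mpam_task_regval() are hypothetical names, and only arm64_mpam_current,
MPAM1_EL1_MPAMEN and the SYS_MPAM*_EL1 writes mirror the patch itself.

#include <linux/compiler.h>
#include <linux/percpu.h>
#include <linux/sched.h>
#include <asm/barrier.h>
#include <asm/sysreg.h>

DECLARE_PER_CPU(u64, arm64_mpam_current);

/* Sketch only: called from the context-switch path, preemption disabled. */
static inline void sketch_mpam_thread_switch(struct task_struct *next)
{
	/* Hypothetical helper: @next's PARTID/PMG packed in the MPAMn_EL1 layout */
	u64 regval = mpam_task_regval(next);
	u64 *cur = this_cpu_ptr(&arm64_mpam_current);

	if (READ_ONCE(*cur) == regval)
		return;

	/* Record the value so a later offline/online cycle restores it. */
	WRITE_ONCE(*cur, regval);

	/* EL1 regime: keep MPAMEN set, as cpu_enable_mpam() does. */
	write_sysreg_s(regval | MPAM1_EL1_MPAMEN, SYS_MPAM1_EL1);
	isb();

	/* EL0 regime: synchronised by the eventual ERET to EL0. */
	write_sysreg_s(regval, SYS_MPAM0_EL1);
}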