From nobody Thu Apr  2 09:29:21 2026
From: Ben Horgan <ben.horgan@arm.com>
To: ben.horgan@arm.com
Cc: amitsinght@marvell.com, baisheng.gao@unisoc.com,
	baolin.wang@linux.alibaba.com, carl@os.amperecomputing.com,
	dave.martin@arm.com, david@kernel.org, dfustini@baylibre.com,
	fenghuay@nvidia.com, gshan@redhat.com, james.morse@arm.com,
	jonathan.cameron@huawei.com,
	kobak@nvidia.com, lcherian@marvell.com,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	peternewman@google.com, punit.agrawal@oss.qualcomm.com,
	quic_jiles@quicinc.com, reinette.chatre@intel.com,
	rohit.mathew@arm.com, scott@os.amperecomputing.com,
	sdonthineni@nvidia.com, tan.shaopeng@fujitsu.com,
	xhao@linux.alibaba.com, catalin.marinas@arm.com, will@kernel.org,
	corbet@lwn.net, maz@kernel.org, oupton@kernel.org,
	joey.gouly@arm.com, suzuki.poulose@arm.com, kvmarm@lists.linux.dev,
	zengheng4@huawei.com, linux-doc@vger.kernel.org,
	Shaopeng Tan
Subject: [PATCH v6 11/40] arm64: mpam: Initialise and context switch the
 MPAMSM_EL1 register
Date: Fri, 13 Mar 2026 14:45:48 +0000
Message-ID: <20260313144617.3420416-12-ben.horgan@arm.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20260313144617.3420416-1-ben.horgan@arm.com>
References: <20260313144617.3420416-1-ben.horgan@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The MPAMSM_EL1 register sets the MPAM labels, PMG and PARTID, for loads
and stores generated by a shared SMCU. Disable the traps so the kernel
can use it, and set it to the same configuration as the per-EL cpu MPAM
configuration.

If an SMCU is not shared with other cpus then it is implementation
defined whether the configuration from MPAMSM_EL1 is used or that from
the appropriate MPAMy_ELx. As we set the same configuration, PMG_D and
PARTID_D, for MPAM0_EL1, MPAM1_EL1 and MPAMSM_EL1, the resulting
configuration is the same regardless.

The range of valid configurations for the PARTID and PMG in MPAMSM_EL1
is not currently specified in the Arm Architecture Reference Manual, but
the architect has confirmed that it is intended to be the same as that
for the cpu configuration in the MPAMy_ELx registers.
Tested-by: Gavin Shan
Tested-by: Shaopeng Tan
Tested-by: Peter Newman
Tested-by: Zeng Heng
Tested-by: Punit Agrawal
Reviewed-by: Zeng Heng
Reviewed-by: Shaopeng Tan
Reviewed-by: Jonathan Cameron
Reviewed-by: Gavin Shan
Reviewed-by: Catalin Marinas
Signed-off-by: Ben Horgan
Reviewed-by: James Morse
---
Changes since v2:
 - Mention PMG_D and PARTID_D specifically in the commit message
 - Add paragraph in commit message on range of MPAMSM_EL1 fields
Changes since v3:
 - Use cpus_have_cap() in cpu_enable_mpam()
 - add {}
---
 arch/arm64/include/asm/el2_setup.h | 3 ++-
 arch/arm64/include/asm/mpam.h      | 2 ++
 arch/arm64/kernel/cpufeature.c     | 2 ++
 arch/arm64/kernel/mpam.c           | 4 ++++
 4 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
index 85f4c1615472..4d15071a4f3f 100644
--- a/arch/arm64/include/asm/el2_setup.h
+++ b/arch/arm64/include/asm/el2_setup.h
@@ -513,7 +513,8 @@
 	check_override id_aa64pfr0, ID_AA64PFR0_EL1_MPAM_SHIFT, .Linit_mpam_\@, .Lskip_mpam_\@, x1, x2
 
 .Linit_mpam_\@:
-	msr_s	SYS_MPAM2_EL2, xzr		// use the default partition
+	mov	x0, #MPAM2_EL2_EnMPAMSM_MASK
+	msr_s	SYS_MPAM2_EL2, x0		// use the default partition,
 						// and disable lower traps
 	mrs_s	x0, SYS_MPAMIDR_EL1
 	tbz	x0, #MPAMIDR_EL1_HAS_HCR_SHIFT, .Lskip_mpam_\@	// skip if no MPAMHCR reg
diff --git a/arch/arm64/include/asm/mpam.h b/arch/arm64/include/asm/mpam.h
index 0747e0526927..6bccbfdccb87 100644
--- a/arch/arm64/include/asm/mpam.h
+++ b/arch/arm64/include/asm/mpam.h
@@ -53,6 +53,8 @@ static inline void mpam_thread_switch(struct task_struct *tsk)
 		return;
 
 	write_sysreg_s(regval | MPAM1_EL1_MPAMEN, SYS_MPAM1_EL1);
+	if (system_supports_sme())
+		write_sysreg_s(regval & (MPAMSM_EL1_PARTID_D | MPAMSM_EL1_PMG_D), SYS_MPAMSM_EL1);
 	isb();
 
 	/* Synchronising the EL0 write is left until the ERET to EL0 */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c3f900f81653..4f34e7a76f64 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2500,6 +2500,8 @@ cpu_enable_mpam(const struct arm64_cpu_capabilities *entry)
 	regval = READ_ONCE(per_cpu(arm64_mpam_current, cpu));
 
 	write_sysreg_s(regval | MPAM1_EL1_MPAMEN, SYS_MPAM1_EL1);
+	if (cpus_have_cap(ARM64_SME))
+		write_sysreg_s(regval & (MPAMSM_EL1_PARTID_D | MPAMSM_EL1_PMG_D), SYS_MPAMSM_EL1);
 	isb();
 
 	/* Synchronising the EL0 write is left until the ERET to EL0 */
diff --git a/arch/arm64/kernel/mpam.c b/arch/arm64/kernel/mpam.c
index 48ec0ffd5999..3a490de4fa12 100644
--- a/arch/arm64/kernel/mpam.c
+++ b/arch/arm64/kernel/mpam.c
@@ -28,6 +28,10 @@ static int mpam_pm_notifier(struct notifier_block *self,
 	 */
 	regval = READ_ONCE(per_cpu(arm64_mpam_current, cpu));
 	write_sysreg_s(regval | MPAM1_EL1_MPAMEN, SYS_MPAM1_EL1);
+	if (system_supports_sme()) {
+		write_sysreg_s(regval & (MPAMSM_EL1_PARTID_D | MPAMSM_EL1_PMG_D),
+			       SYS_MPAMSM_EL1);
+	}
 	isb();
 
 	write_sysreg_s(regval, SYS_MPAM0_EL1);
-- 
2.43.0