[PATCH v3 10/47] arm64: mpam: Initialise and context switch the MPAMSM_EL1 register

Ben Horgan posted 47 patches 4 weeks ago
The MPAMSM_EL1 register sets the MPAM labels, PMG and PARTID, for loads and
stores generated by a shared SMCU. Disable the traps so the kernel can use it,
and set it to the same configuration as the per-EL cpu MPAM configuration.

If an SMCU is not shared with other cpus then it is implementation
defined whether the configuration from MPAMSM_EL1 is used or that from
the appropriate MPAMy_ELx. As we set the same PMG_D and PARTID_D
configuration for MPAM0_EL1, MPAM1_EL1 and MPAMSM_EL1, the resulting
configuration is the same regardless.

The range of valid configurations for the PARTID and PMG in MPAMSM_EL1 is
not currently specified in the Arm Architecture Reference Manual, but the
architect has confirmed that it is intended to be the same as that for the
cpu configuration in the MPAMy_ELx registers.

Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since v2:
Mention PMG_D and PARTID_D specifically in the commit message
Add paragraph in commit message on range of MPAMSM_EL1 fields
---
 arch/arm64/include/asm/el2_setup.h | 3 ++-
 arch/arm64/include/asm/mpam.h      | 2 ++
 arch/arm64/kernel/cpufeature.c     | 2 ++
 arch/arm64/kernel/mpam.c           | 3 +++
 4 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
index cacd20df1786..d37984c09799 100644
--- a/arch/arm64/include/asm/el2_setup.h
+++ b/arch/arm64/include/asm/el2_setup.h
@@ -504,7 +504,8 @@
 	check_override id_aa64pfr0, ID_AA64PFR0_EL1_MPAM_SHIFT, .Linit_mpam_\@, .Lskip_mpam_\@, x1, x2
 
 .Linit_mpam_\@:
-	msr_s	SYS_MPAM2_EL2, xzr		// use the default partition
+	mov	x0, #MPAM2_EL2_EnMPAMSM_MASK
+	msr_s	SYS_MPAM2_EL2, x0		// use the default partition,
 						// and disable lower traps
 	mrs_s	x0, SYS_MPAMIDR_EL1
 	tbz	x0, #MPAMIDR_EL1_HAS_HCR_SHIFT, .Lskip_mpam_\@  // skip if no MPAMHCR reg
diff --git a/arch/arm64/include/asm/mpam.h b/arch/arm64/include/asm/mpam.h
index 14011e5970ce..7b3d3abad162 100644
--- a/arch/arm64/include/asm/mpam.h
+++ b/arch/arm64/include/asm/mpam.h
@@ -53,6 +53,8 @@ static inline void mpam_thread_switch(struct task_struct *tsk)
 		return;
 
 	write_sysreg_s(regval, SYS_MPAM1_EL1);
+	if (system_supports_sme())
+		write_sysreg_s(regval & (MPAMSM_EL1_PARTID_D | MPAMSM_EL1_PMG_D), SYS_MPAMSM_EL1);
 	isb();
 
 	/* Synchronising the EL0 write is left until the ERET to EL0 */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 0cdfb3728f43..2ede543b3eeb 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2491,6 +2491,8 @@ cpu_enable_mpam(const struct arm64_cpu_capabilities *entry)
 		regval = READ_ONCE(per_cpu(arm64_mpam_current, cpu));
 
 	write_sysreg_s(regval, SYS_MPAM1_EL1);
+	if (system_supports_sme())
+		write_sysreg_s(regval & (MPAMSM_EL1_PARTID_D | MPAMSM_EL1_PMG_D), SYS_MPAMSM_EL1);
 	isb();
 
 	/* Synchronising the EL0 write is left until the ERET to EL0 */
diff --git a/arch/arm64/kernel/mpam.c b/arch/arm64/kernel/mpam.c
index dbe0a2d05abb..6ce4a36469ce 100644
--- a/arch/arm64/kernel/mpam.c
+++ b/arch/arm64/kernel/mpam.c
@@ -28,6 +28,9 @@ static int mpam_pm_notifier(struct notifier_block *self,
 		 */
 		regval = READ_ONCE(per_cpu(arm64_mpam_current, cpu));
 		write_sysreg_s(regval, SYS_MPAM1_EL1);
+		if (system_supports_sme())
+			write_sysreg_s(regval & (MPAMSM_EL1_PARTID_D | MPAMSM_EL1_PMG_D),
+				       SYS_MPAMSM_EL1);
 		isb();
 
 		write_sysreg_s(regval, SYS_MPAM0_EL1);
-- 
2.43.0
Re: [PATCH v3 10/47] arm64: mpam: Initialise and context switch the MPAMSM_EL1 register
Posted by Gavin Shan 3 weeks ago
Hi Ben,

On 1/13/26 12:58 AM, Ben Horgan wrote:
> The MPAMSM_EL1 sets the MPAM labels, PMG and PARTID, for loads and stores
> generated by a shared SMCU. Disable the traps so the kernel can use it and
> set it to the same configuration as the per-EL cpu MPAM configuration.
> 
> If an SMCU is not shared with other cpus then it is implementation
> defined whether the configuration from MPAMSM_EL1 is used or that from
> the appropriate MPAMy_ELx. As we set the same, PMG_D and PARTID_D,
> configuration for MPAM0_EL1, MPAM1_EL1 and MPAMSM_EL1 the resulting
> configuration is the same regardless.
> 
> The range of valid configurations for the PARTID and PMG in MPAMSM_EL1 is
> not currently specified in Arm Architectural Reference Manual but the
> architect has confirmed that it is intended to be the same as that for the
> cpu configuration in the MPAMy_ELx registers.
> 
> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
> ---
> Changes since v2:
> Mention PMG_D and PARTID_D specifically int he commit message
> Add paragraph in commit message on range of MPAMSM_EL1 fields
> ---
>   arch/arm64/include/asm/el2_setup.h | 3 ++-
>   arch/arm64/include/asm/mpam.h      | 2 ++
>   arch/arm64/kernel/cpufeature.c     | 2 ++
>   arch/arm64/kernel/mpam.c           | 3 +++
>   4 files changed, 9 insertions(+), 1 deletion(-)
> 

One nitpick below...

Reviewed-by: Gavin Shan <gshan@redhat.com>

> diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
> index cacd20df1786..d37984c09799 100644
> --- a/arch/arm64/include/asm/el2_setup.h
> +++ b/arch/arm64/include/asm/el2_setup.h
> @@ -504,7 +504,8 @@
>   	check_override id_aa64pfr0, ID_AA64PFR0_EL1_MPAM_SHIFT, .Linit_mpam_\@, .Lskip_mpam_\@, x1, x2
>   
>   .Linit_mpam_\@:
> -	msr_s	SYS_MPAM2_EL2, xzr		// use the default partition
> +	mov	x0, #MPAM2_EL2_EnMPAMSM_MASK
> +	msr_s	SYS_MPAM2_EL2, x0		// use the default partition,
>   						// and disable lower traps
>   	mrs_s	x0, SYS_MPAMIDR_EL1
>   	tbz	x0, #MPAMIDR_EL1_HAS_HCR_SHIFT, .Lskip_mpam_\@  // skip if no MPAMHCR reg
> diff --git a/arch/arm64/include/asm/mpam.h b/arch/arm64/include/asm/mpam.h
> index 14011e5970ce..7b3d3abad162 100644
> --- a/arch/arm64/include/asm/mpam.h
> +++ b/arch/arm64/include/asm/mpam.h
> @@ -53,6 +53,8 @@ static inline void mpam_thread_switch(struct task_struct *tsk)
>   		return;
>   
>   	write_sysreg_s(regval, SYS_MPAM1_EL1);
> +	if (system_supports_sme())
> +		write_sysreg_s(regval & (MPAMSM_EL1_PARTID_D | MPAMSM_EL1_PMG_D), SYS_MPAMSM_EL1);
>   	isb();
>   
>   	/* Synchronising the EL0 write is left until the ERET to EL0 */
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 0cdfb3728f43..2ede543b3eeb 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2491,6 +2491,8 @@ cpu_enable_mpam(const struct arm64_cpu_capabilities *entry)
>   		regval = READ_ONCE(per_cpu(arm64_mpam_current, cpu));
>   
>   	write_sysreg_s(regval, SYS_MPAM1_EL1);
> +	if (system_supports_sme())
> +		write_sysreg_s(regval & (MPAMSM_EL1_PARTID_D | MPAMSM_EL1_PMG_D), SYS_MPAMSM_EL1);
>   	isb();
>   
>   	/* Synchronising the EL0 write is left until the ERET to EL0 */
> diff --git a/arch/arm64/kernel/mpam.c b/arch/arm64/kernel/mpam.c
> index dbe0a2d05abb..6ce4a36469ce 100644
> --- a/arch/arm64/kernel/mpam.c
> +++ b/arch/arm64/kernel/mpam.c
> @@ -28,6 +28,9 @@ static int mpam_pm_notifier(struct notifier_block *self,
>   		 */
>   		regval = READ_ONCE(per_cpu(arm64_mpam_current, cpu));
>   		write_sysreg_s(regval, SYS_MPAM1_EL1);
> +		if (system_supports_sme())
> +			write_sysreg_s(regval & (MPAMSM_EL1_PARTID_D | MPAMSM_EL1_PMG_D),
> +				       SYS_MPAMSM_EL1);

{ } is missed here.

>   		isb();
>   
>   		write_sysreg_s(regval, SYS_MPAM0_EL1);

Thanks,
Gavin
Re: [PATCH v3 10/47] arm64: mpam: Initialise and context switch the MPAMSM_EL1 register
Posted by Ben Horgan 3 weeks ago
Hi Gavin,

On 1/19/26 06:51, Gavin Shan wrote:
> Hi Ben,
> 
> On 1/13/26 12:58 AM, Ben Horgan wrote:
>> The MPAMSM_EL1 sets the MPAM labels, PMG and PARTID, for loads and stores
>> generated by a shared SMCU. Disable the traps so the kernel can use it
>> and
>> set it to the same configuration as the per-EL cpu MPAM configuration.
>>
>> If an SMCU is not shared with other cpus then it is implementation
>> defined whether the configuration from MPAMSM_EL1 is used or that from
>> the appropriate MPAMy_ELx. As we set the same, PMG_D and PARTID_D,
>> configuration for MPAM0_EL1, MPAM1_EL1 and MPAMSM_EL1 the resulting
>> configuration is the same regardless.
>>
>> The range of valid configurations for the PARTID and PMG in MPAMSM_EL1 is
>> not currently specified in Arm Architectural Reference Manual but the
>> architect has confirmed that it is intended to be the same as that for
>> the
>> cpu configuration in the MPAMy_ELx registers.
>>
>> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
>> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
>> ---
>> Changes since v2:
>> Mention PMG_D and PARTID_D specifically int he commit message
>> Add paragraph in commit message on range of MPAMSM_EL1 fields
>> ---
>>   arch/arm64/include/asm/el2_setup.h | 3 ++-
>>   arch/arm64/include/asm/mpam.h      | 2 ++
>>   arch/arm64/kernel/cpufeature.c     | 2 ++
>>   arch/arm64/kernel/mpam.c           | 3 +++
>>   4 files changed, 9 insertions(+), 1 deletion(-)
>>
> 
> One nitpick below...
> 
> Reviewed-by: Gavin Shan <gshan@redhat.com>
> 
>> diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/
>> asm/el2_setup.h
>> index cacd20df1786..d37984c09799 100644
>> --- a/arch/arm64/include/asm/el2_setup.h
>> +++ b/arch/arm64/include/asm/el2_setup.h
>> @@ -504,7 +504,8 @@
>>       check_override id_aa64pfr0,
>> ID_AA64PFR0_EL1_MPAM_SHIFT, .Linit_mpam_\@, .Lskip_mpam_\@, x1, x2
>>     .Linit_mpam_\@:
>> -    msr_s    SYS_MPAM2_EL2, xzr        // use the default partition
>> +    mov    x0, #MPAM2_EL2_EnMPAMSM_MASK
>> +    msr_s    SYS_MPAM2_EL2, x0        // use the default partition,
>>                           // and disable lower traps
>>       mrs_s    x0, SYS_MPAMIDR_EL1
>>       tbz    x0, #MPAMIDR_EL1_HAS_HCR_SHIFT, .Lskip_mpam_\@  // skip
>> if no MPAMHCR reg
>> diff --git a/arch/arm64/include/asm/mpam.h b/arch/arm64/include/asm/
>> mpam.h
>> index 14011e5970ce..7b3d3abad162 100644
>> --- a/arch/arm64/include/asm/mpam.h
>> +++ b/arch/arm64/include/asm/mpam.h
>> @@ -53,6 +53,8 @@ static inline void mpam_thread_switch(struct
>> task_struct *tsk)
>>           return;
>>         write_sysreg_s(regval, SYS_MPAM1_EL1);
>> +    if (system_supports_sme())
>> +        write_sysreg_s(regval & (MPAMSM_EL1_PARTID_D |
>> MPAMSM_EL1_PMG_D), SYS_MPAMSM_EL1);
>>       isb();
>>         /* Synchronising the EL0 write is left until the ERET to EL0 */
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/
>> cpufeature.c
>> index 0cdfb3728f43..2ede543b3eeb 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -2491,6 +2491,8 @@ cpu_enable_mpam(const struct
>> arm64_cpu_capabilities *entry)
>>           regval = READ_ONCE(per_cpu(arm64_mpam_current, cpu));
>>         write_sysreg_s(regval, SYS_MPAM1_EL1);
>> +    if (system_supports_sme())
>> +        write_sysreg_s(regval & (MPAMSM_EL1_PARTID_D |
>> MPAMSM_EL1_PMG_D), SYS_MPAMSM_EL1);
>>       isb();
>>         /* Synchronising the EL0 write is left until the ERET to EL0 */
>> diff --git a/arch/arm64/kernel/mpam.c b/arch/arm64/kernel/mpam.c
>> index dbe0a2d05abb..6ce4a36469ce 100644
>> --- a/arch/arm64/kernel/mpam.c
>> +++ b/arch/arm64/kernel/mpam.c
>> @@ -28,6 +28,9 @@ static int mpam_pm_notifier(struct notifier_block
>> *self,
>>            */
>>           regval = READ_ONCE(per_cpu(arm64_mpam_current, cpu));
>>           write_sysreg_s(regval, SYS_MPAM1_EL1);
>> +        if (system_supports_sme())
>> +            write_sysreg_s(regval & (MPAMSM_EL1_PARTID_D |
>> MPAMSM_EL1_PMG_D),
>> +                       SYS_MPAMSM_EL1);
> 
> { } is missed here.

Added.

> 
>>           isb();
>>             write_sysreg_s(regval, SYS_MPAM0_EL1);
> 
> Thanks,
> Gavin
> 

Thanks,

Ben

Re: [PATCH v3 10/47] arm64: mpam: Initialise and context switch the MPAMSM_EL1 register
Posted by Catalin Marinas 3 weeks, 4 days ago
On Mon, Jan 12, 2026 at 04:58:37PM +0000, Ben Horgan wrote:
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 0cdfb3728f43..2ede543b3eeb 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2491,6 +2491,8 @@ cpu_enable_mpam(const struct arm64_cpu_capabilities *entry)
>  		regval = READ_ONCE(per_cpu(arm64_mpam_current, cpu));
>  
>  	write_sysreg_s(regval, SYS_MPAM1_EL1);
> +	if (system_supports_sme())
> +		write_sysreg_s(regval & (MPAMSM_EL1_PARTID_D | MPAMSM_EL1_PMG_D), SYS_MPAMSM_EL1);
>  	isb();

Do we know for sure that system_supports_sme() returns true at this
point (if SME supported)? Digging into the code, system_supports_sme()
uses alternative_has_cap_unlikely() which relies on instruction
patching. setup_system_capabilities(), IIUC, patches the alternatives
after enable_cpu_capabilities(). I think you better use cpus_have_cap().

-- 
Catalin
Re: [PATCH v3 10/47] arm64: mpam: Initialise and context switch the MPAMSM_EL1 register
Posted by Ben Horgan 3 weeks ago
Hi Catalin,

On 1/15/26 19:08, Catalin Marinas wrote:
> On Mon, Jan 12, 2026 at 04:58:37PM +0000, Ben Horgan wrote:
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index 0cdfb3728f43..2ede543b3eeb 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -2491,6 +2491,8 @@ cpu_enable_mpam(const struct arm64_cpu_capabilities *entry)
>>  		regval = READ_ONCE(per_cpu(arm64_mpam_current, cpu));
>>  
>>  	write_sysreg_s(regval, SYS_MPAM1_EL1);
>> +	if (system_supports_sme())
>> +		write_sysreg_s(regval & (MPAMSM_EL1_PARTID_D | MPAMSM_EL1_PMG_D), SYS_MPAMSM_EL1);
>>  	isb();
> 
> Do we know for sure that system_supports_sme() returns true at this
> point (if SME supported)? Digging into the code, system_supports_sme()
> uses alternative_has_cap_unlikely() which relies on instruction
> patching. setup_system_capabilities(), IIUC, patches the alternatives
> after enable_cpu_capabilities(). I think you better use cpus_have_cap().
> 

I'll switch to cpus_have_cap().

Thanks,

Ben