[PATCH] arm/smmu: Complete SMR masking support
Posted by Michal Orzel 2 months, 2 weeks ago
SMR masking support allows deriving a mask either using a 2-cell iommu
specifier (per master) or stream-match-mask SMMU dt property (global
config). Even though the mask is stored in the fwid when adding a
device (in arm_smmu_dt_xlate_generic()), we still set it to 0 when
allocating SMEs (in arm_smmu_master_alloc_smes()). As a result, we
always ignore the mask when programming SMRn registers. This leads to
SMMU failures. Fix it by completing the support.

A bit of history:
Linux support for SMR allocation was mainly done with:
588888a7399d ("iommu/arm-smmu: Intelligent SMR allocation")
021bb8420d44 ("iommu/arm-smmu: Wire up generic configuration support")

Taking the mask into account in arm_smmu_master_alloc_smes() was added
as part of the second commit, although quite hidden in the thicket of
other changes. We backported only the first patch with: 0435784cc75d
("xen/arm: smmuv1: Intelligent SMR allocation") but the changes to take
the mask into account were missed.

Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
 xen/drivers/passthrough/arm/smmu.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index f2cee82f553a..4c8a446754cc 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -1619,19 +1619,21 @@ static int arm_smmu_master_alloc_smes(struct device *dev)
 	spin_lock(&smmu->stream_map_lock);
 	/* Figure out a viable stream map entry allocation */
 	for_each_cfg_sme(cfg, i, idx, fwspec->num_ids) {
+		uint16_t mask = (fwspec->ids[i] >> SMR_MASK_SHIFT) & SMR_MASK_MASK;
+
 		if (idx != INVALID_SMENDX) {
 			ret = -EEXIST;
 			goto out_err;
 		}
 
-		ret = arm_smmu_find_sme(smmu, fwspec->ids[i], 0);
+		ret = arm_smmu_find_sme(smmu, fwspec->ids[i], mask);
 		if (ret < 0)
 			goto out_err;
 
 		idx = ret;
 		if (smrs && smmu->s2crs[idx].count == 0) {
 			smrs[idx].id = fwspec->ids[i];
-			smrs[idx].mask = 0; /* We don't currently share SMRs */
+			smrs[idx].mask = mask;
 			smrs[idx].valid = true;
 		}
 		smmu->s2crs[idx].count++;
-- 
2.25.1
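
For readers less familiar with the generic IOMMU bindings, the standalone sketch
below (plain C, not the Xen driver code; pack_fwid() and fwid_to_mask() are
hypothetical helpers for illustration) shows how a 2-cell iommu specifier, or the
stream-match-mask property, conceptually ends up packed into fwspec->ids[] by the
xlate path and how the hunk above recovers the mask again. The field layout is
assumed from the driver's SMR handling: stream ID in the low bits, mask shifted up
by SMR_MASK_SHIFT (16), with SMR_MASK_MASK being 0x7fff as mentioned later in the
thread.

#include <stdint.h>

#define SMR_MASK_SHIFT 16
#define SMR_MASK_MASK  0x7fffU

/* Roughly what the 2-cell xlate path does: pack <id, mask> into one fwid. */
static uint32_t pack_fwid(uint16_t id, uint16_t mask)
{
    return ((uint32_t)(mask & SMR_MASK_MASK) << SMR_MASK_SHIFT) | id;
}

/* What the patched arm_smmu_master_alloc_smes() now recovers per stream ID. */
static uint16_t fwid_to_mask(uint32_t fwid)
{
    return (fwid >> SMR_MASK_SHIFT) & SMR_MASK_MASK;
}

Before the patch, the mask handed to arm_smmu_find_sme() and stored in
smrs[idx].mask was hard-coded to 0, so whatever the xlate path had encoded was
discarded; the fix simply feeds the extracted mask back in.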
Re: [PATCH] arm/smmu: Complete SMR masking support
Posted by Rahul Singh 2 months, 1 week ago
Hi Michal,


> On 4 Sep 2024, at 1:43 PM, Michal Orzel <michal.orzel@amd.com> wrote:
> 
> SMR masking support allows deriving a mask either using a 2-cell iommu
> specifier (per master) or stream-match-mask SMMU dt property (global
> config). Even though the mask is stored in the fwid when adding a
> device (in arm_smmu_dt_xlate_generic()), we still set it to 0 when
> allocating SMEs (in arm_smmu_master_alloc_smes()). As a result, we
> always ignore the mask when programming SMRn registers. This leads to
> SMMU failures. Fix it by completing the support.
> 
> A bit of history:
> Linux support for SMR allocation was mainly done with:
> 588888a7399d ("iommu/arm-smmu: Intelligent SMR allocation")
> 021bb8420d44 ("iommu/arm-smmu: Wire up generic configuration support")
> 
> Taking the mask into account in arm_smmu_master_alloc_smes() was added
> as part of the second commit, although quite hidden in the thicket of
> other changes. We backported only the first patch with: 0435784cc75d
> ("xen/arm: smmuv1: Intelligent SMR allocation") but the changes to take
> the mask into account were missed.
> 
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Rahul Singh <rahul.singh@arm.com>

Regards,
Rahul


Re: [PATCH] arm/smmu: Complete SMR masking support
Posted by Julien Grall 2 months ago
Hi,


On 12/09/2024 17:59, Rahul Singh wrote:
>> On 4 Sep 2024, at 1:43 PM, Michal Orzel <michal.orzel@amd.com> wrote:
>>
>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> 
> Reviewed-by: Rahul Singh <rahul.singh@arm.com>

Committed.

Cheers,

-- 
Julien Grall


Re: [PATCH] arm/smmu: Complete SMR masking support
Posted by Andrew Cooper 2 months, 2 weeks ago
On 04/09/2024 1:43 pm, Michal Orzel wrote:
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index f2cee82f553a..4c8a446754cc 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -1619,19 +1619,21 @@ static int arm_smmu_master_alloc_smes(struct device *dev)
>  	spin_lock(&smmu->stream_map_lock);
>  	/* Figure out a viable stream map entry allocation */
>  	for_each_cfg_sme(cfg, i, idx, fwspec->num_ids) {
> +		uint16_t mask = (fwspec->ids[i] >> SMR_MASK_SHIFT) & SMR_MASK_MASK;

Not related to the change itself, but consider using MASK_EXTR() ?

It reduces code volume for the reader, and removes a class of errors
caused by accidentally mismatching the mask/shift constants.

In x86 in particular, we're trying to remove the SHIFT constants and
only have the MASKs.

Although it looks like this is an abnormal pair to begin with, being
shift then mask, rather than the more usual mask then shift.

~Andrew
Re: [PATCH] arm/smmu: Complete SMR masking support
Posted by Michal Orzel 2 months, 1 week ago

On 04/09/2024 14:50, Andrew Cooper wrote:
> 
> 
> On 04/09/2024 1:43 pm, Michal Orzel wrote:
>> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
>> index f2cee82f553a..4c8a446754cc 100644
>> --- a/xen/drivers/passthrough/arm/smmu.c
>> +++ b/xen/drivers/passthrough/arm/smmu.c
>> @@ -1619,19 +1619,21 @@ static int arm_smmu_master_alloc_smes(struct device *dev)
>>       spin_lock(&smmu->stream_map_lock);
>>       /* Figure out a viable stream map entry allocation */
>>       for_each_cfg_sme(cfg, i, idx, fwspec->num_ids) {
>> +             uint16_t mask = (fwspec->ids[i] >> SMR_MASK_SHIFT) & SMR_MASK_MASK;
> 
> Not related to the change itself, but consider using MASK_EXTR() ?
> 
> It reduces code volume for the reader, and removes a class of errors
> caused by accidentally mismatching the mask/shift constants.
> 
> In x86 in particular, we're trying to remove the SHIFT constants and
> only have the MASKs.
> 
> Although it looks like this is an abnormal pair to begin with, being
> shift then mask, rather than the more usual mask then shift.
Thanks for the reminder about MASK_EXTR. However, this won't work here. SMR_MASK_MASK is
defined as 0x7FFF and is used elsewhere in the code together with the shift. It would need
to be defined as 0x7FFF0000 (and thus reflect the actual mask field of the register) to
work with MASK_EXTR.

~Michal
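
To make the exchange above concrete, here is a small standalone sketch. MASK_EXTR
is reproduced locally (AND with the mask, then divide by the mask's lowest set bit,
matching the Xen macro); SMR_MASK_REGMASK is a hypothetical name for the
register-relative 0x7FFF0000 value Michal mentions, and the id/mask values are made
up for illustration.

#include <assert.h>
#include <stdint.h>

/* Extract a bit field: AND with the mask, then divide by its lowest set bit. */
#define MASK_EXTR(v, m) (((v) & (m)) / ((m) & -(m)))

#define SMR_MASK_SHIFT    16
#define SMR_MASK_MASK     0x7fffU      /* field-relative value used today */
#define SMR_MASK_REGMASK  0x7fff0000U  /* hypothetical register-relative mask */

int main(void)
{
    /* fwid carrying mask 0x123 and stream ID 0x45. */
    uint32_t fwid = (0x123U << SMR_MASK_SHIFT) | 0x45U;

    /* Current style: explicit shift, then mask. */
    assert(((fwid >> SMR_MASK_SHIFT) & SMR_MASK_MASK) == 0x123);

    /* MASK_EXTR() gives the same result, but only with a mask describing
     * the field's position within the register... */
    assert(MASK_EXTR(fwid, SMR_MASK_REGMASK) == 0x123);

    /* ...whereas the existing 0x7FFF definition would extract the low
     * (stream ID) bits instead. */
    assert(MASK_EXTR(fwid, SMR_MASK_MASK) == 0x45);

    return 0;
}

In other words, the explicit shift-and-mask in the committed patch stays as is
unless SMR_MASK_MASK were redefined to describe the field's position within the
register.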