[patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock

Michal Orzel posted 1 patch 3 years, 1 month ago
Test gitlab-ci failed
Patches applied successfully
git fetch https://gitlab.com/xen-project/patchew/xen tags/patchew/20211026122903.15042-1-michal.orzel@arm.com
xen/drivers/passthrough/arm/smmu.c | 25 +++++++++++++++++++------
1 file changed, 19 insertions(+), 6 deletions(-)
[patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock
Posted by Michal Orzel 3 years, 1 month ago
If a device is added to SMMUv1/v2 from DT and PCI
at the same time, there is a concurrent access
to the smmu master list. This could lead to a
scenario where one is looking into a list that
is being modified at the same time. Add a lock
to prevent this issue.

Reuse the existing spinlock arm_smmu_devices_lock
as it is already protecting find_smmu_master.

ipmmu-smmu and smmuv3 are not impacted by this
issue as there is no access or modification of
a global resource during add_device.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
This patch aims for 4.16 release.
Benefits:
Remove a bug that could lead to a corruption of the
smmu master list, which would be very hard to debug.
Risks:
Overall the risk is low as we are touching init code
rather than runtime code. In case of any issue, the
problem would be caught during system boot or guest
start.
---
 xen/drivers/passthrough/arm/smmu.c | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index c9dfc4caa0..be62a66a28 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -820,21 +820,25 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
 					 struct device *dev,
 					 struct iommu_fwspec *fwspec)
 {
-	int i;
+	int i, ret;
 	struct arm_smmu_master *master;
 	struct device_node *dev_node = dev_get_dev_node(dev);
 
+	spin_lock(&arm_smmu_devices_lock);
 	master = find_smmu_master(smmu, dev_node);
 	if (master) {
 		dev_err(dev,
 			"rejecting multiple registrations for master device %s\n",
 			dev_node->name);
-		return -EBUSY;
+		ret = -EBUSY;
+		goto out_unlock;
 	}
 
 	master = devm_kzalloc(dev, sizeof(*master), GFP_KERNEL);
-	if (!master)
-		return -ENOMEM;
+	if (!master) {
+		ret = -ENOMEM;
+		goto out_unlock;
+	}
 	master->of_node = dev_node;
 
 	/* Xen: Let Xen know that the device is protected by an SMMU */
@@ -846,11 +850,17 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
 			dev_err(dev,
 				"stream ID for master device %s greater than maximum allowed (%d)\n",
 				dev_node->name, smmu->num_mapping_groups);
-			return -ERANGE;
+			ret = -ERANGE;
+			goto out_unlock;
 		}
 		master->cfg.smendx[i] = INVALID_SMENDX;
 	}
-	return insert_smmu_master(smmu, master);
+
+	ret = insert_smmu_master(smmu, master);
+
+out_unlock:
+	spin_unlock(&arm_smmu_devices_lock);
+	return ret;
 }
 
 static int register_smmu_master(struct arm_smmu_device *smmu,
@@ -2056,7 +2066,10 @@ static int arm_smmu_add_device(struct device *dev)
 	} else {
 		struct arm_smmu_master *master;
 
+		spin_lock(&arm_smmu_devices_lock);
 		master = find_smmu_master(smmu, dev->of_node);
+		spin_unlock(&arm_smmu_devices_lock);
+
 		if (!master) {
 			return -ENODEV;
 		}
-- 
2.29.0
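
For context, a minimal sketch of the pieces the patch reuses: the
existing spinlock and the lookup it already protects. This paraphrases
the Linux arm-smmu driver this file was ported from, rather than
quoting the Xen tree verbatim:

static DEFINE_SPINLOCK(arm_smmu_devices_lock);

static struct arm_smmu_master *find_smmu_master(struct arm_smmu_device *smmu,
						struct device_node *dev_node)
{
	/* Masters are kept in a per-SMMU rb-tree, keyed by DT node. */
	struct rb_node *node = smmu->masters.rb_node;

	while (node) {
		struct arm_smmu_master *master =
			container_of(node, struct arm_smmu_master, node);

		if (dev_node < master->of_node)
			node = node->rb_left;
		else if (dev_node > master->of_node)
			node = node->rb_right;
		else
			return master;
	}

	return NULL;
}

insert_smmu_master() links a new node into the same rb-tree, which is
why the lookup and the insertion need to be serialised.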


Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock
Posted by Julien Grall 3 years, 1 month ago
Hi Michal,

On 26/10/2021 13:29, Michal Orzel wrote:
> If a device is added to SMMUv1/v2 from DT and PCI
> at the same time, there is a concurrent access
> to a smmu master list. This could lead to a
> scenario where one is looking into a list that
> is being modified at the same time. Add a lock
> to prevent this issue.

Did you intend to say "Hold" rather than "Add"?

> 
> Reuse the existing spinlock arm_smmu_devices_lock
> as it is already protecting find_smmu_master.
> 
> ipmmu-smmu and smmuv3 are not impacted by this
> issue as there is no access or modification of
> a global resource during add_device.
> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
> ---
> This patch aims for 4.16 release.
> Benefits:
> Remove a bug that could lead to a corruption of the
> smmu master list, which would be very hard to debug.

From my understanding, this corruption would only happen with
CONFIG_HAS_PCI. At the moment, this is an experimental feature, as it
is not fully complete.

> Risks:
> Overall the risk is low as we are touching init code
> rather than a runtime one. In case of any issue, the
> problem would be catched during system boot or guest
> start.

With the PCI feature enabled, this can be called at runtime as it is
plumbed through a PHYSDEVOP. That said, it doesn't matter here as for
supported code (platform/DT devices), this will only happen during boot.

The patch looks straightforward, so I would not mind having it in Xen
4.16. Ian, what do you think?

> ---
>   xen/drivers/passthrough/arm/smmu.c | 25 +++++++++++++++++++------
>   1 file changed, 19 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index c9dfc4caa0..be62a66a28 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -820,21 +820,25 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
>   					 struct device *dev,
>   					 struct iommu_fwspec *fwspec)
>   {
> -	int i;
> +	int i, ret;
>   	struct arm_smmu_master *master;
>   	struct device_node *dev_node = dev_get_dev_node(dev);
>   
> +	spin_lock(&arm_smmu_devices_lock);
>   	master = find_smmu_master(smmu, dev_node);
>   	if (master) {
>   		dev_err(dev,
>   			"rejecting multiple registrations for master device %s\n",
>   			dev_node->name);
> -		return -EBUSY;
> +		ret = -EBUSY;
> +		goto out_unlock;
>   	}
>   
>   	master = devm_kzalloc(dev, sizeof(*master), GFP_KERNEL);
> -	if (!master)
> -		return -ENOMEM;
> +	if (!master) {
> +		ret = -ENOMEM;
> +		goto out_unlock;
> +	}
>   	master->of_node = dev_node;
>   
>   	/* Xen: Let Xen know that the device is protected by an SMMU */
> @@ -846,11 +850,17 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
>   			dev_err(dev,
>   				"stream ID for master device %s greater than maximum allowed (%d)\n",
>   				dev_node->name, smmu->num_mapping_groups);
> -			return -ERANGE;
> +			ret = -ERANGE;
> +			goto out_unlock;
>   		}
>   		master->cfg.smendx[i] = INVALID_SMENDX;
>   	}
> -	return insert_smmu_master(smmu, master);
> +
> +	ret = insert_smmu_master(smmu, master);
> +
> +out_unlock:
> +	spin_unlock(&arm_smmu_devices_lock);

Please add a newline here.

> +	return ret;
>   }
>   
>   static int register_smmu_master(struct arm_smmu_device *smmu,
> @@ -2056,7 +2066,10 @@ static int arm_smmu_add_device(struct device *dev)
>   	} else {
>   		struct arm_smmu_master *master;
>   
> +		spin_lock(&arm_smmu_devices_lock);
>   		master = find_smmu_master(smmu, dev->of_node);
> +		spin_unlock(&arm_smmu_devices_lock);

At the moment, unlocking here is fine because we don't remove the
device. However, there is a series to support removing a device (see
[1]). So I think it would be preferable to unlock after the last use of
'cfg'.
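
A sketch of what that would look like (a hypothetical rearrangement of
the else branch quoted above, with the existing body abbreviated; not a
tested patch):

	} else {
		struct arm_smmu_master *master;

		spin_lock(&arm_smmu_devices_lock);
		master = find_smmu_master(smmu, dev->of_node);
		if (!master) {
			spin_unlock(&arm_smmu_devices_lock);
			return -ENODEV;
		}
		cfg = &master->cfg;

		/*
		 * ... the existing uses of 'cfg' stay here, still under
		 * the lock, so a concurrent remove cannot free 'master'
		 * underneath us ...
		 */

		/* Unlock only past the last use of 'cfg'. */
		spin_unlock(&arm_smmu_devices_lock);
	}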

Looking at the code, I also noticed that the error path is not correct
for at least PCI devices and we would leak memory. We also assume that
Stream ID == Requester ID. Are both of them on your radar for PCI
passthrough?

Cheers,

[1] 
https://lore.kernel.org/xen-devel/1630562763-390068-9-git-send-email-fnu.vikram@xilinx.com/

-- 
Julien Grall

Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock
Posted by Julien Grall 3 years, 1 month ago
Hi,

On 26/10/2021 17:28, Julien Grall wrote:
> On 26/10/2021 13:29, Michal Orzel wrote:
>> If a device is added to SMMUv1/v2 from DT and PCI
>> at the same time, there is a concurrent access
>> to a smmu master list. This could lead to a
>> scenario where one is looking into a list that
>> is being modified at the same time. Add a lock
>> to prevent this issue.
> 
> Did you intend to say "Hold" rather than "Add"?
> 
>>
>> Reuse the existing spinlock arm_smmu_devices_lock
>> as it is already protecting find_smmu_master.
>>
>> ipmmu-smmu and smmuv3 are not impacted by this
>> issue as there is no access or modification of
>> a global resource during add_device.
>>
>> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
>> ---
>> This patch aims for 4.16 release.
>> Benefits:
>> Remove a bug that could lead to a corruption of the
>> smmu master list, which would be very hard to debug.
> 
>  From my understanding, this corruption would only happen with 
> CONFIG_HAS_PCI. At the moment, this is a experimental feature as it is 
> not fully complete.

Actually, digging through the code, I noticed that iommu_add_device()
assumes it can only be called with platform/DT devices. Thankfully,
AFAICT, the function would return -ENXIO because the fwspec would not
be allocated for PCI devices.

So was this patch tested with an additional patch on top?

> 
>> Risks:
>> Overall the risk is low as we are touching init code
>> rather than a runtime one. In case of any issue, the
>> problem would be catched during system boot or guest
>> start.
> 
> With the PCI feature enabled, this can be called at runtime as this 
> plumbed through a PHYSDEVOP. That said, it doesn't matter here as for 
> supported code (platform/DT device), this will only happen during boot.
> 
> The patch looks straighforward, so I would not mind to have it in Xen 
> 4.16. Ian, what do you think?
> 
>> ---
>>   xen/drivers/passthrough/arm/smmu.c | 25 +++++++++++++++++++------
>>   1 file changed, 19 insertions(+), 6 deletions(-)
>>
>> diff --git a/xen/drivers/passthrough/arm/smmu.c 
>> b/xen/drivers/passthrough/arm/smmu.c
>> index c9dfc4caa0..be62a66a28 100644
>> --- a/xen/drivers/passthrough/arm/smmu.c
>> +++ b/xen/drivers/passthrough/arm/smmu.c
>> @@ -820,21 +820,25 @@ static int arm_smmu_dt_add_device_legacy(struct 
>> arm_smmu_device *smmu,
>>                        struct device *dev,
>>                        struct iommu_fwspec *fwspec)
>>   {
>> -    int i;
>> +    int i, ret;
>>       struct arm_smmu_master *master;
>>       struct device_node *dev_node = dev_get_dev_node(dev);
>> +    spin_lock(&arm_smmu_devices_lock);
>>       master = find_smmu_master(smmu, dev_node);
>>       if (master) {
>>           dev_err(dev,
>>               "rejecting multiple registrations for master device %s\n",
>>               dev_node->name);
>> -        return -EBUSY;
>> +        ret = -EBUSY;
>> +        goto out_unlock;
>>       }
>>       master = devm_kzalloc(dev, sizeof(*master), GFP_KERNEL);
>> -    if (!master)
>> -        return -ENOMEM;
>> +    if (!master) {
>> +        ret = -ENOMEM;
>> +        goto out_unlock;
>> +    }
>>       master->of_node = dev_node;
>>       /* Xen: Let Xen know that the device is protected by an SMMU */
>> @@ -846,11 +850,17 @@ static int arm_smmu_dt_add_device_legacy(struct 
>> arm_smmu_device *smmu,
>>               dev_err(dev,
>>                   "stream ID for master device %s greater than maximum 
>> allowed (%d)\n",
>>                   dev_node->name, smmu->num_mapping_groups);
>> -            return -ERANGE;
>> +            ret = -ERANGE;
>> +            goto out_unlock;
>>           }
>>           master->cfg.smendx[i] = INVALID_SMENDX;
>>       }
>> -    return insert_smmu_master(smmu, master);
>> +
>> +    ret = insert_smmu_master(smmu, master);
>> +
>> +out_unlock:
>> +    spin_unlock(&arm_smmu_devices_lock);
> 
> Please add a newline here.
> 
>> +    return ret;
>>   }
>>   static int register_smmu_master(struct arm_smmu_device *smmu,
>> @@ -2056,7 +2066,10 @@ static int arm_smmu_add_device(struct device *dev)
>>       } else {
>>           struct arm_smmu_master *master;
>> +        spin_lock(&arm_smmu_devices_lock);
>>           master = find_smmu_master(smmu, dev->of_node);
>> +        spin_unlock(&arm_smmu_devices_lock);
> 
> At the moment, unlocking here is fine because we don't remove the 
> device. However, there are a series to supporting removing a device (see 
> [1]). So I think it would be preferable to unlock after the last use of 
> 'cfg'.
> 
> Looking at the code, I also noticed that the error path is not correct 
> for at least the PCI device and we would leak memory. We also assume 
> that Stream ID == Requester ID. Are both of them in your radar for PCI 
> passthrough?
> 
> Cheers,
> 
> [1] 
> https://lore.kernel.org/xen-devel/1630562763-390068-9-git-send-email-fnu.vikram@xilinx.com/ 
> 
> 

Cheers,

-- 
Julien Grall

Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock
Posted by Michal Orzel 3 years, 1 month ago
Hi Julien,

On 26.10.2021 18:56, Julien Grall wrote:
> Hi,
> 
> On 26/10/2021 17:28, Julien Grall wrote:
>> On 26/10/2021 13:29, Michal Orzel wrote:
>>> If a device is added to SMMUv1/v2 from DT and PCI
>>> at the same time, there is a concurrent access
>>> to a smmu master list. This could lead to a
>>> scenario where one is looking into a list that
>>> is being modified at the same time. Add a lock
>>> to prevent this issue.
>>
>> Did you intend to say "Hold" rather than "Add"?
>>
Yes, this is what I meant. I will change it.

>>>
>>> Reuse the existing spinlock arm_smmu_devices_lock
>>> as it is already protecting find_smmu_master.
>>>
>>> ipmmu-smmu and smmuv3 are not impacted by this
>>> issue as there is no access or modification of
>>> a global resource during add_device.
>>>
>>> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
>>> ---
>>> This patch aims for 4.16 release.
>>> Benefits:
>>> Remove a bug that could lead to a corruption of the
>>> smmu master list, which would be very hard to debug.
>>
>>  From my understanding, this corruption would only happen with CONFIG_HAS_PCI. At the moment, this is a experimental feature as it is not fully complete.
> 
> Actually, digging through the code, I noticed that iommu_add_device() assume that it can only be called with platform/DT. Thankfully, AFAICT, the function would return -ENXIO because the fwspec would not be allocated for PCI device.
> 
> So was this patch tested with additional patch on top?
> 
The purpose of this patch is to fix the issue that is present and which you thankfully noticed.
There are still a lot of patches to be added that aim to support PCI passthrough.
The complete PCI series is tested with the SMMU and it works.
But there is no way to test this patch standalone, as iommu_add_device is not in the correct form for PCI as of now.

>>
>>> Risks:
>>> Overall the risk is low as we are touching init code
>>> rather than a runtime one. In case of any issue, the
>>> problem would be catched during system boot or guest
>>> start.
>>
>> With the PCI feature enabled, this can be called at runtime as this plumbed through a PHYSDEVOP. That said, it doesn't matter here as for supported code (platform/DT device), this will only happen during boot.
>>
>> The patch looks straighforward, so I would not mind to have it in Xen 4.16. Ian, what do you think?
>>
>>> ---
>>>   xen/drivers/passthrough/arm/smmu.c | 25 +++++++++++++++++++------
>>>   1 file changed, 19 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
>>> index c9dfc4caa0..be62a66a28 100644
>>> --- a/xen/drivers/passthrough/arm/smmu.c
>>> +++ b/xen/drivers/passthrough/arm/smmu.c
>>> @@ -820,21 +820,25 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
>>>                        struct device *dev,
>>>                        struct iommu_fwspec *fwspec)
>>>   {
>>> -    int i;
>>> +    int i, ret;
>>>       struct arm_smmu_master *master;
>>>       struct device_node *dev_node = dev_get_dev_node(dev);
>>> +    spin_lock(&arm_smmu_devices_lock);
>>>       master = find_smmu_master(smmu, dev_node);
>>>       if (master) {
>>>           dev_err(dev,
>>>               "rejecting multiple registrations for master device %s\n",
>>>               dev_node->name);
>>> -        return -EBUSY;
>>> +        ret = -EBUSY;
>>> +        goto out_unlock;
>>>       }
>>>       master = devm_kzalloc(dev, sizeof(*master), GFP_KERNEL);
>>> -    if (!master)
>>> -        return -ENOMEM;
>>> +    if (!master) {
>>> +        ret = -ENOMEM;
>>> +        goto out_unlock;
>>> +    }
>>>       master->of_node = dev_node;
>>>       /* Xen: Let Xen know that the device is protected by an SMMU */
>>> @@ -846,11 +850,17 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
>>>               dev_err(dev,
>>>                   "stream ID for master device %s greater than maximum allowed (%d)\n",
>>>                   dev_node->name, smmu->num_mapping_groups);
>>> -            return -ERANGE;
>>> +            ret = -ERANGE;
>>> +            goto out_unlock;
>>>           }
>>>           master->cfg.smendx[i] = INVALID_SMENDX;
>>>       }
>>> -    return insert_smmu_master(smmu, master);
>>> +
>>> +    ret = insert_smmu_master(smmu, master);
>>> +
>>> +out_unlock:
>>> +    spin_unlock(&arm_smmu_devices_lock);
>>
>> Please add a newline here.
>>
Ok.

>>> +    return ret;
>>>   }
>>>   static int register_smmu_master(struct arm_smmu_device *smmu,
>>> @@ -2056,7 +2066,10 @@ static int arm_smmu_add_device(struct device *dev)
>>>       } else {
>>>           struct arm_smmu_master *master;
>>> +        spin_lock(&arm_smmu_devices_lock);
>>>           master = find_smmu_master(smmu, dev->of_node);
>>> +        spin_unlock(&arm_smmu_devices_lock);
>>
>> At the moment, unlocking here is fine because we don't remove the device. However, there are a series to supporting removing a device (see [1]). So I think it would be preferable to unlock after the last use of 'cfg'.
>>
Ok. I will move unlocking to the end of this else {} block. I was not aware of the patch you are referring to.

>> Looking at the code, I also noticed that the error path is not correct for at least the PCI device and we would leak memory. We also assume that Stream ID == Requester ID. Are both of them in your radar for PCI passthrough?
>>
I agree with you. I also noticed it. However this is going to be fixed with the next patch series when Rahul gets back.

>> Cheers,
>>
>> [1] https://lore.kernel.org/xen-devel/1630562763-390068-9-git-send-email-fnu.vikram@xilinx.com/
>>
> 
> Cheers,
>
Cheers,
Michal

Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock
Posted by Julien Grall 3 years, 1 month ago

On 27/10/2021 11:41, Michal Orzel wrote:
> Hi Julien,

Hi Michal,

> On 26.10.2021 18:56, Julien Grall wrote:
>> Hi,
>>
>> On 26/10/2021 17:28, Julien Grall wrote:
>>> On 26/10/2021 13:29, Michal Orzel wrote:
>>>> If a device is added to SMMUv1/v2 from DT and PCI
>>>> at the same time, there is a concurrent access
>>>> to a smmu master list. This could lead to a
>>>> scenario where one is looking into a list that
>>>> is being modified at the same time. Add a lock
>>>> to prevent this issue.
>>>
>>> Did you intend to say "Hold" rather than "Add"?
>>>
> Yes, this is what I meant. I will change it.
> 
>>>>
>>>> Reuse the existing spinlock arm_smmu_devices_lock
>>>> as it is already protecting find_smmu_master.
>>>>
>>>> ipmmu-smmu and smmuv3 are not impacted by this
>>>> issue as there is no access or modification of
>>>> a global resource during add_device.
>>>>
>>>> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
>>>> ---
>>>> This patch aims for 4.16 release.
>>>> Benefits:
>>>> Remove a bug that could lead to a corruption of the
>>>> smmu master list, which would be very hard to debug.
>>>
>>>   From my understanding, this corruption would only happen with CONFIG_HAS_PCI. At the moment, this is a experimental feature as it is not fully complete.
>>
>> Actually, digging through the code, I noticed that iommu_add_device() assume that it can only be called with platform/DT. Thankfully, AFAICT, the function would return -ENXIO because the fwspec would not be allocated for PCI device.
>>
>> So was this patch tested with additional patch on top?
>>
> The purpose of this patch is to fix the issue that is present and which you thankfully noticed.
> There is still a lot of patches to be added that aim to support PCI passthrough.
> The complete PCI series is tested with SMMU and it works.
> But there is no chance to test this patch standalone as iommu_add_device is not in the correct form for PCI as of now.

Ok. I would suggest saying this is a latent bug, so it is clear that
the patch is more about hardening at the moment.

>>>>    xen/drivers/passthrough/arm/smmu.c | 25 +++++++++++++++++++------
>>>>    1 file changed, 19 insertions(+), 6 deletions(-)
>>>>
>>>> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
>>>> index c9dfc4caa0..be62a66a28 100644
>>>> --- a/xen/drivers/passthrough/arm/smmu.c
>>>> +++ b/xen/drivers/passthrough/arm/smmu.c
>>>> @@ -820,21 +820,25 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
>>>>                         struct device *dev,
>>>>                         struct iommu_fwspec *fwspec)
>>>>    {
>>>> -    int i;
>>>> +    int i, ret;
>>>>        struct arm_smmu_master *master;
>>>>        struct device_node *dev_node = dev_get_dev_node(dev);
>>>> +    spin_lock(&arm_smmu_devices_lock);
>>>>        master = find_smmu_master(smmu, dev_node);
>>>>        if (master) {
>>>>            dev_err(dev,
>>>>                "rejecting multiple registrations for master device %s\n",
>>>>                dev_node->name);
>>>> -        return -EBUSY;
>>>> +        ret = -EBUSY;
>>>> +        goto out_unlock;
>>>>        }
>>>>        master = devm_kzalloc(dev, sizeof(*master), GFP_KERNEL);
>>>> -    if (!master)
>>>> -        return -ENOMEM;
>>>> +    if (!master) {
>>>> +        ret = -ENOMEM;
>>>> +        goto out_unlock;
>>>> +    }
>>>>        master->of_node = dev_node;
>>>>        /* Xen: Let Xen know that the device is protected by an SMMU */
>>>> @@ -846,11 +850,17 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
>>>>                dev_err(dev,
>>>>                    "stream ID for master device %s greater than maximum allowed (%d)\n",
>>>>                    dev_node->name, smmu->num_mapping_groups);
>>>> -            return -ERANGE;
>>>> +            ret = -ERANGE;
>>>> +            goto out_unlock;
>>>>            }
>>>>            master->cfg.smendx[i] = INVALID_SMENDX;
>>>>        }
>>>> -    return insert_smmu_master(smmu, master);
>>>> +
>>>> +    ret = insert_smmu_master(smmu, master);
>>>> +
>>>> +out_unlock:
>>>> +    spin_unlock(&arm_smmu_devices_lock);
>>>
>>> Please add a newline here.
>>>
> Ok.
> 
>>>> +    return ret;
>>>>    }
>>>>    static int register_smmu_master(struct arm_smmu_device *smmu,
>>>> @@ -2056,7 +2066,10 @@ static int arm_smmu_add_device(struct device *dev)
>>>>        } else {
>>>>            struct arm_smmu_master *master;
>>>> +        spin_lock(&arm_smmu_devices_lock);
>>>>            master = find_smmu_master(smmu, dev->of_node);
>>>> +        spin_unlock(&arm_smmu_devices_lock);
>>>
>>> At the moment, unlocking here is fine because we don't remove the device. However, there are a series to supporting removing a device (see [1]). So I think it would be preferable to unlock after the last use of 'cfg'.
>>>
> Ok. I will move unlocking to the end of this else {} block. I was not aware of the patch you are referring to.

I think the end of the else is still too early. This needs to at least 
be past iommu_group_set_iommudata() because we store cfg.

Potentially, the lock wants to also encompass
arm_smmu_master_alloc_smes(). So I am wondering whether it would be
simpler to hold the lock for the whole duration of arm_smmu_add_device()
(I can see value when we will want to interlock with the remove code).
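
Sketched out, that "whole duration" variant would look roughly like the
following. Helper names are taken from this thread and the Linux
driver, error handling is trimmed, and iommu_group_alloc() stands in
for however the group is actually obtained, so treat it as an
illustration rather than a drop-in patch:

static int arm_smmu_add_device(struct device *dev)
{
	struct arm_smmu_device *smmu = find_smmu_for_device(dev);
	struct arm_smmu_master *master;
	struct iommu_group *group;
	int ret;

	if (!smmu)
		return -ENODEV;

	spin_lock(&arm_smmu_devices_lock);

	master = find_smmu_master(smmu, dev->of_node);
	if (!master) {
		ret = -ENODEV;
		goto out_unlock;
	}

	group = iommu_group_alloc();
	if (IS_ERR(group)) {
		ret = PTR_ERR(group);
		goto out_unlock;
	}

	/* Nests smmu->stream_map_lock inside arm_smmu_devices_lock. */
	ret = arm_smmu_master_alloc_smes(dev);
	if (ret)
		goto out_unlock;

	/* 'cfg' escapes here, hence keeping the lock until now. */
	iommu_group_set_iommudata(group, &master->cfg, NULL);

out_unlock:
	spin_unlock(&arm_smmu_devices_lock);

	return ret;
}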

> 
>>> Looking at the code, I also noticed that the error path is not correct for at least the PCI device and we would leak memory. We also assume that Stream ID == Requester ID. Are both of them in your radar for PCI passthrough?
>>>
> I agree with you. I also noticed it. However this is going to be fixed with the next patch series when Rahul gets back.

Good. Do you have a todo list for PCI passthrough that can be shared?

Cheers,

-- 
Julien Grall

Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock
Posted by Stefano Stabellini 3 years, 1 month ago
On Wed, 27 Oct 2021, Julien Grall wrote:
> > > > > +    return ret;
> > > > >    }
> > > > >    static int register_smmu_master(struct arm_smmu_device *smmu,
> > > > > @@ -2056,7 +2066,10 @@ static int arm_smmu_add_device(struct device
> > > > > *dev)
> > > > >        } else {
> > > > >            struct arm_smmu_master *master;
> > > > > +        spin_lock(&arm_smmu_devices_lock);
> > > > >            master = find_smmu_master(smmu, dev->of_node);
> > > > > +        spin_unlock(&arm_smmu_devices_lock);
> > > > 
> > > > At the moment, unlocking here is fine because we don't remove the
> > > > device. However, there are a series to supporting removing a device (see
> > > > [1]). So I think it would be preferable to unlock after the last use of
> > > > 'cfg'.
> > > > 
> > Ok. I will move unlocking to the end of this else {} block. I was not aware
> > of the patch you are referring to.
> 
> I think the end of the else is still too early. This needs to at least be past
> iommu_group_set_iommudata() because we store cfg.
> 
> Potentially, the lock wants to also englobe arm_smmu_master_alloc_smes(). So I
> am wondering whether it would be simpler to hold the lock for the whole
> duration of arm_smmu_add_device() (I can see value when we will want to
> interlock with the remove code).

The patch was to protect the smmu master list. From that point of view
the unlock right after find_smmu_master would be sufficient, right?

We only need to protect cfg if we are worried that the same device is
added in two different ways at the same time. Did I get it right? If so,
I would say that case should not be possible. Am I missing another
potential conflict?


I am pointing this out for two reasons:

Protecting the list is different from protecting each element from
concurrent modification of the element itself. If the latter is a
potential problem, I wonder if arm_smmu_add_device is the only function
affected?

The second reason is that extending the lock past
arm_smmu_master_alloc_smes is a bit worrying because it causes
&arm_smmu_devices_lock and smmu->stream_map_lock to nest, which wasn't
the case before.
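
Spelled out, the ordering concern is the following (a sketch; the
ordering is inferred from the discussion above, not lifted from the
tree):

/*
 * Lock ordering introduced if arm_smmu_devices_lock is extended
 * past arm_smmu_master_alloc_smes():
 *
 *   arm_smmu_devices_lock              (outer)
 *     -> smmu->stream_map_lock         (inner, taken inside
 *                                       arm_smmu_master_alloc_smes())
 *
 * A deadlock would require some path taking the locks in the
 * reverse order, i.e. code that takes arm_smmu_devices_lock while
 * already holding smmu->stream_map_lock.
 */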

I am not saying that it is a bad idea to extend the lock past
arm_smmu_master_alloc_smes -- it might be the right thing to do. But I
am merely saying that it might be best to think twice about it and/or do
that after 4.16.
Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock
Posted by Julien Grall 3 years, 1 month ago
On Thu, 28 Oct 2021, 00:14 Stefano Stabellini, <sstabellini@kernel.org>
wrote:

> On Wed, 27 Oct 2021, Julien Grall wrote:
> > > > > > +    return ret;
> > > > > >    }
> > > > > >    static int register_smmu_master(struct arm_smmu_device *smmu,
> > > > > > @@ -2056,7 +2066,10 @@ static int arm_smmu_add_device(struct
> device
> > > > > > *dev)
> > > > > >        } else {
> > > > > >            struct arm_smmu_master *master;
> > > > > > +        spin_lock(&arm_smmu_devices_lock);
> > > > > >            master = find_smmu_master(smmu, dev->of_node);
> > > > > > +        spin_unlock(&arm_smmu_devices_lock);
> > > > >
> > > > > At the moment, unlocking here is fine because we don't remove the
> > > > > device. However, there are a series to supporting removing a
> device (see
> > > > > [1]). So I think it would be preferable to unlock after the last
> use of
> > > > > 'cfg'.
> > > > >
> > > Ok. I will move unlocking to the end of this else {} block. I was not
> aware
> > > of the patch you are referring to.
> >
> > I think the end of the else is still too early. This needs to at least
> be past
> > iommu_group_set_iommudata() because we store cfg.
> >
> > Potentially, the lock wants to also englobe
> arm_smmu_master_alloc_smes(). So I
> > am wondering whether it would be simpler to hold the lock for the whole
> > duration of arm_smmu_add_device() (I can see value when we will want to
> > interlock with the remove code).
>
> The patch was to protect the smmu master list. From that point of view
> the unlock right after find_smmu_master would be sufficient, right?
>

Yes. However this is not fixing all the problems (see below).


> We only need to protect cfg if we are worried that the same device is
> added in two different ways at the same time. Did I get it right? If so,
> I would say that that case should not be possible? Am I missing another
> potential conflict?
>

It should not be possible to add the same device twice. The problem is more
when we are going to remove the device. In this case, "master" may
disappear at any point.

The support for removing a device is not yet implemented in the tree.
But there is already a patch on the ML. So I think it would be
shortsighted to move the lock just to solve concurrent access to the list.


>
> I am pointing this out for two reasons:
>
> Protecting the list is different from protecting each element from
> concurrent modification of the element itself. If the latter is a
> potential problem, I wonder if arm_smmu_add_device is the only function
> affected?
>

I had a brief look at the code and couldn't find any other places where
this may be an issue.


> The second reason is that extending the lock past
> arm_smmu_master_alloc_smes is a bit worrying because it causes
> &arm_smmu_devices_lock and smmu->stream_map_lock to nest, which wasn't
> the case before.
>

Nested locks are common. I don't believe there would be a problem here with
this one.


> I am not saying that it is a bad idea to extend the lock past
> arm_smmu_master_alloc_smes -- it might be the right thing to do.


I don't usually suggest locking changes blindly ;).

> But I am merely saying that it might be best to think twice about it
> and/or do that after 4.16.


To be honest, this patch is not useful the callback to register a device in
the IOMMU subsystem. So if you are concerned with the my suggested locking
then, I am afraid the current patch is a no-go on my side for 4.16.

That said we can work towards a new locking approach for 4.17. However, I
would want to have a proposal from your side or at least some details on
why the suggested locking is not suitable.

Cheers,
Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock
Posted by Julien Grall 3 years, 1 month ago
On Thu, 28 Oct 2021, 00:43 Julien Grall, <julien.grall.oss@gmail.com> wrote:

>
>
> On Thu, 28 Oct 2021, 00:14 Stefano Stabellini, <sstabellini@kernel.org>
> wrote:
>
>> On Wed, 27 Oct 2021, Julien Grall wrote:
>> > > > > > +    return ret;
>> > > > > >    }
>> > > > > >    static int register_smmu_master(struct arm_smmu_device *smmu,
>> > > > > > @@ -2056,7 +2066,10 @@ static int arm_smmu_add_device(struct
>> device
>> > > > > > *dev)
>> > > > > >        } else {
>> > > > > >            struct arm_smmu_master *master;
>> > > > > > +        spin_lock(&arm_smmu_devices_lock);
>> > > > > >            master = find_smmu_master(smmu, dev->of_node);
>> > > > > > +        spin_unlock(&arm_smmu_devices_lock);
>> > > > >
>> > > > > At the moment, unlocking here is fine because we don't remove the
>> > > > > device. However, there are a series to supporting removing a
>> device (see
>> > > > > [1]). So I think it would be preferable to unlock after the last
>> use of
>> > > > > 'cfg'.
>> > > > >
>> > > Ok. I will move unlocking to the end of this else {} block. I was not
>> aware
>> > > of the patch you are referring to.
>> >
>> > I think the end of the else is still too early. This needs to at least
>> be past
>> > iommu_group_set_iommudata() because we store cfg.
>> >
>> > Potentially, the lock wants to also englobe
>> arm_smmu_master_alloc_smes(). So I
>> > am wondering whether it would be simpler to hold the lock for the whole
>> > duration of arm_smmu_add_device() (I can see value when we will want to
>> > interlock with the remove code).
>>
>> The patch was to protect the smmu master list. From that point of view
>> the unlock right after find_smmu_master would be sufficient, right?
>>
>
> Yes. However this is not fixing all the problems (see below).
>
>
>> We only need to protect cfg if we are worried that the same device is
>> added in two different ways at the same time. Did I get it right? If so,
>> I would say that that case should not be possible? Am I missing another
>> potential conflict?
>>
>
> It should not be possible to add the same device twice. The problem is
> more when we are going to remove the device. In this case, "master" may
> disappear at any point.
>
> The support for removing device is not yet implemented in the tree. But
> there is already a patch on the ML. So I think it would be shortsighted to
> only move the lock to just solve concurrent access to the list.
>
>
>>
>> I am pointing this out for two reasons:
>>
>> Protecting the list is different from protecting each element from
>> concurrent modification of the element itself. If the latter is a
>> potential problem, I wonder if arm_smmu_add_device is the only function
>> affected?
>>
>
> I had a brief looked at the code and couldn't find any other places where
> this may be an issue.
>
>
>> The second reason is that extending the lock past
>> arm_smmu_master_alloc_smes is a bit worrying because it causes
>> &arm_smmu_devices_lock and smmu->stream_map_lock to nest, which wasn't
>> the case before.
>>
>
> Nested locks are common. I don't believe there would be a problem here
> with this one.
>
>
>> I am not saying that it is a bad idea to extend the lock past
>> arm_smmu_master_alloc_smes -- it might be the right thing to do.
>
>
> I don't usually suggest locking changes blindly ;).
>
>> But I am merely saying that it might be best to think twice about it
>> and/or do that after 4.16.
>
>
> To be honest, this patch is not useful the callback to register a device
> in the IOMMU subsystem. So if you are concerned with
>

The sentence makes no sense, sorry. I meant the add callback doesn't support
PCI devices. So the locking is a latent issue at the moment.

> the my suggested locking then, I am afraid the current patch is a no-go on
> my side for 4.16.
>
> That said we can work towards a new locking approach for 4.17. However, I
> would want to have a proposal from your side or at least some details on
> why the suggested locking is not suitable.
>
> Cheers,
>
Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock
Posted by Stefano Stabellini 3 years, 1 month ago
On Thu, 28 Oct 2021, Julien Grall wrote:
> On Thu, 28 Oct 2021, 00:14 Stefano Stabellini, <sstabellini@kernel.org> wrote:
>       On Wed, 27 Oct 2021, Julien Grall wrote:
>       > > > > > +    return ret;
>       > > > > >    }
>       > > > > >    static int register_smmu_master(struct arm_smmu_device *smmu,
>       > > > > > @@ -2056,7 +2066,10 @@ static int arm_smmu_add_device(struct device
>       > > > > > *dev)
>       > > > > >        } else {
>       > > > > >            struct arm_smmu_master *master;
>       > > > > > +        spin_lock(&arm_smmu_devices_lock);
>       > > > > >            master = find_smmu_master(smmu, dev->of_node);
>       > > > > > +        spin_unlock(&arm_smmu_devices_lock);
>       > > > >
>       > > > > At the moment, unlocking here is fine because we don't remove the
>       > > > > device. However, there are a series to supporting removing a device (see
>       > > > > [1]). So I think it would be preferable to unlock after the last use of
>       > > > > 'cfg'.
>       > > > >
>       > > Ok. I will move unlocking to the end of this else {} block. I was not aware
>       > > of the patch you are referring to.
>       >
>       > I think the end of the else is still too early. This needs to at least be past
>       > iommu_group_set_iommudata() because we store cfg.
>       >
>       > Potentially, the lock wants to also englobe arm_smmu_master_alloc_smes(). So I
>       > am wondering whether it would be simpler to hold the lock for the whole
>       > duration of arm_smmu_add_device() (I can see value when we will want to
>       > interlock with the remove code).
> 
>       The patch was to protect the smmu master list. From that point of view
>       the unlock right after find_smmu_master would be sufficient, right?
> 
> 
> Yes. However this is not fixing all the problems (see below).
> 
> 
>       We only need to protect cfg if we are worried that the same device is
>       added in two different ways at the same time. Did I get it right? If so,
>       I would say that that case should not be possible? Am I missing another
>       potential conflict?
> 
> 
> It should not be possible to add the same device twice. The problem is more when we are going to remove the device. In this case, "master"
> may disappear at any point.
> 
> The support for removing device is not yet implemented in the tree. But there is already a patch on the ML. So I think it would be
> shortsighted to only move the lock to just solve concurrent access to the list.
 
That makes sense now: the other source of conflict is concurrent add and
remove of the same device. Sorry it wasn't clear to me before.
 
 
>       I am pointing this out for two reasons:
> 
>       Protecting the list is different from protecting each element from
>       concurrent modification of the element itself. If the latter is a
>       potential problem, I wonder if arm_smmu_add_device is the only function
>       affected?
> 
> 
> I had a brief looked at the code and couldn't find any other places where this may be an issue.
> 
> 
>       The second reason is that extending the lock past
>       arm_smmu_master_alloc_smes is a bit worrying because it causes
>       &arm_smmu_devices_lock and smmu->stream_map_lock to nest, which wasn't
>       the case before.
> 
> 
> Nested locks are common. I don't believe there would be a problem here with this one.
> 
> 
>       I am not saying that it is a bad idea to extend the lock past
>       arm_smmu_master_alloc_smes -- it might be the right thing to do.
> 
> 
> I don't usually suggest locking changes blindly ;).
> 
>       But I
> 
>       am merely saying that it might be best to think twice about it.
> 
>       and/or do
>       that after 4.16.
> 
> 
> To be honest, this patch is not useful the callback to register a
> device in the IOMMU subsystem. The sentence makes no sense sorry. I
> meant the add callback doesn't support PCI devices. So the locking is
> a latent issue at the moment.
>
> So if you are concerned with the my suggested locking then, I am
> afraid the current patch is a no-go on my side for 4.16.

I was totally fine with it aside from the last suggestion of extending
the lock until the end of the function, because until then the changes
were obviously correct to me.

I didn't understand the reason why we needed to extend the locked
region; now I understand it. Also, lock nesting is one of those things
that is relatively common but should always get a second check to make
sure it is correct. Specifically, we need to check that no function
holding smmu->stream_map_lock calls a function that takes
&arm_smmu_devices_lock. It is not very difficult but I haven't done
this check myself.

The other thing that is not clear to me is whether we would also need
to protect places where we use (not allocate) masters and/or cfg, e.g.
arm_smmu_attach_dev, arm_smmu_domain_add_master.


> That said we can work towards a new locking approach for 4.17.
> However, I would want to have a proposal from your side or at least
> some details on why the suggested locking is not suitable.
 
The suggested locking approach up until the last suggestion looks
totally fine to me. The last suggestion is harder to judge because the
PCI removal hook is not there yet, so I am having trouble seeing
exactly what needs to be protected.
Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock
Posted by Julien Grall 3 years, 1 month ago
Hi Stefano,

First, apologies for sending the previous e-mails in HTML (thanks for
pointing that out!).

On 28/10/2021 01:20, Stefano Stabellini wrote:
> On Thu, 28 Oct 2021, Julien Grall wrote:
>> On Thu, 28 Oct 2021, 00:14 Stefano Stabellini, <sstabellini@kernel.org> wrote:
>>        On Wed, 27 Oct 2021, Julien Grall wrote:
>>        > > > > > +    return ret;
>>        > > > > >    }
>>        > > > > >    static int register_smmu_master(struct arm_smmu_device *smmu,
>>        > > > > > @@ -2056,7 +2066,10 @@ static int arm_smmu_add_device(struct device
>>        > > > > > *dev)
>>        > > > > >        } else {
>>        > > > > >            struct arm_smmu_master *master;
>>        > > > > > +        spin_lock(&arm_smmu_devices_lock);
>>        > > > > >            master = find_smmu_master(smmu, dev->of_node);
>>        > > > > > +        spin_unlock(&arm_smmu_devices_lock);
>>        > > > >
>>        > > > > At the moment, unlocking here is fine because we don't remove the
>>        > > > > device. However, there are a series to supporting removing a device (see
>>        > > > > [1]). So I think it would be preferable to unlock after the last use of
>>        > > > > 'cfg'.
>>        > > > >
>>        > > Ok. I will move unlocking to the end of this else {} block. I was not aware
>>        > > of the patch you are referring to.
>>        >
>>        > I think the end of the else is still too early. This needs to at least be past
>>        > iommu_group_set_iommudata() because we store cfg.
>>        >
>>        > Potentially, the lock wants to also englobe arm_smmu_master_alloc_smes(). So I
>>        > am wondering whether it would be simpler to hold the lock for the whole
>>        > duration of arm_smmu_add_device() (I can see value when we will want to
>>        > interlock with the remove code).
>>
>>        The patch was to protect the smmu master list. From that point of view
>>        the unlock right after find_smmu_master would be sufficient, right?
>>
>>
>> Yes. However this is not fixing all the problems (see below).
>>
>>
>>        We only need to protect cfg if we are worried that the same device is
>>        added in two different ways at the same time. Did I get it right? If so,
>>        I would say that that case should not be possible? Am I missing another
>>        potential conflict?
>>
>>
>> It should not be possible to add the same device twice. The problem is more when we are going to remove the device. In this case, "master"
>> may disappear at any point.
>>
>> The support for removing device is not yet implemented in the tree. But there is already a patch on the ML. So I think it would be
>> shortsighted to only move the lock to just solve concurrent access to the list.
>   
> That makes sense now: the other source of conflict is concurrent add and
> remove of the same device. Sorry it wasn't clear to me before.
At the moment, we are relying on the upper layer (e.g. PCI or DT 
subsystem) to prevent concurrent add/remove/assignment. The trouble is 
we don't have a common lock between PCI and DT.

One possibility would be to add a common lock in the upper layer, but
it feels to me this is a bit fragile and may also require a longer
locking section than necessary.

That said, add/remove/assignment operations are meant to be rare. So
this could be an option. This would also have the advantage of covering
all the IOMMUs.
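
That common upper-layer lock would look something like this (a sketch;
iommu_add_device() is the common entry point mentioned earlier in the
thread, while the lock itself and the ops accessor/callback shape are
assumptions, not code from the tree):

/* Hypothetical common lock in the IOMMU core, covering all drivers. */
static DEFINE_SPINLOCK(iommu_devices_lock);

int iommu_add_device(struct device *dev)
{
	const struct iommu_ops *ops = iommu_get_ops();
	int ret;

	if (!ops || !ops->add_device)
		return -EINVAL;

	spin_lock(&iommu_devices_lock);
	/* Serialises PCI and DT callers across every IOMMU driver. */
	ret = ops->add_device(dev);
	spin_unlock(&iommu_devices_lock);

	return ret;
}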

>   
>   
>>        I am pointing this out for two reasons:
>>
>>        Protecting the list is different from protecting each element from
>>        concurrent modification of the element itself. If the latter is a
>>        potential problem, I wonder if arm_smmu_add_device is the only function
>>        affected?
>>
>>
>> I had a brief looked at the code and couldn't find any other places where this may be an issue.
>>
>>
>>        The second reason is that extending the lock past
>>        arm_smmu_master_alloc_smes is a bit worrying because it causes
>>        &arm_smmu_devices_lock and smmu->stream_map_lock to nest, which wasn't
>>        the case before.
>>
>>
>> Nested locks are common. I don't believe there would be a problem here with this one.
>>
>>
>>        I am not saying that it is a bad idea to extend the lock past
>>        arm_smmu_master_alloc_smes -- it might be the right thing to do.
>>
>>
>> I don't usually suggest locking changes blindly ;).
>>
>>        But I
>>
>>        am merely saying that it might be best to think twice about it.
>>
>>        and/or do
>>        that after 4.16.
>>
>>

[...]

> The other thing that is not clear to me is whether we would need also to
> protect places where we use (not allocate) masters and/or cfg, e.g.
> arm_smmu_attach_dev, arm_smmu_domain_add_master.

I think both should be done with the lock held. Now the question is:
will other IOMMU drivers require the same locking?

If yes, then maybe that locking should be in the common code.

>> That said we can work towards a new locking approach for 4.17.
>> However, I would want to have a proposal from your side or at least
>> some details on why the suggested locking is not suitable.
>   
> The suggested locking approach up until the last suggestion looks
> totally fine to me. The last suggestion is a bit harder to tell because
> the PCI removal hook is not there yet, so I am having troubles seeing
> exactly what needs to be protected.

The PCI removal hook is the same as the platform device one. There is
already a patch on the ML (see [1]) for that.

We have two interlocking problems to resolve:
   1) Concurrent requests between the PCI and platform/DT subsystems
   2) Removal vs add vs (re)assign

The two approaches I can think of are:

Approach A:
   - The driver is responsible to protect against 1)
   - Each subsystem (DT and PCI) are responsible for 2)

Approach B:
   The driver is responsible to protect for 1) 2).

From my understanding, the proposed patch from Michal is following
approach A, whilst my proposal is going towards approach B.
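
As a sketch, approach B would mean every externally visible operation
of the driver serialises on the same lock, so PCI vs DT callers and add
vs remove vs (re)assign all meet at one point. The unlocked worker
functions below are hypothetical, named only to show the pattern:

static int arm_smmu_add_device(struct device *dev)
{
	int ret;

	spin_lock(&arm_smmu_devices_lock);
	ret = arm_smmu_do_add_device(dev);	/* unlocked worker */
	spin_unlock(&arm_smmu_devices_lock);

	return ret;
}

static int arm_smmu_remove_device(struct device *dev)
{
	int ret;

	spin_lock(&arm_smmu_devices_lock);
	ret = arm_smmu_do_remove_device(dev);	/* unlocked worker */
	spin_unlock(&arm_smmu_devices_lock);

	return ret;
}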

I am open to using approach A; however, I think this needs to be
documented, as the lock to use will depend on whether the device is a
PCI device or not.

Cheers,

[1] <1630562763-390068-9-git-send-email-fnu.vikram@xilinx.com>

-- 
Julien Grall

Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock
Posted by Michal Orzel 3 years, 1 month ago
Hi Julien,

On 28.10.2021 12:05, Julien Grall wrote:
> Hi Stefano,
> 
> First apologies for sending the previous e-mails in HTML (thanks for pointing that out!).
> 
> On 28/10/2021 01:20, Stefano Stabellini wrote:
>> On Thu, 28 Oct 2021, Julien Grall wrote:
>>> On Thu, 28 Oct 2021, 00:14 Stefano Stabellini, <sstabellini@kernel.org> wrote:
>>>        On Wed, 27 Oct 2021, Julien Grall wrote:
>>>        > > > > > +    return ret;
>>>        > > > > >    }
>>>        > > > > >    static int register_smmu_master(struct arm_smmu_device *smmu,
>>>        > > > > > @@ -2056,7 +2066,10 @@ static int arm_smmu_add_device(struct device
>>>        > > > > > *dev)
>>>        > > > > >        } else {
>>>        > > > > >            struct arm_smmu_master *master;
>>>        > > > > > +        spin_lock(&arm_smmu_devices_lock);
>>>        > > > > >            master = find_smmu_master(smmu, dev->of_node);
>>>        > > > > > +        spin_unlock(&arm_smmu_devices_lock);
>>>        > > > >
>>>        > > > > At the moment, unlocking here is fine because we don't remove the
>>>        > > > > device. However, there are a series to supporting removing a device (see
>>>        > > > > [1]). So I think it would be preferable to unlock after the last use of
>>>        > > > > 'cfg'.
>>>        > > > >
>>>        > > Ok. I will move unlocking to the end of this else {} block. I was not aware
>>>        > > of the patch you are referring to.
>>>        >
>>>        > I think the end of the else is still too early. This needs to at least be past
>>>        > iommu_group_set_iommudata() because we store cfg.
>>>        >
>>>        > Potentially, the lock wants to also englobe arm_smmu_master_alloc_smes(). So I
>>>        > am wondering whether it would be simpler to hold the lock for the whole
>>>        > duration of arm_smmu_add_device() (I can see value when we will want to
>>>        > interlock with the remove code).
>>>
>>>        The patch was to protect the smmu master list. From that point of view
>>>        the unlock right after find_smmu_master would be sufficient, right?
>>>
>>>
>>> Yes. However this is not fixing all the problems (see below).
>>>
>>>
>>>        We only need to protect cfg if we are worried that the same device is
>>>        added in two different ways at the same time. Did I get it right? If so,
>>>        I would say that that case should not be possible? Am I missing another
>>>        potential conflict?
>>>
>>>
>>> It should not be possible to add the same device twice. The problem is more when we are going to remove the device. In this case, "master"
>>> may disappear at any point.
>>>
>>> The support for removing device is not yet implemented in the tree. But there is already a patch on the ML. So I think it would be
>>> shortsighted to only move the lock to just solve concurrent access to the list.
>>   That makes sense now: the other source of conflict is concurrent add and
>> remove of the same device. Sorry it wasn't clear to me before.
> At the moment, we are relying on the upper layer (e.g. PCI or DT subsystem) to prevent concurrent add/remove/assignment. The trouble is we don't have a common lock between PCI and DT.
> 
> One possibility would be to add a common in the uper layer, but it feels to me this is a bit fragile and may also require longer locking section than necessary.
> 
> That said, add/remove/assignment operations are meant to be rare. So this is could be an option. This would also have the advantage to cover all the IOMMUs.
> 
>>    
>>>        I am pointing this out for two reasons:
>>>
>>>        Protecting the list is different from protecting each element from
>>>        concurrent modification of the element itself. If the latter is a
>>>        potential problem, I wonder if arm_smmu_add_device is the only function
>>>        affected?
>>>
>>>
>>> I had a brief looked at the code and couldn't find any other places where this may be an issue.
>>>
>>>
>>>        The second reason is that extending the lock past
>>>        arm_smmu_master_alloc_smes is a bit worrying because it causes
>>>        &arm_smmu_devices_lock and smmu->stream_map_lock to nest, which wasn't
>>>        the case before.
>>>
>>>
>>> Nested locks are common. I don't believe there would be a problem here with this one.
>>>
>>>
>>>        I am not saying that it is a bad idea to extend the lock past
>>>        arm_smmu_master_alloc_smes -- it might be the right thing to do.
>>>
>>>
>>> I don't usually suggest locking changes blindly ;).
>>>
>>>        But I
>>>
>>>        am merely saying that it might be best to think twice about it.
>>>
>>>        and/or do
>>>        that after 4.16.
>>>
>>>
> 
> [...]
> 
>> The other thing that is not clear to me is whether we would need also to
>> protect places where we use (not allocate) masters and/or cfg, e.g.
>> arm_smmu_attach_dev, arm_smmu_domain_add_master.
> 
> I think both should be with the lock. Now the question is will other IOMMUs driver requires the same locking?
> 
> If yes, then maybe that locking should be in the common code.
> 
>>> That said we can work towards a new locking approach for 4.17.
>>> However, I would want to have a proposal from your side or at least
>>> some details on why the suggested locking is not suitable.
>>   The suggested locking approach up until the last suggestion looks
>> totally fine to me. The last suggestion is a bit harder to tell because
>> the PCI removal hook is not there yet, so I am having troubles seeing
>> exactly what needs to be protected.
> 
> The PCI removal hook is the same as the platform device one. There are already a patch on the ML (see [1]) for that.
> 
> We have two interlocking problem to resolve:
>   1) Concurrent request between PCI and platform/DT subsystem
>   2) Removal vs add vs (re)assign
> 
> The two approaches I can think of are:
> 
> Approach A:
>   - The driver is responsible to protect against 1)
>   - Each subsystem (DT and PCI) are responsible for 2)
> 
> Approach B:
>   The driver is responsible to protect for 1) 2).
> 
> From my understanding, the proposed patch for Michal is following approach A whilst my proposal is going towards approach B.
> 
> I am open to use approach A, however I think this needs to be documented as the lock to use will depend on whether the device is a PCI device or not.
> 

The purpose of this patch is to fix the issue that is present in 4.16.
The patch adding support for removal you are referring to:
-is in RFC state
-is not meant for 4.16
-will need to be modified anyway because of the future PCI passthrough patches that are going to modify lots of stuff

That being said, the bug we want to fix touches only point 1). And in fact my patch is solving this issue.
So I think we should focus on 4.16 and fixing bugs for it without thinking of future patches/modifications.
I agree that the locking behaviour will change as soon as the patch you are referring to is merged.
However, the PCI passthrough patches are going to modify this code anyway, so all in all the future modifications will be needed.

> Cheers,
> 
> [1] <1630562763-390068-9-git-send-email-fnu.vikram@xilinx.com>
> 

Cheers,
Michal

Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock
Posted by Julien Grall 3 years, 1 month ago

On 28/10/2021 13:15, Michal Orzel wrote:
> Hi Julien,

Hi Michal,

> On 28.10.2021 12:05, Julien Grall wrote:
> The purpose of this patch is to fix the issue that is present in 4.16.

I think this is a latent bug (see more below).

> The patch adding support for removal you are reffering to:
> -is in RFC state
> -is not meant for 4.16
> -will need to be modified anyway because of the future PCI passthrough patches that are going to modify lots of stuff
> 
> That being said, the bug we want to fix touches only point 1). And in fact my patch is solving this issue.
> So I think we should focus on 4.16 and fixing bugs for it without thinking of future patches/modifications.

I would have agreed with your reasoning if this bug could be triggered
with Xen 4.16 out of the box. AFAIK, this is not the case because PCI
devices cannot be registered with the SMMU driver.

> I agree that the locking behaviour will change as soon as the patch you are reffering to will be merged.
> However, the PCI passthrough patches are going to modify this code anyway, so all in all the future modifications will be needed.

Right. PCI passthrough is not going to work in 4.16 whether this patch 
is merged or not. We are past the code freeze and as you said the code 
(and potentially the locking) is going to be reworked for PCI passthrough.

So I think it is a bad idea to rush this patch into 4.16. Instead, we
should focus on getting consistent locking for 4.17 that would cover
the removal part and all concurrent PCI and DT calls.

To be clear, I am not expecting you to implement both parts. I am
fine if you focus only on the latter. But first, we need to agree on the
approach (see my 2 proposals sent earlier).

Regarding the patch itself, I will be happy to queue it when it is
fully ready and merge it after the tree re-opens for 4.17.

Cheers,

-- 
Julien Grall

Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock
Posted by Ian Jackson 3 years, 1 month ago
Julien Grall writes ("Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock"):
> Right. PCI passthrough is not going to work in 4.16 whether this patch 
> is merged or not. We are past the code freeze and as you said the code 
> (and potentially the locking) is going to be reworked for PCI passthrough.
> 
> So, I think this is a bad idea to rush this patch in 4.16. Instead, we 
> should focus on getting a consistent locking for 4.17 that would cover 
> the removal part and all PCI and DT concurrent call.

Thanks for this clear communication.  As the release manager I very
much appreciate it.  Michal, do you disagree with Julien's analysis?

If not, I would like to take this patch off my list of stuff I have on
my radar for 4.16.

Thanks,
Ian.

Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock
Posted by Stefano Stabellini 3 years, 1 month ago
On Thu, 28 Oct 2021, Julien Grall wrote:
> Hi Stefano,
> 
> First apologies for sending the previous e-mails in HTML (thanks for pointing
> that out!).
> 
> On 28/10/2021 01:20, Stefano Stabellini wrote:
> > On Thu, 28 Oct 2021, Julien Grall wrote:
> > > On Thu, 28 Oct 2021, 00:14 Stefano Stabellini, <sstabellini@kernel.org>
> > > wrote:
> > >        On Wed, 27 Oct 2021, Julien Grall wrote:
> > >        > > > > > +    return ret;
> > >        > > > > >    }
> > >        > > > > >    static int register_smmu_master(struct arm_smmu_device
> > > *smmu,
> > >        > > > > > @@ -2056,7 +2066,10 @@ static int
> > > arm_smmu_add_device(struct device
> > >        > > > > > *dev)
> > >        > > > > >        } else {
> > >        > > > > >            struct arm_smmu_master *master;
> > >        > > > > > +        spin_lock(&arm_smmu_devices_lock);
> > >        > > > > >            master = find_smmu_master(smmu, dev->of_node);
> > >        > > > > > +        spin_unlock(&arm_smmu_devices_lock);
> > >        > > > >
> > >        > > > > At the moment, unlocking here is fine because we don't
> > > remove the
> > >        > > > > device. However, there are a series to supporting removing
> > > a device (see
> > >        > > > > [1]). So I think it would be preferable to unlock after the
> > > last use of
> > >        > > > > 'cfg'.
> > >        > > > >
> > >        > > Ok. I will move unlocking to the end of this else {} block. I
> > > was not aware
> > >        > > of the patch you are referring to.
> > >        >
> > >        > I think the end of the else is still too early. This needs to at
> > > least be past
> > >        > iommu_group_set_iommudata() because we store cfg.
> > >        >
> > >        > Potentially, the lock wants to also cover
> > > arm_smmu_master_alloc_smes(). So I
> > >        > am wondering whether it would be simpler to hold the lock for the
> > > whole
> > >        > duration of arm_smmu_add_device() (I can see value when we will
> > > want to
> > >        > interlock with the remove code).
> > > 
> > >        The patch was to protect the smmu master list. From that point of
> > > view
> > >        the unlock right after find_smmu_master would be sufficient, right?
> > > 
> > > 
> > > Yes. However this is not fixing all the problems (see below).
> > > 
> > > 
> > >        We only need to protect cfg if we are worried that the same device
> > > is
> > >        added in two different ways at the same time. Did I get it right?
> > > If so,
> > >        I would say that that case should not be possible? Am I missing
> > > another
> > >        potential conflict?
> > > 
> > > 
> > > It should not be possible to add the same device twice. The problem is
> > > more when we are going to remove the device. In this case, "master"
> > > may disappear at any point.
> > > 
> > > The support for removing device is not yet implemented in the tree. But
> > > there is already a patch on the ML. So I think it would be
> > > shortsighted to only move the lock to just solve concurrent access to the
> > > list.
> >   That makes sense now: the other source of conflict is concurrent add and
> > remove of the same device. Sorry it wasn't clear to me before.
> At the moment, we are relying on the upper layer (e.g. PCI or DT subsystem) to
> prevent concurrent add/remove/assignment. The trouble is we don't have a
> common lock between PCI and DT.
> 
> One possibility would be to add a common lock in the upper layer, but it feels to me
> this is a bit fragile and may also require a longer locking section than
> necessary.
> 
> That said, add/remove/assignment operations are meant to be rare, so this
> could be an option. It would also have the advantage of covering all the
> IOMMUs.
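
A minimal sketch of that upper-layer option, for concreteness (the lock
name and the simplified hook signature are illustrative, not from the
thread):

/* Hypothetical: one common lock in the generic IOMMU layer, so the PCI
 * and DT subsystems can never invoke the driver's add_device hook
 * concurrently. */
static DEFINE_SPINLOCK(iommu_devices_lock);

int iommu_add_device(struct device *dev)
{
    const struct iommu_ops *ops = iommu_get_ops();
    int rc;

    spin_lock(&iommu_devices_lock);
    rc = ops->add_device(0, dev);    /* devfn simplified to 0 here */
    spin_unlock(&iommu_devices_lock);

    return rc;
}

The trade-off noted above is visible in the sketch: the critical section
spans the whole driver hook, which is longer than strictly necessary.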
> 
> >     
> > >        I am pointing this out for two reasons:
> > > 
> > >        Protecting the list is different from protecting each element from
> > >        concurrent modification of the element itself. If the latter is a
> > >        potential problem, I wonder if arm_smmu_add_device is the only
> > > function
> > >        affected?
> > > 
> > > 
> > > I had a brief look at the code and couldn't find any other places where
> > > this may be an issue.
> > > 
> > > 
> > >        The second reason is that extending the lock past
> > >        arm_smmu_master_alloc_smes is a bit worrying because it causes
> > >        &arm_smmu_devices_lock and smmu->stream_map_lock to nest, which
> > > wasn't
> > >        the case before.
> > > 
> > > 
> > > Nested locks are common. I don't believe there would be a problem here
> > > with this one.
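
For illustration, the nesting being discussed is safe as long as the
acquisition order stays fixed; a sketch, assuming arm_smmu_devices_lock
is always the outer lock:

/* Assumed order if the lock is extended past arm_smmu_master_alloc_smes():
 * arm_smmu_devices_lock is always taken first, smmu->stream_map_lock
 * second, and never the reverse, so the nesting cannot produce an ABBA
 * deadlock. */
spin_lock(&arm_smmu_devices_lock);      /* outer: protects the master list */

spin_lock(&smmu->stream_map_lock);      /* inner: taken inside alloc_smes() */
/* ... allocate stream-mapping entries ... */
spin_unlock(&smmu->stream_map_lock);

spin_unlock(&arm_smmu_devices_lock);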
> > > 
> > > 
> > >        I am not saying that it is a bad idea to extend the lock past
> > >        arm_smmu_master_alloc_smes -- it might be the right thing to do.
> > > 
> > > 
> > > I don't usually suggest locking changes blindly ;).
> > > 
> > >        But I
> > > 
> > >        am merely saying that it might be best to think twice about it.
> > > 
> > >        and/or do
> > >        that after 4.16.
> > > 
> > > 
> 
> [...]
> 
> > The other thing that is not clear to me is whether we would need also to
> > protect places where we use (not allocate) masters and/or cfg, e.g.
> > arm_smmu_attach_dev, arm_smmu_domain_add_master.
> 
> I think both should be done with the lock held. Now the question is: will other
> IOMMU drivers require the same locking?
> 
> If yes, then maybe that locking should be in the common code.
> 
> > > That said we can work towards a new locking approach for 4.17.
> > > However, I would want to have a proposal from your side or at least
> > > some details on why the suggested locking is not suitable.
> >   The suggested locking approach up until the last suggestion looks
> > totally fine to me. The last suggestion is a bit harder to tell because
> > the PCI removal hook is not there yet, so I am having trouble seeing
> > exactly what needs to be protected.
> 
> The PCI removal hook is the same as the platform device one. There is already
> a patch on the ML (see [1]) for that.
> 
> We have two interlocking problems to resolve:
>   1) Concurrent requests between the PCI and platform/DT subsystems
>   2) Removal vs add vs (re)assign
> 
> The two approaches I can think of are:
> 
> Approach A:
>   - The driver is responsible for protecting against 1)
>   - Each subsystem (DT and PCI) is responsible for 2)
> 
> Approach B:
>   The driver is responsible for protecting against both 1) and 2).
> 
> From my understanding, the proposed patch from Michal follows approach A
> whilst my proposal goes towards approach B.
> 
> I am open to using approach A, however I think this needs to be documented, as
> the lock to use will depend on whether the device is a PCI device or not.
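
For illustration only, Approach B might take roughly this shape in the
driver; the __arm_smmu_do_* helpers are hypothetical stand-ins for the
real bodies:

/* Hypothetical sketch of Approach B: the driver serialises add, remove
 * and (re)assign itself with a single lock, so the PCI and DT callers
 * need no common lock of their own. */
static int arm_smmu_add_device(struct device *dev)
{
    int rc;

    spin_lock(&arm_smmu_devices_lock);
    rc = __arm_smmu_do_add_device(dev);      /* hypothetical helper */
    spin_unlock(&arm_smmu_devices_lock);

    return rc;
}

static int arm_smmu_remove_device(struct device *dev)
{
    int rc;

    spin_lock(&arm_smmu_devices_lock);
    rc = __arm_smmu_do_remove_device(dev);   /* hypothetical helper */
    spin_unlock(&arm_smmu_devices_lock);

    return rc;
}

Under Approach A, only the list manipulation inside these hooks would be
locked by the driver, and each subsystem would serialise its own
add/remove/assign sequences.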

Thanks for the explanation, now everything is a lot clearer. I don't
have feedback on approach A vs. B -- it looks like both could work well.

In regards to this specific patch and also the conversation about 4.16
or 4.17: I think it would be fine to take this patch in 4.16 in its
current form. Although it is not required because PCI passthrough is
not going to be complete in 4.16 anyway, I like that this patch makes
the code consistent in terms of protection of rbtree accesses.  With
this patch the arm_smmu_master rbtree is consistently protected from
concurrent accesses. Without this patch, it is sometimes protected and
sometimes not, which is not great.

So I think that is something that could be good to have in 4.16. But
like you said, the patch is not strictly required so it is fine either
way.

Other changes on top of it, e.g. a complete implementation of Approach A
or Approach B, I think it would be best to target 4.17 so that we can
evaluate them together with the other outstanding PCI patches. I think
it would make the review a lot easier (at least for me).

Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock
Posted by Ian Jackson 3 years ago
Stefano Stabellini writes ("Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock"):
> In regards to this specific patch and also the conversation about 4.16
> or 4.17: I think it would be fine to take this patch in 4.16 in its
> current form. Although it is not required because PCI passthrough is
> not going to be complete in 4.16 anyway, I like that this patch makes
> the code consistent in terms of protection of rbtree accesses.  With
> this patch the arm_smmu_master rbtree is consistently protected from
> concurrent accesses. Without this patch, it is sometimes protected and
> sometimes not, which is not great.

It sounds like this is a possible latent bug, or at least a bad state
of the code that might lead to the introduction of bad bugs later.

So I think I understand the upside.

> So I think that is something that could be good to have in 4.16. But
> like you said, the patch is not strictly required so it is fine either
> way.

Can you set out the downside for me too ?  What are the risks ?  How
are the affected code paths used in 4.16 ?

A good way to think about this is: if taking this patch for 4.16
causes problems, what would that look like ?

Thanks,
Ian.

Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock
Posted by Stefano Stabellini 3 years ago
On Mon, 1 Nov 2021, Ian Jackson wrote:
> Stefano Stabellini writes ("Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock"):
> > In regards to this specific patch and also the conversation about 4.16
> > or 4.17: I think it would be fine to take this patch in 4.16 in its
> > current form. Although it is not required because PCI passthrough is
> > not going to be complete in 4.16 anyway, I like that this patch makes
> > the code consistent in terms of protection of rbtree accesses.  With
> > this patch the arm_smmu_master rbtree is consistently protected from
> > concurrent accesses. Without this patch, it is sometimes protected and
> > sometimes not, which is not great.
> 
> It sounds like this is a possible latent bug, or at least a bad state
> of the code that might lead to the introduction of bad bugs later.
> 
> So I think I understand the upside.
> 
> > So I think that is something that could be good to have in 4.16. But
> > like you said, the patch is not strictly required so it is fine either
> > way.
> 
> Can you set out the downside for me too ?  What are the risks ?  How
> are the affected code paths used in 4.16 ?
> 
> A good way to think about this is: if taking this patch for 4.16
> causes problems, what would that look like ?

The patch affects the SMMU code paths that are currently in use for
the non-PCI devices that are currently supported. A bug in this patch could
cause a failure to set up the SMMU for one or more devices. I would
imagine that it would probably manifest as either an error or a hang
(given that it is adding spin locks) early at boot when the SMMU is
configured.

The validation of this patch would mostly happen by review: it is the
kind of patch that changes some "return -1" into "goto err".
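
For illustration, that transformation in miniature (the condition and
variables are schematic, not the actual hunk):

int ret;

spin_lock(&arm_smmu_devices_lock);
if (already_registered) {            /* illustrative error condition */
    ret = -EBUSY;
    goto out_unlock;                 /* was: return -EBUSY; */
}
ret = insert_smmu_master(smmu, master);

out_unlock:
spin_unlock(&arm_smmu_devices_lock);
return ret;

A missed conversion is exactly the failure mode described above: the lock
is leaked on the error path and the next caller spins forever.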

Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock
Posted by Michal Orzel 3 years ago
Hello,

On 01.11.2021 21:51, Stefano Stabellini wrote:
> On Mon, 1 Nov 2021, Ian Jackson wrote:
>> Stefano Stabellini writes ("Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock"):
>>> In regards to this specific patch and also the conversation about 4.16
>>> or 4.17: I think it would be fine to take this patch in 4.16 in its
>>> current form. Although it is not required because PCI passthrough is
>>> not going to be complete in 4.16 anyway, I like that this patch makes
>>> the code consistent in terms of protection of rbtree accesses.  With
>>> this patch the arm_smmu_master rbtree is consistently protected from
>>> concurrent accesses. Without this patch, it is sometimes protected and
>>> sometimes not, which is not great.
>>
>> It sounds like this is a possible latent bug, or at least a bad state
>> of the code that might lead to the introduction of bad bugs later.
>>
>> So I think I understand the upside.
>>
>>> So I think that is something that could be good to have in 4.16. But
>>> like you said, the patch is not strictly required so it is fine either
>>> way.
>>
>> Can you set out the downside for me too ?  What are the risks ?  How
>> are the affected code paths used in 4.16 ?
>>
>> A good way to think about this is: if taking this patch for 4.16
>> causes problems, what would that look like ?
> 
> The patch affects the SMMU code paths that are currently in use for
> the non-PCI devices that are currently supported. A bug in this patch could
> cause a failure to set up the SMMU for one or more devices. I would
> imagine that it would probably manifest as either an error or a hang
> (given that it is adding spin locks) early at boot when the SMMU is
> configured.
> 
> The validation of this patch would mostly happen by review: it is the
> kind of patch that changes some "return -1" into "goto err".
> 

In order not to leave this patch high and dry:
I can see that Stefano and Bertrand are in favor of this patch and
Julien is rather against it. Therefore I wanted to ask what we are doing with this patch.
Do we want it for 4.16?

Cheers,
Michal

Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock
Posted by Julien Grall 3 years ago

On 04/11/2021 09:18, Michal Orzel wrote:
> Hello,

Hi Michal,

> 
> On 01.11.2021 21:51, Stefano Stabellini wrote:
>> On Mon, 1 Nov 2021, Ian Jackson wrote:
>>> Stefano Stabellini writes ("Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock"):
>>>> In regards to this specific patch and also the conversation about 4.16
>>>> or 4.17: I think it would be fine to take this patch in 4.16 in its
>>>> current form. Although it is not required because PCI passthrough is
>>>> not going to be complete in 4.16 anyway, I like that this patch makes
>>>> the code consistent in terms of protection of rbtree accesses.  With
>>>> this patch the arm_smmu_master rbtree is consistently protected from
>>>> concurrent accesses. Without this patch, it is sometimes protected and
>>>> sometimes not, which is not great.
>>>
>>> It sounds like this is a possible latent bug, or at least a bad state
>>> of the code that might lead to the introduction of bad bugs later.
>>>
>>> So I think I understand the upside.
>>>
>>>> So I think that is something that could be good to have in 4.16. But
>>>> like you said, the patch is not strictly required so it is fine either
>>>> way.
>>>
>>> Can you set out the downside for me too ?  What are the risks ?  How
>>> are the affected code paths used in 4.16 ?
>>>
>>> A good way to think about this is: if taking this patch for 4.16
>>> causes problems, what would that look like ?
>>
>> The patch affects the SMMU code paths that are currently in use for
>> the non-PCI devices that are currently supported. A bug in this patch could
>> cause a failure to set up the SMMU for one or more devices. I would
>> imagine that it would probably manifest as either an error or a hang
>> (given that it is adding spin locks) early at boot when the SMMU is
>> configured.
>>
>> The validation of this patch would mostly happen by review: it is the
>> kind of patch that changes some "return -1" into "goto err".
>>
> 
> In order not to leave this patch high and dry:
> I can see that Stefano and Bertrand are in favor of this patch and
> Julien is rather against it. Therefore I wanted to ask what we are doing with this patch.
My main objection is about the process. We should not merge a patch that 
doesn't fix a real issue at this stage of the release.

That said, the patch is low risk and both Stefano/Bertrand are for it. So...

> Do we want it for 4.16?

... Ian, can we get your release-acked-by?

Cheers,

-- 
Julien Grall

Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock
Posted by Ian Jackson 3 years ago
Julien Grall writes ("Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock"):
> On 04/11/2021 09:18, Michal Orzel wrote:
> > On 01.11.2021 21:51, Stefano Stabellini wrote:
> >> On Mon, 1 Nov 2021, Ian Jackson wrote:
> >>> It sounds like this is a possible latent bug, or at least a bad state
> >>> of the code that might lead to the introduction of bad bugs later.

^ this is the upside.

> >>> Can you set out the downside for me too ?  What are the risks ?  How
> >>> are the affected code paths used in 4.16 ?
> >>>
> >>> A good way to think about this is: if taking this patch for 4.16
> >>> causes problems, what would that look like ?
> >>
> >> The patch affects the SMMU code paths that are currently in use for
> >> the non-PCI devices that are currently supported. A bug in this patch could
> >> cause a failure to set up the SMMU for one or more devices. I would
> >> imagine that it would probably manifest as either an error or a hang
> >> (given that it is adding spin locks) early at boot when the SMMU is
> >> configured.
> >>
> >> The validation of this patch would mostly happen by review: it is the
> >> kind of patch that changes some "return -1" into "goto err".

^ this is the downside.

> > In order not to leave this patch high and dry:

Michal, you are right that we should not just stall this.

> My main objection is about the process. We should not merge a patch that 
> doesn't fix a real issue at this stage of the release.

I agree with Julien.  I wouldn't characterise this as a process
objection.  I think it is a practical objection.  As I understand it
the patch can only harm the experience of users of 4.16.  The release
process is primarily aimed at making sure 4.16 meets the needs of
users.

So:

Release-Nacked-by: Ian Jackson <iwj@xenproject.org>

I would be very sympathetic to code comment patches which document the
limitations/restrictions so as to make future bugs less likely.

> ... Ian, can we get your release-acked-by?

You can have my decision.  I hope this is helpful.

Thanks,
Ian.

Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock
Posted by Bertrand Marquis 3 years, 1 month ago
Hi,

> On 28 Oct 2021, at 21:31, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Thu, 28 Oct 2021, Julien Grall wrote:
>> Hi Stefano,
>> 
>> First apologies for sending the previous e-mails in HTML (thanks for pointing
>> that out!).
>> 
>> On 28/10/2021 01:20, Stefano Stabellini wrote:
>>> On Thu, 28 Oct 2021, Julien Grall wrote:
>>>> On Thu, 28 Oct 2021, 00:14 Stefano Stabellini, <sstabellini@kernel.org>
>>>> wrote:
>>>>       On Wed, 27 Oct 2021, Julien Grall wrote:
>>>>>>>>> +    return ret;
>>>>>>>>>     }
>>>>>>>>>     static int register_smmu_master(struct arm_smmu_device
>>>> *smmu,
>>>>>>>>> @@ -2056,7 +2066,10 @@ static int
>>>> arm_smmu_add_device(struct device
>>>>>>>>> *dev)
>>>>>>>>>         } else {
>>>>>>>>>             struct arm_smmu_master *master;
>>>>>>>>> +        spin_lock(&arm_smmu_devices_lock);
>>>>>>>>>             master = find_smmu_master(smmu, dev->of_node);
>>>>>>>>> +        spin_unlock(&arm_smmu_devices_lock);
>>>>>>>> 
>>>>>>>> At the moment, unlocking here is fine because we don't
>>>> remove the
>>>>>>>> device. However, there are a series to supporting removing
>>>> a device (see
>>>>>>>> [1]). So I think it would be preferable to unlock after the
>>>> last use of
>>>>>>>> 'cfg'.
>>>>>>>> 
>>>>>> Ok. I will move unlocking to the end of this else {} block. I
>>>> was not aware
>>>>>> of the patch you are referring to.
>>>>> 
>>>>> I think the end of the else is still too early. This needs to at
>>>> least be past
>>>>> iommu_group_set_iommudata() because we store cfg.
>>>>> 
>>>>> Potentially, the lock wants to also cover
>>>> arm_smmu_master_alloc_smes(). So I
>>>>> am wondering whether it would be simpler to hold the lock for the
>>>> whole
>>>>> duration of arm_smmu_add_device() (I can see value when we will
>>>> want to
>>>>> interlock with the remove code).
>>>> 
>>>>       The patch was to protect the smmu master list. From that point of
>>>> view
>>>>       the unlock right after find_smmu_master would be sufficient, right?
>>>> 
>>>> 
>>>> Yes. However this is not fixing all the problems (see below).
>>>> 
>>>> 
>>>>       We only need to protect cfg if we are worried that the same device
>>>> is
>>>>       added in two different ways at the same time. Did I get it right?
>>>> If so,
>>>>       I would say that that case should not be possible? Am I missing
>>>> another
>>>>       potential conflict?
>>>> 
>>>> 
>>>> It should not be possible to add the same device twice. The problem is
>>>> more when we are going to remove the device. In this case, "master"
>>>> may disappear at any point.
>>>> 
>>>> The support for removing device is not yet implemented in the tree. But
>>>> there is already a patch on the ML. So I think it would be
>>>> shortsighted to only move the lock to just solve concurrent access to the
>>>> list.
>>>  That makes sense now: the other source of conflict is concurrent add and
>>> remove of the same device. Sorry it wasn't clear to me before.
>> At the moment, we are relying on the upper layer (e.g. PCI or DT subsystem) to
>> prevent concurrent add/remove/assignment. The trouble is we don't have a
>> common lock between PCI and DT.
>> 
>> One possibility would be to add a common lock in the upper layer, but it feels to me
>> this is a bit fragile and may also require a longer locking section than
>> necessary.
>> 
>> That said, add/remove/assignment operations are meant to be rare, so this
>> could be an option. It would also have the advantage of covering all the
>> IOMMUs.
>> 
>>> 
>>>>       I am pointing this out for two reasons:
>>>> 
>>>>       Protecting the list is different from protecting each element from
>>>>       concurrent modification of the element itself. If the latter is a
>>>>       potential problem, I wonder if arm_smmu_add_device is the only
>>>> function
>>>>       affected?
>>>> 
>>>> 
>>>> I had a brief look at the code and couldn't find any other places where
>>>> this may be an issue.
>>>> 
>>>> 
>>>>       The second reason is that extending the lock past
>>>>       arm_smmu_master_alloc_smes is a bit worrying because it causes
>>>>       &arm_smmu_devices_lock and smmu->stream_map_lock to nest, which
>>>> wasn't
>>>>       the case before.
>>>> 
>>>> 
>>>> Nested locks are common. I don't believe there would be a problem here
>>>> with this one.
>>>> 
>>>> 
>>>>       I am not saying that it is a bad idea to extend the lock past
>>>>       arm_smmu_master_alloc_smes -- it might be the right thing to do.
>>>> 
>>>> 
>>>> I don't usually suggest locking changes blindly ;).
>>>> 
>>>>       But I
>>>> 
>>>>       am merely saying that it might be best to think twice about it.
>>>> 
>>>>       and/or do
>>>>       that after 4.16.
>>>> 
>>>> 
>> 
>> [...]
>> 
>>> The other thing that is not clear to me is whether we would need also to
>>> protect places where we use (not allocate) masters and/or cfg, e.g.
>>> arm_smmu_attach_dev, arm_smmu_domain_add_master.
>> 
>> I think both should be done with the lock held. Now the question is: will other
>> IOMMU drivers require the same locking?
>> 
>> If yes, then maybe that locking should be in the common code.
>> 
>>>> That said we can work towards a new locking approach for 4.17.
>>>> However, I would want to have a proposal from your side or at least
>>>> some details on why the suggested locking is not suitable.
>>>  The suggested locking approach up until the last suggestion looks
>>> totally fine to me. The last suggestion is a bit harder to tell because
>>> the PCI removal hook is not there yet, so I am having trouble seeing
>>> exactly what needs to be protected.
>> 
>> The PCI removal hook is the same as the platform device one. There is already
>> a patch on the ML (see [1]) for that.
>> 
>> We have two interlocking problems to resolve:
>>  1) Concurrent requests between the PCI and platform/DT subsystems
>>  2) Removal vs add vs (re)assign
>> 
>> The two approaches I can think of are:
>> 
>> Approach A:
>>  - The driver is responsible for protecting against 1)
>>  - Each subsystem (DT and PCI) is responsible for 2)
>> 
>> Approach B:
>>  The driver is responsible for protecting against both 1) and 2).
>> 
>> From my understanding, the proposed patch from Michal follows approach A
>> whilst my proposal goes towards approach B.
>> 
>> I am open to using approach A, however I think this needs to be documented, as
>> the lock to use will depend on whether the device is a PCI device or not.
> 
> Thanks for the explanation, now everything is a lot clearer. I don't
> have feedback on approach A vs. B -- it looks like both could work well.
> 
> In regards to this specific patch and also the conversation about 4.16
> or 4.17: I think it would be fine to take this patch in 4.16 in its
> current form. Although it is not required because PCI passthrough is
> not going to be complete in 4.16 anyway, I like that this patch makes
> the code consistent in terms of protection of rbtree accesses.  With
> this patch the arm_smmu_master rbtree is consistently protected from
> concurrent accesses. Without this patch, it is sometimes protected and
> sometimes not, which is not great.
> 
> So I think that is something that could be good to have in 4.16. But
> like you said, the patch is not strictly required so it is fine either
> way.

I think that this patch should be added in 4.16 as it addresses an issue (even
if it cannot be triggered yet) and will also serve as a reminder when the rest of
the PCI series is added.

So I would say: merge with changes in commit message (needs a v2).

> 
> Other changes on top of it, e.g. a complete implementation of Approach A
> or Approach B, I think it would be best to target 4.17 so that we can
> evaluate them together with the other outstanding PCI patches. I think
> it would make the review a lot easier (at least for me.)

For the rest, having the full picture with the rest of the PCI patches will help, as
most of this code will be modified by them anyway.

Cheers
Bertrand


Re: [patch-4.16] arm/smmuv1,v2: Protect smmu master list with a lock
Posted by Michal Orzel 3 years, 1 month ago
Hi Julien,

On 27.10.2021 19:02, Julien Grall wrote:
> 
> 
> On 27/10/2021 11:41, Michal Orzel wrote:
>> Hi Julien,
> 
> Hi Michal,
> 
>> On 26.10.2021 18:56, Julien Grall wrote:
>>> Hi,
>>>
>>> On 26/10/2021 17:28, Julien Grall wrote:
>>>> On 26/10/2021 13:29, Michal Orzel wrote:
>>>>> If a device is added to SMMUv1/v2 from DT and PCI
>>>>> at the same time, there is a concurrent access
>>>>> to a smmu master list. This could lead to a
>>>>> scenario where one is looking into a list that
>>>>> is being modified at the same time. Add a lock
>>>>> to prevent this issue.
>>>>
>>>> Did you intend to say "Hold" rather than "Add"?
>>>>
>> Yes, this is what I meant. I will change it.
>>
>>>>>
>>>>> Reuse the existing spinlock arm_smmu_devices_lock
>>>>> as it is already protecting find_smmu_master.
>>>>>
>>>>> ipmmu-smmu and smmuv3 are not impacted by this
>>>>> issue as there is no access or modification of
>>>>> a global resource during add_device.
>>>>>
>>>>> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
>>>>> ---
>>>>> This patch aims for 4.16 release.
>>>>> Benefits:
>>>>> Remove a bug that could lead to a corruption of the
>>>>> smmu master list, which would be very hard to debug.
>>>>
>>>>   From my understanding, this corruption would only happen with CONFIG_HAS_PCI. At the moment, this is an experimental feature as it is not fully complete.
>>>
>>> Actually, digging through the code, I noticed that iommu_add_device() assumes that it can only be called with platform/DT. Thankfully, AFAICT, the function would return -ENXIO because the fwspec would not be allocated for a PCI device.
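
(A sketch of the guard being described, assuming the Linux-derived fwspec
accessor used by the Xen port; treat the names as illustrative:)

struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);

/* PCI devices currently get no fwspec, so they are rejected here,
 * before the SMMU driver can ever touch its master list. */
if (!fwspec)
    return -ENXIO;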
>>>
>>> So was this patch tested with additional patch on top?
>>>
>> The purpose of this patch is to fix the issue that is present and which you thankfully noticed.
There are still a lot of patches to be added that aim to support PCI passthrough.
The complete PCI series is tested with SMMU and it works.
But there is no way to test this patch standalone, as iommu_add_device is not yet in the correct form for PCI.
> 
> Ok. I would suggest saying this is a latent bug so it is clear that the patch is more for hardening at the moment.
> 
Ok I will mention it in the commit msg.

>>>>>    xen/drivers/passthrough/arm/smmu.c | 25 +++++++++++++++++++------
>>>>>    1 file changed, 19 insertions(+), 6 deletions(-)
>>>>>
>>>>> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
>>>>> index c9dfc4caa0..be62a66a28 100644
>>>>> --- a/xen/drivers/passthrough/arm/smmu.c
>>>>> +++ b/xen/drivers/passthrough/arm/smmu.c
>>>>> @@ -820,21 +820,25 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
>>>>>                         struct device *dev,
>>>>>                         struct iommu_fwspec *fwspec)
>>>>>    {
>>>>> -    int i;
>>>>> +    int i, ret;
>>>>>        struct arm_smmu_master *master;
>>>>>        struct device_node *dev_node = dev_get_dev_node(dev);
>>>>> +    spin_lock(&arm_smmu_devices_lock);
>>>>>        master = find_smmu_master(smmu, dev_node);
>>>>>        if (master) {
>>>>>            dev_err(dev,
>>>>>                "rejecting multiple registrations for master device %s\n",
>>>>>                dev_node->name);
>>>>> -        return -EBUSY;
>>>>> +        ret = -EBUSY;
>>>>> +        goto out_unlock;
>>>>>        }
>>>>>        master = devm_kzalloc(dev, sizeof(*master), GFP_KERNEL);
>>>>> -    if (!master)
>>>>> -        return -ENOMEM;
>>>>> +    if (!master) {
>>>>> +        ret = -ENOMEM;
>>>>> +        goto out_unlock;
>>>>> +    }
>>>>>        master->of_node = dev_node;
>>>>>        /* Xen: Let Xen know that the device is protected by an SMMU */
>>>>> @@ -846,11 +850,17 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
>>>>>                dev_err(dev,
>>>>>                    "stream ID for master device %s greater than maximum allowed (%d)\n",
>>>>>                    dev_node->name, smmu->num_mapping_groups);
>>>>> -            return -ERANGE;
>>>>> +            ret = -ERANGE;
>>>>> +            goto out_unlock;
>>>>>            }
>>>>>            master->cfg.smendx[i] = INVALID_SMENDX;
>>>>>        }
>>>>> -    return insert_smmu_master(smmu, master);
>>>>> +
>>>>> +    ret = insert_smmu_master(smmu, master);
>>>>> +
>>>>> +out_unlock:
>>>>> +    spin_unlock(&arm_smmu_devices_lock);
>>>>
>>>> Please add a newline here.
>>>>
>> Ok.
>>
>>>>> +    return ret;
>>>>>    }
>>>>>    static int register_smmu_master(struct arm_smmu_device *smmu,
>>>>> @@ -2056,7 +2066,10 @@ static int arm_smmu_add_device(struct device *dev)
>>>>>        } else {
>>>>>            struct arm_smmu_master *master;
>>>>> +        spin_lock(&arm_smmu_devices_lock);
>>>>>            master = find_smmu_master(smmu, dev->of_node);
>>>>> +        spin_unlock(&arm_smmu_devices_lock);
>>>>
> >>>> At the moment, unlocking here is fine because we don't remove the device. However, there is a series supporting the removal of a device (see [1]). So I think it would be preferable to unlock after the last use of 'cfg'.
>>>>
>> Ok. I will move unlocking to the end of this else {} block. I was not aware of the patch you are referring to.
> 
> I think the end of the else is still too early. This needs to at least be past iommu_group_set_iommudata() because we store cfg.
> 
> Potentially, the lock wants to also cover arm_smmu_master_alloc_smes(). So I am wondering whether it would be simpler to hold the lock for the whole duration of arm_smmu_add_device() (I can see value when we will want to interlock with the remove code).
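
(A sketch of that suggestion, assuming a future remove path taking the
same lock; error handling and the surrounding code are elided:)

spin_lock(&arm_smmu_devices_lock);

master = find_smmu_master(smmu, dev->of_node);
if (!master) {
    spin_unlock(&arm_smmu_devices_lock);
    return -ENODEV;
}

/* ... group setup, arm_smmu_master_alloc_smes(), ... */
iommu_group_set_iommudata(group, &master->cfg, NULL);

spin_unlock(&arm_smmu_devices_lock);    /* now past the last use of cfg */

With the lock held up to here, a concurrent remove cannot free "master"
(and with it "cfg") while arm_smmu_add_device() is still using it.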
> 
>>
>>>> Looking at the code, I also noticed that the error path is not correct for at least the PCI device and we would leak memory. We also assume that Stream ID == Requester ID. Are both of them on your radar for PCI passthrough?
>>>>
>> I agree with you. I also noticed it. However, this is going to be fixed with the next patch series when Rahul gets back.
> 
> Good. Do you have a todo list for PCI passthrough that can be shared?
> 
As I am not working on the PCI stuff, I do not have such a list but I can point you to the branch on which both ARM and EPAM are working:
https://gitlab.com/xen-project/fusa/xen-integration/-/tree/integration/pci-passthrough

> Cheers,
> 

Cheers,
Michal