The iotlb_sync_map callback is called by the iommu core after non-present
to present mappings are created. The iommu driver uses this callback to
invalidate caches if the IOMMU is working in caching mode and second-only
translation is used for the domain. Use cache_tag_flush_cm_range() in this
callback.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
---
drivers/iommu/intel/iommu.c | 22 +---------------------
1 file changed, 1 insertion(+), 21 deletions(-)
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 1c03d2dafb9d..2dcab1e5cd4d 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -1504,20 +1504,6 @@ static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
iommu_flush_dev_iotlb(domain, addr, mask);
}
-/* Notification for newly created mappings */
-static void __mapping_notify_one(struct intel_iommu *iommu, struct dmar_domain *domain,
- unsigned long pfn, unsigned int pages)
-{
- /*
- * It's a non-present to present mapping. Only flush if caching mode
- * and second level.
- */
- if (cap_caching_mode(iommu->cap) && !domain->use_first_level)
- iommu_flush_iotlb_psi(iommu, domain, pfn, pages, 0, 1);
- else
- iommu_flush_write_buffer(iommu);
-}
-
/*
* Flush the relevant caches in nested translation if the domain
* also serves as a parent
@@ -4549,14 +4535,8 @@ static bool risky_device(struct pci_dev *pdev)
static int intel_iommu_iotlb_sync_map(struct iommu_domain *domain,
unsigned long iova, size_t size)
{
- struct dmar_domain *dmar_domain = to_dmar_domain(domain);
- unsigned long pages = aligned_nrpages(iova, size);
- unsigned long pfn = iova >> VTD_PAGE_SHIFT;
- struct iommu_domain_info *info;
- unsigned long i;
+ cache_tag_flush_cm_range(to_dmar_domain(domain), iova, iova + size - 1);
- xa_for_each(&dmar_domain->iommu_array, i, info)
- __mapping_notify_one(info->iommu, dmar_domain, pfn, pages);
return 0;
}
--
2.34.1
> From: Lu Baolu <baolu.lu@linux.intel.com>
> Sent: Monday, March 25, 2024 10:17 AM
>
> [...]
>
> -/* Notification for newly created mappings */
> -static void __mapping_notify_one(struct intel_iommu *iommu, struct dmar_domain *domain,
> -				 unsigned long pfn, unsigned int pages)
> -{
> - /*
> - * It's a non-present to present mapping. Only flush if caching mode
> - * and second level.
> - */
> - if (cap_caching_mode(iommu->cap) && !domain->use_first_level)
> - iommu_flush_iotlb_psi(iommu, domain, pfn, pages, 0, 1);
> - else
> - iommu_flush_write_buffer(iommu);
> -}
iommu_flush_write_buffer is for a quite different issue. It's clearer to
keep it separated from the iotlb helpers.
On 3/28/24 3:48 PM, Tian, Kevin wrote:
>> From: Lu Baolu <baolu.lu@linux.intel.com>
>> Sent: Monday, March 25, 2024 10:17 AM
>>
>> [...]
>
> iommu_flush_write_buffer is for a quite different issue. It's clearer to
> keep it separated from the iotlb helpers.
The VT-d spec describes the write buffer flushing in section 6.8. It
states,

1. Write buffer flushing is required only for earlier VT-d hardware
   implementations. Newer hardware implementations are expected to NOT
   require this.
2. Write buffer flushing is related to cache invalidation. The hardware
   performs an implicit write-buffer-flushing as a pre-condition to
   the invalidation operations.

According to this, write buffer flushing is only required in the map
path when the IOMMU is not in caching mode and the RWBF capability is
set, which perfectly matches the case of cache_tag_flush_cm_range().

By doing it in cache_tag_flush_cm_range(), the code will be simpler and
easier to maintain.
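
For example, the helper could fold the write buffer flushing in like
below. This is only a rough sketch on top of the cache tag
infrastructure added earlier in this series; the per-tag IOTLB flush
helper and its parameters are illustrative:

void cache_tag_flush_cm_range(struct dmar_domain *domain,
			      unsigned long start, unsigned long end)
{
	struct cache_tag *tag;
	unsigned long flags;

	spin_lock_irqsave(&domain->cache_lock, flags);
	list_for_each_entry(tag, &domain->cache_tags, node) {
		struct intel_iommu *iommu = tag->iommu;

		/*
		 * A non-present to present change needs an explicit
		 * IOTLB flush only for second-stage tables in caching
		 * mode. Otherwise, flushing the write buffer (a no-op
		 * unless the RWBF capability is set) is sufficient.
		 */
		if (!cap_caching_mode(iommu->cap) || domain->use_first_level) {
			iommu_flush_write_buffer(iommu);
			continue;
		}

		if (tag->type == CACHE_TAG_IOTLB ||
		    tag->type == CACHE_TAG_NESTING_IOTLB)
			cache_tag_flush_iotlb(tag, start, end, 0);
	}
	spin_unlock_irqrestore(&domain->cache_lock, flags);
}

This way neither the caching mode check nor the RWBF handling leaks into
the map path callers.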
Best regards,
baolu
> From: Baolu Lu <baolu.lu@linux.intel.com>
> Sent: Sunday, April 7, 2024 2:41 PM
>
> On 3/28/24 3:48 PM, Tian, Kevin wrote:
> >> From: Lu Baolu <baolu.lu@linux.intel.com>
> >> Sent: Monday, March 25, 2024 10:17 AM
> >>
> >> [...]
> >
> > iommu_flush_write_buffer is for a quite different issue. It's clearer to
> > keep it separated from the iotlb helpers.
>
> The VT-d spec describes the write buffer flushing in section 6.8. It
> states,
>
> 1. Write buffer flushing is required only for earlier VT-d hardware
> implementations. Newer hardware implementations are expected to NOT
> require this.
> 2. Write buffer flushing is related to cache invalidation. The hardware
> performs an implicit write-buffer-flushing as a pre-condition to
> the invalidation operations.
>
> According to this, write buffer flushing is only required in the map
> path when the IOMMU is not in caching mode and the RWBF capability is
> set, which perfectly matches the case of cache_tag_flush_cm_range().
>
> By doing it in cache_tag_flush_cm_range(), the code will be simpler and
> easier to maintain.
>
But the above actually applies when caching mode is false or when the
1st stage is being used, i.e. the opposite of what flush_cm_range()
implies.

If we do it through a general flush_range() helper which accepts a
parameter to indicate the map path, then it's reasonable to move this
special handling to that helper, as that is another trick which the
caller doesn't need to know about.
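
e.g. something like below, just to show the shape (names made up):

/*
 * One entry point for range invalidation. @map indicates the map
 * path, i.e. a non-present to present change, so both the caching
 * mode check and the write buffer trick can stay inside the helper.
 */
void cache_tag_flush_range(struct dmar_domain *domain,
			   unsigned long start, unsigned long end,
			   bool map);

then the callers don't need to care which trick applies where.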
On 4/8/24 10:51 AM, Tian, Kevin wrote:
>> From: Baolu Lu<baolu.lu@linux.intel.com>
>> Sent: Sunday, April 7, 2024 2:41 PM
>>
>> On 3/28/24 3:48 PM, Tian, Kevin wrote:
>>>> From: Lu Baolu<baolu.lu@linux.intel.com>
>>>> Sent: Monday, March 25, 2024 10:17 AM
>>>>
>>>> [...]
>>> iommu_flush_write_buffer is for a quite different issue. It's clearer to
>>> keep it separated from the iotlb helpers.
>> [...]
>>
> But the above actually applies when caching mode is false or when the
> 1st stage is being used, i.e. the opposite of what flush_cm_range()
> implies.
>
> If we do it through a general flush_range() helper which accepts a
> parameter to indicate the map path, then it's reasonable to move this
> special handling to that helper, as that is another trick which the
> caller doesn't need to know about.
This could be an option. How about my proposal in another reply?
Best regards,
baolu