[PATCH v1 1/3] arm64: mm: Fix rodata=full block mapping support for realm guests

Ryan Roberts posted 3 patches 1 week, 4 days ago
Posted by Ryan Roberts 1 week, 4 days ago
Commit a166563e7ec37 ("arm64: mm: support large block mapping when
rodata=full") enabled the linear map to be mapped by block/cont while
still allowing granular permission changes on BBML2_NOABORT systems by
lazily splitting the live mappings. This mechanism was intended to be
usable by realm guests since they need to dynamically share dma buffers
with the host by "decrypting" them - which for Arm CCA, means marking
them as shared in the page tables.

However, it turns out that the mechanism was failing for realm guests
because realms need to share their dma buffers (via
__set_memory_enc_dec()) much earlier during boot than
split_kernel_leaf_mapping() was able to handle. The report linked below
showed that GIC's ITS was one such user. But during the investigation I
found other callsites that could not meet the
split_kernel_leaf_mapping() constraints.

The problem is that we block map the linear map based on the boot CPU
supporting BBML2_NOABORT, then check that all the other CPUs support it
too when finalizing the caps. If they don't, then we stop_machine() and
split to ptes. For safety, split_kernel_leaf_mapping() previously
wouldn't permit splitting until after the caps were finalized. That
ensured that if any secondary cpus were running that didn't support
BBML2_NOABORT, we wouldn't risk breaking them.

I've fixed this problem by reducing the black-out windows during which
we refuse to split; there are now 2 windows. The first runs from T0
until the page allocator is initialized; splitting allocates pagetable
memory from the page allocator, so it must be available. The second
covers the period from when we start onlining the secondary cpus until
the system caps are finalized (this is a very small window).

All of the problematic callers are calling __set_memory_enc_dec() before
the secondary cpus come online, so this solves the problem. However, one
of these callers, swiotlb_update_mem_attributes(), was trying to split
before the page allocator was initialized. So I have moved this call
from arch_mm_preinit() to mem_init(), which solves the ordering issue.

I've added warnings and an error return for any attempt to split during
the black-out windows.

Note there are other issues which prevent booting all the way to user
space, which will be fixed in subsequent patches.

Reported-by: Jinjiang Tu <tujinjiang@huawei.com>
Closes: https://lore.kernel.org/all/0b2a4ae5-fc51-4d77-b177-b2e9db74f11d@huawei.com/
Fixes: a166563e7ec37 ("arm64: mm: support large block mapping when rodata=full")
Cc: stable@vger.kernel.org
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/mm/init.c |  9 ++++++++-
 arch/arm64/mm/mmu.c  | 35 +++++++++++++++++++++++++++--------
 2 files changed, 35 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 96711b8578fd0..b9b248d24fd10 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -350,7 +350,6 @@ void __init arch_mm_preinit(void)
 	}
 
 	swiotlb_init(swiotlb, flags);
-	swiotlb_update_mem_attributes();
 
 	/*
 	 * Check boundaries twice: Some fundamental inconsistencies can be
@@ -377,6 +376,14 @@ void __init arch_mm_preinit(void)
 	}
 }
 
+bool page_alloc_available __ro_after_init;
+
+void __init mem_init(void)
+{
+	page_alloc_available = true;
+	swiotlb_update_mem_attributes();
+}
+
 void free_initmem(void)
 {
 	void *lm_init_begin = lm_alias(__init_begin);
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index a6a00accf4f93..5b6a8d53e64b7 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -773,14 +773,33 @@ int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
 {
 	int ret;
 
-	/*
-	 * !BBML2_NOABORT systems should not be trying to change permissions on
-	 * anything that is not pte-mapped in the first place. Just return early
-	 * and let the permission change code raise a warning if not already
-	 * pte-mapped.
-	 */
-	if (!system_supports_bbml2_noabort())
-		return 0;
+	if (!system_supports_bbml2_noabort()) {
+		/*
+		 * !BBML2_NOABORT systems should not be trying to change
+		 * permissions on anything that is not pte-mapped in the first
+		 * place. Just return early and let the permission change code
+		 * raise a warning if not already pte-mapped.
+		 */
+		if (system_capabilities_finalized() ||
+		    !cpu_supports_bbml2_noabort())
+			return 0;
+
+		/*
+		 * Boot-time: split_kernel_leaf_mapping_locked() allocates from
+		 * page allocator. Can't split until it's available.
+		 */
+		extern bool page_alloc_available;
+		if (WARN_ON(!page_alloc_available))
+			return -EBUSY;
+
+		/*
+		 * Boot-time: Started secondary cpus but don't know if they
+		 * support BBML2_NOABORT yet. Can't allow splitting in this
+		 * window in case they don't.
+		 */
+		if (WARN_ON(num_online_cpus() > 1))
+			return -EBUSY;
+	}
 
 	/*
 	 * If the region is within a pte-mapped area, there is no need to try to
-- 
2.43.0
Re: [PATCH v1 1/3] arm64: mm: Fix rodata=full block mapping support for realm guests
Posted by Yang Shi 1 week, 3 days ago

On 3/23/26 6:03 AM, Ryan Roberts wrote:
> Commit a166563e7ec37 ("arm64: mm: support large block mapping when
> rodata=full") enabled the linear map to be mapped by block/cont while
> still allowing granular permission changes on BBML2_NOABORT systems by
> lazily splitting the live mappings. This mechanism was intended to be
> usable by realm guests since they need to dynamically share dma buffers
> with the host by "decrypting" them - which for Arm CCA, means marking
> them as shared in the page tables.
>
> However, it turns out that the mechanism was failing for realm guests
> because realms need to share their dma buffers (via
> __set_memory_enc_dec()) much earlier during boot than
> split_kernel_leaf_mapping() was able to handle. The report linked below
> showed that GIC's ITS was one such user. But during the investigation I
> found other callsites that could not meet the
> split_kernel_leaf_mapping() constraints.
>
> The problem is that we block map the linear map based on the boot CPU
> supporting BBML2_NOABORT, then check that all the other CPUs support it
> too when finalizing the caps. If they don't, then we stop_machine() and
> split to ptes. For safety, split_kernel_leaf_mapping() previously
> wouldn't permit splitting until after the caps were finalized. That
> ensured that if any secondary cpus were running that didn't support
> BBML2_NOABORT, we wouldn't risk breaking them.
>
> I've fixed this problem by reducing the black-out window where we refuse
> to split; there are now 2 windows. The first is from T0 until the page
> allocator is initialized. Splitting allocates memory for the page
> allocator so it must be in use. The second covers the period between
> starting to online the secondary cpus until the system caps are
> finalized (this is a very small window).
>
> All of the problematic callers are calling __set_memory_enc_dec() before
> the secondary cpus come online, so this solves the problem. However, one
> of these callers, swiotlb_update_mem_attributes(), was trying to split
> before the page allocator was initialized. So I have moved this call
> from arch_mm_preinit() to mem_init(), which solves the ordering issue.
>
> I've added warnings and return an error if any attempt is made to split
> in the black-out windows.
>
> Note there are other issues which prevent booting all the way to user
> space, which will be fixed in subsequent patches.

Hi Ryan,

Thanks for putting everything together to get these patches out so
quickly. It basically looks good to me. However, I'm wondering whether
split_kernel_leaf_mapping() should use different memory allocators at
different stages. If the buddy allocator has been initialized, it can
use the page allocator; otherwise, for example, in the early boot
stage, it can use the memblock allocator. Then
split_kernel_leaf_mapping() could be called at any time and we wouldn't
have to rely on the boot order of subsystems.

Thanks,
Yang

>
> Reported-by: Jinjiang Tu <tujinjiang@huawei.com>
> Closes: https://lore.kernel.org/all/0b2a4ae5-fc51-4d77-b177-b2e9db74f11d@huawei.com/
> Fixes: a166563e7ec37 ("arm64: mm: support large block mapping when rodata=full")
> Cc: stable@vger.kernel.org
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
>   arch/arm64/mm/init.c |  9 ++++++++-
>   arch/arm64/mm/mmu.c  | 35 +++++++++++++++++++++++++++--------
>   2 files changed, 35 insertions(+), 9 deletions(-)
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 96711b8578fd0..b9b248d24fd10 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -350,7 +350,6 @@ void __init arch_mm_preinit(void)
>   	}
>   
>   	swiotlb_init(swiotlb, flags);
> -	swiotlb_update_mem_attributes();
>   
>   	/*
>   	 * Check boundaries twice: Some fundamental inconsistencies can be
> @@ -377,6 +376,14 @@ void __init arch_mm_preinit(void)
>   	}
>   }
>   
> +bool page_alloc_available __ro_after_init;
> +
> +void __init mem_init(void)
> +{
> +	page_alloc_available = true;
> +	swiotlb_update_mem_attributes();
> +}
> +
>   void free_initmem(void)
>   {
>   	void *lm_init_begin = lm_alias(__init_begin);
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index a6a00accf4f93..5b6a8d53e64b7 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -773,14 +773,33 @@ int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
>   {
>   	int ret;
>   
> -	/*
> -	 * !BBML2_NOABORT systems should not be trying to change permissions on
> -	 * anything that is not pte-mapped in the first place. Just return early
> -	 * and let the permission change code raise a warning if not already
> -	 * pte-mapped.
> -	 */
> -	if (!system_supports_bbml2_noabort())
> -		return 0;
> +	if (!system_supports_bbml2_noabort()) {
> +		/*
> +		 * !BBML2_NOABORT systems should not be trying to change
> +		 * permissions on anything that is not pte-mapped in the first
> +		 * place. Just return early and let the permission change code
> +		 * raise a warning if not already pte-mapped.
> +		 */
> +		if (system_capabilities_finalized() ||
> +		    !cpu_supports_bbml2_noabort())
> +			return 0;
> +
> +		/*
> +		 * Boot-time: split_kernel_leaf_mapping_locked() allocates from
> +		 * page allocator. Can't split until it's available.
> +		 */
> +		extern bool page_alloc_available;
> +		if (WARN_ON(!page_alloc_available))
> +			return -EBUSY;
> +
> +		/*
> +		 * Boot-time: Started secondary cpus but don't know if they
> +		 * support BBML2_NOABORT yet. Can't allow splitting in this
> +		 * window in case they don't.
> +		 */
> +		if (WARN_ON(num_online_cpus() > 1))
> +			return -EBUSY;
> +	}
>   
>   	/*
>   	 * If the region is within a pte-mapped area, there is no need to try to
Re: [PATCH v1 1/3] arm64: mm: Fix rodata=full block mapping support for realm guests
Posted by Ryan Roberts 1 week, 2 days ago
On 23/03/2026 21:34, Yang Shi wrote:
> 
> 
> On 3/23/26 6:03 AM, Ryan Roberts wrote:
>> Commit a166563e7ec37 ("arm64: mm: support large block mapping when
>> rodata=full") enabled the linear map to be mapped by block/cont while
>> still allowing granular permission changes on BBML2_NOABORT systems by
>> lazily splitting the live mappings. This mechanism was intended to be
>> usable by realm guests since they need to dynamically share dma buffers
>> with the host by "decrypting" them - which for Arm CCA, means marking
>> them as shared in the page tables.
>>
>> However, it turns out that the mechanism was failing for realm guests
>> because realms need to share their dma buffers (via
>> __set_memory_enc_dec()) much earlier during boot than
>> split_kernel_leaf_mapping() was able to handle. The report linked below
>> showed that GIC's ITS was one such user. But during the investigation I
>> found other callsites that could not meet the
>> split_kernel_leaf_mapping() constraints.
>>
>> The problem is that we block map the linear map based on the boot CPU
>> supporting BBML2_NOABORT, then check that all the other CPUs support it
>> too when finalizing the caps. If they don't, then we stop_machine() and
>> split to ptes. For safety, split_kernel_leaf_mapping() previously
>> wouldn't permit splitting until after the caps were finalized. That
>> ensured that if any secondary cpus were running that didn't support
>> BBML2_NOABORT, we wouldn't risk breaking them.
>>
>> I've fixed this problem by reducing the black-out window where we refuse
>> to split; there are now 2 windows. The first is from T0 until the page
>> allocator is initialized. Splitting allocates memory for the page
>> allocator so it must be in use. The second covers the period between
>> starting to online the secondary cpus until the system caps are
>> finalized (this is a very small window).
>>
>> All of the problematic callers are calling __set_memory_enc_dec() before
>> the secondary cpus come online, so this solves the problem. However, one
>> of these callers, swiotlb_update_mem_attributes(), was trying to split
>> before the page allocator was initialized. So I have moved this call
>> from arch_mm_preinit() to mem_init(), which solves the ordering issue.
>>
>> I've added warnings and return an error if any attempt is made to split
>> in the black-out windows.
>>
>> Note there are other issues which prevent booting all the way to user
>> space, which will be fixed in subsequent patches.
> 
> Hi Ryan,
> 
> Thanks for putting everything together to have the patches so quickly. It
> basically looks good to me. However, I'm thinking about whether we should have
> split_kernel_leaf_mapping() call for different memory allocators in different
> stages. If buddy has been initialized, it can call page allocator, otherwise,
> for example, in early boot stage, it can call memblock allocator. So
> split_kernel_leaf_mapping() should be able to be called anytime and we don't
> have to rely on the boot order of subsystems.

I considered that, but ultimately we would just be adding dead code. I've added
a warning that will catch this usage. So I'd prefer to leave it as is for now
and only add this functionality if we identify a need.

Thanks,
Ryan


> 
> Thanks,
> Yang
>
Re: [PATCH v1 1/3] arm64: mm: Fix rodata=full block mapping support for realm guests
Posted by Yang Shi 1 week ago

On 3/25/26 10:29 AM, Ryan Roberts wrote:
> On 23/03/2026 21:34, Yang Shi wrote:
>>
>> On 3/23/26 6:03 AM, Ryan Roberts wrote:
>>> Commit a166563e7ec37 ("arm64: mm: support large block mapping when
>>> rodata=full") enabled the linear map to be mapped by block/cont while
>>> still allowing granular permission changes on BBML2_NOABORT systems by
>>> lazily splitting the live mappings. This mechanism was intended to be
>>> usable by realm guests since they need to dynamically share dma buffers
>>> with the host by "decrypting" them - which for Arm CCA, means marking
>>> them as shared in the page tables.
>>>
>>> However, it turns out that the mechanism was failing for realm guests
>>> because realms need to share their dma buffers (via
>>> __set_memory_enc_dec()) much earlier during boot than
>>> split_kernel_leaf_mapping() was able to handle. The report linked below
>>> showed that GIC's ITS was one such user. But during the investigation I
>>> found other callsites that could not meet the
>>> split_kernel_leaf_mapping() constraints.
>>>
>>> The problem is that we block map the linear map based on the boot CPU
>>> supporting BBML2_NOABORT, then check that all the other CPUs support it
>>> too when finalizing the caps. If they don't, then we stop_machine() and
>>> split to ptes. For safety, split_kernel_leaf_mapping() previously
>>> wouldn't permit splitting until after the caps were finalized. That
>>> ensured that if any secondary cpus were running that didn't support
>>> BBML2_NOABORT, we wouldn't risk breaking them.
>>>
>>> I've fixed this problem by reducing the black-out window where we refuse
>>> to split; there are now 2 windows. The first is from T0 until the page
>>> allocator is initialized. Splitting allocates memory for the page
>>> allocator so it must be in use. The second covers the period between
>>> starting to online the secondary cpus until the system caps are
>>> finalized (this is a very small window).
>>>
>>> All of the problematic callers are calling __set_memory_enc_dec() before
>>> the secondary cpus come online, so this solves the problem. However, one
>>> of these callers, swiotlb_update_mem_attributes(), was trying to split
>>> before the page allocator was initialized. So I have moved this call
>>> from arch_mm_preinit() to mem_init(), which solves the ordering issue.
>>>
>>> I've added warnings and return an error if any attempt is made to split
>>> in the black-out windows.
>>>
>>> Note there are other issues which prevent booting all the way to user
>>> space, which will be fixed in subsequent patches.
>> Hi Ryan,
>>
>> Thanks for putting everything together to have the patches so quickly. It
>> basically looks good to me. However, I'm thinking about whether we should have
>> split_kernel_leaf_mapping() call for different memory allocators in different
>> stages. If buddy has been initialized, it can call page allocator, otherwise,
>> for example, in early boot stage, it can call memblock allocator. So
>> split_kernel_leaf_mapping() should be able to be called anytime and we don't
>> have to rely on the boot order of subsystems.
> I considered that, but ultimately we would just be adding dead code. I've added
> a warning that will catch this usage. So I'd prefer to leave it as is for now
> and only add this functionality if we identify a need.

OK, fine to me. I don't have strong preference for either.

Thanks,
Yang

>
> Thanks,
> Ryan
>
>
>> Thanks,
>> Yang
>>
Re: [PATCH v1 1/3] arm64: mm: Fix rodata=full block mapping support for realm guests
Posted by Kevin Brodsky 1 week, 4 days ago
On 23/03/2026 14:03, Ryan Roberts wrote:
> [...]
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 96711b8578fd0..b9b248d24fd10 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -350,7 +350,6 @@ void __init arch_mm_preinit(void)
>  	}
>  
>  	swiotlb_init(swiotlb, flags);
> -	swiotlb_update_mem_attributes();
>  
>  	/*
>  	 * Check boundaries twice: Some fundamental inconsistencies can be
> @@ -377,6 +376,14 @@ void __init arch_mm_preinit(void)
>  	}
>  }
>  
> +bool page_alloc_available __ro_after_init;
> +
> +void __init mem_init(void)
> +{
> +	page_alloc_available = true;
> +	swiotlb_update_mem_attributes();

The move seems reasonable; x86 calls this function even later (from
arch_cpu_finalize_init()).

> +}
> +
>  void free_initmem(void)
>  {
>  	void *lm_init_begin = lm_alias(__init_begin);
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index a6a00accf4f93..5b6a8d53e64b7 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -773,14 +773,33 @@ int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
>  {
>  	int ret;
>  
> -	/*
> -	 * !BBML2_NOABORT systems should not be trying to change permissions on
> -	 * anything that is not pte-mapped in the first place. Just return early
> -	 * and let the permission change code raise a warning if not already
> -	 * pte-mapped.
> -	 */
> -	if (!system_supports_bbml2_noabort())
> -		return 0;
> +	if (!system_supports_bbml2_noabort()) {
> +		/*
> +		 * !BBML2_NOABORT systems should not be trying to change
> +		 * permissions on anything that is not pte-mapped in the first
> +		 * place. Just return early and let the permission change code
> +		 * raise a warning if not already pte-mapped.
> +		 */
> +		if (system_capabilities_finalized() ||
> +		    !cpu_supports_bbml2_noabort())
> +			return 0;
> +
> +		/*
> +		 * Boot-time: split_kernel_leaf_mapping_locked() allocates from
> +		 * page allocator. Can't split until it's available.
> +		 */
> +		extern bool page_alloc_available;

Could we at least have the declaration in say <asm/mmu.h>? x86 defines a
similar global so we could eventually have a generic global (defined
before mem_init() is called).

Looks good otherwise:

Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>

> +		if (WARN_ON(!page_alloc_available))
> +			return -EBUSY;
> +
> +		/*
> +		 * Boot-time: Started secondary cpus but don't know if they
> +		 * support BBML2_NOABORT yet. Can't allow splitting in this
> +		 * window in case they don't.
> +		 */
> +		if (WARN_ON(num_online_cpus() > 1))
> +			return -EBUSY;
> +	}
>  
>  	/*
>  	 * If the region is within a pte-mapped area, there is no need to try to
Re: [PATCH v1 1/3] arm64: mm: Fix rodata=full block mapping support for realm guests
Posted by Ryan Roberts 1 week, 4 days ago
On 23/03/2026 16:52, Kevin Brodsky wrote:
> On 23/03/2026 14:03, Ryan Roberts wrote:
>> [...]
>>
>> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
>> index 96711b8578fd0..b9b248d24fd10 100644
>> --- a/arch/arm64/mm/init.c
>> +++ b/arch/arm64/mm/init.c
>> @@ -350,7 +350,6 @@ void __init arch_mm_preinit(void)
>>  	}
>>  
>>  	swiotlb_init(swiotlb, flags);
>> -	swiotlb_update_mem_attributes();
>>  
>>  	/*
>>  	 * Check boundaries twice: Some fundamental inconsistencies can be
>> @@ -377,6 +376,14 @@ void __init arch_mm_preinit(void)
>>  	}
>>  }
>>  
>> +bool page_alloc_available __ro_after_init;
>> +
>> +void __init mem_init(void)
>> +{
>> +	page_alloc_available = true;
>> +	swiotlb_update_mem_attributes();
> 
> The move seems reasonable, x86 calls this function even later (from
> arch_cpu_finalize_init()).
> 
>> +}
>> +
>>  void free_initmem(void)
>>  {
>>  	void *lm_init_begin = lm_alias(__init_begin);
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index a6a00accf4f93..5b6a8d53e64b7 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -773,14 +773,33 @@ int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
>>  {
>>  	int ret;
>>  
>> -	/*
>> -	 * !BBML2_NOABORT systems should not be trying to change permissions on
>> -	 * anything that is not pte-mapped in the first place. Just return early
>> -	 * and let the permission change code raise a warning if not already
>> -	 * pte-mapped.
>> -	 */
>> -	if (!system_supports_bbml2_noabort())
>> -		return 0;
>> +	if (!system_supports_bbml2_noabort()) {
>> +		/*
>> +		 * !BBML2_NOABORT systems should not be trying to change
>> +		 * permissions on anything that is not pte-mapped in the first
>> +		 * place. Just return early and let the permission change code
>> +		 * raise a warning if not already pte-mapped.
>> +		 */
>> +		if (system_capabilities_finalized() ||
>> +		    !cpu_supports_bbml2_noabort())
>> +			return 0;
>> +
>> +		/*
>> +		 * Boot-time: split_kernel_leaf_mapping_locked() allocates from
>> +		 * page allocator. Can't split until it's available.
>> +		 */
>> +		extern bool page_alloc_available;
> 
> Could we at least have the declaration in say <asm/mmu.h>? x86 defines a
> similar global so we could eventually have a generic global (defined
> before mem_init() is called).

Yeah, fair enough. I was being lazy. I'll move it to the header for v2.

> 
> Looks good otherwise:
> 
> Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
> 
>> +		if (WARN_ON(!page_alloc_available))
>> +			return -EBUSY;
>> +
>> +		/*
>> +		 * Boot-time: Started secondary cpus but don't know if they
>> +		 * support BBML2_NOABORT yet. Can't allow splitting in this
>> +		 * window in case they don't.
>> +		 */
>> +		if (WARN_ON(num_online_cpus() > 1))
>> +			return -EBUSY;
>> +	}
>>  
>>  	/*
>>  	 * If the region is within a pte-mapped area, there is no need to try to