From: Wei Chen <wei.chen@arm.com>
At the moment, on MMU systems, enable_mmu() returns to an
address in the 1:1 mapping, and each code path is then responsible
for switching to the runtime virtual mapping. remove_identity_mapping()
is then called on the boot CPU to remove the whole 1:1 mapping.

Since remove_identity_mapping() is not necessary on non-MMU systems,
and we want to avoid creating an empty function for non-MMU systems
while keeping a single code flow in arm64/head.S, we move the switch
to the runtime mapping and remove_identity_mapping() into the
MMU-enabling helpers on MMU systems.

As remove_identity_mapping() should only be called on the boot CPU,
this patch introduces enable_boot_cpu_mm() for the boot CPU and
enable_secondary_cpu_mm() for secondary CPUs.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
v6:
- Add Julien's Reviewed-by tag.
v5:
- Add missing "()" in title.
- Use more generic comment in enable_{boot,secondary}_cpu_mm() to
mention function will return to the vaddr requested by the caller.
- Move 'mov lr, x5' closer to 'b remove_identity_mapping'.
- Drop the 'b fail' for unreachable code in enable_boot_cpu_mm().
v4:
- Clarify remove_identity_mapping() is called on boot CPU and keep
the function/proc format consistent in commit msg.
- Drop inaccurate (due to the refactor) in-code comment.
- Rename enable_{boot,runtime}_mmu to enable_{boot,secondary}_cpu_mm.
- Reword the in-code comment on top of enable_{boot,secondary}_cpu_mm.
- Call "fail" for unreachable code.
v3:
- new patch
---
xen/arch/arm/arm64/head.S | 83 ++++++++++++++++++++++++++++++---------
1 file changed, 64 insertions(+), 19 deletions(-)
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 5029013a14..f25a41d36c 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -325,21 +325,11 @@ real_start_efi:
bl check_cpu_mode
bl cpu_init
- bl create_page_tables
- load_paddr x0, boot_pgtable
- bl enable_mmu
- /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
- ldr x0, =primary_switched
- br x0
+ ldr lr, =primary_switched
+ b enable_boot_cpu_mm
+
primary_switched:
- /*
- * The 1:1 map may clash with other parts of the Xen virtual memory
- * layout. As it is not used anymore, remove it completely to
- * avoid having to worry about replacing existing mapping
- * afterwards.
- */
- bl remove_identity_mapping
bl setup_fixmap
#ifdef CONFIG_EARLY_PRINTK
/* Use a virtual address to access the UART. */
@@ -384,13 +374,10 @@ GLOBAL(init_secondary)
#endif
bl check_cpu_mode
bl cpu_init
- load_paddr x0, init_ttbr
- ldr x0, [x0]
- bl enable_mmu
- /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
- ldr x0, =secondary_switched
- br x0
+ ldr lr, =secondary_switched
+ b enable_secondary_cpu_mm
+
secondary_switched:
#ifdef CONFIG_EARLY_PRINTK
/* Use a virtual address to access the UART. */
@@ -748,6 +735,64 @@ enable_mmu:
ret
ENDPROC(enable_mmu)
+/*
+ * Enable mm (turn on the data cache and the MMU) for secondary CPUs.
+ * The function will return to the virtual address provided in LR (e.g. the
+ * runtime mapping).
+ *
+ * Inputs:
+ * lr : Virtual address to return to.
+ *
+ * Clobbers x0 - x5
+ */
+enable_secondary_cpu_mm:
+ mov x5, lr
+
+ load_paddr x0, init_ttbr
+ ldr x0, [x0]
+
+ bl enable_mmu
+ mov lr, x5
+
+ /* Return to the virtual address requested by the caller. */
+ ret
+ENDPROC(enable_secondary_cpu_mm)
+
+/*
+ * Enable mm (turn on the data cache and the MMU) for the boot CPU.
+ * The function will return to the virtual address provided in LR (e.g. the
+ * runtime mapping).
+ *
+ * Inputs:
+ * lr : Virtual address to return to.
+ *
+ * Clobbers x0 - x5
+ */
+enable_boot_cpu_mm:
+ mov x5, lr
+
+ bl create_page_tables
+ load_paddr x0, boot_pgtable
+
+ bl enable_mmu
+
+ /*
+ * The MMU is turned on and we are in the 1:1 mapping. Switch
+ * to the runtime mapping.
+ */
+ ldr x0, =1f
+ br x0
+1:
+ mov lr, x5
+ /*
+ * The 1:1 map may clash with other parts of the Xen virtual memory
+ * layout. As it is not used anymore, remove it completely to avoid
+ * having to worry about replacing existing mapping afterwards.
+ * Function will return to the virtual address requested by the caller.
+ */
+ b remove_identity_mapping
+ENDPROC(enable_boot_cpu_mm)
+
/*
* Remove the 1:1 map from the page-tables. It is not easy to keep track
* where the 1:1 map was mapped, so we will look for the top-level entry
--
2.25.1
Hi Henry,
On 28/08/2023 02:32, Henry Wang wrote:
> From: Wei Chen <wei.chen@arm.com>
>
> At the moment, on MMU system, enable_mmu() will return to an
> address in the 1:1 mapping, then each path is responsible to
> switch to virtual runtime mapping. Then remove_identity_mapping()
> is called on the boot CPU to remove all 1:1 mapping.
>
> Since remove_identity_mapping() is not necessary on Non-MMU system,
> and we also avoid creating empty function for Non-MMU system, trying
> to keep only one codeflow in arm64/head.S, we move path switch and
> remove_identity_mapping() in enable_mmu() on MMU system.
>
> As the remove_identity_mapping should only be called for the boot
> CPU only, so we introduce enable_boot_cpu_mm() for boot CPU and
> enable_secondary_cpu_mm() for secondary CPUs in this patch.
>
> Signed-off-by: Wei Chen <wei.chen@arm.com>
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> Signed-off-by: Henry Wang <Henry.Wang@arm.com>
> Reviewed-by: Julien Grall <jgrall@amazon.com>
> ---
> v6:
> - Add Julien's Reviewed-by tag.
> v5:
> - Add missing "()" in title.
> - Use more generic comment in enable_{boot,secondary}_cpu_mm() to
> mention function will return to the vaddr requested by the caller.
> - Move 'mov lr, x5' closer to 'b remove_identity_mapping'.
> - Drop the 'b fail' for unreachable code in enable_boot_cpu_mm().
> v4:
> - Clarify remove_identity_mapping() is called on boot CPU and keep
> the function/proc format consistent in commit msg.
> - Drop inaccurate (due to the refactor) in-code comment.
> - Rename enable_{boot,runtime}_mmu to enable_{boot,secondary}_cpu_mm.
> - Reword the in-code comment on top of enable_{boot,secondary}_cpu_mm.
> - Call "fail" for unreachable code.
> v3:
> - new patch
> ---
> xen/arch/arm/arm64/head.S | 83 ++++++++++++++++++++++++++++++---------
> 1 file changed, 64 insertions(+), 19 deletions(-)
>
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 5029013a14..f25a41d36c 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -325,21 +325,11 @@ real_start_efi:
>
> bl check_cpu_mode
> bl cpu_init
> - bl create_page_tables
> - load_paddr x0, boot_pgtable
> - bl enable_mmu
>
> - /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
> - ldr x0, =primary_switched
> - br x0
> + ldr lr, =primary_switched
> + b enable_boot_cpu_mm
> +
> primary_switched:
> - /*
> - * The 1:1 map may clash with other parts of the Xen virtual memory
> - * layout. As it is not used anymore, remove it completely to
> - * avoid having to worry about replacing existing mapping
> - * afterwards.
> - */
> - bl remove_identity_mapping
> bl setup_fixmap
> #ifdef CONFIG_EARLY_PRINTK
> /* Use a virtual address to access the UART. */
> @@ -384,13 +374,10 @@ GLOBAL(init_secondary)
> #endif
> bl check_cpu_mode
> bl cpu_init
> - load_paddr x0, init_ttbr
> - ldr x0, [x0]
> - bl enable_mmu
>
> - /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
> - ldr x0, =secondary_switched
> - br x0
> + ldr lr, =secondary_switched
> + b enable_secondary_cpu_mm
> +
> secondary_switched:
> #ifdef CONFIG_EARLY_PRINTK
> /* Use a virtual address to access the UART. */
> @@ -748,6 +735,64 @@ enable_mmu:
> ret
> ENDPROC(enable_mmu)
>
> +/*
> + * Enable mm (turn on the data cache and the MMU) for secondary CPUs.
> + * The function will return to the virtual address provided in LR (e.g. the
> + * runtime mapping).
> + *
> + * Inputs:
> + * lr : Virtual address to return to.
> + *
> + * Clobbers x0 - x5
> + */
> +enable_secondary_cpu_mm:
> + mov x5, lr
> +
> + load_paddr x0, init_ttbr
> + ldr x0, [x0]
> +
> + bl enable_mmu
> + mov lr, x5
> +
> + /* Return to the virtual address requested by the caller. */
> + ret
> +ENDPROC(enable_secondary_cpu_mm)
> +
> +/*
> + * Enable mm (turn on the data cache and the MMU) for the boot CPU.
> + * The function will return to the virtual address provided in LR (e.g. the
> + * runtime mapping).
> + *
> + * Inputs:
> + * lr : Virtual address to return to.
> + *
> + * Clobbers x0 - x5
> + */
> +enable_boot_cpu_mm:
> + mov x5, lr
> +
> + bl create_page_tables
> + load_paddr x0, boot_pgtable
> +
> + bl enable_mmu
> +
> + /*
> + * The MMU is turned on and we are in the 1:1 mapping. Switch
> + * to the runtime mapping.
> + */
> + ldr x0, =1f
> + br x0
> +1:
> + mov lr, x5
> + /*
> + * The 1:1 map may clash with other parts of the Xen virtual memory
> + * layout. As it is not used anymore, remove it completely to avoid
> + * having to worry about replacing existing mapping afterwards.
> + * Function will return to the virtual address requested by the caller.
> + */
> + b remove_identity_mapping
ret
I guess you also need this ^^^ (similar to enable_secondary_cpu_mm()).
Otherwise the PC will not switch to the caller.
It will invoke remove_identity_mapping() again (which is redundant) and
then jump to the caller.
- Ayan
> +ENDPROC(enable_boot_cpu_mm)
> +
> /*
> * Remove the 1:1 map from the page-tables. It is not easy to keep track
> * where the 1:1 map was mapped, so we will look for the top-level entry
> --
> 2.25.1
>
>
Hi Ayan,
On 2023/9/7 17:44, Ayan Kumar Halder wrote:
> Hi Henry,
>
> On 28/08/2023 02:32, Henry Wang wrote:
>> From: Wei Chen <wei.chen@arm.com>
>>
>> At the moment, on MMU system, enable_mmu() will return to an
>> address in the 1:1 mapping, then each path is responsible to
>> switch to virtual runtime mapping. Then remove_identity_mapping()
>> is called on the boot CPU to remove all 1:1 mapping.
>>
>> Since remove_identity_mapping() is not necessary on Non-MMU system,
>> and we also avoid creating empty function for Non-MMU system, trying
>> to keep only one codeflow in arm64/head.S, we move path switch and
>> remove_identity_mapping() in enable_mmu() on MMU system.
>>
>> As the remove_identity_mapping should only be called for the boot
>> CPU only, so we introduce enable_boot_cpu_mm() for boot CPU and
>> enable_secondary_cpu_mm() for secondary CPUs in this patch.
>>
>> Signed-off-by: Wei Chen <wei.chen@arm.com>
>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>> Signed-off-by: Henry Wang <Henry.Wang@arm.com>
>> Reviewed-by: Julien Grall <jgrall@amazon.com>
>> ---
>> v6:
>> - Add Julien's Reviewed-by tag.
>> v5:
>> - Add missing "()" in title.
>> - Use more generic comment in enable_{boot,secondary}_cpu_mm() to
>> mention function will return to the vaddr requested by the caller.
>> - Move 'mov lr, x5' closer to 'b remove_identity_mapping'.
>> - Drop the 'b fail' for unreachable code in enable_boot_cpu_mm().
>> v4:
>> - Clarify remove_identity_mapping() is called on boot CPU and keep
>> the function/proc format consistent in commit msg.
>> - Drop inaccurate (due to the refactor) in-code comment.
>> - Rename enable_{boot,runtime}_mmu to enable_{boot,secondary}_cpu_mm.
>> - Reword the in-code comment on top of enable_{boot,secondary}_cpu_mm.
>> - Call "fail" for unreachable code.
>> v3:
>> - new patch
>> ---
>> xen/arch/arm/arm64/head.S | 83 ++++++++++++++++++++++++++++++---------
>> 1 file changed, 64 insertions(+), 19 deletions(-)
>>
>> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
>> index 5029013a14..f25a41d36c 100644
>> --- a/xen/arch/arm/arm64/head.S
>> +++ b/xen/arch/arm/arm64/head.S
>> @@ -325,21 +325,11 @@ real_start_efi:
>>
>> bl check_cpu_mode
>> bl cpu_init
>> - bl create_page_tables
>> - load_paddr x0, boot_pgtable
>> - bl enable_mmu
>>
>> - /* We are still in the 1:1 mapping. Jump to the runtime
>> Virtual Address. */
>> - ldr x0, =primary_switched
>> - br x0
>> + ldr lr, =primary_switched
>> + b enable_boot_cpu_mm
>> +
>> primary_switched:
>> - /*
>> - * The 1:1 map may clash with other parts of the Xen virtual
>> memory
>> - * layout. As it is not used anymore, remove it completely to
>> - * avoid having to worry about replacing existing mapping
>> - * afterwards.
>> - */
>> - bl remove_identity_mapping
>> bl setup_fixmap
>> #ifdef CONFIG_EARLY_PRINTK
>> /* Use a virtual address to access the UART. */
>> @@ -384,13 +374,10 @@ GLOBAL(init_secondary)
>> #endif
>> bl check_cpu_mode
>> bl cpu_init
>> - load_paddr x0, init_ttbr
>> - ldr x0, [x0]
>> - bl enable_mmu
>>
>> - /* We are still in the 1:1 mapping. Jump to the runtime
>> Virtual Address. */
>> - ldr x0, =secondary_switched
>> - br x0
>> + ldr lr, =secondary_switched
>> + b enable_secondary_cpu_mm
>> +
>> secondary_switched:
>> #ifdef CONFIG_EARLY_PRINTK
>> /* Use a virtual address to access the UART. */
>> @@ -748,6 +735,64 @@ enable_mmu:
>> ret
>> ENDPROC(enable_mmu)
>>
>> +/*
>> + * Enable mm (turn on the data cache and the MMU) for secondary CPUs.
>> + * The function will return to the virtual address provided in LR
>> (e.g. the
>> + * runtime mapping).
>> + *
>> + * Inputs:
>> + * lr : Virtual address to return to.
>> + *
>> + * Clobbers x0 - x5
>> + */
>> +enable_secondary_cpu_mm:
>> + mov x5, lr
>> +
>> + load_paddr x0, init_ttbr
>> + ldr x0, [x0]
>> +
>> + bl enable_mmu
>> + mov lr, x5
>> +
>> + /* Return to the virtual address requested by the caller. */
>> + ret
>> +ENDPROC(enable_secondary_cpu_mm)
>> +
>> +/*
>> + * Enable mm (turn on the data cache and the MMU) for the boot CPU.
>> + * The function will return to the virtual address provided in LR
>> (e.g. the
>> + * runtime mapping).
>> + *
>> + * Inputs:
>> + * lr : Virtual address to return to.
>> + *
>> + * Clobbers x0 - x5
>> + */
>> +enable_boot_cpu_mm:
>> + mov x5, lr
>> +
>> + bl create_page_tables
>> + load_paddr x0, boot_pgtable
>> +
>> + bl enable_mmu
>> +
>> + /*
>> + * The MMU is turned on and we are in the 1:1 mapping. Switch
>> + * to the runtime mapping.
>> + */
>> + ldr x0, =1f
>> + br x0
>> +1:
>> + mov lr, x5
>> + /*
>> + * The 1:1 map may clash with other parts of the Xen virtual
>> memory
>> + * layout. As it is not used anymore, remove it completely to
>> avoid
>> + * having to worry about replacing existing mapping afterwards.
>> + * Function will return to the virtual address requested by
>> the caller.
>> + */
>> + b remove_identity_mapping
>
> ret
>
> I guess you also need this ^^^ (Similar to enable_secondary_cpu_mm() ).
> Otherwise PC will not switch to the caller.
>
We have already discussed whether to add a ret here in enable_boot_cpu_mm
in a previous series.
Since the "ret" in remove_identity_mapping() will make us jump to the
"lr" we set before "b remove_identity_mapping", it is redundant to add
another "ret" here. We would never reach that "ret".
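To illustrate, here is a minimal sketch of the tail-call pattern (the
labels "helper", "do_work" and "last_step" are made up for illustration,
not the actual head.S symbols): since the final transfer is a plain "b"
rather than "bl", lr is left holding the caller-requested address, so
the "ret" at the end of the branched-to function returns there directly.

    helper:
            mov   x5, lr          /* save the caller-requested return address */
            bl    do_work         /* bl clobbers lr ... */
            mov   lr, x5          /* ... so restore it just before the tail call */
            b     last_step       /* plain branch: lr is left untouched */
            /* never reached: last_step's "ret" goes straight to lr */

    last_step:
            /* ... do the final work ... */
            ret                   /* returns to the original caller, not to helper */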
> It will again invoke remove_identity_mapping() (which is redundant) and
> then jump to the caller.
>
> - Ayan
>
>> +ENDPROC(enable_boot_cpu_mm)
>> +
>> /*
>> * Remove the 1:1 map from the page-tables. It is not easy to keep
>> track
>> * where the 1:1 map was mapped, so we will look for the top-level
>> entry
>> --
>> 2.25.1
>>
>>
On 07/09/2023 11:58, Penny Zheng wrote:
> Hi Ayan
Hi Penny,
>
> On 2023/9/7 17:44, Ayan Kumar Halder wrote:
>> Hi Henry,
>>
>> On 28/08/2023 02:32, Henry Wang wrote:
>>> From: Wei Chen <wei.chen@arm.com>
>>>
>>> At the moment, on MMU system, enable_mmu() will return to an
>>> address in the 1:1 mapping, then each path is responsible to
>>> switch to virtual runtime mapping. Then remove_identity_mapping()
>>> is called on the boot CPU to remove all 1:1 mapping.
>>>
>>> Since remove_identity_mapping() is not necessary on Non-MMU system,
>>> and we also avoid creating empty function for Non-MMU system, trying
>>> to keep only one codeflow in arm64/head.S, we move path switch and
>>> remove_identity_mapping() in enable_mmu() on MMU system.
>>>
>>> As the remove_identity_mapping should only be called for the boot
>>> CPU only, so we introduce enable_boot_cpu_mm() for boot CPU and
>>> enable_secondary_cpu_mm() for secondary CPUs in this patch.
>>>
>>> Signed-off-by: Wei Chen <wei.chen@arm.com>
>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>>> Signed-off-by: Henry Wang <Henry.Wang@arm.com>
>>> Reviewed-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>>> ---
>>> v6:
>>> - Add Julien's Reviewed-by tag.
>>> v5:
>>> - Add missing "()" in title.
>>> - Use more generic comment in enable_{boot,secondary}_cpu_mm() to
>>> mention function will return to the vaddr requested by the caller.
>>> - Move 'mov lr, x5' closer to 'b remove_identity_mapping'.
>>> - Drop the 'b fail' for unreachable code in enable_boot_cpu_mm().
>>> v4:
>>> - Clarify remove_identity_mapping() is called on boot CPU and keep
>>> the function/proc format consistent in commit msg.
>>> - Drop inaccurate (due to the refactor) in-code comment.
>>> - Rename enable_{boot,runtime}_mmu to enable_{boot,secondary}_cpu_mm.
>>> - Reword the in-code comment on top of enable_{boot,secondary}_cpu_mm.
>>> - Call "fail" for unreachable code.
>>> v3:
>>> - new patch
>>> ---
>>> xen/arch/arm/arm64/head.S | 83
>>> ++++++++++++++++++++++++++++++---------
>>> 1 file changed, 64 insertions(+), 19 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
>>> index 5029013a14..f25a41d36c 100644
>>> --- a/xen/arch/arm/arm64/head.S
>>> +++ b/xen/arch/arm/arm64/head.S
>>> @@ -325,21 +325,11 @@ real_start_efi:
>>>
>>> bl check_cpu_mode
>>> bl cpu_init
>>> - bl create_page_tables
>>> - load_paddr x0, boot_pgtable
>>> - bl enable_mmu
>>>
>>> - /* We are still in the 1:1 mapping. Jump to the runtime
>>> Virtual Address. */
>>> - ldr x0, =primary_switched
>>> - br x0
>>> + ldr lr, =primary_switched
>>> + b enable_boot_cpu_mm
>>> +
>>> primary_switched:
>>> - /*
>>> - * The 1:1 map may clash with other parts of the Xen
>>> virtual memory
>>> - * layout. As it is not used anymore, remove it completely to
>>> - * avoid having to worry about replacing existing mapping
>>> - * afterwards.
>>> - */
>>> - bl remove_identity_mapping
>>> bl setup_fixmap
>>> #ifdef CONFIG_EARLY_PRINTK
>>> /* Use a virtual address to access the UART. */
>>> @@ -384,13 +374,10 @@ GLOBAL(init_secondary)
>>> #endif
>>> bl check_cpu_mode
>>> bl cpu_init
>>> - load_paddr x0, init_ttbr
>>> - ldr x0, [x0]
>>> - bl enable_mmu
>>>
>>> - /* We are still in the 1:1 mapping. Jump to the runtime
>>> Virtual Address. */
>>> - ldr x0, =secondary_switched
>>> - br x0
>>> + ldr lr, =secondary_switched
>>> + b enable_secondary_cpu_mm
>>> +
>>> secondary_switched:
>>> #ifdef CONFIG_EARLY_PRINTK
>>> /* Use a virtual address to access the UART. */
>>> @@ -748,6 +735,64 @@ enable_mmu:
>>> ret
>>> ENDPROC(enable_mmu)
>>>
>>> +/*
>>> + * Enable mm (turn on the data cache and the MMU) for secondary CPUs.
>>> + * The function will return to the virtual address provided in LR
>>> (e.g. the
>>> + * runtime mapping).
>>> + *
>>> + * Inputs:
>>> + * lr : Virtual address to return to.
>>> + *
>>> + * Clobbers x0 - x5
>>> + */
>>> +enable_secondary_cpu_mm:
>>> + mov x5, lr
>>> +
>>> + load_paddr x0, init_ttbr
>>> + ldr x0, [x0]
>>> +
>>> + bl enable_mmu
>>> + mov lr, x5
>>> +
>>> + /* Return to the virtual address requested by the caller. */
>>> + ret
>>> +ENDPROC(enable_secondary_cpu_mm)
>>> +
>>> +/*
>>> + * Enable mm (turn on the data cache and the MMU) for the boot CPU.
>>> + * The function will return to the virtual address provided in LR
>>> (e.g. the
>>> + * runtime mapping).
>>> + *
>>> + * Inputs:
>>> + * lr : Virtual address to return to.
>>> + *
>>> + * Clobbers x0 - x5
>>> + */
>>> +enable_boot_cpu_mm:
>>> + mov x5, lr
>>> +
>>> + bl create_page_tables
>>> + load_paddr x0, boot_pgtable
>>> +
>>> + bl enable_mmu
>>> +
>>> + /*
>>> + * The MMU is turned on and we are in the 1:1 mapping. Switch
>>> + * to the runtime mapping.
>>> + */
>>> + ldr x0, =1f
>>> + br x0
>>> +1:
>>> + mov lr, x5
>>> + /*
>>> + * The 1:1 map may clash with other parts of the Xen
>>> virtual memory
>>> + * layout. As it is not used anymore, remove it completely
>>> to avoid
>>> + * having to worry about replacing existing mapping
>>> afterwards.
>>> + * Function will return to the virtual address requested by
>>> the caller.
>>> + */
>>> + b remove_identity_mapping
>>
>> ret
>>
>> I guess you also need this ^^^ (Similar to enable_secondary_cpu_mm()
>> ). Otherwise PC will not switch to the caller.
>>
>
> We have once talked about whether adding ret here in enable_boot_cpu_mm
> in previous serie.
>
> Since the "ret" in remove_identity_mapping will make user jump to the
> "lr" we defined before "b remove_identity_mapping", it is redundant to
> add another "ret" here. We will never reach the "ret" here
My bad, I see what you mean.
Unlike arm32/head.S enable_mmu(),
remove_identity_mapping() is not called inside enable_mmu() for arm64.
Sorry for the noise.
You can add my RB.
- Ayan
>
>> It will again invoke remove_identity_mapping() (which is redundant)
>> and then jump to the caller.
>>
>> - Ayan
>>
>>> +ENDPROC(enable_boot_cpu_mm)
>>> +
>>> /*
>>> * Remove the 1:1 map from the page-tables. It is not easy to keep
>>> track
>>> * where the 1:1 map was mapped, so we will look for the top-level
>>> entry
>>> --
>>> 2.25.1
>>>
>>>