From: Wei Chen <wei.chen@arm.com>
At the moment, on MMU systems, enable_mmu() returns to an address in
the 1:1 mapping, and each path is then responsible for switching to
the runtime virtual mapping. remove_identity_mapping() is then called
on the boot CPU to remove all 1:1 mappings.

Since remove_identity_mapping() is not necessary on non-MMU systems,
and to avoid creating an empty stub for them while keeping a single
code flow in arm64/head.S, move the mapping switch and the call to
remove_identity_mapping() next to enable_mmu() on MMU systems.

As remove_identity_mapping() should be called on the boot CPU only,
introduce enable_boot_cpu_mm() for the boot CPU and
enable_secondary_cpu_mm() for secondary CPUs in this patch.
Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v5:
- Add missing "()" in title.
- Use more generic comment in enable_{boot,secondary}_cpu_mm() to
mention function will return to the vaddr requested by the caller.
- Move 'mov lr, x5' closer to 'b remove_identity_mapping'.
- Drop the 'b fail' for unreachable code in enable_boot_cpu_mm().
v4:
- Clarify remove_identity_mapping() is called on boot CPU and keep
the function/proc format consistent in commit msg.
- Drop inaccurate (due to the refactor) in-code comment.
- Rename enable_{boot,runtime}_mmu to enable_{boot,secondary}_cpu_mm.
- Reword the in-code comment on top of enable_{boot,secondary}_cpu_mm.
- Call "fail" for unreachable code.
v3:
- new patch
---
xen/arch/arm/arm64/head.S | 83 ++++++++++++++++++++++++++++++---------
1 file changed, 64 insertions(+), 19 deletions(-)
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 8bca9afa27..2bc2a03565 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -325,21 +325,11 @@ real_start_efi:
         bl    check_cpu_mode
         bl    cpu_init
 
-        bl    create_page_tables
-        load_paddr x0, boot_pgtable
-        bl    enable_mmu
-        /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
-        ldr   x0, =primary_switched
-        br    x0
+        ldr   lr, =primary_switched
+        b     enable_boot_cpu_mm
+
 primary_switched:
-        /*
-         * The 1:1 map may clash with other parts of the Xen virtual memory
-         * layout. As it is not used anymore, remove it completely to
-         * avoid having to worry about replacing existing mapping
-         * afterwards.
-         */
-        bl    remove_identity_mapping
         bl    setup_fixmap
 
 #ifdef CONFIG_EARLY_PRINTK
         /* Use a virtual address to access the UART. */
@@ -384,13 +374,10 @@ GLOBAL(init_secondary)
 #endif
         bl    check_cpu_mode
         bl    cpu_init
 
-        load_paddr x0, init_ttbr
-        ldr   x0, [x0]
-        bl    enable_mmu
-        /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
-        ldr   x0, =secondary_switched
-        br    x0
+        ldr   lr, =secondary_switched
+        b     enable_secondary_cpu_mm
+
 secondary_switched:
 #ifdef CONFIG_EARLY_PRINTK
         /* Use a virtual address to access the UART. */
@@ -748,6 +735,64 @@ enable_mmu:
         ret
 ENDPROC(enable_mmu)
 
+/*
+ * Enable mm (turn on the data cache and the MMU) for secondary CPUs.
+ * The function will return to the virtual address provided in LR (e.g. the
+ * runtime mapping).
+ *
+ * Inputs:
+ * lr : Virtual address to return to.
+ *
+ * Clobbers x0 - x5
+ */
+enable_secondary_cpu_mm:
+        mov   x5, lr
+
+        load_paddr x0, init_ttbr
+        ldr   x0, [x0]
+
+        bl    enable_mmu
+        mov   lr, x5
+
+        /* Return to the virtual address requested by the caller. */
+        ret
+ENDPROC(enable_secondary_cpu_mm)
+
+/*
+ * Enable mm (turn on the data cache and the MMU) for the boot CPU.
+ * The function will return to the virtual address provided in LR (e.g. the
+ * runtime mapping).
+ *
+ * Inputs:
+ * lr : Virtual address to return to.
+ *
+ * Clobbers x0 - x5
+ */
+enable_boot_cpu_mm:
+        mov   x5, lr
+
+        bl    create_page_tables
+        load_paddr x0, boot_pgtable
+
+        bl    enable_mmu
+
+        /*
+         * The MMU is turned on and we are in the 1:1 mapping. Switch
+         * to the runtime mapping.
+         */
+        ldr   x0, =1f
+        br    x0
+1:
+        mov   lr, x5
+        /*
+         * The 1:1 map may clash with other parts of the Xen virtual memory
+         * layout. As it is not used anymore, remove it completely to avoid
+         * having to worry about replacing existing mapping afterwards.
+         * Function will return to the virtual address requested by the caller.
+         */
+        b     remove_identity_mapping
+ENDPROC(enable_boot_cpu_mm)
+
 /*
  * Remove the 1:1 map from the page-tables. It is not easy to keep track
  * where the 1:1 map was mapped, so we will look for the top-level entry
--
2.25.1
Hi Henry,

On 14/08/2023 05:25, Henry Wang wrote:
> From: Wei Chen <wei.chen@arm.com>
>
> At the moment, on MMU system, enable_mmu() will return to an
> address in the 1:1 mapping, then each path is responsible to
> switch to virtual runtime mapping. Then remove_identity_mapping()
> is called on the boot CPU to remove all 1:1 mapping.
>
> Since remove_identity_mapping() is not necessary on Non-MMU system,
> and we also avoid creating empty function for Non-MMU system, trying
> to keep only one codeflow in arm64/head.S, we move path switch and
> remove_identity_mapping() in enable_mmu() on MMU system.
>
> As the remove_identity_mapping should only be called for the boot
> CPU only, so we introduce enable_boot_cpu_mm() for boot CPU and
> enable_secondary_cpu_mm() for secondary CPUs in this patch.
>
> Signed-off-by: Wei Chen <wei.chen@arm.com>
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> Signed-off-by: Henry Wang <Henry.Wang@arm.com>

One remark below. With or without it addressed:

Reviewed-by: Julien Grall <jgrall@amazon.com>

[...]

> +/*
> + * Enable mm (turn on the data cache and the MMU) for secondary CPUs.
> + * The function will return to the virtual address provided in LR (e.g. the
> + * runtime mapping).
> + *
> + * Inputs:
> + * lr : Virtual address to return to.
> + *
> + * Clobbers x0 - x5
> + */
> +enable_secondary_cpu_mm:
> +        mov   x5, lr
> +
> +        load_paddr x0, init_ttbr
> +        ldr   x0, [x0]
> +
> +        bl    enable_mmu
> +        mov   lr, x5
> +
> +        /* Return to the virtual address requested by the caller. */
> +        ret
> +ENDPROC(enable_secondary_cpu_mm)

NIT: enable_mmu() could directly return to the virtual address. This
would reduce the function to:

        load_paddr x0, init_ttbr
        ldr   x0, [x0]

        /* Return to the virtual address requested by the caller. */
        b     enable_mmu

> +
> +/*
> + * Enable mm (turn on the data cache and the MMU) for the boot CPU.
> + * The function will return to the virtual address provided in LR (e.g. the
> + * runtime mapping).
> + *
> + * Inputs:
> + * lr : Virtual address to return to.
> + *
> + * Clobbers x0 - x5
> + */
> +enable_boot_cpu_mm:
> +        mov   x5, lr
> +
> +        bl    create_page_tables
> +        load_paddr x0, boot_pgtable
> +
> +        bl    enable_mmu
> +
> +        /*
> +         * The MMU is turned on and we are in the 1:1 mapping. Switch
> +         * to the runtime mapping.
> +         */
> +        ldr   x0, =1f
> +        br    x0
> +1:
> +        mov   lr, x5
> +        /*
> +         * The 1:1 map may clash with other parts of the Xen virtual memory
> +         * layout. As it is not used anymore, remove it completely to avoid
> +         * having to worry about replacing existing mapping afterwards.
> +         * Function will return to the virtual address requested by the caller.
> +         */
> +        b     remove_identity_mapping
> +ENDPROC(enable_boot_cpu_mm)
> +
>  /*
>   * Remove the 1:1 map from the page-tables. It is not easy to keep track
>   * where the 1:1 map was mapped, so we will look for the top-level entry

Cheers,

-- 
Julien Grall
Hi Julien,

> On Aug 21, 2023, at 16:33, Julien Grall <julien@xen.org> wrote:
>
> Hi Henry,
>
> On 14/08/2023 05:25, Henry Wang wrote:
>> From: Wei Chen <wei.chen@arm.com>
>>
>> At the moment, on MMU system, enable_mmu() will return to an
>> address in the 1:1 mapping, then each path is responsible to
>> switch to virtual runtime mapping. Then remove_identity_mapping()
>> is called on the boot CPU to remove all 1:1 mapping.
>>
>> Since remove_identity_mapping() is not necessary on Non-MMU system,
>> and we also avoid creating empty function for Non-MMU system, trying
>> to keep only one codeflow in arm64/head.S, we move path switch and
>> remove_identity_mapping() in enable_mmu() on MMU system.
>>
>> As the remove_identity_mapping should only be called for the boot
>> CPU only, so we introduce enable_boot_cpu_mm() for boot CPU and
>> enable_secondary_cpu_mm() for secondary CPUs in this patch.
>>
>> Signed-off-by: Wei Chen <wei.chen@arm.com>
>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>> Signed-off-by: Henry Wang <Henry.Wang@arm.com>
>
> One remark below. With or without it addressed:
>
> Reviewed-by: Julien Grall <jgrall@amazon.com>

Thanks, I will take this tag with ...

> [...]
>
>> +/*
>> + * Enable mm (turn on the data cache and the MMU) for secondary CPUs.
>> + * The function will return to the virtual address provided in LR (e.g. the
>> + * runtime mapping).
>> + *
>> + * Inputs:
>> + * lr : Virtual address to return to.
>> + *
>> + * Clobbers x0 - x5
>> + */
>> +enable_secondary_cpu_mm:
>> +        mov   x5, lr
>> +
>> +        load_paddr x0, init_ttbr
>> +        ldr   x0, [x0]
>> +
>> +        bl    enable_mmu
>> +        mov   lr, x5
>> +
>> +        /* Return to the virtual address requested by the caller. */
>> +        ret
>> +ENDPROC(enable_secondary_cpu_mm)
>
> NIT: enable_mmu() could directly return to the virtual address. This
> would reduce the function to:
>
>         load_paddr x0, init_ttbr
>         ldr   x0, [x0]
>
>         /* Return to the virtual address requested by the caller. */
>         b     enable_mmu

…this fixed in v6 since I think there is likely to be a v6, and I think
I also need to address the commit message nit pointed out by Jan in the
last patch.

Kind regards,
Henry