From: Julien Grall <jgrall@amazon.com>
At the moment, we are mapping the whole area reserved for Xen
(i.e. 2MB) even if the binary is smaller. We don't know exactly what's
after Xen, so it is not a good idea to map more than necessary, for a
couple of reasons:
 * We would need to use break-before-make if the extra PTE needs to
   be updated to point to another region
 * The extra area mapped may be mapped again by Xen with different
   memory attributes. This would result in an attribute mismatch.
Therefore, rework the logic in create_page_tables() to map only what's
necessary. To simplify the logic, we also want to make sure _end
is page-aligned. So align the symbol in the linker script and add an
assert to catch any change.
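
For reference, the loop bound computed by the new assembly is just the
image size converted to third-level slots. A minimal C sketch of the
same computation (illustrative only, not part of the patch; the
declarations and the xen_mapping_bound() name are made up for the
example):

    #include <stdint.h>

    #define PAGE_SHIFT 12 /* 4KB pages on arm32 */

    /* Linker symbols delimiting the Xen image. */
    extern char _start[], _end[];

    /*
     * Byte offset, within boot_third, of the first slot not used by
     * Xen, i.e. the value the assembly leaves in r0 as the loop bound.
     */
    uintptr_t xen_mapping_bound(void)
    {
        uintptr_t size  = (uintptr_t)_end - (uintptr_t)_start;
        uintptr_t pages = size >> PAGE_SHIFT; /* exact: _end is page-aligned */
        return pages << 3;                    /* 8 bytes per LPAE entry */
    }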
Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
---
Changes in v2:
- Fix typo
- Add Michal's reviewed-by tag
---
xen/arch/arm/arm32/head.S | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 1ad981b67460..5e3692eb3abf 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -459,6 +459,19 @@ create_page_tables:
create_table_entry boot_pgtable, boot_second, r0, 1
create_table_entry boot_second, boot_third, r0, 2
+ /*
+ * Find the size of Xen in pages and multiply by the size of a
+ * PTE. This will then be compared in the mapping loop below.
+ *
+ * Note the multiplication is just to avoid using an extra
+ * register/instruction per iteration.
+ */
+ mov_w r0, _start /* r0 := vaddr(_start) */
+ mov_w r1, _end /* r1 := vaddr(_end) */
+ sub r0, r1, r0 /* r0 := effective size of Xen */
+ lsr r0, r0, #PAGE_SHIFT /* r0 := Number of pages for Xen */
+ lsl r0, r0, #3 /* r0 := Number of pages * PTE size */
+
/* Setup boot_third: */
adr_l r4, boot_third
@@ -473,7 +486,7 @@ create_page_tables:
1: strd r2, r3, [r4, r1] /* Map vaddr(start) */
add r2, r2, #PAGE_SIZE /* Next page */
add r1, r1, #8 /* Next slot */
- cmp r1, #(XEN_PT_LPAE_ENTRIES<<3) /* 512*8-byte entries per page */
+ cmp r1, r0 /* Loop until we map all of Xen */
blo 1b
/*
--
2.40.1
Hi Julien,

> -----Original Message-----
> Subject: [v2 2/4] xen/arm32: head: Don't map too much in boot_third
>
> From: Julien Grall <jgrall@amazon.com>
>
> At the moment, we are mapping the whole area reserved for Xen
> (i.e. 2MB) even if the binary is smaller. We don't know exactly what's
> after Xen, so it is not a good idea to map more than necessary, for a
> couple of reasons:
>  * We would need to use break-before-make if the extra PTE needs to
>    be updated to point to another region
>  * The extra area mapped may be mapped again by Xen with different
>    memory attributes. This would result in an attribute mismatch.
>
> Therefore, rework the logic in create_page_tables() to map only what's
> necessary. To simplify the logic, we also want to make sure _end
> is page-aligned. So align the symbol in the linker script and add an
> assert to catch any change.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry
Hi Julien,

> On 29 Jun 2023, at 22:11, Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> At the moment, we are mapping the whole area reserved for Xen
> (i.e. 2MB) even if the binary is smaller. We don't know exactly what's
> after Xen, so it is not a good idea to map more than necessary, for a
> couple of reasons:
>  * We would need to use break-before-make if the extra PTE needs to
>    be updated to point to another region
>  * The extra area mapped may be mapped again by Xen with different
>    memory attributes. This would result in an attribute mismatch.
>
> Therefore, rework the logic in create_page_tables() to map only what's
> necessary. To simplify the logic, we also want to make sure _end
> is page-aligned. So align the symbol in the linker script and add an
> assert to catch any change.

The last 2 sentences actually belong to patch 1 and have been copied
here. Please remove them on commit, as the alignment of _end is not in
this patch.

>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

With commit message fixed on commit:

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand
Hi Bertrand,

On 04/07/2023 15:12, Bertrand Marquis wrote:
>> On 29 Jun 2023, at 22:11, Julien Grall <julien@xen.org> wrote:
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> At the moment, we are mapping the whole area reserved for Xen
>> (i.e. 2MB) even if the binary is smaller. We don't know exactly what's
>> after Xen, so it is not a good idea to map more than necessary, for a
>> couple of reasons:
>>  * We would need to use break-before-make if the extra PTE needs to
>>    be updated to point to another region
>>  * The extra area mapped may be mapped again by Xen with different
>>    memory attributes. This would result in an attribute mismatch.
>>
>> Therefore, rework the logic in create_page_tables() to map only what's
>> necessary. To simplify the logic, we also want to make sure _end
>> is page-aligned. So align the symbol in the linker script and add an
>> assert to catch any change.
>
> The last 2 sentences actually belong to patch 1 and have been copied
> here. Please remove them on commit, as the alignment of _end is not in
> this patch.

Good point. I have removed them.

>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>> Reviewed-by: Michal Orzel <michal.orzel@amd.com>
>>
> With commit message fixed on commit:
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Thanks!

Cheers,

-- 
Julien Grall