The current implementations of xen_{map, unmap}_table() expect
{map, unmap}_domain_page() to be usable. Those helpers are used to
map/unmap page tables while updating Xen page-tables.
Since commit 022387ee1a "xen/arm: mm: Don't open-code Xen PT update in
{set, clear}_fixmap()", setup_fixmap() will make use of the helpers
mentioned above. When booting Xen using GRUB, setup_fixmap() may be used
before map_domain_page() can be called. This will result in a data abort:
(XEN) Data Abort Trap. Syndrome=0x5
(XEN) CPU0: Unexpected Trap: Data Abort
[...]
(XEN) Xen call trace:
(XEN) [<000000000025ab6c>] mm.c#xen_pt_update+0x2b4/0x59c (PC)
(XEN) [<000000000025ab20>] mm.c#xen_pt_update+0x268/0x59c (LR)
(XEN) [<000000000025ae70>] set_fixmap+0x1c/0x2c
(XEN) [<00000000002a9c98>] copy_from_paddr+0x7c/0xdc
(XEN) [<00000000002a4ae0>] has_xsm_magic+0x18/0x34
(XEN) [<00000000002a5b5c>] bootfdt.c#early_scan_node+0x398/0x560
(XEN) [<00000000002a5de0>] device_tree_for_each_node+0xbc/0x144
(XEN) [<00000000002a5ed4>] boot_fdt_info+0x6c/0x260
(XEN) [<00000000002ac0d0>] start_xen+0x108/0xc74
(XEN) [<000000000020044c>] arm64/head.o#paging+0x60/0x88
During early boot, the page tables are either statically allocated in
the Xen binary or allocated via alloc_boot_pages().
Statically allocated page-tables will already be mapped as part of the
Xen binary, so we can easily find their virtual address.
For dynamically allocated page-tables, we need to rely on
map_domain_page() being functional.
For arm32, the call will be usable well before pages can be dynamically
allocated (see setup_pagetables()). For arm64, the call will be usable
after setup_xenheap_mappings().
In both cases, memory is given to the boot allocator afterwards. So we
can rely on map_domain_page() for mapping page tables allocated
dynamically.
The helpers xen_{map, unmap}_table() are now updated to take into
account the case where page-tables are part of the Xen binary.
Fixes: 022387ee1a ('xen/arm: mm: Don't open-code Xen PT update in {set, clear}_fixmap()')
Signed-off-by: Julien Grall <julien.grall@arm.com>
Release-acked-by: Juergen Gross <jgross@suse.com>
---
Changes in v2:
- Add RaB Juergen
- Rework the check to avoid truncation
---
xen/arch/arm/mm.c | 29 +++++++++++++++++++++++++++++
1 file changed, 29 insertions(+)
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index be23acfe26..a6637ce347 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -964,11 +964,40 @@ static int create_xen_table(lpae_t *entry)
static lpae_t *xen_map_table(mfn_t mfn)
{
+ /*
+ * We may need to map the page table before map_domain_page() is
+ * usable. The requirement here is that it must be usable as soon as
+ * page-tables are allocated dynamically via alloc_boot_pages().
+ *
+ * We need to do the check on physical address rather than virtual
+ * address to avoid truncation on Arm32. Therefore is_kernel() cannot
+ * be used.
+ */
+ if ( system_state == SYS_STATE_early_boot )
+ {
+ const mfn_t mstart = virt_to_mfn(_start);
+ const mfn_t mend = virt_to_mfn(_end);
+
+ if ( (mfn_x(mstart) <= mfn_x(mfn)) && (mfn_x(mfn) < mfn_x(mend)) )
+ {
+ vaddr_t offset = mfn_to_maddr(mfn) - mfn_to_maddr(mstart);
+
+ return (lpae_t *)(XEN_VIRT_START + offset);
+ }
+ }
+
return map_domain_page(mfn);
}
static void xen_unmap_table(const lpae_t *table)
{
+ /*
+ * During early boot, xen_map_table() will not use map_domain_page()
+ * for page-tables residing in Xen binary. So skip the unmap part.
+ */
+ if ( system_state == SYS_STATE_early_boot && is_kernel(table) )
+ return;
+
unmap_domain_page(table);
}
--
2.11.0
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
On Thu, 10 Oct 2019, Julien Grall wrote:
> [...]
> static lpae_t *xen_map_table(mfn_t mfn)
> {
> + /*
> + * We may require to map the page table before map_domain_page() is
> + * useable. The requirements here is it must be useable as soon as
> + * page-tables are allocated dynamically via alloc_boot_pages().
> + *
> + * We need to do the check on physical address rather than virtual
> + * address to avoid truncation on Arm32. Therefore is_kernel() cannot
> + * be used.
> + */
> + if ( system_state == SYS_STATE_early_boot )
> + {
> + const mfn_t mstart = virt_to_mfn(_start);
> + const mfn_t mend = virt_to_mfn(_end);
> +
> + if ( (mfn_x(mstart) <= mfn_x(mfn)) && (mfn_x(mfn) < mfn_x(mend)) )
> + {
The patch is good. Actually I noticed that we already have
is_xen_fixed_mfn(), which looks like it was made for this. I think we
should use it here, except that is_xen_fixed_mfn has:
(mfn_to_maddr(mfn) <= virt_to_maddr(&_end)))
I think it should actually be:
(mfn_to_maddr(mfn) < virt_to_maddr(&_end)))
Maybe we could fix that at the same time in one patch? However, it is
the same definition as on x86, so I don't know what is going on there.
> + vaddr_t offset = mfn_to_maddr(mfn) - mfn_to_maddr(mstart);
> + return (lpae_t *)(XEN_VIRT_START + offset);
I know this is safe in this case thanks to the `if' above, so there are
no risks. But in general it is not a great idea to have a hidden 64-bit
to 32-bit cast (it is also not MISRA-compliant). I would add an explicit
cast:
vaddr_t offset = (vaddr_t)(mfn_to_maddr(mfn) - mfn_to_maddr(mstart));
I am not going to insist on this though.
> + }
> + }
> +
> return map_domain_page(mfn);
> }
>
> static void xen_unmap_table(const lpae_t *table)
> {
> + /*
> + * During early boot, xen_map_table() will not use map_domain_page()
> + * for page-tables residing in Xen binary. So skip the unmap part.
> + */
> + if ( system_state == SYS_STATE_early_boot && is_kernel(table) )
> + return;
> +
> unmap_domain_page(table);
> }
>
Hi Stefano,
On 11/10/2019 01:27, Stefano Stabellini wrote:
> On Thu, 10 Oct 2019, Julien Grall wrote:
>> [...]
>> + vaddr_t offset = mfn_to_maddr(mfn) - mfn_to_maddr(mstart);
>> + return (lpae_t *)(XEN_VIRT_START + offset);
>
> I know this is safe in this case thanks to the `if' above, so there are
> no risks. But in general it is not a great idea to have a hidden 64-bit
> to 32-bit cast (also it is not MISRA compliant.) I would add an explicit
> cast:
> vaddr_t offset = (vaddr_t)(mfn_to_maddr(mfn) - mfn_to_maddr(mstart));
You would still need a comment for this case as it is unclear why the cast is
necessary/safe. So I feel a comment would be sufficient here:
/*
* The address belongs to Xen binary and will always fit in vaddr_t.
* It is therefore fine to demote the type.
*/
Cheers,
--
Julien Grall
(+ Andrew and Jan)
Hi,
On 11/10/2019 01:27, Stefano Stabellini wrote:
> On Thu, 10 Oct 2019, Julien Grall wrote:
>> [...]
>> + if ( system_state == SYS_STATE_early_boot )
>> + {
>> + const mfn_t mstart = virt_to_mfn(_start);
>> + const mfn_t mend = virt_to_mfn(_end);
>> +
>> + if ( (mfn_x(mstart) <= mfn_x(mfn)) && (mfn_x(mfn) < mfn_x(mend)) )
>> + {
>
> The patch is good. Actually I noticed that we already have
> is_xen_fixed_mfn(), looks like it is made for this. I think we should
> use it here, except that is_xen_fixed_mfn has:
Thanks, I thought we had one but I couldn't remember the name :(.
>
> (mfn_to_maddr(mfn) <= virt_to_maddr(&_end)))
>
> I think it should actually be:
>
> (mfn_to_maddr(mfn) < virt_to_maddr(&_end)))
>
> Maybe we could fix that at the same time in one patch? However, it is
> the same definition as on x86, so I don't know what is going on there.
For Arm we should definitely use < and not <=.
I am assuming this is the same for x86. Andrew? Jan?
Cheers,
--
Julien Grall