[PATCH v2] xen/arm64: Zero the top 32 bits of gp registers on entry...

[PATCH v2] xen/arm64: Zero the top 32 bits of gp registers on entry...
Posted by Michal Orzel 2 years, 4 months ago
to hypervisor when switching from AArch32 state.

According to section D1.20.2 of the Arm ARM (DDI 0487A.j):
"If the general-purpose register was accessible from AArch32 state the
upper 32 bits either become zero, or hold the value that the same
architectural register held before any AArch32 execution.
The choice between these two options is IMPLEMENTATION DEFINED"

Currently Xen does not ensure that the top 32 bits are zeroed and this
needs to be fixed, because there are places in Xen where we assume that
the top 32 bits are zero for AArch32 guests. If they are not, Xen can
misinterpret what the guest requested. For example, hypercalls that
return an error encoded in a signed long (such as do_sched_op,
do_hvm_op, do_memory_op) would return -ENOSYS if the command passed
as the first argument was clobbered.

Create a macro clobber_gp_top_halves to clobber top 32 bits of gp
registers when hyp == 0 (guest mode) and compat == 1 (AArch32 mode).
Add a compile time check to ensure that save_x0_x1 == 1 if
compat == 1.
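
Note (illustrative aside, not part of the patch): a self-move of the W
form is enough because on AArch64 any write to a W register
zero-extends into the corresponding X register, e.g.:

        movz    x5, #0xdead, lsl #32    /* x5 = 0x0000dead00000000 */
        movk    x5, #0x1234             /* x5 = 0x0000dead00001234 */
        mov     w5, w5                  /* x5 = 0x0000000000001234 */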

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
 xen/arch/arm/arm64/entry.S | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
index fc3811ad0a..01f32324d0 100644
--- a/xen/arch/arm/arm64/entry.S
+++ b/xen/arch/arm/arm64/entry.S
@@ -102,6 +102,30 @@
         .endif
 
         .endm
+
+/*
+ * Clobber top 32 bits of gp registers when switching from AArch32
+ */
+        .macro clobber_gp_top_halves, compat, save_x0_x1
+
+        .if \compat == 1      /* AArch32 mode */
+
+        /*
+         * save_x0_x1 is equal to 0 only for guest_sync (compat == 0).
+         * Add a compile time check to avoid violating this rule.
+         */
+        .if \save_x0_x1 == 0
+        .error "save_x0_x1 is 0 but compat is 1"
+        .endif
+
+        .irp n,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29
+        mov w\n, w\n
+        .endr
+
+        .endif
+
+        .endm
+
 /*
  * Save state on entry to hypervisor, restore on exit
  *
@@ -111,6 +135,11 @@
  */
         .macro  entry, hyp, compat, save_x0_x1=1
         sub     sp, sp, #(UREGS_SPSR_el1 - UREGS_LR) /* CPSR, PC, SP, LR */
+
+        .if \hyp == 0         /* Guest mode */
+        clobber_gp_top_halves compat=\compat, save_x0_x1=\save_x0_x1
+        .endif
+
         push    x28, x29
         push    x26, x27
         push    x24, x25
-- 
2.29.0


Re: [PATCH v2] xen/arm64: Zero the top 32 bits of gp registers on entry...
Posted by Jan Beulich 2 years, 4 months ago
On 16.12.2021 10:21, Michal Orzel wrote:
> to hypervisor when switching from AArch32 state.
> 
> According to section D1.20.2 of the Arm ARM (DDI 0487A.j):
> "If the general-purpose register was accessible from AArch32 state the
> upper 32 bits either become zero, or hold the value that the same
> architectural register held before any AArch32 execution.
> The choice between these two options is IMPLEMENTATION DEFINED"
> 
> Currently Xen does not ensure that the top 32 bits are zeroed and this
> needs to be fixed, because there are places in Xen where we assume that
> the top 32 bits are zero for AArch32 guests. If they are not, Xen can
> misinterpret what the guest requested. For example, hypercalls that
> return an error encoded in a signed long (such as do_sched_op,
> do_hvm_op, do_memory_op) would return -ENOSYS if the command passed
> as the first argument was clobbered.
> 
> Create a macro clobber_gp_top_halves to clobber top 32 bits of gp
> registers when hyp == 0 (guest mode) and compat == 1 (AArch32 mode).
> Add a compile time check to ensure that save_x0_x1 == 1 if
> compat == 1.
> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
> ---
>  xen/arch/arm/arm64/entry.S | 29 +++++++++++++++++++++++++++++
>  1 file changed, 29 insertions(+)
> 
> diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
> index fc3811ad0a..01f32324d0 100644
> --- a/xen/arch/arm/arm64/entry.S
> +++ b/xen/arch/arm/arm64/entry.S
> @@ -102,6 +102,30 @@
>          .endif
>  
>          .endm
> +
> +/*
> + * Clobber top 32 bits of gp registers when switching from AArch32
> + */
> +        .macro clobber_gp_top_halves, compat, save_x0_x1
> +
> +        .if \compat == 1      /* AArch32 mode */
> +
> +        /*
> +         * save_x0_x1 is equal to 0 only for guest_sync (compat == 0).
> +         * Add a compile time check to avoid violating this rule.
> +         */
> +        .if \save_x0_x1 == 0
> +        .error "save_x0_x1 is 0 but compat is 1"
> +        .endif
> +
> +        .irp n,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29
> +        mov w\n, w\n
> +        .endr

What about x30 (aka lr)?

For values read from elr_el2, spsr_el2, and esr_el2 I guess the
hardware takes care of the high halves getting zeroed?

Jan


Re: [PATCH v2] xen/arm64: Zero the top 32 bits of gp registers on entry...
Posted by Michal Orzel 2 years, 4 months ago

On 16.12.2021 14:50, Jan Beulich wrote:
> On 16.12.2021 10:21, Michal Orzel wrote:
>> to hypervisor when switching from AArch32 state.
>>
>> According to section D1.20.2 of the Arm ARM (DDI 0487A.j):
>> "If the general-purpose register was accessible from AArch32 state the
>> upper 32 bits either become zero, or hold the value that the same
>> architectural register held before any AArch32 execution.
>> The choice between these two options is IMPLEMENTATION DEFINED"
>>
>> Currently Xen does not ensure that the top 32 bits are zeroed and this
>> needs to be fixed, because there are places in Xen where we assume that
>> the top 32 bits are zero for AArch32 guests. If they are not, Xen can
>> misinterpret what the guest requested. For example, hypercalls that
>> return an error encoded in a signed long (such as do_sched_op,
>> do_hvm_op, do_memory_op) would return -ENOSYS if the command passed
>> as the first argument was clobbered.
>>
>> Create a macro clobber_gp_top_halves to clobber top 32 bits of gp
>> registers when hyp == 0 (guest mode) and compat == 1 (AArch32 mode).
>> Add a compile time check to ensure that save_x0_x1 == 1 if
>> compat == 1.
>>
>> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
>> ---
>>  xen/arch/arm/arm64/entry.S | 29 +++++++++++++++++++++++++++++
>>  1 file changed, 29 insertions(+)
>>
>> diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
>> index fc3811ad0a..01f32324d0 100644
>> --- a/xen/arch/arm/arm64/entry.S
>> +++ b/xen/arch/arm/arm64/entry.S
>> @@ -102,6 +102,30 @@
>>          .endif
>>  
>>          .endm
>> +
>> +/*
>> + * Clobber top 32 bits of gp registers when switching from AArch32
>> + */
>> +        .macro clobber_gp_top_halves, compat, save_x0_x1
>> +
>> +        .if \compat == 1      /* AArch32 mode */
>> +
>> +        /*
>> +         * save_x0_x1 is equal to 0 only for guest_sync (compat == 0).
>> +         * Add a compile time check to avoid violating this rule.
>> +         */
>> +        .if \save_x0_x1 == 0
>> +        .error "save_x0_x1 is 0 but compat is 1"
>> +        .endif
>> +
>> +        .irp n,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29
>> +        mov w\n, w\n
>> +        .endr
> 
> What about x30 (aka lr)?
> 
Well, the docs talk about gp registers as a whole, so that includes lr.
However I do not see how clobbering lr would impact Xen.
lr is not used to pass any data.

> For values read from elr_el2, spsr_el2, and esr_el2 I guess the
> hardware takes care of the high halves getting zeroed?
> 
From the docs:
"On exception entry to an Exception level using AArch64 state from an Exception level using AArch32 state,
the AArch64 Stack Pointers and Exception Link Registers associated with an Exception level that are not
accessible during execution in AArch32 state at that Exception level, retain the state that they had before
the execution in AArch32 state."

So basically they retain their state. It is a different model from the one used for the gp registers.
> Jan
> 

Cheers,
Michal

Re: [PATCH v2] xen/arm64: Zero the top 32 bits of gp registers on entry...
Posted by Jan Beulich 2 years, 4 months ago
On 16.12.2021 15:26, Michal Orzel wrote:
> 
> 
> On 16.12.2021 14:50, Jan Beulich wrote:
>> On 16.12.2021 10:21, Michal Orzel wrote:
>>> to hypervisor when switching from AArch32 state.
>>>
>>> According to section D1.20.2 of the Arm ARM (DDI 0487A.j):
>>> "If the general-purpose register was accessible from AArch32 state the
>>> upper 32 bits either become zero, or hold the value that the same
>>> architectural register held before any AArch32 execution.
>>> The choice between these two options is IMPLEMENTATION DEFINED"
>>>
>>> Currently Xen does not ensure that the top 32 bits are zeroed and this
>>> needs to be fixed, because there are places in Xen where we assume that
>>> the top 32 bits are zero for AArch32 guests. If they are not, Xen can
>>> misinterpret what the guest requested. For example, hypercalls that
>>> return an error encoded in a signed long (such as do_sched_op,
>>> do_hvm_op, do_memory_op) would return -ENOSYS if the command passed
>>> as the first argument was clobbered.
>>>
>>> Create a macro clobber_gp_top_halves to clobber top 32 bits of gp
>>> registers when hyp == 0 (guest mode) and compat == 1 (AArch32 mode).
>>> Add a compile time check to ensure that save_x0_x1 == 1 if
>>> compat == 1.
>>>
>>> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
>>> ---
>>>  xen/arch/arm/arm64/entry.S | 29 +++++++++++++++++++++++++++++
>>>  1 file changed, 29 insertions(+)
>>>
>>> diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
>>> index fc3811ad0a..01f32324d0 100644
>>> --- a/xen/arch/arm/arm64/entry.S
>>> +++ b/xen/arch/arm/arm64/entry.S
>>> @@ -102,6 +102,30 @@
>>>          .endif
>>>  
>>>          .endm
>>> +
>>> +/*
>>> + * Clobber top 32 bits of gp registers when switching from AArch32
>>> + */
>>> +        .macro clobber_gp_top_halves, compat, save_x0_x1
>>> +
>>> +        .if \compat == 1      /* AArch32 mode */
>>> +
>>> +        /*
>>> +         * save_x0_x1 is equal to 0 only for guest_sync (compat == 0).
>>> +         * Add a compile time check to avoid violating this rule.
>>> +         */
>>> +        .if \save_x0_x1 == 0
>>> +        .error "save_x0_x1 is 0 but compat is 1"
>>> +        .endif
>>> +
>>> +        .irp n,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29
>>> +        mov w\n, w\n
>>> +        .endr
>>
>> What about x30 (aka lr)?
>>
> Well, the docs talk about gp registers as a whole, so that includes lr.
> However I do not see how clobbering lr would impact Xen.
> lr is not used to pass any data.

And all of x0...x29 are used (somewhere) to pass data?

Jan


Re: [PATCH v2] xen/arm64: Zero the top 32 bits of gp registers on entry...
Posted by Julien Grall 2 years, 4 months ago
Hi,

On 16/12/2021 14:26, Michal Orzel wrote:
> On 16.12.2021 14:50, Jan Beulich wrote:
>> On 16.12.2021 10:21, Michal Orzel wrote:
>>> to hypervisor when switching from AArch32 state.
>>>
>>> According to section D1.20.2 of the Arm ARM (DDI 0487A.j):
>>> "If the general-purpose register was accessible from AArch32 state the
>>> upper 32 bits either become zero, or hold the value that the same
>>> architectural register held before any AArch32 execution.
>>> The choice between these two options is IMPLEMENTATION DEFINED"
>>>
>>> Currently Xen does not ensure that the top 32 bits are zeroed and this
>>> needs to be fixed, because there are places in Xen where we assume that
>>> the top 32 bits are zero for AArch32 guests. If they are not, Xen can
>>> misinterpret what the guest requested. For example, hypercalls that
>>> return an error encoded in a signed long (such as do_sched_op,
>>> do_hvm_op, do_memory_op) would return -ENOSYS if the command passed
>>> as the first argument was clobbered.
>>>
>>> Create a macro clobber_gp_top_halves to clobber top 32 bits of gp
>>> registers when hyp == 0 (guest mode) and compat == 1 (AArch32 mode).
>>> Add a compile time check to ensure that save_x0_x1 == 1 if
>>> compat == 1.
>>>
>>> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
>>> ---
>>>   xen/arch/arm/arm64/entry.S | 29 +++++++++++++++++++++++++++++
>>>   1 file changed, 29 insertions(+)
>>>
>>> diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
>>> index fc3811ad0a..01f32324d0 100644
>>> --- a/xen/arch/arm/arm64/entry.S
>>> +++ b/xen/arch/arm/arm64/entry.S
>>> @@ -102,6 +102,30 @@
>>>           .endif
>>>   
>>>           .endm
>>> +
>>> +/*
>>> + * Clobber top 32 bits of gp registers when switching from AArch32
>>> + */
>>> +        .macro clobber_gp_top_halves, compat, save_x0_x1
>>> +
>>> +        .if \compat == 1      /* AArch32 mode */
>>> +
>>> +        /*
>>> +         * save_x0_x1 is equal to 0 only for guest_sync (compat == 0).
>>> +         * Add a compile time check to avoid violating this rule.
>>> +         */
>>> +        .if \save_x0_x1 == 0
>>> +        .error "save_x0_x1 is 0 but compat is 1"
>>> +        .endif
>>> +
>>> +        .irp n,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29
>>> +        mov w\n, w\n
>>> +        .endr
>>
>> What about x30 (aka lr)?
>>
>> Well, the docs talk about gp registers as a whole, so that includes lr.
> However I do not see how clobbering lr would impact Xen.

Xen may not be directly impacted. However this may be used by some 
userspace application (such as for VM introspection) and could be dumped 
on the console.

So I would cover all the GPRs to give a consistent view to everyone.

Cheers,

-- 
Julien Grall

Re: [PATCH v2] xen/arm64: Zero the top 32 bits of gp registers on entry...
Posted by Jan Beulich 2 years, 4 months ago
On 16.12.2021 15:55, Julien Grall wrote:
> Hi,
> 
> On 16/12/2021 14:26, Michal Orzel wrote:
>> On 16.12.2021 14:50, Jan Beulich wrote:
>>> On 16.12.2021 10:21, Michal Orzel wrote:
>>>> to hypervisor when switching from AArch32 state.
>>>>
>>>> According to section D1.20.2 of the Arm ARM (DDI 0487A.j):
>>>> "If the general-purpose register was accessible from AArch32 state the
>>>> upper 32 bits either become zero, or hold the value that the same
>>>> architectural register held before any AArch32 execution.
>>>> The choice between these two options is IMPLEMENTATION DEFINED"
>>>>
>>>> Currently Xen does not ensure that the top 32 bits are zeroed and this
>>>> needs to be fixed, because there are places in Xen where we assume that
>>>> the top 32 bits are zero for AArch32 guests. If they are not, Xen can
>>>> misinterpret what the guest requested. For example, hypercalls that
>>>> return an error encoded in a signed long (such as do_sched_op,
>>>> do_hvm_op, do_memory_op) would return -ENOSYS if the command passed
>>>> as the first argument was clobbered.
>>>>
>>>> Create a macro clobber_gp_top_halves to clobber top 32 bits of gp
>>>> registers when hyp == 0 (guest mode) and compat == 1 (AArch32 mode).
>>>> Add a compile time check to ensure that save_x0_x1 == 1 if
>>>> compat == 1.
>>>>
>>>> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
>>>> ---
>>>>   xen/arch/arm/arm64/entry.S | 29 +++++++++++++++++++++++++++++
>>>>   1 file changed, 29 insertions(+)
>>>>
>>>> diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
>>>> index fc3811ad0a..01f32324d0 100644
>>>> --- a/xen/arch/arm/arm64/entry.S
>>>> +++ b/xen/arch/arm/arm64/entry.S
>>>> @@ -102,6 +102,30 @@
>>>>           .endif
>>>>   
>>>>           .endm
>>>> +
>>>> +/*
>>>> + * Clobber top 32 bits of gp registers when switching from AArch32
>>>> + */
>>>> +        .macro clobber_gp_top_halves, compat, save_x0_x1
>>>> +
>>>> +        .if \compat == 1      /* AArch32 mode */
>>>> +
>>>> +        /*
>>>> +         * save_x0_x1 is equal to 0 only for guest_sync (compat == 0).
>>>> +         * Add a compile time check to avoid violating this rule.
>>>> +         */
>>>> +        .if \save_x0_x1 == 0
>>>> +        .error "save_x0_x1 is 0 but compat is 1"
>>>> +        .endif
>>>> +
>>>> +        .irp n,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29
>>>> +        mov w\n, w\n
>>>> +        .endr
>>>
>>> What about x30 (aka lr)?
>>>
>> Well the docs says about gp registers as a whole so including lr.
>> However I do not see how clobbering lr would impact Xen.
> 
> Xen may not be directly impacted. However this may be used by some 
> userspace application (such as for VM introspection) and could be dumped 
> on the console.
> 
> So I would cover all the GPRs to give a consistent view to everyone.

Consistency is how I came to think of x30.

Jan


Re: [PATCH v2] xen/arm64: Zero the top 32 bits of gp registers on entry...
Posted by Michal Orzel 2 years, 4 months ago

On 16.12.2021 15:55, Julien Grall wrote:
> Hi,
> 
> On 16/12/2021 14:26, Michal Orzel wrote:
>> On 16.12.2021 14:50, Jan Beulich wrote:
>>> On 16.12.2021 10:21, Michal Orzel wrote:
>>>> to hypervisor when switching from AArch32 state.
>>>>
>>>> According to section D1.20.2 of the Arm ARM (DDI 0487A.j):
>>>> "If the general-purpose register was accessible from AArch32 state the
>>>> upper 32 bits either become zero, or hold the value that the same
>>>> architectural register held before any AArch32 execution.
>>>> The choice between these two options is IMPLEMENTATION DEFINED"
>>>>
>>>> Currently Xen does not ensure that the top 32 bits are zeroed and this
>>>> needs to be fixed, because there are places in Xen where we assume that
>>>> the top 32 bits are zero for AArch32 guests. If they are not, Xen can
>>>> misinterpret what the guest requested. For example, hypercalls that
>>>> return an error encoded in a signed long (such as do_sched_op,
>>>> do_hvm_op, do_memory_op) would return -ENOSYS if the command passed
>>>> as the first argument was clobbered.
>>>>
>>>> Create a macro clobber_gp_top_halves to clobber top 32 bits of gp
>>>> registers when hyp == 0 (guest mode) and compat == 1 (AArch32 mode).
>>>> Add a compile time check to ensure that save_x0_x1 == 1 if
>>>> compat == 1.
>>>>
>>>> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
>>>> ---
>>>>   xen/arch/arm/arm64/entry.S | 29 +++++++++++++++++++++++++++++
>>>>   1 file changed, 29 insertions(+)
>>>>
>>>> diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
>>>> index fc3811ad0a..01f32324d0 100644
>>>> --- a/xen/arch/arm/arm64/entry.S
>>>> +++ b/xen/arch/arm/arm64/entry.S
>>>> @@ -102,6 +102,30 @@
>>>>           .endif
>>>>             .endm
>>>> +
>>>> +/*
>>>> + * Clobber top 32 bits of gp registers when switching from AArch32
>>>> + */
>>>> +        .macro clobber_gp_top_halves, compat, save_x0_x1
>>>> +
>>>> +        .if \compat == 1      /* AArch32 mode */
>>>> +
>>>> +        /*
>>>> +         * save_x0_x1 is equal to 0 only for guest_sync (compat == 0).
>>>> +         * Add a compile time check to avoid violating this rule.
>>>> +         */
>>>> +        .if \save_x0_x1 == 0
>>>> +        .error "save_x0_x1 is 0 but compat is 1"
>>>> +        .endif
>>>> +
>>>> +        .irp n,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29
>>>> +        mov w\n, w\n
>>>> +        .endr
>>>
>>> What about x30 (aka lr)?
>>>
>> Well, the docs talk about gp registers as a whole, so that includes lr.
>> However I do not see how clobbering lr would impact Xen.
> 
> Xen may not be directly impacted. However this may be used by some userspace application (such as for VM introspection) and could be dumped on the console.
> 
> So I would cover all the GPRs to give a consistent view to everyone.
> 
Ok I fully understand. I will send this change as v3.
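
For reference, a minimal sketch of how the macro body could be extended
to also cover x30 (an assumption about what v3 will look like, not the
posted change): w30 is the 32-bit view of x30, so the same self-move
idiom applies and it is enough to append 30 to the list:

        /* Sketch only: same loop as in v2, with 30 (lr) appended */
        .irp n,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30
        mov w\n, w\n
        .endr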

> Cheers,
> 
Cheers,
Michal