According to the architecture, SET PREFIX must try to access the new
prefix area and recognize an addressing exception if the area is not
accessible.
For qemu this check prevents a crash in cpu_map_lowcore after an
inaccessible prefix area has been set.
Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
---
Is there a stricter check to see if the memory is accessible?
target/s390x/tcg/misc_helper.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/target/s390x/tcg/misc_helper.c b/target/s390x/tcg/misc_helper.c
index aab9c47747..c8447b36fc 100644
--- a/target/s390x/tcg/misc_helper.c
+++ b/target/s390x/tcg/misc_helper.c
@@ -158,6 +158,10 @@ void HELPER(spx)(CPUS390XState *env, uint64_t a1)
     if (prefix == old_prefix) {
         return;
     }
+    if (!mmu_absolute_addr_valid(prefix, true) ||
+        !mmu_absolute_addr_valid(prefix + TARGET_PAGE_SIZE, true)) {
+        tcg_s390_program_interrupt(env, PGM_ADDRESSING, GETPC());
+    }
 
     env->psa = prefix;
     HELPER_LOG("prefix: %#x\n", prefix);
base-commit: 3a821c52e1a30ecd9a436f2c67cc66b5628c829f
--
2.36.1
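
[Editorial note, not part of the thread] The hunk above composes two existing s390x TCG helpers: mmu_absolute_addr_valid() to probe whether an absolute address is backed by guest memory, and tcg_s390_program_interrupt() to raise the addressing exception before the new prefix is committed to env->psa. The toy model below only illustrates that check-then-commit ordering; is_backed(), set_prefix() and the 1 MiB guest size are hypothetical stand-ins, not QEMU code.

/*
 * Toy model: is_backed() stands in for mmu_absolute_addr_valid() and the
 * printed message for the PGM_ADDRESSING program interrupt; the 1 MiB
 * guest size is arbitrary.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE  0x1000u        /* 4 KiB pages, as in the patch */
#define GUEST_RAM  (1u << 20)     /* example: 1 MiB of backed guest memory */

static bool is_backed(uint32_t addr)
{
    return addr + PAGE_SIZE <= GUEST_RAM;
}

static bool set_prefix(uint32_t *psa, uint32_t new_prefix)
{
    /* Validate both 4 KiB pages of the 8 KiB prefix area first ... */
    if (!is_backed(new_prefix) || !is_backed(new_prefix + PAGE_SIZE)) {
        fprintf(stderr, "PGM_ADDRESSING for prefix %#x\n", new_prefix);
        return false;   /* the real helper raises and never returns */
    }
    /* ... and only then commit it, so a rejected SPX leaves psa intact. */
    *psa = new_prefix;
    return true;
}

int main(void)
{
    uint32_t psa = 0;
    set_prefix(&psa, 0x2000);      /* accepted: both pages are backed */
    set_prefix(&psa, GUEST_RAM);   /* rejected: beyond backed memory */
    printf("psa = %#x\n", psa);    /* still 0x2000 */
    return 0;
}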
On 27.06.22 15:12, Janis Schoetterl-Glausch wrote:
> According to the architecture, SET PREFIX must try to access the new
> prefix area and recognize an addressing exception if the area is not
> accessible.
> For qemu this check prevents a crash in cpu_map_lowcore after an
> inaccessible prefix area has been set.
I don't think that's possible. Our memory increments are 1 MiB and one
would have to cross a 1 MiB range with the second page to trigger that.
IIRC that's impossible with SPX address alignment requirements?
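
[Editorial sketch, not part of the thread] The argument above can be spelled out: assuming the SPX helper derives an 8 KiB-aligned prefix from its operand (QEMU masks the operand with 0x7fffe000, taken here as an assumption) and guest memory is added in whole 1 MiB increments, the two 4 KiB prefix pages can never straddle a 1 MiB boundary, so a valid first page implies a valid second page. The brute-force check below confirms this for every possible operand under those two assumptions.

/*
 * Editorial sketch: an 8 KiB-aligned prefix and its second 4 KiB page
 * always fall into the same 1 MiB memory increment.
 */
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE    0x1000u      /* 4 KiB */
#define PREFIX_MASK  0x7fffe000u  /* 8 KiB-aligned prefix (assumed mask) */
#define INCREMENT    0x100000u    /* 1 MiB memory increment */

int main(void)
{
    for (uint64_t a1 = 0; a1 <= UINT32_MAX; a1 += PAGE_SIZE) {
        uint32_t prefix = (uint32_t)a1 & PREFIX_MASK;
        /* Both prefix pages lie in the same 1 MiB increment. */
        assert(prefix / INCREMENT == (prefix + PAGE_SIZE) / INCREMENT);
    }
    return 0;
}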
>
> Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
> ---
>
>
> Is there a stricter check to see if the memory is accessible?
>
>
> target/s390x/tcg/misc_helper.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/target/s390x/tcg/misc_helper.c b/target/s390x/tcg/misc_helper.c
> index aab9c47747..c8447b36fc 100644
> --- a/target/s390x/tcg/misc_helper.c
> +++ b/target/s390x/tcg/misc_helper.c
> @@ -158,6 +158,10 @@ void HELPER(spx)(CPUS390XState *env, uint64_t a1)
>      if (prefix == old_prefix) {
>          return;
>      }
> +    if (!mmu_absolute_addr_valid(prefix, true) ||
> +        !mmu_absolute_addr_valid(prefix + TARGET_PAGE_SIZE, true)) {
> +        tcg_s390_program_interrupt(env, PGM_ADDRESSING, GETPC());
> +    }
>
>      env->psa = prefix;
>      HELPER_LOG("prefix: %#x\n", prefix);
>
> base-commit: 3a821c52e1a30ecd9a436f2c67cc66b5628c829f
--
Thanks,
David / dhildenb
On 6/27/22 18:27, David Hildenbrand wrote:
> On 27.06.22 15:12, Janis Schoetterl-Glausch wrote:
>> According to the architecture, SET PREFIX must try to access the new
>> prefix area and recognize an addressing exception if the area is not
>> accessible.
>> For qemu this check prevents a crash in cpu_map_lowcore after an
>> inaccessible prefix area has been set.
>
> I don't think that's possible. Our memory increments are 1 MiB and one
> would have to cross a 1 MiB range with the second page to trigger that.
> IIRC that's impossible with SPX address alignment requirements?
>
Are you saying that checking the first page is sufficient?
I'm not sure that this is the case for the architecture in general,
but I guess it is true for tcg. Do you want me to remove the second check?
>
>>
>> Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
>> ---
>>
>>
>> Is there a stricter check to see if the memory is accessible?
>>
>>
>> target/s390x/tcg/misc_helper.c | 4 ++++
>> 1 file changed, 4 insertions(+)
>>
>> diff --git a/target/s390x/tcg/misc_helper.c b/target/s390x/tcg/misc_helper.c
>> index aab9c47747..c8447b36fc 100644
>> --- a/target/s390x/tcg/misc_helper.c
>> +++ b/target/s390x/tcg/misc_helper.c
>> @@ -158,6 +158,10 @@ void HELPER(spx)(CPUS390XState *env, uint64_t a1)
>>      if (prefix == old_prefix) {
>>          return;
>>      }
>> +    if (!mmu_absolute_addr_valid(prefix, true) ||
>> +        !mmu_absolute_addr_valid(prefix + TARGET_PAGE_SIZE, true)) {
>> +        tcg_s390_program_interrupt(env, PGM_ADDRESSING, GETPC());
>> +    }
>>
>>      env->psa = prefix;
>>      HELPER_LOG("prefix: %#x\n", prefix);
>>
>> base-commit: 3a821c52e1a30ecd9a436f2c67cc66b5628c829f
>
>
On 27.06.22 19:06, Janis Schoetterl-Glausch wrote:
> On 6/27/22 18:27, David Hildenbrand wrote:
>> On 27.06.22 15:12, Janis Schoetterl-Glausch wrote:
>>> According to the architecture, SET PREFIX must try to access the new
>>> prefix area and recognize an addressing exception if the area is not
>>> accessible.
>>> For qemu this check prevents a crash in cpu_map_lowcore after an
>>> inaccessible prefix area has been set.
>>
>> I don't think that's possible. Our memory increments are 1 MiB and one
>> would have to cross a 1 MiB range with the second page to trigger that.
>> IIRC that's impossible with SPX address alignment requirements?
>>
> Are you saying that checking the first page is sufficient?

Yes, unless I'm wrong. :)

/* Due to alignment and QEMU memory sizes, it's sufficient to check the
 * first page only. */

> I'm not sure that this is the case for the architecture in general,
> but I guess it is true for tcg. Do you want me to remove the second check?

Should have been clearer (and I somehow thought we'd be checking the first
page already :))

Reviewed-by: David Hildenbrand <david@redhat.com>

--
Thanks,
David / dhildenb