It is pointless to have 32-bit CPUs see a 64-bit address
space when they can only address the lower 32 bits.

Only create the CPU address space with a size the CPU can
actually address. This makes the HMP 'info mtree' command
easier to understand (on 32-bit CPUs).
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
This is particularly helpful with the AVR cores.
---
exec.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/exec.c b/exec.c
index 5162f0d12f..d6809a9447 100644
--- a/exec.c
+++ b/exec.c
@@ -2962,9 +2962,17 @@ static void tcg_commit(MemoryListener *listener)
static void memory_map_init(void)
{
+ uint64_t system_memory_size;
+
+#if TARGET_LONG_BITS >= 64
+ system_memory_size = UINT64_MAX;
+#else
+ system_memory_size = 1ULL << TARGET_LONG_BITS;
+#endif
+
system_memory = g_malloc(sizeof(*system_memory));
- memory_region_init(system_memory, NULL, "system", UINT64_MAX);
+ memory_region_init(system_memory, NULL, "system", system_memory_size);
address_space_init(&address_space_memory, system_memory, "memory");
system_io = g_malloc(sizeof(*system_io));
--
2.21.3
On Sun, 31 May 2020 at 18:54, Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>
> It is pointless to have 32-bit CPUs see a 64-bit address
> space, when they can only address the 32 lower bits.
>
> Only create CPU address space with a size it can address.
> This makes HMP 'info mtree' command easier to understand
> (on 32-bit CPUs).
> diff --git a/exec.c b/exec.c
> index 5162f0d12f..d6809a9447 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -2962,9 +2962,17 @@ static void tcg_commit(MemoryListener *listener)
>
> static void memory_map_init(void)
> {
> + uint64_t system_memory_size;
> +
> +#if TARGET_LONG_BITS >= 64
> + system_memory_size = UINT64_MAX;
> +#else
> + system_memory_size = 1ULL << TARGET_LONG_BITS;
> +#endif
TARGET_LONG_BITS is a description of the CPU's virtual
address size; but the size of the system_memory memory
region is related to the CPU's physical address size[*].
In particular, for the Arm Cortex-A15 (and any other
32-bit CPU with LPAE) TARGET_LONG_BITS is 32 but the CPU
can address more than 32 bits of physical memory.
[*] Strictly speaking, it would depend on the
maximum physical address size used by any transaction
master in the system -- in theory you could have a
32-bit-only CPU and a DMA controller that could be
programmed with 64-bit addresses. In practice the
CPU can generally address at least as much of the
physical address space as any other transaction master.
thanks
-- PMM
On 5/31/20 9:09 PM, Peter Maydell wrote:
> On Sun, 31 May 2020 at 18:54, Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>>
>> It is pointless to have 32-bit CPUs see a 64-bit address
>> space, when they can only address the 32 lower bits.
>>
>> Only create CPU address space with a size it can address.
>> This makes HMP 'info mtree' command easier to understand
>> (on 32-bit CPUs).
>
>> diff --git a/exec.c b/exec.c
>> index 5162f0d12f..d6809a9447 100644
>> --- a/exec.c
>> +++ b/exec.c
>> @@ -2962,9 +2962,17 @@ static void tcg_commit(MemoryListener *listener)
>>
>> static void memory_map_init(void)
>> {
>> + uint64_t system_memory_size;
>> +
>> +#if TARGET_LONG_BITS >= 64
>> + system_memory_size = UINT64_MAX;
>> +#else
>> + system_memory_size = 1ULL << TARGET_LONG_BITS;
>> +#endif
>
> TARGET_LONG_BITS is a description of the CPU's virtual
> address size; but the size of the system_memory memory
> region is related to the CPU's physical address size[*].
OK I misunderstood it was the physical size, not virtual.
> In particular, for the Arm Cortex-A15 (and any other
> 32-bit CPU with LPAE) TARGET_LONG_BITS is 32 but the CPU
> can address more than 32 bits of physical memory.
>
> [*] Strictly speaking, it would depend on the
> maximum physical address size used by any transaction
> master in the system -- in theory you could have a
> 32-bit-only CPU and a DMA controller that could be
> programmed with 64-bit addresses. In practice the
> CPU can generally address at least as much of the
> physical address space as any other transaction master.
Yes, I tried the Malta with a 32-bit core, while the GT64120 northbridge
addresses 64 bits:
address-space: cpu-memory-0
  0000000000000000-00000000ffffffff (prio 0, i/o): system
    0000000000000000-0000000007ffffff (prio 0, ram): alias mips_malta_low_preio.ram @mips_malta.ram 0000000000000000-0000000007ffffff
    0000000000000000-000000001fffffff (prio 0, i/o): empty-slot
    0000000010000000-0000000011ffffff (prio 0, i/o): alias pci0-io @io 0000000000000000-0000000001ffffff
    0000000012000000-0000000013ffffff (prio 0, i/o): alias pci0-mem0 @pci0-mem 0000000012000000-0000000013ffffff

address-space: gt64120_pci
  0000000000000000-ffffffffffffffff (prio 0, i/o): bus master container
    0000000000000000-00000000ffffffff (prio 0, i/o): alias bus master @system 0000000000000000-00000000ffffffff [disabled]
So my testing is bad, because I want @system to be 64-bit wide for the
GT64120.
From "In practice the CPU can generally address at least as much of the
physical address space as any other transaction master," I understand
that for QEMU the @system address space must be as big as the largest
transaction a bus master can issue.
I think what confuses me is what QEMU means by 'system-memory'; I
understand it historically as the address space of the first CPU core.
This is more complex in the case of the raspi SoC, where the DSP can
address the main bus (system memory) while the ARM cores access it via
an MMU; see this schema:
https://www.raspberrypi.org/forums/viewtopic.php?t=262747
Regards,
Phil.
On Mon, 1 Jun 2020 at 09:09, Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
> On 5/31/20 9:09 PM, Peter Maydell wrote:
> > [*] Strictly speaking, it would depend on the
> > maximum physical address size used by any transaction
> > master in the system -- in theory you could have a
> > 32-bit-only CPU and a DMA controller that could be
> > programmed with 64-bit addresses. In practice the
> > CPU can generally address at least as much of the
> > physical address space as any other transaction master.
>
> Yes, I tried the Malta with 32-bit core, while the GT64120 northbridge
> addresses 64-bit:
>
> From "In practice the CPU can generally address at least as much of the
> physical address space as any other transaction master." I understand
> for QEMU @system address space must be as big as the largest transaction
> a bus master can do".

That depends on what happens for transactions that are off the end of
the range, I suppose -- usually a 32-bit CPU system design will for
obvious reasons not put RAM or devices over 4GB, so if the behaviour for
a DMA access past 4GB is the same whether there's nothing mapped there
or whether the access is just off-the-end, then it doesn't matter how
QEMU models it. I haven't tested to see what an off-the-end transaction
does, though. I'm inclined to say that since 'hwaddr' is always a 64-bit
type, we should stick to having the system memory address space be 64
bits.

> I think what confuse me is what QEMU means by 'system-memory', I
> understand it historically as the address space of the first CPU core.

Historically I think it was more "there is only one address space and
this is it": it wasn't the first CPU's address space, it was what
*every* CPU saw, and what every DMA device used, because the
pre-MemoryRegion APIs had no concept of separate address spaces at all.

So system-memory starts off as a way to continue to provide those old
semantics in an AddressSpace/MemoryRegion design, and we've then
gradually increased the degree to which different transaction masters
use different AddressSpaces. Typically system-memory today is often
"whatever's common to all CPUs" (and then you overlay per-CPU devices
etc. on top of that), but it might have less stuff than that in it (I
have a feeling the arm-sse SoCs put less stuff into system-memory than
you might expect). How much freedom you have to not put stuff into the
system-memory address space depends on things like whether the guest
architecture's target/foo code or some DMA device model on the board
still uses APIs that don't specify the address space and instead use
the system address space.

thanks
-- PMM
On 6/1/20 1:09 AM, Philippe Mathieu-Daudé wrote:
> On 5/31/20 9:09 PM, Peter Maydell wrote:
>> On Sun, 31 May 2020 at 18:54, Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>>>
>>> It is pointless to have 32-bit CPUs see a 64-bit address
>>> space, when they can only address the 32 lower bits.
>>>
>>> Only create CPU address space with a size it can address.
>>> This makes HMP 'info mtree' command easier to understand
>>> (on 32-bit CPUs).
>>
>>> diff --git a/exec.c b/exec.c
>>> index 5162f0d12f..d6809a9447 100644
>>> --- a/exec.c
>>> +++ b/exec.c
>>> @@ -2962,9 +2962,17 @@ static void tcg_commit(MemoryListener *listener)
>>>
>>> static void memory_map_init(void)
>>> {
>>> + uint64_t system_memory_size;
>>> +
>>> +#if TARGET_LONG_BITS >= 64
>>> + system_memory_size = UINT64_MAX;
>>> +#else
>>> + system_memory_size = 1ULL << TARGET_LONG_BITS;
>>> +#endif
>>
>> TARGET_LONG_BITS is a description of the CPU's virtual
>> address size; but the size of the system_memory memory
>> region is related to the CPU's physical address size[*].
>
> OK I misunderstood it was the physical size, not virtual.
It is the physical size.
In the armv7 case, the LPAE page table entry maps a 32-bit virtual address to a
40-bit physical address. The i686 page table extensions do something similar.
See TARGET_PHYS_ADDR_SPACE_BITS.
r~