To enable DOMAIN_BUILD_HELPERS for RISC-V, the following is introduced:
- Add a global p2m_ipa_bits variable, initialized to PADDR_BITS, to
represent the maximum supported IPA size, as find_unallocated_memory()
requires it.
- Define default guest RAM layout parameters in the public RISC-V
header, as required by allocate_memory().
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
xen/arch/riscv/Kconfig | 1 +
xen/arch/riscv/include/asm/p2m.h | 3 +++
xen/arch/riscv/p2m.c | 6 ++++++
xen/include/public/arch-riscv.h | 8 ++++++++
4 files changed, 18 insertions(+)
diff --git a/xen/arch/riscv/Kconfig b/xen/arch/riscv/Kconfig
index 89876b32175d..12b337365f1f 100644
--- a/xen/arch/riscv/Kconfig
+++ b/xen/arch/riscv/Kconfig
@@ -1,5 +1,6 @@
config RISCV
def_bool y
+ select DOMAIN_BUILD_HELPERS
select FUNCTION_ALIGNMENT_16B
select GENERIC_BUG_FRAME
select GENERIC_UART_INIT
diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index c68494593fd9..083549ef9640 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -44,6 +44,9 @@
#define P2M_LEVEL_MASK(p2m, lvl) \
(P2M_TABLE_OFFSET(p2m, lvl) << P2M_GFN_LEVEL_SHIFT(lvl))
+/* Holds the bit size of IPAs in p2m tables */
+extern unsigned int p2m_ipa_bits;
+
#define paddr_bits PADDR_BITS
/* Get host p2m table */
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index f5b03e1e3264..62bd8a2f602f 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -51,6 +51,12 @@ static struct gstage_mode_desc __ro_after_init max_gstage_mode = {
.name = "Bare",
};
+/*
+ * Set to the maximum configured support for IPA bits, so the number of IPA bits can be
+ * restricted by external entity (e.g. IOMMU).
+ */
+unsigned int __read_mostly p2m_ipa_bits = PADDR_BITS;
+
static void p2m_free_page(struct p2m_domain *p2m, struct page_info *pg);
static inline void p2m_free_metadata_page(struct p2m_domain *p2m,
diff --git a/xen/include/public/arch-riscv.h b/xen/include/public/arch-riscv.h
index 360d8e6871ba..91cee3096041 100644
--- a/xen/include/public/arch-riscv.h
+++ b/xen/include/public/arch-riscv.h
@@ -50,6 +50,14 @@ typedef uint64_t xen_ulong_t;
#if defined(__XEN__) || defined(__XEN_TOOLS__)
+#define GUEST_RAM_BANKS 1
+
+#define GUEST_RAM0_BASE xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
+#define GUEST_RAM0_SIZE xen_mk_ullong(0x80000000)
+
+#define GUEST_RAM_BANK_BASES { GUEST_RAM0_BASE }
+#define GUEST_RAM_BANK_SIZES { GUEST_RAM0_SIZE }
+
struct vcpu_guest_context {
};
typedef struct vcpu_guest_context vcpu_guest_context_t;
--
2.52.0
On 12.02.2026 17:21, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/include/asm/p2m.h
> +++ b/xen/arch/riscv/include/asm/p2m.h
> @@ -44,6 +44,9 @@
> #define P2M_LEVEL_MASK(p2m, lvl) \
> (P2M_TABLE_OFFSET(p2m, lvl) << P2M_GFN_LEVEL_SHIFT(lvl))
>
> +/* Holds the bit size of IPAs in p2m tables */
> +extern unsigned int p2m_ipa_bits;
Hmm, I can spot a declaration and ...
> --- a/xen/arch/riscv/p2m.c
> +++ b/xen/arch/riscv/p2m.c
> @@ -51,6 +51,12 @@ static struct gstage_mode_desc __ro_after_init max_gstage_mode = {
> .name = "Bare",
> };
>
> +/*
> + * Set to the maximum configured support for IPA bits, so the number of IPA bits can be
> + * restricted by external entity (e.g. IOMMU).
> + */
> +unsigned int __read_mostly p2m_ipa_bits = PADDR_BITS;
... a definition, but neither a use nor a place where the variable would
be set. Hmm, yes, I see common/device-tree/domain-build.c uses it. Then
the following questions arise:
- What does "ipa" stand for? Is this a term sensible in RISC-V context at
all? Judging from the comment at the decl, isn't it PPN width (plus
PAGE_SHIFT) that it describes?
- With there not being anyone writing to the variable, why is it not
const (or even a #define), or at the very least __ro_after_init?
And no, "Arm has it like this" doesn't count as an answer. Considering
all the review comments you've got so far you should know by now that you
shouldn't copy things blindly.
> --- a/xen/include/public/arch-riscv.h
> +++ b/xen/include/public/arch-riscv.h
> @@ -50,6 +50,14 @@ typedef uint64_t xen_ulong_t;
>
> #if defined(__XEN__) || defined(__XEN_TOOLS__)
>
> +#define GUEST_RAM_BANKS 1
> +
> +#define GUEST_RAM0_BASE xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
> +#define GUEST_RAM0_SIZE xen_mk_ullong(0x80000000)
> +
> +#define GUEST_RAM_BANK_BASES { GUEST_RAM0_BASE }
> +#define GUEST_RAM_BANK_SIZES { GUEST_RAM0_SIZE }
Hmm, does RISC-V really want to go with compile-time constants here? And
if so, why would guests be limited to just 2 Gb? That may more efficiently
be RV32 guests then, with perhaps just an RV32 hypervisor.
Jan
On 2/12/26 5:39 PM, Jan Beulich wrote:
> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>> --- a/xen/arch/riscv/include/asm/p2m.h
>> +++ b/xen/arch/riscv/include/asm/p2m.h
>> @@ -44,6 +44,9 @@
>> #define P2M_LEVEL_MASK(p2m, lvl) \
>> (P2M_TABLE_OFFSET(p2m, lvl) << P2M_GFN_LEVEL_SHIFT(lvl))
>>
>> +/* Holds the bit size of IPAs in p2m tables */
>> +extern unsigned int p2m_ipa_bits;
> Hmm, I can spot a declaration and ...
>
>> --- a/xen/arch/riscv/p2m.c
>> +++ b/xen/arch/riscv/p2m.c
>> @@ -51,6 +51,12 @@ static struct gstage_mode_desc __ro_after_init max_gstage_mode = {
>> .name = "Bare",
>> };
>>
>> +/*
>> + * Set to the maximum configured support for IPA bits, so the number of IPA bits can be
>> + * restricted by external entity (e.g. IOMMU).
>> + */
>> +unsigned int __read_mostly p2m_ipa_bits = PADDR_BITS;
> ... a definition, but neither a use nor a place where the variable would
> be set. Hmm, yes, I see common/device-tree/domain-build.c uses it. Then
> the following questions arise:
> - What does "ipa" stand for? Is this a term sensible in RISC-V context at
> all?
IPA stands for GPA (maybe it would be better to rename it to gpa in common code too).
The name was used because common code uses p2m_ipa_bits.
Yes, I missed setting p2m_ipa_bits properly in p2m_init(), where the G-stage MMU mode is set.
> Judging from the comment at the decl, isn't it PPN width (plus
> PAGE_SHIFT) that it describes?
It is PPN width + PAGE_SHIFT, which is equal to PADDR_BITS (44-bit PPN + 12-bit PAGE_SHIFT).
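Spelled out (using the RV64 PTE layout from the privileged spec):

```c
#include <assert.h>

/* The arithmetic above: RV64 PTEs carry a 44-bit PPN, and adding the
 * 12-bit page offset yields 56-bit physical addresses. */
#define PAGE_SHIFT  12
#define PPN_BITS    44                      /* PPN width in an RV64 PTE */
#define PADDR_BITS  (PPN_BITS + PAGE_SHIFT) /* 56 */
```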
> - With there not being anyone writing to the variable, why is it not
> const (or even a #define), or at the very least __ro_after_init?
> And no, "Arm has it like this" doesn't count as an answer. Considering
> all the review comments you've got so far you should know by now that you
> shouldn't copy things blindly.
It was added because of the usage in common/device-tree/domain-build.c.
It was done this way because an IOMMU may share the P2M page tables with the CPU's
G-stage (stage-2) translation, so the GPA size must not exceed what the IOMMU can
handle (or the G-stage address limit, if that is smaller than the IOMMU's).
(a) It could be that the MMU uses Sv57 while the IOMMU uses Sv39; in that case, if
the IOMMU and the MMU share G-stage page tables, the guest address limitation must
be respected.
But considering that, according to the RISC-V IOMMU spec ... :
  The IOMMU must support all the virtual memory extensions that are supported by
  any of the harts in the system.
... (a) isn't a real issue, as we could always program the IOMMU to use the same
mode as the MMU, and then __ro_after_init for p2m_ipa_bits should work well. It
can't be const because, as I mentioned above, I missed initializing it in
p2m_init(). (It is also the case on RISC-V that the IOMMU could use an x4 mode, so
the MMU uses Sv57 and the IOMMU uses Sv57x4.)
>> --- a/xen/include/public/arch-riscv.h
>> +++ b/xen/include/public/arch-riscv.h
>> @@ -50,6 +50,14 @@ typedef uint64_t xen_ulong_t;
>>
>> #if defined(__XEN__) || defined(__XEN_TOOLS__)
>>
>> +#define GUEST_RAM_BANKS 1
>> +
>> +#define GUEST_RAM0_BASE xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
>> +#define GUEST_RAM0_SIZE xen_mk_ullong(0x80000000)
>> +
>> +#define GUEST_RAM_BANK_BASES { GUEST_RAM0_BASE }
>> +#define GUEST_RAM_BANK_SIZES { GUEST_RAM0_SIZE }
> Hmm, does RISC-V really want to go with compile-time constants here?
It is needed by allocate_memory() for guest domains, so it is expected
to be a compile-time constant with the current common dom0less code.
It represents the RAM start address for a DomU and the maximum RAM size
(the actual size will be calculated based on what is specified in the DomU node
in the DTS), and will then be used to generate the memory node for the DomU
(GUEST_RAM0_BASE as the RAM start address and min(GUEST_RAM0_SIZE,
dts->domU->memory->size) as the RAM size).
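A minimal sketch of the clamping described above; domu_ram_size() and dtb_ram_size are illustrative names, not Xen's:

```c
#include <assert.h>
#include <stdint.h>

/* Values from the patch under review. */
#define GUEST_RAM0_BASE  0x80000000ULL
#define GUEST_RAM0_SIZE  0x80000000ULL  /* 2GB */

/* Hypothetical helper: clamp the RAM size requested in the DomU DT node
 * to the bank size, i.e. min(GUEST_RAM0_SIZE, requested size). */
static inline uint64_t domu_ram_size(uint64_t dtb_ram_size)
{
    return dtb_ram_size < GUEST_RAM0_SIZE ? dtb_ram_size : GUEST_RAM0_SIZE;
}
```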
> And
> if so, why would guests be limited to just 2 Gb?
It is enough for the guest domain I am using in dom0less mode.
> That may more efficiently
> be RV32 guests then, with perhaps just an RV32 hypervisor.
I didn't get this point. Could you please explain differently what you
mean?
~ Oleksii
On 13.02.2026 13:54, Oleksii Kurochko wrote:
> On 2/12/26 5:39 PM, Jan Beulich wrote:
>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>> --- a/xen/arch/riscv/include/asm/p2m.h
>>> +++ b/xen/arch/riscv/include/asm/p2m.h
>>> @@ -44,6 +44,9 @@
>>> #define P2M_LEVEL_MASK(p2m, lvl) \
>>> (P2M_TABLE_OFFSET(p2m, lvl) << P2M_GFN_LEVEL_SHIFT(lvl))
>>>
>>> +/* Holds the bit size of IPAs in p2m tables */
>>> +extern unsigned int p2m_ipa_bits;
>> Hmm, I can spot a declaration and ...
>>
>>> --- a/xen/arch/riscv/p2m.c
>>> +++ b/xen/arch/riscv/p2m.c
>>> @@ -51,6 +51,12 @@ static struct gstage_mode_desc __ro_after_init max_gstage_mode = {
>>> .name = "Bare",
>>> };
>>>
>>> +/*
>>> + * Set to the maximum configured support for IPA bits, so the number of IPA bits can be
>>> + * restricted by external entity (e.g. IOMMU).
>>> + */
>>> +unsigned int __read_mostly p2m_ipa_bits = PADDR_BITS;
>> ... a definition, but neither a use nor a place where the variable would
>> be set. Hmm, yes, I see common/device-tree/domain-build.c uses it. Then
>> the following questions arise:
>> - What does "ipa" stand for? Is this a term sensible in RISC-V context at
>> all?
>
> IPA stands for GPA. (maybe it would better to rename the in common code to gpa too).
> It was used as common code uses p2m_ipa_bits.
>
> Yes, I miss to set p2m_ipa_bits properly in p2m_init() where G-stage MMU mode is set.
>
>> Judging from the comment at the decl, isn't it PPN width (plus
>> PAGE_SHIFT) that it describes?
>
> It is PPN width + PAGE_SHIFT what is equal to PADDR_BITS (44bit PPN + 12 bit PAGE_SHIFT).
>
>> - With there not being anyone writing to the variable, why is it not
>> const (or even a #define), or at the very least __ro_after_init?
>> And no, "Arm has it like this" doesn't count as an answer. Considering
>> all the review comments you've got so far you should know by now that you
>> shouldn't copy things blindly.
>
> It was added because of the usage in common/device-tree/domain-build.c.
Well, I understand that, but this isn't the way to do it. And you've been through
such before. Anything you want to share between arch-es that isn't shared yet,
will want some suitable abstraction done. Like giving variables names which
are appropriate independent of the arch.
>>> --- a/xen/include/public/arch-riscv.h
>>> +++ b/xen/include/public/arch-riscv.h
>>> @@ -50,6 +50,14 @@ typedef uint64_t xen_ulong_t;
>>>
>>> #if defined(__XEN__) || defined(__XEN_TOOLS__)
>>>
>>> +#define GUEST_RAM_BANKS 1
>>> +
>>> +#define GUEST_RAM0_BASE xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
>>> +#define GUEST_RAM0_SIZE xen_mk_ullong(0x80000000)
>>> +
>>> +#define GUEST_RAM_BANK_BASES { GUEST_RAM0_BASE }
>>> +#define GUEST_RAM_BANK_SIZES { GUEST_RAM0_SIZE }
>> Hmm, does RISC-V really want to go with compile-time constants here?
>
> It is needed for allocate_memory() for guest domains, so it is expected
> to be compile-time constant with the current code of common dom0less
> approach.
>
> It represents the start of RAM address for DomU and the maximum RAM size
> (the actual size will be calculated based on what is mentioned in DomU node
> in dts) and then will be used to generate memory node for DomU (GUEST_RAM0_BASE
> as RAM start address and min(GUEST_RAM0_SIZE, dts->domU->memory->size) as a
> RAM size).
>
>> And
>> if so, why would guests be limited to just 2 Gb?
>
> It is enough for guest domain I am using in dom0less mode.
And what others may want to use RISC-V for once it actually becomes usable
isn't relevant? As you start adding things to the public headers, you will
need to understand that you can't change easily what once was put there.
Everything there is part of the ABI, and the ABI needs to remain stable
(within certain limits).
>> That may more efficiently
>> be RV32 guests then, with perhaps just an RV32 hypervisor.
>
> I didn't get this point. Could you please explain differently what do you
> mean?
If all you want are 2Gb guests, why would such guests be 64-bit? And with
(iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide), perhaps
even a 32-bit hypervisor would suffice?
Jan
On 2/13/26 2:11 PM, Jan Beulich wrote:
>>>> +#define GUEST_RAM0_BASE xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
>>>> +#define GUEST_RAM0_SIZE xen_mk_ullong(0x80000000)
>>>> +
>>>> +#define GUEST_RAM_BANK_BASES { GUEST_RAM0_BASE }
>>>> +#define GUEST_RAM_BANK_SIZES { GUEST_RAM0_SIZE }
(cut)
> If all you want are 2Gb guests, why would such guests be 64-bit? And with
> (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide), perhaps
> even a 32-bit hypervisor would suffice?
Btw, shouldn't we look at the VPN width?
My understanding is that we should take GUEST_RAM0_BASE as the start GFN,
map it to an MFN's page (allocated by alloc_domheap_pages()), and then
repeat this process until GUEST_RAM0_SIZE has been mapped.
In this case, for RV32 the VPN (which is the GFN in the current context) is
32 bits wide, as RV32 supports only Sv32, giving 2^32 - 1, i.e. almost 4 GB.
~ Oleksii
On 17.03.2026 13:49, Oleksii Kurochko wrote:
>
> On 2/13/26 2:11 PM, Jan Beulich wrote:
>>>>> +#define GUEST_RAM0_BASE xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
>>>>> +#define GUEST_RAM0_SIZE xen_mk_ullong(0x80000000)
>>>>> +
>>>>> +#define GUEST_RAM_BANK_BASES { GUEST_RAM0_BASE }
>>>>> +#define GUEST_RAM_BANK_SIZES { GUEST_RAM0_SIZE }
>
> (cut)
>
>> If all you want are 2Gb guests, why would such guests be 64-bit? And with
>> (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide), perhaps
>> even a 32-bit hypervisor would suffice?
>
> Btw, shouldn't we look at VPN width?
>
> My understanding is that we should take GUEST_RAM0_BASE as sgfn address
> and then map it to mfn's page (allocated by alloc_domheap_pages())? And then
> repeat this process until we won't map GUEST_RAM0_SIZE.
>
> In this case for RV32 VPN (which is GFN in the current context) is 32-bit
> wide as RV32 supports only Sv32, what is 2^32 - 1, what is almost 4gb.
??? (IOW - I fear I'm confused enough by the question that I don't know how
to respond.)
Jan
On 3/19/26 8:58 AM, Jan Beulich wrote:
> On 17.03.2026 13:49, Oleksii Kurochko wrote:
>> On 2/13/26 2:11 PM, Jan Beulich wrote:
>>>>>> +#define GUEST_RAM0_BASE xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
>>>>>> +#define GUEST_RAM0_SIZE xen_mk_ullong(0x80000000)
>>>>>> +
>>>>>> +#define GUEST_RAM_BANK_BASES { GUEST_RAM0_BASE }
>>>>>> +#define GUEST_RAM_BANK_SIZES { GUEST_RAM0_SIZE }
>> (cut)
>>
>>> If all you want are 2Gb guests, why would such guests be 64-bit? And with
>>> (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide), perhaps
>>> even a 32-bit hypervisor would suffice?
>> Btw, shouldn't we look at VPN width?
>>
>> My understanding is that we should take GUEST_RAM0_BASE as sgfn address
>> and then map it to mfn's page (allocated by alloc_domheap_pages())? And then
>> repeat this process until we won't map GUEST_RAM0_SIZE.
>>
>> In this case for RV32 VPN (which is GFN in the current context) is 32-bit
>> wide as RV32 supports only Sv32, what is 2^32 - 1, what is almost 4gb.
> ??? (IOW - I fear I'm confused enough by the question that I don't know how
> to respond.)
You mentioned above that:
"... And with (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide) ..."
I wanted to clarify why you use PPN here in the context of the GUEST_RAM0_BASE
definition (and maybe I just misinterpreted your original message).
GUEST_RAM0_BASE is the address at which the guest believes RAM starts in its physical
address space, i.e. it is a GPA, which is then translated to an MPA.
From the MMU's perspective, the GPA looks like:
VPN[1] | VPN[0] | page_offset (in Sv32x4 mode)
In Sv32x4, the GPA is 34 bits wide (or 22 bits wide in terms of GFNs), and the MPA is
also 34 bits wide (or 22 bits wide in terms of the PPN).
The distinction is not significant in Sv32x4, since the PPN width equals the VPN width,
but in other modes VPN < PPN (in terms of bit width).
So when we want to run a guest in Sv39x4 mode and want to give the guest the full
Sv39x4 address space, setting GUEST_RAM0_SIZE to the maximum possible value for
Sv39x4, shouldn't we look at the VPN width rather than the PPN width?
In other words, GUEST_RAM0_SIZE should be (2^41 - 1) rather than (2^56 - 1)
for Sv39x4.
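For concreteness, the two widths being contrasted (per the privileged spec's SvXX and x4 definitions):

```c
#include <assert.h>

/* Sv39x4 guest-physical address width vs. PPN-based physical address
 * width on RV64, as discussed above. */
#define SV39_VA_BITS       39
#define GSTAGE_EXTRA_BITS   2   /* the "x4" widening of the root table */
#define SV39X4_GPA_BITS    (SV39_VA_BITS + GSTAGE_EXTRA_BITS)   /* 41 */

#define PAGE_SHIFT         12
#define RV64_PPN_BITS      44
#define RV64_PADDR_BITS    (RV64_PPN_BITS + PAGE_SHIFT)         /* 56 */
```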
~ Oleksii
On 20.03.2026 10:58, Oleksii Kurochko wrote:
> On 3/19/26 8:58 AM, Jan Beulich wrote:
>> On 17.03.2026 13:49, Oleksii Kurochko wrote:
>>> On 2/13/26 2:11 PM, Jan Beulich wrote:
>>>>>>> +#define GUEST_RAM0_BASE xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
>>>>>>> +#define GUEST_RAM0_SIZE xen_mk_ullong(0x80000000)
>>>>>>> +
>>>>>>> +#define GUEST_RAM_BANK_BASES { GUEST_RAM0_BASE }
>>>>>>> +#define GUEST_RAM_BANK_SIZES { GUEST_RAM0_SIZE }
>>> (cut)
>>>
>>>> If all you want are 2Gb guests, why would such guests be 64-bit? And with
>>>> (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide), perhaps
>>>> even a 32-bit hypervisor would suffice?
>>> Btw, shouldn't we look at VPN width?
>>>
>>> My understanding is that we should take GUEST_RAM0_BASE as sgfn address
>>> and then map it to mfn's page (allocated by alloc_domheap_pages())? And then
>>> repeat this process until we won't map GUEST_RAM0_SIZE.
>>>
>>> In this case for RV32 VPN (which is GFN in the current context) is 32-bit
>>> wide as RV32 supports only Sv32, what is 2^32 - 1, what is almost 4gb.
>> ??? (IOW - I fear I'm confused enough by the question that I don't know how
>> to respond.)
>
> You mentioned above that:
> "... And with (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide) ..."
>
> I wanted to clarify why you use PPN here in the context of GUEST_RAM0_BASE definition.
> (and maybe I just misinterpreted incorrectly your original message)
> GUEST_RAM0_BASE is the address at which the guest believes RAM starts in its physical
> address space, i.e. it is a GPA, which is then translated to an MPA.
>
> From the MMU's perspective, the GPA looks like:
> VPN[1] | VPN[0] | page_offset (in Sv32x4 mode)
>
> In Sv32x4, the GPA is 34 bits wide (or 22 bits wide in terms of GFNs), and the MPA is
> also 32 bits wide (or 22 bits wide in terms of PPN).
You mentioning Sv32x4 may point at part of the problem: For the guest physical
memory layout (and hence size), paging and hence virtual addresses don't matter
at all. What matters is what the guest can put in the page table entries it
writes. Addresses there are represented as PPNs, aren't they? Hence my use of
that acronym.
> The distinction is not significant in Sv32x4, since PPN width equals VPN width, but
> in other modes VPN < PPN (in terms of bit width).
> So when we want to run a guest in Sv39x4 mode and want to give the guest the full
> Sv39x4 address space, setting GUEST_RAM0_SIZE to the maximum possible value for
> Sv39x4, shouldn't we look at the VPN width rather than the PPN width?
No, why? The guest can arrange to map more than 2^39 bytes. Not all at the same
time, sure, but by suitable switching page tables (or merely entries) around.
Jan
> In other words, GUEST_RAM0_SIZE should be (2^41 - 1) rather than (2^56 - 1)
> for Sv39x4.
>
> ~ Oleksii
>
On 3/20/26 2:19 PM, Jan Beulich wrote:
> On 20.03.2026 10:58, Oleksii Kurochko wrote:
>> On 3/19/26 8:58 AM, Jan Beulich wrote:
>>> On 17.03.2026 13:49, Oleksii Kurochko wrote:
>>>> On 2/13/26 2:11 PM, Jan Beulich wrote:
>>>>>>>> +#define GUEST_RAM0_BASE xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
>>>>>>>> +#define GUEST_RAM0_SIZE xen_mk_ullong(0x80000000)
>>>>>>>> +
>>>>>>>> +#define GUEST_RAM_BANK_BASES { GUEST_RAM0_BASE }
>>>>>>>> +#define GUEST_RAM_BANK_SIZES { GUEST_RAM0_SIZE }
>>>> (cut)
>>>>
>>>>> If all you want are 2Gb guests, why would such guests be 64-bit? And with
>>>>> (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide), perhaps
>>>>> even a 32-bit hypervisor would suffice?
>>>> Btw, shouldn't we look at VPN width?
>>>>
>>>> My understanding is that we should take GUEST_RAM0_BASE as sgfn address
>>>> and then map it to mfn's page (allocated by alloc_domheap_pages())? And then
>>>> repeat this process until we won't map GUEST_RAM0_SIZE.
>>>>
>>>> In this case for RV32 VPN (which is GFN in the current context) is 32-bit
>>>> wide as RV32 supports only Sv32, what is 2^32 - 1, what is almost 4gb.
>>> ??? (IOW - I fear I'm confused enough by the question that I don't know how
>>> to respond.)
>>
>> You mentioned above that:
>> "... And with (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide) ..."
>>
>> I wanted to clarify why you use PPN here in the context of GUEST_RAM0_BASE definition.
>> (and maybe I just misinterpreted incorrectly your original message)
>> GUEST_RAM0_BASE is the address at which the guest believes RAM starts in its physical
>> address space, i.e. it is a GPA, which is then translated to an MPA.
>>
>> From the MMU's perspective, the GPA looks like:
>> VPN[1] | VPN[0] | page_offset (in Sv32x4 mode)
>>
>> In Sv32x4, the GPA is 34 bits wide (or 22 bits wide in terms of GFNs), and the MPA is
>> also 32 bits wide (or 22 bits wide in terms of PPN).
>
> You mentioning Sv32x4 may point at part of the problem: For the guest physical
> memory layout (and hence size), paging and hence virtual addresses don't matter
> at all. What matters is what the guest can put in the page table entries it
> writes. Addresses there are represented as PPNs, aren't they? Hence my use of
> that acronym.
That's what I came to after I wrote and sent the e-mail. Now you
confirmed it.
>
>> The distinction is not significant in Sv32x4, since PPN width equals VPN width, but
>> in other modes VPN < PPN (in terms of bit width).
>> So when we want to run a guest in Sv39x4 mode and want to give the guest the full
>> Sv39x4 address space, setting GUEST_RAM0_SIZE to the maximum possible value for
>> Sv39x4, shouldn't we look at the VPN width rather than the PPN width?
>
> No, why? The guest can arrange to map more than 2^39 bytes. Not all at the same
> time, sure, but by suitable switching page tables (or merely entries) around.
>
Good point. The right limit is therefore the PPN width, which
reflects the actual physical addressing capability.
Thanks a lot.
~ Oleksii
On 2/13/26 2:11 PM, Jan Beulich wrote:
> On 13.02.2026 13:54, Oleksii Kurochko wrote:
>> On 2/12/26 5:39 PM, Jan Beulich wrote:
>>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>>> --- a/xen/include/public/arch-riscv.h
>>>> +++ b/xen/include/public/arch-riscv.h
>>>> @@ -50,6 +50,14 @@ typedef uint64_t xen_ulong_t;
>>>>
>>>> #if defined(__XEN__) || defined(__XEN_TOOLS__)
>>>>
>>>> +#define GUEST_RAM_BANKS 1
>>>> +
>>>> +#define GUEST_RAM0_BASE xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
>>>> +#define GUEST_RAM0_SIZE xen_mk_ullong(0x80000000)
>>>> +
>>>> +#define GUEST_RAM_BANK_BASES { GUEST_RAM0_BASE }
>>>> +#define GUEST_RAM_BANK_SIZES { GUEST_RAM0_SIZE }
>>> Hmm, does RISC-V really want to go with compile-time constants here?
>> It is needed for allocate_memory() for guest domains, so it is expected
>> to be compile-time constant with the current code of common dom0less
>> approach.
>>
>> It represents the start of RAM address for DomU and the maximum RAM size
>> (the actual size will be calculated based on what is mentioned in DomU node
>> in dts) and then will be used to generate memory node for DomU (GUEST_RAM0_BASE
>> as RAM start address and min(GUEST_RAM0_SIZE, dts->domU->memory->size) as a
>> RAM size).
>>
>>> And
>>> if so, why would guests be limited to just 2 Gb?
>> It is enough for guest domain I am using in dom0less mode.
> And what others may want to use RISC-V for once it actually becomes usable
> isn't relevant? As you start adding things to the public headers, you will
> need to understand that you can't change easily what once was put there.
> Everything there is part of the ABI, and the ABI needs to remain stable
> (within certain limits).
Considering this ...
>
>>> That may more efficiently
>>> be RV32 guests then, with perhaps just an RV32 hypervisor.
>> I didn't get this point. Could you please explain differently what do you
>> mean?
> If all you want are 2Gb guests, why would such guests be 64-bit? And with
> (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide), perhaps
> even a 32-bit hypervisor would suffice?
... now I can agree that Xen should permit a bigger amount of RAM. At least
(2^34 - 1) should be allowed for RV32, and likewise for RV64, so it could be used
as a base for both of them. As RV64 allows (2^56 - 1), it makes sense
to add another bank covering the range from 2^34 to (2^56 - 1) for RV64 (and to
#ifdef this second bank for RV64).
Would it be better?
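For concreteness, the proposal could be sketched as below; all values are illustrative, not an agreed ABI, and CONFIG_RISCV_64 stands in for whatever guard would actually be used:

```c
#include <assert.h>

/* Bank 0: from 2GB up to the 2^34 RV32 limit. */
#define GUEST_RAM0_BASE  0x80000000ULL
#define GUEST_RAM0_SIZE  ((1ULL << 34) - GUEST_RAM0_BASE)

#ifdef CONFIG_RISCV_64
/* Bank 1 (RV64 only): from 2^34 up to the 2^56 PADDR limit. */
#define GUEST_RAM_BANKS  2
#define GUEST_RAM1_BASE  (1ULL << 34)
#define GUEST_RAM1_SIZE  ((1ULL << 56) - GUEST_RAM1_BASE)
#else
#define GUEST_RAM_BANKS  1
#endif
```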
~ Oleksii
On 18.02.2026 11:39, Oleksii Kurochko wrote:
>
> On 2/13/26 2:11 PM, Jan Beulich wrote:
>> On 13.02.2026 13:54, Oleksii Kurochko wrote:
>>> On 2/12/26 5:39 PM, Jan Beulich wrote:
>>>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>>>> --- a/xen/include/public/arch-riscv.h
>>>>> +++ b/xen/include/public/arch-riscv.h
>>>>> @@ -50,6 +50,14 @@ typedef uint64_t xen_ulong_t;
>>>>>
>>>>> #if defined(__XEN__) || defined(__XEN_TOOLS__)
>>>>>
>>>>> +#define GUEST_RAM_BANKS 1
>>>>> +
>>>>> +#define GUEST_RAM0_BASE xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
>>>>> +#define GUEST_RAM0_SIZE xen_mk_ullong(0x80000000)
>>>>> +
>>>>> +#define GUEST_RAM_BANK_BASES { GUEST_RAM0_BASE }
>>>>> +#define GUEST_RAM_BANK_SIZES { GUEST_RAM0_SIZE }
>>>> Hmm, does RISC-V really want to go with compile-time constants here?
>>> It is needed for allocate_memory() for guest domains, so it is expected
>>> to be compile-time constant with the current code of common dom0less
>>> approach.
>>>
>>> It represents the start of RAM address for DomU and the maximum RAM size
>>> (the actual size will be calculated based on what is mentioned in DomU node
>>> in dts) and then will be used to generate memory node for DomU (GUEST_RAM0_BASE
>>> as RAM start address and min(GUEST_RAM0_SIZE, dts->domU->memory->size) as a
>>> RAM size).
>>>
>>>> And
>>>> if so, why would guests be limited to just 2 Gb?
>>> It is enough for guest domain I am using in dom0less mode.
>> And what others may want to use RISC-V for once it actually becomes usable
>> isn't relevant? As you start adding things to the public headers, you will
>> need to understand that you can't change easily what once was put there.
>> Everything there is part of the ABI, and the ABI needs to remain stable
>> (within certain limits).
>
> Considering this ...
>
>>>> That may more efficiently
>>>> be RV32 guests then, with perhaps just an RV32 hypervisor.
>>> I didn't get this point. Could you please explain differently what do you
>>> mean?
>> If all you want are 2Gb guests, why would such guests be 64-bit? And with
>> (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide), perhaps
>> even a 32-bit hypervisor would suffice?
>
> ... now I can agree that Xen should permit bigger amount of RAM. At least,
> (2^34-1) should be allowed for RV32 and so for RV64 so it could be used
> like a base for both of them. As RV64 allows (2^56 - 1) it makes sense
> to add another bank to cover range from 2^34 to (2^56 -1) for RV64 (and ifdef
> this second bank for RV64).
>
> Would it be better?
Having a 2nd bank right away for RV64 would seem better to me, yes. Whether
that means going all the way up to 2^56 I don't know.
As to whether a public header can be changed, it also matters whether these
#define-s actually are meant to be exposed to guests (vs. merely the tool
stack). Longer-term, however, this is going to change (as we intend to move
to a fully stable ABI).
Jan