[PATCH v2 14/17] xen/riscv: implement p2m_next_level()

Posted by Oleksii Kurochko 4 months, 3 weeks ago
Implement the p2m_next_level() function, which enables traversal and dynamic
allocation of intermediate levels (if necessary) in the RISC-V
p2m (physical-to-machine) page table hierarchy.

To support this, the following helpers are introduced:
- p2me_is_mapping(): Determines whether a PTE represents a valid mapping.
- page_to_p2m_table(): Constructs non-leaf PTEs pointing to next-level page
  tables with correct attributes.
- p2m_alloc_page(): Allocates page table pages, supporting both hardware and
  guest domains.
- p2m_create_table(): Allocates and initializes a new page table page and
  installs it into the hierarchy.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V2:
 - New patch. It was part of the big patch "xen/riscv: implement p2m mapping
   functionality", which was split into smaller ones.
 - s/p2m_is_mapping/p2me_is_mapping.
---
 xen/arch/riscv/p2m.c | 103 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 101 insertions(+), 2 deletions(-)

diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index cba04acf38..87dd636b80 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -387,6 +387,17 @@ static inline bool p2me_is_valid(struct p2m_domain *p2m, pte_t pte)
     return p2m_type_radix_get(p2m, pte) != p2m_invalid;
 }
 
+/*
+ * pte_is_* helpers are checking the valid bit set in the
+ * PTE but we have to check p2m_type instead (look at the comment above
+ * p2me_is_valid())
+ * Provide our own overlay to check the valid bit.
+ */
+static inline bool p2me_is_mapping(struct p2m_domain *p2m, pte_t pte)
+{
+    return p2me_is_valid(p2m, pte) && (pte.pte & PTE_ACCESS_MASK);
+}
+
 static inline bool p2me_is_superpage(struct p2m_domain *p2m, pte_t pte,
                                     unsigned int level)
 {
@@ -492,6 +503,70 @@ static pte_t p2m_entry_from_mfn(struct p2m_domain *p2m, mfn_t mfn, p2m_type_t t,
     return e;
 }
 
+/* Generate table entry with correct attributes. */
+static pte_t page_to_p2m_table(struct p2m_domain *p2m, struct page_info *page)
+{
+    /*
+     * Since this function generates a table entry, according to "Encoding
+     * of PTE R/W/X fields," the entry's r, w, and x fields must be set to 0
+     * to point to the next level of the page table.
+     * Therefore, to ensure that an entry is a page table entry,
+     * `p2m_access_n2rwx` is passed to `mfn_to_p2m_entry()` as the access value,
+     * which overrides whatever was passed as `p2m_type_t` and guarantees that
+     * the entry is a page table entry by setting r = w = x = 0.
+     */
+    return p2m_entry_from_mfn(p2m, page_to_mfn(page), p2m_ram_rw, p2m_access_n2rwx);
+}
+
+static struct page_info *p2m_alloc_page(struct domain *d)
+{
+    struct page_info *pg;
+
+    /*
+     * For hardware domain, there should be no limit in the number of pages that
+     * can be allocated, so that the kernel may take advantage of the extended
+     * regions. Hence, allocate p2m pages for hardware domains from heap.
+     */
+    if ( is_hardware_domain(d) )
+    {
+        pg = alloc_domheap_page(d, MEMF_no_owner);
+        if ( pg == NULL )
+            printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
+    }
+    else
+    {
+        spin_lock(&d->arch.paging.lock);
+        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
+        spin_unlock(&d->arch.paging.lock);
+    }
+
+    return pg;
+}
+
+/* Allocate a new page table page and hook it in via the given entry. */
+static int p2m_create_table(struct p2m_domain *p2m, pte_t *entry)
+{
+    struct page_info *page;
+    pte_t *p;
+
+    ASSERT(!p2me_is_valid(p2m, *entry));
+
+    page = p2m_alloc_page(p2m->domain);
+    if ( page == NULL )
+        return -ENOMEM;
+
+    page_list_add(page, &p2m->pages);
+
+    p = __map_domain_page(page);
+    clear_page(p);
+
+    unmap_domain_page(p);
+
+    p2m_write_pte(entry, page_to_p2m_table(p2m, page), p2m->clean_pte);
+
+    return 0;
+}
+
 #define GUEST_TABLE_MAP_NONE 0
 #define GUEST_TABLE_MAP_NOMEM 1
 #define GUEST_TABLE_SUPER_PAGE 2
@@ -516,9 +591,33 @@ static int p2m_next_level(struct p2m_domain *p2m, bool alloc_tbl,
                           unsigned int level, pte_t **table,
                           unsigned int offset)
 {
-    panic("%s: hasn't been implemented yet\n", __func__);
+    pte_t *entry;
+    int ret;
+    mfn_t mfn;
+
+    entry = *table + offset;
+
+    if ( !p2me_is_valid(p2m, *entry) )
+    {
+        if ( !alloc_tbl )
+            return GUEST_TABLE_MAP_NONE;
+
+        ret = p2m_create_table(p2m, entry);
+        if ( ret )
+            return GUEST_TABLE_MAP_NOMEM;
+    }
+
+    /* The function p2m_next_level() is never called at the last level */
+    ASSERT(level != 0);
+    if ( p2me_is_mapping(p2m, *entry) )
+        return GUEST_TABLE_SUPER_PAGE;
+
+    mfn = mfn_from_pte(*entry);
+
+    unmap_domain_page(*table);
+    *table = map_domain_page(mfn);
 
-    return GUEST_TABLE_MAP_NONE;
+    return GUEST_TABLE_NORMAL;
 }
 
 static void p2m_put_foreign_page(struct page_info *pg)
-- 
2.49.0
Re: [PATCH v2 14/17] xen/riscv: implement p2m_next_level()
Posted by Jan Beulich 4 months ago
On 10.06.2025 15:05, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/p2m.c
> +++ b/xen/arch/riscv/p2m.c
> @@ -387,6 +387,17 @@ static inline bool p2me_is_valid(struct p2m_domain *p2m, pte_t pte)
>      return p2m_type_radix_get(p2m, pte) != p2m_invalid;
>  }
>  
> +/*
> + * pte_is_* helpers are checking the valid bit set in the
> + * PTE but we have to check p2m_type instead (look at the comment above
> + * p2me_is_valid())
> + * Provide our own overlay to check the valid bit.
> + */
> +static inline bool p2me_is_mapping(struct p2m_domain *p2m, pte_t pte)
> +{
> +    return p2me_is_valid(p2m, pte) && (pte.pte & PTE_ACCESS_MASK);
> +}

Same question as on the earlier patch - does P2M type apply to intermediate
page tables at all? (Conceptually it shouldn't.)

> @@ -492,6 +503,70 @@ static pte_t p2m_entry_from_mfn(struct p2m_domain *p2m, mfn_t mfn, p2m_type_t t,
>      return e;
>  }
>  
> +/* Generate table entry with correct attributes. */
> +static pte_t page_to_p2m_table(struct p2m_domain *p2m, struct page_info *page)
> +{
> +    /*
> +     * Since this function generates a table entry, according to "Encoding
> +     * of PTE R/W/X fields," the entry's r, w, and x fields must be set to 0
> +     * to point to the next level of the page table.
> +     * Therefore, to ensure that an entry is a page table entry,
> +     * `p2m_access_n2rwx` is passed to `mfn_to_p2m_entry()` as the access value,
> +     * which overrides whatever was passed as `p2m_type_t` and guarantees that
> +     * the entry is a page table entry by setting r = w = x = 0.
> +     */
> +    return p2m_entry_from_mfn(p2m, page_to_mfn(page), p2m_ram_rw, p2m_access_n2rwx);

Similarly P2M access shouldn't apply to intermediate page tables. (Moot
with that, but (ab)using p2m_access_n2rwx would also look wrong: You did
read what it means, didn't you?)

> +}
> +
> +static struct page_info *p2m_alloc_page(struct domain *d)
> +{
> +    struct page_info *pg;
> +
> +    /*
> +     * For hardware domain, there should be no limit in the number of pages that
> +     * can be allocated, so that the kernel may take advantage of the extended
> +     * regions. Hence, allocate p2m pages for hardware domains from heap.
> +     */
> +    if ( is_hardware_domain(d) )
> +    {
> +        pg = alloc_domheap_page(d, MEMF_no_owner);
> +        if ( pg == NULL )
> +            printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
> +    }

The comment looks to have been taken verbatim from Arm. Whatever "extended
regions" are, does the same concept even exist on RISC-V?

Also, special casing Dom0 like this has benefits, but also comes with a
pitfall: If the system's out of memory, allocations will fail. A pre-
populated pool would avoid that (until exhausted, of course). If special-
casing of Dom0 is needed, I wonder whether ...

> +    else
> +    {
> +        spin_lock(&d->arch.paging.lock);
> +        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
> +        spin_unlock(&d->arch.paging.lock);
> +    }

... going this path but with a Dom0-only fallback to general allocation
wouldn't be the better route.

> +    return pg;
> +}
> +
> +/* Allocate a new page table page and hook it in via the given entry. */
> +static int p2m_create_table(struct p2m_domain *p2m, pte_t *entry)
> +{
> +    struct page_info *page;
> +    pte_t *p;
> +
> +    ASSERT(!p2me_is_valid(p2m, *entry));
> +
> +    page = p2m_alloc_page(p2m->domain);
> +    if ( page == NULL )
> +        return -ENOMEM;
> +
> +    page_list_add(page, &p2m->pages);
> +
> +    p = __map_domain_page(page);
> +    clear_page(p);
> +
> +    unmap_domain_page(p);

clear_domain_page()? Or actually clear_and_clean_page()?

> @@ -516,9 +591,33 @@ static int p2m_next_level(struct p2m_domain *p2m, bool alloc_tbl,
>                            unsigned int level, pte_t **table,
>                            unsigned int offset)
>  {
> -    panic("%s: hasn't been implemented yet\n", __func__);
> +    pte_t *entry;
> +    int ret;
> +    mfn_t mfn;
> +
> +    entry = *table + offset;
> +
> +    if ( !p2me_is_valid(p2m, *entry) )
> +    {
> +        if ( !alloc_tbl )
> +            return GUEST_TABLE_MAP_NONE;
> +
> +        ret = p2m_create_table(p2m, entry);
> +        if ( ret )
> +            return GUEST_TABLE_MAP_NOMEM;
> +    }
> +
> +    /* The function p2m_next_level() is never called at the last level */
> +    ASSERT(level != 0);

Logically you would perhaps better do this ahead of trying to allocate a
page table. Calls here with level == 0 are invalid in all cases aiui, not
just when you make it here.

> +    if ( p2me_is_mapping(p2m, *entry) )
> +        return GUEST_TABLE_SUPER_PAGE;
> +
> +    mfn = mfn_from_pte(*entry);
> +
> +    unmap_domain_page(*table);
> +    *table = map_domain_page(mfn);

Just to mention it (may not need taking care of right away), there's an
inefficiency here: In p2m_create_table() you map the page to clear it.
Then you tear down that mapping, just to re-establish it here.

Jan
Re: [PATCH v2 14/17] xen/riscv: implement p2m_next_level()
Posted by Oleksii Kurochko 3 months, 2 weeks ago
On 7/2/25 10:35 AM, Jan Beulich wrote:
> On 10.06.2025 15:05, Oleksii Kurochko wrote:
>> --- a/xen/arch/riscv/p2m.c
>> +++ b/xen/arch/riscv/p2m.c
>> @@ -387,6 +387,17 @@ static inline bool p2me_is_valid(struct p2m_domain *p2m, pte_t pte)
>>       return p2m_type_radix_get(p2m, pte) != p2m_invalid;
>>   }
>>   
>> +/*
>> + * pte_is_* helpers are checking the valid bit set in the
>> + * PTE but we have to check p2m_type instead (look at the comment above
>> + * p2me_is_valid())
>> + * Provide our own overlay to check the valid bit.
>> + */
>> +static inline bool p2me_is_mapping(struct p2m_domain *p2m, pte_t pte)
>> +{
>> +    return p2me_is_valid(p2m, pte) && (pte.pte & PTE_ACCESS_MASK);
>> +}
> Same question as on the earlier patch - does P2M type apply to intermediate
> page tables at all? (Conceptually it shouldn't.)

It doesn't matter whether it is an intermediate page table or a leaf PTE pointing
to a page — PTE should be valid. Considering that in the current implementation
it’s possible for PTE.v = 0 but P2M.v = 1, it is better to check P2M.v instead
of PTE.v.

>
>> @@ -492,6 +503,70 @@ static pte_t p2m_entry_from_mfn(struct p2m_domain *p2m, mfn_t mfn, p2m_type_t t,
>>       return e;
>>   }
>>   
>> +/* Generate table entry with correct attributes. */
>> +static pte_t page_to_p2m_table(struct p2m_domain *p2m, struct page_info *page)
>> +{
>> +    /*
>> +     * Since this function generates a table entry, according to "Encoding
>> +     * of PTE R/W/X fields," the entry's r, w, and x fields must be set to 0
>> +     * to point to the next level of the page table.
>> +     * Therefore, to ensure that an entry is a page table entry,
>> +     * `p2m_access_n2rwx` is passed to `mfn_to_p2m_entry()` as the access value,
>> +     * which overrides whatever was passed as `p2m_type_t` and guarantees that
>> +     * the entry is a page table entry by setting r = w = x = 0.
>> +     */
>> +    return p2m_entry_from_mfn(p2m, page_to_mfn(page), p2m_ram_rw, p2m_access_n2rwx);
> Similarly P2M access shouldn't apply to intermediate page tables. (Moot
> with that, but (ab)using p2m_access_n2rwx would also look wrong: You did
> read what it means, didn't you?)

p2m_access_n2rwx was chosen not really because of the description mentioned near
its declaration, but because it sets r=w=x=0, which RISC-V expects for a PTE that
points to the next-level page table.

Generally, I agree that P2M access shouldn't be applied to intermediate page tables.

What I can suggest in this case is to use p2m_access_rwx instead of p2m_access_n2rwx,
which will ensure that the P2M access type isn't applied when p2m_entry_from_mfn() is
called, and then, after calling p2m_entry_from_mfn(), simply set PTE.r/w/x = 0.
So this function will look like:
     /* Generate table entry with correct attributes. */
     static pte_t page_to_p2m_table(struct p2m_domain *p2m, struct page_info *page)
     {
         /*
         * p2m_ram_rw is chosen for a table entry as the p2m table should be valid
         * from both the P2M and hardware points of view.
         *
         * p2m_access_rwx is chosen so that no access restrictions are applied
         * to a table entry.
         */
         pte_t pte = p2m_pte_from_mfn(p2m, page_to_mfn(page), _gfn(0), p2m_ram_rw,
                                     p2m_access_rwx);

         /*
         * Since this function generates a table entry, according to "Encoding
         * of PTE R/W/X fields," the entry's r, w, and x fields must be set to 0
         * to point to the next level of the page table.
         */
         pte.pte &= ~PTE_ACCESS_MASK;

         return pte;
     }

Does this make sense? Or would it be better to keep the current version of
page_to_p2m_table() and just improve the comment explaining why p2m_access_n2rwx is
used for a table entry?

>
>> +}
>> +
>> +static struct page_info *p2m_alloc_page(struct domain *d)
>> +{
>> +    struct page_info *pg;
>> +
>> +    /*
>> +     * For hardware domain, there should be no limit in the number of pages that
>> +     * can be allocated, so that the kernel may take advantage of the extended
>> +     * regions. Hence, allocate p2m pages for hardware domains from heap.
>> +     */
>> +    if ( is_hardware_domain(d) )
>> +    {
>> +        pg = alloc_domheap_page(d, MEMF_no_owner);
>> +        if ( pg == NULL )
>> +            printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
>> +    }
> The comment looks to have been taken verbatim from Arm. Whatever "extended
> regions" are, does the same concept even exist on RISC-V?

Initially, I missed that it’s used only for Arm. Since it was mentioned in
doc/misc/xen-command-line.pandoc, I assumed it applied to all architectures.
But now I see that it’s Arm-specific: "### ext_regions (Arm)".

>
> Also, special casing Dom0 like this has benefits, but also comes with a
> pitfall: If the system's out of memory, allocations will fail. A pre-
> populated pool would avoid that (until exhausted, of course). If special-
> casing of Dom0 is needed, I wonder whether ...
>
>> +    else
>> +    {
>> +        spin_lock(&d->arch.paging.lock);
>> +        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
>> +        spin_unlock(&d->arch.paging.lock);
>> +    }
> ... going this path but with a Dom0-only fallback to general allocation
> wouldn't be the better route.

IIUC, then it should be something like:
   static struct page_info *p2m_alloc_page(struct domain *d)
   {
       struct page_info *pg;
       
       spin_lock(&d->arch.paging.lock);
       pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
       spin_unlock(&d->arch.paging.lock);

       if ( !pg && is_hardware_domain(d) )
       {
             /* Need to allocate more memory from domheap */
             pg = alloc_domheap_page(d, MEMF_no_owner);
             if ( pg == NULL )
             {
                 printk(XENLOG_ERR "Failed to allocate pages.\n");
                 return pg;
             }
             ACCESS_ONCE(d->arch.paging.total_pages)++;
             page_list_add_tail(pg, &d->arch.paging.freelist);
       }
    
       return pg;
}

And basically use d->arch.paging.freelist for both dom0less and dom0 domains,
with the only difference being that in the case of Dom0, d->arch.paging.freelist
could be extended.

Do I understand your idea correctly?

(
Probably, this is the reply you’re referring to:
   https://lore.kernel.org/xen-devel/43e89225-5e69-49a6-a8c8-bda6d120d8ff@suse.com/,
at the moment, I can't find a better one.
)


>
>> +    return pg;
>> +}
>> +
>> +/* Allocate a new page table page and hook it in via the given entry. */
>> +static int p2m_create_table(struct p2m_domain *p2m, pte_t *entry)
>> +{
>> +    struct page_info *page;
>> +    pte_t *p;
>> +
>> +    ASSERT(!p2me_is_valid(p2m, *entry));
>> +
>> +    page = p2m_alloc_page(p2m->domain);
>> +    if ( page == NULL )
>> +        return -ENOMEM;
>> +
>> +    page_list_add(page, &p2m->pages);
>> +
>> +    p = __map_domain_page(page);
>> +    clear_page(p);
>> +
>> +    unmap_domain_page(p);
> clear_domain_page()? Or actually clear_and_clean_page()?

Agree, clear_and_clean_page() would be better here.
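
For illustration, a minimal sketch of how p2m_create_table() would then look,
assuming an Arm-like clear_and_clean_page() that takes a struct page_info *
and does the map/clear/clean/unmap internally:

    /* Allocate a new page table page and hook it in via the given entry. */
    static int p2m_create_table(struct p2m_domain *p2m, pte_t *entry)
    {
        struct page_info *page;

        ASSERT(!p2me_is_valid(p2m, *entry));

        page = p2m_alloc_page(p2m->domain);
        if ( page == NULL )
            return -ENOMEM;

        page_list_add(page, &p2m->pages);

        /* Zero the new table (and clean the cache) without open-coding it. */
        clear_and_clean_page(page);

        p2m_write_pte(entry, page_to_p2m_table(p2m, page), p2m->clean_pte);

        return 0;
    }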

>
>> @@ -516,9 +591,33 @@ static int p2m_next_level(struct p2m_domain *p2m, bool alloc_tbl,
>>                             unsigned int level, pte_t **table,
>>                             unsigned int offset)
>>   {
>> -    panic("%s: hasn't been implemented yet\n", __func__);
>> +    pte_t *entry;
>> +    int ret;
>> +    mfn_t mfn;
>> +
>> +    entry = *table + offset;
>> +
>> +    if ( !p2me_is_valid(p2m, *entry) )
>> +    {
>> +        if ( !alloc_tbl )
>> +            return GUEST_TABLE_MAP_NONE;
>> +
>> +        ret = p2m_create_table(p2m, entry);
>> +        if ( ret )
>> +            return GUEST_TABLE_MAP_NOMEM;
>> +    }
>> +
>> +    /* The function p2m_next_level() is never called at the last level */
>> +    ASSERT(level != 0);
> Logically you would perhaps better do this ahead of trying to allocate a
> page table. Calls here with level == 0 are invalid in all cases aiui, not
> just when you make it here.

It makes sense. I will move ASSERT() to the start of the function
p2m_next_level().
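
I.e. something like this (same body as in this patch, only with the assertion
moved to the top):

    static int p2m_next_level(struct p2m_domain *p2m, bool alloc_tbl,
                              unsigned int level, pte_t **table,
                              unsigned int offset)
    {
        pte_t *entry;
        int ret;
        mfn_t mfn;

        /* The function p2m_next_level() is never called at the last level. */
        ASSERT(level != 0);

        entry = *table + offset;

        if ( !p2me_is_valid(p2m, *entry) )
        {
            if ( !alloc_tbl )
                return GUEST_TABLE_MAP_NONE;

            ret = p2m_create_table(p2m, entry);
            if ( ret )
                return GUEST_TABLE_MAP_NOMEM;
        }

        if ( p2me_is_mapping(p2m, *entry) )
            return GUEST_TABLE_SUPER_PAGE;

        mfn = mfn_from_pte(*entry);

        unmap_domain_page(*table);
        *table = map_domain_page(mfn);

        return GUEST_TABLE_NORMAL;
    }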

>> +    if ( p2me_is_mapping(p2m, *entry) )
>> +        return GUEST_TABLE_SUPER_PAGE;
>> +
>> +    mfn = mfn_from_pte(*entry);
>> +
>> +    unmap_domain_page(*table);
>> +    *table = map_domain_page(mfn);
> Just to mention it (may not need taking care of right away), there's an
> inefficiency here: In p2m_create_table() you map the page to clear it.
> Then you tear down that mapping, just to re-establish it here.

I will add:
     /*
      * TODO: There's an inefficiency here:
      *       In p2m_create_table(), the page is mapped to clear it.
      *       Then that mapping is torn down in p2m_create_table(),
      *       only to be re-established here.
      */
     *table = map_domain_page(mfn);
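
Longer term, one way to avoid the round trip (purely an illustrative sketch,
not something this series does; the extra out-parameter is hypothetical) would
be to let p2m_create_table() hand the still-mapped table back to its caller:

    static int p2m_create_table(struct p2m_domain *p2m, pte_t *entry,
                                pte_t **new_table)
    {
        struct page_info *page;

        ASSERT(!p2me_is_valid(p2m, *entry));

        page = p2m_alloc_page(p2m->domain);
        if ( page == NULL )
            return -ENOMEM;

        page_list_add(page, &p2m->pages);

        /* Keep the mapping: the caller uses it and unmaps it later. */
        *new_table = __map_domain_page(page);
        clear_page(*new_table);

        p2m_write_pte(entry, page_to_p2m_table(p2m, page), p2m->clean_pte);

        return 0;
    }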

Thanks.

~ Oleksii
Re: [PATCH v2 14/17] xen/riscv: implement p2m_next_level()
Posted by Jan Beulich 3 months, 2 weeks ago
On 16.07.2025 13:32, Oleksii Kurochko wrote:
> On 7/2/25 10:35 AM, Jan Beulich wrote:
>> On 10.06.2025 15:05, Oleksii Kurochko wrote:
>>> --- a/xen/arch/riscv/p2m.c
>>> +++ b/xen/arch/riscv/p2m.c
>>> @@ -387,6 +387,17 @@ static inline bool p2me_is_valid(struct p2m_domain *p2m, pte_t pte)
>>>       return p2m_type_radix_get(p2m, pte) != p2m_invalid;
>>>   }
>>>   
>>> +/*
>>> + * pte_is_* helpers are checking the valid bit set in the
>>> + * PTE but we have to check p2m_type instead (look at the comment above
>>> + * p2me_is_valid())
>>> + * Provide our own overlay to check the valid bit.
>>> + */
>>> +static inline bool p2me_is_mapping(struct p2m_domain *p2m, pte_t pte)
>>> +{
>>> +    return p2me_is_valid(p2m, pte) && (pte.pte & PTE_ACCESS_MASK);
>>> +}
>> Same question as on the earlier patch - does P2M type apply to intermediate
>> page tables at all? (Conceptually it shouldn't.)
> 
> It doesn't matter whether it is an intermediate page table or a leaf PTE pointing
> to a page — PTE should be valid. Considering that in the current implementation
> it’s possible for PTE.v = 0 but P2M.v = 1, it is better to check P2M.v instead
> of PTE.v.

I'm confused by this reply. If you want to name 2nd level page table entries
P2M - fine (but unhelpful). But then for any memory access there's only one
of the two involved: A PTE (Xen accesses) or a P2M (guest accesses). Hence
how can there be "PTE.v = 0 but P2M.v = 1"?

An intermediate page table entry is something Xen controls entirely. Hence
it has no (guest induced) type.

>>> @@ -492,6 +503,70 @@ static pte_t p2m_entry_from_mfn(struct p2m_domain *p2m, mfn_t mfn, p2m_type_t t,
>>>       return e;
>>>   }
>>>   
>>> +/* Generate table entry with correct attributes. */
>>> +static pte_t page_to_p2m_table(struct p2m_domain *p2m, struct page_info *page)
>>> +{
>>> +    /*
>>> +     * Since this function generates a table entry, according to "Encoding
>>> +     * of PTE R/W/X fields," the entry's r, w, and x fields must be set to 0
>>> +     * to point to the next level of the page table.
>>> +     * Therefore, to ensure that an entry is a page table entry,
>>> +     * `p2m_access_n2rwx` is passed to `mfn_to_p2m_entry()` as the access value,
>>> +     * which overrides whatever was passed as `p2m_type_t` and guarantees that
>>> +     * the entry is a page table entry by setting r = w = x = 0.
>>> +     */
>>> +    return p2m_entry_from_mfn(p2m, page_to_mfn(page), p2m_ram_rw, p2m_access_n2rwx);
>> Similarly P2M access shouldn't apply to intermediate page tables. (Moot
>> with that, but (ab)using p2m_access_n2rwx would also look wrong: You did
>> read what it means, didn't you?)
> 
> |p2m_access_n2rwx| was chosen not really because of the description mentioned near
> its declaration, but because it sets r=w=x=0, which RISC-V expects for a PTE that
> points to the next-level page table.
> 
> Generally, I agree that P2M access shouldn't be applied to intermediate page tables.
> 
> What I can suggest in this case is to use|p2m_access_rwx| instead of|p2m_access_n2rwx|,

No. p2m_access_* shouldn't come into play here at all. Period. Just like P2M types
shouldn't. As per above - intermediate page tables are Xen internal constructs.

> which will ensure that the P2M access type isn't applied when|p2m_entry_from_mfn() |is called, and then, after calling|p2m_entry_from_mfn()|, simply set|PTE.r,w,x=0|.
> So this function will look like:
>      /* Generate table entry with correct attributes. */
>      static pte_t page_to_p2m_table(struct p2m_domain *p2m, struct page_info *page)
>      {
>          /*
>          * p2m_ram_rw is chosen for a table entry as p2m table should be valid
>          * from both P2M and hardware point of view.
>          *
>          * p2m_access_rwx is chosen to restrict access permissions, what mean
>          * do not apply access permission for a table entry
>          */
>          pte_t pte = p2m_pte_from_mfn(p2m, page_to_mfn(page), _gfn(0), p2m_ram_rw,
>                                      p2m_access_rwx);
> 
>          /*
>          * Since this function generates a table entry, according to "Encoding
>          * of PTE R/W/X fields," the entry's r, w, and x fields must be set to 0
>          * to point to the next level of the page table.
>          */
>          pte.pte &= ~PTE_ACCESS_MASK;
> 
>          return pte;
>      }
> 
> Does this make sense? Or would it be better to keep the current version of
> |page_to_p2m_table()| and just improve the comment explaining why|p2m_access_n2rwx |is used for a table entry?

No to both, as per above.

>>> +static struct page_info *p2m_alloc_page(struct domain *d)
>>> +{
>>> +    struct page_info *pg;
>>> +
>>> +    /*
>>> +     * For hardware domain, there should be no limit in the number of pages that
>>> +     * can be allocated, so that the kernel may take advantage of the extended
>>> +     * regions. Hence, allocate p2m pages for hardware domains from heap.
>>> +     */
>>> +    if ( is_hardware_domain(d) )
>>> +    {
>>> +        pg = alloc_domheap_page(d, MEMF_no_owner);
>>> +        if ( pg == NULL )
>>> +            printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
>>> +    }
>> The comment looks to have been taken verbatim from Arm. Whatever "extended
>> regions" are, does the same concept even exist on RISC-V?
> 
> Initially, I missed that it’s used only for Arm. Since it was mentioned in
> |doc/misc/xen-command-line.pandoc|, I assumed it applied to all architectures.
> But now I see that it’s Arm-specific:: ### ext_regions (Arm)
> 
>>
>> Also, special casing Dom0 like this has benefits, but also comes with a
>> pitfall: If the system's out of memory, allocations will fail. A pre-
>> populated pool would avoid that (until exhausted, of course). If special-
>> casing of Dom0 is needed, I wonder whether ...
>>
>>> +    else
>>> +    {
>>> +        spin_lock(&d->arch.paging.lock);
>>> +        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
>>> +        spin_unlock(&d->arch.paging.lock);
>>> +    }
>> ... going this path but with a Dom0-only fallback to general allocation
>> wouldn't be the better route.
> 
> IIUC, then it should be something like:
>    static struct page_info *p2m_alloc_page(struct domain *d)
>    {
>        struct page_info *pg;
>        
>        spin_lock(&d->arch.paging.lock);
>        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
>        spin_unlock(&d->arch.paging.lock);
> 
>        if ( !pg && is_hardware_domain(d) )
>        {
>              /* Need to allocate more memory from domheap */
>              pg = alloc_domheap_page(d, MEMF_no_owner);
>              if ( pg == NULL )
>              {
>                  printk(XENLOG_ERR "Failed to allocate pages.\n");
>                  return pg;
>              }
>              ACCESS_ONCE(d->arch.paging.total_pages)++;
>              page_list_add_tail(pg, &d->arch.paging.freelist);
>        }
>     
>        return pg;
> }
> 
> And basically use|d->arch.paging.freelist| for both dom0less and dom0 domains,
> with the only difference being that in the case of Dom0,|d->arch.paging.freelist |could be extended.
> 
> Do I understand your idea correctly?

Broadly yes, but not in the details. For example, I don't think such a
page allocated from the general heap would want appending to freelist.
Commentary and alike also would want tidying.

And of course going forward, for split hardware and control domains the
latter may want similar treatment.

Jan

Re: [PATCH v2 14/17] xen/riscv: implement p2m_next_level()
Posted by Oleksii Kurochko 3 months, 2 weeks ago
On 7/16/25 1:43 PM, Jan Beulich wrote:
> On 16.07.2025 13:32, Oleksii Kurochko wrote:
>> On 7/2/25 10:35 AM, Jan Beulich wrote:
>>> On 10.06.2025 15:05, Oleksii Kurochko wrote:
>>>> --- a/xen/arch/riscv/p2m.c
>>>> +++ b/xen/arch/riscv/p2m.c
>>>> @@ -387,6 +387,17 @@ static inline bool p2me_is_valid(struct p2m_domain *p2m, pte_t pte)
>>>>        return p2m_type_radix_get(p2m, pte) != p2m_invalid;
>>>>    }
>>>>    
>>>> +/*
>>>> + * pte_is_* helpers are checking the valid bit set in the
>>>> + * PTE but we have to check p2m_type instead (look at the comment above
>>>> + * p2me_is_valid())
>>>> + * Provide our own overlay to check the valid bit.
>>>> + */
>>>> +static inline bool p2me_is_mapping(struct p2m_domain *p2m, pte_t pte)
>>>> +{
>>>> +    return p2me_is_valid(p2m, pte) && (pte.pte & PTE_ACCESS_MASK);
>>>> +}
>>> Same question as on the earlier patch - does P2M type apply to intermediate
>>> page tables at all? (Conceptually it shouldn't.)
>> It doesn't matter whether it is an intermediate page table or a leaf PTE pointing
>> to a page — PTE should be valid. Considering that in the current implementation
>> it’s possible for PTE.v = 0 but P2M.v = 1, it is better to check P2M.v instead
>> of PTE.v.
> I'm confused by this reply. If you want to name 2nd level page table entries
> P2M - fine (but unhelpful). But then for any memory access there's only one
> of the two involved: A PTE (Xen accesses) or a P2M (guest accesses). Hence
> how can there be "PTE.v = 0 but P2M.v = 1"?

I think I understand your confusion, let me try to rephrase.

The reason for having both p2m_is_valid() and pte_is_valid() is that I want to
have the ability to use the P2M PTE valid bit to track which pages were accessed
by a vCPU, so that cleaning and invalidating RAM associated with the guest vCPU
won't be too expensive, for example.
In this case, the P2M PTE valid bit will be set to 0, but the P2M PTE type bits
will be set to something other than p2m_invalid (even for table entries),
so when an MMU fault occurs, we can properly resolve it.

So, if the P2M PTE type (what p2m_is_valid() checks) is set to p2m_invalid, it
means that the valid bit (what pte_is_valid() checks) should be set to 0, so
the P2M PTE is genuinely invalid.

It could also be the case that the P2M PTE type isn't p2m_invalid (and the P2M PTE
valid bit will be intentionally set to 0 to have the ability to track which pages
were accessed, for the reason I wrote above), and when an MMU fault occurs we could
properly handle it and set the P2M PTE valid bit back to 1...

>
> An intermediate page table entry is something Xen controls entirely. Hence
> it has no (guest induced) type.

... And actually that is the reason why a type needs to be set even for an
intermediate page table entry.

I hope it is now a little bit clearer what was done and why.
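
To make the idea more concrete, a purely illustrative sketch (no such handler
exists in this series; the name is borrowed from Arm's
p2m_resolve_translation_fault(), and it only reuses pte_is_valid(),
p2m_type_radix_get() and p2m_write_pte() from this file):

    static bool p2m_resolve_translation_fault(struct p2m_domain *p2m,
                                              pte_t *entry)
    {
        pte_t pte = *entry;

        /* Nothing to resolve if the hardware already sees the entry as valid. */
        if ( pte_is_valid(pte) )
            return false;

        /* Type p2m_invalid means the entry is genuinely invalid. */
        if ( p2m_type_radix_get(p2m, pte) == p2m_invalid )
            return false;

        /* The entry was only transiently invalidated: set the valid bit back. */
        pte.pte |= PTE_VALID;
        p2m_write_pte(entry, pte, p2m->clean_pte);

        return true;
    }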

>
>>>> @@ -492,6 +503,70 @@ static pte_t p2m_entry_from_mfn(struct p2m_domain *p2m, mfn_t mfn, p2m_type_t t,
>>>>        return e;
>>>>    }
>>>>    
>>>> +/* Generate table entry with correct attributes. */
>>>> +static pte_t page_to_p2m_table(struct p2m_domain *p2m, struct page_info *page)
>>>> +{
>>>> +    /*
>>>> +     * Since this function generates a table entry, according to "Encoding
>>>> +     * of PTE R/W/X fields," the entry's r, w, and x fields must be set to 0
>>>> +     * to point to the next level of the page table.
>>>> +     * Therefore, to ensure that an entry is a page table entry,
>>>> +     * `p2m_access_n2rwx` is passed to `mfn_to_p2m_entry()` as the access value,
>>>> +     * which overrides whatever was passed as `p2m_type_t` and guarantees that
>>>> +     * the entry is a page table entry by setting r = w = x = 0.
>>>> +     */
>>>> +    return p2m_entry_from_mfn(p2m, page_to_mfn(page), p2m_ram_rw, p2m_access_n2rwx);
>>> Similarly P2M access shouldn't apply to intermediate page tables. (Moot
>>> with that, but (ab)using p2m_access_n2rwx would also look wrong: You did
>>> read what it means, didn't you?)
>> |p2m_access_n2rwx| was chosen not really because of the description mentioned near
>> its declaration, but because it sets r=w=x=0, which RISC-V expects for a PTE that
>> points to the next-level page table.
>>
>> Generally, I agree that P2M access shouldn't be applied to intermediate page tables.
>>
>> What I can suggest in this case is to use|p2m_access_rwx| instead of|p2m_access_n2rwx|,
> No. p2m_access_* shouldn't come into play here at all.

Okay, then it seems I just can't explicitly re-use p2m_pte_from_mfn() in
page_to_p2m_table(), and have to either open-code p2m_pte_from_mfn() or add another
argument, is_table, to decide whether p2m_access_t and/or p2m_type_t should be applied.

>   Period. Just like P2M types
> shouldn't. As per above - intermediate page tables are Xen internal constructs.

Please look at the explanation above of why a p2m type is needed, despite the fact
that logically it isn't really needed.

>
>> which will ensure that the P2M access type isn't applied when|p2m_entry_from_mfn() |is called, and then, after calling|p2m_entry_from_mfn()|, simply set|PTE.r,w,x=0|.
>> So this function will look like:
>>       /* Generate table entry with correct attributes. */
>>       static pte_t page_to_p2m_table(struct p2m_domain *p2m, struct page_info *page)
>>       {
>>           /*
>>           * p2m_ram_rw is chosen for a table entry as p2m table should be valid
>>           * from both P2M and hardware point of view.
>>           *
>>           * p2m_access_rwx is chosen to restrict access permissions, what mean
>>           * do not apply access permission for a table entry
>>           */
>>           pte_t pte = p2m_pte_from_mfn(p2m, page_to_mfn(page), _gfn(0), p2m_ram_rw,
>>                                       p2m_access_rwx);
>>
>>           /*
>>           * Since this function generates a table entry, according to "Encoding
>>           * of PTE R/W/X fields," the entry's r, w, and x fields must be set to 0
>>           * to point to the next level of the page table.
>>           */
>>           pte.pte &= ~PTE_ACCESS_MASK;
>>
>>           return pte;
>>       }
>>
>> Does this make sense? Or would it be better to keep the current version of
>> |page_to_p2m_table()| and just improve the comment explaining why|p2m_access_n2rwx |is used for a table entry?
> No to both, as per above.
>
>>>> +static struct page_info *p2m_alloc_page(struct domain *d)
>>>> +{
>>>> +    struct page_info *pg;
>>>> +
>>>> +    /*
>>>> +     * For hardware domain, there should be no limit in the number of pages that
>>>> +     * can be allocated, so that the kernel may take advantage of the extended
>>>> +     * regions. Hence, allocate p2m pages for hardware domains from heap.
>>>> +     */
>>>> +    if ( is_hardware_domain(d) )
>>>> +    {
>>>> +        pg = alloc_domheap_page(d, MEMF_no_owner);
>>>> +        if ( pg == NULL )
>>>> +            printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
>>>> +    }
>>> The comment looks to have been taken verbatim from Arm. Whatever "extended
>>> regions" are, does the same concept even exist on RISC-V?
>> Initially, I missed that it’s used only for Arm. Since it was mentioned in
>> |doc/misc/xen-command-line.pandoc|, I assumed it applied to all architectures.
>> But now I see that it’s Arm-specific:: ### ext_regions (Arm)
>>
>>> Also, special casing Dom0 like this has benefits, but also comes with a
>>> pitfall: If the system's out of memory, allocations will fail. A pre-
>>> populated pool would avoid that (until exhausted, of course). If special-
>>> casing of Dom0 is needed, I wonder whether ...
>>>
>>>> +    else
>>>> +    {
>>>> +        spin_lock(&d->arch.paging.lock);
>>>> +        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
>>>> +        spin_unlock(&d->arch.paging.lock);
>>>> +    }
>>> ... going this path but with a Dom0-only fallback to general allocation
>>> wouldn't be the better route.
>> IIUC, then it should be something like:
>>     static struct page_info *p2m_alloc_page(struct domain *d)
>>     {
>>         struct page_info *pg;
>>         
>>         spin_lock(&d->arch.paging.lock);
>>         pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
>>         spin_unlock(&d->arch.paging.lock);
>>
>>         if ( !pg && is_hardware_domain(d) )
>>         {
>>               /* Need to allocate more memory from domheap */
>>               pg = alloc_domheap_page(d, MEMF_no_owner);
>>               if ( pg == NULL )
>>               {
>>                   printk(XENLOG_ERR "Failed to allocate pages.\n");
>>                   return pg;
>>               }
>>               ACCESS_ONCE(d->arch.paging.total_pages)++;
>>               page_list_add_tail(pg, &d->arch.paging.freelist);
>>         }
>>      
>>         return pg;
>> }
>>
>> And basically use|d->arch.paging.freelist| for both dom0less and dom0 domains,
>> with the only difference being that in the case of Dom0,|d->arch.paging.freelist |could be extended.
>>
>> Do I understand your idea correctly?
> Broadly yes, but not in the details. For example, I don't think such a
> page allocated from the general heap would want appending to freelist.
> Commentary and alike also would want tidying.

Could you please explain why it wouldn't want appending to freelist?

>
> And of course going forward, for split hardware and control domains the
> latter may want similar treatment.

Could you please clarify what the difference is between hardware and control
domains?
I thought they were the same, or is it for the case when we have
dom0 (the control domain) which runs domD (the hardware domain) and guest domains?

Thanks.

~ Oleksii
Re: [PATCH v2 14/17] xen/riscv: implement p2m_next_level()
Posted by Jan Beulich 3 months, 2 weeks ago
On 16.07.2025 17:53, Oleksii Kurochko wrote:
> On 7/16/25 1:43 PM, Jan Beulich wrote:
>> On 16.07.2025 13:32, Oleksii Kurochko wrote:
>>> On 7/2/25 10:35 AM, Jan Beulich wrote:
>>>> On 10.06.2025 15:05, Oleksii Kurochko wrote:
>>>>> --- a/xen/arch/riscv/p2m.c
>>>>> +++ b/xen/arch/riscv/p2m.c
>>>>> @@ -387,6 +387,17 @@ static inline bool p2me_is_valid(struct p2m_domain *p2m, pte_t pte)
>>>>>        return p2m_type_radix_get(p2m, pte) != p2m_invalid;
>>>>>    }
>>>>>    
>>>>> +/*
>>>>> + * pte_is_* helpers are checking the valid bit set in the
>>>>> + * PTE but we have to check p2m_type instead (look at the comment above
>>>>> + * p2me_is_valid())
>>>>> + * Provide our own overlay to check the valid bit.
>>>>> + */
>>>>> +static inline bool p2me_is_mapping(struct p2m_domain *p2m, pte_t pte)
>>>>> +{
>>>>> +    return p2me_is_valid(p2m, pte) && (pte.pte & PTE_ACCESS_MASK);
>>>>> +}
>>>> Same question as on the earlier patch - does P2M type apply to intermediate
>>>> page tables at all? (Conceptually it shouldn't.)
>>> It doesn't matter whether it is an intermediate page table or a leaf PTE pointing
>>> to a page — PTE should be valid. Considering that in the current implementation
>>> it’s possible for PTE.v = 0 but P2M.v = 1, it is better to check P2M.v instead
>>> of PTE.v.
>> I'm confused by this reply. If you want to name 2nd level page table entries
>> P2M - fine (but unhelpful). But then for any memory access there's only one
>> of the two involved: A PTE (Xen accesses) or a P2M (guest accesses). Hence
>> how can there be "PTE.v = 0 but P2M.v = 1"?
> 
> I think I understand your confusion, let me try to rephrase.
> 
> The reason for having both|p2m_is_valid()| and|pte_is_valid()| is that I want to
> have the ability to use the P2M PTE valid bit to track which pages were accessed
> by a vCPU, so that cleaning and invalidating RAM associated with the guest vCPU
> won't be too expensive, for example.

I don't know what you're talking about here.

> In this case, the P2M PTE valid bit will be set to 0, but the P2M PTE type bits
> will be set to something other than|p2m_invalid| (even for a table entries),
> so when an MMU fault occurs, we can properly resolve it.
> 
> So, if the P2M PTE type (what|p2m_is_valid()| checks) is set to|p2m_invalid|, it
> means that the valid bit (what|pte_is_valid()| checks) should be set to 0, so
> the P2M PTE is genuinely invalid.
> 
> It could also be the case that the P2M PTE type isn't|p2m_invalid (and P2M PTE valid will be intentionally set to 0 to have 
> ability to track which pages were accessed for the reason I wrote above)|, and when MMU fault occurs we could
> properly handle it and set to 1 P2M PTE valid bit to 1...
> 
>>
>> An intermediate page table entry is something Xen controls entirely. Hence
>> it has no (guest induced) type.
> 
> ... And actually it is a reason why it is needed to set a type even for an
> intermediate page table entry.
> 
> I hope now it is a lit bit clearer what and why was done.

Sadly not. I still don't see what use the P2M type in of an intermediate page
table is going to be. It surely can't reliably describe all of the entries that
page table holds. Intermediate page tables and leaf pages are just too different
to share a concept like this, I think. That said, I'll be happy to be shown code
demonstrating the contrary.

>>>>> +static struct page_info *p2m_alloc_page(struct domain *d)
>>>>> +{
>>>>> +    struct page_info *pg;
>>>>> +
>>>>> +    /*
>>>>> +     * For hardware domain, there should be no limit in the number of pages that
>>>>> +     * can be allocated, so that the kernel may take advantage of the extended
>>>>> +     * regions. Hence, allocate p2m pages for hardware domains from heap.
>>>>> +     */
>>>>> +    if ( is_hardware_domain(d) )
>>>>> +    {
>>>>> +        pg = alloc_domheap_page(d, MEMF_no_owner);
>>>>> +        if ( pg == NULL )
>>>>> +            printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
>>>>> +    }
>>>> The comment looks to have been taken verbatim from Arm. Whatever "extended
>>>> regions" are, does the same concept even exist on RISC-V?
>>> Initially, I missed that it’s used only for Arm. Since it was mentioned in
>>> |doc/misc/xen-command-line.pandoc|, I assumed it applied to all architectures.
>>> But now I see that it’s Arm-specific:: ### ext_regions (Arm)
>>>
>>>> Also, special casing Dom0 like this has benefits, but also comes with a
>>>> pitfall: If the system's out of memory, allocations will fail. A pre-
>>>> populated pool would avoid that (until exhausted, of course). If special-
>>>> casing of Dom0 is needed, I wonder whether ...
>>>>
>>>>> +    else
>>>>> +    {
>>>>> +        spin_lock(&d->arch.paging.lock);
>>>>> +        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
>>>>> +        spin_unlock(&d->arch.paging.lock);
>>>>> +    }
>>>> ... going this path but with a Dom0-only fallback to general allocation
>>>> wouldn't be the better route.
>>> IIUC, then it should be something like:
>>>     static struct page_info *p2m_alloc_page(struct domain *d)
>>>     {
>>>         struct page_info *pg;
>>>         
>>>         spin_lock(&d->arch.paging.lock);
>>>         pg = page_list_remove_head(&d->arch.paging.p2m_freelist);

Note this: Here you _remove_ from freelist, because you want to actually
use the page. Then clearly ...

>>>         spin_unlock(&d->arch.paging.lock);
>>>
>>>         if ( !pg && is_hardware_domain(d) )
>>>         {
>>>               /* Need to allocate more memory from domheap */
>>>               pg = alloc_domheap_page(d, MEMF_no_owner);
>>>               if ( pg == NULL )
>>>               {
>>>                   printk(XENLOG_ERR "Failed to allocate pages.\n");
>>>                   return pg;
>>>               }
>>>               ACCESS_ONCE(d->arch.paging.total_pages)++;
>>>               page_list_add_tail(pg, &d->arch.paging.freelist);
>>>         }
>>>      
>>>         return pg;
>>> }
>>>
>>> And basically use|d->arch.paging.freelist| for both dom0less and dom0 domains,
>>> with the only difference being that in the case of Dom0,|d->arch.paging.freelist |could be extended.
>>>
>>> Do I understand your idea correctly?
>> Broadly yes, but not in the details. For example, I don't think such a
>> page allocated from the general heap would want appending to freelist.
>> Commentary and alike also would want tidying.
> 
> Could you please explain why it wouldn't want appending to freelist?

... adding to freelist here is wrong: You want to use this separately
allocated page, too. Else once it is freed it'll be added to freelist
a 2nd time, leading to a corrupt list.

>> And of course going forward, for split hardware and control domains the
>> latter may want similar treatment.
> 
> Could you please clarify what is the difference between hardware and control
> domains?
> I thought that it is the same or is it for the case when we have
> dom0 (control domain) which runs domD (hardware domain) and guest domain?

That's the common case, yes, but conceptually the two can be separate.
And if you've followed recent discussions on the list you would also
have noticed that work is being done in that direction. (But this was
really a forward-looking comment; I didn't mean to make you cover that
case right away. Just wanted you to be aware.)

Jan

Re: [PATCH v2 14/17] xen/riscv: implement p2m_next_level()
Posted by Oleksii Kurochko 3 months, 2 weeks ago
On 7/16/25 6:12 PM, Jan Beulich wrote:
> On 16.07.2025 17:53, Oleksii Kurochko wrote:
>> On 7/16/25 1:43 PM, Jan Beulich wrote:
>>> On 16.07.2025 13:32, Oleksii Kurochko wrote:
>>>> On 7/2/25 10:35 AM, Jan Beulich wrote:
>>>>> On 10.06.2025 15:05, Oleksii Kurochko wrote:
>>>>>> --- a/xen/arch/riscv/p2m.c
>>>>>> +++ b/xen/arch/riscv/p2m.c
>>>>>> @@ -387,6 +387,17 @@ static inline bool p2me_is_valid(struct p2m_domain *p2m, pte_t pte)
>>>>>>         return p2m_type_radix_get(p2m, pte) != p2m_invalid;
>>>>>>     }
>>>>>>     
>>>>>> +/*
>>>>>> + * pte_is_* helpers are checking the valid bit set in the
>>>>>> + * PTE but we have to check p2m_type instead (look at the comment above
>>>>>> + * p2me_is_valid())
>>>>>> + * Provide our own overlay to check the valid bit.
>>>>>> + */
>>>>>> +static inline bool p2me_is_mapping(struct p2m_domain *p2m, pte_t pte)
>>>>>> +{
>>>>>> +    return p2me_is_valid(p2m, pte) && (pte.pte & PTE_ACCESS_MASK);
>>>>>> +}
>>>>> Same question as on the earlier patch - does P2M type apply to intermediate
>>>>> page tables at all? (Conceptually it shouldn't.)
>>>> It doesn't matter whether it is an intermediate page table or a leaf PTE pointing
>>>> to a page — PTE should be valid. Considering that in the current implementation
>>>> it’s possible for PTE.v = 0 but P2M.v = 1, it is better to check P2M.v instead
>>>> of PTE.v.
>>> I'm confused by this reply. If you want to name 2nd level page table entries
>>> P2M - fine (but unhelpful). But then for any memory access there's only one
>>> of the two involved: A PTE (Xen accesses) or a P2M (guest accesses). Hence
>>> how can there be "PTE.v = 0 but P2M.v = 1"?
>> I think I understand your confusion, let me try to rephrase.
>>
>> The reason for having both|p2m_is_valid()| and|pte_is_valid()| is that I want to
>> have the ability to use the P2M PTE valid bit to track which pages were accessed
>> by a vCPU, so that cleaning and invalidating RAM associated with the guest vCPU
>> won't be too expensive, for example.
> I don't know what you're talking about here.

https://gitlab.com/xen-project/xen/-/blob/staging/xen/arch/arm/mmu/p2m.c#L1649

>
>> In this case, the P2M PTE valid bit will be set to 0, but the P2M PTE type bits
>> will be set to something other than|p2m_invalid| (even for a table entries),
>> so when an MMU fault occurs, we can properly resolve it.
>>
>> So, if the P2M PTE type (what|p2m_is_valid()| checks) is set to|p2m_invalid|, it
>> means that the valid bit (what|pte_is_valid()| checks) should be set to 0, so
>> the P2M PTE is genuinely invalid.
>>
>> It could also be the case that the P2M PTE type isn't|p2m_invalid (and P2M PTE valid will be intentionally set to 0 to have
>> ability to track which pages were accessed for the reason I wrote above)|, and when MMU fault occurs we could
>> properly handle it and set to 1 P2M PTE valid bit to 1...
>>
>>> An intermediate page table entry is something Xen controls entirely. Hence
>>> it has no (guest induced) type.
>> ... And actually it is a reason why it is needed to set a type even for an
>> intermediate page table entry.
>>
>> I hope now it is a lit bit clearer what and why was done.
> Sadly not. I still don't see what use the P2M type in of an intermediate page
> table is going to be. It surely can't reliably describe all of the entries that
> page table holds. Intermediate page tables and leaf pages are just too different
> to share a concept like this, I think. That said, I'll be happy to be shown code
> demonstrating the contrary.

Then a new p2m_type_t, p2m_table, would need to be introduced and used.
Would that be better?

I still need some type to be able to distinguish whether a p2m entry is valid or not
from the p2m management and hardware points of view.
If there is no need for such a distinction, why do all archs introduce p2m_invalid?
Isn't it enough just to use the P2M PTE valid bit?

>
>>>>>> +static struct page_info *p2m_alloc_page(struct domain *d)
>>>>>> +{
>>>>>> +    struct page_info *pg;
>>>>>> +
>>>>>> +    /*
>>>>>> +     * For hardware domain, there should be no limit in the number of pages that
>>>>>> +     * can be allocated, so that the kernel may take advantage of the extended
>>>>>> +     * regions. Hence, allocate p2m pages for hardware domains from heap.
>>>>>> +     */
>>>>>> +    if ( is_hardware_domain(d) )
>>>>>> +    {
>>>>>> +        pg = alloc_domheap_page(d, MEMF_no_owner);
>>>>>> +        if ( pg == NULL )
>>>>>> +            printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
>>>>>> +    }
>>>>> The comment looks to have been taken verbatim from Arm. Whatever "extended
>>>>> regions" are, does the same concept even exist on RISC-V?
>>>> Initially, I missed that it’s used only for Arm. Since it was mentioned in
>>>> |doc/misc/xen-command-line.pandoc|, I assumed it applied to all architectures.
>>>> But now I see that it’s Arm-specific:: ### ext_regions (Arm)
>>>>
>>>>> Also, special casing Dom0 like this has benefits, but also comes with a
>>>>> pitfall: If the system's out of memory, allocations will fail. A pre-
>>>>> populated pool would avoid that (until exhausted, of course). If special-
>>>>> casing of Dom0 is needed, I wonder whether ...
>>>>>
>>>>>> +    else
>>>>>> +    {
>>>>>> +        spin_lock(&d->arch.paging.lock);
>>>>>> +        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
>>>>>> +        spin_unlock(&d->arch.paging.lock);
>>>>>> +    }
>>>>> ... going this path but with a Dom0-only fallback to general allocation
>>>>> wouldn't be the better route.
>>>> IIUC, then it should be something like:
>>>>      static struct page_info *p2m_alloc_page(struct domain *d)
>>>>      {
>>>>          struct page_info *pg;
>>>>          
>>>>          spin_lock(&d->arch.paging.lock);
>>>>          pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
> Note this: Here you _remove_ from freelist, because you want to actually
> use the page. Then clearly ...
>
>>>>          spin_unlock(&d->arch.paging.lock);
>>>>
>>>>          if ( !pg && is_hardware_domain(d) )
>>>>          {
>>>>                /* Need to allocate more memory from domheap */
>>>>                pg = alloc_domheap_page(d, MEMF_no_owner);
>>>>                if ( pg == NULL )
>>>>                {
>>>>                    printk(XENLOG_ERR "Failed to allocate pages.\n");
>>>>                    return pg;
>>>>                }
>>>>                ACCESS_ONCE(d->arch.paging.total_pages)++;
>>>>                page_list_add_tail(pg, &d->arch.paging.freelist);
>>>>          }
>>>>       
>>>>          return pg;
>>>> }
>>>>
>>>> And basically use|d->arch.paging.freelist| for both dom0less and dom0 domains,
>>>> with the only difference being that in the case of Dom0,|d->arch.paging.freelist |could be extended.
>>>>
>>>> Do I understand your idea correctly?
>>> Broadly yes, but not in the details. For example, I don't think such a
>>> page allocated from the general heap would want appending to freelist.
>>> Commentary and alike also would want tidying.
>> Could you please explain why it wouldn't want appending to freelist?
> ... adding to freelist here is wrong: You want to use this separately
> allocated page, too. Else once it is freed it'll be added to freelist
> a 2nd time, leading to a corrupt list.

Got it, I understand why it shouldn’t be added to the freelist.

Incrementing total_pages still makes sense, right?
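
I.e. something like the following sketch (field names as in this patch; whether
the total_pages increment belongs here is the open question above):

    static struct page_info *p2m_alloc_page(struct domain *d)
    {
        struct page_info *pg;

        spin_lock(&d->arch.paging.lock);
        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
        spin_unlock(&d->arch.paging.lock);

        if ( !pg && is_hardware_domain(d) )
        {
            /* The pool is exhausted: fall back to the heap for the hwdom. */
            pg = alloc_domheap_page(d, MEMF_no_owner);
            if ( pg == NULL )
            {
                printk(XENLOG_G_ERR "Failed to allocate P2M page for hwdom.\n");
                return NULL;
            }

            /*
             * Account for the extra page, but do not put it on the freelist:
             * it is handed straight to the caller, and queueing it here would
             * let it end up on the list a second time once it is freed.
             */
            ACCESS_ONCE(d->arch.paging.total_pages)++;
        }

        return pg;
    }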

>
>>> And of course going forward, for split hardware and control domains the
>>> latter may want similar treatment.
>> Could you please clarify what is the difference between hardware and control
>> domains?
>> I thought that it is the same or is it for the case when we have
>> dom0 (control domain) which runs domD (hardware domain) and guest domain?
> That's the common case, yes, but conceptually the two can be separate.
> And if you've followed recent discussions on the list you would also
> have noticed that work is being done in that direction. (But this was
> really a forward-looking comment; I didn't mean to make you cover that
> case right away. Just wanted you to be aware.)

Thanks.

~ Oleksii
Re: [PATCH v2 14/17] xen/riscv: implement p2m_next_level()
Posted by Jan Beulich 3 months, 2 weeks ago
On 17.07.2025 11:42, Oleksii Kurochko wrote:
> On 7/16/25 6:12 PM, Jan Beulich wrote:
>> On 16.07.2025 17:53, Oleksii Kurochko wrote:
>>> On 7/16/25 1:43 PM, Jan Beulich wrote:
>>>> On 16.07.2025 13:32, Oleksii Kurochko wrote:
>>>>> On 7/2/25 10:35 AM, Jan Beulich wrote:
>>>>>> On 10.06.2025 15:05, Oleksii Kurochko wrote:
>>>>>>> --- a/xen/arch/riscv/p2m.c
>>>>>>> +++ b/xen/arch/riscv/p2m.c
>>>>>>> @@ -387,6 +387,17 @@ static inline bool p2me_is_valid(struct p2m_domain *p2m, pte_t pte)
>>>>>>>         return p2m_type_radix_get(p2m, pte) != p2m_invalid;
>>>>>>>     }
>>>>>>>     
>>>>>>> +/*
>>>>>>> + * pte_is_* helpers are checking the valid bit set in the
>>>>>>> + * PTE but we have to check p2m_type instead (look at the comment above
>>>>>>> + * p2me_is_valid())
>>>>>>> + * Provide our own overlay to check the valid bit.
>>>>>>> + */
>>>>>>> +static inline bool p2me_is_mapping(struct p2m_domain *p2m, pte_t pte)
>>>>>>> +{
>>>>>>> +    return p2me_is_valid(p2m, pte) && (pte.pte & PTE_ACCESS_MASK);
>>>>>>> +}
>>>>>> Same question as on the earlier patch - does P2M type apply to intermediate
>>>>>> page tables at all? (Conceptually it shouldn't.)
>>>>> It doesn't matter whether it is an intermediate page table or a leaf PTE pointing
>>>>> to a page — PTE should be valid. Considering that in the current implementation
>>>>> it’s possible for PTE.v = 0 but P2M.v = 1, it is better to check P2M.v instead
>>>>> of PTE.v.
>>>> I'm confused by this reply. If you want to name 2nd level page table entries
>>>> P2M - fine (but unhelpful). But then for any memory access there's only one
>>>> of the two involved: A PTE (Xen accesses) or a P2M (guest accesses). Hence
>>>> how can there be "PTE.v = 0 but P2M.v = 1"?
>>> I think I understand your confusion, let me try to rephrase.
>>>
>>> The reason for having both|p2m_is_valid()| and|pte_is_valid()| is that I want to
>>> have the ability to use the P2M PTE valid bit to track which pages were accessed
>>> by a vCPU, so that cleaning and invalidating RAM associated with the guest vCPU
>>> won't be too expensive, for example.
>> I don't know what you're talking about here.
> 
> https://gitlab.com/xen-project/xen/-/blob/staging/xen/arch/arm/mmu/p2m.c#L1649

How does that Arm function matter here? Aiui you don't need anything like that
in RISC-V, as caches there don't need disabling temporarily.

>>> In this case, the P2M PTE valid bit will be set to 0, but the P2M PTE type bits
>>> will be set to something other than|p2m_invalid| (even for a table entries),
>>> so when an MMU fault occurs, we can properly resolve it.
>>>
>>> So, if the P2M PTE type (what|p2m_is_valid()| checks) is set to|p2m_invalid|, it
>>> means that the valid bit (what|pte_is_valid()| checks) should be set to 0, so
>>> the P2M PTE is genuinely invalid.
>>>
>>> It could also be the case that the P2M PTE type isn't|p2m_invalid (and P2M PTE valid will be intentionally set to 0 to have
>>> ability to track which pages were accessed for the reason I wrote above)|, and when MMU fault occurs we could
>>> properly handle it and set to 1 P2M PTE valid bit to 1...
>>>
>>>> An intermediate page table entry is something Xen controls entirely. Hence
>>>> it has no (guest induced) type.
>>> ... And actually it is a reason why it is needed to set a type even for an
>>> intermediate page table entry.
>>>
>>> I hope now it is a lit bit clearer what and why was done.
>> Sadly not. I still don't see what use the P2M type in of an intermediate page
>> table is going to be. It surely can't reliably describe all of the entries that
>> page table holds. Intermediate page tables and leaf pages are just too different
>> to share a concept like this, I think. That said, I'll be happy to be shown code
>> demonstrating the contrary.
> 
> Then it is needed to introduce new p2m_type_t - p2m_table and use it.
> Would it be better?
> 
> I still need some type to have ability to distinguish if p2m is valid or not from
> p2m management and hardware point of view.
> If there is no need for such distinguish why all archs introduce p2m_invalid?
> Isn't enough just to use P2M PTE valid bit?

At least on x86 we don't tag intermediate page tables with P2M types. For
ordinary leaf entries the situation is different, as there may be varying
reasons why a PTE has its valid (on x86: present) bit cleared. Hence the
type is relevant there, just to know what to do when a page is accessed
through such a not-present PTE.
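
(Purely as an illustration of that last point, not as a claim about what the
patch does: a minimal sketch of a fault handler for a leaf PTE whose valid bit
is clear. p2m_type_radix_get() and p2m_invalid are the names used in this
thread; the function itself, and the use of PTE_VALID/write_pte(), are
assumptions.)

static int resolve_not_valid_leaf(struct p2m_domain *p2m, pte_t *entry)
{
    pte_t pte = *entry;

    /* No type recorded: the entry is genuinely invalid, the fault is fatal. */
    if ( p2m_type_radix_get(p2m, pte) == p2m_invalid )
        return -EFAULT;

    /*
     * The valid bit was cleared on purpose (e.g. to track guest accesses);
     * the recorded type says the mapping itself is still good, so simply
     * re-validate the entry and let the guest retry the access.
     */
    pte.pte |= PTE_VALID;
    write_pte(entry, pte);

    return 0;
}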

Jan

Re: [PATCH v2 14/17] xen/riscv: implement p2m_next_level()
Posted by Oleksii Kurochko 3 months, 2 weeks ago
On 7/17/25 12:37 PM, Jan Beulich wrote:
> On 17.07.2025 11:42, Oleksii Kurochko wrote:
>> On 7/16/25 6:12 PM, Jan Beulich wrote:
>>> On 16.07.2025 17:53, Oleksii Kurochko wrote:
>>>> On 7/16/25 1:43 PM, Jan Beulich wrote:
>>>>> On 16.07.2025 13:32, Oleksii Kurochko wrote:
>>>>>> On 7/2/25 10:35 AM, Jan Beulich wrote:
>>>>>>> On 10.06.2025 15:05, Oleksii Kurochko wrote:
>>>>>>>> --- a/xen/arch/riscv/p2m.c
>>>>>>>> +++ b/xen/arch/riscv/p2m.c
>>>>>>>> @@ -387,6 +387,17 @@ static inline bool p2me_is_valid(struct p2m_domain *p2m, pte_t pte)
>>>>>>>>          return p2m_type_radix_get(p2m, pte) != p2m_invalid;
>>>>>>>>      }
>>>>>>>>      
>>>>>>>> +/*
>>>>>>>> + * pte_is_* helpers are checking the valid bit set in the
>>>>>>>> + * PTE but we have to check p2m_type instead (look at the comment above
>>>>>>>> + * p2me_is_valid())
>>>>>>>> + * Provide our own overlay to check the valid bit.
>>>>>>>> + */
>>>>>>>> +static inline bool p2me_is_mapping(struct p2m_domain *p2m, pte_t pte)
>>>>>>>> +{
>>>>>>>> +    return p2me_is_valid(p2m, pte) && (pte.pte & PTE_ACCESS_MASK);
>>>>>>>> +}
>>>>>>> Same question as on the earlier patch - does P2M type apply to intermediate
>>>>>>> page tables at all? (Conceptually it shouldn't.)
>>>>>> It doesn't matter whether it is an intermediate page table or a leaf PTE pointing
>>>>>> to a page — PTE should be valid. Considering that in the current implementation
>>>>>> it’s possible for PTE.v = 0 but P2M.v = 1, it is better to check P2M.v instead
>>>>>> of PTE.v.
>>>>> I'm confused by this reply. If you want to name 2nd level page table entries
>>>>> P2M - fine (but unhelpful). But then for any memory access there's only one
>>>>> of the two involved: A PTE (Xen accesses) or a P2M (guest accesses). Hence
>>>>> how can there be "PTE.v = 0 but P2M.v = 1"?
>>>> I think I understand your confusion, let me try to rephrase.
>>>>
>>>> The reason for having both p2m_is_valid() and pte_is_valid() is that I want to
>>>> have the ability to use the P2M PTE valid bit to track which pages were accessed
>>>> by a vCPU, so that cleaning and invalidating RAM associated with the guest vCPU
>>>> won't be too expensive, for example.
>>> I don't know what you're talking about here.
>> https://gitlab.com/xen-project/xen/-/blob/staging/xen/arch/arm/mmu/p2m.c#L1649
> How does that Arm function matter here? Aiui you don't need anything like that
> in RISC-V, as there caches don't need disabling temporarily.

I thought that it could be needed not only in the case when a d-cache is
temporarily disabled, but it seems that I was just wrong and all other cases
are handled by the cache coherency protocol.

>
>>>> In this case, the P2M PTE valid bit will be set to 0, but the P2M PTE type bits
>>>> will be set to something other than p2m_invalid (even for table entries),
>>>> so when an MMU fault occurs, we can properly resolve it.
>>>>
>>>> So, if the P2M PTE type (what p2m_is_valid() checks) is set to p2m_invalid, it
>>>> means that the valid bit (what pte_is_valid() checks) should be set to 0, so
>>>> the P2M PTE is genuinely invalid.
>>>>
>>>> It could also be the case that the P2M PTE type isn't p2m_invalid (and the P2M PTE
>>>> valid bit will intentionally be set to 0 to have the ability to track which pages
>>>> were accessed, for the reason I wrote above), and when an MMU fault occurs we could
>>>> properly handle it and set the P2M PTE valid bit back to 1...
>>>>
>>>>> An intermediate page table entry is something Xen controls entirely. Hence
>>>>> it has no (guest induced) type.
>>>> ... And actually that is the reason why a type needs to be set even for an
>>>> intermediate page table entry.
>>>>
>>>> I hope it is now a little bit clearer what was done and why.
>>> Sadly not. I still don't see what use the P2M type of an intermediate page
>>> table is going to be. It surely can't reliably describe all of the entries that
>>> page table holds. Intermediate page tables and leaf pages are just too different
>>> to share a concept like this, I think. That said, I'll be happy to be shown code
>>> demonstrating the contrary.
>> Then a new p2m_type_t - p2m_table - needs to be introduced and used.
>> Would that be better?
>>
>> I still need some type to be able to distinguish whether a p2m entry is valid or not
>> from the p2m management and from the hardware point of view.
>> If there is no need for such a distinction, why do all arches introduce p2m_invalid?
>> Isn't it enough to just use the P2M PTE valid bit?
> At least on x86 we don't tag intermediate page tables with P2M types. For
> ordinary leaf entries the situation is different, as there may be varying
> reasons why a PTE has its valid (on x86: present) bit cleared. Hence the
> type is relevant there, just to know what to do when a page is accessed
> through such a not-present PTE.

I think that I got your idea now.

Does it make sense to have such an optimization: when a 2MB memory range was
mapped using 4KB pages instead of one superpage, could it be useful to
invalidate just the L1 page table entry which corresponds to the start of
this 2MB memory range, instead of invalidating each entry at L0?
If that could be useful, then intermediate page tables should be tagged too.
Arm has such use cases:
   https://gitlab.com/xen-project/people/olkur/xen/-/blob/staging/xen/arch/arm/mmu/p2m.c#L1286
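
(To make the question more concrete, here is a rough sketch of the kind of
"invalidation" meant above, loosely modelled on the Arm code linked to; it is
not actual RISC-V code, and PAGETABLE_ENTRIES, pte_is_valid(), PTE_VALID and
write_pte() are only assumed names. Only the hardware valid bit of each entry
in one table page is cleared; type and attributes stay intact so a later fault
can re-validate lazily.)

static void invalidate_table(mfn_t mfn)
{
    pte_t *table = map_domain_page(mfn);
    unsigned int i;

    for ( i = 0; i < PAGETABLE_ENTRIES; i++ )
    {
        pte_t pte = table[i];

        if ( !pte_is_valid(pte) )
            continue;

        /* Keep type/attributes; only drop the hardware valid bit. */
        pte.pte &= ~PTE_VALID;
        write_pte(&table[i], pte);
    }

    unmap_domain_page(table);
}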

~ Oleksii
Re: [PATCH v2 14/17] xen/riscv: implement p2m_next_level()
Posted by Jan Beulich 3 months, 1 week ago
On 18.07.2025 13:19, Oleksii Kurochko wrote:
> On 7/17/25 12:37 PM, Jan Beulich wrote:
>> On 17.07.2025 11:42, Oleksii Kurochko wrote:
>>> On 7/16/25 6:12 PM, Jan Beulich wrote:
>>>> On 16.07.2025 17:53, Oleksii Kurochko wrote:
>>>>> In this case, the P2M PTE valid bit will be set to 0, but the P2M PTE type bits
>>>>> will be set to something other than p2m_invalid (even for table entries),
>>>>> so when an MMU fault occurs, we can properly resolve it.
>>>>>
>>>>> So, if the P2M PTE type (what p2m_is_valid() checks) is set to p2m_invalid, it
>>>>> means that the valid bit (what pte_is_valid() checks) should be set to 0, so
>>>>> the P2M PTE is genuinely invalid.
>>>>>
>>>>> It could also be the case that the P2M PTE type isn't p2m_invalid (and the P2M PTE
>>>>> valid bit will intentionally be set to 0 to have the ability to track which pages
>>>>> were accessed, for the reason I wrote above), and when an MMU fault occurs we could
>>>>> properly handle it and set the P2M PTE valid bit back to 1...
>>>>>
>>>>>> An intermediate page table entry is something Xen controls entirely. Hence
>>>>>> it has no (guest induced) type.
>>>>> ... And actually that is the reason why a type needs to be set even for an
>>>>> intermediate page table entry.
>>>>>
>>>>> I hope it is now a little bit clearer what was done and why.
>>>> Sadly not. I still don't see what use the P2M type of an intermediate page
>>>> table is going to be. It surely can't reliably describe all of the entries that
>>>> page table holds. Intermediate page tables and leaf pages are just too different
>>>> to share a concept like this, I think. That said, I'll be happy to be shown code
>>>> demonstrating the contrary.
>>> Then a new p2m_type_t - p2m_table - needs to be introduced and used.
>>> Would that be better?
>>>
>>> I still need some type to be able to distinguish whether a p2m entry is valid or not
>>> from the p2m management and from the hardware point of view.
>>> If there is no need for such a distinction, why do all arches introduce p2m_invalid?
>>> Isn't it enough to just use the P2M PTE valid bit?
>> At least on x86 we don't tag intermediate page tables with P2M types. For
>> ordinary leaf entries the situation is different, as there may be varying
>> reasons why a PTE has its valid (on x86: present) bit cleared. Hence the
>> type is relevant there, just to know what to do when a page is accessed
>> through such a not-present PTE.
> 
> I think that I got your idea now.
> 
> Does it make sense to have such an optimization: when a 2MB memory range was
> mapped using 4KB pages instead of one superpage, could it be useful to
> invalidate just the L1 page table entry which corresponds to the start of
> this 2MB memory range, instead of invalidating each entry at L0?
> If that could be useful, then intermediate page tables should be tagged too.
> Arm has such use cases:
>    https://gitlab.com/xen-project/people/olkur/xen/-/blob/staging/xen/arch/arm/mmu/p2m.c#L1286

I don't currently see how that's related to the topic at hand.

Furthermore range-constrained TLB flushing is never at just an address, i.e.
"L1 which corresponds to the start of this 2mb memory range" isn't meaningful
here. It's always a range (typically expressed by address and size), and it
always needs to be the full range that is invalidated. This can be a solitary
low-level flush operation when you know a large page mapping would _not_ be
split. When splitting is done in software or when hardware may split behind
your back, you always need to invalidate the entire range. Or else, in your
example, 4k TLB entries may remain for any but the first page of the 2M
super-page. (Whether such a range can still be done in a single invalidation
operation is a separate question. But I don't see how maintaining the type
at the L1 level would help there.)
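
(To put the "full range" point into a hedged sketch: once a 2M mapping may have
been split into 4K mappings, every page of the 2M range has to be flushed, not
just its first address. flush_guest_tlb_one() is purely hypothetical here and
stands for whatever per-page flush primitive ends up being used.)

static void flush_split_2m_range(gfn_t start)
{
    unsigned long i;

    /* Up to 512 4K TLB entries may exist for the (formerly 2M) range. */
    for ( i = 0; i < (MB(2) >> PAGE_SHIFT); i++ )
        flush_guest_tlb_one(gfn_add(start, i));
}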

Jan