[PATCH v3 16/20] xen/riscv: Implement superpage splitting for p2m mappings

Oleksii Kurochko posted 20 patches 3 months ago
Posted by Oleksii Kurochko 3 months ago
Add support for breaking down large memory mappings ("superpages") in the
RISC-V p2m code so that smaller, more precise mappings ("finer-grained
entries") can be inserted into lower levels of the page table hierarchy.

To implement this, the following is done:
- Introduce p2m_split_superpage(): recursively shatters a superpage into
  smaller page table entries down to the target level, preserving the
  original permissions and attributes.
- Update p2m_set_entry() to invoke superpage splitting when inserting
  entries at lower levels within a superpage-mapped region.

This implementation is based on the ARM code, with modifications to the
part that follows the BBM (break-before-make) approach; some parts are
simplified because, according to the RISC-V spec:
  It is permitted for multiple address-translation cache entries to co-exist
  for the same address. This represents the fact that in a conventional
  TLB hierarchy, it is possible for multiple entries to match a single
  address if, for example, a page is upgraded to a superpage without first
  clearing the original non-leaf PTE’s valid bit and executing an SFENCE.VMA
  with rs1=x0, or if multiple TLBs exist in parallel at a given level of the
  hierarchy. In this case, just as if an SFENCE.VMA is not executed between
  a write to the memory-management tables and subsequent implicit read of the
  same address: it is unpredictable whether the old non-leaf PTE or the new
  leaf PTE is used, but the behavior is otherwise well defined.
In contrast to the Arm architecture, where BBM is mandatory and failing to
use it in some cases can lead to CPU instability, RISC-V guarantees
stability; the behavior remains safe, though it is unpredictable which
translation will be used.

Additionally, the page table walk logic has been adjusted, as ARM uses the
opposite level numbering compared to RISC-V.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V3:
 - Move page_list_add(page, &p2m->pages) inside p2m_alloc_page().
 - Use 'unsigned long' for local variable 'i' in p2m_split_superpage().
 - Update the comment above if ( next_level != target ) in p2m_split_superpage().
 - Reverse cycle to iterate through page table levels in p2m_set_entry().
 - Update p2m_split_superpage() with the same changes which are done in the
   patch "P2M: Don't try to free the existing PTE if we can't allocate a new table".
---
Changes in V2:
 - New patch. It was a part of a big patch "xen/riscv: implement p2m mapping
   functionality" which was split into smaller ones.
 - Update the comment above the loop which creates new page tables, as
   RISC-V traverses page tables in the opposite order to ARM.
 - RISC-V doesn't require BBM, so there is no need for invalidation and
   TLB flushing before updating a PTE.
---
 xen/arch/riscv/p2m.c | 118 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 117 insertions(+), 1 deletion(-)

diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index e04cfde8c7..e9e6818da2 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -539,6 +539,91 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
     p2m_free_page(p2m, pg);
 }
 
+static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
+                                unsigned int level, unsigned int target,
+                                const unsigned int *offsets)
+{
+    struct page_info *page;
+    unsigned long i;
+    pte_t pte, *table;
+    bool rv = true;
+
+    /* Convenience aliases */
+    mfn_t mfn = pte_get_mfn(*entry);
+    unsigned int next_level = level - 1;
+    unsigned int level_order = XEN_PT_LEVEL_ORDER(next_level);
+
+    /*
+     * This should only be called with level > target and with an entry
+     * that is a superpage.
+     */
+    ASSERT(level > target);
+    ASSERT(pte_is_superpage(*entry, level));
+
+    page = p2m_alloc_page(p2m->domain);
+    if ( !page )
+    {
+        /*
+         * The caller is in charge of freeing the sub-tree.
+         * As we didn't manage to allocate anything, just tell the
+         * caller there is nothing to free by invalidating the PTE.
+         */
+        memset(entry, 0, sizeof(*entry));
+        return false;
+    }
+
+    table = __map_domain_page(page);
+
+    /*
+     * We are either splitting a second level 1G page into 512 first level
+     * 2M pages, or a first level 2M page into 512 zero level 4K pages.
+     */
+    for ( i = 0; i < XEN_PT_ENTRIES; i++ )
+    {
+        pte_t *new_entry = table + i;
+
+        /*
+         * Use the content of the superpage entry and override
+         * the necessary fields. So the correct permissions are kept.
+         */
+        pte = *entry;
+        pte_set_mfn(&pte, mfn_add(mfn, i << level_order));
+
+        write_pte(new_entry, pte);
+    }
+
+    /*
+     * Shatter the superpage in the new page down to the level at which
+     * we want to make the change.
+     * This is done outside the loop to avoid checking the offset
+     * for every entry to know whether the entry should be shattered.
+     */
+    if ( next_level != target )
+        rv = p2m_split_superpage(p2m, table + offsets[next_level],
+                                 level - 1, target, offsets);
+
+    if ( p2m->clean_pte )
+        clean_dcache_va_range(table, PAGE_SIZE);
+
+    /*
+     * TODO: an inefficiency here: the caller almost certainly wants to map
+     *       the same page again, to update the one entry that caused the
+     *       request to shatter the page.
+     */
+    unmap_domain_page(table);
+
+    /*
+     * Even if we failed, we should (given the current implementation of
+     * how the sub-tree is freed when p2m_split_superpage() hasn't
+     * fully finished) install the newly allocated page table
+     * entry.
+     * The caller will be in charge of freeing the sub-tree.
+     */
+    p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_pte);
+
+    return rv;
+}
+
 /*
  * Insert an entry in the p2m. This should be called with a mapping
  * equal to a page/superpage.
@@ -603,7 +688,38 @@ static int p2m_set_entry(struct p2m_domain *p2m,
      */
     if ( level > target )
     {
-        panic("Shattering isn't implemented\n");
+        /* We need to split the original page. */
+        pte_t split_pte = *entry;
+
+        ASSERT(pte_is_superpage(*entry, level));
+
+        if ( !p2m_split_superpage(p2m, &split_pte, level, target, offsets) )
+        {
+            /* Free the allocated sub-tree */
+            p2m_free_subtree(p2m, split_pte, level);
+
+            rc = -ENOMEM;
+            goto out;
+        }
+
+        p2m_write_pte(entry, split_pte, p2m->clean_pte);
+
+        p2m->need_flush = true;
+
+        /* Then move to the level we want to make real changes */
+        for ( ; level > target; level-- )
+        {
+            rc = p2m_next_level(p2m, true, level, &table, offsets[level]);
+
+            /*
+             * The entry should be found and either be a table
+             * or a superpage if level 0 is not targeted
+             */
+            ASSERT(rc == P2M_TABLE_NORMAL ||
+                   (rc == P2M_TABLE_SUPER_PAGE && target > 0));
+        }
+
+        entry = table + offsets[level];
     }
 
     /*
-- 
2.50.1


Re: [PATCH v3 16/20] xen/riscv: Implement superpage splitting for p2m mappings
Posted by Jan Beulich 2 months, 2 weeks ago
On 31.07.2025 17:58, Oleksii Kurochko wrote:
> Add support for down large memory mappings ("superpages") in the RISC-V
> p2m mapping so that smaller, more precise mappings ("finer-grained entries")
> can be inserted into lower levels of the page table hierarchy.
> 
> To implement that the following is done:
> - Introduce p2m_split_superpage(): Recursively shatters a superpage into
>   smaller page table entries down to the target level, preserving original
>   permissions and attributes.
> - p2m_set_entry() updated to invoke superpage splitting when inserting
>   entries at lower levels within a superpage-mapped region.
> 
> This implementation is based on the ARM code, with modifications to the part
> that follows the BBM (break-before-make) approach, some parts are simplified
> as according to RISC-V spec:
>   It is permitted for multiple address-translation cache entries to co-exist
>   for the same address. This represents the fact that in a conventional
>   TLB hierarchy, it is possible for multiple entries to match a single
>   address if, for example, a page is upgraded to a superpage without first
>   clearing the original non-leaf PTE’s valid bit and executing an SFENCE.VMA
>   with rs1=x0, or if multiple TLBs exist in parallel at a given level of the
>   hierarchy. In this case, just as if an SFENCE.VMA is not executed between
>   a write to the memory-management tables and subsequent implicit read of the
>   same address: it is unpredictable whether the old non-leaf PTE or the new
>   leaf PTE is used, but the behavior is otherwise well defined.
> In contrast to the Arm architecture, where BBM is mandatory and failing to
> use it in some cases can lead to CPU instability, RISC-V guarantees
> stability, and the behavior remains safe — though unpredictable in terms of
> which translation will be used.
> 
> Additionally, the page table walk logic has been adjusted, as ARM uses the
> opposite number of levels compared to RISC-V.

As before, I think you mean "numbering".

> --- a/xen/arch/riscv/p2m.c
> +++ b/xen/arch/riscv/p2m.c
> @@ -539,6 +539,91 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
>      p2m_free_page(p2m, pg);
>  }
>  
> +static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
> +                                unsigned int level, unsigned int target,
> +                                const unsigned int *offsets)
> +{
> +    struct page_info *page;
> +    unsigned long i;
> +    pte_t pte, *table;
> +    bool rv = true;
> +
> +    /* Convenience aliases */
> +    mfn_t mfn = pte_get_mfn(*entry);
> +    unsigned int next_level = level - 1;
> +    unsigned int level_order = XEN_PT_LEVEL_ORDER(next_level);
> +
> +    /*
> +     * This should only be called with target != level and the entry is
> +     * a superpage.
> +     */
> +    ASSERT(level > target);
> +    ASSERT(pte_is_superpage(*entry, level));
> +
> +    page = p2m_alloc_page(p2m->domain);
> +    if ( !page )
> +    {
> +        /*
> +         * The caller is in charge to free the sub-tree.
> +         * As we didn't manage to allocate anything, just tell the
> +         * caller there is nothing to free by invalidating the PTE.
> +         */
> +        memset(entry, 0, sizeof(*entry));
> +        return false;
> +    }
> +
> +    table = __map_domain_page(page);
> +
> +    /*
> +     * We are either splitting a second level 1G page into 512 first level
> +     * 2M pages, or a first level 2M page into 512 zero level 4K pages.
> +     */

Such a comment is at risk of (silently) going stale when support for 512G
mappings is added. I wonder if it's really that informative to have here.

> +    for ( i = 0; i < XEN_PT_ENTRIES; i++ )
> +    {
> +        pte_t *new_entry = table + i;
> +
> +        /*
> +         * Use the content of the superpage entry and override
> +         * the necessary fields. So the correct permission are kept.
> +         */

It's not just permissions though? The memory type field also needs
retaining (and is being retained this way). Maybe better say "attributes"?

Jan

Re: [PATCH v3 16/20] xen/riscv: Implement superpage splitting for p2m mappings
Posted by Oleksii Kurochko 2 months, 2 weeks ago
On 8/11/25 1:59 PM, Jan Beulich wrote:
> On 31.07.2025 17:58, Oleksii Kurochko wrote:
>> Add support for down large memory mappings ("superpages") in the RISC-V
>> p2m mapping so that smaller, more precise mappings ("finer-grained entries")
>> can be inserted into lower levels of the page table hierarchy.
>>
>> To implement that the following is done:
>> - Introduce p2m_split_superpage(): Recursively shatters a superpage into
>>    smaller page table entries down to the target level, preserving original
>>    permissions and attributes.
>> - p2m_set_entry() updated to invoke superpage splitting when inserting
>>    entries at lower levels within a superpage-mapped region.
>>
>> This implementation is based on the ARM code, with modifications to the part
>> that follows the BBM (break-before-make) approach, some parts are simplified
>> as according to RISC-V spec:
>>    It is permitted for multiple address-translation cache entries to co-exist
>>    for the same address. This represents the fact that in a conventional
>>    TLB hierarchy, it is possible for multiple entries to match a single
>>    address if, for example, a page is upgraded to a superpage without first
>>    clearing the original non-leaf PTE’s valid bit and executing an SFENCE.VMA
>>    with rs1=x0, or if multiple TLBs exist in parallel at a given level of the
>>    hierarchy. In this case, just as if an SFENCE.VMA is not executed between
>>    a write to the memory-management tables and subsequent implicit read of the
>>    same address: it is unpredictable whether the old non-leaf PTE or the new
>>    leaf PTE is used, but the behavior is otherwise well defined.
>> In contrast to the Arm architecture, where BBM is mandatory and failing to
>> use it in some cases can lead to CPU instability, RISC-V guarantees
>> stability, and the behavior remains safe — though unpredictable in terms of
>> which translation will be used.
>>
>> Additionally, the page table walk logic has been adjusted, as ARM uses the
>> opposite number of levels compared to RISC-V.
> As before, I think you mean "numbering".

Yes, level numbering would be better.

>
>> --- a/xen/arch/riscv/p2m.c
>> +++ b/xen/arch/riscv/p2m.c
>> @@ -539,6 +539,91 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
>>       p2m_free_page(p2m, pg);
>>   }
>>   
>> +static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
>> +                                unsigned int level, unsigned int target,
>> +                                const unsigned int *offsets)
>> +{
>> +    struct page_info *page;
>> +    unsigned long i;
>> +    pte_t pte, *table;
>> +    bool rv = true;
>> +
>> +    /* Convenience aliases */
>> +    mfn_t mfn = pte_get_mfn(*entry);
>> +    unsigned int next_level = level - 1;
>> +    unsigned int level_order = XEN_PT_LEVEL_ORDER(next_level);
>> +
>> +    /*
>> +     * This should only be called with target != level and the entry is
>> +     * a superpage.
>> +     */
>> +    ASSERT(level > target);
>> +    ASSERT(pte_is_superpage(*entry, level));
>> +
>> +    page = p2m_alloc_page(p2m->domain);
>> +    if ( !page )
>> +    {
>> +        /*
>> +         * The caller is in charge to free the sub-tree.
>> +         * As we didn't manage to allocate anything, just tell the
>> +         * caller there is nothing to free by invalidating the PTE.
>> +         */
>> +        memset(entry, 0, sizeof(*entry));
>> +        return false;
>> +    }
>> +
>> +    table = __map_domain_page(page);
>> +
>> +    /*
>> +     * We are either splitting a second level 1G page into 512 first level
>> +     * 2M pages, or a first level 2M page into 512 zero level 4K pages.
>> +     */
> Such a comment is at risk of (silently) going stale when support for 512G
> mappings is added. I wonder if it's really that informative to have here.

Good point, I think we could really drop it.
Regarding support for 512G mappings: does it really make sense to support
such big mappings? It seems operations such as splitting or sub-entry
freeing could take pretty long under some circumstances.

>
>> +    for ( i = 0; i < XEN_PT_ENTRIES; i++ )
>> +    {
>> +        pte_t *new_entry = table + i;
>> +
>> +        /*
>> +         * Use the content of the superpage entry and override
>> +         * the necessary fields. So the correct permission are kept.
>> +         */
> It's not just permissions though? The memory type field also needs
> retaining (and is being retained this way). Maybe better say "attributes"?

Sure, I'll use "attributes" instead.

Thanks.

~ Oleksii
Re: [PATCH v3 16/20] xen/riscv: Implement superpage splitting for p2m mappings
Posted by Jan Beulich 2 months, 2 weeks ago
On 11.08.2025 17:19, Oleksii Kurochko wrote:
> On 8/11/25 1:59 PM, Jan Beulich wrote:
>> On 31.07.2025 17:58, Oleksii Kurochko wrote:
>>> --- a/xen/arch/riscv/p2m.c
>>> +++ b/xen/arch/riscv/p2m.c
>>> @@ -539,6 +539,91 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
>>>       p2m_free_page(p2m, pg);
>>>   }
>>>   
>>> +static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
>>> +                                unsigned int level, unsigned int target,
>>> +                                const unsigned int *offsets)
>>> +{
>>> +    struct page_info *page;
>>> +    unsigned long i;
>>> +    pte_t pte, *table;
>>> +    bool rv = true;
>>> +
>>> +    /* Convenience aliases */
>>> +    mfn_t mfn = pte_get_mfn(*entry);
>>> +    unsigned int next_level = level - 1;
>>> +    unsigned int level_order = XEN_PT_LEVEL_ORDER(next_level);
>>> +
>>> +    /*
>>> +     * This should only be called with target != level and the entry is
>>> +     * a superpage.
>>> +     */
>>> +    ASSERT(level > target);
>>> +    ASSERT(pte_is_superpage(*entry, level));
>>> +
>>> +    page = p2m_alloc_page(p2m->domain);
>>> +    if ( !page )
>>> +    {
>>> +        /*
>>> +         * The caller is in charge to free the sub-tree.
>>> +         * As we didn't manage to allocate anything, just tell the
>>> +         * caller there is nothing to free by invalidating the PTE.
>>> +         */
>>> +        memset(entry, 0, sizeof(*entry));
>>> +        return false;
>>> +    }
>>> +
>>> +    table = __map_domain_page(page);
>>> +
>>> +    /*
>>> +     * We are either splitting a second level 1G page into 512 first level
>>> +     * 2M pages, or a first level 2M page into 512 zero level 4K pages.
>>> +     */
>> Such a comment is at risk of (silently) going stale when support for 512G
>> mappings is added. I wonder if it's really that informative to have here.
> 
> Good point, I think we could really drop it.
> Regarding support for 512G mappings. Is it really make sense to support
> such big mappings?

I think so, yes (in the longer run). And yes, ...

> It seems like some operations as splitting or sub-entry
> freeing could be pretty long under some circumstances.

... such will need sorting.

Jan