Introduce functions required to perform a p2m context switch during
a vCPU context switch.
No mechanism is provided to atomically change vsatp and hgatp
together. Hence, to prevent speculative execution causing one
guest’s VS-stage translations to be cached under another guest’s
VMID, world-switch code zeroes vsatp in p2m_ctx_switch_from(),
then constructs the new hgatp and writes the new vsatp value in
p2m_ctx_switch_to().
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
xen/arch/riscv/include/asm/p2m.h | 4 ++
xen/arch/riscv/p2m.c | 81 ++++++++++++++++++++++++++++++++
2 files changed, 85 insertions(+)
diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index f63b5dec99b1..1d8c97326565 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -255,6 +255,10 @@ static inline bool p2m_is_locked(const struct p2m_domain *p2m)
struct page_info *p2m_get_page_from_gfn(struct p2m_domain *p2m, gfn_t gfn,
p2m_type_t *t);
+
+void p2m_ctx_switch_from(struct vcpu *p);
+void p2m_ctx_switch_to(struct vcpu *n);
+
#endif /* ASM__RISCV__P2M_H */
/*
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index 0abeb374c110..af68497c4200 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -1434,3 +1434,84 @@ struct page_info *p2m_get_page_from_gfn(struct p2m_domain *p2m, gfn_t gfn,
return get_page(page, p2m->domain) ? page : NULL;
}
+
+void p2m_ctx_switch_from(struct vcpu *p)
+{
+ /*
+ * No mechanism is provided to atomically change vsatp and hgatp
+ * together. Hence, to prevent speculative execution causing one
+ * guest’s VS-stage translations to be cached under another guest’s
+ * VMID, world-switch code should zero vsatp, then swap hgatp, then
+ * finally write the new vsatp value.
+ */
+ p->arch.vsatp = csr_read(CSR_VSATP);
+ csr_write(CSR_VSATP, 0);
+
+ /*
+ * No need for VS-stage TLB flush here:
+ * Changing satp.MODE from Bare to other modes and vice versa also
+ * takes effect immediately, without the need to execute an
+ * SFENCE.VMA instruction.
+ * Note that VSATP is just VS-mode’s version of SATP, so the above
+ * should hold for VSATP as well.
+ */
+
+ /*
+ * Nothing to do with HGATP as it is constructed each time when
+ * p2m_ctx_switch_to() is called.
+ */
+}
+
+void p2m_ctx_switch_to(struct vcpu *n)
+{
+ struct vcpu_vmid *p_vmid = &n->arch.vmid;
+ uint16_t old_vmid, new_vmid;
+ bool need_flush;
+
+ if ( is_idle_vcpu(n) )
+ return;
+
+ old_vmid = p_vmid->vmid;
+ need_flush = vmid_handle_vmenter(p_vmid);
+ new_vmid = p_vmid->vmid;
+
+#ifdef P2M_DEBUG
+ printk(XENLOG_INFO "%pv: old_vmid(%d) new_vmid(%d), need_flush(%d)\n",
+ n, old_vmid, new_vmid, need_flush);
+#endif
+
+ csr_write(CSR_HGATP, construct_hgatp(p2m_get_hostp2m(current->domain),
+ new_vmid));
+
+ if ( unlikely(need_flush) )
+ local_hfence_gvma_all();
+
+ /*
+ * According to the RISC-V specification, speculation can happen
+ * during an update of hgatp and vsatp:
+ * No mechanism is provided to atomically change vsatp and hgatp
+ * together. Hence, to prevent speculative execution causing one
+ * guest’s VS-stage translations to be cached under another guest’s
+ * VMID, world-switch code should zero vsatp, then swap hgatp, then
+ * finally write the new vsatp value. Similarly, if henvcfg.PBMTE
+ * need be world-switched, it should be switched after zeroing vsatp
+ * but before writing the new vsatp value, obviating the need to
+ * execute an HFENCE.VVMA instruction.
+ * So just flush the VS-stage and G-stage TLBs after both registers
+ * have been updated.
+ */
+ flush_tlb_guest_local();
+
+ /*
+ * The vsatp register is a VSXLEN-bit read/write register that is
+ * VS-mode’s version of supervisor register satp, so the following is
+ * true for VSATP registers:
+ * Changing satp.MODE from Bare to other modes and vice versa also takes
+ * effect immediately, without the need to execute an SFENCE.VMA
+ * instruction. Likewise, changes to satp.ASID take effect immediately.
+ * Given the above, and that a VS-stage TLB flush has already been done,
+ * there is no need to flush the VS-stage TLB after updating VSATP from
+ * Bare mode to the value stored in n->arch.vsatp.
+ */
+ csr_write(CSR_VSATP, n->arch.vsatp);
+}
--
2.52.0
On 02.02.2026 13:57, Oleksii Kurochko wrote:
> Introduce functions required to perform a p2m context switch during
> a vCPU context switch.
>
> As no mechanism is provided to atomically change vsatp and hgatp
> together.
This sentence doesn't parse; imo you simply want to drop "As".
> Hence, to prevent speculative execution causing one
> guest’s VS-stage translations to be cached under another guest’s
> VMID, world-switch code should zero vsatp in p2m_ctx_swith_from(),
> then construct new hgatp and write the new vsatp value in
> p2m_ctx_switch_to().
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
> xen/arch/riscv/include/asm/p2m.h | 4 ++
> xen/arch/riscv/p2m.c | 81 ++++++++++++++++++++++++++++++++
> 2 files changed, 85 insertions(+)
>
> --- a/xen/arch/riscv/include/asm/p2m.h
> +++ b/xen/arch/riscv/include/asm/p2m.h
> @@ -255,6 +255,10 @@ static inline bool p2m_is_locked(const struct p2m_domain *p2m)
> struct page_info *p2m_get_page_from_gfn(struct p2m_domain *p2m, gfn_t gfn,
> p2m_type_t *t);
>
> +
As before: No double blank lines please.
> +void p2m_ctx_switch_from(struct vcpu *p);
> +void p2m_ctx_switch_to(struct vcpu *n);
Like for the other functions, please consider s/ctx/ctxt/.
> --- a/xen/arch/riscv/p2m.c
> +++ b/xen/arch/riscv/p2m.c
> @@ -1434,3 +1434,84 @@ struct page_info *p2m_get_page_from_gfn(struct p2m_domain *p2m, gfn_t gfn,
>
> return get_page(page, p2m->domain) ? page : NULL;
> }
> +
> +void p2m_ctx_switch_from(struct vcpu *p)
> +{
> + /*
> + * No mechanism is provided to atomically change vsatp and hgatp
> + * together. Hence, to prevent speculative execution causing one
> + * guest’s VS-stage translations to be cached under another guest’s
> + * VMID, world-switch code should zero vsatp, then swap hgatp, then
> + * finally write the new vsatp value.
> + */
> + p->arch.vsatp = csr_read(CSR_VSATP);
> + csr_write(CSR_VSATP, 0);
Why two CSR accesses when one will do? RISC-V specifically has the CSRRW insn.
> + /*
> + * No need for VS-stage TLB flush here:
> + * Changing satp.MODE from Bare to other modes and vice versa also
> + * takes effect immediately, without the need to execute an
> + * SFENCE.VMA instruction.
> + * Note that VSATP is just VS-mode’s version of SATP, so the mentioned
> + * above should be true for VSATP.
> + */
> +
> + /*
> + * Nothing to do with HGATP as it is constructed each time when
> + * p2m_ctx_switch_to() is called.
> + */
> +}
> +
> +void p2m_ctx_switch_to(struct vcpu *n)
> +{
> + struct vcpu_vmid *p_vmid = &n->arch.vmid;
> + uint16_t old_vmid, new_vmid;
> + bool need_flush;
> +
> + if ( is_idle_vcpu(n) )
> + return;
Shouldn't the other function have such a check, too?
> + old_vmid = p_vmid->vmid;
> + need_flush = vmid_handle_vmenter(p_vmid);
As you're re-using x86'es model, I'm not quite sure why this would be needed.
I don't think we have such in x86; aiui this should be handled while actually
doing the VM entry.
> + new_vmid = p_vmid->vmid;
> +
> +#ifdef P2M_DEBUG
> + printk(XENLOG_INFO, "%pv: oldvmid(%d) new_vmid(%d), need_flush(%d)\n",
> + n, old_vmid, new_vmid, need_flush);
> +#endif
> +
> + csr_write(CSR_HGATP, construct_hgatp(p2m_get_hostp2m(current->domain),
> + new_vmid));
> +
> + if ( unlikely(need_flush) )
> + local_hfence_gvma_all();
> +
> + /*
> + * According to the RISC-V specification, speculation can happen
> + * during an update of hgatp and vsatp:
> + * No mechanism is provided to atomically change vsatp and hgatp
> + * together. Hence, to prevent speculative execution causing one
> + * guest’s VS-stage translations to be cached under another guest’s
> + * VMID, world-switch code should zero vsatp, then swap hgatp, then
> + * finally write the new vsatp value. Similarly, if henvcfg.PBMTE
> + * need be world-switched, it should be switched after zeroing vsatp
> + * but before writing the new vsatp value, obviating the need to
> + * execute an HFENCE.VVMA instruction.
> + * So just flush TLBs for VS-Stage and G-stage after both of regs are
> + * touched.
> + */
> + flush_tlb_guest_local();
Which "both regs" does the comment talk about? Doesn't the flush want to come
...
> + /*
> + * The vsatp register is a VSXLEN-bit read/write register that is
> + * VS-mode’s version of supervisor register satp, so the following is
> + * true for VSATP registers:
> + * Changing satp.MODE from Bare to other modes and vice versa also takes
> + * effect immediately, without the need to execute an SFENCE.VMA
> + * instruction. Likewise, changes to satp.ASID take effect immediately.
> + * Considering the mentioned above and that VS-stage TLB flush has been
> + * already done there is no need to flush VS-stage TLB after an update
> + * of VSATP from Bare mode to what is written in `n->arch.vsatp`.
> + */
> + csr_write(CSR_VSATP, n->arch.vsatp);
... after this? Then some of the commentary also doesn't look to be necessary.
Jan
On 2/4/26 4:02 PM, Jan Beulich wrote:
> On 02.02.2026 13:57, Oleksii Kurochko wrote:
>> Introduce functions required to perform a p2m context switch during
>> a vCPU context switch.
>>
>> As no mechanism is provided to atomically change vsatp and hgatp
>> together.
> This sentence doesn't parse; imo you simply want to drop "As".
>
>> Hence, to prevent speculative execution causing one
>> guest’s VS-stage translations to be cached under another guest’s
>> VMID, world-switch code should zero vsatp in p2m_ctx_swith_from(),
>> then construct new hgatp and write the new vsatp value in
>> p2m_ctx_switch_to().
>>
>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>> ---
>> xen/arch/riscv/include/asm/p2m.h | 4 ++
>> xen/arch/riscv/p2m.c | 81 ++++++++++++++++++++++++++++++++
>> 2 files changed, 85 insertions(+)
>>
>> --- a/xen/arch/riscv/p2m.c
>> +++ b/xen/arch/riscv/p2m.c
>> @@ -1434,3 +1434,84 @@ struct page_info *p2m_get_page_from_gfn(struct p2m_domain *p2m, gfn_t gfn,
>>
>> return get_page(page, p2m->domain) ? page : NULL;
>> }
>> +
>> +void p2m_ctx_switch_from(struct vcpu *p)
>> +{
>> + /*
>> + * No mechanism is provided to atomically change vsatp and hgatp
>> + * together. Hence, to prevent speculative execution causing one
>> + * guest’s VS-stage translations to be cached under another guest’s
>> + * VMID, world-switch code should zero vsatp, then swap hgatp, then
>> + * finally write the new vsatp value.
>> + */
>> + p->arch.vsatp = csr_read(CSR_VSATP);
>> + csr_write(CSR_VSATP, 0);
> Why two CSR accesses when one will do? RISC-V specifically has the CSRRW insn.
csr_swap() could really be used here instead, thanks.
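Roughly (a sketch only, assuming csr_swap() returns the old CSR value while
writing the new one, i.e. a single csrrw):

    /* Save the outgoing vCPU's vsatp and clear the CSR in one csrrw. */
    p->arch.vsatp = csr_swap(CSR_VSATP, 0);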
>
>> + /*
>> + * No need for VS-stage TLB flush here:
>> + * Changing satp.MODE from Bare to other modes and vice versa also
>> + * takes effect immediately, without the need to execute an
>> + * SFENCE.VMA instruction.
>> + * Note that VSATP is just VS-mode’s version of SATP, so the mentioned
>> + * above should be true for VSATP.
>> + */
>> +
>> + /*
>> + * Nothing to do with HGATP as it is constructed each time when
>> + * p2m_ctx_switch_to() is called.
>> + */
>> +}
>> +
>> +void p2m_ctx_switch_to(struct vcpu *n)
>> +{
>> + struct vcpu_vmid *p_vmid = &n->arch.vmid;
>> + uint16_t old_vmid, new_vmid;
>> + bool need_flush;
>> +
>> + if ( is_idle_vcpu(n) )
>> + return;
> Shouldn't the other function have such a check, too?
Yes, it should. I will add the check.
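Something along these lines (a sketch only; the csr_swap() use from above is
assumed):

    void p2m_ctx_switch_from(struct vcpu *p)
    {
        if ( is_idle_vcpu(p) )
            return;

        /* Save and clear vsatp in one csrrw, as discussed above. */
        p->arch.vsatp = csr_swap(CSR_VSATP, 0);
    }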
>
>> + old_vmid = p_vmid->vmid;
>> + need_flush = vmid_handle_vmenter(p_vmid);
> As you're re-using x86'es model, I'm not quite sure why this would be needed.
> I don't think we have such in x86; aiui this should be handled while actually
> doing the VM entry.
I can move this to the end of do_trap().
However, isn’t this effectively the same?
If a context switch to vCPU n happens, it means that once the trap is fully handled,
we will enter vCPU n.
The same would happen if the changes were put at the end of do_trap(), before
control is handed back to the VM.
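For illustration, the VM-entry handling could look roughly like this (a sketch
only; the exact hook at the tail of do_trap() and its guard are assumptions):

    /* At the tail of do_trap(), just before returning to the guest. */
    if ( !is_idle_vcpu(current) )
    {
        struct vcpu_vmid *vmid = &current->arch.vmid;

        /* Re-check the VMID and flush the G-stage TLB on roll-over. */
        if ( unlikely(vmid_handle_vmenter(vmid)) )
            local_hfence_gvma_all();

        csr_write(CSR_HGATP,
                  construct_hgatp(p2m_get_hostp2m(current->domain),
                                  vmid->vmid));
    }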
>
>> + new_vmid = p_vmid->vmid;
>> +
>> +#ifdef P2M_DEBUG
>> + printk(XENLOG_INFO, "%pv: oldvmid(%d) new_vmid(%d), need_flush(%d)\n",
>> + n, old_vmid, new_vmid, need_flush);
>> +#endif
>> +
>> + csr_write(CSR_HGATP, construct_hgatp(p2m_get_hostp2m(current->domain),
>> + new_vmid));
>> +
>> + if ( unlikely(need_flush) )
>> + local_hfence_gvma_all();
>> +
>> + /*
>> + * According to the RISC-V specification, speculation can happen
>> + * during an update of hgatp and vsatp:
>> + * No mechanism is provided to atomically change vsatp and hgatp
>> + * together. Hence, to prevent speculative execution causing one
>> + * guest’s VS-stage translations to be cached under another guest’s
>> + * VMID, world-switch code should zero vsatp, then swap hgatp, then
>> + * finally write the new vsatp value. Similarly, if henvcfg.PBMTE
>> + * need be world-switched, it should be switched after zeroing vsatp
>> + * but before writing the new vsatp value, obviating the need to
>> + * execute an HFENCE.VVMA instruction.
>> + * So just flush TLBs for VS-Stage and G-stage after both of regs are
>> + * touched.
>> + */
>> + flush_tlb_guest_local();
> Which "both regs" does the comment talk about? Doesn't the flush want to come
> ...
>
>> + /*
>> + * The vsatp register is a VSXLEN-bit read/write register that is
>> + * VS-mode’s version of supervisor register satp, so the following is
>> + * true for VSATP registers:
>> + * Changing satp.MODE from Bare to other modes and vice versa also takes
>> + * effect immediately, without the need to execute an SFENCE.VMA
>> + * instruction. Likewise, changes to satp.ASID take effect immediately.
>> + * Considering the mentioned above and that VS-stage TLB flush has been
>> + * already done there is no need to flush VS-stage TLB after an update
>> + * of VSATP from Bare mode to what is written in `n->arch.vsatp`.
>> + */
>> + csr_write(CSR_VSATP, n->arch.vsatp);
> ... after this? Then some of the commentary also doesn't look to be necessary.
I think there is no need for a guest TLB flush here or above
(i.e. flush_tlb_guest_local()), because:
- If CSR_VSATP is set to 0, or changed from 0 to a non-zero value, no speculation
can occur. Therefore, the guest TLB cannot be polluted with unwanted entries,
and no flush is required.
- If need_flush is false, it means that no VMID wraparound has occurred.
In that case, the VMID is still unique, so we are safe and no guest TLB flush is
needed.
Additionally, it seems that I do not need to introduce flush_tlb_guest_local() at
all, since local_hfence_gvma_all() already provides the same functionality.
Instead, it probably makes sense to introduce an HFENCE_VVMA() helper as a generic
wrapper around the hfence.vvma instruction, and then reuse it inside
local_hfence_gvma_all().
~ Oleksii
On 05.02.2026 11:32, Oleksii Kurochko wrote:
> On 2/4/26 4:02 PM, Jan Beulich wrote:
>> On 02.02.2026 13:57, Oleksii Kurochko wrote:
>>> +void p2m_ctx_switch_to(struct vcpu *n)
>>> +{
>>> + struct vcpu_vmid *p_vmid = &n->arch.vmid;
>>> + uint16_t old_vmid, new_vmid;
>>> + bool need_flush;
>>> +
>>> + if ( is_idle_vcpu(n) )
>>> + return;
>> Shouldn't the other function have such a check, too?
>
> Yes, it should. I will add the check.
>
>>> + old_vmid = p_vmid->vmid;
>>> + need_flush = vmid_handle_vmenter(p_vmid);
>> As you're re-using x86'es model, I'm not quite sure why this would be needed.
>> I don't think we have such in x86; aiui this should be handled while actually
>> doing the VM entry.
>
> I can move this to the end of do_trap().
>
> However, isn’t this effectively the same?
> If a context switch to vCPU n happens, it means that once the trap is fully handled,
> we will enter vCPU n.
> The same will be if to put the changes at the end of do_trap() before control
> will be given to VM.
Thing is - you will need it at the end of do_trap() (or wherever VM entry is
going to be dealt with) anyway, to cover situation other than context switch
as well.
>>> + new_vmid = p_vmid->vmid;
>>> +
>>> +#ifdef P2M_DEBUG
>>> + printk(XENLOG_INFO, "%pv: oldvmid(%d) new_vmid(%d), need_flush(%d)\n",
>>> + n, old_vmid, new_vmid, need_flush);
>>> +#endif
>>> +
>>> + csr_write(CSR_HGATP, construct_hgatp(p2m_get_hostp2m(current->domain),
>>> + new_vmid));
>>> +
>>> + if ( unlikely(need_flush) )
>>> + local_hfence_gvma_all();
>>> +
>>> + /*
>>> + * According to the RISC-V specification, speculation can happen
>>> + * during an update of hgatp and vsatp:
>>> + * No mechanism is provided to atomically change vsatp and hgatp
>>> + * together. Hence, to prevent speculative execution causing one
>>> + * guest’s VS-stage translations to be cached under another guest’s
>>> + * VMID, world-switch code should zero vsatp, then swap hgatp, then
>>> + * finally write the new vsatp value. Similarly, if henvcfg.PBMTE
>>> + * need be world-switched, it should be switched after zeroing vsatp
>>> + * but before writing the new vsatp value, obviating the need to
>>> + * execute an HFENCE.VVMA instruction.
>>> + * So just flush TLBs for VS-Stage and G-stage after both of regs are
>>> + * touched.
>>> + */
>>> + flush_tlb_guest_local();
>> Which "both regs" does the comment talk about? Doesn't the flush want to come
>> ...
>>
>>> + /*
>>> + * The vsatp register is a VSXLEN-bit read/write register that is
>>> + * VS-mode’s version of supervisor register satp, so the following is
>>> + * true for VSATP registers:
>>> + * Changing satp.MODE from Bare to other modes and vice versa also takes
>>> + * effect immediately, without the need to execute an SFENCE.VMA
>>> + * instruction. Likewise, changes to satp.ASID take effect immediately.
>>> + * Considering the mentioned above and that VS-stage TLB flush has been
>>> + * already done there is no need to flush VS-stage TLB after an update
>>> + * of VSATP from Bare mode to what is written in `n->arch.vsatp`.
>>> + */
>>> + csr_write(CSR_VSATP, n->arch.vsatp);
>> ... after this? Then some of the commentary also doesn't look to be necessary.
>
> I think there is no need for a guest TLB flush here or above
> (i.e. flush_tlb_guest_local()), because:
> - If CSR_VSATP is set to 0, or changed from 0 to a non-zero value, no speculation
> can occur. Therefore, the guest TLB cannot be polluted with unwanted entries,
> and no flush is required.
> - If need_flush is false, it means that no VMID wraparound has occurred.
> In that case, the VMID is still unique, so we are safe and no guest TLB flush is
> needed.
>
> Additionally, it seems that I do not need to introduce flush_tlb_guest_local() at
> all, since local_hfence_gvma_all() already provides the same functionality.
> Instead, it probably makes sense to introduce an HFENCE_VVMA() helper as a generic
> wrapper around the hfence.vvma instruction, and then reuse it inside
> local_hfence_gvma_all().
Hmm, I fear I'm confused: hfence.gvma != hfence.vvma ?
Jan
On 2/5/26 1:21 PM, Jan Beulich wrote:
> On 05.02.2026 11:32, Oleksii Kurochko wrote:
>> On 2/4/26 4:02 PM, Jan Beulich wrote:
>>> On 02.02.2026 13:57, Oleksii Kurochko wrote:
>>>> +void p2m_ctx_switch_to(struct vcpu *n)
>>>> +{
>>>> + struct vcpu_vmid *p_vmid = &n->arch.vmid;
>>>> + uint16_t old_vmid, new_vmid;
>>>> + bool need_flush;
>>>> +
>>>> + if ( is_idle_vcpu(n) )
>>>> + return;
>>> Shouldn't the other function have such a check, too?
>> Yes, it should. I will add the check.
>>
>>>> + old_vmid = p_vmid->vmid;
>>>> + need_flush = vmid_handle_vmenter(p_vmid);
>>> As you're re-using x86'es model, I'm not quite sure why this would be needed.
>>> I don't think we have such in x86; aiui this should be handled while actually
>>> doing the VM entry.
>> I can move this to the end of do_trap().
>>
>> However, isn’t this effectively the same?
>> If a context switch to vCPU n happens, it means that once the trap is fully handled,
>> we will enter vCPU n.
>> The same will be if to put the changes at the end of do_trap() before control
>> will be given to VM.
> Thing is - you will need it at the end of do_trap() (or wherever VM entry is
> going to be dealt with) anyway, to cover situation other than context switch
> as well.
If there are such situations, then it makes sense.
>
>>>> + new_vmid = p_vmid->vmid;
>>>> +
>>>> +#ifdef P2M_DEBUG
>>>> + printk(XENLOG_INFO, "%pv: oldvmid(%d) new_vmid(%d), need_flush(%d)\n",
>>>> + n, old_vmid, new_vmid, need_flush);
>>>> +#endif
>>>> +
>>>> + csr_write(CSR_HGATP, construct_hgatp(p2m_get_hostp2m(current->domain),
>>>> + new_vmid));
>>>> +
>>>> + if ( unlikely(need_flush) )
>>>> + local_hfence_gvma_all();
>>>> +
>>>> + /*
>>>> + * According to the RISC-V specification, speculation can happen
>>>> + * during an update of hgatp and vsatp:
>>>> + * No mechanism is provided to atomically change vsatp and hgatp
>>>> + * together. Hence, to prevent speculative execution causing one
>>>> + * guest’s VS-stage translations to be cached under another guest’s
>>>> + * VMID, world-switch code should zero vsatp, then swap hgatp, then
>>>> + * finally write the new vsatp value. Similarly, if henvcfg.PBMTE
>>>> + * need be world-switched, it should be switched after zeroing vsatp
>>>> + * but before writing the new vsatp value, obviating the need to
>>>> + * execute an HFENCE.VVMA instruction.
>>>> + * So just flush TLBs for VS-Stage and G-stage after both of regs are
>>>> + * touched.
>>>> + */
>>>> + flush_tlb_guest_local();
>>> Which "both regs" does the comment talk about? Doesn't the flush want to come
>>> ...
>>>
>>>> + /*
>>>> + * The vsatp register is a VSXLEN-bit read/write register that is
>>>> + * VS-mode’s version of supervisor register satp, so the following is
>>>> + * true for VSATP registers:
>>>> + * Changing satp.MODE from Bare to other modes and vice versa also takes
>>>> + * effect immediately, without the need to execute an SFENCE.VMA
>>>> + * instruction. Likewise, changes to satp.ASID take effect immediately.
>>>> + * Considering the mentioned above and that VS-stage TLB flush has been
>>>> + * already done there is no need to flush VS-stage TLB after an update
>>>> + * of VSATP from Bare mode to what is written in `n->arch.vsatp`.
>>>> + */
>>>> + csr_write(CSR_VSATP, n->arch.vsatp);
>>> ... after this? Then some of the commentary also doesn't look to be necessary.
>> I think there is no need for a guest TLB flush here or above
>> (i.e. flush_tlb_guest_local()), because:
>> - If CSR_VSATP is set to 0, or changed from 0 to a non-zero value, no speculation
>> can occur. Therefore, the guest TLB cannot be polluted with unwanted entries,
>> and no flush is required.
>> - If need_flush is false, it means that no VMID wraparound has occurred.
>> In that case, the VMID is still unique, so we are safe and no guest TLB flush is
>> needed.
>>
>> Additionally, it seems that I do not need to introduce flush_tlb_guest_local() at
>> all, since local_hfence_gvma_all() already provides the same functionality.
>> Instead, it probably makes sense to introduce an HFENCE_VVMA() helper as a generic
>> wrapper around the hfence.vvma instruction, and then reuse it inside
>> local_hfence_gvma_all().
> Hmm, I fear I'm confused: hfence.gvma != hfence.vvma ?
I overlooked that it is gvma in local_hfence_gvma_all(). Then it makes sense to
have both local_hfence_gvma_all() and flush_tlb_guest_local().
But it still looks like flush_tlb_guest_local() shouldn't be called in the p2m
context switch, because of what I mentioned in the first paragraph above, which
also allows not introducing flush_tlb_guest_local() for now.
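If so, the switch-in path would reduce to roughly the following (a sketch,
assuming the VMID/flush handling lives on the VM-entry path as discussed
above):

    void p2m_ctx_switch_to(struct vcpu *n)
    {
        if ( is_idle_vcpu(n) )
            return;

        csr_write(CSR_HGATP,
                  construct_hgatp(p2m_get_hostp2m(n->domain),
                                  n->arch.vmid.vmid));

        /*
         * vsatp was zeroed in p2m_ctx_switch_from(); writing it after hgatp
         * follows the ordering recommended by the spec, and switching vsatp
         * away from Bare mode takes effect without an HFENCE.VVMA.
         */
        csr_write(CSR_VSATP, n->arch.vsatp);
    }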
~ Oleksii