arch_flush_lazy_mmu_mode() is called when outstanding batched
pgtable operations must be completed immediately. There should
however be no need to leave and re-enter lazy MMU completely. The
only part of that sequence that we really need is xen_mc_flush();
call it directly.
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
arch/x86/xen/mmu_pv.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index 2a4a8deaf612..7a35c3393df4 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -2139,10 +2139,8 @@ static void xen_flush_lazy_mmu(void)
 {
 	preempt_disable();
 
-	if (xen_get_lazy_mode() == XEN_LAZY_MMU) {
-		arch_leave_lazy_mmu_mode();
-		arch_enter_lazy_mmu_mode();
-	}
+	if (xen_get_lazy_mode() == XEN_LAZY_MMU)
+		xen_mc_flush();
 
 	preempt_enable();
 }
--
2.47.0
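For context, xen_mc_flush() is the primitive that actually submits the
per-CPU queue of batched hypercalls (multicalls) to the hypervisor. A
loose sketch of its core, assuming the structures in
arch/x86/xen/multicalls.c (interrupt handling, the post-flush callbacks
and debug accounting are all omitted here):

void xen_mc_flush(void)
{
	struct mc_buffer *b = this_cpu_ptr(&mc_buffer);

	if (b->mcidx) {
		/* A single hypercall issues the whole queued batch. */
		if (HYPERVISOR_multicall(b->entries, b->mcidx) != 0)
			BUG();
		b->mcidx = 0;
	}
}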
On 29.10.25 11:08, Kevin Brodsky wrote:
> arch_flush_lazy_mmu_mode() is called when outstanding batched
> pgtable operations must be completed immediately. There should
> however be no need to leave and re-enter lazy MMU completely. The
> only part of that sequence that we really need is xen_mc_flush();
> call it directly.
>
> Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
On 29/10/2025 10:08, Kevin Brodsky wrote:
> arch_flush_lazy_mmu_mode() is called when outstanding batched
> pgtable operations must be completed immediately. There should
> however be no need to leave and re-enter lazy MMU completely. The
> only part of that sequence that we really need is xen_mc_flush();
> call it directly.
>
> Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
This looks functionally equivalent to me, so:
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
But I don't think this tidy up is strictly necessary for your series to work?
(perhaps I'll change my mind on that as I go through it).
> ---
> arch/x86/xen/mmu_pv.c | 6 ++----
> 1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
> index 2a4a8deaf612..7a35c3393df4 100644
> --- a/arch/x86/xen/mmu_pv.c
> +++ b/arch/x86/xen/mmu_pv.c
> @@ -2139,10 +2139,8 @@ static void xen_flush_lazy_mmu(void)
>  {
>  	preempt_disable();
>  
> -	if (xen_get_lazy_mode() == XEN_LAZY_MMU) {
> -		arch_leave_lazy_mmu_mode();
> -		arch_enter_lazy_mmu_mode();
> -	}
> +	if (xen_get_lazy_mode() == XEN_LAZY_MMU)
> +		xen_mc_flush();
>  
>  	preempt_enable();
>  }
On 07/11/2025 12:31, Ryan Roberts wrote:
> On 29/10/2025 10:08, Kevin Brodsky wrote:
>> arch_flush_lazy_mmu_mode() is called when outstanding batched
>> pgtable operations must be completed immediately. There should
>> however be no need to leave and re-enter lazy MMU completely. The
>> only part of that sequence that we really need is xen_mc_flush();
>> call it directly.
>>
>> Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
> This looks functionally equivalent to me, so:
>
> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
>
> But I don't think this tidy up is strictly necessary for your series to work?
> (perhaps I'll change my mind on that as I go through it).

I initially thought it might be, but in the end I think you're right -
it should still work fine without this patch.

Still, I'd rather avoid unnecessary calls to arch_enter() and
arch_leave() as it makes it harder to reason about what is called where.
Namely, keeping them here means that a nested call to
lazy_mmu_mode_disable() would cause arch_leave() then arch_enter() to be
called - rather unexpected.

The only calls to arch_enter() and arch_leave() that are left after this
series are the ones in <linux/pgtable.h> and the Xen context-switching
logic (the one case where calling arch hooks directly is justified, see
discussion on v3 [1]).

- Kevin

[1]
https://lore.kernel.org/all/390e41ae-4b66-40c1-935f-7a1794ba0b71@arm.com/
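For illustration, the nesting hazard described above looks roughly like
this. The lazy_mmu_mode_* helpers are the ones introduced by the series;
the per-call comments are an assumption about their intended semantics,
not a statement of the final implementation:

	lazy_mmu_mode_enable();		/* outermost: calls arch_enter() */
	...
		lazy_mmu_mode_enable();	/* nested: only bumps a nesting count */
		...
		lazy_mmu_mode_disable();	/* nested: expected to flush only */
	...
	lazy_mmu_mode_disable();	/* outermost: calls arch_leave() */

With the old xen_flush_lazy_mmu(), the nested disable would have gone
through arch_leave_lazy_mmu_mode() and then arch_enter_lazy_mmu_mode(),
i.e. fully left and re-entered lazy MMU mode from inside the still-open
outer section; with this patch it reduces to a plain xen_mc_flush().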
On 10/11/2025 10:36, Kevin Brodsky wrote:
> On 07/11/2025 12:31, Ryan Roberts wrote:
>> On 29/10/2025 10:08, Kevin Brodsky wrote:
>>> arch_flush_lazy_mmu_mode() is called when outstanding batched
>>> pgtable operations must be completed immediately. There should
>>> however be no need to leave and re-enter lazy MMU completely. The
>>> only part of that sequence that we really need is xen_mc_flush();
>>> call it directly.
>>>
>>> Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
>> This looks functionally equivalent to me, so:
>>
>> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
>>
>> But I don't think this tidy up is strictly necessary for your series to work?
>> (perhaps I'll change my mind on that as I go through it).
>
> I initially thought it might be, but in the end I think you're right -
> it should still work fine without this patch.
>
> Still, I'd rather avoid unnecessary calls to arch_enter() and
> arch_leave() as it makes it harder to reason about what is called where.
> Namely, keeping them here means that a nested call to
> lazy_mmu_mode_disable() would cause arch_leave() then arch_enter() to be
> called - rather unexpected.
>
> The only calls to arch_enter() and arch_leave() that are left after this
> series are the ones in <linux/pgtable.h> and the Xen context-switching
> logic (the one case where calling arch hooks directly is justified, see
> discussion on v3 [1]).

OK yeah, sounds reasonable.

>
> - Kevin
>
> [1]
> https://lore.kernel.org/all/390e41ae-4b66-40c1-935f-7a1794ba0b71@arm.com/
On 29.10.25 11:08, Kevin Brodsky wrote:
> arch_flush_lazy_mmu_mode() is called when outstanding batched
> pgtable operations must be completed immediately. There should
> however be no need to leave and re-enter lazy MMU completely. The
> only part of that sequence that we really need is xen_mc_flush();
> call it directly.
>
> Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
> ---
> arch/x86/xen/mmu_pv.c | 6 ++----
> 1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
> index 2a4a8deaf612..7a35c3393df4 100644
> --- a/arch/x86/xen/mmu_pv.c
> +++ b/arch/x86/xen/mmu_pv.c
> @@ -2139,10 +2139,8 @@ static void xen_flush_lazy_mmu(void)
>  {
>  	preempt_disable();
>  
> -	if (xen_get_lazy_mode() == XEN_LAZY_MMU) {
> -		arch_leave_lazy_mmu_mode();
> -		arch_enter_lazy_mmu_mode();
> -	}
> +	if (xen_get_lazy_mode() == XEN_LAZY_MMU)
> +		xen_mc_flush();
>  
>  	preempt_enable();
>  }
Looks like that was moved to XEN code in
commit a4a7644c15096f57f92252dd6e1046bf269c87d8
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 13 13:38:27 2023 +0200

    x86/xen: move paravirt lazy code
And essentially the previous implementation lived in
arch/x86/kernel/paravirt.c:paravirt_flush_lazy_mmu(void) in an
implementation-agnostic way:
void paravirt_flush_lazy_mmu(void)
{
	preempt_disable();

	if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_MMU) {
		arch_leave_lazy_mmu_mode();
		arch_enter_lazy_mmu_mode();
	}

	preempt_enable();
}
So indeed, I assume just doing the flush here is sufficient.
Reviewed-by: David Hildenbrand <david@redhat.com>
--
Cheers
David / dhildenb
On 01/11/2025 12:14, David Hildenbrand wrote:
> On 29.10.25 11:08, Kevin Brodsky wrote:
>> arch_flush_lazy_mmu_mode() is called when outstanding batched
>> pgtable operations must be completed immediately. There should
>> however be no need to leave and re-enter lazy MMU completely. The
>> only part of that sequence that we really need is xen_mc_flush();
>> call it directly.
>>
>> Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
>> ---
>> arch/x86/xen/mmu_pv.c | 6 ++----
>> 1 file changed, 2 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
>> index 2a4a8deaf612..7a35c3393df4 100644
>> --- a/arch/x86/xen/mmu_pv.c
>> +++ b/arch/x86/xen/mmu_pv.c
>> @@ -2139,10 +2139,8 @@ static void xen_flush_lazy_mmu(void)
>>  {
>>  	preempt_disable();
>>  
>> -	if (xen_get_lazy_mode() == XEN_LAZY_MMU) {
>> -		arch_leave_lazy_mmu_mode();
>> -		arch_enter_lazy_mmu_mode();
>> -	}
>> +	if (xen_get_lazy_mode() == XEN_LAZY_MMU)
>> +		xen_mc_flush();
>>  
>>  	preempt_enable();
>>  }
>
> Looks like that was moved to XEN code in
>
> commit a4a7644c15096f57f92252dd6e1046bf269c87d8
> Author: Juergen Gross <jgross@suse.com>
> Date: Wed Sep 13 13:38:27 2023 +0200
>
> x86/xen: move paravirt lazy code
>
>
> And essentially the previous implementation lived in
> arch/x86/kernel/paravirt.c:paravirt_flush_lazy_mmu(void) in an
> implementation-agnostic way:
>
> void paravirt_flush_lazy_mmu(void)
> {
> 	preempt_disable();
>
> 	if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_MMU) {
> 		arch_leave_lazy_mmu_mode();
> 		arch_enter_lazy_mmu_mode();
> 	}
>
> 	preempt_enable();
> }
Indeed, I saw that too. Calling the generic leave/enter functions made
some sense at that point, but now that the implementation is
Xen-specific we can directly call xen_mc_flush().
>
> So indeed, I assume just doing the flush here is sufficient.
>
> Reviewed-by: David Hildenbrand <david@redhat.com>
Thanks for the review!
- Kevin