We currently set a TIF flag when scheduling out a task that is in
lazy MMU mode, in order to restore it when the task is scheduled
again.
The generic lazy_mmu layer now tracks whether a task is in lazy MMU
mode in task_struct::lazy_mmu_state. We can therefore check that
state when switching to the new task, instead of using a separate
TIF flag.
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
arch/x86/include/asm/thread_info.h | 4 +---
arch/x86/xen/enlighten_pv.c | 3 +--
2 files changed, 2 insertions(+), 5 deletions(-)
diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index e71e0e8362ed..0067684afb5b 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -100,8 +100,7 @@ struct thread_info {
#define TIF_FORCED_TF 24 /* true if TF in eflags artificially */
#define TIF_SINGLESTEP 25 /* reenable singlestep on user return*/
#define TIF_BLOCKSTEP 26 /* set when we want DEBUGCTLMSR_BTF */
-#define TIF_LAZY_MMU_UPDATES 27 /* task is updating the mmu lazily */
-#define TIF_ADDR32 28 /* 32-bit address space on 64 bits */
+#define TIF_ADDR32 27 /* 32-bit address space on 64 bits */
#define _TIF_SSBD BIT(TIF_SSBD)
#define _TIF_SPEC_IB BIT(TIF_SPEC_IB)
@@ -114,7 +113,6 @@ struct thread_info {
#define _TIF_FORCED_TF BIT(TIF_FORCED_TF)
#define _TIF_BLOCKSTEP BIT(TIF_BLOCKSTEP)
#define _TIF_SINGLESTEP BIT(TIF_SINGLESTEP)
-#define _TIF_LAZY_MMU_UPDATES BIT(TIF_LAZY_MMU_UPDATES)
#define _TIF_ADDR32 BIT(TIF_ADDR32)
/* flags to check in __switch_to() */
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 4806cc28d7ca..9fabe83e7546 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -426,7 +426,6 @@ static void xen_start_context_switch(struct task_struct *prev)
if (this_cpu_read(xen_lazy_mode) == XEN_LAZY_MMU) {
arch_leave_lazy_mmu_mode();
- set_ti_thread_flag(task_thread_info(prev), TIF_LAZY_MMU_UPDATES);
}
enter_lazy(XEN_LAZY_CPU);
}
@@ -437,7 +436,7 @@ static void xen_end_context_switch(struct task_struct *next)
xen_mc_flush();
leave_lazy(XEN_LAZY_CPU);
- if (test_and_clear_ti_thread_flag(task_thread_info(next), TIF_LAZY_MMU_UPDATES))
+ if (next->lazy_mmu_state.enabled)
arch_enter_lazy_mmu_mode();
}
--
2.47.0
On 15.10.25 10:27, Kevin Brodsky wrote:
> We currently set a TIF flag when scheduling out a task that is in
> lazy MMU mode, in order to restore it when the task is scheduled
> again.
>
> The generic lazy_mmu layer now tracks whether a task is in lazy MMU
> mode in task_struct::lazy_mmu_state. We can therefore check that
> state when switching to the new task, instead of using a separate
> TIF flag.
>
> Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
> ---

Looks ok to me, but I hope we get some confirmation from x86 / xen
folks.

-- 
Cheers

David / dhildenb
On Thu, 2025-10-23 at 22:06 +0200, David Hildenbrand wrote:
> On 15.10.25 10:27, Kevin Brodsky wrote:
> > [...]
>
> Looks ok to me, but I hope we get some confirmation from x86 / xen
> folks.

I know tglx has shouted at me in the past for precisely this reminder,
but you know you can test Xen guests under QEMU/KVM now and don't need
to actually run Xen? Has this been boot tested?
On 24/10/2025 16:47, David Woodhouse wrote:
> On Thu, 2025-10-23 at 22:06 +0200, David Hildenbrand wrote:
> > [...]
> >
> > Looks ok to me, but I hope we get some confirmation from x86 / xen
> > folks.
>
> I know tglx has shouted at me in the past for precisely this reminder,
> but you know you can test Xen guests under QEMU/KVM now and don't need
> to actually run Xen? Has this been boot tested?

I considered boot-testing a Xen guest (considering the Xen-specific
changes in this series), but having no idea how to go about it I quickly
gave up... Happy to follow instructions :)

- Kevin
On Fri, 2025-10-24 at 17:05 +0200, Kevin Brodsky wrote:
> [...]
>
> I considered boot-testing a Xen guest (considering the Xen-specific
> changes in this series), but having no idea how to go about it I quickly
> gave up... Happy to follow instructions :)

https://qemu-project.gitlab.io/qemu/system/i386/xen.html covers booting
Xen HVM guests, and near the bottom PV guests too (for which you do
need a copy of Xen to run in QEMU with '--kernel xen', and your
distro's build should suffice for that).

Let me know if you have any trouble. Here's a sample command line which
works here...

qemu-system-x86_64 -display none \
  --accel kvm,xen-version=0x40011,kernel-irqchip=split \
  -drive file=/var/lib/libvirt/images/fedora28.qcow2,if=xen \
  -kernel ~/git/linux-2.6/arch/x86/boot/bzImage \
  -append "root=/dev/xvda1 console=ttyS0" \
  -serial mon:stdio
On 24/10/2025 17:17, David Woodhouse wrote:
> [...]
>
> Let me know if you have any trouble. Here's a sample command line which
> works here...
>
> qemu-system-x86_64 -display none --accel kvm,xen-version=0x40011,kernel-irqchip=split [...]

Thanks, this is helpful! Unfortunately lazy_mmu is only used in the PV
case, so I'd need to run a PV guest.

And the distro I'm using (Arch Linux) does not have a Xen package :/ It
can be built from source from the AUR but that looks rather involved.
Are there some prebuilt binaries I could grab and just point QEMU to?

- Kevin
On 24.10.25 16:47, David Woodhouse wrote:
> On Thu, 2025-10-23 at 22:06 +0200, David Hildenbrand wrote:
> > [...]
> >
> > Looks ok to me, but I hope we get some confirmation from x86 / xen
> > folks.
>
> I know tglx has shouted at me in the past for precisely this reminder,
> but you know you can test Xen guests under QEMU/KVM now and don't need
> to actually run Xen? Has this been boot tested?

And after that, boot-testing sparc as well? :D

If it's easy, why not. But other people should not suffer for all the
XEN hacks we keep dragging along.

-- 
Cheers

David / dhildenb
On 10/24/25 10:51, David Hildenbrand wrote:
> [...]
>
> And after that, boot-testing sparc as well? :D
>
> If it's easy, why not. But other people should not suffer for all the
> XEN hacks we keep dragging along.

Which hacks? Serious question. Is this just for Xen PV or is HVM
also affected?

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
On 25.10.25 00:52, Demi Marie Obenour wrote:
> [...]
>
> Which hacks? Serious question. Is this just for Xen PV or is HVM
> also affected?

In the context of this series, XEN_LAZY_MMU.

Your question regarding PV/HVM emphasizes my point: how is a submitter
supposed to know which XEN combinations to test (and how to test them),
to not accidentally break something here.

We really need guidance+help from the XEN folks here.

-- 
Cheers

David / dhildenb
On 27/10/2025 13:29, David Hildenbrand wrote:
> On 25.10.25 00:52, Demi Marie Obenour wrote:
> > [...]
> >
> > Which hacks? Serious question. Is this just for Xen PV or is HVM
> > also affected?
>
> In the context of this series, XEN_LAZY_MMU.

FWIW in that particular case it's relatively easy to tell this is
specific to Xen PV (this is only used in mmu_pv.c and enlighten_pv.c).
Knowing what to test is certainly not obvious in general, though.

- Kevin

> Your question regarding PV/HVM emphasizes my point: how is a submitter
> supposed to know which XEN combinations to test (and how to test them),
> to not accidentally break something here.
>
> We really need guidance+help from the XEN folks here.
On Fri, 2025-10-24 at 16:51 +0200, David Hildenbrand wrote:
> On 24.10.25 16:47, David Woodhouse wrote:
> > [...]
> >
> > I know tglx has shouted at me in the past for precisely this reminder,
> > but you know you can test Xen guests under QEMU/KVM now and don't need
> > to actually run Xen? Has this been boot tested?
>
> And after that, boot-testing sparc as well? :D

Also not that hard in QEMU, I believe. Although I do have some SPARC
boxes in the shed...
On Fri, 2025-10-24 at 16:13 +0100, David Woodhouse wrote:
> [...]
>
> > And after that, boot-testing sparc as well? :D
>
> Also not that hard in QEMU, I believe. Although I do have some SPARC
> boxes in the shed...

Please have people test kernel changes on SPARC on real hardware. QEMU
does not emulate sun4v, for example, and therefore testing in QEMU does
not cover all of SPARC hardware.

There are plenty of people on the debian-sparc, gentoo-sparc and
sparclinux mailing lists that can test kernel patches for SPARC. If
SPARC-relevant changes need to be tested, please ask there and don't
bury such things in a deeply nested thread in a discussion which doesn't
even have SPARC in the mail subject.

Adrian

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-    GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913
On 24.10.25 17:38, John Paul Adrian Glaubitz wrote:
> [...]
>
> Please have people test kernel changes on SPARC on real hardware. QEMU
> does not emulate sun4v, for example, and therefore testing in QEMU does
> not cover all of SPARC hardware.
>
> There are plenty of people on the debian-sparc, gentoo-sparc and
> sparclinux mailing lists that can test kernel patches for SPARC. If
> SPARC-relevant changes need to be tested, please ask there and don't
> bury such things in a deeply nested thread in a discussion which doesn't
> even have SPARC in the mail subject.

Hi Adrian,

out of curiosity, do people monitor sparclinux@ for changes to actively
offer testing when required -- like would it be sufficient to CC
relevant maintainers+list (like done here) and raise in the cover letter
that some testing help would be appreciated?

-- 
Cheers

David / dhildenb
Hi David,

On Fri, 2025-10-24 at 17:47 +0200, David Hildenbrand wrote:
> [...]
>
> out of curiosity, do people monitor sparclinux@ for changes to actively
> offer testing when required -- like would it be sufficient to CC
> relevant maintainers+list (like done here) and raise in the cover letter
> that some testing help would be appreciated?

Yes, that's definitely the case. But it should be obvious from the
subject of the mail that the change affects SPARC, as not everyone can
read every mail they're receiving through mailing lists.

I'm trying to keep up, but since I'm on mailing lists for many different
architectures, mails can slip through the cracks.

For people that want to test changes on SPARC regularly, I can also
offer accounts on SPARC test machines running on a Solaris LDOM (logical
domain) on a SPARC T4.

Adrian

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-    GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913
On 24.10.25 17:51, John Paul Adrian Glaubitz wrote:
> Hi David,
>
> [...]
>
> Yes, that's definitely the case. But it should be obvious from the
> subject of the mail that the change affects SPARC, as not everyone can
> read every mail they're receiving through mailing lists.

Agreed. One would hope that people only CC the sparc mailing list +
maintainers when there is actually something relevant in there.

Also, it would be nice if someone (e.g., the maintainer or reviewers)
could monitor the list to spot that there is testing demand to CC the
right people. I guess one problem might be that nobody is getting paid
to work on sparc (I'm happy to be wrong on that one :) ).

Regarding sparc, I'll keep in mind that we might have to write a
separate mail to the list to get some help with testing.

> I'm trying to keep up, but since I'm on mailing lists for many different
> architectures, mails can slip through the cracks.

Yeah, that's understandable.

> For people that want to test changes on SPARC regularly, I can also
> offer accounts on SPARC test machines running on a Solaris LDOM (logical
> domain) on a SPARC T4.

For example, I do have a s390x machine in an IBM cloud where I can test
stuff. But I worked on s390x before, so I know how to test and what to
test, and how to troubleshoot.

On sparc I'd unfortunately have a hard time even understanding whether a
simple boot test on some machine will actually trigger what I wanted to
test :(

-- 
Cheers

David / dhildenb
On 24.10.25 17:13, David Woodhouse wrote:
> [...]
>
> > And after that, boot-testing sparc as well? :D
>
> Also not that hard in QEMU, I believe. Although I do have some SPARC
> boxes in the shed...

Yeah, I once went through the pain of getting a sparc64 system booting
in QEMU with a distro (was it debian?) that was 7 years old or so.
Fantastic experience. Only took me 2 days IIRC.

Absolutely worth it to not break upstream kernels on a museum piece.

-- 
Cheers

David / dhildenb