Since commit b9ef323ea168 ("powerpc/64s: Disable preemption in hash
lazy mmu mode") a task can not be preempted while in lazy MMU mode.
Therefore, the batch re-activation code is never called, so remove it.
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
---
 arch/powerpc/include/asm/thread_info.h |  2 --
 arch/powerpc/kernel/process.c          | 25 -------------------------
2 files changed, 27 deletions(-)
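For reference, after b9ef323ea168 the hash lazy MMU helpers look roughly
like this (editor's simplified sketch of
arch/powerpc/include/asm/book3s/64/tlbflush-hash.h; details may differ
from the current tree):

static inline void arch_enter_lazy_mmu_mode(void)
{
	struct ppc64_tlb_batch *batch;

	if (radix_enabled())
		return;
	/* Preemption stays off for the whole lazy MMU section. */
	preempt_disable();
	batch = this_cpu_ptr(&ppc64_tlb_batch);
	batch->active = 1;
}

static inline void arch_leave_lazy_mmu_mode(void)
{
	struct ppc64_tlb_batch *batch;

	if (radix_enabled())
		return;
	batch = this_cpu_ptr(&ppc64_tlb_batch);
	if (batch->index)
		__flush_tlb_pending(batch);
	batch->active = 0;
	preempt_enable();
}

Because no context switch can happen between the preempt_disable() and
preempt_enable(), __switch_to() never runs with batch->active set, which
is why the _TLF_LAZY_MMU save/restore removed below is dead code.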
diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
index 2785c7462ebf..092118a68862 100644
--- a/arch/powerpc/include/asm/thread_info.h
+++ b/arch/powerpc/include/asm/thread_info.h
@@ -154,12 +154,10 @@ void arch_setup_new_exec(void);
/* Don't move TLF_NAPPING without adjusting the code in entry_32.S */
#define TLF_NAPPING 0 /* idle thread enabled NAP mode */
#define TLF_SLEEPING 1 /* suspend code enabled SLEEP mode */
-#define TLF_LAZY_MMU 3 /* tlb_batch is active */
#define TLF_RUNLATCH 4 /* Is the runlatch enabled? */
#define _TLF_NAPPING (1 << TLF_NAPPING)
#define _TLF_SLEEPING (1 << TLF_SLEEPING)
-#define _TLF_LAZY_MMU (1 << TLF_LAZY_MMU)
#define _TLF_RUNLATCH (1 << TLF_RUNLATCH)
#ifndef __ASSEMBLY__
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 855e09886503..edb59a447149 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1281,9 +1281,6 @@ struct task_struct *__switch_to(struct task_struct *prev,
{
struct thread_struct *new_thread, *old_thread;
struct task_struct *last;
-#ifdef CONFIG_PPC_64S_HASH_MMU
- struct ppc64_tlb_batch *batch;
-#endif
new_thread = &new->thread;
old_thread = ¤t->thread;
@@ -1291,14 +1288,6 @@ struct task_struct *__switch_to(struct task_struct *prev,
WARN_ON(!irqs_disabled());
#ifdef CONFIG_PPC_64S_HASH_MMU
- batch = this_cpu_ptr(&ppc64_tlb_batch);
- if (batch->active) {
- current_thread_info()->local_flags |= _TLF_LAZY_MMU;
- if (batch->index)
- __flush_tlb_pending(batch);
- batch->active = 0;
- }
-
/*
* On POWER9 the copy-paste buffer can only paste into
* foreign real addresses, so unprivileged processes can not
@@ -1369,20 +1358,6 @@ struct task_struct *__switch_to(struct task_struct *prev,
*/
#ifdef CONFIG_PPC_BOOK3S_64
-#ifdef CONFIG_PPC_64S_HASH_MMU
- /*
- * This applies to a process that was context switched while inside
- * arch_enter_lazy_mmu_mode(), to re-activate the batch that was
- * deactivated above, before _switch(). This will never be the case
- * for new tasks.
- */
- if (current_thread_info()->local_flags & _TLF_LAZY_MMU) {
- current_thread_info()->local_flags &= ~_TLF_LAZY_MMU;
- batch = this_cpu_ptr(&ppc64_tlb_batch);
- batch->active = 1;
- }
-#endif
-
/*
* Math facilities are masked out of the child MSR in copy_thread.
* A new task does not need to restore_math because it will
--
2.48.1
On Thu, Jun 12, 2025 at 07:36:13PM +0200, Alexander Gordeev wrote:
> Since commit b9ef323ea168 ("powerpc/64s: Disable preemption in hash
> lazy mmu mode") a task cannot be preempted while in lazy MMU mode.
> Therefore, the batch re-activation code is never called, so remove it.
>
> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
> ---
>  arch/powerpc/include/asm/thread_info.h |  2 --
>  arch/powerpc/kernel/process.c          | 25 -------------------------
>  2 files changed, 27 deletions(-)

Hi All,

(I trimmed non-ppc mailing lists/people).

The whole series does not seem to make it, but this patch alone is still
applicable and makes sense, if I am not mistaken.

Thanks!
On 17/06/2025 17:11, Alexander Gordeev wrote:
> On Thu, Jun 12, 2025 at 07:36:13PM +0200, Alexander Gordeev wrote:
>> Since commit b9ef323ea168 ("powerpc/64s: Disable preemption in hash
>> lazy mmu mode") a task cannot be preempted while in lazy MMU mode.
>> Therefore, the batch re-activation code is never called, so remove it.
>>
>> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
>> ---
>>  arch/powerpc/include/asm/thread_info.h |  2 --
>>  arch/powerpc/kernel/process.c          | 25 -------------------------
>>  2 files changed, 27 deletions(-)
> Hi All,
>
> (I trimmed non-ppc mailing lists/people).
>
> The whole series does not seem to make it, but this patch alone is still
> applicable and makes sense, if I am not mistaken.

Yes, I agree. I arrived at the same conclusion working on the next
version of the nested lazy_mmu series [1]. May I include this patch in v3?

- Kevin

[1]
https://lore.kernel.org/all/20250908073931.4159362-1-kevin.brodsky@arm.com/
Kevin Brodsky <kevin.brodsky@arm.com> writes:

> On 17/06/2025 17:11, Alexander Gordeev wrote:
>> On Thu, Jun 12, 2025 at 07:36:13PM +0200, Alexander Gordeev wrote:
>>> Since commit b9ef323ea168 ("powerpc/64s: Disable preemption in hash
>>> lazy mmu mode") a task cannot be preempted while in lazy MMU mode.
>>> Therefore, the batch re-activation code is never called, so remove it.
>>>
>>> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
>>> ---
>>>  arch/powerpc/include/asm/thread_info.h |  2 --
>>>  arch/powerpc/kernel/process.c          | 25 -------------------------
>>>  2 files changed, 27 deletions(-)
>> Hi All,
>>
>> (I trimmed non-ppc mailing lists/people).
>>
>> The whole series does not seem to make it, but this patch alone is still
>> applicable and makes sense, if I am not mistaken.
>
> Yes, I agree. I arrived at the same conclusion working on the next
> version of the nested lazy_mmu series [1].
> [1]
> https://lore.kernel.org/all/20250908073931.4159362-1-kevin.brodsky@arm.com/

Yes, we disable preemption while in lazy mmu mode for Hash, so I agree that
we won't call into __switch_to() in between preempt_disable()/_enable().
So it does look like we don't need that code.

> May I include this patch in v3?
>

That should be ok.

> - Kevin

Thanks!
-ritesh
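To illustrate the point above (editor's sketch, not code from the series):
a hash lazy MMU section is bracketed by preempt_disable()/preempt_enable(),
so no context switch, and hence no __switch_to(), can occur while
batch->active is set. The caller example_zap_range() below is hypothetical;
the bracketing calls are the real powerpc entry points.

/* Editor's sketch: the shape of a hash lazy MMU batching section. */
static void example_zap_range(unsigned long start, unsigned long end)
{
	unsigned long addr;

	arch_enter_lazy_mmu_mode();	/* preempt_disable(); batch->active = 1 */
	for (addr = start; addr < end; addr += PAGE_SIZE) {
		/* PTE updates here queue invalidations in ppc64_tlb_batch. */
	}
	arch_leave_lazy_mmu_mode();	/* flush pending; preempt_enable() */
}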
On 07/10/2025 11:40, Ritesh Harjani (IBM) wrote:
> Kevin Brodsky <kevin.brodsky@arm.com> writes:
>
>> On 17/06/2025 17:11, Alexander Gordeev wrote:
>>> On Thu, Jun 12, 2025 at 07:36:13PM +0200, Alexander Gordeev wrote:
>>>> Since commit b9ef323ea168 ("powerpc/64s: Disable preemption in hash
>>>> lazy mmu mode") a task cannot be preempted while in lazy MMU mode.
>>>> Therefore, the batch re-activation code is never called, so remove it.
>>>>
>>>> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
>>>> ---
>>>>  arch/powerpc/include/asm/thread_info.h |  2 --
>>>>  arch/powerpc/kernel/process.c          | 25 -------------------------
>>>>  2 files changed, 27 deletions(-)
>>> Hi All,
>>>
>>> (I trimmed non-ppc mailing lists/people).
>>>
>>> The whole series does not seem to make it, but this patch alone is still
>>> applicable and makes sense, if I am not mistaken.
>> Yes, I agree. I arrived at the same conclusion working on the next
>> version of the nested lazy_mmu series [1].
>> [1]
>> https://lore.kernel.org/all/20250908073931.4159362-1-kevin.brodsky@arm.com/
> Yes, we disable preemption while in lazy mmu mode for Hash, so I agree that
> we won't call into __switch_to() in between preempt_disable()/_enable().
> So it does look like we don't need that code.

Thanks for confirming.

>> May I include this patch in v3?
>>
> That should be ok.

Thanks!

- Kevin