[PATCH v3] MIPS: mm: Prevent a TLB shutdown on initial uniquification
Posted by Maciej W. Rozycki 3 weeks, 3 days ago
Depending on the particular CPU implementation a TLB shutdown may occur 
if multiple matching entries are detected upon the execution of a TLBP 
or the TLBWI/TLBWR instructions.  Given that we don't know what entries 
we have been handed, we need to be very careful with the initial TLB 
setup and avoid all these instructions.

Therefore read all the TLB entries one by one with the TLBR instruction, 
bypassing the content addressing logic, and truncate any large pages in 
place so as to avoid a case in the second step where an incoming entry 
for a large page at a lower address overlaps with a replacement entry 
chosen at another index.  Then preinitialize the TLB using addresses 
outside our usual unique range and avoiding clashes with any entries 
received, before making the usual call to local_flush_tlb_all().

This fixes (at least) R4x00 cores if TLBP hits multiple matching TLB 
entries (the SGI IP22 PROM for example sets up all TLB entries with the 
same virtual address).

Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
Fixes: 35ad7e181541 ("MIPS: mm: tlb-r4k: Uniquify TLB entries on init")
Cc: stable@vger.kernel.org # v6.17+
---
Hi,

 On second thoughts I decided against including wired entries in our VPN 
matching discovery for clash avoidance.  The reason is, as I wrote before, 
that it makes no sense to have wired entries for KSEG0 addresses, so it 
should be safe to assume we won't ever make one.  Moreover, if a wired 
entry maps a large page, which is quite likely, then our clash avoidance 
logic won't handle an overlap where the two VPN values of a clashing pair 
don't match, so it makes no sense to pretend we can handle wired entries 
with the code as proposed.

 Pasting v2 discussion below verbatim as it still applies.

 Verified the same way as before, also with some diagnostics added so as 
to make sure things get set up correctly, with my Malta/74Kf system for a 
32-bit configuration and with my SWARM/BCM1250 system for a 64-bit one.

 In addition to the Wired register setup discussed with v1, I have 
realised the incoming entries may include large pages, possibly even 
exceeding the size of KSEG0.  Such entries may overlap with our temporary 
entries added in the second step, so truncate any large pages in place, 
which ensures no clash happens with the received contents of the TLB.

 NB this doesn't handle incoming PageGrain.ESP having been set, but it's 
an unrelated preexisting issue that would have to be handled elsewhere.  
Possibly it doesn't matter in reality.

 Additionally PageMask is left set at what has been retrieved from the 
last incoming TLB entry in the first step and has to be reset to our page 
size before proceeding with the second step.

 And last but not least, the comparator function incorrectly returned 0 
when the difference between 64-bit elements was positive but had none of 
the high-order 32 bits set.  Fixed with a branchless sequence of 3 machine 
instructions, which I think is the minimum here (only the sign and whether 
the value is zero matter, but this sequence actually produces -1/0/1, 
because why not).  No change for the 32-bit case, where the difference is 
returned as is.

  Maciej

Changes from v2 (at 
<https://lore.kernel.org/r/alpine.DEB.2.21.2511122032400.25436@angie.orcam.me.uk/>):

- Revert the v2 update to include wired entries while reading original 
  contents of TLB.

Changes from v1 (at 
<https://lore.kernel.org/r/alpine.DEB.2.21.2511110547430.25436@angie.orcam.me.uk/>):

- Also include wired entries while reading original contents of TLB.

- Truncate any large pages in place while reading original TLB entries.

- Reset PageMask to PM_DEFAULT_MASK after reading in TLB entries.

- Fix the 64-bit case for the sort comparator.
---
 arch/mips/mm/tlb-r4k.c |  100 ++++++++++++++++++++++++++++++-------------------
 1 file changed, 63 insertions(+), 37 deletions(-)

linux-mips-tlb-r4k-uniquify-fix.diff
Index: linux-swarm64/arch/mips/mm/tlb-r4k.c
===================================================================
--- linux-swarm64.orig/arch/mips/mm/tlb-r4k.c
+++ linux-swarm64/arch/mips/mm/tlb-r4k.c
@@ -15,6 +15,7 @@
 #include <linux/mm.h>
 #include <linux/hugetlb.h>
 #include <linux/export.h>
+#include <linux/sort.h>
 
 #include <asm/cpu.h>
 #include <asm/cpu-type.h>
@@ -508,54 +509,78 @@ static int __init set_ntlb(char *str)
 
 __setup("ntlb=", set_ntlb);
 
-/* Initialise all TLB entries with unique values */
+
+/* Comparison function for EntryHi VPN fields.  */
+static int r4k_vpn_cmp(const void *a, const void *b)
+{
+	long v = *(unsigned long *)a - *(unsigned long *)b;
+	int s = sizeof(long) > sizeof(int) ? sizeof(long) * 8 - 1: 0;
+	return s ? (v != 0) | v >> s : v;
+}
+
+/*
+ * Initialise all TLB entries with unique values that do not clash with
+ * what we have been handed over and what we'll be using ourselves.
+ */
 static void r4k_tlb_uniquify(void)
 {
-	int entry = num_wired_entries();
+	unsigned long tlb_vpns[1 << MIPS_CONF1_TLBS_SIZE];
+	int tlbsize = current_cpu_data.tlbsize;
+	int start = num_wired_entries();
+	unsigned long vpn_mask;
+	int cnt, ent, idx, i;
+
+	vpn_mask = GENMASK(cpu_vmbits - 1, 13);
+	vpn_mask |= IS_ENABLED(CONFIG_64BIT) ? 3ULL << 62 : 1 << 31;
 
 	htw_stop();
+
+	for (i = start, cnt = 0; i < tlbsize; i++, cnt++) {
+		unsigned long vpn;
+
+		write_c0_index(i);
+		mtc0_tlbr_hazard();
+		tlb_read();
+		tlb_read_hazard();
+		vpn = read_c0_entryhi();
+		vpn &= vpn_mask & PAGE_MASK;
+		tlb_vpns[cnt] = vpn;
+
+		/* Prevent any large pages from overlapping regular ones.  */
+		write_c0_pagemask(read_c0_pagemask() & PM_DEFAULT_MASK);
+		mtc0_tlbw_hazard();
+		tlb_write_indexed();
+		tlbw_use_hazard();
+	}
+
+	sort(tlb_vpns, cnt, sizeof(tlb_vpns[0]), r4k_vpn_cmp, NULL);
+
+	write_c0_pagemask(PM_DEFAULT_MASK);
 	write_c0_entrylo0(0);
 	write_c0_entrylo1(0);
 
-	while (entry < current_cpu_data.tlbsize) {
-		unsigned long asid_mask = cpu_asid_mask(&current_cpu_data);
-		unsigned long asid = 0;
-		int idx;
+	idx = 0;
+	ent = tlbsize;
+	for (i = start; i < tlbsize; i++)
+		while (1) {
+			unsigned long entryhi, vpn;
 
-		/* Skip wired MMID to make ginvt_mmid work */
-		if (cpu_has_mmid)
-			asid = MMID_KERNEL_WIRED + 1;
+			entryhi = UNIQUE_ENTRYHI(ent);
+			vpn = entryhi & vpn_mask & PAGE_MASK;
 
-		/* Check for match before using UNIQUE_ENTRYHI */
-		do {
-			if (cpu_has_mmid) {
-				write_c0_memorymapid(asid);
-				write_c0_entryhi(UNIQUE_ENTRYHI(entry));
+			if (idx >= cnt || vpn < tlb_vpns[idx]) {
+				write_c0_entryhi(entryhi);
+				write_c0_index(i);
+				mtc0_tlbw_hazard();
+				tlb_write_indexed();
+				ent++;
+				break;
+			} else if (vpn == tlb_vpns[idx]) {
+				ent++;
 			} else {
-				write_c0_entryhi(UNIQUE_ENTRYHI(entry) | asid);
+				idx++;
 			}
-			mtc0_tlbw_hazard();
-			tlb_probe();
-			tlb_probe_hazard();
-			idx = read_c0_index();
-			/* No match or match is on current entry */
-			if (idx < 0 || idx == entry)
-				break;
-			/*
-			 * If we hit a match, we need to try again with
-			 * a different ASID.
-			 */
-			asid++;
-		} while (asid < asid_mask);
-
-		if (idx >= 0 && idx != entry)
-			panic("Unable to uniquify TLB entry %d", idx);
-
-		write_c0_index(entry);
-		mtc0_tlbw_hazard();
-		tlb_write_indexed();
-		entry++;
-	}
+		}
 
 	tlbw_use_hazard();
 	htw_start();
@@ -602,6 +627,7 @@ static void r4k_tlb_configure(void)
 
 	/* From this point on the ARC firmware is dead.	 */
 	r4k_tlb_uniquify();
+	local_flush_tlb_all();
 
 	/* Did I tell you that ARC SUCKS?  */
 }
Re: [PATCH v3] MIPS: mm: Prevent a TLB shutdown on initial uniquification
Posted by Klara Modin 1 week, 3 days ago
Hi,

On 2025-11-13 05:21:10 +0000, Maciej W. Rozycki wrote:
> Depending on the particular CPU implementation a TLB shutdown may occur 
> if multiple matching entries are detected upon the execution of a TLBP 
> or the TLBWI/TLBWR instructions.  Given that we don't know what entries 
> we have been handed we need to be very careful with the initial TLB 
> setup and avoid all these instructions.
> [...]

> +/*
> + * Initialise all TLB entries with unique values that do not clash with
> + * what we have been handed over and what we'll be using ourselves.
> + */
>  static void r4k_tlb_uniquify(void)
>  {
> -	int entry = num_wired_entries();
> +	unsigned long tlb_vpns[1 << MIPS_CONF1_TLBS_SIZE];
> +	int tlbsize = current_cpu_data.tlbsize;
> +	int start = num_wired_entries();

It seems that for my Edgerouter 6P (which identifies itself as "CPU0
revision is: 000d9602 (Cavium Octeon III)") current_cpu_data.tlbsize is
larger than 1 << MIPS_CONF1_TLBS_SIZE (256 rather than 64) and
num_wired_entries() returns 0, so the loop overwrites part of the
stack and hangs the system.

If I increase the size to 256 that boots for me, but the compiler
complains about the frame size being too large at 2064 bytes.

It seems the tlbsize is increased from 64 to 256 in the
MIPS_CONF4_MMUEXTDEF_MMUSIZEEXT case of decode_config4(), and according
to 1b362e3e350f ("MIPS: Decode c0_config4 for large TLBs.") that seems
to be expected.

Although it boots if I remove the call to r4k_tlb_uniquify(), I have run
into issues in the past when enabling transparent hugepages and hugetlb.
I'm not sure whether that's related to this, and I still seem to be able
to trigger that issue sometimes with this patch.  Attaching the panic for
that just in case, though.

Regards,
Klara Modin

Re: [PATCH v3] MIPS: mm: Prevent a TLB shutdown on initial uniquification
Posted by Maciej W. Rozycki 1 week, 2 days ago
On Fri, 28 Nov 2025, Klara Modin wrote:

> > +/*
> > + * Initialise all TLB entries with unique values that do not clash with
> > + * what we have been handed over and what we'll be using ourselves.
> > + */
> >  static void r4k_tlb_uniquify(void)
> >  {
> > -	int entry = num_wired_entries();
> > +	unsigned long tlb_vpns[1 << MIPS_CONF1_TLBS_SIZE];
> > +	int tlbsize = current_cpu_data.tlbsize;
> > +	int start = num_wired_entries();
> 
> It seems that for my Edgerouter 6P (which identifies itself as "CPU0
> revision is: 000d9602 (Cavium Octeon III)") current_cpu_data.tlbsize is
> larger than 1 << MIPS_CONF1_TLBS_SIZE (256 rather than 64) and
> num_wired_entries() returns 0, so the loop overwrites part of the
> stack and hangs the system.

 Thank you for the report.  A fix is in review already, please try it: 
<https://lore.kernel.org/r/alpine.DEB.2.21.2511280544050.36486@angie.orcam.me.uk/>.

> Although it boots if I remove the call to r4k_tlb_uniquify(), I have run
> into issues when enabling transparent hugepages and hugetlb in the past
> but not sure if that's related to this, and I still seem to be able to
> trigger that issue sometimes with this patch. Attaching the panic for
> that just in case, though.

 Unrelated.  There's an obvious clash here:

[   23.341961] Index    : 80000000
[   23.345104] PageMask : 1fe000
[   23.348073] EntryHi  : c0000000000c609b
[   23.351911] EntryLo0 : 000000000014afc7
[   23.355749] EntryLo1 : 0000000000000001
[   23.359587] Wired    : 0
[   23.362122] PageGrain: e8000000

-- so this is an attempt to create a TLB entry for a pair of 1MiB pages at 
0xc0000000000c6000 (which is already suspicious as the VPN is obviously 
not 2MiB-aligned, but the extraneous bits will be masked by hardware), and 
it collides with all these entries:

[   25.918311] Index: 193 pgmask=4kb va=c00000000005a000 asid=9b
[   25.918311] 	[ri=0 xi=0 pa=0000593d000 c=0 d=1 v=1 g=1] [ri=0 xi=0 pa=0000593e000 c=0 d=1 v=1 g=1]
[   25.933035] Index: 194 pgmask=4kb va=c000000000072000 asid=9b
[   25.933035] 	[ri=0 xi=0 pa=00005955000 c=0 d=1 v=1 g=1] [ri=0 xi=0 pa=00005956000 c=0 d=1 v=1 g=1]
[   25.947760] Index: 195 pgmask=4kb va=c000000000016000 asid=9b
[   25.947760] 	[ri=0 xi=0 pa=00000000000 c=0 d=0 v=0 g=1] [ri=0 xi=0 pa=00005886000 c=0 d=1 v=1 g=1]
[   25.962483] Index: 196 pgmask=4kb va=c00000000001a000 asid=9b
[   25.962483] 	[ri=0 xi=0 pa=00005888000 c=0 d=1 v=1 g=1] [ri=0 xi=0 pa=00005889000 c=0 d=1 v=1 g=1]

 HTH,

  Maciej
Re: [PATCH v3] MIPS: mm: Prevent a TLB shutdown on initial uniquification
Posted by Thomas Bogendoerfer 2 weeks, 2 days ago
On Thu, Nov 13, 2025 at 05:21:10AM +0000, Maciej W. Rozycki wrote:
> Depending on the particular CPU implementation a TLB shutdown may occur 
> if multiple matching entries are detected upon the execution of a TLBP 
> or the TLBWI/TLBWR instructions.  Given that we don't know what entries 
> we have been handed we need to be very careful with the initial TLB 
> setup and avoid all these instructions.
> 
> [...]

applied to mips-fixes.

Thomas.

-- 
Crap can work. Given enough thrust pigs will fly, but it's not necessarily a
good idea.                                                [ RFC1925, 2.3 ]
Re: [PATCH v3] MIPS: mm: Prevent a TLB shutdown on initial uniquification
Posted by Jiaxun Yang 3 weeks ago

On Thu, 13 Nov 2025, at 1:21 PM, Maciej W. Rozycki wrote:
> Depending on the particular CPU implementation a TLB shutdown may occur 
> if multiple matching entries are detected upon the execution of a TLBP 
> or the TLBWI/TLBWR instructions.  Given that we don't know what entries 
> we have been handed we need to be very careful with the initial TLB 
> setup and avoid all these instructions.
>
> [...]
>
> Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
> Fixes: 35ad7e181541 ("MIPS: mm: tlb-r4k: Uniquify TLB entries on init")
> Cc: stable@vger.kernel.org # v6.17+

Maybe we should drop the 6.17+ tag here given that the original patch was backported
to 

Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com> # Boston I6400, M5150 sim

This approach is indeed more robust!

Thanks
-- 
- Jiaxun
Re: [PATCH v3] MIPS: mm: Prevent a TLB shutdown on initial uniquification
Posted by Nick Bowler 3 weeks, 1 day ago
On Thu, Nov 13, 2025 at 05:21:10AM +0000, Maciej W. Rozycki wrote:
[...]
> Changes from v2 (at 
> <https://lore.kernel.org/r/alpine.DEB.2.21.2511122032400.25436@angie.orcam.me.uk/>):
> 
> - Revert the v2 update to include wired entries while reading original 
>   contents of TLB.

Everything seems fine on my R4400SC Indy with v3 too (or v2 or v1).

Thanks,
  Nick