The thread_info.cpu field is 32 bits wide,
but it is accessed with an XLEN-bit load, which is a 64-bit load on RV64.
Fix it by loading the field with lw instead of REG_L.
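
For illustration, a minimal user-space sketch of the failure mode (the
struct layout below is hypothetical, not the real struct thread_info):
on a little-endian RV64 system, an XLEN-wide (8-byte) load starting at
a 32-bit field also pulls in whatever happens to follow it in memory.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical layout for illustration only. */
struct fake_thread_info {
	int32_t cpu;      /* 32-bit field, like thread_info.cpu */
	int32_t neighbor; /* whatever follows it in memory */
};

int main(void)
{
	struct fake_thread_info ti = { .cpu = 3, .neighbor = -1 };
	uint64_t xlen_load;

	/* What REG_L does on RV64: an 8-byte load starting at cpu. */
	memcpy(&xlen_load, &ti, sizeof(xlen_load));
	printf("lw-style value: %d\n", ti.cpu);	/* 3 */
	printf("ld-style value: 0x%llx\n",	/* 0xffffffff00000003 */
	       (unsigned long long)xlen_load);
	return 0;
}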
Changes in v3:
- Replace spaces with tabs to keep the comment aligned with the code block
- Add "Fixes" tag
Changes in v2:
- Add a comment to explain why lw is used instead of REG_L.
- Correct the commit message
Fixes: 503638e0babf3 ("riscv: Stop emitting preventive sfence.vma for new vmalloc mappings")
Signed-off-by: Jimmy Ho <jimmy.ho@sifive.com>
---
arch/riscv/kernel/entry.S | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index 3a0ec6fd5956..492ae936dccd 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -45,8 +45,10 @@
* Computes:
* a0 = &new_vmalloc[BIT_WORD(cpu)]
* a1 = BIT_MASK(cpu)
+ *
+ * Use lw instead of REG_L because the thread_info.cpu field is 32 bits wide.
*/
- REG_L a2, TASK_TI_CPU(tp)
+ lw a2, TASK_TI_CPU(tp)
/*
* Compute the new_vmalloc element position:
* (cpu / 64) * 8 = (cpu >> 6) << 3
--
2.39.3
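
For reference, the shift arithmetic in the comment above,
(cpu / 64) * 8 = (cpu >> 6) << 3, mirrors the kernel's BIT_WORD() and
BIT_MASK() helpers. A minimal user-space sketch, re-implementing the
macros for illustration and assuming 64-bit longs:

#include <stdio.h>

#define BITS_PER_LONG	64
#define BIT_WORD(nr)	((nr) / BITS_PER_LONG)
#define BIT_MASK(nr)	(1UL << ((nr) % BITS_PER_LONG))

int main(void)
{
	unsigned int cpu = 70;

	/* Word index and byte offset into the new_vmalloc[] bitmap. */
	printf("word: %u, byte offset: %u\n",
	       (unsigned int)BIT_WORD(cpu), (cpu >> 6) << 3);
	/* Bit within that word: 70 % 64 = 6, so mask = 0x40. */
	printf("mask: 0x%lx\n", BIT_MASK(cpu));
	return 0;
}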
On Tue, Aug 19, 2025 at 03:13:18PM +0800, Jimmy Ho wrote:
> The thread_info.cpu field is 32 bits wide,
> but it is accessed with an XLEN-bit load, which is a 64-bit load on RV64.
> Fix it by loading the field with lw instead of REG_L.
>
> Changes in v3:
> - Replace spaces with tabs to keep the comment aligned with the code block
> - Add "Fixes" tag
>
> Changes in v2:
> - Add a comment to explain why lw is used instead of REG_L.
> - Correct the commit message

The changelog belongs below the --- in the patch.

Otherwise,

Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
On Tue, Aug 19, 2025 at 03:13:18PM +0800, Jimmy Ho wrote:
> The thread_info.cpu field is 32 bits wide,
> but it is accessed with an XLEN-bit load, which is a 64-bit load on RV64.
> Fix it by loading the field with lw instead of REG_L.
>
> Changes in v3:
> - Replace spaces with tabs to keep the comment aligned with the code block
> - Add "Fixes" tag

Please add a link to the previous version.

> Changes in v2:
> - Add a comment to explain why lw is used instead of REG_L.
> - Correct the commit message

Ditto.

Additionally, I see a patch here that serves the same purpose as yours,
but it is more comprehensive and was submitted earlier.

In any case, if you'd like to proceed, here is the:

Acked-by: Troy Mitchell <troy.mitchell@linux.spacemit.com>
On Tue, Aug 19, 2025 at 03:19:17PM +0800, Troy Mitchell wrote:
> Additionally, I see a patch here that serves the same purpose as yours,

Sorry, I forgot to put the link:
https://lore.kernel.org/all/20250722160556.2216925-4-rkrcmar@ventanamicro.com/

- Troy

> but it is more comprehensive and was submitted earlier.
On Tue, Aug 19, 2025 at 03:20:20PM +0800, Troy Mitchell wrote:
> On Tue, Aug 19, 2025 at 03:19:17PM +0800, Troy Mitchell wrote:
> > Additionally, I see a patch here that serves the same purpose as yours,
> Sorry, I forgot to put the link:
> https://lore.kernel.org/all/20250722160556.2216925-4-rkrcmar@ventanamicro.com/

Ah, yes. I forgot about Radim's patch. We should go with that.

Thanks,
drew