[PATCH] arm64: Fix double TCR_T0SZ_OFFSET shift
Posted by Seongsu Park 1 year, 10 months ago
We have already shifted the value of t0sz in TCR_T0SZ by TCR_T0SZ_OFFSET.
So, the TCR_T0SZ_OFFSET shift here should be removed.

Co-developed-by: Leem ChaeHoon <infinite.run@gmail.com>
Signed-off-by: Leem ChaeHoon <infinite.run@gmail.com>
Co-developed-by: Gyeonggeon Choi <gychoi@student.42seoul.kr>
Signed-off-by: Gyeonggeon Choi <gychoi@student.42seoul.kr>
Co-developed-by: Soomin Cho <to.soomin@gmail.com>
Signed-off-by: Soomin Cho <to.soomin@gmail.com>
Co-developed-by: DaeRo Lee <skseofh@gmail.com>
Signed-off-by: DaeRo Lee <skseofh@gmail.com>
Co-developed-by: kmasta <kmasta.study@gmail.com>
Signed-off-by: kmasta <kmasta.study@gmail.com>
Signed-off-by: Seongsu Park <sgsu.park@samsung.com>
---
 arch/arm64/include/asm/mmu_context.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index c768d16b81a4..58de99836d2e 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -76,7 +76,7 @@ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
 		return;
 
 	tcr &= ~TCR_T0SZ_MASK;
-	tcr |= t0sz << TCR_T0SZ_OFFSET;
+	tcr |= t0sz;
 	write_sysreg(tcr, tcr_el1);
 	isb();
 }
-- 
2.34.1
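[Editorial note: for context, the shift removed by this patch is redundant because callers of __cpu_set_tcr_t0sz() pass a value built by the TCR_T0SZ() helper, which already applies TCR_T0SZ_OFFSET. The self-contained sketch below illustrates the double shift with simplified stand-in macros; the values are illustrative and not quoted from pgtable-hwdef.h, though TCR_T0SZ_OFFSET really is 0 upstream, which is why the extra shift was harmless in practice.]

	#include <stdio.h>

	/* Simplified stand-ins for the kernel's TCR_T0SZ macros (illustrative). */
	#define TCR_T0SZ_OFFSET	0UL
	#define TCR_T0SZ_MASK	(0x3fUL << TCR_T0SZ_OFFSET)
	#define TCR_T0SZ(x)	((64UL - (x)) << TCR_T0SZ_OFFSET)

	int main(void)
	{
		unsigned long tcr = 0;
		/* Callers hand __cpu_set_tcr_t0sz() a value that TCR_T0SZ()
		 * has already shifted into place. */
		unsigned long t0sz = TCR_T0SZ(48);

		tcr &= ~TCR_T0SZ_MASK;
		tcr |= t0sz << TCR_T0SZ_OFFSET;	/* old code: shifts a second time */
		printf("double shift: %#lx\n", tcr);

		tcr = 0;
		tcr |= t0sz;			/* patched code: no extra shift */
		printf("single shift: %#lx\n", tcr);

		/* Both print 0x10 only because TCR_T0SZ_OFFSET is 0; a non-zero
		 * offset would misplace the field in the first form. */
		return 0;
	}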
Re: [PATCH] arm64: Fix double TCR_T0SZ_OFFSET shift
Posted by Will Deacon 1 year, 10 months ago
On Tue, Apr 02, 2024 at 07:49:50PM +0900, Seongsu Park wrote:
> We have already shifted the value of t0sz in TCR_T0SZ by TCR_T0SZ_OFFSET.
> So, the TCR_T0SZ_OFFSET shift here should be removed.
> 
> Co-developed-by: Leem ChaeHoon <infinite.run@gmail.com>
> Signed-off-by: Leem ChaeHoon <infinite.run@gmail.com>
> Co-developed-by: Gyeonggeon Choi <gychoi@student.42seoul.kr>
> Signed-off-by: Gyeonggeon Choi <gychoi@student.42seoul.kr>
> Co-developed-by: Soomin Cho <to.soomin@gmail.com>
> Signed-off-by: Soomin Cho <to.soomin@gmail.com>
> Co-developed-by: DaeRo Lee <skseofh@gmail.com>
> Signed-off-by: DaeRo Lee <skseofh@gmail.com>
> Co-developed-by: kmasta <kmasta.study@gmail.com>
> Signed-off-by: kmasta <kmasta.study@gmail.com>
> Signed-off-by: Seongsu Park <sgsu.park@samsung.com>

heh, that's quite a lot of people. Did you remove three chars each? :p

> ---
>  arch/arm64/include/asm/mmu_context.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> index c768d16b81a4..58de99836d2e 100644
> --- a/arch/arm64/include/asm/mmu_context.h
> +++ b/arch/arm64/include/asm/mmu_context.h
> @@ -76,7 +76,7 @@ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
>  		return;
>  
>  	tcr &= ~TCR_T0SZ_MASK;
> -	tcr |= t0sz << TCR_T0SZ_OFFSET;
> +	tcr |= t0sz;

Thankfully, TCR_T0SZ_OFFSET is 0 so this isn't as alarming as it looks.
Even so, if we're going to make the code consistent, then shouldn't the
earlier conditional be updated too?

	if ((tcr & TCR_T0SZ_MASK) >> TCR_T0SZ_OFFSET == t0sz)
		return;

seems to assume that t0sz is unshifted.

Will
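[Editorial note: a minimal sketch of the fully consistent form Will is hinting at, where both the early-out comparison and the OR treat t0sz as already shifted by TCR_T0SZ_OFFSET. This only illustrates the point above; it is not taken from any posted v2 of the patch:]

	static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
	{
		unsigned long tcr = read_sysreg(tcr_el1);

		/* t0sz is already shifted by TCR_T0SZ_OFFSET, so compare the
		 * masked field against it directly... */
		if ((tcr & TCR_T0SZ_MASK) == t0sz)
			return;

		tcr &= ~TCR_T0SZ_MASK;
		/* ...and OR it in without shifting again. */
		tcr |= t0sz;
		write_sysreg(tcr, tcr_el1);
		isb();
	}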
RE: [PATCH] arm64: Fix double TCR_T0SZ_OFFSET shift
Posted by Seongsu Park 1 year, 10 months ago

> On Tue, Apr 02, 2024 at 07:49:50PM +0900, Seongsu Park wrote:
> > We have already shifted the value of t0sz in TCR_T0SZ by TCR_T0SZ_OFFSET.
> > So, the TCR_T0SZ_OFFSET shift here should be removed.
> >
> > Co-developed-by: Leem ChaeHoon <infinite.run@gmail.com>
> > Signed-off-by: Leem ChaeHoon <infinite.run@gmail.com>
> > Co-developed-by: Gyeonggeon Choi <gychoi@student.42seoul.kr>
> > Signed-off-by: Gyeonggeon Choi <gychoi@student.42seoul.kr>
> > Co-developed-by: Soomin Cho <to.soomin@gmail.com>
> > Signed-off-by: Soomin Cho <to.soomin@gmail.com>
> > Co-developed-by: DaeRo Lee <skseofh@gmail.com>
> > Signed-off-by: DaeRo Lee <skseofh@gmail.com>
> > Co-developed-by: kmasta <kmasta.study@gmail.com>
> > Signed-off-by: kmasta <kmasta.study@gmail.com>
> > Signed-off-by: Seongsu Park <sgsu.park@samsung.com>
> 
> heh, that's quite a lot of people. Did you remove three chars each? :p
We are studying the arm64 Linux kernel together for 7 hours every Saturday! :)
> 
> > ---
> >  arch/arm64/include/asm/mmu_context.h | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> > index c768d16b81a4..58de99836d2e 100644
> > --- a/arch/arm64/include/asm/mmu_context.h
> > +++ b/arch/arm64/include/asm/mmu_context.h
> > @@ -76,7 +76,7 @@ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
> >  		return;
> >
> >  	tcr &= ~TCR_T0SZ_MASK;
> > -	tcr |= t0sz << TCR_T0SZ_OFFSET;
> > +	tcr |= t0sz;
> 
> Thankfully, TCR_T0SZ_OFFSET is 0 so this isn't as alarming as it looks.
> Even so, if we're going to make the code consistent, then shouldn't the
> earlier conditional be updated too?
> 
> 	if ((tcr & TCR_T0SZ_MASK) >> TCR_T0SZ_OFFSET == t0sz)
> 		return;
> 
> seems to assume that t0sz is unshifted.
> 
> Will
Thank you for the feedback. I'll send a v2 patch.