This csum_fold implementation, introduced into arch/arc by Vineet Gupta,
is better than the default implementation on at least arc, x86, arm, and
riscv. Using GCC trunk and compiling a non-inlined version, this
implementation compiles to 41.7%, 25%, and 16.7% fewer instructions on
riscv64, x86-64, and arm64 respectively with -O3 optimization.
Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
---
include/asm-generic/checksum.h | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/include/asm-generic/checksum.h b/include/asm-generic/checksum.h
index 43e18db89c14..adab9ac4312c 100644
--- a/include/asm-generic/checksum.h
+++ b/include/asm-generic/checksum.h
@@ -30,10 +30,7 @@ extern __sum16 ip_fast_csum(const void *iph, unsigned int ihl);
*/
static inline __sum16 csum_fold(__wsum csum)
{
- u32 sum = (__force u32)csum;
- sum = (sum & 0xffff) + (sum >> 16);
- sum = (sum & 0xffff) + (sum >> 16);
- return (__force __sum16)~sum;
+ return (__force __sum16)((~csum - ror32(csum, 16)) >> 16);
}
#endif
--
2.42.0
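A standalone sketch (illustrative, not part of the patch) of why the two
forms agree: ror32(csum, 16) swaps the 16-bit halves, so
~csum - ror32(csum, 16) equals 0xffffffff - (csum + ror32(csum, 16))
mod 2^32, and the top 16 bits of that are exactly the complemented
end-around-carry fold that the old two-step code computes. The userspace
harness below checks this exhaustively; the fold_classic/fold_ror names
are mine, and ror32() is reimplemented locally since linux/bitops.h is
kernel-only.

/*
 * Standalone check (userspace, not kernel code): verify that the
 * ror32-based fold matches the classic two-step fold for every
 * 32-bit input.
 */
#include <stdint.h>
#include <stdio.h>

static uint32_t ror32(uint32_t word, unsigned int shift)
{
	return (word >> (shift & 31)) | (word << ((32 - shift) & 31));
}

/* Classic generic fold: add the halves twice, then complement. */
static uint16_t fold_classic(uint32_t csum)
{
	uint32_t sum = csum;

	sum = (sum & 0xffff) + (sum >> 16);
	sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}

/*
 * Proposed fold: the subtraction folds both halves and complements in
 * one step; the borrow out of the low half supplies the end-around
 * carry, and the result lands in the top 16 bits.
 */
static uint16_t fold_ror(uint32_t csum)
{
	return (uint16_t)((~csum - ror32(csum, 16)) >> 16);
}

int main(void)
{
	uint64_t x;

	/* Exhaustive over all 2^32 inputs; takes on the order of a minute. */
	for (x = 0; x <= 0xffffffff; x++) {
		if (fold_classic((uint32_t)x) != fold_ror((uint32_t)x)) {
			printf("mismatch at %#llx\n", (unsigned long long)x);
			return 1;
		}
	}
	printf("all 2^32 inputs match\n");
	return 0;
}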
From: Charlie Jenkins
> Sent: 15 September 2023 04:50
>
> This csum_fold implementation, introduced into arch/arc by Vineet Gupta,
> is better than the default implementation on at least arc, x86, arm, and
> riscv. Using GCC trunk and compiling a non-inlined version, this
> implementation compiles to 41.7%, 25%, and 16.7% fewer instructions on
> riscv64, x86-64, and arm64 respectively with -O3 optimization.
Nit-picking the commit message...
Some of those architectures have their own asm implementation.
The arm one is better than the C code below, the x86 ones aren't.
I think that only sparc32 (carry flag but no rotate) and
arm/arm64 (barrel shifter on every instruction) have versions
that are better than the one here.
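In C terms, the add-with-rotate flavour those arches benefit from is
roughly the sketch below (an illustration of the idea under an assumed
name, not the actual arch asm):

/* Illustrative add-with-rotate fold (assumed name, not arch code):
 * csum + ror32(csum, 16) leaves the end-around-carry fold of the two
 * halves in the top 16 bits, which a barrel shifter gets almost for
 * free; complement and shift down to finish. */
static inline u16 csum_fold_addror(u32 csum)
{
	return (u16)(~(csum + ror32(csum, 16)) >> 16);
}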
Since I suggested it to Charlie:
Reviewed-by: David Laight <david.laight@aculab.com>
>
> Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
> ---
> include/asm-generic/checksum.h | 5 +----
> 1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/include/asm-generic/checksum.h b/include/asm-generic/checksum.h
> index 43e18db89c14..adab9ac4312c 100644
> --- a/include/asm-generic/checksum.h
> +++ b/include/asm-generic/checksum.h
> @@ -30,10 +30,7 @@ extern __sum16 ip_fast_csum(const void *iph, unsigned int ihl);
> */
> static inline __sum16 csum_fold(__wsum csum)
> {
> - u32 sum = (__force u32)csum;
You'll need to reinstate that line to stop sparse from complaining.
> - sum = (sum & 0xffff) + (sum >> 16);
> - sum = (sum & 0xffff) + (sum >> 16);
> - return (__force __sum16)~sum;
> + return (__force __sum16)((~csum - ror32(csum, 16)) >> 16);
> }
> #endif
>
>
> --
> 2.42.0
On Fri, Sep 15, 2023 at 07:29:23AM +0000, David Laight wrote:
> From: Charlie Jenkins
> > Sent: 15 September 2023 04:50
> >
> > This csum_fold implementation, introduced into arch/arc by Vineet Gupta,
> > is better than the default implementation on at least arc, x86, arm, and
> > riscv. Using GCC trunk and compiling a non-inlined version, this
> > implementation compiles to 41.7%, 25%, and 16.7% fewer instructions on
> > riscv64, x86-64, and arm64 respectively with -O3 optimization.
>
> Nit-picking the commit message...
> Some of those architectures have their own asm implementation.
> The arm one is better than the C code below, the x86 ones aren't.
I can clean up the commit message to be more accurate.
>
> I think that only sparc32 (carry flag but no rotate) and
> arm/arm64 (barrel shifter on every instruction) have versions
> that are better than the one here.
>
> Since I suggested it to Charlie:
>
> Reviewed-by: David Laight <david.laight@aculab.com>
>
> >
> > Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
> > ---
> > include/asm-generic/checksum.h | 5 +----
> > 1 file changed, 1 insertion(+), 4 deletions(-)
> >
> > diff --git a/include/asm-generic/checksum.h b/include/asm-generic/checksum.h
> > index 43e18db89c14..adab9ac4312c 100644
> > --- a/include/asm-generic/checksum.h
> > +++ b/include/asm-generic/checksum.h
> > @@ -30,10 +30,7 @@ extern __sum16 ip_fast_csum(const void *iph, unsigned int ihl);
> > */
> > static inline __sum16 csum_fold(__wsum csum)
> > {
> > - u32 sum = (__force u32)csum;
>
> You'll need to reinstate that line to stop sparse from complaining.
I will add that back, thanks.
- Charlie
>
> > - sum = (sum & 0xffff) + (sum >> 16);
> > - sum = (sum & 0xffff) + (sum >> 16);
> > - return (__force __sum16)~sum;
> > + return (__force __sum16)((~csum - ror32(csum, 16)) >> 16);
> > }
> > #endif
> >
> >
> > --
> > 2.42.0
>
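Following up on the sparse point: a minimal sketch of what the generic
helper might look like with the intermediate u32 reinstated (an assumed
shape for the respin, not the v2 that was actually posted):

static inline __sum16 csum_fold(__wsum csum)
{
	/* Keep the __force u32 intermediate so sparse doesn't warn
	 * about arithmetic on the restricted __wsum type. */
	u32 sum = (__force u32)csum;

	return (__force __sum16)((~sum - ror32(sum, 16)) >> 16);
}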