From: Charlie Jenkins
Date: Fri, 17 Nov 2023 13:27:59 -0800
Subject: [PATCH v11 1/5] asm-generic: Improve csum_fold
Message-Id: <20231117-optimize_checksum-v11-1-7d9d954fe361@rivosinc.com>
References: <20231117-optimize_checksum-v11-0-7d9d954fe361@rivosinc.com>
In-Reply-To: <20231117-optimize_checksum-v11-0-7d9d954fe361@rivosinc.com>
To: Charlie Jenkins, Palmer Dabbelt, Conor Dooley, Samuel Holland, David Laight, Xiao Wang, Evan Green, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org
Cc: Paul Walmsley, Albert Ou, Arnd Bergmann, David Laight
X-Mailer: b4 0.12.3
X-Mailing-List: linux-kernel@vger.kernel.org

This csum_fold implementation, introduced into arch/arc by Vineet Gupta, is better than the default implementation on at least arc, x86, and riscv.
Using GCC trunk and compiling a non-inlined version with -O3, this implementation has 41.6667% fewer instructions on riscv64 and 25% fewer on x86-64. Most architectures override this default in asm, but this version should be more performant than all of those other implementations except for arm, which has barrel shifting, and sparc32, which has a carry flag.

Signed-off-by: Charlie Jenkins
Reviewed-by: David Laight
---
 include/asm-generic/checksum.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/asm-generic/checksum.h b/include/asm-generic/checksum.h
index 43e18db89c14..ad928cce268b 100644
--- a/include/asm-generic/checksum.h
+++ b/include/asm-generic/checksum.h
@@ -2,6 +2,8 @@
 #ifndef __ASM_GENERIC_CHECKSUM_H
 #define __ASM_GENERIC_CHECKSUM_H
 
+#include <linux/bitops.h>
+
 /*
  * computes the checksum of a memory block at buff, length len,
  * and adds in "sum" (32-bit)
@@ -31,9 +33,7 @@ extern __sum16 ip_fast_csum(const void *iph, unsigned int ihl);
 static inline __sum16 csum_fold(__wsum csum)
 {
 	u32 sum = (__force u32)csum;
-	sum = (sum & 0xffff) + (sum >> 16);
-	sum = (sum & 0xffff) + (sum >> 16);
-	return (__force __sum16)~sum;
+	return (__force __sum16)((~sum - ror32(sum, 16)) >> 16);
 }
 
 #endif

-- 
2.34.1