From nobody Wed Dec 31 04:51:07 2025
From: Corentin Labbe
To: davem@davemloft.net, heiko@sntech.de, herbert@gondor.apana.org.au,
	krzysztof.kozlowski+dt@linaro.org, mturquette@baylibre.com,
	p.zabel@pengutronix.de, robh+dt@kernel.org, sboyd@kernel.org
Cc: ricardo@pardini.net, devicetree@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-clk@vger.kernel.org,
	linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-rockchip@lists.infradead.org
Subject: [PATCH 1/6] dt-bindings: crypto: add support for rockchip,crypto-rk3588
Date: Tue, 7 Nov 2023 15:55:27 +0000
Message-Id: <20231107155532.3747113-2-clabbe@baylibre.com>
In-Reply-To: <20231107155532.3747113-1-clabbe@baylibre.com>
References: <20231107155532.3747113-1-clabbe@baylibre.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Add device tree binding documentation for the Rockchip cryptographic
offloader V2.

Signed-off-by: Corentin Labbe
---
 .../crypto/rockchip,rk3588-crypto.yaml        | 65 +++++++++++++++++++
 1 file changed, 65 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/crypto/rockchip,rk3588-crypto.yaml

diff --git a/Documentation/devicetree/bindings/crypto/rockchip,rk3588-crypto.yaml b/Documentation/devicetree/bindings/crypto/rockchip,rk3588-crypto.yaml
new file mode 100644
index 000000000000..07024cf4fb0e
--- /dev/null
+++ b/Documentation/devicetree/bindings/crypto/rockchip,rk3588-crypto.yaml
@@ -0,0 +1,65 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/crypto/rockchip,rk3588-crypto.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Rockchip cryptographic offloader V2
+
+maintainers:
+  - Corentin Labbe
+
+properties:
+  compatible:
+    enum:
+      - rockchip,rk3568-crypto
+      - rockchip,rk3588-crypto
+
+  reg:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+
+  clocks:
+    minItems: 3
+
+  clock-names:
+    items:
+      - const: core
+      - const: a
+      - const: h
+
+  resets:
+    minItems: 1
+
+  reset-names:
+    items:
+      - const: core
+
+required:
+  - compatible
+  - reg
+  - interrupts
+  - clocks
+  - clock-names
+  - resets
+  - reset-names
+
+additionalProperties: false
+
+examples:
+  - |
+    #include
+    #include
+    #include
+    crypto@fe370000 {
+      compatible = "rockchip,rk3588-crypto";
+      reg = <0xfe370000 0x4000>;
+      interrupts = ;
+      clocks = <&scmi_clk SCMI_CRYPTO_CORE>, <&scmi_clk SCMI_ACLK_SECURE_NS>,
+               <&scmi_clk SCMI_HCLK_SECURE_NS>;
+      clock-names = "core", "a", "h";
+      resets = <&scmi_reset SRST_CRYPTO_CORE>;
+      reset-names = "core";
+    };
-- 
2.41.0

From nobody Wed Dec 31 04:51:07 2025
From: Corentin Labbe
Subject: [PATCH 2/6] MAINTAINERS: add new dt-binding doc to the right entry
Date: Tue, 7 Nov 2023 15:55:28 +0000
Message-Id: <20231107155532.3747113-3-clabbe@baylibre.com>
In-Reply-To: <20231107155532.3747113-1-clabbe@baylibre.com>
References: <20231107155532.3747113-1-clabbe@baylibre.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The Rockchip crypto driver has a new dt-binding file; add it to the
corresponding MAINTAINERS entry.
Signed-off-by: Corentin Labbe
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 8a43b16aecaa..f9ae35a13e70 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -18697,6 +18697,7 @@ M:	Corentin Labbe
 L:	linux-crypto@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/crypto/rockchip,rk3288-crypto.yaml
+F:	Documentation/devicetree/bindings/crypto/rockchip,rk3588-crypto.yaml
 F:	drivers/crypto/rockchip/
 
 ROCKCHIP I2S TDM DRIVER
-- 
2.41.0

From nobody Wed Dec 31 04:51:07 2025
From: Corentin Labbe
Subject: [PATCH 3/6] ARM64: dts: rk3588: add crypto node
Date: Tue, 7 Nov 2023 15:55:29 +0000
Message-Id: <20231107155532.3747113-4-clabbe@baylibre.com>
In-Reply-To: <20231107155532.3747113-1-clabbe@baylibre.com>
References: <20231107155532.3747113-1-clabbe@baylibre.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The rk3588 has a crypto IP handled by the rk3588 crypto driver, so add
a node for it.
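[Editor's note: the dtsi node below uses two cells each for the address and
size, as the arm64 Rockchip buses set #address-cells = <2> and
#size-cells = <2>, so `reg = <0x0 0xfe370000 0x0 0x2000>` describes a
64-bit address/size pair. A minimal host-side sketch of how the two cells
combine; the helper name is mine, not kernel code:]

```python
def cells_to_u64(hi: int, lo: int) -> int:
    """Combine two 32-bit devicetree cells into one 64-bit value.

    DT cells are big-endian in cell order: the first cell holds the
    upper 32 bits, the second the lower 32 bits.
    """
    return (hi << 32) | lo

# reg = <0x0 0xfe370000 0x0 0x2000> from the crypto node below:
addr = cells_to_u64(0x0, 0xFE370000)
size = cells_to_u64(0x0, 0x2000)
print(hex(addr), hex(size))  # → 0xfe370000 0x2000
```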
Signed-off-by: Corentin Labbe
---
 arch/arm64/boot/dts/rockchip/rk3588s.dtsi | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/arm64/boot/dts/rockchip/rk3588s.dtsi b/arch/arm64/boot/dts/rockchip/rk3588s.dtsi
index 7064c0e9179f..a2ba5ebec38d 100644
--- a/arch/arm64/boot/dts/rockchip/rk3588s.dtsi
+++ b/arch/arm64/boot/dts/rockchip/rk3588s.dtsi
@@ -1523,6 +1523,18 @@ sdhci: mmc@fe2e0000 {
 		status = "disabled";
 	};
 
+	crypto: crypto@fe370000 {
+		compatible = "rockchip,rk3588-crypto";
+		reg = <0x0 0xfe370000 0x0 0x2000>;
+		interrupts = ;
+		clocks = <&scmi_clk SCMI_CRYPTO_CORE>, <&scmi_clk SCMI_ACLK_SECURE_NS>,
+			 <&scmi_clk SCMI_HCLK_SECURE_NS>;
+		clock-names = "core", "aclk", "hclk";
+		resets = <&scmi_reset SRST_CRYPTO_CORE>;
+		reset-names = "core";
+		status = "okay";
+	};
+
 	i2s0_8ch: i2s@fe470000 {
 		compatible = "rockchip,rk3588-i2s-tdm";
 		reg = <0x0 0xfe470000 0x0 0x1000>;
-- 
2.41.0

From nobody Wed Dec 31 04:51:07 2025
From: Corentin Labbe
Subject: [PATCH 4/6] ARM64: dts: rk356x: add crypto node
Date: Tue, 7 Nov 2023 15:55:30 +0000
Message-Id: <20231107155532.3747113-5-clabbe@baylibre.com>
In-Reply-To: <20231107155532.3747113-1-clabbe@baylibre.com>
References: <20231107155532.3747113-1-clabbe@baylibre.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Both the RK3566 and RK3568 have a crypto IP handled by the rk3588
crypto driver, so add a node for it.
Tested-by: Ricardo Pardini
Signed-off-by: Corentin Labbe
---
 arch/arm64/boot/dts/rockchip/rk356x.dtsi | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/arm64/boot/dts/rockchip/rk356x.dtsi b/arch/arm64/boot/dts/rockchip/rk356x.dtsi
index 0964761e3ce9..c94a1b535c32 100644
--- a/arch/arm64/boot/dts/rockchip/rk356x.dtsi
+++ b/arch/arm64/boot/dts/rockchip/rk356x.dtsi
@@ -1070,6 +1070,18 @@ sdhci: mmc@fe310000 {
 		status = "disabled";
 	};
 
+	crypto: crypto@fe380000 {
+		compatible = "rockchip,rk3568-crypto";
+		reg = <0x0 0xfe380000 0x0 0x2000>;
+		interrupts = ;
+		clocks = <&cru ACLK_CRYPTO_NS>, <&cru HCLK_CRYPTO_NS>,
+			 <&cru CLK_CRYPTO_NS_CORE>;
+		clock-names = "aclk", "hclk", "core";
+		resets = <&cru SRST_CRYPTO_NS_CORE>;
+		reset-names = "core";
+		status = "okay";
+	};
+
 	i2s0_8ch: i2s@fe400000 {
 		compatible = "rockchip,rk3568-i2s-tdm";
 		reg = <0x0 0xfe400000 0x0 0x1000>;
-- 
2.41.0

From nobody Wed Dec 31 04:51:07 2025
From: Corentin Labbe
Subject: [PATCH 5/6] reset: rockchip: secure reset must be used by SCMI
Date: Tue, 7 Nov 2023 15:55:31 +0000
Message-Id: <20231107155532.3747113-6-clabbe@baylibre.com>
In-Reply-To: <20231107155532.3747113-1-clabbe@baylibre.com>
References: <20231107155532.3747113-1-clabbe@baylibre.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

While working on the rk3588 crypto driver, I lost a lot of time
understanding why resetting the IP failed. The
RK3588_SECURECRU_RESET_OFFSET registers live in the secure world, so the
kernel cannot operate on them directly. All resets in this block must be
handled via SCMI calls.
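[Editor's note: the SECURECRU macro removed below encoded each reset as
0x10000*4 + reg*16 + bit, while the replacement dt-bindings values appear
to be the bare per-block index reg*16 + bit within the SECURECRU block. A
small sketch cross-checking a few of the new constants against that
encoding; the helper name is mine, not kernel code:]

```python
def securecru_id(reg: int, bit: int) -> int:
    """Per-block reset index: 16 reset bits per SOFTRST_CON register."""
    return reg * 16 + bit

# (reg, bit) pairs come from the removed rst-rk3588.c table; expected
# values come from the new rockchip,rk3588-cru.h hunk in this patch.
assert securecru_id(0, 15) == 15   # SRST_CRYPTO_CORE
assert securecru_id(1, 0) == 16    # SRST_CRYPTO_PKA
assert securecru_id(2, 5) == 37    # SRST_H_BOOTROM_NS
assert securecru_id(3, 6) == 54    # SRST_TRNG_S
print("ok")
```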
Signed-off-by: Corentin Labbe --- drivers/clk/rockchip/rst-rk3588.c | 42 ------------ .../dt-bindings/reset/rockchip,rk3588-cru.h | 68 +++++++++---------- 2 files changed, 34 insertions(+), 76 deletions(-) diff --git a/drivers/clk/rockchip/rst-rk3588.c b/drivers/clk/rockchip/rst-r= k3588.c index e855bb8d5413..6556d9d3c7ab 100644 --- a/drivers/clk/rockchip/rst-rk3588.c +++ b/drivers/clk/rockchip/rst-rk3588.c @@ -16,9 +16,6 @@ /* 0xFD7C8000 + 0x0A00 */ #define RK3588_PHPTOPCRU_RESET_OFFSET(id, reg, bit) [id] =3D (0x8000*4 + r= eg * 16 + bit) =20 -/* 0xFD7D0000 + 0x0A00 */ -#define RK3588_SECURECRU_RESET_OFFSET(id, reg, bit) [id] =3D (0x10000*4 + = reg * 16 + bit) - /* 0xFD7F0000 + 0x0A00 */ #define RK3588_PMU1CRU_RESET_OFFSET(id, reg, bit) [id] =3D (0x30000*4 + re= g * 16 + bit) =20 @@ -806,45 +803,6 @@ static const int rk3588_register_offset[] =3D { RK3588_PMU1CRU_RESET_OFFSET(SRST_P_PMU0IOC, 5, 4), RK3588_PMU1CRU_RESET_OFFSET(SRST_P_GPIO0, 5, 5), RK3588_PMU1CRU_RESET_OFFSET(SRST_GPIO0, 5, 6), - - /* SECURECRU_SOFTRST_CON00 */ - RK3588_SECURECRU_RESET_OFFSET(SRST_A_SECURE_NS_BIU, 0, 10), - RK3588_SECURECRU_RESET_OFFSET(SRST_H_SECURE_NS_BIU, 0, 11), - RK3588_SECURECRU_RESET_OFFSET(SRST_A_SECURE_S_BIU, 0, 12), - RK3588_SECURECRU_RESET_OFFSET(SRST_H_SECURE_S_BIU, 0, 13), - RK3588_SECURECRU_RESET_OFFSET(SRST_P_SECURE_S_BIU, 0, 14), - RK3588_SECURECRU_RESET_OFFSET(SRST_CRYPTO_CORE, 0, 15), - - /* SECURECRU_SOFTRST_CON01 */ - RK3588_SECURECRU_RESET_OFFSET(SRST_CRYPTO_PKA, 1, 0), - RK3588_SECURECRU_RESET_OFFSET(SRST_CRYPTO_RNG, 1, 1), - RK3588_SECURECRU_RESET_OFFSET(SRST_A_CRYPTO, 1, 2), - RK3588_SECURECRU_RESET_OFFSET(SRST_H_CRYPTO, 1, 3), - RK3588_SECURECRU_RESET_OFFSET(SRST_KEYLADDER_CORE, 1, 9), - RK3588_SECURECRU_RESET_OFFSET(SRST_KEYLADDER_RNG, 1, 10), - RK3588_SECURECRU_RESET_OFFSET(SRST_A_KEYLADDER, 1, 11), - RK3588_SECURECRU_RESET_OFFSET(SRST_H_KEYLADDER, 1, 12), - RK3588_SECURECRU_RESET_OFFSET(SRST_P_OTPC_S, 1, 13), - RK3588_SECURECRU_RESET_OFFSET(SRST_OTPC_S, 1, 
14), - RK3588_SECURECRU_RESET_OFFSET(SRST_WDT_S, 1, 15), - - /* SECURECRU_SOFTRST_CON02 */ - RK3588_SECURECRU_RESET_OFFSET(SRST_T_WDT_S, 2, 0), - RK3588_SECURECRU_RESET_OFFSET(SRST_H_BOOTROM, 2, 1), - RK3588_SECURECRU_RESET_OFFSET(SRST_A_DCF, 2, 2), - RK3588_SECURECRU_RESET_OFFSET(SRST_P_DCF, 2, 3), - RK3588_SECURECRU_RESET_OFFSET(SRST_H_BOOTROM_NS, 2, 5), - RK3588_SECURECRU_RESET_OFFSET(SRST_P_KEYLADDER, 2, 14), - RK3588_SECURECRU_RESET_OFFSET(SRST_H_TRNG_S, 2, 15), - - /* SECURECRU_SOFTRST_CON03 */ - RK3588_SECURECRU_RESET_OFFSET(SRST_H_TRNG_NS, 3, 0), - RK3588_SECURECRU_RESET_OFFSET(SRST_D_SDMMC_BUFFER, 3, 1), - RK3588_SECURECRU_RESET_OFFSET(SRST_H_SDMMC, 3, 2), - RK3588_SECURECRU_RESET_OFFSET(SRST_H_SDMMC_BUFFER, 3, 3), - RK3588_SECURECRU_RESET_OFFSET(SRST_SDMMC, 3, 4), - RK3588_SECURECRU_RESET_OFFSET(SRST_P_TRNG_CHK, 3, 5), - RK3588_SECURECRU_RESET_OFFSET(SRST_TRNG_S, 3, 6), }; =20 void rk3588_rst_init(struct device_node *np, void __iomem *reg_base) diff --git a/include/dt-bindings/reset/rockchip,rk3588-cru.h b/include/dt-b= indings/reset/rockchip,rk3588-cru.h index d4264db2a07f..c0d08ae78cd5 100644 --- a/include/dt-bindings/reset/rockchip,rk3588-cru.h +++ b/include/dt-bindings/reset/rockchip,rk3588-cru.h @@ -716,39 +716,39 @@ #define SRST_P_GPIO0 627 #define SRST_GPIO0 628 =20 -#define SRST_A_SECURE_NS_BIU 629 -#define SRST_H_SECURE_NS_BIU 630 -#define SRST_A_SECURE_S_BIU 631 -#define SRST_H_SECURE_S_BIU 632 -#define SRST_P_SECURE_S_BIU 633 -#define SRST_CRYPTO_CORE 634 - -#define SRST_CRYPTO_PKA 635 -#define SRST_CRYPTO_RNG 636 -#define SRST_A_CRYPTO 637 -#define SRST_H_CRYPTO 638 -#define SRST_KEYLADDER_CORE 639 -#define SRST_KEYLADDER_RNG 640 -#define SRST_A_KEYLADDER 641 -#define SRST_H_KEYLADDER 642 -#define SRST_P_OTPC_S 643 -#define SRST_OTPC_S 644 -#define SRST_WDT_S 645 - -#define SRST_T_WDT_S 646 -#define SRST_H_BOOTROM 647 -#define SRST_A_DCF 648 -#define SRST_P_DCF 649 -#define SRST_H_BOOTROM_NS 650 -#define SRST_P_KEYLADDER 651 -#define 
SRST_H_TRNG_S 652 - -#define SRST_H_TRNG_NS 653 -#define SRST_D_SDMMC_BUFFER 654 -#define SRST_H_SDMMC 655 -#define SRST_H_SDMMC_BUFFER 656 -#define SRST_SDMMC 657 -#define SRST_P_TRNG_CHK 658 -#define SRST_TRNG_S 659 +#define SRST_A_SECURE_NS_BIU 10 +#define SRST_H_SECURE_NS_BIU 11 +#define SRST_A_SECURE_S_BIU 12 +#define SRST_H_SECURE_S_BIU 13 +#define SRST_P_SECURE_S_BIU 14 +#define SRST_CRYPTO_CORE 15 + +#define SRST_CRYPTO_PKA 16 +#define SRST_CRYPTO_RNG 17 +#define SRST_A_CRYPTO 18 +#define SRST_H_CRYPTO 19 +#define SRST_KEYLADDER_CORE 25 +#define SRST_KEYLADDER_RNG 26 +#define SRST_A_KEYLADDER 27 +#define SRST_H_KEYLADDER 28 +#define SRST_P_OTPC_S 29 +#define SRST_OTPC_S 30 +#define SRST_WDT_S 31 + +#define SRST_T_WDT_S 32 +#define SRST_H_BOOTROM 33 +#define SRST_A_DCF 34 +#define SRST_P_DCF 35 +#define SRST_H_BOOTROM_NS 37 +#define SRST_P_KEYLADDER 46 +#define SRST_H_TRNG_S 47 + +#define SRST_H_TRNG_NS 48 +#define SRST_D_SDMMC_BUFFER 49 +#define SRST_H_SDMMC 50 +#define SRST_H_SDMMC_BUFFER 51 +#define SRST_SDMMC 52 +#define SRST_P_TRNG_CHK 53 +#define SRST_TRNG_S 54 =20 #endif --=20 2.41.0 From nobody Wed Dec 31 04:51:07 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C3EA0C4332F for ; Tue, 7 Nov 2023 16:05:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344380AbjKGQFP (ORCPT ); Tue, 7 Nov 2023 11:05:15 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54104 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344257AbjKGQEe (ORCPT ); Tue, 7 Nov 2023 11:04:34 -0500 Received: from mail-lf1-x136.google.com (mail-lf1-x136.google.com [IPv6:2a00:1450:4864:20::136]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 356E647AD for ; Tue, 7 Nov 2023 07:56:02 -0800 (PST) Received: by 
mail-lf1-x136.google.com with SMTP id 2adb3069b0e04-507adc3381cso7524291e87.3 for ; Tue, 07 Nov 2023 07:56:02 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baylibre-com.20230601.gappssmtp.com; s=20230601; t=1699372560; x=1699977360; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=ACrUCaQqBEjX+w74tNKKmwgzPWxiISbd4UEub/jw4AE=; b=C2NFeQp7EH1OQK/lADLTFJetuNWRVMleoZjkRxXEmu7g2L0UnQG6YFq/yugknw2kDd /7+1Bp46pGRrU7ffPD9CWKR23h+1+Aw5eJ0cE3ee43Etie+Vs8a/ucDZba4qykXkuCy3 sZWguJLKe/cZJoJjJyMkOQUiMnnH3VfAGWTE18TRyCcmCgn2cHM2t977BTCxtQelQezN OoD5AUnrIWmehR5EiTxCVF7F18c59uegVa1mCl80+pSHedKgb4y9vlHooBtVps0bCxbc H3HxZxtq53G2cgPXjIiy7aScyZ+FWevr2dhvnaxveBY1hTNEgh1UObojhBGL1/8Rrz7y Sjhg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699372560; x=1699977360; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ACrUCaQqBEjX+w74tNKKmwgzPWxiISbd4UEub/jw4AE=; b=lOwWH/DjIJ7fHRzeGLoBsfNGfIVPvG+AG5oePoA+2mxVpVpww+c9T2HHqYYh1E3Lqy XNlHAfRQQf2av9QyX+izYJdieiLBMWXCYBX8pZxF2p/wUw/3TN6BUjWPOMpvQY9NCHhw taks5g5SsS5XX7RoCWvNwqlJ8oD7qM/lh92hCovYI4H8WjOovDZLZZflOPypgabVWeqM cyA0P8X6ca+YXdG+ttqXcEOPz60KJwklQ/qUiU/qnrzFtZMC2HUcJnzZCGgi0BZ4gZG2 eZI/7gh/eTtMOMO4IB+sn9eWz1+vn3s/tHGfP5OsMTOEENmXQPT5bB4ISqSND0TEU3eT 2CaQ== X-Gm-Message-State: AOJu0Yw0V2N/3829eWuDYGjzuXfO2RVVehBI0lH4OhYjDaGy3ysxH2UA hyvpGzs1vemjW8LL1CaXKBYimw== X-Google-Smtp-Source: AGHT+IFtnNVD3peOHdiB3JRA/63VF81K1FFRt73WLadTgQzDX+kpZ57v4JiURazsLgqDUjJJepeFpQ== X-Received: by 2002:a19:4f42:0:b0:507:a40e:d8bf with SMTP id a2-20020a194f42000000b00507a40ed8bfmr25324337lfk.7.1699372559990; Tue, 07 Nov 2023 07:55:59 -0800 (PST) Received: from arnold.baylibre (laubervilliers-658-1-213-31.w90-63.abo.wanadoo.fr. 
From: Corentin Labbe
To: davem@davemloft.net, heiko@sntech.de, herbert@gondor.apana.org.au, krzysztof.kozlowski+dt@linaro.org, mturquette@baylibre.com, p.zabel@pengutronix.de, robh+dt@kernel.org, sboyd@kernel.org
Cc: ricardo@pardini.net, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-clk@vger.kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rockchip@lists.infradead.org, Corentin Labbe
Subject: [PATCH 6/6] crypto: rockchip: add rk3588 driver
Date: Tue, 7 Nov 2023 15:55:32 +0000
Message-Id: <20231107155532.3747113-7-clabbe@baylibre.com>
In-Reply-To: <20231107155532.3747113-1-clabbe@baylibre.com>
References: <20231107155532.3747113-1-clabbe@baylibre.com>

The RK3588 has a new crypto IP; this patch adds basic support for it.
Only hashes and ciphers are handled for the moment.
Signed-off-by: Corentin Labbe
---
 drivers/crypto/Kconfig                        |  29 +
 drivers/crypto/rockchip/Makefile              |   5 +
 drivers/crypto/rockchip/rk2_crypto.c          | 739 ++++++++++++++++++
 drivers/crypto/rockchip/rk2_crypto.h          | 246 ++++++
 drivers/crypto/rockchip/rk2_crypto_ahash.c    | 344 ++++++++
 drivers/crypto/rockchip/rk2_crypto_skcipher.c | 576 ++++++++++++++
 6 files changed, 1939 insertions(+)
 create mode 100644 drivers/crypto/rockchip/rk2_crypto.c
 create mode 100644 drivers/crypto/rockchip/rk2_crypto.h
 create mode 100644 drivers/crypto/rockchip/rk2_crypto_ahash.c
 create mode 100644 drivers/crypto/rockchip/rk2_crypto_skcipher.c

diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index 79c3bb9c99c3..b6a2027b1f9a 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -660,6 +660,35 @@ config CRYPTO_DEV_ROCKCHIP_DEBUG
 	  the number of requests per algorithm and other internal stats.
 
 
+config CRYPTO_DEV_ROCKCHIP2
+	tristate "Rockchip's cryptographic offloader V2"
+	depends on OF && ARCH_ROCKCHIP
+	depends on PM
+	select CRYPTO_ECB
+	select CRYPTO_CBC
+	select CRYPTO_AES
+	select CRYPTO_MD5
+	select CRYPTO_SHA1
+	select CRYPTO_SHA256
+	select CRYPTO_SHA512
+	select CRYPTO_SM3_GENERIC
+	select CRYPTO_HASH
+	select CRYPTO_SKCIPHER
+	select CRYPTO_ENGINE
+
+	help
+	  This driver interfaces with the hardware crypto offloader present
+	  on RK3566, RK3568 and RK3588.
+
+config CRYPTO_DEV_ROCKCHIP2_DEBUG
+	bool "Enable Rockchip V2 crypto stats"
+	depends on CRYPTO_DEV_ROCKCHIP2
+	depends on DEBUG_FS
+	help
+	  Say y to enable Rockchip crypto debug stats.
+	  This will create /sys/kernel/debug/rk2_crypto/stats for displaying
+	  the number of requests per algorithm and other internal stats.
+
 config CRYPTO_DEV_ZYNQMP_AES
 	tristate "Support for Xilinx ZynqMP AES hw accelerator"
 	depends on ZYNQMP_FIRMWARE || COMPILE_TEST
diff --git a/drivers/crypto/rockchip/Makefile b/drivers/crypto/rockchip/Makefile
index 785277aca71e..452a12ff6538 100644
--- a/drivers/crypto/rockchip/Makefile
+++ b/drivers/crypto/rockchip/Makefile
@@ -3,3 +3,8 @@ obj-$(CONFIG_CRYPTO_DEV_ROCKCHIP) += rk_crypto.o
 rk_crypto-objs := rk3288_crypto.o \
 		  rk3288_crypto_skcipher.o \
 		  rk3288_crypto_ahash.o
+
+obj-$(CONFIG_CRYPTO_DEV_ROCKCHIP2) += rk_crypto2.o
+rk_crypto2-objs := rk2_crypto.o \
+		   rk2_crypto_skcipher.o \
+		   rk2_crypto_ahash.o
diff --git a/drivers/crypto/rockchip/rk2_crypto.c b/drivers/crypto/rockchip/rk2_crypto.c
new file mode 100644
index 000000000000..f3b8d27924da
--- /dev/null
+++ b/drivers/crypto/rockchip/rk2_crypto.c
@@ -0,0 +1,739 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * hardware cryptographic offloader for RK3568/RK3588 SoC
+ *
+ * Copyright (c) 2022-2023, Corentin Labbe
+ */
+
+#include "rk2_crypto.h"
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+static struct rockchip_ip rocklist = {
+	.dev_list = LIST_HEAD_INIT(rocklist.dev_list),
+	.lock = __SPIN_LOCK_UNLOCKED(rocklist.lock),
+};
+
+struct rk2_crypto_dev *get_rk2_crypto(void)
+{
+	struct rk2_crypto_dev *first;
+
+	spin_lock(&rocklist.lock);
+	first = list_first_entry_or_null(&rocklist.dev_list,
+					 struct rk2_crypto_dev, list);
+	list_rotate_left(&rocklist.dev_list);
+	spin_unlock(&rocklist.lock);
+	return first;
+}
+
+static const struct rk2_variant rk3568_variant = {
+	.num_clks = 3,
+};
+
+static const struct rk2_variant rk3588_variant = {
+	.num_clks = 3,
+};
+
+static int rk2_crypto_get_clks(struct rk2_crypto_dev *dev)
+{
+	int i, j, err;
+	unsigned long cr;
+
+	dev->num_clks = devm_clk_bulk_get_all(dev->dev, &dev->clks);
+	if (dev->num_clks < dev->variant->num_clks) {
+		dev_err(dev->dev, "Missing clocks, got %d instead of %d\n",
+			dev->num_clks,
+			dev->variant->num_clks);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < dev->num_clks; i++) {
+		cr = clk_get_rate(dev->clks[i].clk);
+		for (j = 0; j < ARRAY_SIZE(dev->variant->rkclks); j++) {
+			if (dev->variant->rkclks[j].max == 0)
+				continue;
+			if (strcmp(dev->variant->rkclks[j].name, dev->clks[i].id))
+				continue;
+			if (cr > dev->variant->rkclks[j].max) {
+				err = clk_set_rate(dev->clks[i].clk,
+						   dev->variant->rkclks[j].max);
+				if (err)
+					dev_err(dev->dev, "Failed downclocking %s from %lu to %lu\n",
+						dev->variant->rkclks[j].name, cr,
+						dev->variant->rkclks[j].max);
+				else
+					dev_info(dev->dev, "Downclocking %s from %lu to %lu\n",
+						 dev->variant->rkclks[j].name, cr,
+						 dev->variant->rkclks[j].max);
+			}
+		}
+	}
+	return 0;
+}
+
+static int rk2_crypto_enable_clk(struct rk2_crypto_dev *dev)
+{
+	int err;
+
+	err = clk_bulk_prepare_enable(dev->num_clks, dev->clks);
+	if (err)
+		dev_err(dev->dev, "Could not enable clocks\n");
+
+	return err;
+}
+
+static void rk2_crypto_disable_clk(struct rk2_crypto_dev *dev)
+{
+	clk_bulk_disable_unprepare(dev->num_clks, dev->clks);
+}
+
+/*
+ * Power management strategy: the device is suspended until a request
+ * is handled. To avoid suspend/resume ping-pong, autosuspend is set to 2s.
+ */
+static int rk2_crypto_pm_suspend(struct device *dev)
+{
+	struct rk2_crypto_dev *rkdev = dev_get_drvdata(dev);
+
+	rk2_crypto_disable_clk(rkdev);
+	reset_control_assert(rkdev->rst);
+
+	return 0;
+}
+
+static int rk2_crypto_pm_resume(struct device *dev)
+{
+	struct rk2_crypto_dev *rkdev = dev_get_drvdata(dev);
+	int ret;
+
+	ret = rk2_crypto_enable_clk(rkdev);
+	if (ret)
+		return ret;
+
+	reset_control_deassert(rkdev->rst);
+	return 0;
+}
+
+static const struct dev_pm_ops rk2_crypto_pm_ops = {
+	SET_RUNTIME_PM_OPS(rk2_crypto_pm_suspend, rk2_crypto_pm_resume, NULL)
+};
+
+static int rk2_crypto_pm_init(struct rk2_crypto_dev *rkdev)
+{
+	int err;
+
+	pm_runtime_use_autosuspend(rkdev->dev);
+	pm_runtime_set_autosuspend_delay(rkdev->dev, 2000);
+
+	err = pm_runtime_set_suspended(rkdev->dev);
+	if (err)
+		return err;
+	pm_runtime_enable(rkdev->dev);
+	return err;
+}
+
+static void rk2_crypto_pm_exit(struct rk2_crypto_dev *rkdev)
+{
+	pm_runtime_disable(rkdev->dev);
+}
+
+static irqreturn_t rk2_crypto_irq_handle(int irq, void *dev_id)
+{
+	struct rk2_crypto_dev *rkc = platform_get_drvdata(dev_id);
+	u32 v;
+
+	v = readl(rkc->reg + RK2_CRYPTO_DMA_INT_ST);
+	writel(v, rkc->reg + RK2_CRYPTO_DMA_INT_ST);
+
+	rkc->status = 1;
+	if (v & 0xF8) {
+		dev_warn(rkc->dev, "DMA Error\n");
+		rkc->status = 0;
+	}
+	complete(&rkc->complete);
+
+	return IRQ_HANDLED;
+}
+
+static struct rk2_crypto_template rk2_crypto_algs[] = {
+	{
+		.type = CRYPTO_ALG_TYPE_SKCIPHER,
+		.rk2_mode = RK2_CRYPTO_AES_ECB,
+		.alg.skcipher.base = {
+			.base.cra_name = "ecb(aes)",
+			.base.cra_driver_name = "ecb-aes-rk2",
+			.base.cra_priority = 300,
+			.base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK,
+			.base.cra_blocksize = AES_BLOCK_SIZE,
+			.base.cra_ctxsize = sizeof(struct rk2_cipher_ctx),
+			.base.cra_alignmask = 0x0f,
+			.base.cra_module = THIS_MODULE,
+
+			.init = rk2_cipher_tfm_init,
+			.exit = rk2_cipher_tfm_exit,
+			.min_keysize = AES_MIN_KEY_SIZE,
+ .max_keysize =3D AES_MAX_KEY_SIZE, + .setkey =3D rk2_aes_setkey, + .encrypt =3D rk2_skcipher_encrypt, + .decrypt =3D rk2_skcipher_decrypt, + }, + .alg.skcipher.op =3D { + .do_one_request =3D rk2_cipher_run, + }, + }, + { + .type =3D CRYPTO_ALG_TYPE_SKCIPHER, + .rk2_mode =3D RK2_CRYPTO_AES_CBC, + .alg.skcipher.base =3D { + .base.cra_name =3D "cbc(aes)", + .base.cra_driver_name =3D "cbc-aes-rk2", + .base.cra_priority =3D 300, + .base.cra_flags =3D CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK, + .base.cra_blocksize =3D AES_BLOCK_SIZE, + .base.cra_ctxsize =3D sizeof(struct rk2_cipher_ctx), + .base.cra_alignmask =3D 0x0f, + .base.cra_module =3D THIS_MODULE, + + .init =3D rk2_cipher_tfm_init, + .exit =3D rk2_cipher_tfm_exit, + .min_keysize =3D AES_MIN_KEY_SIZE, + .max_keysize =3D AES_MAX_KEY_SIZE, + .ivsize =3D AES_BLOCK_SIZE, + .setkey =3D rk2_aes_setkey, + .encrypt =3D rk2_skcipher_encrypt, + .decrypt =3D rk2_skcipher_decrypt, + }, + .alg.skcipher.op =3D { + .do_one_request =3D rk2_cipher_run, + }, + }, + { + .type =3D CRYPTO_ALG_TYPE_SKCIPHER, + .rk2_mode =3D RK2_CRYPTO_AES_XTS, + .is_xts =3D true, + .alg.skcipher.base =3D { + .base.cra_name =3D "xts(aes)", + .base.cra_driver_name =3D "xts-aes-rk2", + .base.cra_priority =3D 300, + .base.cra_flags =3D CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK, + .base.cra_blocksize =3D AES_BLOCK_SIZE, + .base.cra_ctxsize =3D sizeof(struct rk2_cipher_ctx), + .base.cra_alignmask =3D 0x0f, + .base.cra_module =3D THIS_MODULE, + + .init =3D rk2_cipher_tfm_init, + .exit =3D rk2_cipher_tfm_exit, + .min_keysize =3D AES_MIN_KEY_SIZE * 2, + .max_keysize =3D AES_MAX_KEY_SIZE * 2, + .ivsize =3D AES_BLOCK_SIZE, + .setkey =3D rk2_aes_xts_setkey, + .encrypt =3D rk2_skcipher_encrypt, + .decrypt =3D rk2_skcipher_decrypt, + }, + .alg.skcipher.op =3D { + .do_one_request =3D rk2_cipher_run, + }, + }, + { + .type =3D CRYPTO_ALG_TYPE_AHASH, + .rk2_mode =3D RK2_CRYPTO_MD5, + .alg.hash.base =3D { + .init =3D rk2_ahash_init, + .update =3D rk2_ahash_update, 
+ .final =3D rk2_ahash_final, + .finup =3D rk2_ahash_finup, + .export =3D rk2_ahash_export, + .import =3D rk2_ahash_import, + .digest =3D rk2_ahash_digest, + .init_tfm =3D rk2_hash_init_tfm, + .exit_tfm =3D rk2_hash_exit_tfm, + .halg =3D { + .digestsize =3D MD5_DIGEST_SIZE, + .statesize =3D sizeof(struct md5_state), + .base =3D { + .cra_name =3D "md5", + .cra_driver_name =3D "rk2-md5", + .cra_priority =3D 300, + .cra_flags =3D CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize =3D MD5_HMAC_BLOCK_SIZE, + .cra_ctxsize =3D sizeof(struct rk2_ahash_ctx), + .cra_module =3D THIS_MODULE, + } + } + }, + .alg.hash.op =3D { + .do_one_request =3D rk2_hash_run, + }, + + }, + { + .type =3D CRYPTO_ALG_TYPE_AHASH, + .rk2_mode =3D RK2_CRYPTO_SHA1, + .alg.hash.base =3D { + .init =3D rk2_ahash_init, + .update =3D rk2_ahash_update, + .final =3D rk2_ahash_final, + .finup =3D rk2_ahash_finup, + .export =3D rk2_ahash_export, + .import =3D rk2_ahash_import, + .digest =3D rk2_ahash_digest, + .init_tfm =3D rk2_hash_init_tfm, + .exit_tfm =3D rk2_hash_exit_tfm, + .halg =3D { + .digestsize =3D SHA1_DIGEST_SIZE, + .statesize =3D sizeof(struct sha1_state), + .base =3D { + .cra_name =3D "sha1", + .cra_driver_name =3D "rk2-sha1", + .cra_priority =3D 300, + .cra_flags =3D CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize =3D SHA1_BLOCK_SIZE, + .cra_ctxsize =3D sizeof(struct rk2_ahash_ctx), + .cra_module =3D THIS_MODULE, + } + } + }, + .alg.hash.op =3D { + .do_one_request =3D rk2_hash_run, + }, + }, + { + .type =3D CRYPTO_ALG_TYPE_AHASH, + .rk2_mode =3D RK2_CRYPTO_SHA256, + .alg.hash.base =3D { + .init =3D rk2_ahash_init, + .update =3D rk2_ahash_update, + .final =3D rk2_ahash_final, + .finup =3D rk2_ahash_finup, + .export =3D rk2_ahash_export, + .import =3D rk2_ahash_import, + .digest =3D rk2_ahash_digest, + .init_tfm =3D rk2_hash_init_tfm, + .exit_tfm =3D rk2_hash_exit_tfm, + .halg =3D { + .digestsize =3D SHA256_DIGEST_SIZE, + .statesize =3D sizeof(struct sha256_state), + 
.base =3D { + .cra_name =3D "sha256", + .cra_driver_name =3D "rk2-sha256", + .cra_priority =3D 300, + .cra_flags =3D CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize =3D SHA256_BLOCK_SIZE, + .cra_ctxsize =3D sizeof(struct rk2_ahash_ctx), + .cra_module =3D THIS_MODULE, + } + } + }, + .alg.hash.op =3D { + .do_one_request =3D rk2_hash_run, + }, + }, + { + .type =3D CRYPTO_ALG_TYPE_AHASH, + .rk2_mode =3D RK2_CRYPTO_SHA384, + .alg.hash.base =3D { + .init =3D rk2_ahash_init, + .update =3D rk2_ahash_update, + .final =3D rk2_ahash_final, + .finup =3D rk2_ahash_finup, + .export =3D rk2_ahash_export, + .import =3D rk2_ahash_import, + .digest =3D rk2_ahash_digest, + .init_tfm =3D rk2_hash_init_tfm, + .exit_tfm =3D rk2_hash_exit_tfm, + .halg =3D { + .digestsize =3D SHA384_DIGEST_SIZE, + .statesize =3D sizeof(struct sha512_state), + .base =3D { + .cra_name =3D "sha384", + .cra_driver_name =3D "rk2-sha384", + .cra_priority =3D 300, + .cra_flags =3D CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize =3D SHA384_BLOCK_SIZE, + .cra_ctxsize =3D sizeof(struct rk2_ahash_ctx), + .cra_module =3D THIS_MODULE, + } + } + }, + .alg.hash.op =3D { + .do_one_request =3D rk2_hash_run, + }, + }, + { + .type =3D CRYPTO_ALG_TYPE_AHASH, + .rk2_mode =3D RK2_CRYPTO_SHA512, + .alg.hash.base =3D { + .init =3D rk2_ahash_init, + .update =3D rk2_ahash_update, + .final =3D rk2_ahash_final, + .finup =3D rk2_ahash_finup, + .export =3D rk2_ahash_export, + .import =3D rk2_ahash_import, + .digest =3D rk2_ahash_digest, + .init_tfm =3D rk2_hash_init_tfm, + .exit_tfm =3D rk2_hash_exit_tfm, + .halg =3D { + .digestsize =3D SHA512_DIGEST_SIZE, + .statesize =3D sizeof(struct sha512_state), + .base =3D { + .cra_name =3D "sha512", + .cra_driver_name =3D "rk2-sha512", + .cra_priority =3D 300, + .cra_flags =3D CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize =3D SHA512_BLOCK_SIZE, + .cra_ctxsize =3D sizeof(struct rk2_ahash_ctx), + .cra_module =3D THIS_MODULE, + } + } + }, + 
.alg.hash.op =3D { + .do_one_request =3D rk2_hash_run, + }, + }, + { + .type =3D CRYPTO_ALG_TYPE_AHASH, + .rk2_mode =3D RK2_CRYPTO_SM3, + .alg.hash.base =3D { + .init =3D rk2_ahash_init, + .update =3D rk2_ahash_update, + .final =3D rk2_ahash_final, + .finup =3D rk2_ahash_finup, + .export =3D rk2_ahash_export, + .import =3D rk2_ahash_import, + .digest =3D rk2_ahash_digest, + .init_tfm =3D rk2_hash_init_tfm, + .exit_tfm =3D rk2_hash_exit_tfm, + .halg =3D { + .digestsize =3D SM3_DIGEST_SIZE, + .statesize =3D sizeof(struct sm3_state), + .base =3D { + .cra_name =3D "sm3", + .cra_driver_name =3D "rk2-sm3", + .cra_priority =3D 300, + .cra_flags =3D CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize =3D SM3_BLOCK_SIZE, + .cra_ctxsize =3D sizeof(struct rk2_ahash_ctx), + .cra_module =3D THIS_MODULE, + } + } + }, + .alg.hash.op =3D { + .do_one_request =3D rk2_hash_run, + }, + }, +}; + +#ifdef CONFIG_CRYPTO_DEV_ROCKCHIP2_DEBUG +static int rk2_crypto_debugfs_stats_show(struct seq_file *seq, void *v) +{ + struct rk2_crypto_dev *rkc; + unsigned int i; + + spin_lock(&rocklist.lock); + list_for_each_entry(rkc, &rocklist.dev_list, list) { + seq_printf(seq, "%s %s requests: %lu\n", + dev_driver_string(rkc->dev), dev_name(rkc->dev), + rkc->nreq); + } + spin_unlock(&rocklist.lock); + + for (i =3D 0; i < ARRAY_SIZE(rk2_crypto_algs); i++) { + if (!rk2_crypto_algs[i].dev) + continue; + switch (rk2_crypto_algs[i].type) { + case CRYPTO_ALG_TYPE_SKCIPHER: + seq_printf(seq, "%s %s reqs=3D%lu fallback=3D%lu\n", + rk2_crypto_algs[i].alg.skcipher.base.base.cra_driver_name, + rk2_crypto_algs[i].alg.skcipher.base.base.cra_name, + rk2_crypto_algs[i].stat_req, rk2_crypto_algs[i].stat_fb); + seq_printf(seq, "\tfallback due to length: %lu\n", + rk2_crypto_algs[i].stat_fb_len); + seq_printf(seq, "\tfallback due to alignment: %lu\n", + rk2_crypto_algs[i].stat_fb_align); + seq_printf(seq, "\tfallback due to SGs: %lu\n", + rk2_crypto_algs[i].stat_fb_sgdiff); + break; + case 
CRYPTO_ALG_TYPE_AHASH: + seq_printf(seq, "%s %s reqs=3D%lu fallback=3D%lu\n", + rk2_crypto_algs[i].alg.hash.base.halg.base.cra_driver_name, + rk2_crypto_algs[i].alg.hash.base.halg.base.cra_name, + rk2_crypto_algs[i].stat_req, rk2_crypto_algs[i].stat_fb); + break; + } + } + return 0; +} + +static int rk2_crypto_debugfs_info_show(struct seq_file *seq, void *d) +{ + struct rk2_crypto_dev *rkc; + u32 v; + + spin_lock(&rocklist.lock); + list_for_each_entry(rkc, &rocklist.dev_list, list) { + v =3D readl(rkc->reg + RK2_CRYPTO_CLK_CTL); + seq_printf(seq, "CRYPTO_CLK_CTL %x\n", v); + v =3D readl(rkc->reg + RK2_CRYPTO_RST_CTL); + seq_printf(seq, "CRYPTO_RST_CTL %x\n", v); + + v =3D readl(rkc->reg + CRYPTO_AES_VERSION); + seq_printf(seq, "CRYPTO_AES_VERSION %x\n", v); + if (v & BIT(17)) + seq_puts(seq, "AES 192\n"); + + v =3D readl(rkc->reg + CRYPTO_DES_VERSION); + seq_printf(seq, "CRYPTO_DES_VERSION %x\n", v); + v =3D readl(rkc->reg + CRYPTO_SM4_VERSION); + seq_printf(seq, "CRYPTO_SM4_VERSION %x\n", v); + v =3D readl(rkc->reg + CRYPTO_HASH_VERSION); + seq_printf(seq, "CRYPTO_HASH_VERSION %x\n", v); + v =3D readl(rkc->reg + CRYPTO_HMAC_VERSION); + seq_printf(seq, "CRYPTO_HMAC_VERSION %x\n", v); + v =3D readl(rkc->reg + CRYPTO_RNG_VERSION); + seq_printf(seq, "CRYPTO_RNG_VERSION %x\n", v); + v =3D readl(rkc->reg + CRYPTO_PKA_VERSION); + seq_printf(seq, "CRYPTO_PKA_VERSION %x\n", v); + v =3D readl(rkc->reg + CRYPTO_CRYPTO_VERSION); + seq_printf(seq, "CRYPTO_CRYPTO_VERSION %x\n", v); + } + spin_unlock(&rocklist.lock); + + return 0; +} + +DEFINE_SHOW_ATTRIBUTE(rk2_crypto_debugfs_stats); +DEFINE_SHOW_ATTRIBUTE(rk2_crypto_debugfs_info); + +#endif + +static void register_debugfs(struct rk2_crypto_dev *crypto_dev) +{ +#ifdef CONFIG_CRYPTO_DEV_ROCKCHIP2_DEBUG + /* Ignore error of debugfs */ + rocklist.dbgfs_dir =3D debugfs_create_dir("rk2_crypto", NULL); + rocklist.dbgfs_stats =3D debugfs_create_file("stats", 0440, + rocklist.dbgfs_dir, + &rocklist, + &rk2_crypto_debugfs_stats_fops); + 
rocklist.dbgfs_stats =3D debugfs_create_file("info", 0440, + rocklist.dbgfs_dir, + &rocklist, + &rk2_crypto_debugfs_info_fops); +#endif +} + +static int rk2_crypto_register(struct rk2_crypto_dev *rkc) +{ + unsigned int i, k; + int err =3D 0; + + for (i =3D 0; i < ARRAY_SIZE(rk2_crypto_algs); i++) { + rk2_crypto_algs[i].dev =3D rkc; + switch (rk2_crypto_algs[i].type) { + case CRYPTO_ALG_TYPE_SKCIPHER: + dev_info(rkc->dev, "Register %s as %s\n", + rk2_crypto_algs[i].alg.skcipher.base.base.cra_name, + rk2_crypto_algs[i].alg.skcipher.base.base.cra_driver_name); + err =3D crypto_engine_register_skcipher(&rk2_crypto_algs[i].alg.skciphe= r); + break; + case CRYPTO_ALG_TYPE_AHASH: + dev_info(rkc->dev, "Register %s as %s %d\n", + rk2_crypto_algs[i].alg.hash.base.halg.base.cra_name, + rk2_crypto_algs[i].alg.hash.base.halg.base.cra_driver_name, i); + err =3D crypto_engine_register_ahash(&rk2_crypto_algs[i].alg.hash); + break; + default: + dev_err(rkc->dev, "unknown algorithm\n"); + } + if (err) + goto err_cipher_algs; + } + return 0; + +err_cipher_algs: + for (k =3D 0; k < i; k++) { + if (rk2_crypto_algs[k].type =3D=3D CRYPTO_ALG_TYPE_SKCIPHER) + crypto_engine_unregister_skcipher(&rk2_crypto_algs[k].alg.skcipher); + else + crypto_engine_unregister_ahash(&rk2_crypto_algs[k].alg.hash); + } + return err; +} + +static void rk2_crypto_unregister(void) +{ + unsigned int i; + + for (i =3D 0; i < ARRAY_SIZE(rk2_crypto_algs); i++) { + if (rk2_crypto_algs[i].type =3D=3D CRYPTO_ALG_TYPE_SKCIPHER) + crypto_engine_unregister_skcipher(&rk2_crypto_algs[i].alg.skcipher); + else + crypto_engine_unregister_ahash(&rk2_crypto_algs[i].alg.hash); + } +} + +static const struct of_device_id crypto_of_id_table[] =3D { + { .compatible =3D "rockchip,rk3568-crypto", + .data =3D &rk3568_variant, + }, + { .compatible =3D "rockchip,rk3588-crypto", + .data =3D &rk3588_variant, + }, + {} +}; +MODULE_DEVICE_TABLE(of, crypto_of_id_table); + +static int rk2_crypto_probe(struct platform_device *pdev) +{ + struct 
device *dev =3D &pdev->dev; + struct rk2_crypto_dev *rkc, *first; + int err =3D 0; + + rkc =3D devm_kzalloc(&pdev->dev, sizeof(*rkc), GFP_KERNEL); + if (!rkc) { + err =3D -ENOMEM; + goto err_crypto; + } + + rkc->dev =3D &pdev->dev; + platform_set_drvdata(pdev, rkc); + + rkc->variant =3D of_device_get_match_data(&pdev->dev); + if (!rkc->variant) { + dev_err(&pdev->dev, "Missing variant\n"); + return -EINVAL; + } + + rkc->rst =3D devm_reset_control_array_get_exclusive(dev); + if (IS_ERR(rkc->rst)) { + err =3D PTR_ERR(rkc->rst); + dev_err(&pdev->dev, "Fail to get resets err=3D%d\n", err); + goto err_crypto; + } + + rkc->tl =3D dma_alloc_coherent(rkc->dev, + sizeof(struct rk2_crypto_lli) * MAX_LLI, + &rkc->t_phy, GFP_KERNEL); + if (!rkc->tl) { + dev_err(rkc->dev, "Cannot get DMA memory for task\n"); + err =3D -ENOMEM; + goto err_crypto; + } + + reset_control_assert(rkc->rst); + usleep_range(10, 20); + reset_control_deassert(rkc->rst); + + rkc->reg =3D devm_platform_ioremap_resource(pdev, 0); + if (IS_ERR(rkc->reg)) { + err =3D PTR_ERR(rkc->reg); + dev_err(&pdev->dev, "Fail to get resources\n"); + goto err_crypto; + } + + err =3D rk2_crypto_get_clks(rkc); + if (err) + goto err_crypto; + + rkc->irq =3D platform_get_irq(pdev, 0); + if (rkc->irq < 0) { + dev_err(&pdev->dev, "control Interrupt is not available.\n"); + err =3D rkc->irq; + goto err_crypto; + } + + err =3D devm_request_irq(&pdev->dev, rkc->irq, + rk2_crypto_irq_handle, IRQF_SHARED, + "rk-crypto", pdev); + + if (err) { + dev_err(&pdev->dev, "irq request failed.\n"); + goto err_crypto; + } + + rkc->engine =3D crypto_engine_alloc_init(&pdev->dev, true); + crypto_engine_start(rkc->engine); + init_completion(&rkc->complete); + + err =3D rk2_crypto_pm_init(rkc); + if (err) + goto err_pm; + + err =3D pm_runtime_resume_and_get(&pdev->dev); + + spin_lock(&rocklist.lock); + first =3D list_first_entry_or_null(&rocklist.dev_list, + struct rk2_crypto_dev, list); + list_add_tail(&rkc->list, &rocklist.dev_list); + 
spin_unlock(&rocklist.lock); + + if (!first) { + dev_info(dev, "Registers crypto algos\n"); + err =3D rk2_crypto_register(rkc); + if (err) { + dev_err(dev, "Fail to register crypto algorithms"); + goto err_register_alg; + } + + register_debugfs(rkc); + } + + return 0; + +err_register_alg: + rk2_crypto_pm_exit(rkc); +err_pm: + crypto_engine_exit(rkc->engine); +err_crypto: + dev_err(dev, "Crypto Accelerator not successfully registered\n"); + return err; +} + +static int rk2_crypto_remove(struct platform_device *pdev) +{ + struct rk2_crypto_dev *rkc =3D platform_get_drvdata(pdev); + struct rk2_crypto_dev *first; + + spin_lock_bh(&rocklist.lock); + list_del(&rkc->list); + first =3D list_first_entry_or_null(&rocklist.dev_list, + struct rk2_crypto_dev, list); + spin_unlock_bh(&rocklist.lock); + + if (!first) { +#ifdef CONFIG_CRYPTO_DEV_ROCKCHIP2_DEBUG + debugfs_remove_recursive(rocklist.dbgfs_dir); +#endif + rk2_crypto_unregister(); + } + rk2_crypto_pm_exit(rkc); + crypto_engine_exit(rkc->engine); + return 0; +} + +static struct platform_driver crypto_driver =3D { + .probe =3D rk2_crypto_probe, + .remove =3D rk2_crypto_remove, + .driver =3D { + .name =3D "rk2-crypto", + .pm =3D &rk2_crypto_pm_ops, + .of_match_table =3D crypto_of_id_table, + }, +}; + +module_platform_driver(crypto_driver); + +MODULE_DESCRIPTION("Rockchip Crypto Engine cryptographic offloader"); +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Corentin Labbe "); diff --git a/drivers/crypto/rockchip/rk2_crypto.h b/drivers/crypto/rockchip= /rk2_crypto.h new file mode 100644 index 000000000000..59cd8be59f70 --- /dev/null +++ b/drivers/crypto/rockchip/rk2_crypto.h @@ -0,0 +1,246 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define RK2_CRYPTO_CLK_CTL 0x0000 +#define RK2_CRYPTO_RST_CTL 0x0004 + +#define RK2_CRYPTO_DMA_INT_EN 0x0008 +/* 
values for RK2_CRYPTO_DMA_INT_EN */ +#define RK2_CRYPTO_DMA_INT_LISTDONE BIT(0) + +#define RK2_CRYPTO_DMA_INT_ST 0x000C +/* values in RK2_CRYPTO_DMA_INT_ST are the same than in RK2_CRYPTO_DMA_INT= _EN */ + +#define RK2_CRYPTO_DMA_CTL 0x0010 +#define RK2_CRYPTO_DMA_CTL_START BIT(0) + +#define RK2_CRYPTO_DMA_LLI_ADDR 0x0014 +#define RK2_CRYPTO_DMA_ST 0x0018 +#define RK2_CRYPTO_DMA_STATE 0x001C +#define RK2_CRYPTO_DMA_LLI_RADDR 0x0020 +#define RK2_CRYPTO_DMA_SRC_RADDR 0x0024 +#define RK2_CRYPTO_DMA_DST_WADDR 0x0028 +#define RK2_CRYPTO_DMA_ITEM_ID 0x002C + +#define RK2_CRYPTO_FIFO_CTL 0x0040 + +#define RK2_CRYPTO_BC_CTL 0x0044 +#define RK2_CRYPTO_AES (0 << 8) +#define RK2_CRYPTO_MODE_ECB (0 << 4) +#define RK2_CRYPTO_MODE_CBC (1 << 4) +#define RK2_CRYPTO_XTS (6 << 4) + +#define RK2_CRYPTO_HASH_CTL 0x0048 +#define RK2_CRYPTO_HW_PAD BIT(2) +#define RK2_CRYPTO_SHA1 (0 << 4) +#define RK2_CRYPTO_MD5 (1 << 4) +#define RK2_CRYPTO_SHA224 (3 << 4) +#define RK2_CRYPTO_SHA256 (2 << 4) +#define RK2_CRYPTO_SHA384 (9 << 4) +#define RK2_CRYPTO_SHA512 (8 << 4) +#define RK2_CRYPTO_SM3 (4 << 4) + +#define RK2_CRYPTO_AES_ECB (RK2_CRYPTO_AES | RK2_CRYPTO_MODE_ECB) +#define RK2_CRYPTO_AES_CBC (RK2_CRYPTO_AES | RK2_CRYPTO_MODE_CBC) +#define RK2_CRYPTO_AES_XTS (RK2_CRYPTO_AES | RK2_CRYPTO_XTS) +#define RK2_CRYPTO_AES_CTR_MODE 3 +#define RK2_CRYPTO_AES_128BIT_key (0 << 2) +#define RK2_CRYPTO_AES_192BIT_key (1 << 2) +#define RK2_CRYPTO_AES_256BIT_key (2 << 2) + +#define RK2_CRYPTO_DEC BIT(1) +#define RK2_CRYPTO_ENABLE BIT(0) + +#define RK2_CRYPTO_CIPHER_ST 0x004C +#define RK2_CRYPTO_CIPHER_STATE 0x0050 + +#define RK2_CRYPTO_CH0_IV_0 0x0100 + +#define RK2_CRYPTO_KEY0 0x0180 +#define RK2_CRYPTO_KEY1 0x0184 +#define RK2_CRYPTO_KEY2 0x0188 +#define RK2_CRYPTO_KEY3 0x018C +#define RK2_CRYPTO_KEY4 0x0190 +#define RK2_CRYPTO_KEY5 0x0194 +#define RK2_CRYPTO_KEY6 0x0198 +#define RK2_CRYPTO_KEY7 0x019C +#define RK2_CRYPTO_CH4_KEY0 0x01c0 + +#define RK2_CRYPTO_CH0_PC_LEN_0 0x0280 + +#define 
RK2_CRYPTO_CH0_IV_LEN 0x0300 + +#define RK2_CRYPTO_HASH_DOUT_0 0x03A0 +#define RK2_CRYPTO_HASH_VALID 0x03E4 + +#define RK2_CRYPTO_TRNG_CTL 0x0400 +#define RK2_CRYPTO_TRNG_START BIT(0) +#define RK2_CRYPTO_TRNG_ENABLE BIT(1) +#define RK2_CRYPTO_TRNG_256 (0x3 << 4) +#define RK2_CRYPTO_TRNG_SAMPLE_CNT 0x0404 +#define RK2_CRYPTO_TRNG_DOUT 0x0410 + +#define CRYPTO_AES_VERSION 0x0680 +#define CRYPTO_DES_VERSION 0x0684 +#define CRYPTO_SM4_VERSION 0x0688 +#define CRYPTO_HASH_VERSION 0x068C +#define CRYPTO_HMAC_VERSION 0x0690 +#define CRYPTO_RNG_VERSION 0x0694 +#define CRYPTO_PKA_VERSION 0x0698 +#define CRYPTO_CRYPTO_VERSION 0x06F0 + +#define RK2_LLI_DMA_CTRL_SRC_INT BIT(10) +#define RK2_LLI_DMA_CTRL_DST_INT BIT(9) +#define RK2_LLI_DMA_CTRL_LIST_INT BIT(8) +#define RK2_LLI_DMA_CTRL_LAST BIT(0) + +#define RK2_LLI_STRING_LAST BIT(2) +#define RK2_LLI_STRING_FIRST BIT(1) +#define RK2_LLI_CIPHER_START BIT(0) + +#define RK2_MAX_CLKS 4 + +#define MAX_LLI 20 + +struct rk2_crypto_lli { + __le32 src_addr; + __le32 src_len; + __le32 dst_addr; + __le32 dst_len; + __le32 user; + __le32 iv; + __le32 dma_ctrl; + __le32 next; +}; + +/* + * struct rockchip_ip - struct for managing a list of RK crypto instance + * @dev_list: Used for doing a list of rk2_crypto_dev + * @lock: Control access to dev_list + * @dbgfs_dir: Debugfs dentry for statistic directory + * @dbgfs_stats: Debugfs dentry for statistic counters + */ +struct rockchip_ip { + struct list_head dev_list; + spinlock_t lock; /* Control access to dev_list */ + struct dentry *dbgfs_dir; + struct dentry *dbgfs_stats; +}; + +struct rk2_clks { + const char *name; + unsigned long max; +}; + +struct rk2_variant { + int num_clks; + struct rk2_clks rkclks[RK2_MAX_CLKS]; +}; + +struct rk2_crypto_dev { + struct list_head list; + struct device *dev; + struct clk_bulk_data *clks; + int num_clks; + struct reset_control *rst; + void __iomem *reg; + int irq; + const struct rk2_variant *variant; + unsigned long nreq; + struct crypto_engine *engine; + 
struct completion complete;
+	int status;
+	struct rk2_crypto_lli *tl;
+	dma_addr_t t_phy;
+};
+
+/* the private variable of hash */
+struct rk2_ahash_ctx {
+	/* for fallback */
+	struct crypto_ahash *fallback_tfm;
+};
+
+/* the private variable of hash for fallback */
+struct rk2_ahash_rctx {
+	struct rk2_crypto_dev *dev;
+	struct ahash_request fallback_req;
+	u32 mode;
+	int nrsgs;
+};
+
+/* the private variable of cipher */
+struct rk2_cipher_ctx {
+	unsigned int keylen;
+	u8 key[AES_MAX_KEY_SIZE * 2];
+	u8 iv[AES_BLOCK_SIZE];
+	struct crypto_skcipher *fallback_tfm;
+};
+
+struct rk2_cipher_rctx {
+	struct rk2_crypto_dev *dev;
+	u8 backup_iv[AES_BLOCK_SIZE];
+	u32 mode;
+	struct skcipher_request fallback_req; // keep at the end
+};
+
+struct rk2_crypto_template {
+	u32 type;
+	u32 rk2_mode;
+	bool is_xts;
+	struct rk2_crypto_dev *dev;
+	union {
+		struct skcipher_engine_alg skcipher;
+		struct ahash_engine_alg hash;
+	} alg;
+	unsigned long stat_req;
+	unsigned long stat_fb;
+	unsigned long stat_fb_len;
+	unsigned long stat_fb_sglen;
+	unsigned long stat_fb_align;
+	unsigned long stat_fb_sgdiff;
+};
+
+struct rk2_crypto_dev *get_rk2_crypto(void);
+int rk2_cipher_run(struct crypto_engine *engine, void *async_req);
+int rk2_hash_run(struct crypto_engine *engine, void *breq);
+
+int rk2_cipher_tfm_init(struct crypto_skcipher *tfm);
+void rk2_cipher_tfm_exit(struct crypto_skcipher *tfm);
+int rk2_aes_setkey(struct crypto_skcipher *cipher, const u8 *key,
+		   unsigned int keylen);
+int rk2_aes_xts_setkey(struct crypto_skcipher *cipher, const u8 *key,
+		       unsigned int keylen);
+int rk2_skcipher_encrypt(struct skcipher_request *req);
+int rk2_skcipher_decrypt(struct skcipher_request *req);
+int rk2_aes_ecb_encrypt(struct skcipher_request *req);
+int rk2_aes_ecb_decrypt(struct skcipher_request *req);
+int rk2_aes_cbc_encrypt(struct skcipher_request *req);
+int rk2_aes_cbc_decrypt(struct skcipher_request *req);
+
+int rk2_ahash_init(struct ahash_request *req);
+int rk2_ahash_update(struct ahash_request *req);
+int rk2_ahash_final(struct ahash_request *req);
+int rk2_ahash_finup(struct ahash_request *req);
+int rk2_ahash_import(struct ahash_request *req, const void *in);
+int rk2_ahash_export(struct ahash_request *req, void *out);
+int rk2_ahash_digest(struct ahash_request *req);
+int rk2_hash_init_tfm(struct crypto_ahash *tfm);
+void rk2_hash_exit_tfm(struct crypto_ahash *tfm);
diff --git a/drivers/crypto/rockchip/rk2_crypto_ahash.c b/drivers/crypto/rockchip/rk2_crypto_ahash.c
new file mode 100644
index 000000000000..75b8d9893447
--- /dev/null
+++ b/drivers/crypto/rockchip/rk2_crypto_ahash.c
@@ -0,0 +1,344 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Crypto offloader support for Rockchip RK3568/RK3588
+ *
+ * Copyright (c) 2022-2023 Corentin Labbe
+ */
+#include
+#include
+#include "rk2_crypto.h"
+
+static bool rk2_ahash_need_fallback(struct ahash_request *areq)
+{
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
+	struct ahash_alg *alg = crypto_ahash_alg(tfm);
+	struct rk2_crypto_template *algt = container_of(alg, struct rk2_crypto_template, alg.hash.base);
+	struct scatterlist *sg;
+
+	sg = areq->src;
+	while (sg) {
+		if (!IS_ALIGNED(sg->offset, sizeof(u32))) {
+			algt->stat_fb_align++;
+			return true;
+		}
+		if (sg->length % 4) {
+			algt->stat_fb_sglen++;
+			return true;
+		}
+		sg = sg_next(sg);
+	}
+	return false;
+}
+
+static int rk2_ahash_digest_fb(struct ahash_request *areq)
+{
+	struct rk2_ahash_rctx *rctx = ahash_request_ctx(areq);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
+	struct rk2_ahash_ctx *tfmctx = crypto_ahash_ctx(tfm);
+	struct ahash_alg *alg = crypto_ahash_alg(tfm);
+	struct rk2_crypto_template *algt = container_of(alg, struct rk2_crypto_template, alg.hash.base);
+
+	algt->stat_fb++;
+
+	ahash_request_set_tfm(&rctx->fallback_req, tfmctx->fallback_tfm);
+	rctx->fallback_req.base.flags = areq->base.flags &
+					CRYPTO_TFM_REQ_MAY_SLEEP;
+
+	rctx->fallback_req.nbytes = areq->nbytes;
+	rctx->fallback_req.src = areq->src;
+	rctx->fallback_req.result = areq->result;
+
+	return crypto_ahash_digest(&rctx->fallback_req);
+}
+
+static int zero_message_process(struct ahash_request *req)
+{
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ahash_alg *alg = crypto_ahash_alg(tfm);
+	struct rk2_crypto_template *algt = container_of(alg, struct rk2_crypto_template, alg.hash.base);
+	int digestsize = crypto_ahash_digestsize(tfm);
+
+	switch (algt->rk2_mode) {
+	case RK2_CRYPTO_SHA1:
+		memcpy(req->result, sha1_zero_message_hash, digestsize);
+		break;
+	case RK2_CRYPTO_SHA256:
+		memcpy(req->result, sha256_zero_message_hash, digestsize);
+		break;
+	case RK2_CRYPTO_SHA384:
+		memcpy(req->result, sha384_zero_message_hash, digestsize);
+		break;
+	case RK2_CRYPTO_SHA512:
+		memcpy(req->result, sha512_zero_message_hash, digestsize);
+		break;
+	case RK2_CRYPTO_MD5:
+		memcpy(req->result, md5_zero_message_hash, digestsize);
+		break;
+	case RK2_CRYPTO_SM3:
+		memcpy(req->result, sm3_zero_message_hash, digestsize);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int rk2_ahash_init(struct ahash_request *req)
+{
+	struct rk2_ahash_rctx *rctx = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct rk2_ahash_ctx *ctx = crypto_ahash_ctx(tfm);
+
+	ahash_request_set_tfm(&rctx->fallback_req, ctx->fallback_tfm);
+	rctx->fallback_req.base.flags = req->base.flags &
+					CRYPTO_TFM_REQ_MAY_SLEEP;
+
+	return crypto_ahash_init(&rctx->fallback_req);
+}
+
+int rk2_ahash_update(struct ahash_request *req)
+{
+	struct rk2_ahash_rctx *rctx = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct rk2_ahash_ctx *ctx = crypto_ahash_ctx(tfm);
+
+	ahash_request_set_tfm(&rctx->fallback_req, ctx->fallback_tfm);
+	rctx->fallback_req.base.flags = req->base.flags &
+					CRYPTO_TFM_REQ_MAY_SLEEP;
+	rctx->fallback_req.nbytes = req->nbytes;
+	rctx->fallback_req.src = req->src;
+
+	return crypto_ahash_update(&rctx->fallback_req);
+}
+
+int rk2_ahash_final(struct ahash_request *req)
+{
+	struct rk2_ahash_rctx *rctx = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct rk2_ahash_ctx *ctx = crypto_ahash_ctx(tfm);
+
+	ahash_request_set_tfm(&rctx->fallback_req, ctx->fallback_tfm);
+	rctx->fallback_req.base.flags = req->base.flags &
+					CRYPTO_TFM_REQ_MAY_SLEEP;
+	rctx->fallback_req.result = req->result;
+
+	return crypto_ahash_final(&rctx->fallback_req);
+}
+
+int rk2_ahash_finup(struct ahash_request *req)
+{
+	struct rk2_ahash_rctx *rctx = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct rk2_ahash_ctx *ctx = crypto_ahash_ctx(tfm);
+
+	ahash_request_set_tfm(&rctx->fallback_req, ctx->fallback_tfm);
+	rctx->fallback_req.base.flags = req->base.flags &
+					CRYPTO_TFM_REQ_MAY_SLEEP;
+
+	rctx->fallback_req.nbytes = req->nbytes;
+	rctx->fallback_req.src = req->src;
+	rctx->fallback_req.result = req->result;
+
+	return crypto_ahash_finup(&rctx->fallback_req);
+}
+
+int rk2_ahash_import(struct ahash_request *req, const void *in)
+{
+	struct rk2_ahash_rctx *rctx = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct rk2_ahash_ctx *ctx = crypto_ahash_ctx(tfm);
+
+	ahash_request_set_tfm(&rctx->fallback_req, ctx->fallback_tfm);
+	rctx->fallback_req.base.flags = req->base.flags &
+					CRYPTO_TFM_REQ_MAY_SLEEP;
+
+	return crypto_ahash_import(&rctx->fallback_req, in);
+}
+
+int rk2_ahash_export(struct ahash_request *req, void *out)
+{
+	struct rk2_ahash_rctx *rctx = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct rk2_ahash_ctx *ctx = crypto_ahash_ctx(tfm);
+
+	ahash_request_set_tfm(&rctx->fallback_req, ctx->fallback_tfm);
+	rctx->fallback_req.base.flags = req->base.flags &
+					CRYPTO_TFM_REQ_MAY_SLEEP;
+
+	return crypto_ahash_export(&rctx->fallback_req, out);
+}
+
+int rk2_ahash_digest(struct ahash_request *req)
+{
+	struct rk2_ahash_rctx *rctx = ahash_request_ctx(req);
+	struct rk2_crypto_dev *dev;
+	struct crypto_engine *engine;
+
+	if (rk2_ahash_need_fallback(req))
+		return rk2_ahash_digest_fb(req);
+
+	if (!req->nbytes)
+		return zero_message_process(req);
+
+	dev = get_rk2_crypto();
+
+	rctx->dev = dev;
+	engine = dev->engine;
+
+	return crypto_transfer_hash_request_to_engine(engine, req);
+}
+
+static int rk2_hash_prepare(struct crypto_engine *engine, void *breq)
+{
+	struct ahash_request *areq = container_of(breq, struct ahash_request, base);
+	struct rk2_ahash_rctx *rctx = ahash_request_ctx(areq);
+	struct rk2_crypto_dev *rkc = rctx->dev;
+	int ret;
+
+	ret = dma_map_sg(rkc->dev, areq->src, sg_nents(areq->src), DMA_TO_DEVICE);
+	if (ret <= 0)
+		return -EINVAL;
+
+	rctx->nrsgs = ret;
+
+	return 0;
+}
+
+static void rk2_hash_unprepare(struct crypto_engine *engine, void *breq)
+{
+	struct ahash_request *areq = container_of(breq, struct ahash_request, base);
+	struct rk2_ahash_rctx *rctx = ahash_request_ctx(areq);
+	struct rk2_crypto_dev *rkc = rctx->dev;
+
+	dma_unmap_sg(rkc->dev, areq->src, rctx->nrsgs, DMA_TO_DEVICE);
+}
+
+int rk2_hash_run(struct crypto_engine *engine, void *breq)
+{
+	struct ahash_request *areq = container_of(breq, struct ahash_request, base);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
+	struct rk2_ahash_rctx *rctx = ahash_request_ctx(areq);
+	struct ahash_alg *alg = crypto_ahash_alg(tfm);
+	struct rk2_crypto_template *algt = container_of(alg, struct rk2_crypto_template, alg.hash.base);
+	struct scatterlist *sgs = areq->src;
+	struct rk2_crypto_dev *rkc = rctx->dev;
+	struct rk2_crypto_lli *dd = &rkc->tl[0];
+	int ddi = 0;
+	int err = 0;
+	unsigned int len = areq->nbytes;
+	unsigned int todo;
+	u32 v;
+	int i;
+
+	err = rk2_hash_prepare(engine, breq);
+
+	err = pm_runtime_resume_and_get(rkc->dev);
+	if (err)
+		return err;
+
+	dev_dbg(rkc->dev, "%s %s len=%d\n", __func__,
+		crypto_tfm_alg_name(areq->base.tfm), areq->nbytes);
+
+	algt->stat_req++;
+	rkc->nreq++;
+
+	rctx->mode = algt->rk2_mode;
+	rctx->mode |= 0xffff0000;
+	rctx->mode |= RK2_CRYPTO_ENABLE | RK2_CRYPTO_HW_PAD;
+	writel(rctx->mode, rkc->reg + RK2_CRYPTO_HASH_CTL);
+
+	while (sgs && len > 0) {
+		dd = &rkc->tl[ddi];
+
+		todo = min(sg_dma_len(sgs), len);
+		dd->src_addr = sg_dma_address(sgs);
+		dd->src_len = todo;
+		dd->dst_addr = 0;
+		dd->dst_len = 0;
+		dd->dma_ctrl = ddi << 24;
+		dd->iv = 0;
+		dd->next = rkc->t_phy + sizeof(struct rk2_crypto_lli) * (ddi + 1);
+
+		if (ddi == 0)
+			dd->user = RK2_LLI_CIPHER_START | RK2_LLI_STRING_FIRST;
+		else
+			dd->user = 0;
+
+		len -= todo;
+		dd->dma_ctrl |= RK2_LLI_DMA_CTRL_SRC_INT;
+		if (len == 0) {
+			dd->user |= RK2_LLI_STRING_LAST;
+			dd->dma_ctrl |= RK2_LLI_DMA_CTRL_LAST;
+		}
+		dev_dbg(rkc->dev, "HASH SG %d sglen=%d user=%x dma=%x mode=%x len=%d todo=%d phy=%llx\n",
+			ddi, sgs->length, dd->user, dd->dma_ctrl, rctx->mode, len, todo, rkc->t_phy);
+
+		sgs = sg_next(sgs);
+		ddi++;
+	}
+	dd->next = 1;
+	writel(RK2_CRYPTO_DMA_INT_LISTDONE | 0x7F, rkc->reg + RK2_CRYPTO_DMA_INT_EN);
+
+	writel(rkc->t_phy, rkc->reg + RK2_CRYPTO_DMA_LLI_ADDR);
+
+	reinit_completion(&rkc->complete);
+	rkc->status = 0;
+
+	writel(RK2_CRYPTO_DMA_CTL_START | RK2_CRYPTO_DMA_CTL_START << 16, rkc->reg + RK2_CRYPTO_DMA_CTL);
+
+	wait_for_completion_interruptible_timeout(&rkc->complete,
+						  msecs_to_jiffies(2000));
+	if (!rkc->status) {
+		dev_err(rkc->dev, "DMA timeout\n");
+		err = -EFAULT;
+		goto theend;
+	}
+
+	readl_poll_timeout_atomic(rkc->reg + RK2_CRYPTO_HASH_VALID, v, v == 1,
+				  10, 1000);
+
+	for (i = 0; i < crypto_ahash_digestsize(tfm) / 4; i++) {
+		v = readl(rkc->reg + RK2_CRYPTO_HASH_DOUT_0 + i * 4);
+		put_unaligned_le32(be32_to_cpu(v), areq->result + i * 4);
+	}
+
+theend:
+	pm_runtime_put_autosuspend(rkc->dev);
+
+	rk2_hash_unprepare(engine, breq);
+
+	local_bh_disable();
+	crypto_finalize_hash_request(engine, breq, err);
+	local_bh_enable();
+
+	return 0;
+}
+
+int rk2_hash_init_tfm(struct crypto_ahash *tfm)
+{
+	struct rk2_ahash_ctx *tctx = crypto_ahash_ctx(tfm);
+	const char *alg_name = crypto_ahash_alg_name(tfm);
+	struct ahash_alg *alg = crypto_ahash_alg(tfm);
+	struct rk2_crypto_template *algt = container_of(alg, struct rk2_crypto_template, alg.hash.base);
+
+	/* for fallback */
+	tctx->fallback_tfm = crypto_alloc_ahash(alg_name, 0,
+						CRYPTO_ALG_NEED_FALLBACK);
+	if (IS_ERR(tctx->fallback_tfm)) {
+		dev_err(algt->dev->dev, "Could not load fallback driver.\n");
+		return PTR_ERR(tctx->fallback_tfm);
+	}
+
+	crypto_ahash_set_reqsize(tfm,
+				 sizeof(struct rk2_ahash_rctx) +
+				 crypto_ahash_reqsize(tctx->fallback_tfm));
+	return 0;
+}
+
+void rk2_hash_exit_tfm(struct crypto_ahash *tfm)
+{
+	struct rk2_ahash_ctx *tctx = crypto_ahash_ctx(tfm);
+
+	crypto_free_ahash(tctx->fallback_tfm);
+}
diff --git a/drivers/crypto/rockchip/rk2_crypto_skcipher.c b/drivers/crypto/rockchip/rk2_crypto_skcipher.c
new file mode 100644
index 000000000000..3e8e44d84b47
--- /dev/null
+++ b/drivers/crypto/rockchip/rk2_crypto_skcipher.c
@@ -0,0 +1,576 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * hardware cryptographic offloader for RK3568/RK3588 SoC
+ *
+ * Copyright (c) 2022-2023 Corentin Labbe
+ */
+#include
+#include "rk2_crypto.h"
+
+static void rk2_print(struct rk2_crypto_dev *rkc)
+{
+	u32 v;
+
+	v = readl(rkc->reg + RK2_CRYPTO_DMA_ST);
+	dev_info(rkc->dev, "DMA_ST %x\n", v);
+	switch (v) {
+	case 0:
+		dev_info(rkc->dev, "DMA_ST: DMA IDLE\n");
+		break;
+	case 1:
+		dev_info(rkc->dev, "DMA_ST: DMA BUSY\n");
+		break;
+	default:
+		dev_err(rkc->dev, "DMA_ST: invalid value\n");
+	}
+
+	v = readl(rkc->reg + RK2_CRYPTO_DMA_STATE);
+	dev_info(rkc->dev, "DMA_STATE %x\n", v);
+
+	switch (v & 0x3) {
+	case 0:
+		dev_info(rkc->dev, "DMA_STATE: DMA DST IDLE\n");
+		break;
+	case 1:
+		dev_info(rkc->dev, "DMA_STATE: DMA DST LOAD\n");
+		break;
+	case 2:
+		dev_info(rkc->dev, "DMA_STATE: DMA DST WORK\n");
+		break;
+	default:
+		dev_err(rkc->dev, "DMA DST invalid\n");
+		break;
+	}
+	switch (v & 0xC) {
+	case 0:
+		dev_info(rkc->dev, "DMA_STATE: DMA SRC IDLE\n");
+		break;
+	case 1:
+		dev_info(rkc->dev, "DMA_STATE: DMA SRC LOAD\n");
+		break;
+	case 2:
+		dev_info(rkc->dev, "DMA_STATE: DMA SRC WORK\n");
+		break;
+	default:
+		dev_err(rkc->dev, "DMA_STATE: DMA SRC invalid\n");
+		break;
+	}
+	switch (v & 0x30) {
+	case 0:
+		dev_info(rkc->dev, "DMA_STATE: DMA LLI IDLE\n");
+		break;
+	case 1:
+		dev_info(rkc->dev, "DMA_STATE: DMA LLI LOAD\n");
+		break;
+	case 2:
+		dev_info(rkc->dev, "DMA LLI WORK\n");
+		break;
+	default:
+		dev_err(rkc->dev, "DMA LLI invalid\n");
+		break;
+	}
+
+	v = readl(rkc->reg + RK2_CRYPTO_DMA_LLI_RADDR);
+	dev_info(rkc->dev, "DMA_LLI_RADDR %x\n", v);
+	v = readl(rkc->reg + RK2_CRYPTO_DMA_SRC_RADDR);
+	dev_info(rkc->dev, "DMA_SRC_RADDR %x\n", v);
+	v = readl(rkc->reg + RK2_CRYPTO_DMA_DST_WADDR);
+	dev_info(rkc->dev, "DMA_LLI_WADDR %x\n", v);
+	v = readl(rkc->reg + RK2_CRYPTO_DMA_ITEM_ID);
+	dev_info(rkc->dev, "DMA_LLI_ITEMID %x\n", v);
+
+	v = readl(rkc->reg + RK2_CRYPTO_CIPHER_ST);
+	dev_info(rkc->dev, "CIPHER_ST %x\n", v);
+	if (v & BIT(0))
+		dev_info(rkc->dev, "CIPHER_ST: BLOCK CIPHER BUSY\n");
+	else
+		dev_info(rkc->dev, "CIPHER_ST: BLOCK CIPHER IDLE\n");
+	if (v & BIT(2))
+		dev_info(rkc->dev, "CIPHER_ST: HASH BUSY\n");
+	else
+		dev_info(rkc->dev, "CIPHER_ST: HASH IDLE\n");
+	if (v & BIT(2))
+		dev_info(rkc->dev, "CIPHER_ST: OTP KEY VALID\n");
+	else
+		dev_info(rkc->dev, "CIPHER_ST: OTP KEY INVALID\n");
+
+	v = readl(rkc->reg + RK2_CRYPTO_CIPHER_STATE);
+	dev_info(rkc->dev, "CIPHER_STATE %x\n", v);
+	switch (v & 0x3) {
+	case 0:
+		dev_info(rkc->dev, "serial: IDLE state\n");
+		break;
+	case 1:
+		dev_info(rkc->dev, "serial: PRE state\n");
+		break;
+	case 2:
+		dev_info(rkc->dev, "serial: BULK state\n");
+		break;
+	default:
+		dev_info(rkc->dev, "serial: reserved state\n");
+		break;
+	}
+	switch (v & 0xC) {
+	case 0:
+		dev_info(rkc->dev, "mac_state: IDLE state\n");
+		break;
+	case 1:
+		dev_info(rkc->dev, "mac_state: PRE state\n");
+		break;
+	case 2:
+		dev_info(rkc->dev, "mac_state: BULK state\n");
+		break;
+	default:
+		dev_info(rkc->dev, "mac_state: reserved state\n");
+		break;
+	}
+	switch (v & 0x30) {
+	case 0:
+		dev_info(rkc->dev, "parallel_state: IDLE state\n");
+		break;
+	case 1:
+		dev_info(rkc->dev, "parallel_state: PRE state\n");
+		break;
+	case 2:
+		dev_info(rkc->dev, "parallel_state: BULK state\n");
+		break;
+	default:
+		dev_info(rkc->dev, "parallel_state: reserved state\n");
+		break;
+	}
+	switch (v & 0xC0) {
+	case 0:
+		dev_info(rkc->dev, "ccm_state: IDLE state\n");
+		break;
+	case 1:
+		dev_info(rkc->dev, "ccm_state: PRE state\n");
+		break;
+	case 2:
+		dev_info(rkc->dev, "ccm_state: NA state\n");
+		break;
+	default:
+		dev_info(rkc->dev, "ccm_state: reserved state\n");
+		break;
+	}
+	switch (v & 0xF00) {
+	case 0:
+		dev_info(rkc->dev, "gcm_state: IDLE state\n");
+		break;
+	case 1:
+		dev_info(rkc->dev, "gcm_state: PRE state\n");
+		break;
+	case 2:
+		dev_info(rkc->dev, "gcm_state: NA state\n");
+		break;
+	case 3:
+		dev_info(rkc->dev, "gcm_state: PC state\n");
+		break;
+	}
+	switch (v & 0xC00) {
+	case 0x1:
+		dev_info(rkc->dev, "hash_state: IDLE state\n");
+		break;
+	case 0x2:
+		dev_info(rkc->dev, "hash_state: IPAD state\n");
+		break;
+	case 0x4:
+		dev_info(rkc->dev, "hash_state: TEXT state\n");
+		break;
+	case 0x8:
+		dev_info(rkc->dev, "hash_state: OPAD state\n");
+		break;
+	case 0x10:
+		dev_info(rkc->dev, "hash_state: OPAD EXT state\n");
+		break;
+	default:
+		dev_info(rkc->dev, "hash_state: invalid state\n");
+		break;
+	}
+
+	v = readl(rkc->reg + RK2_CRYPTO_DMA_INT_ST);
+	dev_info(rkc->dev, "RK2_CRYPTO_DMA_INT_ST %x\n", v);
+}
+
+static int rk2_cipher_need_fallback(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+	struct rk2_crypto_template *algt = container_of(alg, struct rk2_crypto_template, alg.skcipher.base);
+	struct scatterlist *sgs, *sgd;
+	unsigned int stodo, dtodo, len;
+	unsigned int bs = crypto_skcipher_blocksize(tfm);
+
+	if (!req->cryptlen)
+		return true;
+
+	if (algt->is_xts) {
+		if (sg_nents_for_len(req->src, req->cryptlen) > 1)
+			return true;
+		if (sg_nents_for_len(req->dst, req->cryptlen) > 1)
+			return true;
+	}
+
+	len = req->cryptlen;
+	sgs = req->src;
+	sgd = req->dst;
+	while (sgs && sgd) {
+		if (!IS_ALIGNED(sgs->offset, sizeof(u32))) {
+			algt->stat_fb_align++;
+			return true;
+		}
+		if (!IS_ALIGNED(sgd->offset, sizeof(u32))) {
+			algt->stat_fb_align++;
+			return true;
+		}
+		stodo = min(len, sgs->length);
+		if (stodo % bs) {
+			algt->stat_fb_len++;
+			return true;
+		}
+		dtodo = min(len, sgd->length);
+		if (dtodo % bs) {
+			algt->stat_fb_len++;
+			return true;
+		}
+		if (stodo != dtodo) {
+			algt->stat_fb_sgdiff++;
+			return true;
+		}
+		len -= stodo;
+		sgs = sg_next(sgs);
+		sgd = sg_next(sgd);
+	}
+	return false;
+}
+
+static int rk2_cipher_fallback(struct skcipher_request *areq)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
+	struct rk2_cipher_ctx *op = crypto_skcipher_ctx(tfm);
+	struct rk2_cipher_rctx *rctx = skcipher_request_ctx(areq);
+	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+	struct rk2_crypto_template *algt = container_of(alg, struct rk2_crypto_template, alg.skcipher.base);
+	int err;
+
+	algt->stat_fb++;
+
+	skcipher_request_set_tfm(&rctx->fallback_req, op->fallback_tfm);
+	skcipher_request_set_callback(&rctx->fallback_req, areq->base.flags,
+				      areq->base.complete, areq->base.data);
+	skcipher_request_set_crypt(&rctx->fallback_req, areq->src, areq->dst,
+				   areq->cryptlen, areq->iv);
+	if (rctx->mode & RK2_CRYPTO_DEC)
+		err = crypto_skcipher_decrypt(&rctx->fallback_req);
+	else
+		err = crypto_skcipher_encrypt(&rctx->fallback_req);
+	return err;
+}
+
+static int rk2_cipher_handle_req(struct skcipher_request *req)
+{
+	struct rk2_cipher_rctx *rctx = skcipher_request_ctx(req);
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct rk2_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct rk2_crypto_dev *rkc;
+	struct crypto_engine *engine;
+
+	if (ctx->keylen == AES_KEYSIZE_192 * 2)
+		return rk2_cipher_fallback(req);
+
+	if (rk2_cipher_need_fallback(req))
+		return rk2_cipher_fallback(req);
+
+	rkc = get_rk2_crypto();
+
+	engine = rkc->engine;
+	rctx->dev = rkc;
+
+	return crypto_transfer_skcipher_request_to_engine(engine, req);
+}
+
+int rk2_aes_xts_setkey(struct crypto_skcipher *cipher, const u8 *key,
+		       unsigned int keylen)
+{
+	struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher);
+	struct rk2_cipher_ctx *ctx = crypto_tfm_ctx(tfm);
+	int err;
+
+	err = xts_verify_key(cipher, key, keylen);
+	if (err)
+		return err;
+
+	ctx->keylen = keylen;
+	memcpy(ctx->key, key, keylen);
+
+	return crypto_skcipher_setkey(ctx->fallback_tfm, key, keylen);
+}
+
+int rk2_aes_setkey(struct crypto_skcipher *cipher, const u8 *key,
+		   unsigned int keylen)
+{
+	struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher);
+	struct rk2_cipher_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 &&
+	    keylen != AES_KEYSIZE_256)
+		return -EINVAL;
+	ctx->keylen = keylen;
+	memcpy(ctx->key, key, keylen);
+
+	return crypto_skcipher_setkey(ctx->fallback_tfm, key, keylen);
+}
+
+int rk2_skcipher_encrypt(struct skcipher_request *req)
+{
+	struct rk2_cipher_rctx *rctx = skcipher_request_ctx(req);
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+	struct rk2_crypto_template *algt = container_of(alg, struct rk2_crypto_template, alg.skcipher.base);
+
+	rctx->mode = algt->rk2_mode;
+	return rk2_cipher_handle_req(req);
+}
+
+int rk2_skcipher_decrypt(struct skcipher_request *req)
+{
+	struct rk2_cipher_rctx *rctx = skcipher_request_ctx(req);
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+	struct rk2_crypto_template *algt = container_of(alg, struct rk2_crypto_template, alg.skcipher.base);
+
+	rctx->mode = algt->rk2_mode | RK2_CRYPTO_DEC;
+	return rk2_cipher_handle_req(req);
+}
+
+int rk2_cipher_run(struct crypto_engine *engine, void *async_req)
+{
+	struct skcipher_request *areq = container_of(async_req, struct skcipher_request, base);
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
+	struct rk2_cipher_rctx *rctx = skcipher_request_ctx(areq);
+	struct rk2_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct scatterlist *sgs, *sgd;
+	int err = 0;
+	int ivsize = crypto_skcipher_ivsize(tfm);
+	unsigned int len = areq->cryptlen;
+	unsigned int todo;
+	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+	struct rk2_crypto_template *algt = container_of(alg, struct rk2_crypto_template, alg.skcipher.base);
+	struct rk2_crypto_dev *rkc = rctx->dev;
+	struct rk2_crypto_lli *dd = &rkc->tl[0];
+	u32 m, v;
+	u32 *rkey = (u32 *)ctx->key;
+	u32 *riv = (u32 *)areq->iv;
+	int i;
+	unsigned int offset;
+
+	algt->stat_req++;
+	rkc->nreq++;
+
+	m = rctx->mode | RK2_CRYPTO_ENABLE;
+	if (algt->is_xts) {
+		switch (ctx->keylen) {
+		case AES_KEYSIZE_128 * 2:
+			m |= RK2_CRYPTO_AES_128BIT_key;
+			break;
+		case AES_KEYSIZE_256 * 2:
+			m |= RK2_CRYPTO_AES_256BIT_key;
+			break;
+		default:
+			dev_err(rkc->dev, "Invalid key length %u\n", ctx->keylen);
+			return -EINVAL;
+		}
+	} else {
+		switch (ctx->keylen) {
+		case AES_KEYSIZE_128:
+			m |= RK2_CRYPTO_AES_128BIT_key;
+			break;
+		case AES_KEYSIZE_192:
+			m |= RK2_CRYPTO_AES_192BIT_key;
+			break;
+		case AES_KEYSIZE_256:
+			m |= RK2_CRYPTO_AES_256BIT_key;
+			break;
+		default:
+			dev_err(rkc->dev, "Invalid key length %u\n", ctx->keylen);
+			return -EINVAL;
+		}
+	}
+
+	err = pm_runtime_resume_and_get(rkc->dev);
+	if (err)
+		return err;
+
+	/* The upper 16 bits are a write-enable mask, so write 1 to all of
+	 * them to allow writes to the 16 lower bits.
+	 */
+	m |= 0xffff0000;
+
+	dev_dbg(rkc->dev, "%s %s len=%u keylen=%u mode=%x\n", __func__,
+		crypto_tfm_alg_name(areq->base.tfm),
+		areq->cryptlen, ctx->keylen, m);
+	sgs = areq->src;
+	sgd = areq->dst;
+
+	while (sgs && sgd && len) {
+		ivsize = crypto_skcipher_ivsize(tfm);
+		if (areq->iv && crypto_skcipher_ivsize(tfm) > 0) {
+			if (rctx->mode & RK2_CRYPTO_DEC) {
+				offset = sgs->length - ivsize;
+				scatterwalk_map_and_copy(rctx->backup_iv, sgs,
+							 offset, ivsize, 0);
+			}
+		}
+
+		dev_dbg(rkc->dev, "SG len=%u mode=%x ivsize=%u\n", sgs->length, m, ivsize);
+
+		if (sgs == sgd) {
+			err = dma_map_sg(rkc->dev, sgs, 1, DMA_BIDIRECTIONAL);
+			if (err != 1) {
+				dev_err(rkc->dev, "Invalid sg number %d\n", err);
+				err = -EINVAL;
+				goto theend;
+			}
+		} else {
+			err = dma_map_sg(rkc->dev, sgs, 1, DMA_TO_DEVICE);
+			if (err != 1) {
+				dev_err(rkc->dev, "Invalid sg number %d\n", err);
+				err = -EINVAL;
+				goto theend;
+			}
+			err = dma_map_sg(rkc->dev, sgd, 1, DMA_FROM_DEVICE);
+			if (err != 1) {
+				dev_err(rkc->dev, "Invalid sg number %d\n", err);
+				err = -EINVAL;
+				dma_unmap_sg(rkc->dev, sgs, 1, DMA_TO_DEVICE);
+				goto theend;
+			}
+		}
+		err = 0;
+		writel(m, rkc->reg + RK2_CRYPTO_BC_CTL);
+
+		if (algt->is_xts) {
+			for (i = 0; i < ctx->keylen / 8; i++) {
+				v = cpu_to_be32(rkey[i]);
+				writel(v, rkc->reg + RK2_CRYPTO_KEY0 + i * 4);
+			}
+			for (i = 0; i < (ctx->keylen / 8); i++) {
+				v = cpu_to_be32(rkey[i + ctx->keylen / 8]);
+				writel(v, rkc->reg + RK2_CRYPTO_CH4_KEY0 + i * 4);
+			}
+		} else {
+			for (i = 0; i < ctx->keylen / 4; i++) {
+				v = cpu_to_be32(rkey[i]);
+				writel(v, rkc->reg + RK2_CRYPTO_KEY0 + i * 4);
+			}
+		}
+
+		if (ivsize) {
+			for (i = 0; i < ivsize / 4; i++)
+				writel(cpu_to_be32(riv[i]),
+				       rkc->reg + RK2_CRYPTO_CH0_IV_0 + i * 4);
+			writel(ivsize, rkc->reg + RK2_CRYPTO_CH0_IV_LEN);
+		}
+		if (!sgs->length) {
+			sgs = sg_next(sgs);
+			sgd = sg_next(sgd);
+			continue;
+		}
+
+		/* The HW supports multiple descriptors, so why does this
+		 * driver use only one? Using one descriptor per SG entry
+		 * seems the natural approach, and it works, but only for
+		 * encryption: with decryption it always fails on the second
+		 * descriptor, probably because the HW does not handle the
+		 * IV correctly across descriptors.
+		 */
+		todo = min(sg_dma_len(sgs), len);
+		len -= todo;
+		dd->src_addr = sg_dma_address(sgs);
+		dd->src_len = todo;
+		dd->dst_addr = sg_dma_address(sgd);
+		dd->dst_len = todo;
+		dd->iv = 0;
+		dd->next = 1;
+
+		dd->user = RK2_LLI_CIPHER_START | RK2_LLI_STRING_FIRST | RK2_LLI_STRING_LAST;
+		dd->dma_ctrl |= RK2_LLI_DMA_CTRL_DST_INT | RK2_LLI_DMA_CTRL_LAST;
+
+		writel(RK2_CRYPTO_DMA_INT_LISTDONE | 0x7F, rkc->reg + RK2_CRYPTO_DMA_INT_EN);
+
+		/*writel(0x00030000, rkc->reg + RK2_CRYPTO_FIFO_CTL);*/
+		writel(rkc->t_phy, rkc->reg + RK2_CRYPTO_DMA_LLI_ADDR);
+
+		reinit_completion(&rkc->complete);
+		rkc->status = 0;
+
+		writel(RK2_CRYPTO_DMA_CTL_START | 1 << 16, rkc->reg + RK2_CRYPTO_DMA_CTL);
+
+		wait_for_completion_interruptible_timeout(&rkc->complete,
+							  msecs_to_jiffies(10000));
+		if (sgs == sgd) {
+			dma_unmap_sg(rkc->dev, sgs, 1, DMA_BIDIRECTIONAL);
+		} else {
+			dma_unmap_sg(rkc->dev, sgs, 1, DMA_TO_DEVICE);
+			dma_unmap_sg(rkc->dev, sgd, 1, DMA_FROM_DEVICE);
+		}
+
+		if (!rkc->status) {
+			dev_err(rkc->dev, "DMA timeout\n");
+			rk2_print(rkc);
+			err = -EFAULT;
+			goto theend;
+		}
+		if (areq->iv && ivsize > 0) {
+			offset = sgd->length - ivsize;
+			if (rctx->mode & RK2_CRYPTO_DEC) {
+				memcpy(areq->iv, rctx->backup_iv, ivsize);
+				memzero_explicit(rctx->backup_iv, ivsize);
+			} else {
+				scatterwalk_map_and_copy(areq->iv, sgd, offset,
+							 ivsize, 0);
+			}
+		}
+		sgs = sg_next(sgs);
+		sgd = sg_next(sgd);
+	}
+theend:
+	writel(0xffff0000, rkc->reg + RK2_CRYPTO_BC_CTL);
+	pm_runtime_put_autosuspend(rkc->dev);
+
+	local_bh_disable();
+	crypto_finalize_skcipher_request(engine, areq, err);
+	local_bh_enable();
+	return 0;
+}
+
+int rk2_cipher_tfm_init(struct crypto_skcipher *tfm)
+{
+	struct rk2_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+	const char *name = crypto_tfm_alg_name(&tfm->base);
+	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+	struct rk2_crypto_template *algt = container_of(alg, struct rk2_crypto_template, alg.skcipher.base);
+
+	ctx->fallback_tfm = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
+	if (IS_ERR(ctx->fallback_tfm)) {
+		dev_err(algt->dev->dev, "ERROR: Cannot allocate fallback for %s %ld\n",
+			name, PTR_ERR(ctx->fallback_tfm));
+		return PTR_ERR(ctx->fallback_tfm);
+	}
+
+	dev_info(algt->dev->dev, "Fallback for %s is %s\n",
+		 crypto_tfm_alg_driver_name(&tfm->base),
+		 crypto_tfm_alg_driver_name(crypto_skcipher_tfm(ctx->fallback_tfm)));
+
+	tfm->reqsize = sizeof(struct rk2_cipher_rctx) +
+		       crypto_skcipher_reqsize(ctx->fallback_tfm);
+
+	return 0;
+}
+
+void rk2_cipher_tfm_exit(struct crypto_skcipher *tfm)
+{
+	struct rk2_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	memzero_explicit(ctx->key, ctx->keylen);
+	crypto_free_skcipher(ctx->fallback_tfm);
+}
-- 
2.41.0