From nobody Mon Feb 9 01:17:09 2026
From: Pavitrakumar Managutte
To: linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org,
	devicetree@vger.kernel.org, herbert@gondor.apana.org.au, robh@kernel.org
Cc: krzk+dt@kernel.org, conor+dt@kernel.org, Ruud.Derwig@synopsys.com,
	manjunath.hadli@vayavyalabs.com, adityak@vayavyalabs.com,
	Pavitrakumar Managutte, Bhoomika Kadabi
Subject: [PATCH v3 1/6] dt-bindings: crypto: Document support for SPAcc
Date: Mon, 2 Jun 2025 11:02:26 +0530
Message-Id: <20250602053231.403143-2-pavitrakumarm@vayavyalabs.com>
In-Reply-To: <20250602053231.403143-1-pavitrakumarm@vayavyalabs.com>
References: <20250602053231.403143-1-pavitrakumarm@vayavyalabs.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Add DT bindings for the SPAcc driver to Documentation. The Synopsys
DesignWare (DWC) Security Protocol Accelerator (SPAcc) Hardware Crypto
Engine is a crypto IP designed by Synopsys.
Co-developed-by: Bhoomika Kadabi
Signed-off-by: Bhoomika Kadabi
Signed-off-by: Pavitrakumar Managutte
Acked-by: Ruud Derwig
---
 .../bindings/crypto/snps,dwc-spacc.yaml       | 77 +++++++++++++++++++
 1 file changed, 77 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/crypto/snps,dwc-spacc.yaml

diff --git a/Documentation/devicetree/bindings/crypto/snps,dwc-spacc.yaml b/Documentation/devicetree/bindings/crypto/snps,dwc-spacc.yaml
new file mode 100644
index 000000000000..2780b3db2182
--- /dev/null
+++ b/Documentation/devicetree/bindings/crypto/snps,dwc-spacc.yaml
@@ -0,0 +1,77 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/crypto/snps,dwc-spacc.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Synopsys DesignWare Security Protocol Accelerator (SPAcc) Crypto Engine
+
+maintainers:
+  - Ruud Derwig
+
+description: |
+  This binding describes the Synopsys DWC Security Protocol Accelerator (SPAcc),
+  which is a hardware IP designed to accelerate cryptographic operations, such
+  as encryption, decryption, and hashing.
+
+  The SPAcc supports virtualization where a single physical SPAcc can be
+  accessed as multiple virtual SPAcc instances, each with its own register set.
+  These virtual instances can be assigned different priorities.
+
+  In this configuration, the SPAcc IP is instantiated within the Synopsys
+  NSIMOSCI virtual SoC platform, a SystemC simulation environment used for
+  software development and testing. The device is accessed as a memory-mapped
+  peripheral and generates interrupts to the ARC interrupt controller.
+
+properties:
+  compatible:
+    items:
+      - const: snps,nsimosci-hs-spacc
+
+  reg:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+
+  clocks:
+    maxItems: 1
+
+  snps,vspacc-id:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description: |
+      Virtual SPAcc instance identifier.
+      The SPAcc hardware supports multiple virtual instances (determined by
+      the ELP_SPACC_CONFIG_VSPACC_CNT parameter), and this ID is used to
+      identify which virtual instance this node represents.
+    minimum: 0
+    maximum: 7
+
+  snps,spacc-internal-counter:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description: |
+      Hardware counter that generates an interrupt based on a count value.
+      This counter starts ticking when there is a completed job sitting on
+      the status fifo to be serviced. This makes sure that no jobs are
+      starved of processing.
+    minimum: 0x19000
+    maximum: 0xFFFFF
+
+required:
+  - compatible
+  - reg
+  - interrupts
+
+additionalProperties: false
+
+examples:
+  - |
+
+    crypto@40000000 {
+        compatible = "snps,nsimosci-hs-spacc";
+        reg = <0x40000000 0x3FFFF>;
+        interrupts = <28>;
+        clocks = <&clock>;
+        snps,spacc-internal-counter = <0x20000>;
+        snps,vspacc-id = <0>;
+    };
-- 
2.25.1

From nobody Mon Feb 9 01:17:09 2026
b=sUgUo12PIluPF0UDL/yC9x1rLNsKSvbhikXMpEegb3v6ud2xAA51IQ00Z/ljq3IaXHsdIIJhFalwWtDK1H6VddFcCu7avrx9unBohCde/FLkCaecD/hsEWpNiTgKDijG7eaxNoy68VG0XgD/DJmZTNic08I849lizeCQaS6hHJk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=vayavyalabs.com; spf=pass smtp.mailfrom=vayavyalabs.com; dkim=pass (1024-bit key) header.d=vayavyalabs.com header.i=@vayavyalabs.com header.b=mpqtmRZE; arc=none smtp.client-ip=209.85.167.171 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=vayavyalabs.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=vayavyalabs.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=vayavyalabs.com header.i=@vayavyalabs.com header.b="mpqtmRZE" Received: by mail-oi1-f171.google.com with SMTP id 5614622812f47-403407e998eso2415984b6e.0 for ; Sun, 01 Jun 2025 22:35:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vayavyalabs.com; s=google; t=1748842520; x=1749447320; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=lefFVpVGc6WYKlVgbjyPxo/jymUV9IR81UOhHDpURIY=; b=mpqtmRZE24rbmsCvf+SISjtZKF5DHl9hakFZNIOHCxTeJ/QRNF+Sok1+K2Al2w5AE8 DT0UNyuRdkwaTcqohbPnzm+hQ5nl/XURjVQLFz9eS21BTbSJmfazDTOL7n9U/m2GBihZ PbzxIIhrznV1dXyIJ/qMscyJ3CvAZ3zuEwzmQ= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1748842520; x=1749447320; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=lefFVpVGc6WYKlVgbjyPxo/jymUV9IR81UOhHDpURIY=; b=olvvEnFLR3jIQVPRr2CSK61uExGZWHMHTM2U9hl8JZ0EGQUz/TqlzAhsFiXAhql/1X QVX/5jOX5Y136gBTwriBrfHQiEYsf5nCP8ntg4OX2BUU8u9YH2DDv2Iny+Gqop5jScVr jzhaw5Uw1+u9Duv2B7ABgLRwdlz39789gfg6Pi/TaBkdK7uaSgqB1sIBxjixyFX393zR 
5NYg0yuNxspoz1aNSlkbFPXTW8XES+kFwavsDbQjLs9AJmRLi3WysymWvK5Pc4kmqoSR ro9exX1Cu7wyaIgWY91C//utbzj/cT6cldtDi2qr8d/3STuQ89r9GsopkhFBTZccYgns ttOA== X-Forwarded-Encrypted: i=1; AJvYcCWMazvuski8sUFBwhGYDPJwhw83ZLyiOsQ6XjrqmUwdPdDyLvxBwmrGqf7TuEyybHMLoVquwzsChvnmFCw=@vger.kernel.org X-Gm-Message-State: AOJu0Yzn2bcXKlU8WEuiIanJqV09PqoKz0P55HqW5o6bsvd/XfE1Hh3U LEduEU5AVJsPT2s5kiI8Ql+6HG0sPTknFFuGC8zL0I646WtB/GItVNqyOP8s64ZukINmZmgmB3j 9ozaVxpI= X-Gm-Gg: ASbGncsxf0E171cwcuXyiO8XkJQxRHR+zh0tX83xPbQv0+jH0Bd22IyIlbv1nVwFnQT vVFQSmI0M6zs4Eo+euq1I0TEzsiHYmWb7nYEG/yrXHbptL60djZEP7/SywqgA5UR0oqACHHVrdG Lhz0umWihc449NduBtgsZjI4IMxqraaTfaQnozDHARM9uKcVm4MuUUKOgI1IIPnfrDhRB3hp/sw KCh4hCXEZaO8o8lNQu1N6zqQgyBfbUm+irway0Sxr2zGYzQVIOYSajNHxefad1mXeKKpGFi5kEO fYiS9QDC/XNekZl4RqEZXu6ov8wzBsVWBkyLbFda4hy8pE/kIIXhrrZJo2MeOLMTq2e3Oz+7C3x 3LiA= X-Google-Smtp-Source: AGHT+IFgC7hQGTOAgttrHhUK9DuJ5ZfI1L/YzNqePYyj0x4Mmp6WXcZ/PaCE80bZXNxmr/YfcjDV+g== X-Received: by 2002:a17:90b:35ca:b0:311:b07f:1b86 with SMTP id 98e67ed59e1d1-3127c852665mr12068885a91.29.1748842509033; Sun, 01 Jun 2025 22:35:09 -0700 (PDT) Received: from localhost.localdomain ([117.251.222.160]) by smtp.gmail.com with ESMTPSA id 98e67ed59e1d1-3124e30cbe6sm4836986a91.39.2025.06.01.22.35.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 01 Jun 2025 22:35:08 -0700 (PDT) From: Pavitrakumar Managutte To: linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, herbert@gondor.apana.org.au, robh@kernel.org Cc: krzk+dt@kernel.org, conor+dt@kernel.org, Ruud.Derwig@synopsys.com, manjunath.hadli@vayavyalabs.com, adityak@vayavyalabs.com, Pavitrakumar Managutte , Shweta Raikar Subject: [PATCH v3 2/6] Add SPAcc Skcipher support Date: Mon, 2 Jun 2025 11:02:27 +0530 Message-Id: <20250602053231.403143-3-pavitrakumarm@vayavyalabs.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20250602053231.403143-1-pavitrakumarm@vayavyalabs.com> References: <20250602053231.403143-1-pavitrakumarm@vayavyalabs.com> 
Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Add SPAcc Skcipher support to Synopsys Protocol Accelerator(SPAcc) IP, which is a crypto accelerator engine. SPAcc supports ciphers, hashes and AEAD algorithms such as AES in different modes, SHA variants, AES-GCM, Chacha-poly1305 etc. Co-developed-by: Shweta Raikar Signed-off-by: Shweta Raikar Signed-off-by: Pavitrakumar Managutte Acked-by: Ruud Derwig --- drivers/crypto/dwc-spacc/spacc_core.c | 1370 ++++++++++++++++++++ drivers/crypto/dwc-spacc/spacc_core.h | 829 ++++++++++++ drivers/crypto/dwc-spacc/spacc_device.c | 309 +++++ drivers/crypto/dwc-spacc/spacc_device.h | 231 ++++ drivers/crypto/dwc-spacc/spacc_hal.c | 374 ++++++ drivers/crypto/dwc-spacc/spacc_hal.h | 114 ++ drivers/crypto/dwc-spacc/spacc_interrupt.c | 324 +++++ drivers/crypto/dwc-spacc/spacc_manager.c | 610 +++++++++ drivers/crypto/dwc-spacc/spacc_skcipher.c | 763 +++++++++++ 9 files changed, 4924 insertions(+) create mode 100644 drivers/crypto/dwc-spacc/spacc_core.c create mode 100644 drivers/crypto/dwc-spacc/spacc_core.h create mode 100644 drivers/crypto/dwc-spacc/spacc_device.c create mode 100644 drivers/crypto/dwc-spacc/spacc_device.h create mode 100644 drivers/crypto/dwc-spacc/spacc_hal.c create mode 100644 drivers/crypto/dwc-spacc/spacc_hal.h create mode 100644 drivers/crypto/dwc-spacc/spacc_interrupt.c create mode 100644 drivers/crypto/dwc-spacc/spacc_manager.c create mode 100644 drivers/crypto/dwc-spacc/spacc_skcipher.c diff --git a/drivers/crypto/dwc-spacc/spacc_core.c b/drivers/crypto/dwc-spa= cc/spacc_core.c new file mode 100644 index 000000000000..2363f2db34ba --- /dev/null +++ b/drivers/crypto/dwc-spacc/spacc_core.c @@ -0,0 +1,1370 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include +#include +#include +#include +#include + +#include "spacc_core.h" +#include 
"spacc_device.h" + +static const u8 spacc_ctrl_map[SPACC_CTRL_VER_SIZE][SPACC_CTRL_MAPSIZE] = =3D { + { 0, 8, 4, 12, 24, 16, 31, 25, 26, 27, 28, 29, 14, 15 }, + { 0, 8, 3, 12, 24, 16, 31, 25, 26, 27, 28, 29, 14, 15 }, + { 0, 4, 8, 13, 15, 16, 24, 25, 26, 27, 28, 29, 30, 31 } +}; + +static const int keysizes[2][7] =3D { + /* 1 2 4 8 16 32 64 */ + { 5, 8, 16, 24, 32, 0, 0 }, /* cipher key sizes */ + { 8, 16, 20, 24, 32, 64, 128 }, /* hash key sizes */ +}; + +static const struct enc_config enc_table[] =3D { + /* mode cipher_alg cipher_mode auxinfo_cs_mode */ + {CRYPTO_MODE_NULL, 0, 0, 0}, + {CRYPTO_MODE_AES_ECB, C_AES, CM_ECB, 0}, + {CRYPTO_MODE_AES_CBC, C_AES, CM_CBC, 0}, + {CRYPTO_MODE_AES_CS1, C_AES, CM_CBC, 1}, + {CRYPTO_MODE_AES_CS2, C_AES, CM_CBC, 2}, + {CRYPTO_MODE_AES_CS3, C_AES, CM_CBC, 3}, + {CRYPTO_MODE_AES_CFB, C_AES, CM_CFB, 0}, + {CRYPTO_MODE_AES_OFB, C_AES, CM_OFB, 0}, + {CRYPTO_MODE_AES_CTR, C_AES, CM_CTR, 0}, + {CRYPTO_MODE_AES_CCM, C_AES, CM_CCM, 0}, + {CRYPTO_MODE_AES_GCM, C_AES, CM_GCM, 0}, + {CRYPTO_MODE_AES_F8, C_AES, CM_F8, 0}, + {CRYPTO_MODE_AES_XTS, C_AES, CM_XTS, 0}, + {CRYPTO_MODE_MULTI2_ECB, C_MULTI2, CM_ECB, 0}, + {CRYPTO_MODE_MULTI2_CBC, C_MULTI2, CM_CBC, 0}, + {CRYPTO_MODE_MULTI2_OFB, C_MULTI2, CM_OFB, 0}, + {CRYPTO_MODE_MULTI2_CFB, C_MULTI2, CM_CFB, 0}, + {CRYPTO_MODE_3DES_CBC, C_DES, CM_CBC, 0}, + {CRYPTO_MODE_DES_CBC, C_DES, CM_CBC, 0}, + {CRYPTO_MODE_3DES_ECB, C_DES, CM_ECB, 0}, + {CRYPTO_MODE_DES_ECB, C_DES, CM_ECB, 0}, + {CRYPTO_MODE_KASUMI_ECB, C_KASUMI, CM_ECB, 0}, + {CRYPTO_MODE_KASUMI_F8, C_KASUMI, CM_F8, 0}, + {CRYPTO_MODE_SNOW3G_UEA2, C_SNOW3G_UEA2, CM_ECB, 0}, + {CRYPTO_MODE_ZUC_UEA3, C_ZUC_UEA3, CM_ECB, 0}, + {CRYPTO_MODE_CHACHA20_STREAM, C_CHACHA20, CM_CHACHA_STREAM, 0}, + {CRYPTO_MODE_CHACHA20_POLY1305, C_CHACHA20, CM_CHACHA_AEAD, 0}, + {CRYPTO_MODE_SM4_ECB, C_SM4, CM_ECB, 0}, + {CRYPTO_MODE_SM4_CBC, C_SM4, CM_CBC, 0}, + {CRYPTO_MODE_SM4_CS1, C_SM4, CM_CBC, 1}, + {CRYPTO_MODE_SM4_CS2, C_SM4, CM_CBC, 2}, + 
{CRYPTO_MODE_SM4_CS3, C_SM4, CM_CBC, 3}, + {CRYPTO_MODE_SM4_CFB, C_SM4, CM_CFB, 0}, + {CRYPTO_MODE_SM4_OFB, C_SM4, CM_OFB, 0}, + {CRYPTO_MODE_SM4_CTR, C_SM4, CM_CTR, 0}, + {CRYPTO_MODE_SM4_CCM, C_SM4, CM_CCM, 0}, + {CRYPTO_MODE_SM4_GCM, C_SM4, CM_GCM, 0}, + {CRYPTO_MODE_SM4_F8, C_SM4, CM_F8, 0}, + {CRYPTO_MODE_SM4_XTS, C_SM4, CM_XTS, 0}, +}; + +static const struct hash_config hash_table[] =3D { + /* mode hash_alg hash_mode auxinfo_dir */ + {CRYPTO_MODE_NULL, H_NULL, 0, 0}, + {CRYPTO_MODE_HMAC_SHA1, H_SHA1, HM_HMAC, 0}, + {CRYPTO_MODE_HMAC_MD5, H_MD5, HM_HMAC, 0}, + {CRYPTO_MODE_HMAC_SHA224, H_SHA224, HM_HMAC, 0}, + {CRYPTO_MODE_HMAC_SHA256, H_SHA256, HM_HMAC, 0}, + {CRYPTO_MODE_HMAC_SHA384, H_SHA384, HM_HMAC, 0}, + {CRYPTO_MODE_HMAC_SHA512, H_SHA512, HM_HMAC, 0}, + {CRYPTO_MODE_HMAC_SHA512_224, H_SHA512_224, HM_HMAC, 0}, + {CRYPTO_MODE_HMAC_SHA512_256, H_SHA512_256, HM_HMAC, 0}, + {CRYPTO_MODE_SSLMAC_MD5, H_MD5, HM_SSLMAC, 0}, + {CRYPTO_MODE_SSLMAC_SHA1, H_SHA1, HM_SSLMAC, 0}, + {CRYPTO_MODE_HASH_SHA1, H_SHA1, HM_RAW, 0}, + {CRYPTO_MODE_HASH_MD5, H_MD5, HM_RAW, 0}, + {CRYPTO_MODE_HASH_SHA224, H_SHA224, HM_RAW, 0}, + {CRYPTO_MODE_HASH_SHA256, H_SHA256, HM_RAW, 0}, + {CRYPTO_MODE_HASH_SHA384, H_SHA384, HM_RAW, 0}, + {CRYPTO_MODE_HASH_SHA512, H_SHA512, HM_RAW, 0}, + {CRYPTO_MODE_HASH_SHA512_224, H_SHA512_224, HM_RAW, 0}, + {CRYPTO_MODE_HASH_SHA512_256, H_SHA512_256, HM_RAW, 0}, + {CRYPTO_MODE_HASH_SHA3_224, H_SHA3_224, HM_RAW, 0}, + {CRYPTO_MODE_HASH_SHA3_256, H_SHA3_256, HM_RAW, 0}, + {CRYPTO_MODE_HASH_SHA3_384, H_SHA3_384, HM_RAW, 0}, + {CRYPTO_MODE_HASH_SHA3_512, H_SHA3_512, HM_RAW, 0}, + {CRYPTO_MODE_HASH_SHAKE128, H_SHAKE128, HM_SHAKE_SHAKE, 0}, + {CRYPTO_MODE_HASH_SHAKE256, H_SHAKE256, HM_SHAKE_SHAKE, 0}, + {CRYPTO_MODE_HASH_CSHAKE128, H_SHAKE128, HM_SHAKE_CSHAKE, 0}, + {CRYPTO_MODE_HASH_CSHAKE256, H_SHAKE256, HM_SHAKE_CSHAKE, 0}, + {CRYPTO_MODE_MAC_KMAC128, H_SHAKE128, HM_SHAKE_KMAC, 0}, + {CRYPTO_MODE_MAC_KMAC256, H_SHAKE256, HM_SHAKE_KMAC, 0}, + 
{CRYPTO_MODE_MAC_KMACXOF128, H_SHAKE128, HM_SHAKE_KMAC, 1}, + {CRYPTO_MODE_MAC_KMACXOF256, H_SHAKE256, HM_SHAKE_KMAC, 1}, + {CRYPTO_MODE_MAC_XCBC, H_XCBC, HM_RAW, 0}, + {CRYPTO_MODE_MAC_CMAC, H_CMAC, HM_RAW, 0}, + {CRYPTO_MODE_MAC_KASUMI_F9, H_KF9, HM_RAW, 0}, + {CRYPTO_MODE_MAC_SNOW3G_UIA2, H_SNOW3G_UIA2, HM_RAW, 0}, + {CRYPTO_MODE_MAC_ZUC_UIA3, H_ZUC_UIA3, HM_RAW, 0}, + {CRYPTO_MODE_MAC_POLY1305, H_POLY1305, HM_RAW, 0}, + {CRYPTO_MODE_HASH_CRC32, H_CRC32_I3E802_3, HM_RAW, 0}, + {CRYPTO_MODE_MAC_MICHAEL, H_MICHAEL, HM_RAW, 0}, + {CRYPTO_MODE_HASH_SM3, H_SM3, HM_RAW, 0}, + {CRYPTO_MODE_HMAC_SM3, H_SM3, HM_HMAC, 0}, + {CRYPTO_MODE_MAC_SM4_XCBC, H_SM4_XCBC_MAC, HM_RAW, 0}, + {CRYPTO_MODE_MAC_SM4_CMAC, H_SM4_CMAC, HM_RAW, 0}, +}; + +/* bits are 40, 64, 128, 192, 256, and top bit for hash */ +static const unsigned char template[] =3D { + [CRYPTO_MODE_NULL] =3D 0, + [CRYPTO_MODE_AES_ECB] =3D 28, /* AESECB 128/224/256 */ + [CRYPTO_MODE_AES_CBC] =3D 28, /* AESCBC 128/224/256 */ + [CRYPTO_MODE_AES_CTR] =3D 28, /* AESCTR 128/224/256 */ + [CRYPTO_MODE_AES_CCM] =3D 28, /* AESCCM 128/224/256 */ + [CRYPTO_MODE_AES_GCM] =3D 28, /* AESGCM 128/224/256 */ + [CRYPTO_MODE_AES_F8] =3D 28, /* AESF8 128/224/256 */ + [CRYPTO_MODE_AES_XTS] =3D 20, /* AESXTS 128/256 */ + [CRYPTO_MODE_AES_CFB] =3D 28, /* AESCFB 128/224/256 */ + [CRYPTO_MODE_AES_OFB] =3D 28, /* AESOFB 128/224/256 */ + [CRYPTO_MODE_AES_CS1] =3D 28, /* AESCS1 128/224/256 */ + [CRYPTO_MODE_AES_CS2] =3D 28, /* AESCS2 128/224/256 */ + [CRYPTO_MODE_AES_CS3] =3D 28, /* AESCS3 128/224/256 */ + [CRYPTO_MODE_MULTI2_ECB] =3D 0, /* MULTI2 */ + [CRYPTO_MODE_MULTI2_CBC] =3D 0, /* MULTI2 */ + [CRYPTO_MODE_MULTI2_OFB] =3D 0, /* MULTI2 */ + [CRYPTO_MODE_MULTI2_CFB] =3D 0, /* MULTI2 */ + [CRYPTO_MODE_3DES_CBC] =3D 8, /* 3DES CBC */ + [CRYPTO_MODE_3DES_ECB] =3D 8, /* 3DES ECB */ + [CRYPTO_MODE_DES_CBC] =3D 2, /* DES CBC */ + [CRYPTO_MODE_DES_ECB] =3D 2, /* DES ECB */ + [CRYPTO_MODE_KASUMI_ECB] =3D 4, /* KASUMI ECB */ + [CRYPTO_MODE_KASUMI_F8] 
=3D 4, /* KASUMI F8 */ + [CRYPTO_MODE_SNOW3G_UEA2] =3D 4, /* SNOW3G */ + [CRYPTO_MODE_ZUC_UEA3] =3D 4, /* ZUC */ + [CRYPTO_MODE_CHACHA20_STREAM] =3D 16, /* CHACHA20 */ + [CRYPTO_MODE_CHACHA20_POLY1305] =3D 16, /* CHACHA20 */ + [CRYPTO_MODE_SM4_ECB] =3D 4, /* SM4ECB 128 */ + [CRYPTO_MODE_SM4_CBC] =3D 4, /* SM4CBC 128 */ + [CRYPTO_MODE_SM4_CFB] =3D 4, /* SM4CFB 128 */ + [CRYPTO_MODE_SM4_OFB] =3D 4, /* SM4OFB 128 */ + [CRYPTO_MODE_SM4_CTR] =3D 4, /* SM4CTR 128 */ + [CRYPTO_MODE_SM4_CCM] =3D 4, /* SM4CCM 128 */ + [CRYPTO_MODE_SM4_GCM] =3D 4, /* SM4GCM 128 */ + [CRYPTO_MODE_SM4_F8] =3D 4, /* SM4F8 128 */ + [CRYPTO_MODE_SM4_XTS] =3D 4, /* SM4XTS 128 */ + [CRYPTO_MODE_SM4_CS1] =3D 4, /* SM4CS1 128 */ + [CRYPTO_MODE_SM4_CS2] =3D 4, /* SM4CS2 128 */ + [CRYPTO_MODE_SM4_CS3] =3D 4, /* SM4CS3 128 */ + + [CRYPTO_MODE_HASH_MD5] =3D 242, + [CRYPTO_MODE_HMAC_MD5] =3D 242, + [CRYPTO_MODE_HASH_SHA1] =3D 242, + [CRYPTO_MODE_HMAC_SHA1] =3D 242, + [CRYPTO_MODE_HASH_SHA224] =3D 242, + [CRYPTO_MODE_HMAC_SHA224] =3D 242, + [CRYPTO_MODE_HASH_SHA256] =3D 242, + [CRYPTO_MODE_HMAC_SHA256] =3D 242, + [CRYPTO_MODE_HASH_SHA384] =3D 242, + [CRYPTO_MODE_HMAC_SHA384] =3D 242, + [CRYPTO_MODE_HASH_SHA512] =3D 242, + [CRYPTO_MODE_HMAC_SHA512] =3D 242, + [CRYPTO_MODE_HASH_SHA512_224] =3D 242, + [CRYPTO_MODE_HMAC_SHA512_224] =3D 242, + [CRYPTO_MODE_HASH_SHA512_256] =3D 242, + [CRYPTO_MODE_HMAC_SHA512_256] =3D 242, + [CRYPTO_MODE_MAC_XCBC] =3D 154, /* XaCBC */ + [CRYPTO_MODE_MAC_CMAC] =3D 154, /* CMAC */ + [CRYPTO_MODE_MAC_KASUMI_F9] =3D 130, /* KASUMI */ + [CRYPTO_MODE_MAC_SNOW3G_UIA2] =3D 130, /* SNOW */ + [CRYPTO_MODE_MAC_ZUC_UIA3] =3D 130, /* ZUC */ + [CRYPTO_MODE_MAC_POLY1305] =3D 144, + [CRYPTO_MODE_SSLMAC_MD5] =3D 130, + [CRYPTO_MODE_SSLMAC_SHA1] =3D 132, + [CRYPTO_MODE_HASH_CRC32] =3D 0, + [CRYPTO_MODE_MAC_MICHAEL] =3D 129, + + [CRYPTO_MODE_HASH_SHA3_224] =3D 242, + [CRYPTO_MODE_HASH_SHA3_256] =3D 242, + [CRYPTO_MODE_HASH_SHA3_384] =3D 242, + [CRYPTO_MODE_HASH_SHA3_512] =3D 242, + 
[CRYPTO_MODE_HASH_SHAKE128] =3D 242, + [CRYPTO_MODE_HASH_SHAKE256] =3D 242, + [CRYPTO_MODE_HASH_CSHAKE128] =3D 130, + [CRYPTO_MODE_HASH_CSHAKE256] =3D 130, + [CRYPTO_MODE_MAC_KMAC128] =3D 242, + [CRYPTO_MODE_MAC_KMAC256] =3D 242, + [CRYPTO_MODE_MAC_KMACXOF128] =3D 242, + [CRYPTO_MODE_MAC_KMACXOF256] =3D 242, + [CRYPTO_MODE_HASH_SM3] =3D 242, + [CRYPTO_MODE_HMAC_SM3] =3D 242, + [CRYPTO_MODE_MAC_SM4_XCBC] =3D 242, + [CRYPTO_MODE_MAC_SM4_CMAC] =3D 242, +}; + +int spacc_sg_to_ddt(struct device *dev, struct scatterlist *sg, + int nbytes, struct pdu_ddt *ddt, int dma_direction) +{ + int i; + int nents; + int rc =3D 0; + int orig_nents; + struct scatterlist *sgl; + struct scatterlist *sg_entry; + + orig_nents =3D sg_nents(sg); + if (orig_nents > 1) { + sgl =3D sg_last(sg, orig_nents); + if (sgl->length =3D=3D 0) + orig_nents--; + } + + nents =3D dma_map_sg(dev, sg, orig_nents, dma_direction); + if (nents <=3D 0) + return -ENOMEM; + + /* require ATOMIC operations */ + rc =3D pdu_ddt_init(dev, ddt, nents | 0x80000000); + if (rc < 0) { + dma_unmap_sg(dev, sg, nents, dma_direction); + return -EIO; + } + + for_each_sg(sg, sg_entry, nents, i) { + pdu_ddt_add(dev, ddt, sg_dma_address(sg_entry), + sg_dma_len(sg_entry)); + } + + dma_sync_sg_for_device(dev, sg, nents, dma_direction); + + return nents; +} + +int spacc_set_operation(struct spacc_device *spacc, int handle, int op, + u32 prot, u32 icvcmd, u32 icvoff, + u32 icvsz, u32 sec_key) +{ + int ret =3D 0; + struct spacc_job *job =3D NULL; + + if (handle < 0 || handle >=3D SPACC_MAX_JOBS) + return -EINVAL; + + job =3D &spacc->job[handle]; + if (!job) + return -EIO; + + job->op =3D op; + if (op =3D=3D OP_ENCRYPT) + job->ctrl |=3D SPACC_CTRL_MASK(SPACC_CTRL_ENCRYPT); + else + job->ctrl &=3D ~SPACC_CTRL_MASK(SPACC_CTRL_ENCRYPT); + + switch (prot) { + case ICV_HASH: + /* HASH of plaintext */ + job->ctrl |=3D SPACC_CTRL_MASK(SPACC_CTRL_ICV_PT); + break; + case ICV_HASH_ENCRYPT: + /* + * HASH the plaintext and encrypt the lot + * 
ICV_PT and ICV_APPEND must be set too + */ + job->ctrl |=3D SPACC_CTRL_MASK(SPACC_CTRL_ICV_ENC); + job->ctrl |=3D SPACC_CTRL_MASK(SPACC_CTRL_ICV_PT); + /* this mode is not valid when BIT_ALIGN !=3D 0 */ + job->ctrl |=3D SPACC_CTRL_MASK(SPACC_CTRL_ICV_APPEND); + break; + case ICV_ENCRYPT_HASH: + /* HASH the ciphertext */ + job->ctrl &=3D ~SPACC_CTRL_MASK(SPACC_CTRL_ICV_PT); + job->ctrl &=3D ~SPACC_CTRL_MASK(SPACC_CTRL_ICV_ENC); + break; + case ICV_IGNORE: + break; + default: + ret =3D -EINVAL; + } + + job->icv_len =3D icvsz; + + switch (icvcmd) { + case IP_ICV_OFFSET: + job->icv_offset =3D icvoff; + job->ctrl &=3D ~SPACC_CTRL_MASK(SPACC_CTRL_ICV_APPEND); + break; + case IP_ICV_APPEND: + job->ctrl |=3D SPACC_CTRL_MASK(SPACC_CTRL_ICV_APPEND); + break; + case IP_ICV_IGNORE: + break; + default: + ret =3D -EINVAL; + } + + if (sec_key) + job->ctrl |=3D SPACC_CTRL_MASK(SPACC_CTRL_SEC_KEY); + + return ret; +} + +static int _spacc_fifo_full(struct spacc_device *spacc, uint32_t prio) +{ + if (spacc->config.is_qos) + return readl(spacc->regmap + SPACC_REG_FIFO_STAT) & + SPACC_FIFO_STAT_CMDX_FULL(prio); + else + return readl(spacc->regmap + SPACC_REG_FIFO_STAT) & + SPACC_FIFO_STAT_CMD0_FULL; +} + +/* + * When proc_sz !=3D 0 it overrides the ddt_len value + * defined in the context referenced by 'job_idx' + */ +int spacc_packet_enqueue_ddt_ex(struct spacc_device *spacc, int use_jb, + int job_idx, struct pdu_ddt *src_ddt, + struct pdu_ddt *dst_ddt, u32 proc_sz, + u32 aad_offset, u32 pre_aad_sz, + u32 post_aad_sz, u32 iv_offset, + u32 prio) +{ + int job_index; + int proc_len; + struct spacc_job *job; + + if (job_idx < 0 || job_idx >=3D SPACC_MAX_JOBS) + return -EINVAL; + + /* + * Handle priority jobs using cmd fifos, high priority + * defaults to cmd0 fifo, medium to cmd1 fifo and low + * to cmd2 fifo + */ + switch (prio) { + case SPACC_SW_CTRL_PRIO_MED: + if (spacc->config.cmd1_fifo_depth =3D=3D 0) + return -EINVAL; + break; + case SPACC_SW_CTRL_PRIO_LOW: + if 
(spacc->config.cmd2_fifo_depth =3D=3D 0) + return -EINVAL; + break; + } + + job =3D &spacc->job[job_idx]; + if (!job) + return -EINVAL; + + /* process any jobs in the jb */ + if (use_jb && spacc_process_jb(spacc) !=3D 0) + goto fifo_full; + + if (_spacc_fifo_full(spacc, prio)) { + if (use_jb) + goto fifo_full; + else + return -EBUSY; + } + + /* + * Compute the length we must process, in decrypt mode + * with an ICV (hash, hmac or CCM modes) + * we must subtract the icv length from the buffer size + */ + if (proc_sz =3D=3D SPACC_AUTO_SIZE) { + proc_len =3D src_ddt->len; + + if (job->op =3D=3D OP_DECRYPT && + (job->hash_mode > 0 || + job->enc_mode =3D=3D CRYPTO_MODE_AES_CCM || + job->enc_mode =3D=3D CRYPTO_MODE_AES_GCM) && + !(job->ctrl & SPACC_CTRL_MASK(SPACC_CTRL_ICV_ENC))) + proc_len =3D src_ddt->len - job->icv_len; + } else { + proc_len =3D proc_sz; + } + + if (pre_aad_sz & SPACC_AADCOPY_FLAG) { + job->ctrl |=3D SPACC_CTRL_MASK(SPACC_CTRL_AAD_COPY); + pre_aad_sz &=3D ~(SPACC_AADCOPY_FLAG); + } else { + job->ctrl &=3D ~SPACC_CTRL_MASK(SPACC_CTRL_AAD_COPY); + } + + job->pre_aad_sz =3D pre_aad_sz; + job->post_aad_sz =3D post_aad_sz; + + if (spacc->config.dma_type =3D=3D SPACC_DMA_DDT) { + pdu_io_cached_write(spacc->dptr, spacc->regmap + + SPACC_REG_SRC_PTR, (uint32_t)src_ddt->phys, + &spacc->cache.src_ptr); + pdu_io_cached_write(spacc->dptr, spacc->regmap + + SPACC_REG_DST_PTR, (uint32_t)dst_ddt->phys, + &spacc->cache.dst_ptr); + } else if (spacc->config.dma_type =3D=3D SPACC_DMA_LINEAR) { + pdu_io_cached_write(spacc->dptr, spacc->regmap + + SPACC_REG_SRC_PTR, + (uint32_t)src_ddt->virt[0], + &spacc->cache.src_ptr); + pdu_io_cached_write(spacc->dptr, spacc->regmap + + SPACC_REG_DST_PTR, + (uint32_t)dst_ddt->virt[0], + &spacc->cache.dst_ptr); + } else { + return -EIO; + } + + pdu_io_cached_write(spacc->dptr, spacc->regmap + SPACC_REG_PROC_LEN, + proc_len - job->post_aad_sz, + &spacc->cache.proc_len); + pdu_io_cached_write(spacc->dptr, spacc->regmap + 
SPACC_REG_ICV_LEN, + job->icv_len, &spacc->cache.icv_len); + pdu_io_cached_write(spacc->dptr, spacc->regmap + SPACC_REG_ICV_OFFSET, + job->icv_offset, &spacc->cache.icv_offset); + pdu_io_cached_write(spacc->dptr, spacc->regmap + SPACC_REG_PRE_AAD_LEN, + job->pre_aad_sz, &spacc->cache.pre_aad); + pdu_io_cached_write(spacc->dptr, spacc->regmap + SPACC_REG_POST_AAD_LEN, + job->post_aad_sz, &spacc->cache.post_aad); + pdu_io_cached_write(spacc->dptr, spacc->regmap + SPACC_REG_IV_OFFSET, + iv_offset, &spacc->cache.iv_offset); + pdu_io_cached_write(spacc->dptr, spacc->regmap + SPACC_REG_OFFSET, + aad_offset, &spacc->cache.offset); + pdu_io_cached_write(spacc->dptr, spacc->regmap + SPACC_REG_AUX_INFO, + AUX_DIR(job->auxinfo_dir) | + AUX_BIT_ALIGN(job->auxinfo_bit_align) | + AUX_CBC_CS(job->auxinfo_cs_mode), + &spacc->cache.aux); + + if (job->first_use) { + writel(job->ckey_sz | SPACC_SET_KEY_CTX(job->ctx_idx), + spacc->regmap + SPACC_REG_KEY_SZ); + writel(job->hkey_sz | SPACC_SET_KEY_CTX(job->ctx_idx), + spacc->regmap + SPACC_REG_KEY_SZ); + } + + job->job_swid =3D spacc->job_next_swid; + spacc->job_lookup[job->job_swid] =3D job_idx; + spacc->job_next_swid =3D (spacc->job_next_swid + 1) % SPACC_MAX_JOBS; + + writel(SPACC_SW_CTRL_ID_SET(job->job_swid) | + SPACC_SW_CTRL_PRIO_SET(prio), + spacc->regmap + SPACC_REG_SW_CTRL); + writel(job->ctrl, spacc->regmap + SPACC_REG_CTRL); + + /* clear an expansion key after the first call */ + if (job->first_use) { + job->first_use =3D false; + job->ctrl &=3D ~SPACC_CTRL_MASK(SPACC_CTRL_KEY_EXP); + } + + return 0; + +fifo_full: + /* try to add a job to the job buffers */ + job_index =3D spacc->jb_head + 1; + if (job_index =3D=3D SPACC_MAX_JOB_BUFFERS) + job_index =3D 0; + + if (job_index =3D=3D spacc->jb_tail) + return -EBUSY; + + spacc->job_buffer[spacc->jb_head] =3D (struct spacc_job_buffer) { + .active =3D 1, + .job_idx =3D job_idx, + .src =3D src_ddt, + .dst =3D dst_ddt, + .proc_sz =3D proc_sz, + .aad_offset =3D aad_offset, + 
.pre_aad_sz =3D pre_aad_sz, + .post_aad_sz =3D post_aad_sz, + .iv_offset =3D iv_offset, + .prio =3D prio + }; + + spacc->jb_head =3D job_index; + + return 0; +} + +int spacc_packet_enqueue_ddt(struct spacc_device *spacc, int job_idx, + struct pdu_ddt *src_ddt, struct pdu_ddt *dst_ddt, + u32 proc_sz, u32 aad_offset, u32 pre_aad_sz, + u32 post_aad_sz, u32 iv_offset, u32 prio) +{ + int ret =3D 0; + unsigned long lock_flags; + + spin_lock_irqsave(&spacc->lock, lock_flags); + ret =3D spacc_packet_enqueue_ddt_ex(spacc, 1, job_idx, src_ddt, + dst_ddt, proc_sz, aad_offset, + pre_aad_sz, post_aad_sz, + iv_offset, prio); + spin_unlock_irqrestore(&spacc->lock, lock_flags); + + return ret; +} + +static int spacc_packet_dequeue(struct spacc_device *spacc, int job_idx) +{ + int ret =3D 0; + unsigned long lock_flag; + struct spacc_job *job =3D &spacc->job[job_idx]; + + spin_lock_irqsave(&spacc->lock, lock_flag); + + if (!job && !(job_idx =3D=3D SPACC_JOB_IDX_UNUSED)) { + ret =3D -EIO; + } else if (job->job_done) { + job->job_done =3D 0; + ret =3D job->job_err; + } else { + ret =3D -EINPROGRESS; + } + + spin_unlock_irqrestore(&spacc->lock, lock_flag); + + return ret; +} + +int spacc_is_mode_keysize_supported(struct spacc_device *spacc, int mode, + int keysize, int keysz_index) +{ + int x; + + if (mode < 0 || mode >=3D CRYPTO_MODE_LAST) + return SPACC_MODE_NOT_SUPPORTED; + + if (mode =3D=3D CRYPTO_MODE_NULL || + mode =3D=3D CRYPTO_MODE_AES_XTS || + mode =3D=3D CRYPTO_MODE_SM4_XTS || + mode =3D=3D CRYPTO_MODE_AES_F8 || + mode =3D=3D CRYPTO_MODE_SM4_F8 || + spacc->config.modes[mode] & 128) + return SPACC_MODE_SUPPORTED; + + /* loop through and check for valid keysizes */ + for (x =3D 0; x < 6; x++) { + if (keysizes[keysz_index][x] =3D=3D keysize) { + if (spacc->config.modes[mode] & (1 << x)) + return SPACC_MODE_SUPPORTED; + else + return SPACC_MODE_NOT_SUPPORTED; + } + } + + return SPACC_MODE_NOT_SUPPORTED; +} + +/* releases a crypto context back into appropriate module's pool */ 
+int spacc_close(struct spacc_device *dev, int handle)
+{
+	return spacc_job_release(dev, handle);
+}
+
+static void spacc_static_modes(struct spacc_device *spacc, int x, int y)
+{
+	/* disable the algos that are not supported here */
+	switch (x) {
+	case CRYPTO_MODE_AES_F8:
+	case CRYPTO_MODE_AES_CFB:
+	case CRYPTO_MODE_AES_OFB:
+	case CRYPTO_MODE_MULTI2_ECB:
+	case CRYPTO_MODE_MULTI2_CBC:
+	case CRYPTO_MODE_MULTI2_CFB:
+	case CRYPTO_MODE_MULTI2_OFB:
+	case CRYPTO_MODE_MAC_POLY1305:
+	case CRYPTO_MODE_HASH_CRC32:
+		/* disable the modes */
+		spacc->config.modes[x] &= ~(1 << y);
+		break;
+	default:
+		break; /* algos are enabled */
+	}
+}
+
+int spacc_static_config(struct spacc_device *spacc)
+{
+	int x, y;
+
+	for (x = 0; x < ARRAY_SIZE(template); x++) {
+		spacc->config.modes[x] = template[x];
+
+		for (y = 0; y < ARRAY_SIZE(keysizes[0]); y++) {
+			/* list static modes */
+			spacc_static_modes(spacc, x, y);
+		}
+	}
+
+	return 0;
+}
+
+int spacc_clone_handle(struct spacc_device *spacc, int old_handle,
+		       void *cbdata)
+{
+	int new_handle;
+
+	new_handle = spacc_job_request(spacc, spacc->job[old_handle].ctx_idx);
+	if (new_handle < 0)
+		return new_handle;
+
+	spacc->job[new_handle] = spacc->job[old_handle];
+	spacc->job[new_handle].job_used = new_handle;
+	spacc->job[new_handle].cbdata = cbdata;
+
+	return new_handle;
+}
+
+/*
+ * Allocate a job for spacc module context and initialize
+ * it with an appropriate type.
+ */
+int spacc_open(struct spacc_device *spacc, int enc, int hash, int ctxid,
+	       int secure_mode, spacc_callback cb, void *cbdata)
+{
+	size_t i;
+	int ret = 0;
+	u32 ctrl = 0;
+	int retry = 0;
+	int job_idx = 0;
+	struct spacc_job *job = NULL;
+	struct spacc_completion *ctx_wait = NULL;
+	const struct enc_config *enc_cfg = NULL;
+	const struct hash_config *hash_cfg = NULL;
+
+	mutex_lock(&spacc->spacc_ctx_mutex);
+	job_idx = spacc_job_request(spacc, ctxid);
+
+	/*
+	 * If someone is waiting for a context, get in the queue.
+	 * Each thread allocates a spacc_completion struct and adds itself to
+	 * the spacc_wait_list.
+	 * The wait_done flag ensures that the completion node isn't freed by
+	 * both the waiting thread and the callback. The callback checks for
+	 * any waiting nodes, clears wait_done, completes the wait, and removes
+	 * the node from the list. The thread checks wait_done before freeing
+	 * the node to avoid double-free scenarios.
+	 */
+
+	while (retry < SPACC_MAX_RETRIES && job_idx < 0) {
+		ctx_wait = kmalloc(sizeof(struct spacc_completion), GFP_KERNEL);
+		if (!ctx_wait) {
+			ret = -ENOMEM;
+			goto unlock_open_mutex;
+		}
+
+		init_completion(&ctx_wait->spacc_wait_complete);
+
+		mutex_lock(&spacc->spacc_waitq_mutex);
+
+		atomic_inc(&spacc->wait_counter);
+		ctx_wait->wait_done = 1;
+		INIT_LIST_HEAD(&ctx_wait->list);
+		list_add_tail(&ctx_wait->list, &spacc->spacc_wait_list);
+
+		mutex_unlock(&spacc->spacc_waitq_mutex);
+
+		ret = wait_for_completion_interruptible_timeout
+			(&ctx_wait->spacc_wait_complete,
+			 msecs_to_jiffies(5));
+
+		if (ret < 0) {
+			/* the wait was interrupted */
+			mutex_lock(&spacc->spacc_waitq_mutex);
+			atomic_dec(&spacc->wait_counter);
+
+			/* unlink only if the callback has not already done so */
+			if (ctx_wait->wait_done == 1) {
+				ctx_wait->wait_done = 0;
+				list_del(&ctx_wait->list);
+			}
+			kfree(ctx_wait);
+			ctx_wait = NULL;
+
+			mutex_unlock(&spacc->spacc_waitq_mutex);
+			goto unlock_open_mutex;
+		} else if (ret == 0) {
+			/* timed out: delete the node from the list */
+			mutex_lock(&spacc->spacc_waitq_mutex);
+			atomic_dec(&spacc->wait_counter);
+
+			if (ctx_wait->wait_done == 1) {
+				ctx_wait->wait_done = 0;
+				list_del(&ctx_wait->list);
+			}
+			kfree(ctx_wait);
+			ctx_wait = NULL;
+
+			mutex_unlock(&spacc->spacc_waitq_mutex);
+		} else {
+			/* the wait completed in time */
+			mutex_lock(&spacc->spacc_waitq_mutex);
+			kfree(ctx_wait);
+			ctx_wait = NULL;
+			mutex_unlock(&spacc->spacc_waitq_mutex);
+		}
+
+		/* try to open a SPACC context again */
+		job_idx = spacc_job_request(spacc, ctxid);
+		retry++;
+	}
+
+	if (job_idx < 0) {
+		mutex_unlock(&spacc->spacc_ctx_mutex);
+		return -EIO;
+	}
+
+	job = &spacc->job[job_idx];
+
+	if (secure_mode && job->ctx_idx > spacc->config.num_sec_ctx) {
+		dev_dbg(spacc->dptr,
+			"ERR: Job ctx ID is outside the range allowed for secure contexts\n");
+		spacc_job_release(spacc, job_idx);
+		mutex_unlock(&spacc->spacc_ctx_mutex);
+		return -EIO;
+	}
+
+	job->auxinfo_cs_mode = 0;
+	job->auxinfo_bit_align = 0;
+	job->auxinfo_dir = 0;
+	job->icv_len = 0;
+
+	/* Process encryption mode using the lookup table */
+	for (i = 0; i < ARRAY_SIZE(enc_table); ++i) {
+		if (enc == enc_table[i].mode) {
+			enc_cfg = &enc_table[i];
+			ctrl |= SPACC_CTRL_SET(SPACC_CTRL_CIPH_ALG,
+					       enc_cfg->cipher_alg);
+			ctrl |= SPACC_CTRL_SET(SPACC_CTRL_CIPH_MODE,
+					       enc_cfg->cipher_mode);
+			job->auxinfo_cs_mode = enc_cfg->auxinfo_cs_mode;
+			break;
+		}
+	}
+
+	if (enc != CRYPTO_MODE_NULL && !enc_cfg) {
+		ret = -EOPNOTSUPP;
+		spacc_job_release(spacc, job_idx);
+		goto unlock_open_mutex;
+	}
+
+	/* Process hash mode using the lookup table */
+	for (i = 0; i < ARRAY_SIZE(hash_table); ++i) {
+		if (hash == hash_table[i].mode) {
+			hash_cfg = &hash_table[i];
+			ctrl |= SPACC_CTRL_SET(SPACC_CTRL_HASH_ALG,
+					       hash_cfg->hash_alg);
+			ctrl |= SPACC_CTRL_SET(SPACC_CTRL_HASH_MODE,
+					       hash_cfg->hash_mode);
+			job->auxinfo_dir = hash_cfg->auxinfo_dir;
+			break;
+		}
+	}
+
+	if (hash != CRYPTO_MODE_NULL && !hash_cfg) {
+		ret = -EOPNOTSUPP;
+		spacc_job_release(spacc, job_idx);
+		goto unlock_open_mutex;
+	}
+
+	ctrl |= SPACC_CTRL_MASK(SPACC_CTRL_MSG_BEGIN) |
+		SPACC_CTRL_MASK(SPACC_CTRL_MSG_END);
+
+	if (ret != 0) {
+		spacc_job_release(spacc, job_idx);
+	} else {
+		ret = job_idx;
+		job->first_use = true;
+		job->enc_mode = enc;
+		job->hash_mode = hash;
+		job->ckey_sz = 0;
+		job->hkey_sz = 0;
+		job->job_done = 0;
+		job->job_swid = 0;
+		job->job_secure = !!secure_mode;
+
+		job->auxinfo_bit_align = 0;
+		job->job_err = -EINPROGRESS;
+		job->ctrl = ctrl | SPACC_CTRL_SET(SPACC_CTRL_CTX_IDX,
+						  job->ctx_idx);
+		job->cb = cb;
+		job->cbdata = cbdata;
+	}
+
+unlock_open_mutex:
+	mutex_unlock(&spacc->spacc_ctx_mutex);
+
+	return ret;
+}
+
+/* Helper function to wait for job completion and check results */
+static bool spacc_wait_for_job_completion(struct spacc_device *spacc,
+					  void *virt,
+					  unsigned char *expected_md)
+{
+	int stat;
+	unsigned long rbuf;
+
+	for (int i = 0; i < 20; i++) {
+		rbuf = readl(spacc->regmap + SPACC_REG_FIFO_STAT) &
+		       SPACC_FIFO_STAT_STAT_EMPTY;
+		if (rbuf)
+			continue;
+
+		/* Check result */
+		writel(1, spacc->regmap + SPACC_REG_STAT_POP);
+		rbuf = readl(spacc->regmap + SPACC_REG_STATUS);
+		stat = SPACC_GET_STATUS_RET_CODE(rbuf);
+
+		return (memcmp(virt, expected_md, 16) == 0) &&
+		       (stat == SPACC_OK);
+	}
+
+	return false;
+}
+
+static int spacc_xof_stringsize_autodetect(struct spacc_device *spacc)
+{
+	void *virt;
+	int ss, alg;
+	dma_addr_t dma;
+	struct pdu_ddt ddt;
+	unsigned long buflen;
+	unsigned char buf[256];
+	unsigned long spacc_ctrl[2] = {0xF400B400, 0xF400D400};
+	unsigned char test_str[6] = {0x01, 0x20, 0x54, 0x45, 0x53, 0x54};
+	unsigned char md[2][16] = {
+		{0xc3, 0x6d, 0x0a, 0x88, 0xfa, 0x37, 0x4c, 0x9b,
+		 0x44, 0x74, 0xeb, 0x00, 0x5f, 0xe8, 0xca, 0x25},
+		{0x68, 0x77, 0x04, 0x11, 0xf8, 0xe3, 0xb0, 0x1e,
+		 0x0d, 0xbf, 0x71, 0x6a, 0xe9, 0x87, 0x1a, 0x0d}};
+
+	virt = dma_alloc_coherent(get_ddt_device(), 256, &dma, GFP_KERNEL);
+	if (!virt)
+		return -EIO;
+
+	if (pdu_ddt_init(spacc->dptr, &ddt, 1)) {
+		dma_free_coherent(get_ddt_device(), 256, virt, dma);
+		return -EIO;
+	}
+
+	pdu_ddt_add(spacc->dptr, &ddt, dma, 256);
+
+	/* populate registers for jobs */
+	writel((uint32_t)ddt.phys, spacc->regmap + SPACC_REG_SRC_PTR);
+	writel((uint32_t)ddt.phys, spacc->regmap +
+	       SPACC_REG_DST_PTR);
+
+	writel(16, spacc->regmap + SPACC_REG_PROC_LEN);
+	writel(16, spacc->regmap + SPACC_REG_PRE_AAD_LEN);
+	writel(16, spacc->regmap + SPACC_REG_ICV_LEN);
+	writel(6, spacc->regmap + SPACC_REG_KEY_SZ);
+	writel(0, spacc->regmap + SPACC_REG_SW_CTRL);
+
+	/* repeat for 2 algorithms, CSHAKE128 and KMAC128 */
+	for (alg = 0; (alg < 2) && (spacc->config.string_size == 0); alg++) {
+		/* repeat for 4 string_size sizes */
+		for (ss = 0; ss < 4; ss++) {
+			buflen = (32UL << ss);
+			if (buflen > spacc->config.hash_page_size)
+				break;
+
+			/* clear I/O memory */
+			memset(virt, 0, 256);
+
+			/* clear buf and then insert test string */
+			memset(buf, 0, sizeof(buf));
+			memcpy(buf, test_str, sizeof(test_str));
+			memcpy(buf + (buflen >> 1), test_str,
+			       sizeof(test_str));
+
+			/* write key context */
+			pdu_to_dev_s(spacc->regmap + SPACC_CTX_HASH_KEY, buf,
+				     spacc->config.hash_page_size >> 2,
+				     spacc->config.big_endian);
+
+			/* write ctrl register */
+			writel(spacc_ctrl[alg],
+			       spacc->regmap + SPACC_REG_CTRL);
+
+			/* wait for job to complete */
+			if (spacc_wait_for_job_completion(spacc, virt,
+							  md[alg]))
+				spacc->config.string_size = (16 << ss);
+		}
+	}
+
+	/* reset registers */
+	writel(0, spacc->regmap + SPACC_REG_IRQ_CTRL);
+	writel(0, spacc->regmap + SPACC_REG_IRQ_EN);
+	writel(0xFFFFFFFF, spacc->regmap + SPACC_REG_IRQ_STAT);
+
+	writel(0, spacc->regmap + SPACC_REG_SRC_PTR);
+	writel(0, spacc->regmap + SPACC_REG_DST_PTR);
+	writel(0, spacc->regmap + SPACC_REG_PROC_LEN);
+	writel(0, spacc->regmap + SPACC_REG_ICV_LEN);
+	writel(0, spacc->regmap + SPACC_REG_PRE_AAD_LEN);
+
+	pdu_ddt_free(&ddt);
+	dma_free_coherent(get_ddt_device(), 256, virt, dma);
+
+	return 0;
+}
+
+/* free up the memory */
+void spacc_fini(struct spacc_device *spacc)
+{
+	vfree(spacc->ctx);
+	vfree(spacc->job);
+}
+
+int spacc_init(void __iomem *baseaddr, struct spacc_device *spacc,
+	       struct pdu_info *info)
+{
+#ifdef CONFIG_CRYPTO_DEV_SPACC_CONFIG_DEBUG
+	unsigned long id;
+	char version_string[4][16] = { "SPACC", "SPACC-PDU", "Unknown",
+				       "Unknown" };
+	char idx_string[2][16] = { "(Normal Port)", "(Secure Port)" };
+	char dma_type_string[4][16] = { "Unknown", "Scattergather", "Linear",
+					"Unknown" };
+#endif
+
+	memset(spacc, 0, sizeof(*spacc));
+
+	/* Initialize wait_counter with zero */
+	atomic_set(&spacc->wait_counter, 0);
+
+	mutex_init(&spacc->spacc_ctx_mutex);
+	mutex_init(&spacc->spacc_waitq_mutex);
+	INIT_LIST_HEAD(&spacc->spacc_wait_list);
+	spin_lock_init(&spacc->lock);
+
+	/* assign the baseaddr */
+	spacc->regmap = baseaddr;
+
+	/* version info */
+	spacc->config.version = info->spacc_version.version;
+	spacc->config.pdu_version = (info->pdu_config.major << 4) |
+				    info->pdu_config.minor;
+	spacc->config.project = info->spacc_version.project;
+	spacc->config.is_pdu = info->spacc_version.is_pdu;
+	spacc->config.is_qos = info->spacc_version.qos;
+
+	/* misc */
+	spacc->config.is_partial = info->spacc_version.partial;
+	spacc->config.num_ctx = info->spacc_config.num_ctx;
+	spacc->config.ciph_page_size = 1U <<
+				       info->spacc_config.ciph_ctx_page_size;
+
+	spacc->config.hash_page_size = 1U <<
+				       info->spacc_config.hash_ctx_page_size;
+
+	spacc->config.dma_type = info->spacc_config.dma_type;
+	spacc->config.idx = info->spacc_version.vspacc_id;
+	spacc->config.cmd0_fifo_depth = info->spacc_config.cmd0_fifo_depth;
+	spacc->config.cmd1_fifo_depth = info->spacc_config.cmd1_fifo_depth;
+	spacc->config.cmd2_fifo_depth = info->spacc_config.cmd2_fifo_depth;
+	spacc->config.stat_fifo_depth = info->spacc_config.stat_fifo_depth;
+	spacc->config.fifo_cnt = 1;
+	spacc->config.is_ivimport = info->spacc_version.ivimport;
+	spacc->wd_cnt_limit = false;
+
+	/* ctrl register map */
+	if (spacc->config.version <= 0x4E)
+		spacc->config.ctrl_map = spacc_ctrl_map[SPACC_CTRL_VER_0];
+	else if (spacc->config.version <= 0x60)
+		spacc->config.ctrl_map = spacc_ctrl_map[SPACC_CTRL_VER_1];
+	else
+		spacc->config.ctrl_map =
+			spacc_ctrl_map[SPACC_CTRL_VER_2];
+
+	spacc->job_next_swid = 0;
+	spacc->wdcnt = 0;
+	spacc->config.wd_timer = SPACC_WD_TIMER_INIT;
+
+	/*
+	 * Version 4.10 uses IRQ,
+	 * above uses WD and we don't support below 4.00
+	 */
+	if (spacc->config.version < 0x40) {
+		dev_dbg(spacc->dptr, "ERR: Unsupported SPAcc version\n");
+		return -EIO;
+	} else if (spacc->config.version < 0x4B) {
+		spacc->op_mode = SPACC_OP_MODE_IRQ;
+	} else {
+		spacc->op_mode = SPACC_OP_MODE_WD;
+	}
+
+	/*
+	 * Set threshold and enable irq
+	 * on 4.11 and newer cores we can derive this
+	 * from the HW reported depths.
+	 */
+	if (spacc->config.stat_fifo_depth == 1)
+		spacc->config.ideal_stat_level = 1;
+	else if (spacc->config.stat_fifo_depth <= 4)
+		spacc->config.ideal_stat_level =
+			spacc->config.stat_fifo_depth - 1;
+	else if (spacc->config.stat_fifo_depth <= 8)
+		spacc->config.ideal_stat_level =
+			spacc->config.stat_fifo_depth - 2;
+	else
+		spacc->config.ideal_stat_level =
+			spacc->config.stat_fifo_depth - 4;
+
+	/* determine max proclen value */
+	writel(0xFFFFFFFF, spacc->regmap + SPACC_REG_PROC_LEN);
+	spacc->config.max_msg_size = readl(spacc->regmap + SPACC_REG_PROC_LEN);
+
+#ifdef CONFIG_CRYPTO_DEV_SPACC_CONFIG_DEBUG
+
+	/* read config info */
+	if (spacc->config.is_pdu) {
+		dev_dbg(spacc->dptr, "PDU:\n");
+		dev_dbg(spacc->dptr,
+			"   MAJOR      : %u\n", info->pdu_config.major);
+		dev_dbg(spacc->dptr,
+			"   MINOR      : %u\n", info->pdu_config.minor);
+	}
+
+	id = readl(spacc->regmap + SPACC_REG_ID);
+	dev_dbg(spacc->dptr, "SPACC ID: (%08lx)\n", (unsigned long)id);
+	dev_dbg(spacc->dptr, "   MAJOR      : %x\n", info->spacc_version.major);
+	dev_dbg(spacc->dptr, "   MINOR      : %x\n", info->spacc_version.minor);
+	dev_dbg(spacc->dptr, "   QOS        : %x\n", info->spacc_version.qos);
+	dev_dbg(spacc->dptr, "   IVIMPORT   : %x\n", spacc->config.is_ivimport);
+
+	if (spacc->config.version >= 0x48)
+		dev_dbg(spacc->dptr,
+			"   TYPE       : %lx (%s)\n", SPACC_ID_TYPE(id),
+			version_string[SPACC_ID_TYPE(id) & 3]);
+
+	dev_dbg(spacc->dptr, "   AUX        : %x\n", info->spacc_version.qos);
+	dev_dbg(spacc->dptr, "   IDX        : %lx %s\n", SPACC_ID_VIDX(id),
+		spacc->config.is_secure ?
+		(idx_string[spacc->config.is_secure_port & 1]) : "");
+	dev_dbg(spacc->dptr,
+		"   PARTIAL    : %x\n", info->spacc_version.partial);
+	dev_dbg(spacc->dptr,
+		"   PROJECT    : %x\n", info->spacc_version.project);
+
+	if (spacc->config.version >= 0x48)
+		id = readl(spacc->regmap + SPACC_REG_CONFIG);
+	else
+		id = 0xFFFFFFFF;
+
+	dev_dbg(spacc->dptr, "SPACC CFG: (%08lx)\n", id);
+	dev_dbg(spacc->dptr, "   CTX CNT    : %u\n", info->spacc_config.num_ctx);
+	dev_dbg(spacc->dptr,
+		"   VSPACC CNT : %u\n", info->spacc_config.num_vspacc);
+	dev_dbg(spacc->dptr, "   CIPH SZ    : %-3lu bytes\n", 1UL <<
+		info->spacc_config.ciph_ctx_page_size);
+	dev_dbg(spacc->dptr, "   HASH SZ    : %-3lu bytes\n", 1UL <<
+		info->spacc_config.hash_ctx_page_size);
+	dev_dbg(spacc->dptr,
+		"   DMA TYPE   : %u (%s)\n", info->spacc_config.dma_type,
+		dma_type_string[info->spacc_config.dma_type & 3]);
+	dev_dbg(spacc->dptr, "   MAX PROCLEN: %lu bytes\n", (unsigned long)
+		spacc->config.max_msg_size);
+	dev_dbg(spacc->dptr, "   FIFO CONFIG :\n");
+	dev_dbg(spacc->dptr, "CMD0 DEPTH: %d\n", spacc->config.cmd0_fifo_depth);
+
+	if (spacc->config.is_qos) {
+		dev_dbg(spacc->dptr, "   CMD1 DEPTH: %d\n",
+			spacc->config.cmd1_fifo_depth);
+		dev_dbg(spacc->dptr, "   CMD2 DEPTH: %d\n",
+			spacc->config.cmd2_fifo_depth);
+	}
+	dev_dbg(spacc->dptr, "STAT DEPTH: %d\n", spacc->config.stat_fifo_depth);
+
+	if (spacc->config.dma_type == SPACC_DMA_DDT) {
+		writel(0x1234567F, baseaddr + SPACC_REG_DST_PTR);
+		writel(0xDEADBEEF, baseaddr + SPACC_REG_SRC_PTR);
+
+		if (((readl(baseaddr + SPACC_REG_DST_PTR)) !=
+		     (0x1234567F & SPACC_DST_PTR_PTR)) ||
+		    ((readl(baseaddr + SPACC_REG_SRC_PTR)) !=
+		     (0xDEADBEEF & SPACC_SRC_PTR_PTR))) {
+			dev_dbg(spacc->dptr, "ERR: Failed to set pointers\n");
+			goto ERR;
+		}
+	}
+#endif
+
+	/*
+	 * Zero the IRQ CTRL/EN register
+	 * (to make sure we're in a sane
+	 * state)
+	 */
+	writel(0, spacc->regmap + SPACC_REG_IRQ_CTRL);
+	writel(0, spacc->regmap + SPACC_REG_IRQ_EN);
+	writel(0xFFFFFFFF, spacc->regmap + SPACC_REG_IRQ_STAT);
+
+	/* init cache */
+	memset(&spacc->cache, 0, sizeof(spacc->cache));
+	writel(0, spacc->regmap + SPACC_REG_SRC_PTR);
+	writel(0, spacc->regmap + SPACC_REG_DST_PTR);
+	writel(0, spacc->regmap + SPACC_REG_PROC_LEN);
+	writel(0, spacc->regmap + SPACC_REG_ICV_LEN);
+	writel(0, spacc->regmap + SPACC_REG_ICV_OFFSET);
+	writel(0, spacc->regmap + SPACC_REG_PRE_AAD_LEN);
+	writel(0, spacc->regmap + SPACC_REG_POST_AAD_LEN);
+	writel(0, spacc->regmap + SPACC_REG_IV_OFFSET);
+	writel(0, spacc->regmap + SPACC_REG_OFFSET);
+	writel(0, spacc->regmap + SPACC_REG_AUX_INFO);
+
+	spacc->ctx = vmalloc(sizeof(struct spacc_ctx) * spacc->config.num_ctx);
+	if (!spacc->ctx)
+		goto ERR;
+
+	spacc->job = vmalloc(sizeof(struct spacc_job) * SPACC_MAX_JOBS);
+	if (!spacc->job)
+		goto ERR;
+
+	/* initialize job_idx and lookup table */
+	spacc_job_init_all(spacc);
+
+	/* initialize contexts */
+	spacc_ctx_init_all(spacc);
+
+	/* autodetect and set string size setting */
+	if (spacc->config.version == 0x61 || spacc->config.version >= 0x65)
+		spacc_xof_stringsize_autodetect(spacc);
+
+	return 0;
+ERR:
+	spacc_fini(spacc);
+	dev_dbg(spacc->dptr, "ERR: Crypto Failed\n");
+
+	return -EIO;
+}
+
+/* workqueue callback that pops completed jobs off the status FIFO */
+void spacc_pop_jobs(struct work_struct *data)
+{
+	int num = 0;
+	struct spacc_priv *priv = container_of(data, struct spacc_priv,
+					       pop_jobs);
+	struct spacc_device *spacc = &priv->spacc;
+
+	/*
+	 * Decrement the WD CNT here since
+	 * now we're actually going to respond
+	 * to the IRQ completely
+	 */
+	if (spacc->wdcnt)
+		--(spacc->wdcnt);
+
+	spacc_pop_packets(spacc, &num);
+}
+
+int spacc_remove(struct platform_device *pdev)
+{
+	struct spacc_device *spacc;
+	struct spacc_priv *priv = platform_get_drvdata(pdev);
+
+	/* free test vector memory */
+	spacc =
+		&priv->spacc;
+	spacc_fini(spacc);
+
+	/* devm functions do proper cleanup */
+	pdu_mem_deinit(&pdev->dev);
+
+	return 0;
+}
+
+int spacc_set_key_exp(struct spacc_device *spacc, int job_idx)
+{
+	struct spacc_ctx *ctx = NULL;
+	struct spacc_job *job = NULL;
+
+	if (job_idx < 0 || job_idx >= SPACC_MAX_JOBS) {
+		dev_dbg(spacc->dptr,
+			"ERR: Invalid Job id specified (out of range)\n");
+		return -EINVAL;
+	}
+
+	job = &spacc->job[job_idx];
+	ctx = spacc_context_lookup_by_job(spacc, job_idx);
+
+	if (!ctx) {
+		dev_dbg(spacc->dptr, "ERR: Failed to find ctx id\n");
+		return -EIO;
+	}
+
+	job->ctrl |= SPACC_CTRL_MASK(SPACC_CTRL_KEY_EXP);
+
+	return 0;
+}
+
+int spacc_compute_xcbc_key(struct spacc_device *spacc, int mode_id,
+			   int job_idx, const unsigned char *key,
+			   int keylen, unsigned char *xcbc_out)
+{
+	int i;
+	int usecbc;
+	int handle;
+	int err = 0;
+	int ctx_idx;
+	unsigned char *buf;
+	dma_addr_t bufphys;
+	struct pdu_ddt ddt;
+	unsigned char iv[16];
+
+	if (job_idx >= 0 && job_idx < SPACC_MAX_JOBS)
+		ctx_idx = spacc->job[job_idx].ctx_idx;
+	else
+		ctx_idx = -1;
+
+	if (mode_id == CRYPTO_MODE_MAC_XCBC) {
+		/* figure out if we can schedule the key */
+		if (spacc_is_mode_keysize_supported(spacc, CRYPTO_MODE_AES_ECB,
+						    16, 0))
+			usecbc = 0;
+		else if (spacc_is_mode_keysize_supported(spacc,
+							 CRYPTO_MODE_AES_CBC,
+							 16, 0))
+			usecbc = 1;
+		else
+			return -EINVAL;
+	} else if (mode_id == CRYPTO_MODE_MAC_SM4_XCBC) {
+		/* figure out if we can schedule the key */
+		if (spacc_is_mode_keysize_supported(spacc, CRYPTO_MODE_SM4_ECB,
+						    16, 0))
+			usecbc = 0;
+		else if (spacc_is_mode_keysize_supported(spacc,
+							 CRYPTO_MODE_SM4_CBC,
+							 16, 0))
+			usecbc = 1;
+		else
+			return -EINVAL;
+	} else {
+		return -EINVAL;
+	}
+
+	memset(iv, 0, sizeof(iv));
+	memset(&ddt, 0, sizeof(ddt));
+
+	buf = dma_alloc_coherent(get_ddt_device(), 64, &bufphys, GFP_KERNEL);
+	if (!buf)
+		return -ENOMEM;
+
+	handle = -1;
+
+	/* set to 1111...., 2222...., 333...
+	 */
+	for (i = 0; i < 48; i++)
+		buf[i] = (i >> 4) + 1;
+
+	/* build DDT */
+	err = pdu_ddt_init(spacc->dptr, &ddt, 1);
+	if (err)
+		goto xcbc_err;
+
+	pdu_ddt_add(spacc->dptr, &ddt, bufphys, 48);
+
+	/* open a handle in either CBC or ECB mode */
+	if (mode_id == CRYPTO_MODE_MAC_XCBC) {
+		handle = spacc_open(spacc, (usecbc ?
+				    CRYPTO_MODE_AES_CBC : CRYPTO_MODE_AES_ECB),
+				    CRYPTO_MODE_NULL, ctx_idx, 0, NULL, NULL);
+		if (handle < 0) {
+			err = handle;
+			goto xcbc_err;
+		}
+	} else if (mode_id == CRYPTO_MODE_MAC_SM4_XCBC) {
+		handle = spacc_open(spacc, (usecbc ?
+				    CRYPTO_MODE_SM4_CBC : CRYPTO_MODE_SM4_ECB),
+				    CRYPTO_MODE_NULL, ctx_idx, 0, NULL, NULL);
+		if (handle < 0) {
+			err = handle;
+			goto xcbc_err;
+		}
+	}
+
+	spacc_set_operation(spacc, handle, OP_ENCRYPT, 0, 0, 0, 0, 0);
+
+	if (usecbc) {
+		/*
+		 * We can do the ECB work in CBC using three
+		 * jobs with the IV reset to zero each time
+		 */
+		for (i = 0; i < 3; i++) {
+			spacc_write_context(spacc, handle,
+					    SPACC_CRYPTO_OPERATION, key,
+					    keylen, iv, 16);
+			err = spacc_packet_enqueue_ddt(spacc, handle, &ddt,
+						       &ddt, 16, (i * 16) |
+						       ((i * 16) << 16), 0, 0,
+						       0, 0);
+			if (err != 0)
+				goto xcbc_err;
+
+			do {
+				err = spacc_packet_dequeue(spacc, handle);
+			} while (err == -EINPROGRESS);
+
+			if (err != 0)
+				goto xcbc_err;
+		}
+	} else {
+		/*
+		 * Do the 48 bytes as a single SPAcc job; this is the ideal
+		 * case, but only possible if ECB was enabled in the core
+		 */
+		spacc_write_context(spacc, handle, SPACC_CRYPTO_OPERATION,
+				    key, keylen, iv, 16);
+		err = spacc_packet_enqueue_ddt(spacc, handle, &ddt, &ddt, 48,
+					       0, 0, 0, 0, 0);
+		if (err != 0)
+			goto xcbc_err;
+
+		do {
+			err = spacc_packet_dequeue(spacc, handle);
+		} while (err == -EINPROGRESS);
+
+		if (err != 0)
+			goto xcbc_err;
+	}
+
+	/* now we can copy the key */
+	memcpy(xcbc_out, buf, 48);
+	memset(buf, 0, 64);
+
+xcbc_err:
+	dma_free_coherent(get_ddt_device(), 64, buf, bufphys);
+	pdu_ddt_free(&ddt);
+
+	if (handle
+	    >= 0)
+		spacc_close(spacc, handle);
+
+	if (err)
+		return -EINVAL;
+
+	return 0;
+}
+
+void spacc_set_priority(struct spacc_device *spacc, int priority)
+{
+	u32 vspacc_prio_reg;
+
+	if (!spacc || !spacc->regmap)
+		return;
+
+	if (priority < 0 || priority > 15) {
+		dev_warn(spacc->dptr,
+			 "Invalid VSPAcc priority %d (valid: 0-15)\n",
+			 priority);
+		return;
+	}
+
+	/* Build new register value with mode = 0 (WEIGHTED), weight = priority */
+	vspacc_prio_reg = VPRIO_SET(0, priority);
+
+	/* Write to the SPAcc virtual priority register */
+	writel(vspacc_prio_reg, spacc->regmap + SPACC_REG_VIRTUAL_PRIO);
+
+	dev_dbg(spacc->dptr, "Set VSPAcc priority: %d (reg = 0x%08x)\n",
+		priority, vspacc_prio_reg);
+}
diff --git a/drivers/crypto/dwc-spacc/spacc_core.h b/drivers/crypto/dwc-spacc/spacc_core.h
new file mode 100644
index 000000000000..9f5d50d3ad8e
--- /dev/null
+++ b/drivers/crypto/dwc-spacc/spacc_core.h
@@ -0,0 +1,829 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef SPACC_CORE_H_
+#define SPACC_CORE_H_
+
+#include "spacc_hal.h"
+
+enum dma_type {
+	SPACC_DMA_UNDEF = 0,
+	SPACC_DMA_DDT = 1,
+	SPACC_DMA_LINEAR = 2
+};
+
+enum op_mode {
+	SPACC_OP_MODE_IRQ = 0,
+	SPACC_OP_MODE_WD = 1	/* watchdog */
+};
+
+#define OP_ENCRYPT		0
+#define OP_DECRYPT		1
+
+#define SPACC_CRYPTO_OPERATION	1
+#define SPACC_HASH_OPERATION	2
+
+#define SPACC_AADCOPY_FLAG	0x80000000
+
+#define SPACC_AUTO_SIZE		(-1)
+
+#define SPACC_WD_LIMIT		0x80
+#define SPACC_WD_TIMER_INIT	0x40000
+
+#define SPACC_MODE_SUPPORTED	 1
+#define SPACC_MODE_NOT_SUPPORTED 0
+
+#define SPACC_MAX_RETRIES	70
+
+/********* Register Offsets **********/
+#define SPACC_REG_IRQ_EN	0x00000L
+#define SPACC_REG_IRQ_STAT	0x00004L
+#define SPACC_REG_IRQ_CTRL	0x00008L
+#define SPACC_REG_FIFO_STAT	0x0000CL
+#define SPACC_REG_SDMA_BRST_SZ	0x00010L
+
+#define SPACC_REG_SRC_PTR	0x00020L
+#define SPACC_REG_DST_PTR	0x00024L
+#define SPACC_REG_OFFSET	0x00028L
+#define SPACC_REG_PRE_AAD_LEN	0x0002CL
+#define SPACC_REG_POST_AAD_LEN	0x00030L
+
+#define SPACC_REG_PROC_LEN	0x00034L
+#define SPACC_REG_ICV_LEN	0x00038L
+#define SPACC_REG_ICV_OFFSET	0x0003CL
+#define SPACC_REG_IV_OFFSET	0x00040L
+
+#define SPACC_REG_SW_CTRL	0x00044L
+#define SPACC_REG_AUX_INFO	0x00048L
+#define SPACC_REG_CTRL		0x0004CL
+
+#define SPACC_REG_STAT_POP	0x00050L
+#define SPACC_REG_STATUS	0x00054L
+
+#define SPACC_REG_STAT_WD_CTRL	0x00080L
+
+#define SPACC_REG_KEY_SZ	0x00100L
+
+#define SPACC_REG_VIRTUAL_RQST	0x00140L
+#define SPACC_REG_VIRTUAL_ALLOC	0x00144L
+#define SPACC_REG_VIRTUAL_PRIO	0x00148L
+
+#define SPACC_REG_ID		0x00180L
+#define SPACC_REG_CONFIG	0x00184L
+#define SPACC_REG_CONFIG2	0x00190L
+
+#define SPACC_REG_SECURE_CTRL	0x001C0L
+#define SPACC_REG_SECURE_RELEASE 0x001C4
+
+#define SPACC_REG_SK_LOAD	0x00200L
+#define SPACC_REG_SK_STAT	0x00204L
+#define SPACC_REG_SK_KEY	0x00240L
+
+#define SPACC_REG_VERSION_EXT_3	0x00194L
+
+/* out 8MB from base of SPACC */
+#define SPACC_REG_SKP		0x800000UL
+
+/********** Context Offsets **********/
+#define SPACC_CTX_CIPH_KEY	0x04000L
+#define SPACC_CTX_HASH_KEY	0x08000L
+
+/******** Sub-Context Offsets ********/
+#define SPACC_CTX_AES_KEY	0x00
+#define SPACC_CTX_AES_IV	0x20
+
+#define SPACC_CTX_DES_KEY	0x08
+#define SPACC_CTX_DES_IV	0x00
+
+/* use these to loop over CMDX macros */
+#define SPACC_CMDX_MAX		1
+#define SPACC_CMDX_MAX_QOS	3
+
+/********** IRQ_EN Bit Masks **********/
+
+#define _SPACC_IRQ_CMD0		0
+#define _SPACC_IRQ_STAT		4
+#define _SPACC_IRQ_STAT_WD	12
+#define _SPACC_IRQ_GLBL		31
+
+#define SPACC_IRQ_EN_CMD(x)	(1UL << _SPACC_IRQ_CMD0 << (x))
+#define SPACC_IRQ_EN_STAT	BIT(_SPACC_IRQ_STAT)
+#define SPACC_IRQ_EN_STAT_WD	BIT(_SPACC_IRQ_STAT_WD)
+#define SPACC_IRQ_EN_GLBL	BIT(_SPACC_IRQ_GLBL)
+
+/********* IRQ_STAT Bitmasks *********/
+
+#define SPACC_IRQ_STAT_CMDX(x)	(1UL << _SPACC_IRQ_CMD0 << (x))
+#define SPACC_IRQ_STAT_STAT	BIT(_SPACC_IRQ_STAT)
+#define SPACC_IRQ_STAT_STAT_WD	BIT(_SPACC_IRQ_STAT_WD)
+
+#define SPACC_IRQ_STAT_CLEAR_STAT(spacc) writel(SPACC_IRQ_STAT_STAT, \
+		(spacc)->regmap + SPACC_REG_IRQ_STAT)
+
+#define SPACC_IRQ_STAT_CLEAR_STAT_WD(spacc) writel(SPACC_IRQ_STAT_STAT_WD, \
+		(spacc)->regmap + SPACC_REG_IRQ_STAT)
+
+#define SPACC_IRQ_STAT_CLEAR_CMDX(spacc, x) writel(SPACC_IRQ_STAT_CMDX(x), \
+		(spacc)->regmap + SPACC_REG_IRQ_STAT)
+
+/********* IRQ_CTRL Bitmasks *********/
+/* CMD0 = 0; for QOS, CMD1 = 8, CMD2 = 16 */
+#define _SPACC_IRQ_CTRL_CMDX_CNT(x)	(8 * (x))
+#define SPACC_IRQ_CTRL_CMDX_CNT_SET(x, n) \
+	(((n) & 0xFF) << _SPACC_IRQ_CTRL_CMDX_CNT(x))
+#define SPACC_IRQ_CTRL_CMDX_CNT_MASK(x) \
+	(0xFF << _SPACC_IRQ_CTRL_CMDX_CNT(x))
+
+/* STAT_CNT is at 16 and for QOS at 24 */
+#define _SPACC_IRQ_CTRL_STAT_CNT	16
+#define SPACC_IRQ_CTRL_STAT_CNT_SET(n)	((n) << _SPACC_IRQ_CTRL_STAT_CNT)
+#define SPACC_IRQ_CTRL_STAT_CNT_MASK	(0x1FF << _SPACC_IRQ_CTRL_STAT_CNT)
+
+#define _SPACC_IRQ_CTRL_STAT_CNT_QOS	24
+#define SPACC_IRQ_CTRL_STAT_CNT_SET_QOS(n) \
+	((n) << _SPACC_IRQ_CTRL_STAT_CNT_QOS)
+#define SPACC_IRQ_CTRL_STAT_CNT_MASK_QOS \
+	(0x7F << _SPACC_IRQ_CTRL_STAT_CNT_QOS)
+
+/******** FIFO_STAT Bitmasks *********/
+
+/* SPACC with QOS */
+#define SPACC_FIFO_STAT_CMDX_CNT_MASK(x) \
+	(0x7F << ((x) * 8))
+#define SPACC_FIFO_STAT_CMDX_CNT_GET(x, y) \
+	(((y) & SPACC_FIFO_STAT_CMDX_CNT_MASK(x)) >> ((x) * 8))
+#define SPACC_FIFO_STAT_CMDX_FULL(x)	(1UL << (7 + (x) * 8))
+
+#define _SPACC_FIFO_STAT_STAT_CNT_QOS	24
+#define SPACC_FIFO_STAT_STAT_CNT_MASK_QOS \
+	(0x7F << _SPACC_FIFO_STAT_STAT_CNT_QOS)
+#define SPACC_FIFO_STAT_STAT_CNT_GET_QOS(y) \
+	(((y) & \
+	  SPACC_FIFO_STAT_STAT_CNT_MASK_QOS) >> _SPACC_FIFO_STAT_STAT_CNT_QOS)
+
+/* SPACC without QOS */
+#define SPACC_FIFO_STAT_CMD0_CNT_MASK	(0x1FF)
+#define SPACC_FIFO_STAT_CMD0_CNT_GET(y)	((y) & SPACC_FIFO_STAT_CMD0_CNT_MASK)
+#define _SPACC_FIFO_STAT_CMD0_FULL	15
+#define SPACC_FIFO_STAT_CMD0_FULL	BIT(_SPACC_FIFO_STAT_CMD0_FULL)
+
+#define _SPACC_FIFO_STAT_STAT_CNT	16
+#define SPACC_FIFO_STAT_STAT_CNT_MASK	(0x1FF << _SPACC_FIFO_STAT_STAT_CNT)
+#define SPACC_FIFO_STAT_STAT_CNT_GET(y) \
+	(((y) & SPACC_FIFO_STAT_STAT_CNT_MASK) >> _SPACC_FIFO_STAT_STAT_CNT)
+
+/* both */
+#define _SPACC_FIFO_STAT_STAT_EMPTY	31
+#define SPACC_FIFO_STAT_STAT_EMPTY	BIT(_SPACC_FIFO_STAT_STAT_EMPTY)
+
+/********* SRC/DST_PTR Bitmasks **********/
+
+#define SPACC_SRC_PTR_PTR	0xFFFFFFF8
+#define SPACC_DST_PTR_PTR	0xFFFFFFF8
+
+/********** OFFSET Bitmasks **********/
+
+#define SPACC_OFFSET_SRC_O	0
+#define SPACC_OFFSET_SRC_W	16
+#define SPACC_OFFSET_DST_O	16
+#define SPACC_OFFSET_DST_W	16
+
+#define SPACC_MIN_CHUNK_SIZE	1024
+#define SPACC_MAX_CHUNK_SIZE	16384
+
+/********* PKT_LEN Bitmasks **********/
+
+#ifndef _SPACC_PKT_LEN_PROC_LEN
+#define _SPACC_PKT_LEN_PROC_LEN	0
+#endif
+#ifndef _SPACC_PKT_LEN_AAD_LEN
+#define _SPACC_PKT_LEN_AAD_LEN	16
+#endif
+
+/********* SW_CTRL Bitmasks ***********/
+
+#define _SPACC_SW_CTRL_ID_0	0
+#define SPACC_SW_CTRL_ID_W	8
+#define SPACC_SW_CTRL_ID_MASK	(0xFF << _SPACC_SW_CTRL_ID_0)
+#define SPACC_SW_CTRL_ID_GET(y) \
+	(((y) & SPACC_SW_CTRL_ID_MASK) >> _SPACC_SW_CTRL_ID_0)
+#define SPACC_SW_CTRL_ID_SET(id) \
+	(((id) & SPACC_SW_CTRL_ID_MASK) >> _SPACC_SW_CTRL_ID_0)
+
+#define _SPACC_SW_CTRL_PRIO	30
+#define SPACC_SW_CTRL_PRIO_MASK	0x3
+#define SPACC_SW_CTRL_PRIO_SET(prio) \
+	(((prio) & SPACC_SW_CTRL_PRIO_MASK) << _SPACC_SW_CTRL_PRIO)
+
+/* Priorities */
+#define SPACC_SW_CTRL_PRIO_HI	0
+#define SPACC_SW_CTRL_PRIO_MED	1
+#define SPACC_SW_CTRL_PRIO_LOW	2
+
+/*********** SECURE_CTRL bitmasks *********/
+#define _SPACC_SECURE_CTRL_MS_SRC	0
+#define _SPACC_SECURE_CTRL_MS_DST	1
+#define _SPACC_SECURE_CTRL_MS_DDT	2
+#define _SPACC_SECURE_CTRL_LOCK		31
+
+#define SPACC_SECURE_CTRL_MS_SRC	BIT(_SPACC_SECURE_CTRL_MS_SRC)
+#define SPACC_SECURE_CTRL_MS_DST	BIT(_SPACC_SECURE_CTRL_MS_DST)
+#define SPACC_SECURE_CTRL_MS_DDT	BIT(_SPACC_SECURE_CTRL_MS_DDT)
+#define SPACC_SECURE_CTRL_LOCK		BIT(_SPACC_SECURE_CTRL_LOCK)
+
+/********* SKP bits **************/
+#define _SPACC_SK_LOAD_CTX_IDX	0
+#define _SPACC_SK_LOAD_ALG	8
+#define _SPACC_SK_LOAD_MODE	12
+#define _SPACC_SK_LOAD_SIZE	16
+#define _SPACC_SK_LOAD_ENC_EN	30
+#define _SPACC_SK_LOAD_DEC_EN	31
+#define _SPACC_SK_STAT_BUSY	0
+
+#define SPACC_SK_LOAD_ENC_EN	BIT(_SPACC_SK_LOAD_ENC_EN)
+#define SPACC_SK_LOAD_DEC_EN	BIT(_SPACC_SK_LOAD_DEC_EN)
+#define SPACC_SK_STAT_BUSY	BIT(_SPACC_SK_STAT_BUSY)
+
+/*********** CTRL Bitmasks ***********/
+/*
+ * These CTRL field locations vary with SPACC version
+ * and if they are used, they should be set accordingly
+ */
+#define _SPACC_CTRL_CIPH_ALG	0
+#define _SPACC_CTRL_HASH_ALG	4
+#define _SPACC_CTRL_CIPH_MODE	8
+#define _SPACC_CTRL_HASH_MODE	12
+#define _SPACC_CTRL_MSG_BEGIN	14
+#define _SPACC_CTRL_MSG_END	15
+#define _SPACC_CTRL_CTX_IDX	16
+#define _SPACC_CTRL_ENCRYPT	24
+#define _SPACC_CTRL_AAD_COPY	25
+#define _SPACC_CTRL_ICV_PT	26
+#define _SPACC_CTRL_ICV_ENC	27
+#define _SPACC_CTRL_ICV_APPEND	28
+#define _SPACC_CTRL_KEY_EXP	29
+#define _SPACC_CTRL_SEC_KEY	31
+
+/* CTRL bitmasks for 4.15+ cores */
+#define _SPACC_CTRL_CIPH_ALG_415	0
+#define _SPACC_CTRL_HASH_ALG_415	3
+#define _SPACC_CTRL_CIPH_MODE_415	8
+#define _SPACC_CTRL_HASH_MODE_415	12
+
+/********* Virtual Spacc Priority Bitmasks **********/
+#define _SPACC_VPRIO_MODE	0
+#define _SPACC_VPRIO_WEIGHT	8
+
+/********* AUX INFO Bitmasks *********/
+#define _SPACC_AUX_INFO_DIR		0
+#define _SPACC_AUX_INFO_BIT_ALIGN	1
+#define _SPACC_AUX_INFO_CBC_CS		16
+
+/********* STAT_POP Bitmasks *********/
+#define _SPACC_STAT_POP_POP	0
+#define SPACC_STAT_POP_POP	BIT(_SPACC_STAT_POP_POP)
+
+/********** STATUS Bitmasks **********/
+#define _SPACC_STATUS_SW_ID	0
+#define _SPACC_STATUS_RET_CODE	24
+#define _SPACC_STATUS_SEC_CMD	31
+#define SPACC_GET_STATUS_RET_CODE(s) \
+	(((s) >> _SPACC_STATUS_RET_CODE) & 0x7)
+
+#define SPACC_STATUS_SW_ID_MASK	(0xFF << _SPACC_STATUS_SW_ID)
+#define SPACC_STATUS_SW_ID_GET(y) \
+	(((y) & SPACC_STATUS_SW_ID_MASK) >> _SPACC_STATUS_SW_ID)
+
+/********** KEY_SZ Bitmasks **********/
+#define _SPACC_KEY_SZ_SIZE	0
+#define _SPACC_KEY_SZ_CTX_IDX	8
+#define _SPACC_KEY_SZ_CIPHER	31
+
+#define SPACC_KEY_SZ_CIPHER	BIT(_SPACC_KEY_SZ_CIPHER)
+
+#define SPACC_SET_CIPHER_KEY_SZ(z) \
+	(((z) << _SPACC_KEY_SZ_SIZE) | (1UL << _SPACC_KEY_SZ_CIPHER))
+#define SPACC_SET_HASH_KEY_SZ(z)	((z) << _SPACC_KEY_SZ_SIZE)
+#define SPACC_SET_KEY_CTX(ctx)		((ctx) << _SPACC_KEY_SZ_CTX_IDX)
+
+/*****************************************************************************/
+
+#define AUX_DIR(a)		((a) << _SPACC_AUX_INFO_DIR)
+#define AUX_BIT_ALIGN(a)	((a) << _SPACC_AUX_INFO_BIT_ALIGN)
+#define AUX_CBC_CS(a)		((a) << _SPACC_AUX_INFO_CBC_CS)
+
+#define VPRIO_SET(mode, weight) \
+	(((mode) << _SPACC_VPRIO_MODE) | ((weight) << _SPACC_VPRIO_WEIGHT))
+
+#ifndef MAX_DDT_ENTRIES
+/* add one for null at end of list */
+#define MAX_DDT_ENTRIES \
+	((SPACC_MAX_MSG_MALLOC_SIZE / SPACC_MAX_PARTICLE_SIZE) + 1)
+#endif
+
+#define DDT_ENTRY_SIZE (sizeof(ddt_entry) * MAX_DDT_ENTRIES)
+
+#ifndef SPACC_MAX_JOBS
+#define SPACC_MAX_JOBS	BIT(SPACC_SW_CTRL_ID_W)
+#endif
+
+#if SPACC_MAX_JOBS > 256
+# error SPACC_MAX_JOBS cannot exceed 256.
+#endif
+
+#ifndef SPACC_MAX_JOB_BUFFERS
+#define SPACC_MAX_JOB_BUFFERS 192
+#endif
+
+/* max DDT particle size */
+#ifndef SPACC_MAX_PARTICLE_SIZE
+#define SPACC_MAX_PARTICLE_SIZE 4096
+#endif
+
+/*
+ * Max message size from HW configuration,
+ * usually defined in the ICD as (2^16) - 1
+ */
+#ifndef _SPACC_MAX_MSG_MALLOC_SIZE
+#define _SPACC_MAX_MSG_MALLOC_SIZE 16
+#endif
+#define SPACC_MAX_MSG_MALLOC_SIZE BIT(_SPACC_MAX_MSG_MALLOC_SIZE)
+
+#ifndef SPACC_MAX_MSG_SIZE
+#define SPACC_MAX_MSG_SIZE (SPACC_MAX_MSG_MALLOC_SIZE - 1)
+#endif
+
+#define SPACC_LOOP_WAIT 1000000
+#define SPACC_CTR_IV_MAX8 ((u32)0xFF)
+#define SPACC_CTR_IV_MAX16 ((u32)0xFFFF)
+#define SPACC_CTR_IV_MAX32 ((u32)0xFFFFFFFF)
+#define SPACC_CTR_IV_MAX64 ((u64)0xFFFFFFFFFFFFFFFF)
+
+/* cipher algos */
+enum ecipher {
+	C_NULL = 0,
+	C_DES = 1,
+	C_AES = 2,
+	C_RC4 = 3,
+	C_MULTI2 = 4,
+	C_KASUMI = 5,
+	C_SNOW3G_UEA2 = 6,
+	C_ZUC_UEA3 = 7,
+	C_CHACHA20 = 8,
+	C_SM4 = 9,
+	C_MAX = 10
+};
+
+/* ctrl reg cipher modes */
+enum eciphermode {
+	CM_ECB = 0,
+	CM_CBC = 1,
+	CM_CTR = 2,
+	CM_CCM = 3,
+	CM_GCM = 5,
+	CM_OFB = 7,
+	CM_CFB = 8,
+	CM_F8 = 9,
+	CM_XTS = 10,
+	CM_MAX = 11
+};
+
+enum echachaciphermode {
+	CM_CHACHA_STREAM = 2,
+	CM_CHACHA_AEAD = 5
+};
+
+enum ehash {
+	H_NULL = 0,
+	H_MD5 = 1,
+	H_SHA1 = 2,
+	H_SHA224 = 3,
+	H_SHA256 = 4,
+	H_SHA384 = 5,
+	H_SHA512 = 6,
+	H_XCBC = 7,
+	H_CMAC = 8,
+	H_KF9 = 9,
+	H_SNOW3G_UIA2 = 10,
+	H_CRC32_I3E802_3 = 11,
+	H_ZUC_UIA3 = 12,
+	H_SHA512_224 = 13,
+	H_SHA512_256 = 14,
+	H_MICHAEL = 15,
+	H_SHA3_224 = 16,
+	H_SHA3_256 = 17,
+	H_SHA3_384 = 18,
+	H_SHA3_512 = 19,
+	H_SHAKE128 = 20,
+	H_SHAKE256 = 21,
+	H_POLY1305 = 22,
+	H_SM3 = 23,
+	H_SM4_XCBC_MAC = 24,
+	H_SM4_CMAC = 25,
+	H_MAX = 26
+};
+
+enum ehashmode {
+	HM_RAW = 0,
+	HM_SSLMAC = 1,
+	HM_HMAC = 2,
+	HM_MAX = 3
+};
+
+enum eshakehashmode {
+	HM_SHAKE_SHAKE = 0,
+	HM_SHAKE_CSHAKE = 1,
+	HM_SHAKE_KMAC = 2
+};
+
+enum spacc_ret_code {
+	SPACC_OK = 0,
+	SPACC_ICVFAIL = 1,
+	SPACC_MEMERR = 2,
+	SPACC_BLOCKERR = 3,
+	SPACC_SECERR = 4
+};
+
+enum eicvpos {
+	IP_ICV_OFFSET = 0,
+	IP_ICV_APPEND = 1,
+	IP_ICV_IGNORE = 2,
+	IP_MAX = 3
+};
+
+enum hash_icv {
+	/* HASH of plaintext */
+	ICV_HASH = 0,
+	/* HASH the plaintext and encrypt the plaintext and ICV */
+	ICV_HASH_ENCRYPT = 1,
+	/* HASH the ciphertext */
+	ICV_ENCRYPT_HASH = 2,
+	ICV_IGNORE = 3,
+	IM_MAX = 4
+};
+
+enum crypto_modes {
+	CRYPTO_MODE_NULL,
+	CRYPTO_MODE_AES_ECB,
+	CRYPTO_MODE_AES_CBC,
+	CRYPTO_MODE_AES_CTR,
+	CRYPTO_MODE_AES_CCM,
+	CRYPTO_MODE_AES_GCM,
+	CRYPTO_MODE_AES_F8,
+	CRYPTO_MODE_AES_XTS,
+	CRYPTO_MODE_AES_CFB,
+	CRYPTO_MODE_AES_OFB,
+	CRYPTO_MODE_AES_CS1,
+	CRYPTO_MODE_AES_CS2,
+	CRYPTO_MODE_AES_CS3,
+	CRYPTO_MODE_MULTI2_ECB,
+	CRYPTO_MODE_MULTI2_CBC,
+	CRYPTO_MODE_MULTI2_OFB,
+	CRYPTO_MODE_MULTI2_CFB,
+	CRYPTO_MODE_3DES_CBC,
+	CRYPTO_MODE_3DES_ECB,
+	CRYPTO_MODE_DES_CBC,
+	CRYPTO_MODE_DES_ECB,
+	CRYPTO_MODE_KASUMI_ECB,
+	CRYPTO_MODE_KASUMI_F8,
+	CRYPTO_MODE_SNOW3G_UEA2,
+	CRYPTO_MODE_ZUC_UEA3,
+	CRYPTO_MODE_CHACHA20_STREAM,
+	CRYPTO_MODE_CHACHA20_POLY1305,
+	CRYPTO_MODE_SM4_ECB,
+	CRYPTO_MODE_SM4_CBC,
+	CRYPTO_MODE_SM4_CFB,
+	CRYPTO_MODE_SM4_OFB,
+	CRYPTO_MODE_SM4_CTR,
+	CRYPTO_MODE_SM4_CCM,
+	CRYPTO_MODE_SM4_GCM,
+	CRYPTO_MODE_SM4_F8,
+	CRYPTO_MODE_SM4_XTS,
+	CRYPTO_MODE_SM4_CS1,
+	CRYPTO_MODE_SM4_CS2,
+	CRYPTO_MODE_SM4_CS3,
+
+	CRYPTO_MODE_HASH_MD5,
+	CRYPTO_MODE_HMAC_MD5,
+	CRYPTO_MODE_HASH_SHA1,
+	CRYPTO_MODE_HMAC_SHA1,
+	CRYPTO_MODE_HASH_SHA224,
+	CRYPTO_MODE_HMAC_SHA224,
+	CRYPTO_MODE_HASH_SHA256,
+	CRYPTO_MODE_HMAC_SHA256,
+	CRYPTO_MODE_HASH_SHA384,
+	CRYPTO_MODE_HMAC_SHA384,
+	CRYPTO_MODE_HASH_SHA512,
+	CRYPTO_MODE_HMAC_SHA512,
+	CRYPTO_MODE_HASH_SHA512_224,
+	CRYPTO_MODE_HMAC_SHA512_224,
+	CRYPTO_MODE_HASH_SHA512_256,
+	CRYPTO_MODE_HMAC_SHA512_256,
+
+	CRYPTO_MODE_MAC_XCBC,
+	CRYPTO_MODE_MAC_CMAC,
+	CRYPTO_MODE_MAC_KASUMI_F9,
+	CRYPTO_MODE_MAC_SNOW3G_UIA2,
+	CRYPTO_MODE_MAC_ZUC_UIA3,
+	CRYPTO_MODE_MAC_POLY1305,
+
+	CRYPTO_MODE_SSLMAC_MD5,
+	CRYPTO_MODE_SSLMAC_SHA1,
+	CRYPTO_MODE_HASH_CRC32,
+	CRYPTO_MODE_MAC_MICHAEL,
+
+	CRYPTO_MODE_HASH_SHA3_224,
+	CRYPTO_MODE_HASH_SHA3_256,
+	CRYPTO_MODE_HASH_SHA3_384,
+	CRYPTO_MODE_HASH_SHA3_512,
+
+	CRYPTO_MODE_HASH_SHAKE128,
+	CRYPTO_MODE_HASH_SHAKE256,
+	CRYPTO_MODE_HASH_CSHAKE128,
+	CRYPTO_MODE_HASH_CSHAKE256,
+	CRYPTO_MODE_MAC_KMAC128,
+	CRYPTO_MODE_MAC_KMAC256,
+	CRYPTO_MODE_MAC_KMACXOF128,
+	CRYPTO_MODE_MAC_KMACXOF256,
+
+	CRYPTO_MODE_HASH_SM3,
+	CRYPTO_MODE_HMAC_SM3,
+	CRYPTO_MODE_MAC_SM4_XCBC,
+	CRYPTO_MODE_MAC_SM4_CMAC,
+
+	CRYPTO_MODE_LAST
+};
+
+/* job descriptor */
+typedef void (*spacc_callback)(void *spacc_dev, void *data);
+
+struct spacc_job {
+	unsigned long
+		enc_mode,   /* Encryption algorithm mode */
+		hash_mode,  /* Hash algorithm mode */
+		icv_len,
+		icv_offset,
+		op,   /* operation */
+		ctrl, /* CTRL shadow register */
+
+		/*
+		 * Context just initialized or taken,
+		 * and this is the first use.
+		 */
+		pre_aad_sz, post_aad_sz, /* size of AAD for the latest packet */
+		hkey_sz,
+		ckey_sz;
+	bool first_use;
+
+	/* direction and bit alignment parameters for the AUX_INFO reg */
+	unsigned int auxinfo_dir, auxinfo_bit_align;
+	unsigned int auxinfo_cs_mode; /* AUX info setting for CBC-CS */
+
+	u32 ctx_idx;
+	unsigned int job_used, job_swid, job_done, job_err, job_secure;
+	spacc_callback cb;
+	void *cbdata;
+};
+
+#define SPACC_CTX_IDX_UNUSED 0xFFFFFFFF
+#define SPACC_JOB_IDX_UNUSED 0xFFFFFFFF
+
+struct spacc_ctx {
+	/* memory context to store cipher keys */
+	void __iomem *ciph_key;
+	/* memory context to store hash keys */
+	void __iomem *hash_key;
+	/* reference count of jobs using this context */
+	int ref_cnt;
+	/* number of contexts following related to this one */
+	int ncontig;
+};
+
+#define SPACC_CTRL_MASK(field) \
+	(1UL << spacc->config.ctrl_map[(field)])
+#define SPACC_CTRL_SET(field, value) \
+	((value) << spacc->config.ctrl_map[(field)])
+
+enum ctrl_map {
+	SPACC_CTRL_VER_0,
+	SPACC_CTRL_VER_1,
+	SPACC_CTRL_VER_2,
+	SPACC_CTRL_VER_SIZE
+};
+
+enum ctrl_type {
+	SPACC_CTRL_CIPH_ALG,
+	SPACC_CTRL_CIPH_MODE,
+	SPACC_CTRL_HASH_ALG,
+	SPACC_CTRL_HASH_MODE,
+	SPACC_CTRL_ENCRYPT,
+	SPACC_CTRL_CTX_IDX,
+	SPACC_CTRL_SEC_KEY,
+	SPACC_CTRL_AAD_COPY,
+	SPACC_CTRL_ICV_PT,
+	SPACC_CTRL_ICV_ENC,
+	SPACC_CTRL_ICV_APPEND,
+	SPACC_CTRL_KEY_EXP,
+	SPACC_CTRL_MSG_BEGIN,
+	SPACC_CTRL_MSG_END,
+	SPACC_CTRL_MAPSIZE
+};
+
+struct spacc_device {
+	void __iomem *regmap;
+	bool wd_cnt_limit;
+	atomic_t wait_counter;
+	struct mutex spacc_ctx_mutex;   /* protect access to spacc_open */
+	struct mutex spacc_waitq_mutex; /* Synchronizes wait queue access */
+	struct list_head spacc_wait_list;
+	/* hardware configuration */
+	struct {
+		unsigned int version,
+			pdu_version,
+			project;
+		u32 max_msg_size; /* max PROCLEN value */
+
+		unsigned char modes[CRYPTO_MODE_LAST];
+
+		int num_ctx,         /* no. of contexts */
+			num_sec_ctx,       /* no. of SKP contexts */
+			sec_ctx_page_size, /* page size of SKP context in bytes */
+			ciph_page_size,    /* cipher context page size in bytes */
+			hash_page_size,    /* hash context page size in bytes */
+			string_size,
+			is_qos,            /* QOS spacc? */
+			is_pdu,            /* PDU spacc? */
+			is_secure,
+			is_secure_port,    /* are we on the secure port? */
+			is_partial,        /* Is partial processing enabled? */
+			is_ivimport,       /* is ivimport enabled? */
+			dma_type,          /* DMA type: linear or scattergather */
+			idx,               /* which virtual spacc IDX is this? */
+			priority,          /* weighted priority of virtual spacc */
+			cmd0_fifo_depth,   /* CMD FIFO depths */
+			cmd1_fifo_depth,
+			cmd2_fifo_depth,
+			stat_fifo_depth,   /* depth of STATUS FIFO */
+			fifo_cnt,
+			ideal_stat_level,
+			big_endian,
+			little_endian;
+
+		u32 wd_timer;
+		u64 oldtimer, timer;
+
+		const u8 *ctrl_map; /* map of ctrl register field offsets */
+	} config;
+
+	struct spacc_job_buffer {
+		int active;
+		int job_idx;
+		struct pdu_ddt *src, *dst;
+		u32 proc_sz, aad_offset, pre_aad_sz,
+		    post_aad_sz, iv_offset, prio;
+	} job_buffer[SPACC_MAX_JOB_BUFFERS];
+
+	int jb_head, jb_tail;
+
+	int op_mode, /* operating mode and watchdog functionality */
+	    wdcnt;   /* number of pending WD IRQs */
+
+	/* SW_ID value which will be used for next job */
+	unsigned int job_next_swid;
+
+	struct spacc_ctx *ctx; /* this size changes per configured device */
+	struct spacc_job *job; /* allocate memory for [SPACC_MAX_JOBS]; */
+	int job_lookup[SPACC_MAX_JOBS]; /* correlate SW_ID back to job index */
+
+	spinlock_t lock; /* lock for register access */
+
+	/* callback functions for IRQ processing */
+	void (*irq_cb_cmdx)(struct spacc_device *spacc, int x);
+	void (*irq_cb_stat)(struct spacc_device *spacc);
+	void (*irq_cb_stat_wd)(struct spacc_device *spacc);
+
+	/*
+	 * This is called after jobs have been popped off the STATUS FIFO;
+	 * useful so you can be told when there might be space available
+	 * in the CMD FIFO
+	 */
+	void (*spacc_notify_jobs)(struct spacc_device *spacc);
+
+	/* cache */
+	struct {
+		u32 src_ptr,
+		    dst_ptr,
+		    proc_len,
+		    icv_len,
+		    icv_offset,
+		    pre_aad,
+		    post_aad,
+		    iv_offset,
+		    offset,
+		    aux;
+	} cache;
+
+	struct device *dptr;
+};
+
+struct spacc_priv {
+	struct spacc_device spacc;
+	struct workqueue_struct *spacc_wq; /* dedicated workQ */
+	struct work_struct pop_jobs;
+	unsigned long max_msg_len;
+};
+
+/* Structure for encryption mode configuration */
+struct enc_config {
+	int mode;
+	u32 cipher_alg;
+	u32 cipher_mode;
+	int auxinfo_cs_mode;
+};
+
+/* Structure for hash mode configuration */
+struct hash_config {
+	int mode;
+	u32 hash_alg;
+	u32 hash_mode;
+	int auxinfo_dir;
+};
+
+int spacc_open(struct spacc_device *spacc, int enc, int hash, int ctx,
+	       int secure_mode, spacc_callback cb, void *cbdata);
+int spacc_clone_handle(struct spacc_device *spacc, int old_handle,
+		       void *cbdata);
+int spacc_close(struct spacc_device *spacc, int job_idx);
+int spacc_set_operation(struct spacc_device *spacc, int job_idx, int op,
+			u32 prot, u32 icvcmd, u32 icvoff,
+			u32 icvsz, u32 sec_key);
+int spacc_set_key_exp(struct spacc_device *spacc, int job_idx);
+
+int spacc_packet_enqueue_ddt_ex(struct spacc_device *spacc, int use_jb,
+				int job_idx, struct pdu_ddt *src_ddt,
+				struct pdu_ddt *dst_ddt, u32 proc_sz,
+				u32 aad_offset, u32 pre_aad_sz, u32 post_aad_sz,
+				u32 iv_offset, u32 prio);
+int spacc_packet_enqueue_ddt(struct spacc_device *spacc, int job_idx,
+			     struct pdu_ddt *src_ddt, struct pdu_ddt *dst_ddt,
+			     u32 proc_sz, u32 aad_offset, u32 pre_aad_sz,
+			     u32 post_aad_sz, u32 iv_offset, u32 prio);
+
+/* IRQ handling functions */
+void spacc_irq_cmdx_enable(struct spacc_device *spacc, int cmdx, int cmdx_cnt);
+void spacc_irq_cmdx_disable(struct spacc_device *spacc, int cmdx);
+void spacc_irq_stat_enable(struct spacc_device *spacc, int stat_cnt);
+void spacc_irq_stat_disable(struct spacc_device *spacc);
+void spacc_irq_stat_wd_enable(struct spacc_device *spacc);
+void spacc_irq_stat_wd_disable(struct spacc_device *spacc);
+void spacc_irq_glbl_enable(struct spacc_device *spacc);
+void spacc_irq_glbl_disable(struct spacc_device *spacc);
+uint32_t spacc_process_irq(struct spacc_device *spacc);
+void spacc_set_wd_count(struct spacc_device *spacc, uint32_t val);
+irqreturn_t spacc_irq_handler(int irq, void *dev);
+int spacc_sgs_to_ddt(struct device *dev,
+		     struct scatterlist *sg1, int len1, int *ents1,
+		     struct scatterlist *sg2, int len2, int *ents2,
+		     struct scatterlist *sg3, int len3, int *ents3,
+		     struct pdu_ddt *ddt, int dma_direction);
+int spacc_sg_to_ddt(struct device *dev, struct scatterlist *sg,
+		    int nbytes, struct pdu_ddt *ddt, int dma_direction);
+
+/* context Manager */
+void spacc_ctx_init_all(struct spacc_device *spacc);
+
+/* SPAcc specific manipulation of context memory */
+int spacc_write_context(struct spacc_device *spacc, int job_idx, int op,
+			const unsigned char *key, int ksz,
+			const unsigned char *iv, int ivsz);
+
+int spacc_read_context(struct spacc_device *spacc, int job_idx, int op,
+		       unsigned char *key, int ksz, unsigned char *iv,
+		       int ivsz);
+
+/* job Manager */
+void spacc_job_init_all(struct spacc_device *spacc);
+int spacc_job_request(struct spacc_device *dev, int job_idx);
+int spacc_job_release(struct spacc_device *dev, int job_idx);
+
+/* helper functions */
+struct spacc_ctx *spacc_context_lookup_by_job(struct spacc_device *spacc,
+					      int job_idx);
+int spacc_is_mode_keysize_supported(struct spacc_device *spacc, int mode,
+				    int keysize, int keysz_index);
+int spacc_compute_xcbc_key(struct spacc_device *spacc, int mode_id,
+			   int job_idx, const unsigned char *key,
+			   int keylen, unsigned char *xcbc_out);
+
+int spacc_process_jb(struct spacc_device *spacc);
+int spacc_remove(struct platform_device *pdev);
+int spacc_static_config(struct spacc_device *spacc);
+int spacc_autodetect(struct spacc_device *spacc);
+void spacc_pop_jobs(struct work_struct *work);
+void spacc_fini(struct spacc_device *spacc);
+int spacc_init(void __iomem *baseaddr, struct spacc_device *spacc,
+	       struct pdu_info *info);
+int spacc_pop_packets(struct spacc_device *spacc, int *num_popped);
+void spacc_set_priority(struct spacc_device *spacc, int priority);
+
+#endif
diff --git a/drivers/crypto/dwc-spacc/spacc_device.c b/drivers/crypto/dwc-spacc/spacc_device.c
new file mode 100644
index 000000000000..effdd4e60e30
--- /dev/null
+++ b/drivers/crypto/dwc-spacc/spacc_device.c
@@ -0,0 +1,309 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include "spacc_device.h"
+
+static void spacc_cmd_process(struct spacc_device *spacc, int x)
+{
+	struct spacc_priv *priv = container_of(spacc, struct spacc_priv, spacc);
+
+	if (!work_pending(&priv->pop_jobs))
+		queue_work(priv->spacc_wq, &priv->pop_jobs);
+}
+
+static void spacc_stat_process(struct spacc_device *spacc)
+{
+	struct spacc_priv *priv = container_of(spacc, struct spacc_priv, spacc);
+
+	if (!work_pending(&priv->pop_jobs))
+		queue_work(priv->spacc_wq, &priv->pop_jobs);
+}
+
+static int spacc_init_device(struct platform_device *pdev)
+{
+	int vspacc_id = -1;
+	u64 timer = 100000;
+	void __iomem *baseaddr;
+	struct pdu_info info;
+	struct spacc_priv *priv;
+	int err = 0;
+	int oldmode;
+	int irq_num;
+	const u64 oldtimer = 100000;
+
+	/* initialize DDT DMA pools based on this device's resources */
+	if (pdu_mem_init(&pdev->dev)) {
+		dev_err(&pdev->dev, "Could not initialize DMA pools\n");
+		return -ENOMEM;
+	}
+
+	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
+	if (!priv) {
+		err = -ENOMEM;
+		goto free_ddt_mem_pool;
+	}
+
+	/* default to little-endian */
+	priv->spacc.config.big_endian = false;
+	priv->spacc.config.little_endian = true;
+
+	if (of_property_read_u32(pdev->dev.of_node, "snps,vspacc-id",
+				 &vspacc_id)) {
+		dev_err(&pdev->dev, "No virtual spacc id specified\n");
+		err = -EINVAL;
+		goto free_ddt_mem_pool;
+	}
+
+	priv->spacc.config.idx = vspacc_id;
+	priv->spacc.config.oldtimer = oldtimer;
+
+	if (of_property_read_u64(pdev->dev.of_node, "spacc-internal-counter",
+				 &timer)) {
+		dev_dbg(&pdev->dev, "No spacc-internal-counter specified\n");
+		dev_dbg(&pdev->dev, "Default internal-counter: (100000)\n");
+		timer = 100000;
+	}
+	priv->spacc.config.timer = timer;
+
+	baseaddr = devm_platform_ioremap_resource(pdev, 0);
+	if (IS_ERR(baseaddr)) {
+		dev_err(&pdev->dev, "Unable to map iomem\n");
+		err = PTR_ERR(baseaddr);
+		goto free_ddt_mem_pool;
+	}
+
+	pdu_get_version(baseaddr, &info);
+
+	dev_dbg(&pdev->dev, "EPN %04X : virt [%d]\n",
+		info.spacc_version.project,
+		info.spacc_version.vspacc_id);
+
+	/*
+	 * Validate virtual spacc index with the vspacc count read from
+	 * VERSION_EXT.VSPACC_CNT. Thus vspacc count=3 gives valid
+	 * indices 0, 1, 2.
+	 */
+	if (vspacc_id != info.spacc_version.vspacc_id) {
+		dev_err(&pdev->dev, "DTS vspacc_id mismatch read value\n");
+		err = -EINVAL;
+		goto free_ddt_mem_pool;
+	}
+
+	if (vspacc_id < 0 || vspacc_id > (info.spacc_config.num_vspacc - 1)) {
+		dev_err(&pdev->dev, "Invalid vspacc index specified\n");
+		err = -EINVAL;
+		goto free_ddt_mem_pool;
+	}
+
+	err = spacc_init(baseaddr, &priv->spacc, &info);
+	if (err != 0) {
+		dev_err(&pdev->dev, "Failed to initialize SPAcc device\n");
+		err = -ENXIO;
+		goto free_ddt_mem_pool;
+	}
+
+	/* Set the priority from kernel config */
+	priv->spacc.config.priority = CONFIG_CRYPTO_DEV_SPACC_PRIORITY;
+	dev_dbg(&pdev->dev, "VSPACC priority set from config: %u\n",
+		priv->spacc.config.priority);
+
+	/* Set the priority for this virtual SPAcc instance */
+	spacc_set_priority(&priv->spacc, priv->spacc.config.priority);
+
+	priv->spacc_wq = alloc_workqueue("spacc_workqueue", WQ_UNBOUND, 0);
+	if (!priv->spacc_wq) {
+		dev_err(&pdev->dev, "Failed to allocate workqueue\n");
+		err = -ENOMEM;
+		goto free_spacc_ctx;
+	}
+
+	spacc_irq_glbl_disable(&priv->spacc);
+	INIT_WORK(&priv->pop_jobs, spacc_pop_jobs);
+
+	priv->spacc.dptr = &pdev->dev;
+	platform_set_drvdata(pdev, priv);
+
+	irq_num = platform_get_irq(pdev, 0);
+	if (irq_num < 0) {
+		dev_err(&pdev->dev, "No irq resource for spacc\n");
+		err = -ENXIO;
+		goto free_spacc_workq;
+	}
+
+	/* determine configured maximum message length */
+	priv->max_msg_len = priv->spacc.config.max_msg_size;
+
+	if (devm_request_irq(&pdev->dev, irq_num, spacc_irq_handler,
+			     IRQF_SHARED, dev_name(&pdev->dev),
+			     &pdev->dev)) {
+		dev_err(&pdev->dev, "Failed to request IRQ\n");
+		err = -EBUSY;
+		goto free_spacc_workq;
+	}
+
+	priv->spacc.irq_cb_stat = spacc_stat_process;
+	priv->spacc.irq_cb_cmdx = spacc_cmd_process;
+	oldmode = priv->spacc.op_mode;
+	priv->spacc.op_mode = SPACC_OP_MODE_IRQ;
+
+	/* Enable STAT and CMD interrupts */
+	spacc_irq_stat_enable(&priv->spacc, 1);
+	spacc_irq_cmdx_enable(&priv->spacc, 0, 1);
+	spacc_irq_stat_wd_disable(&priv->spacc);
+	spacc_irq_glbl_enable(&priv->spacc);
+
+#if IS_ENABLED(CONFIG_CRYPTO_DEV_SPACC_AUTODETECT)
+	err = spacc_autodetect(&priv->spacc);
+	if (err < 0) {
+		spacc_irq_glbl_disable(&priv->spacc);
+		goto free_spacc_workq;
+	}
+#else
+	err = spacc_static_config(&priv->spacc);
+	if (err < 0) {
+		spacc_irq_glbl_disable(&priv->spacc);
+		goto free_spacc_workq;
+	}
+#endif
+
+	priv->spacc.op_mode = oldmode;
+	if (priv->spacc.op_mode == SPACC_OP_MODE_IRQ) {
+		priv->spacc.irq_cb_stat = spacc_stat_process;
+		priv->spacc.irq_cb_cmdx = spacc_cmd_process;
+
+		/* Enable STAT and CMD interrupts */
+		spacc_irq_stat_enable(&priv->spacc, 1);
+		spacc_irq_cmdx_enable(&priv->spacc, 0, 1);
+		spacc_irq_glbl_enable(&priv->spacc);
+	} else {
+		priv->spacc.irq_cb_stat = spacc_stat_process;
+		priv->spacc.irq_cb_stat_wd = spacc_stat_process;
+
+		spacc_irq_stat_enable(&priv->spacc,
+				      priv->spacc.config.ideal_stat_level);
+
+		/* Enable STAT and WD interrupts */
+		spacc_irq_cmdx_disable(&priv->spacc, 0);
+		spacc_irq_stat_wd_enable(&priv->spacc);
+		spacc_irq_glbl_enable(&priv->spacc);
+
+		/* enable the wd by setting the wd_timer = 100000 */
+		spacc_set_wd_count(&priv->spacc,
+				   priv->spacc.config.wd_timer =
+				   priv->spacc.config.timer);
+	}
+
+	/* unlock normal */
+	if (priv->spacc.config.is_secure_port) {
+		u32 t;
+
+		t = readl(baseaddr + SPACC_REG_SECURE_CTRL);
+		t &= ~(1UL << 31);
+		writel(t, baseaddr + SPACC_REG_SECURE_CTRL);
+	}
+
+	/* unlock device by default */
+	writel(0, baseaddr + SPACC_REG_SECURE_CTRL);
+
+	return err;
+
+free_spacc_workq:
+	flush_workqueue(priv->spacc_wq);
+	destroy_workqueue(priv->spacc_wq);
+
+free_spacc_ctx:
+	spacc_fini(&priv->spacc);
+
+free_ddt_mem_pool:
+	pdu_mem_deinit(&pdev->dev);
+
+	return err;
+}
+
+static void spacc_unregister_algs(void)
+{
+#if IS_ENABLED(CONFIG_CRYPTO_DEV_SPACC_HASH)
+	spacc_unregister_hash_algs();
+#endif
+#if IS_ENABLED(CONFIG_CRYPTO_DEV_SPACC_AEAD)
+	spacc_unregister_aead_algs();
+#endif
+#if IS_ENABLED(CONFIG_CRYPTO_DEV_SPACC_CIPHER)
+	spacc_unregister_cipher_algs();
+#endif
+}
+
+static int spacc_crypto_probe(struct platform_device *pdev)
+{
+	int rc = 0;
+
+	rc = spacc_init_device(pdev);
+	if (rc < 0)
+		goto err;
+
+#if IS_ENABLED(CONFIG_CRYPTO_DEV_SPACC_HASH)
+	rc = spacc_probe_hashes(pdev);
+	if (rc < 0)
+		goto err;
+#endif
+
+#if IS_ENABLED(CONFIG_CRYPTO_DEV_SPACC_CIPHER)
+	rc = spacc_probe_ciphers(pdev);
+	if (rc < 0)
+		goto err;
+#endif
+
+#if IS_ENABLED(CONFIG_CRYPTO_DEV_SPACC_AEAD)
+	rc = spacc_probe_aeads(pdev);
+	if (rc < 0)
+		goto err;
+#endif
+
+	return 0;
+err:
+	spacc_unregister_algs();
+
+	return rc;
+}
+
+static void spacc_crypto_remove(struct platform_device *pdev)
+{
+	struct spacc_priv *priv = platform_get_drvdata(pdev);
+
+	if (priv->spacc_wq) {
+		flush_workqueue(priv->spacc_wq);
+		destroy_workqueue(priv->spacc_wq);
+	}
+
+	spacc_unregister_algs();
+	spacc_remove(pdev);
+}
+
+static const struct of_device_id snps_spacc_id[] = {
+	{ .compatible = "snps,nsimosci-hs-spacc" },
+	{ /* sentinel */ }
+};
+
+MODULE_DEVICE_TABLE(of, snps_spacc_id);
+
+static struct platform_driver spacc_driver = {
+	.probe = spacc_crypto_probe,
+	.remove = spacc_crypto_remove,
+	.driver = {
+		.name = "spacc",
+		.of_match_table = snps_spacc_id,
+		.owner = THIS_MODULE,
+	},
+};
+
+module_platform_driver(spacc_driver);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Synopsys, Inc.");
+MODULE_DESCRIPTION("SPAcc Crypto Accelerator Driver");
diff --git a/drivers/crypto/dwc-spacc/spacc_device.h b/drivers/crypto/dwc-spacc/spacc_device.h
new file mode 100644
index 000000000000..8738b9a9df0b
--- /dev/null
+++ b/drivers/crypto/dwc-spacc/spacc_device.h
@@ -0,0 +1,231 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef SPACC_DEVICE_H_
+#define SPACC_DEVICE_H_
+
+#include
+#include
+#include
+#include
+#include
+#include "spacc_core.h"
+
+#define MODE_TAB_AEAD(_name, _ciph, _hash, _hashlen, _ivlen, _blocklen) \
+	.name = _name, .aead = { .ciph = _ciph, .hash = _hash }, \
+	.hashlen = _hashlen, .ivlen = _ivlen, .blocklen = _blocklen
+
+/* helper macros for initializing the hash/cipher tables */
+#define MODE_TAB_COMMON(_name, _id_name, _blocklen) \
+	.name = _name, .id = CRYPTO_MODE_##_id_name, .blocklen = _blocklen
+
+#define MODE_TAB_HASH(_name, _id_name, _hashlen, _blocklen) \
+	MODE_TAB_COMMON(_name, _id_name, _blocklen), \
+	.hashlen = _hashlen, .testlen = _hashlen
+
+#define MODE_TAB_CIPH(_name, _id_name, _ivlen, _blocklen) \
+	MODE_TAB_COMMON(_name, _id_name, _blocklen), \
+	.ivlen = _ivlen
+
+#define MODE_TAB_HASH_XCBC 0x8000
+
+#define SPACC_MAX_DIGEST_SIZE 64
+#define SPACC_MAX_KEY_SIZE 32
+#define SPACC_MAX_IV_SIZE 16
+
+#define SPACC_DMA_ALIGN 4
+#define SPACC_DMA_BOUNDARY 0x10000
+#define SPACC_TEST_DMA_BUFF_SIZE 256
+
+/* flag means the IV is computed from setkey and crypt */
+#define SPACC_MANGLE_IV_FLAG 0x8000
+
+/* we're doing a CTR mangle (for RFC3686/IPsec) */
+#define SPACC_MANGLE_IV_RFC3686 0x0100
+
+/* we're doing GCM */
+#define SPACC_MANGLE_IV_RFC4106 0x0200
+
+/* we're doing GMAC */
+#define SPACC_MANGLE_IV_RFC4543 0x0300
+
+/* we're doing CCM */
+#define SPACC_MANGLE_IV_RFC4309 0x0400
+
+/* we're doing SM4 GCM/CCM */
+#define SPACC_MANGLE_IV_RFC8998 0x0500
+
+#define CRYPTO_MODE_AES_CTR_RFC3686 (CRYPTO_MODE_AES_CTR \
+				     | SPACC_MANGLE_IV_FLAG \
+				     | SPACC_MANGLE_IV_RFC3686)
+#define CRYPTO_MODE_AES_GCM_RFC4106 (CRYPTO_MODE_AES_GCM \
+				     | SPACC_MANGLE_IV_FLAG \
+				     | SPACC_MANGLE_IV_RFC4106)
+#define CRYPTO_MODE_AES_GCM_RFC4543 (CRYPTO_MODE_AES_GCM \
+				     | SPACC_MANGLE_IV_FLAG \
+				     | SPACC_MANGLE_IV_RFC4543)
+#define CRYPTO_MODE_AES_CCM_RFC4309 (CRYPTO_MODE_AES_CCM \
+				     | SPACC_MANGLE_IV_FLAG \
+				     | SPACC_MANGLE_IV_RFC4309)
+#define CRYPTO_MODE_SM4_GCM_RFC8998 (CRYPTO_MODE_SM4_GCM)
+#define CRYPTO_MODE_SM4_CCM_RFC8998 (CRYPTO_MODE_SM4_CCM)
+
+struct spacc_crypto_ctx {
+	struct device *dev;
+
+	int handle, mode, auth_size, key_len;
+	unsigned char *cipher_key;
+
+	/*
+	 * Indicates that the H/W context has been setup and can be used for
+	 * crypto; otherwise, the software fallback will be used.
+	 */
+	bool ctx_valid;
+
+	/* salt used for rfc3686/givencrypt mode */
+	unsigned char csalt[16];
+	u8 ipad[128] __aligned(sizeof(u32));
+	u8 digest_ctx_buf[128] __aligned(sizeof(u32));
+	u8 tmp_buffer[128] __aligned(sizeof(u32));
+
+	/* save keylen from setkey */
+	int keylen;
+	u8 key[256];
+	int zero_key;
+	unsigned char *tmp_sgl_buff;
+	struct scatterlist *tmp_sgl;
+
+	union {
+		struct crypto_ahash *hash;
+		struct crypto_aead *aead;
+		struct crypto_skcipher *cipher;
+	} fb;
+};
+
+struct spacc_crypto_reqctx {
+	struct pdu_ddt src, dst;
+	void *digest_buf, *iv_buf;
+	dma_addr_t digest_dma;
+	int dst_nents, src_nents, aead_nents, total_nents;
+	int encrypt_op, mode, single_shot;
+	unsigned int spacc_cipher_cryptlen, rem_nents;
+
+	struct aead_cb_data {
+		int new_handle;
+		struct spacc_crypto_ctx *tctx;
+		struct spacc_crypto_reqctx *ctx;
+		struct aead_request *req;
+		struct spacc_device *spacc;
+	} cb;
+
+	struct ahash_cb_data {
+		int new_handle;
+		struct spacc_crypto_ctx *tctx;
+		struct spacc_crypto_reqctx *ctx;
+		struct ahash_request *req;
+		struct spacc_device *spacc;
+	} acb;
+
+	struct cipher_cb_data {
+		int new_handle;
+		struct spacc_crypto_ctx *tctx;
+		struct spacc_crypto_reqctx *ctx;
+		struct skcipher_request *req;
+		struct spacc_device *spacc;
+	} ccb;
+
+	union {
+		struct ahash_request hash_req;
+		struct skcipher_request cipher_req;
+	} fb;
+};
+
+struct mode_tab {
+	char name[128];
+
+	int valid;
+
+	/* mode ID used in hash/cipher mode but not aead */
+	int id;
+
+	/* ciph/hash mode used in aead */
+	struct {
+		int ciph, hash;
+	} aead;
+
+	unsigned int hashlen, ivlen, blocklen, keylen[3];
+	unsigned int keylen_mask, testlen;
+	unsigned int chunksize, walksize, min_keysize, max_keysize;
+
+	bool sw_fb;
+
+	union {
+		unsigned char hash_test[SPACC_MAX_DIGEST_SIZE];
+		unsigned char ciph_test[3][2 * SPACC_MAX_IV_SIZE];
+	};
+};
+
+struct spacc_alg {
+	struct mode_tab *mode;
+	unsigned int keylen_mask;
+
+	struct device *dev;
+
+	struct list_head list;
+	struct crypto_alg *calg;
+	struct crypto_tfm *tfm;
+
+	union {
+		struct ahash_alg hash;
+		struct aead_alg aead;
+		struct skcipher_alg skcipher;
+	} alg;
+};
+
+struct spacc_completion {
+	unsigned int wait_done;
+	struct completion spacc_wait_complete;
+	struct list_head list;
+};
+
+static inline const struct spacc_alg *spacc_tfm_ahash(struct crypto_tfm *tfm)
+{
+	const struct crypto_alg *calg = tfm->__crt_alg;
+
+	if ((calg->cra_flags & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_AHASH)
+		return container_of(calg, struct spacc_alg, alg.hash.halg.base);
+
+	return NULL;
+}
+
+static inline const struct spacc_alg *spacc_tfm_skcipher(struct crypto_tfm *tfm)
+{
+	const struct crypto_alg *calg = tfm->__crt_alg;
+
+	if ((calg->cra_flags & CRYPTO_ALG_TYPE_MASK) ==
+	    CRYPTO_ALG_TYPE_SKCIPHER)
+		return container_of(calg, struct spacc_alg, alg.skcipher.base);
+
+	return NULL;
+}
+
+static inline const struct spacc_alg *spacc_tfm_aead(struct crypto_tfm *tfm)
+{
+	const struct crypto_alg *calg = tfm->__crt_alg;
+
+	if ((calg->cra_flags & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_AEAD)
+		return container_of(calg, struct spacc_alg, alg.aead.base);
+
+	return NULL;
+}
+
+int spacc_probe_hashes(struct platform_device *spacc_pdev);
+int spacc_unregister_hash_algs(void);
+
+int spacc_probe_aeads(struct platform_device *spacc_pdev);
+int spacc_unregister_aead_algs(void);
+
+int spacc_probe_ciphers(struct platform_device *spacc_pdev);
+int spacc_unregister_cipher_algs(void);
+
+irqreturn_t spacc_irq_handler(int irq, void *dev);
+#endif
diff --git a/drivers/crypto/dwc-spacc/spacc_hal.c b/drivers/crypto/dwc-spacc/spacc_hal.c
new file mode 100644
index 000000000000..7dc8139ae949
--- /dev/null
+++ b/drivers/crypto/dwc-spacc/spacc_hal.c
@@ -0,0 +1,374 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include
+#include
+#include "spacc_hal.h"
+
+static struct dma_pool *ddt_pool, *ddt16_pool, *ddt4_pool;
+static struct device *ddt_device;
+
+#define PDU_REG_SPACC_VERSION 0x00180UL
+#define PDU_REG_SPACC_CONFIG 0x00184UL
+#define PDU_REG_SPACC_CONFIG2 0x00190UL
+#define PDU_REG_SPACC_IV_OFFSET 0x00040UL
+#define PDU_REG_PDU_CONFIG 0x00188UL
+#define PDU_REG_SECURE_LOCK 0x001C0UL
+
+#define DDT_MAX_ENTRIES ((PDU_MAX_DDT + 1) * 8)
+#define DDT_16_ENTRIES ((16 + 1) * 8)
+#define DDT_4_ENTRIES ((4 + 1) * 8)
+
+int pdu_get_version(void __iomem *dev, struct pdu_info *inf)
+{
+	unsigned long reg_val;
+
+	if (!inf)
+		return -EINVAL;
+
+	memset(inf, 0, sizeof(*inf));
+	reg_val = readl(dev + PDU_REG_SPACC_VERSION);
+
+	/*
+	 * Read the SPAcc version block; this tells us the revision,
+	 * project, and a few other feature bits.
+	 *
+	 * layout for v6.5+
+	 */
+	inf->spacc_version = (struct spacc_version_block) {
+		.minor = SPACC_ID_MINOR(reg_val),
+		.major = SPACC_ID_MAJOR(reg_val),
+		.version = (SPACC_ID_MAJOR(reg_val) << 4) |
+			   SPACC_ID_MINOR(reg_val),
+		.qos = SPACC_ID_QOS(reg_val),
+		.is_spacc = SPACC_ID_TYPE(reg_val) == SPACC_TYPE_SPACCQOS,
+		.is_pdu = SPACC_ID_TYPE(reg_val) == SPACC_TYPE_PDU,
+		.aux = SPACC_ID_AUX(reg_val),
+		.vspacc_id = SPACC_ID_VIDX(reg_val),
+		.partial = SPACC_ID_PARTIAL(reg_val),
+		.project = SPACC_ID_PROJECT(reg_val),
+	};
+
+	/* try to autodetect IV import support */
+	writel(0x80000000, dev + PDU_REG_SPACC_IV_OFFSET);
+
+	if (readl(dev + PDU_REG_SPACC_IV_OFFSET) == 0x80000000)
+		inf->spacc_version.ivimport = 1;
+	else
+		inf->spacc_version.ivimport = 0;
+
+	/*
+	 * Read the SPAcc config block (v6.5+) which tells us how many
+	 * contexts there are and the context page sizes;
+	 * this register is only available in v6.5 and up
+	 */
+	reg_val = readl(dev + PDU_REG_SPACC_CONFIG);
+	inf->spacc_config = (struct spacc_config_block) {
+		SPACC_CFG_CTX_CNT(reg_val),
+		SPACC_CFG_VSPACC_CNT(reg_val),
+		SPACC_CFG_CIPH_CTX_SZ(reg_val),
+		SPACC_CFG_HASH_CTX_SZ(reg_val),
+		SPACC_CFG_DMA_TYPE(reg_val),
+		0, 0, 0, 0
+	};
+
+	/* CONFIG2 only present in v6.5+ cores */
+	reg_val = readl(dev + PDU_REG_SPACC_CONFIG2);
+	if (inf->spacc_version.qos) {
+		inf->spacc_config.cmd0_fifo_depth =
+			SPACC_CFG_CMD0_FIFO_QOS(reg_val);
+		inf->spacc_config.cmd1_fifo_depth =
+			SPACC_CFG_CMD1_FIFO(reg_val);
+		inf->spacc_config.cmd2_fifo_depth =
+			SPACC_CFG_CMD2_FIFO(reg_val);
+		inf->spacc_config.stat_fifo_depth =
+			SPACC_CFG_STAT_FIFO_QOS(reg_val);
+	} else {
+		inf->spacc_config.cmd0_fifo_depth =
+			SPACC_CFG_CMD0_FIFO(reg_val);
+		inf->spacc_config.stat_fifo_depth =
+			SPACC_CFG_STAT_FIFO(reg_val);
+	}
+
+	/* only read PDU config if it's actually a PDU engine */
+	if (inf->spacc_version.is_pdu) {
+		reg_val = readl(dev + PDU_REG_PDU_CONFIG);
+		inf->pdu_config = (struct pdu_config_block)
+			{SPACC_PDU_CFG_MINOR(reg_val),
+			 SPACC_PDU_CFG_MAJOR(reg_val)};
+
+		/* unlock all cores by default */
+		writel(0, dev + PDU_REG_SECURE_LOCK);
+	}
+
+	return 0;
+}
+
+void pdu_to_dev(void __iomem *addr_, uint32_t *src, unsigned long nword)
+{
+	void __iomem *addr = addr_;
+
+	while (nword--) {
+		writel(*src++, addr);
+		addr += 4;
+	}
+}
+
+void pdu_from_dev(u32 *dst, void __iomem *addr_, unsigned long nword)
+{
+	void __iomem *addr = addr_;
+
+	while (nword--) {
+		*dst++ = readl(addr);
+		addr += 4;
+	}
+}
+
+static void pdu_to_dev_big(void __iomem *addr_, const unsigned char *src,
+			   unsigned long nword)
+{
+	u32 __iomem *addr = addr_;
+	u32 data;
+	__be32 val;
+
+	while (nword--) {
+		data = *((u32 *)src);
+		val = __cpu_to_be32(data);
+
+		__raw_writel((u32 __force)val, addr);
+		src += 4;
+		addr++;
+	}
+}
+
+static void pdu_from_dev_big(unsigned char *dst, void __iomem *addr_,
+			     unsigned long nword)
+{
+	u32 __iomem *addr = addr_;
+
+	while (nword--) {
+		*(u32 *)dst = __be32_to_cpu((__be32 __force)__raw_readl(addr));
+		addr++;
+		dst += 4;
+	}
+}
+
+static void pdu_to_dev_little(void __iomem *addr_, const unsigned char *src,
+			      unsigned long nword)
+{
+	u32 __iomem *addr = addr_;
+	u32 data;
+	__le32 val;
+
+	while (nword--) {
+		data = *((u32 *)src);
+		val = __cpu_to_le32(data);
+
+		__raw_writel((u32 __force)val, addr);
+		src += 4;
+		addr++;
+	}
+}
+
+static void pdu_from_dev_little(unsigned char *dst, void __iomem *addr_,
+				unsigned long nword)
+{
+	u32 __iomem *addr = addr_;
+
+	while (nword--) {
+		*(u32 *)dst = __le32_to_cpu((__le32 __force)__raw_readl(addr));
+		addr++;
+		dst += 4;
+	}
+}
+
+void pdu_to_dev_s(void __iomem *addr, const unsigned char *src,
+		  unsigned long nword, int big_endian)
+{
+	if (big_endian)
+		pdu_to_dev_big(addr, src, nword);
+	else
+		pdu_to_dev_little(addr, src, nword);
+}
+
+void pdu_from_dev_s(unsigned char *dst, void __iomem *addr,
+		    unsigned long nword, int big_endian)
+{
+	if (big_endian)
+		pdu_from_dev_big(dst, addr, nword);
+	else
+		pdu_from_dev_little(dst, addr, nword);
+}
+
+void pdu_io_cached_write(struct device *dev, void __iomem *addr,
+			 unsigned long val, uint32_t *cache)
+{
+	if (*cache == val) {
+#ifdef CONFIG_CRYPTO_DEV_SPACC_DEBUG_TRACE_IO
+		dev_dbg(dev, "pdu: write %.8lx -> %p (cached)\n", val, addr);
+#endif
+		return;
+	}
+
+	*cache = val;
+	writel(val, addr);
+}
+
+struct device *get_ddt_device(void)
+{
+	return ddt_device;
+}
+
+/* platform specific DDT routines */
+
+/*
+ * Create a DMA pool for DDT entries. This avoids splitting pages for
+ * DDTs, which are 520 bytes long by default, meaning we would otherwise
+ * waste 3576 bytes per allocated DDT. We also maintain a smaller pool
+ * of 4-entry tables, common for simple jobs, which uses 480 fewer bytes
+ * of DMA memory, and for good measure another pool of 16-entry tables,
+ * saving 384 bytes.
+ */
+int pdu_mem_init(void *device)
+{
+	if (ddt_device)
+		return 0; /* already setup */
+
+	/* max of 64 DDT entries */
+	ddt_device = device;
+	ddt_pool = dma_pool_create("spaccddt", device,
+				   DDT_MAX_ENTRIES, 8, 0);
+
+	if (!ddt_pool)
+		return -ENOSPC;
+
+#if PDU_MAX_DDT > 16
+	/* max of 16 DDT entries */
+	ddt16_pool = dma_pool_create("spaccddt16", device,
+				     DDT_16_ENTRIES, 8, 0);
+	if (!ddt16_pool) {
+		dma_pool_destroy(ddt_pool);
+		return -ENOSPC;
+	}
+#else
+	ddt16_pool = ddt_pool;
+#endif
+	/* max of 4 DDT entries */
+	ddt4_pool = dma_pool_create("spaccddt4", device,
+				    DDT_4_ENTRIES, 8, 0);
+	if (!ddt4_pool) {
+		dma_pool_destroy(ddt_pool);
+#if PDU_MAX_DDT > 16
+		dma_pool_destroy(ddt16_pool);
+#endif
+		return -ENOSPC;
+	}
+
+	return 0;
+}
+
+/* Destroy the pools */
+void pdu_mem_deinit(void *device)
+{
+	/* for now, just skip deinit except for the matching device */
+	if (device != ddt_device)
+		return;
+
+	dma_pool_destroy(ddt_pool);
+
+#if PDU_MAX_DDT > 16
+	dma_pool_destroy(ddt16_pool);
+#endif
+	dma_pool_destroy(ddt4_pool);
+
+	ddt_device = NULL;
+}
+
+int pdu_ddt_init(struct device *dev, struct pdu_ddt *ddt, unsigned long limit)
+{
+	/*
+	 * The caller sets the MSB to request an ATOMIC allocation,
+	 * required for top-half processing
+	 */
+	int flag = (limit & 0x80000000);
+
+	limit &= 0x7FFFFFFF;
+	if (limit + 1 >= SIZE_MAX / 8) {
+		/* too big to even compute the DDT size */
+		return -EINVAL;
+	} else if (limit > PDU_MAX_DDT) {
+		size_t len = 8 * ((size_t)limit + 1);
+
+		ddt->virt = dma_alloc_coherent(ddt_device, len, &ddt->phys,
+					       flag ? GFP_ATOMIC : GFP_KERNEL);
+	} else if (limit > 16) {
+		ddt->virt = dma_pool_alloc(ddt_pool, flag ? GFP_ATOMIC :
+					   GFP_KERNEL, &ddt->phys);
+	} else if (limit > 4) {
+		ddt->virt = dma_pool_alloc(ddt16_pool, flag ? GFP_ATOMIC :
+					   GFP_KERNEL, &ddt->phys);
+	} else {
+		ddt->virt = dma_pool_alloc(ddt4_pool, flag ? GFP_ATOMIC :
+					   GFP_KERNEL, &ddt->phys);
+	}
+
+	ddt->idx = 0;
+	ddt->len = 0;
+	ddt->limit = limit;
+
+	if (!ddt->virt)
+		return -EINVAL;
+
+#ifdef CONFIG_CRYPTO_DEV_SPACC_DEBUG_TRACE_DDT
+	dev_dbg(dev, " DDT[%.8lx]: allocated %lu fragments\n",
+		(unsigned long)ddt->phys, limit);
+#endif
+
+	return 0;
+}
+
+int pdu_ddt_add(struct device *dev, struct pdu_ddt *ddt, dma_addr_t phys,
+		unsigned long size)
+{
+#ifdef CONFIG_CRYPTO_DEV_SPACC_DEBUG_TRACE_DDT
+	dev_dbg(dev, " DDT[%.8lx]: 0x%.8lx size %lu\n",
+		(unsigned long)ddt->phys,
+		(unsigned long)phys, size);
+#endif
+
+	if (ddt->idx == ddt->limit)
+		return -EINVAL;
+
+	ddt->virt[ddt->idx * 2 + 0] = (uint32_t)phys;
+	ddt->virt[ddt->idx * 2 + 1] = size;
+	ddt->virt[ddt->idx * 2 + 2] = 0;
+	ddt->virt[ddt->idx * 2 + 3] = 0;
+	ddt->len += size;
+	++(ddt->idx);
+
+	return 0;
+}
+
+int pdu_ddt_free(struct pdu_ddt *ddt)
+{
+	if (ddt->virt) {
+		if (ddt->limit > PDU_MAX_DDT) {
+			size_t len = 8 * ((size_t)ddt->limit + 1);
+
+			dma_free_coherent(ddt_device, len, ddt->virt,
+					  ddt->phys);
+		} else if (ddt->limit > 16) {
+			dma_pool_free(ddt_pool, ddt->virt, ddt->phys);
+		} else if (ddt->limit > 4) {
+			dma_pool_free(ddt16_pool, ddt->virt, ddt->phys);
+		} else {
+			dma_pool_free(ddt4_pool, ddt->virt, ddt->phys);
+		}
+
+		ddt->virt = NULL;
+	}
+
+	return 0;
+}
diff --git a/drivers/crypto/dwc-spacc/spacc_hal.h b/drivers/crypto/dwc-spacc/spacc_hal.h
new file mode 100644
index 000000000000..7bbce32f3a44
--- /dev/null
+++ b/drivers/crypto/dwc-spacc/spacc_hal.h
@@ -0,0 +1,114 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef SPACC_HAL_H
+#define SPACC_HAL_H
+
+/* maximum number of DDT entries allowed */
+#ifndef PDU_MAX_DDT
+#define PDU_MAX_DDT 64
+#endif
+
+/* platform generic */
+#define PDU_IRQ_EN_GLBL		BIT(31)
+#define PDU_IRQ_EN_VSPACC(x)	(1UL << (x))
+#define PDU_IRQ_EN_RNG		BIT(16)
+
+#ifndef SPACC_ID_MINOR
+ #define SPACC_ID_MINOR(x)   ((x) & 0x0F)
+ #define SPACC_ID_MAJOR(x)   (((x) >> 4) & 0x0F)
+ #define SPACC_ID_QOS(x)     (((x) >> 8) & 0x01)
+ #define SPACC_ID_TYPE(x)    (((x) >> 9) & 0x03)
+ #define SPACC_ID_AUX(x)     (((x) >> 11) & 0x01)
+ #define SPACC_ID_VIDX(x)    (((x) >> 12) & 0x07)
+ #define SPACC_ID_PARTIAL(x) (((x) >> 15) & 0x01)
+ #define SPACC_ID_PROJECT(x) ((x) >> 16)
+
+ #define SPACC_TYPE_SPACCQOS 0
+ #define SPACC_TYPE_PDU      1
+
+ #define SPACC_CFG_CTX_CNT(x)     ((x) & 0x7F)
+ #define SPACC_CFG_RC4_CTX_CNT(x) (((x) >> 8) & 0x7F)
+ #define SPACC_CFG_VSPACC_CNT(x)  (((x) >> 16) & 0x0F)
+ #define SPACC_CFG_CIPH_CTX_SZ(x) (((x) >> 20) & 0x07)
+ #define SPACC_CFG_HASH_CTX_SZ(x) (((x) >> 24) & 0x0F)
+ #define SPACC_CFG_DMA_TYPE(x)    (((x) >> 28) & 0x03)
+
+ #define SPACC_CFG_CMD0_FIFO_QOS(x) (((x) >> 0) & 0x7F)
+ #define SPACC_CFG_CMD0_FIFO(x)     (((x) >> 0) & 0x1FF)
+ #define SPACC_CFG_CMD1_FIFO(x)     (((x) >> 8) & 0x7F)
+ #define SPACC_CFG_CMD2_FIFO(x)     (((x) >> 16) & 0x7F)
+ #define SPACC_CFG_STAT_FIFO_QOS(x) (((x) >> 24) & 0x7F)
+ #define SPACC_CFG_STAT_FIFO(x)     (((x) >> 16) & 0x1FF)
+
+ #define SPACC_PDU_CFG_MINOR(x) ((x) & 0x0F)
+ #define SPACC_PDU_CFG_MAJOR(x) (((x) >> 4) & 0x0F)
+
+ #define PDU_SECURE_LOCK_SPACC(x) (x)
+ #define PDU_SECURE_LOCK_CFG  BIT(30)
+ #define PDU_SECURE_LOCK_GLBL BIT(31)
+#endif /* SPACC_ID_MINOR */
+
+struct spacc_version_block {
+	unsigned int minor,
+		     major,
+		     version,
+		     qos,
+		     is_spacc,
+		     is_pdu,
+		     aux,
+		     vspacc_id,
+		     partial,
+		     project,
+		     ivimport;
+};
+
+struct spacc_config_block {
+	unsigned int num_ctx,
+		     num_vspacc,
+		     ciph_ctx_page_size,
+		     hash_ctx_page_size,
+		     dma_type,
+		     cmd0_fifo_depth,
+		     cmd1_fifo_depth,
+		     cmd2_fifo_depth,
+		     stat_fifo_depth;
+};
+
+struct pdu_config_block {
+	unsigned int minor,
+		     major;
+};
+
+struct pdu_info {
+	u32 clockrate;
+	struct spacc_version_block spacc_version;
+	struct spacc_config_block spacc_config;
+	struct pdu_config_block pdu_config;
+};
+
+struct pdu_ddt {
+	dma_addr_t phys;
+	u32 *virt;
+	u32 *virt_orig;
+	struct device *dev;
+	unsigned long idx, limit, len;
+};
+
+void pdu_io_cached_write(struct device *dev, void __iomem *addr,
+			 unsigned long val, uint32_t *cache);
+void pdu_to_dev(void __iomem *addr, uint32_t *src, unsigned long nword);
+void pdu_from_dev(u32 *dst, void __iomem *addr, unsigned long nword);
+void pdu_from_dev_s(unsigned char *dst, void __iomem *addr,
+		    unsigned long nword, int endian);
+void pdu_to_dev_s(void __iomem *addr, const unsigned char *src,
+		  unsigned long nword, int endian);
+struct device *get_ddt_device(void);
+int pdu_mem_init(void *device);
+void pdu_mem_deinit(void *device);
+int pdu_ddt_init(struct device *dev, struct pdu_ddt *ddt, unsigned long limit);
+int pdu_ddt_add(struct device *dev, struct pdu_ddt *ddt, dma_addr_t phys,
+		unsigned long size);
+int pdu_ddt_free(struct pdu_ddt *ddt);
+int pdu_get_version(void __iomem *dev, struct pdu_info *inf);
+
+#endif
diff --git a/drivers/crypto/dwc-spacc/spacc_interrupt.c b/drivers/crypto/dwc-spacc/spacc_interrupt.c
new file mode 100644
index 000000000000..e21008ce275e
--- /dev/null
+++ b/drivers/crypto/dwc-spacc/spacc_interrupt.c
@@ -0,0 +1,324 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include
+#include
+#include
+#include
+#include
+#include "spacc_core.h"
+
+#ifndef MIN
+#define MIN(x, y) (((x) < (y)) ? (x) : (y))
+#endif
+
+static inline uint32_t _spacc_get_stat_cnt(struct spacc_device *spacc)
+{
+	u32 fifo;
+
+	if (spacc->config.is_qos)
+		fifo = SPACC_FIFO_STAT_STAT_CNT_GET_QOS(readl(spacc->regmap +
+						SPACC_REG_FIFO_STAT));
+	else
+		fifo = SPACC_FIFO_STAT_STAT_CNT_GET(readl(spacc->regmap +
+						SPACC_REG_FIFO_STAT));
+	return fifo;
+}
+
+static int spacc_pop_packets_ex(struct spacc_device *spacc, int *num_popped,
+				unsigned long *lock_flag)
+{
+	int jobs;
+	int ret = -EINPROGRESS;
+	struct spacc_job *job = NULL;
+	u32 cmdstat, swid, spacc_errcode = SPACC_OK;
+
+	*num_popped = 0;
+
+	while ((jobs = _spacc_get_stat_cnt(spacc))) {
+		while (jobs-- > 0) {
+			/* write the pop register to get the next job */
+			writel(1, spacc->regmap + SPACC_REG_STAT_POP);
+			cmdstat = readl(spacc->regmap + SPACC_REG_STATUS);
+
+			swid = SPACC_STATUS_SW_ID_GET(cmdstat);
+
+			/*
+			 * Bounds-check the popped swid before using it as
+			 * an index (swid is unsigned, so only the upper
+			 * bound needs checking), then find the job
+			 * associated with it.
+			 */
+			if (swid >= SPACC_MAX_JOBS ||
+			    spacc->job_lookup[swid] == SPACC_JOB_IDX_UNUSED) {
+				ret = -EIO;
+				goto ERR;
+			}
+
+			job = &spacc->job[spacc->job_lookup[swid]];
+
+			/* mark job as done */
+			job->job_done = 1;
+			spacc->job_lookup[swid] = SPACC_JOB_IDX_UNUSED;
+			spacc_errcode = SPACC_GET_STATUS_RET_CODE(cmdstat);
+
+			switch (spacc_errcode) {
+			case SPACC_ICVFAIL:
+				ret = -EBADMSG;
+				break;
+			case SPACC_MEMERR:
+				ret = -EINVAL;
+				break;
+			case SPACC_BLOCKERR:
+				ret = -EINVAL;
+				break;
+			case SPACC_SECERR:
+				ret = -EIO;
+				break;
+			case SPACC_OK:
+				ret = 0;
+				break;
+			default:
+				dev_err(spacc->dptr, "Invalid SPAcc error\n");
+			}
+
+			job->job_err = ret;
+
+			/*
+			 * We're done touching the SPAcc hw, so release the
+			 * lock across the job callback. It must be reacquired
+			 * before continuing to the next iteration.
+			 */
+			if (job->cb) {
+				spin_unlock_irqrestore(&spacc->lock,
+						       *lock_flag);
+				job->cb(spacc, job->cbdata);
+				spin_lock_irqsave(&spacc->lock, *lock_flag);
+			}
+
+			(*num_popped)++;
+		}
+	}
+
+	if (!*num_popped)
+		dev_dbg(spacc->dptr, "Failed to pop a single job\n");
+
+ERR:
+	spacc_process_jb(spacc);
+
+	/* reset the WD timer to the original value */
+	if (spacc->op_mode == SPACC_OP_MODE_WD)
+		spacc_set_wd_count(spacc, spacc->config.wd_timer);
+
+	if (*num_popped && spacc->spacc_notify_jobs)
+		spacc->spacc_notify_jobs(spacc);
+
+	return ret;
+}
+
+int spacc_pop_packets(struct spacc_device *spacc, int *num_popped)
+{
+	int err = 0;
+	unsigned long lock_flag;
+
+	spin_lock_irqsave(&spacc->lock, lock_flag);
+	err = spacc_pop_packets_ex(spacc, num_popped, &lock_flag);
+	spin_unlock_irqrestore(&spacc->lock, lock_flag);
+
+	return err;
+}
+
+uint32_t spacc_process_irq(struct spacc_device *spacc)
+{
+	u32 irq_status;
+	int x, cmd_max;
+	unsigned long lock_flag;
+
+	spin_lock_irqsave(&spacc->lock, lock_flag);
+
+	irq_status = readl(spacc->regmap + SPACC_REG_IRQ_STAT);
+
+	/* clear interrupt pin and run registered callback */
+	if (irq_status & SPACC_IRQ_STAT_STAT) {
+		SPACC_IRQ_STAT_CLEAR_STAT(spacc);
+		if (spacc->op_mode == SPACC_OP_MODE_IRQ) {
+			spacc->config.fifo_cnt <<= 2;
+			spacc->config.fifo_cnt = MIN(spacc->config.fifo_cnt,
+					spacc->config.stat_fifo_depth);
+
+			/* update fifo count to allow more stats to pile up */
+			spacc_irq_stat_enable(spacc, spacc->config.fifo_cnt);
+
+			/* re-enable CMD0 empty interrupt */
+			spacc_irq_cmdx_enable(spacc, 0, 0);
+		}
+
+		/* re-enable the watchdog interrupt */
+		if (spacc->op_mode == SPACC_OP_MODE_IRQ && spacc->wd_cnt_limit) {
+			spacc_irq_stat_wd_enable(spacc);
+			spacc->wdcnt = 0;
+			spacc->op_mode = SPACC_OP_MODE_WD;
+			spacc->wd_cnt_limit = false;
+		}
+
+		if (spacc->irq_cb_stat)
+			spacc->irq_cb_stat(spacc);
+	}
+
+	/* watchdog IRQ */
+	if (spacc->op_mode == SPACC_OP_MODE_WD &&
+	    irq_status & SPACC_IRQ_STAT_STAT_WD) {
+		if (++spacc->wdcnt == SPACC_WD_LIMIT) {
+			/*
+			 * This happens when too many IRQs go
+			 * unanswered
+			 */
+			spacc_irq_stat_wd_disable(spacc);
+			/*
+			 * Set the STAT CNT to 1 so that every job
+			 * generates an IRQ from now on
+			 */
+			spacc_irq_stat_enable(spacc, 1);
+			spacc->op_mode = SPACC_OP_MODE_IRQ;
+			spacc->wd_cnt_limit = true;
+
+		} else if (spacc->config.wd_timer < (0xFFFFFFUL >> 4)) {
+			/*
+			 * If the timer isn't too high, bump it up a bit
+			 * to give the IRQ a chance to be answered
+			 */
+			spacc_set_wd_count(spacc,
+					   spacc->config.wd_timer << 4);
+		}
+
+		SPACC_IRQ_STAT_CLEAR_STAT_WD(spacc);
+		if (spacc->irq_cb_stat_wd)
+			spacc->irq_cb_stat_wd(spacc);
+	}
+
+	if (spacc->op_mode == SPACC_OP_MODE_IRQ) {
+		cmd_max = (spacc->config.is_qos ? SPACC_CMDX_MAX_QOS :
+						  SPACC_CMDX_MAX);
+		for (x = 0; x < cmd_max; x++) {
+			if (irq_status & SPACC_IRQ_STAT_CMDX(x)) {
+				spacc->config.fifo_cnt = 1;
+
+				/* disable CMDx interrupt since STAT=1 */
+				spacc_irq_cmdx_disable(spacc, x);
+				spacc_irq_stat_enable(spacc,
+						      spacc->config.fifo_cnt);
+
+				SPACC_IRQ_STAT_CLEAR_CMDX(spacc, x);
+
+				/* run registered callback */
+				if (spacc->irq_cb_cmdx)
+					spacc->irq_cb_cmdx(spacc, x);
+			}
+		}
+	}
+
+	spin_unlock_irqrestore(&spacc->lock, lock_flag);
+
+	return irq_status;
+}
+
+void spacc_set_wd_count(struct spacc_device *spacc, uint32_t val)
+{
+	writel(val, spacc->regmap + SPACC_REG_STAT_WD_CTRL);
+}
+
+/*
+ * cmdx and cmdx_cnt depend on HW config
+ * cmdx can be 0, 1 or 2
+ * cmdx_cnt must be 2^6 or less
+ */
+void spacc_irq_cmdx_enable(struct spacc_device *spacc, int cmdx, int cmdx_cnt)
+{
+	u32 reg_val;
+
+	/* read the reg, clear the bit range and set the new value */
+	reg_val = readl(spacc->regmap + SPACC_REG_IRQ_CTRL) &
+		  (~SPACC_IRQ_CTRL_CMDX_CNT_MASK(cmdx));
+	reg_val |= SPACC_IRQ_CTRL_CMDX_CNT_SET(cmdx, cmdx_cnt);
+
+	writel(reg_val, spacc->regmap + SPACC_REG_IRQ_CTRL);
+
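As an aside for reviewers, the watchdog escalation above can be modelled host-side. This is a minimal sketch, not driver code: the constants `WD_LIMIT` and `WD_TIMER_MAX` are illustrative stand-ins for `SPACC_WD_LIMIT` and the `0xFFFFFF >> 4` cap, and the struct is invented for the example.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative constants; the real values live in spacc_core.h. */
#define WD_LIMIT      5
#define WD_TIMER_MAX  (0xFFFFFFUL >> 4)

struct wd_state {
	unsigned int wdcnt;
	unsigned long timer;
	bool irq_mode;	/* true once we fall back to one-IRQ-per-job */
};

/*
 * Model of the watchdog branch in spacc_process_irq(): too many
 * unanswered watchdog IRQs switch the engine to IRQ mode; otherwise the
 * timer is stretched 16x (capped) so completions get a chance to batch.
 */
static void wd_irq(struct wd_state *s)
{
	if (++s->wdcnt == WD_LIMIT)
		s->irq_mode = true;	/* STAT CNT=1: every job raises an IRQ */
	else if (s->timer < WD_TIMER_MAX)
		s->timer <<= 4;
}
```

Starting from a timer of 0x1000, two watchdog hits stretch it to 0x100000, a third leaves it capped, and the fifth flips the mode.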
+	writel(readl(spacc->regmap + SPACC_REG_IRQ_EN) | SPACC_IRQ_EN_CMD(cmdx),
+	       spacc->regmap + SPACC_REG_IRQ_EN);
+}
+
+void spacc_irq_cmdx_disable(struct spacc_device *spacc, int cmdx)
+{
+	writel(readl(spacc->regmap + SPACC_REG_IRQ_EN) &
+	       (~SPACC_IRQ_EN_CMD(cmdx)), spacc->regmap + SPACC_REG_IRQ_EN);
+}
+
+void spacc_irq_stat_enable(struct spacc_device *spacc, int stat_cnt)
+{
+	u32 reg_val;
+
+	reg_val = readl(spacc->regmap + SPACC_REG_IRQ_CTRL);
+	if (spacc->config.is_qos) {
+		reg_val &= (~SPACC_IRQ_CTRL_STAT_CNT_MASK_QOS);
+		reg_val |= SPACC_IRQ_CTRL_STAT_CNT_SET_QOS(stat_cnt);
+	} else {
+		reg_val &= (~SPACC_IRQ_CTRL_STAT_CNT_MASK);
+		reg_val |= SPACC_IRQ_CTRL_STAT_CNT_SET(stat_cnt);
+	}
+
+	writel(reg_val, spacc->regmap + SPACC_REG_IRQ_CTRL);
+	writel(readl(spacc->regmap + SPACC_REG_IRQ_EN) | SPACC_IRQ_EN_STAT,
+	       spacc->regmap + SPACC_REG_IRQ_EN);
+}
+
+void spacc_irq_stat_disable(struct spacc_device *spacc)
+{
+	writel(readl(spacc->regmap + SPACC_REG_IRQ_EN) & (~SPACC_IRQ_EN_STAT),
+	       spacc->regmap + SPACC_REG_IRQ_EN);
+}
+
+void spacc_irq_stat_wd_enable(struct spacc_device *spacc)
+{
+	writel(readl(spacc->regmap + SPACC_REG_IRQ_EN) | SPACC_IRQ_EN_STAT_WD,
+	       spacc->regmap + SPACC_REG_IRQ_EN);
+}
+
+void spacc_irq_stat_wd_disable(struct spacc_device *spacc)
+{
+	writel(readl(spacc->regmap + SPACC_REG_IRQ_EN) &
+	       (~SPACC_IRQ_EN_STAT_WD), spacc->regmap + SPACC_REG_IRQ_EN);
+}
+
+void spacc_irq_glbl_enable(struct spacc_device *spacc)
+{
+	writel(readl(spacc->regmap + SPACC_REG_IRQ_EN) | SPACC_IRQ_EN_GLBL,
+	       spacc->regmap + SPACC_REG_IRQ_EN);
+}
+
+void spacc_irq_glbl_disable(struct spacc_device *spacc)
+{
+	writel(readl(spacc->regmap + SPACC_REG_IRQ_EN) & (~SPACC_IRQ_EN_GLBL),
+	       spacc->regmap + SPACC_REG_IRQ_EN);
+}
+
+/* Run registered callbacks from the IRQ handler */
+irqreturn_t spacc_irq_handler(int irq, void *dev)
+{
+	struct spacc_priv *priv = platform_get_drvdata(to_platform_device(dev));
+	struct spacc_device *spacc = &priv->spacc;
+
+	if (spacc->config.oldtimer != spacc->config.timer) {
+		priv->spacc.config.wd_timer = spacc->config.timer;
+		spacc_set_wd_count(&priv->spacc, priv->spacc.config.wd_timer);
+		spacc->config.oldtimer = spacc->config.timer;
+	}
+
+	/* check irq flags and process as required */
+	if (!spacc_process_irq(spacc))
+		return IRQ_NONE;
+
+	return IRQ_HANDLED;
+}
diff --git a/drivers/crypto/dwc-spacc/spacc_manager.c b/drivers/crypto/dwc-spacc/spacc_manager.c
new file mode 100644
index 000000000000..a8e7ef4561ab
--- /dev/null
+++ b/drivers/crypto/dwc-spacc/spacc_manager.c
@@ -0,0 +1,610 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include "spacc_core.h"
+
+#ifndef MIN
+#define MIN(x, y) (((x) < (y)) ? (x) : (y))
+#endif
+
+/* Prevent reading past the end of the buffer */
+static void spacc_read_from_buf(unsigned char *dst, unsigned char *src,
+				int off, int n, int max)
+{
+	if (!dst)
+		return;
+
+	while (off < max && n) {
+		*dst++ = src[off++];
+		--n;
+	}
+}
+
+static void spacc_write_to_buf(unsigned char *dst, const unsigned char *src,
+			       int off, int n, int len)
+{
+	if (!src)
+		return;
+
+	while (n && (off < len)) {
+		dst[off++] = *src++;
+		--n;
+	}
+}
+
+/*
+ * This is not meant to be called directly, it should be called
+ * from the job manager
+ */
+static int spacc_ctx_request(struct spacc_device *spacc,
+			     int ctx_id, int ncontig)
+{
+	int ret = 0;
+	int x, y, count;
+
+	if (!spacc)
+		return -EINVAL;
+
+	if (ctx_id >= spacc->config.num_ctx)
+		return -EINVAL;
+
+	if (ncontig < 1 || ncontig > spacc->config.num_ctx)
+		return -EINVAL;
+
+	/*
+	 * Allocation scheme: look for contiguous contexts.
+	 * Free contexts have a ref_cnt of 0.
+	 * If a specific ctx_id is requested, test the ncontig
+	 * and then bump the ref_cnt
+	 */
+	if (ctx_id != -1) {
+		if ((&spacc->ctx[ctx_id])->ncontig != ncontig - 1)
+			ret = -1;
+		goto NO_FREE_CTX;
+	}
+
+	/*
+	 * Check to see if ncontig are free:
+	 * loop over all available contexts to find the first
+	 * ncontig empty ones
+	 */
+	for (x = 0; x <= (spacc->config.num_ctx - ncontig); ) {
+		count = ncontig;
+		while (count) {
+			if ((&spacc->ctx[x + count - 1])->ref_cnt != 0) {
+				/*
+				 * Increment x past the failed
+				 * location
+				 */
+				x += count;
+				break;
+			}
+			count--;
+		}
+
+		if (count != 0) {
+			ret = -1;
+			/* test next x */
+		} else {
+			ctx_id = x;
+			ret = 0;
+			break;
+		}
+	}
+
+NO_FREE_CTX:
+
+	if (ret == 0) {
+		/* ctx_id is good so mark used */
+		for (y = 0; y < ncontig; y++)
+			(&spacc->ctx[ctx_id + y])->ref_cnt++;
+		(&spacc->ctx[ctx_id])->ncontig = ncontig - 1;
+	} else {
+		ctx_id = -1;
+	}
+
+	return ctx_id;
+}
+
+static int spacc_ctx_release(struct spacc_device *spacc, int ctx_id)
+{
+	int y;
+	int ncontig;
+
+	if (ctx_id < 0 || ctx_id >= spacc->config.num_ctx)
+		return -EINVAL;
+
+	/* release the base context and contiguous block */
+	ncontig = (&spacc->ctx[ctx_id])->ncontig;
+	for (y = 0; y <= ncontig; y++) {
+		if ((&spacc->ctx[ctx_id + y])->ref_cnt > 0)
+			(&spacc->ctx[ctx_id + y])->ref_cnt--;
+	}
+
+	if ((&spacc->ctx[ctx_id])->ref_cnt == 0) {
+		(&spacc->ctx[ctx_id])->ncontig = 0;
+#ifdef CONFIG_CRYPTO_DEV_SPACC_SECURE_MODE
+		/*
+		 * TODO: This driver works in harmony with "normal" kernel
+		 * processes, so we release the context all the time.
+		 * Normally this would be done from a "secure" kernel process
+		 * (trustzone/etc). This hack is so that SPACC.0
+		 * cores can both use the same context space.
+		 */
+		writel(ctx_id, spacc->regmap + SPACC_REG_SECURE_RELEASE);
+#endif
+	}
+
+	return 0;
+}
+
+/* Job init: initialize all job data, pointers, etc */
+void spacc_job_init_all(struct spacc_device *spacc)
+{
+	int x;
+	struct spacc_job *job;
+
+	for (x = 0; x < (SPACC_MAX_JOBS); x++) {
+		job = &spacc->job[x];
+		memset(job, 0, sizeof(struct spacc_job));
+
+		job->job_swid = SPACC_JOB_IDX_UNUSED;
+		job->job_used = SPACC_JOB_IDX_UNUSED;
+		spacc->job_lookup[x] = SPACC_JOB_IDX_UNUSED;
+	}
+}
+
+/* Get a new job id and use a specific ctx_idx, or -1 for a new one */
+int spacc_job_request(struct spacc_device *spacc, int ctx_idx)
+{
+	int x, ret = 0;
+	struct spacc_job *job;
+	unsigned long lock_flag;
+
+	if (!spacc)
+		return -EINVAL;
+
+	spin_lock_irqsave(&spacc->lock, lock_flag);
+
+	/* find the first available job id */
+	for (x = 0; x < SPACC_MAX_JOBS; x++) {
+		job = &spacc->job[x];
+		if (job->job_used == SPACC_JOB_IDX_UNUSED) {
+			job->job_used = x;
+			break;
+		}
+	}
+
+	if (x == SPACC_MAX_JOBS) {
+		ret = -1;
+	} else {
+		/* associate a single context to go with the job */
+		ret = spacc_ctx_request(spacc, ctx_idx, 1);
+		if (ret != -1) {
+			job->ctx_idx = ret;
+			ret = x;
+		} else {
+			job->job_used = SPACC_JOB_IDX_UNUSED;
+		}
+	}
+
+	spin_unlock_irqrestore(&spacc->lock, lock_flag);
+
+	return ret;
+}
+
+int spacc_job_release(struct spacc_device *spacc, int job_idx)
+{
+	int ret = 0;
+	struct spacc_job *job;
+	unsigned long lock_flag;
+
+	if (!spacc)
+		return -EINVAL;
+
+	if (job_idx < 0 || job_idx >= SPACC_MAX_JOBS)
+		return -EINVAL;
+
+	spin_lock_irqsave(&spacc->lock, lock_flag);
+
+	job = &spacc->job[job_idx];
+	/* release the context that goes with the job */
+	ret = spacc_ctx_release(spacc, job->ctx_idx);
+	job->ctx_idx = SPACC_CTX_IDX_UNUSED;
+	job->job_used = SPACC_JOB_IDX_UNUSED;
+	/* disable any callback */
+	job->cb = NULL;
+
+	/* NOTE: this leaves ctrl data in memory */
+	spin_unlock_irqrestore(&spacc->lock, lock_flag);
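For reference, the contiguous-context search used by the manager can be checked in isolation. The sketch below mirrors the first-fit scan in `spacc_ctx_request()` over an array of reference counts; the function name and plain-`int` array are mine, invented for the example.

```c
#include <assert.h>

/*
 * First-fit search for ncontig adjacent free contexts (ref_cnt == 0),
 * mirroring the scan in spacc_ctx_request(). Returns the base index of
 * the run, or -1 if no run of ncontig free slots exists.
 */
static int find_contig(const int *ref_cnt, int num_ctx, int ncontig)
{
	int x = 0;

	while (x <= num_ctx - ncontig) {
		int count = ncontig;

		/* test the candidate window from its top element down */
		while (count) {
			if (ref_cnt[x + count - 1] != 0) {
				x += count;	/* skip past the busy slot */
				break;
			}
			count--;
		}
		if (count == 0)
			return x;	/* all ncontig slots were free */
	}
	return -1;
}
```

Skipping past the busy slot (rather than advancing by one) is what keeps the scan linear: any window overlapping a busy slot below it is ruled out in one step.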
+
+	return ret;
+}
+
+/* Return a context structure for a job idx or NULL if invalid */
+struct spacc_ctx *spacc_context_lookup_by_job(struct spacc_device *spacc,
+					      int job_idx)
+{
+	if (job_idx < 0 || job_idx >= SPACC_MAX_JOBS)
+		return NULL;
+
+	return &spacc->ctx[(&spacc->job[job_idx])->ctx_idx];
+}
+
+int spacc_process_jb(struct spacc_device *spacc)
+{
+	int tail;
+	int ret = 0;
+
+	/* are there jobs in the buffer? */
+	while (spacc->jb_head != spacc->jb_tail) {
+		tail = spacc->jb_tail;
+
+		if (spacc->job_buffer[tail].active) {
+			ret = spacc_packet_enqueue_ddt_ex
+				(spacc, 0, spacc->job_buffer[tail].job_idx,
+				 spacc->job_buffer[tail].src,
+				 spacc->job_buffer[tail].dst,
+				 spacc->job_buffer[tail].proc_sz,
+				 spacc->job_buffer[tail].aad_offset,
+				 spacc->job_buffer[tail].pre_aad_sz,
+				 spacc->job_buffer[tail].post_aad_sz,
+				 spacc->job_buffer[tail].iv_offset,
+				 spacc->job_buffer[tail].prio);
+
+			if (ret != -EBUSY)
+				spacc->job_buffer[tail].active = 0;
+			else
+				return -EBUSY;
+		}
+
+		tail++;
+		if (tail == SPACC_MAX_JOB_BUFFERS)
+			tail = 0;
+
+		spacc->jb_tail = tail;
+	}
+
+	return 0;
+}
+
+/* Write the appropriate context data, which depends on operation and mode */
+int spacc_write_context(struct spacc_device *spacc, int job_idx, int op,
+			const unsigned char *key, int ksz,
+			const unsigned char *iv, int ivsz)
+{
+	int buflen;
+	int ret = 0;
+	unsigned char buf[300];
+	struct spacc_ctx *ctx = NULL;
+	struct spacc_job *job = NULL;
+
+	if (job_idx < 0 || job_idx >= SPACC_MAX_JOBS)
+		return -EINVAL;
+
+	job = &spacc->job[job_idx];
+	ctx = spacc_context_lookup_by_job(spacc, job_idx);
+
+	if (!job || !ctx)
+		return -EIO;
+
+	switch (op) {
+	case SPACC_CRYPTO_OPERATION:
+		/*
+		 * Get the page size and then read, so we can do a
+		 * read-modify-write cycle
+		 */
+		buflen = MIN(sizeof(buf),
+			     (unsigned int)spacc->config.ciph_page_size);
+
+		pdu_from_dev_s(buf, ctx->ciph_key, buflen >> 2,
+			       spacc->config.big_endian);
+
+		switch (job->enc_mode) {
+		case CRYPTO_MODE_SM4_ECB:
+		case CRYPTO_MODE_SM4_CBC:
+		case CRYPTO_MODE_SM4_CFB:
+		case CRYPTO_MODE_SM4_OFB:
+		case CRYPTO_MODE_SM4_CTR:
+		case CRYPTO_MODE_SM4_CCM:
+		case CRYPTO_MODE_SM4_GCM:
+		case CRYPTO_MODE_SM4_CS1:
+		case CRYPTO_MODE_SM4_CS2:
+		case CRYPTO_MODE_SM4_CS3:
+		case CRYPTO_MODE_AES_ECB:
+		case CRYPTO_MODE_AES_CBC:
+		case CRYPTO_MODE_AES_CS1:
+		case CRYPTO_MODE_AES_CS2:
+		case CRYPTO_MODE_AES_CS3:
+		case CRYPTO_MODE_AES_CFB:
+		case CRYPTO_MODE_AES_OFB:
+		case CRYPTO_MODE_AES_CTR:
+		case CRYPTO_MODE_AES_CCM:
+		case CRYPTO_MODE_AES_GCM:
+			spacc_write_to_buf(buf, key, 0, ksz, buflen);
+			if (iv) {
+				unsigned char one[4] = { 0, 0, 0, 1 };
+				unsigned long enc1, enc2;
+
+				enc1 = CRYPTO_MODE_AES_GCM;
+				enc2 = CRYPTO_MODE_SM4_GCM;
+
+				spacc_write_to_buf(buf, iv, 32, ivsz, buflen);
+				if (ivsz == 12 &&
+				    (job->enc_mode == enc1 ||
+				     job->enc_mode == enc2))
+					spacc_write_to_buf(buf, one, 11 * 4, 4,
+							   buflen);
+			}
+			break;
+		case CRYPTO_MODE_SM4_F8:
+		case CRYPTO_MODE_AES_F8:
+			if (key) {
+				spacc_write_to_buf(buf, key + ksz, 0, ksz,
+						   buflen);
+				spacc_write_to_buf(buf, key, 48, ksz, buflen);
+			}
+			spacc_write_to_buf(buf, iv, 32, 16, buflen);
+			break;
+		case CRYPTO_MODE_SM4_XTS:
+		case CRYPTO_MODE_AES_XTS:
+			if (key) {
+				spacc_write_to_buf(buf, key, 0,
+						   ksz >> 1, buflen);
+				spacc_write_to_buf(buf, key + (ksz >> 1), 48,
+						   ksz >> 1, buflen);
+				/*
+				 * Halve the key size, since that is
+				 * what we program into the hardware
+				 */
+				ksz = ksz >> 1;
+			}
+			spacc_write_to_buf(buf, iv, 32, 16, buflen);
+			break;
+		case CRYPTO_MODE_MULTI2_ECB:
+		case CRYPTO_MODE_MULTI2_CBC:
+		case CRYPTO_MODE_MULTI2_OFB:
+		case CRYPTO_MODE_MULTI2_CFB:
+			spacc_write_to_buf(buf, key, 0, ksz, buflen);
+			spacc_write_to_buf(buf, iv, 0x28, ivsz, buflen);
+			if (ivsz <= 8) {
+				/* default to 128 rounds */
+				unsigned char rounds[4] = { 0, 0, 0, 128};
+
+				spacc_write_to_buf(buf, rounds, 0x30, 4,
+						   buflen);
+			}
+			break;
+		case CRYPTO_MODE_3DES_CBC:
+		case CRYPTO_MODE_3DES_ECB:
+		case CRYPTO_MODE_DES_CBC:
+		case CRYPTO_MODE_DES_ECB:
+			spacc_write_to_buf(buf, iv, 0, 8, buflen);
+			spacc_write_to_buf(buf, key, 8, ksz, buflen);
+			break;
+		case CRYPTO_MODE_KASUMI_ECB:
+		case CRYPTO_MODE_KASUMI_F8:
+			spacc_write_to_buf(buf, iv, 16, 8, buflen);
+			spacc_write_to_buf(buf, key, 0, 16, buflen);
+			break;
+		case CRYPTO_MODE_SNOW3G_UEA2:
+		case CRYPTO_MODE_ZUC_UEA3:
+			spacc_write_to_buf(buf, key, 0, 32, buflen);
+			break;
+		case CRYPTO_MODE_CHACHA20_STREAM:
+		case CRYPTO_MODE_CHACHA20_POLY1305:
+			spacc_write_to_buf(buf, key, 0, ksz, buflen);
+			spacc_write_to_buf(buf, iv, 32, ivsz, buflen);
+			break;
+		case CRYPTO_MODE_NULL:
+			break;
+		}
+
+		if (key) {
+			job->ckey_sz = SPACC_SET_CIPHER_KEY_SZ(ksz);
+			job->first_use = true;
+		}
+		pdu_to_dev_s(ctx->ciph_key, buf, buflen >> 2,
+			     spacc->config.big_endian);
+		break;
+
+	case SPACC_HASH_OPERATION:
+		/*
+		 * Get the page size and then read, so we can do a
+		 * read-modify-write cycle
+		 */
+		buflen = MIN(sizeof(buf),
+			     (u32)spacc->config.hash_page_size);
+		pdu_from_dev_s(buf, ctx->hash_key, buflen >> 2,
+			       spacc->config.big_endian);
+
+		switch (job->hash_mode) {
+		case CRYPTO_MODE_MAC_XCBC:
+		case CRYPTO_MODE_MAC_SM4_XCBC:
+			if (key) {
+				spacc_write_to_buf(buf, key + (ksz - 32), 32,
+						   32, buflen);
+				spacc_write_to_buf(buf, key, 0, (ksz - 32),
+						   buflen);
+				job->hkey_sz = SPACC_SET_HASH_KEY_SZ(ksz - 32);
+			}
+			break;
+		case CRYPTO_MODE_HASH_CRC32:
+		case CRYPTO_MODE_MAC_SNOW3G_UIA2:
+		case CRYPTO_MODE_MAC_ZUC_UIA3:
+			if (key) {
+				spacc_write_to_buf(buf, key, 0, ksz, buflen);
+				job->hkey_sz = SPACC_SET_HASH_KEY_SZ(ksz);
+			}
+			break;
+		case CRYPTO_MODE_MAC_POLY1305:
+			spacc_write_to_buf(buf, key, 0, ksz, buflen);
+			spacc_write_to_buf(buf, iv, 32, ivsz, buflen);
+			break;
+		case CRYPTO_MODE_HASH_CSHAKE128:
+		case CRYPTO_MODE_HASH_CSHAKE256:
+			/* use "iv" and "key" to pass s-string & n-string */
+			spacc_write_to_buf(buf, iv, 0, ivsz, buflen);
+			spacc_write_to_buf(buf, key,
+					   spacc->config.string_size, ksz,
+					   buflen);
+			break;
+		case CRYPTO_MODE_MAC_KMAC128:
+		case CRYPTO_MODE_MAC_KMAC256:
+		case CRYPTO_MODE_MAC_KMACXOF128:
+		case CRYPTO_MODE_MAC_KMACXOF256:
+			/* use "iv" and "key" to pass s-string & key */
+			spacc_write_to_buf(buf, iv, 0, ivsz, buflen);
+			spacc_write_to_buf(buf, key,
+					   spacc->config.string_size, ksz,
+					   buflen);
+			job->hkey_sz = SPACC_SET_HASH_KEY_SZ(ksz);
+			break;
+		default:
+			if (key) {
+				job->hkey_sz = SPACC_SET_HASH_KEY_SZ(ksz);
+				spacc_write_to_buf(buf, key, 0, ksz, buflen);
+			}
+		}
+		pdu_to_dev_s(ctx->hash_key, buf, buflen >> 2,
+			     spacc->config.big_endian);
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+int spacc_read_context(struct spacc_device *spacc, int job_idx,
+		       int op, unsigned char *key, int ksz,
+		       unsigned char *iv, int ivsz)
+{
+	int buflen;
+	int ret = 0;
+	unsigned char buf[300];
+	struct spacc_ctx *ctx = NULL;
+	struct spacc_job *job = NULL;
+
+	if (job_idx < 0 || job_idx >= SPACC_MAX_JOBS)
+		return -EINVAL;
+
+	job = &spacc->job[job_idx];
+	ctx = spacc_context_lookup_by_job(spacc, job_idx);
+
+	if (!ctx)
+		return -EIO;
+
+	switch (op) {
+	case SPACC_CRYPTO_OPERATION:
+		buflen = MIN(sizeof(buf),
+			     (u32)spacc->config.ciph_page_size);
+		pdu_from_dev_s(buf, ctx->ciph_key, buflen >> 2,
+			       spacc->config.big_endian);
+
+		switch (job->enc_mode) {
+		case CRYPTO_MODE_SM4_ECB:
+		case CRYPTO_MODE_SM4_CBC:
+		case CRYPTO_MODE_SM4_CFB:
+		case CRYPTO_MODE_SM4_OFB:
+		case CRYPTO_MODE_SM4_CTR:
+		case CRYPTO_MODE_SM4_CCM:
+		case CRYPTO_MODE_SM4_GCM:
+		case CRYPTO_MODE_SM4_CS1:
+		case CRYPTO_MODE_SM4_CS2:
+		case CRYPTO_MODE_SM4_CS3:
+		case CRYPTO_MODE_AES_ECB:
+		case CRYPTO_MODE_AES_CBC:
+		case CRYPTO_MODE_AES_CS1:
+		case CRYPTO_MODE_AES_CS2:
+		case CRYPTO_MODE_AES_CS3:
+		case CRYPTO_MODE_AES_CFB:
+		case CRYPTO_MODE_AES_OFB:
+		case CRYPTO_MODE_AES_CTR:
+		case CRYPTO_MODE_AES_CCM:
+		case CRYPTO_MODE_AES_GCM:
+			spacc_read_from_buf(key, buf, 0, ksz, buflen);
+			spacc_read_from_buf(iv, buf, 32, 16, buflen);
+			break;
+		case CRYPTO_MODE_CHACHA20_STREAM:
+			spacc_read_from_buf(key, buf, 0, ksz, buflen);
+			spacc_read_from_buf(iv, buf, 32, 16, buflen);
+			break;
+		case CRYPTO_MODE_SM4_F8:
+		case CRYPTO_MODE_AES_F8:
+			if (key) {
+				spacc_read_from_buf(key + ksz, buf, 0, ksz,
+						    buflen);
+				spacc_read_from_buf(key, buf, 48, ksz, buflen);
+			}
+			spacc_read_from_buf(iv, buf, 32, 16, buflen);
+			break;
+		case CRYPTO_MODE_SM4_XTS:
+		case CRYPTO_MODE_AES_XTS:
+			if (key) {
+				spacc_read_from_buf(key, buf, 0, ksz >> 1,
+						    buflen);
+				spacc_read_from_buf(key + (ksz >> 1), buf,
+						    48, ksz >> 1, buflen);
+			}
+			spacc_read_from_buf(iv, buf, 32, 16, buflen);
+			break;
+		case CRYPTO_MODE_MULTI2_ECB:
+		case CRYPTO_MODE_MULTI2_CBC:
+		case CRYPTO_MODE_MULTI2_OFB:
+		case CRYPTO_MODE_MULTI2_CFB:
+			spacc_read_from_buf(key, buf, 0, ksz, buflen);
+			/* the number of rounds is at the end of the IV */
+			spacc_read_from_buf(iv, buf, 0x28, ivsz, buflen);
+			break;
+		case CRYPTO_MODE_3DES_CBC:
+		case CRYPTO_MODE_3DES_ECB:
+			spacc_read_from_buf(iv, buf, 0, 8, buflen);
+			spacc_read_from_buf(key, buf, 8, 24, buflen);
+			break;
+		case CRYPTO_MODE_DES_CBC:
+		case CRYPTO_MODE_DES_ECB:
+			spacc_read_from_buf(iv, buf, 0, 8, buflen);
+			spacc_read_from_buf(key, buf, 8, 8, buflen);
+			break;
+		case CRYPTO_MODE_KASUMI_ECB:
+		case CRYPTO_MODE_KASUMI_F8:
+			spacc_read_from_buf(iv, buf, 16, 8, buflen);
+			spacc_read_from_buf(key, buf, 0, 16, buflen);
+			break;
+		case CRYPTO_MODE_SNOW3G_UEA2:
+		case CRYPTO_MODE_ZUC_UEA3:
+			spacc_read_from_buf(key, buf, 0, 32, buflen);
+			break;
+		case CRYPTO_MODE_NULL:
+			break;
+		}
+		break;
+	default:
+		ret = -EINVAL;
+	}
+	return ret;
+}
+
+/* Context manager: reset all reference counts, pointers, etc */
+void spacc_ctx_init_all(struct spacc_device *spacc)
+{
+	int x;
+	struct spacc_ctx *ctx;
+
+	/* initialize contexts */
+	for (x = 0; x < spacc->config.num_ctx; x++) {
+		ctx = &spacc->ctx[x];
+
+		/* sets everything including ref_cnt and ncontig to 0 */
+		memset(ctx, 0, sizeof(*ctx));
+
+		ctx->ciph_key = spacc->regmap + SPACC_CTX_CIPH_KEY +
+				(x * spacc->config.ciph_page_size);
+		ctx->hash_key = spacc->regmap + SPACC_CTX_HASH_KEY +
+				(x * spacc->config.hash_page_size);
+	}
+}
diff --git a/drivers/crypto/dwc-spacc/spacc_skcipher.c b/drivers/crypto/dwc-spacc/spacc_skcipher.c
new file mode 100644
index 000000000000..9e6adc62e519
--- /dev/null
+++ b/drivers/crypto/dwc-spacc/spacc_skcipher.c
@@ -0,0 +1,763 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "spacc_device.h"
+#include "spacc_core.h"
+
+static LIST_HEAD(spacc_cipher_alg_list);
+static DEFINE_MUTEX(spacc_cipher_alg_mutex);
+
+static struct mode_tab possible_ciphers[] = {
+	/* {keylen, MODE_TAB_CIPH(name, id, iv_len, blk_len)} */
+
+	/* SM4 */
+	{ MODE_TAB_CIPH("cbc(sm4)", SM4_CBC, 16, 16), .keylen[0] = 16,
+	  .chunksize = 16, .walksize = 16, .min_keysize = 16, .max_keysize = 16 },
+	{ MODE_TAB_CIPH("ecb(sm4)", SM4_ECB, 0, 16), .keylen[0] = 16,
+	  .chunksize = 16, .walksize = 16, .min_keysize = 16, .max_keysize = 16 },
+	{ MODE_TAB_CIPH("ctr(sm4)", SM4_CTR, 16, 1), .keylen[0] = 16,
+	  .chunksize = 16, .walksize = 16, .min_keysize = 16, .max_keysize = 16 },
+	{ MODE_TAB_CIPH("xts(sm4)", SM4_XTS, 16, 16), .keylen[0] = 32,
+	  .chunksize = 16, .walksize = 16, .min_keysize = 32, .max_keysize = 32 },
+	{ MODE_TAB_CIPH("cts(cbc(sm4))", SM4_CS3, 16, 16), .keylen[0] = 16,
+	  .chunksize = 16, .walksize = 16, .min_keysize = 16, .max_keysize = 16 },
+
+	/* AES */
+	{ MODE_TAB_CIPH("cbc(aes)", AES_CBC, 16, 16), .keylen = { 16, 24, 32 },
+	  .chunksize = 16, .walksize = 16, .min_keysize = 16, .max_keysize = 32 },
+	{ MODE_TAB_CIPH("ecb(aes)", AES_ECB, 0, 16), .keylen = { 16, 24, 32 },
+	  .chunksize = 16, .walksize = 16, .min_keysize = 16, .max_keysize = 32 },
+	{ MODE_TAB_CIPH("xts(aes)", AES_XTS, 16, 16), .keylen = { 32, 48, 64 },
+	  .chunksize = 16, .walksize = 16, .min_keysize = 32, .max_keysize = 64 },
+	{ MODE_TAB_CIPH("cts(cbc(aes))", AES_CS3, 16, 16),
+	  .keylen = { 16, 24, 32 }, .chunksize = 16, .walksize = 16,
+	  .min_keysize = 16, .max_keysize = 32 },
+	{ MODE_TAB_CIPH("ctr(aes)", AES_CTR, 16, 1), .keylen = { 16, 24, 32 },
+	  .chunksize = 16, .walksize = 16, .min_keysize = 16, .max_keysize = 32 },
+
+	/* CHACHA20 */
+	{ MODE_TAB_CIPH("chacha20", CHACHA20_STREAM, 16, 1), .keylen[0] = 32,
+	  .chunksize = 64, .walksize = 64, .min_keysize = 32, .max_keysize = 32 },
+
+	/* DES */
+	{ MODE_TAB_CIPH("ecb(des)", DES_ECB, 0, 8), .keylen[0] = 8,
+	  .chunksize = 8, .walksize = 8, .min_keysize = 8, .max_keysize = 8 },
+	{ MODE_TAB_CIPH("cbc(des)", DES_CBC, 8, 8), .keylen[0] = 8,
+	  .chunksize = 8, .walksize = 8, .min_keysize = 8, .max_keysize = 8 },
+	{ MODE_TAB_CIPH("ecb(des3_ede)", 3DES_ECB, 0, 8), .keylen[0] = 24,
+	  .chunksize = 8, .walksize = 8, .min_keysize = 24, .max_keysize = 24 },
+	{ MODE_TAB_CIPH("cbc(des3_ede)", 3DES_CBC, 8, 8), .keylen[0] = 24,
+	  .chunksize = 8, .walksize = 8, .min_keysize = 24, .max_keysize = 24 },
+
+};
+
+static int spacc_skcipher_fallback(unsigned char *name,
+				   struct skcipher_request *req, int enc_dec)
+{
+	int ret = 0;
+	struct crypto_skcipher *reqtfm = crypto_skcipher_reqtfm(req);
+	struct spacc_crypto_ctx *tctx = crypto_skcipher_ctx(reqtfm);
+	struct spacc_crypto_reqctx *ctx = skcipher_request_ctx(req);
+
+	skcipher_request_set_tfm(&ctx->fb.cipher_req, tctx->fb.cipher);
+	skcipher_request_set_callback(&ctx->fb.cipher_req,
+				      req->base.flags,
+				      req->base.complete,
+				      req->base.data);
+	skcipher_request_set_crypt(&ctx->fb.cipher_req, req->src, req->dst,
+				   req->cryptlen, req->iv);
+
+	if (enc_dec)
+		ret = crypto_skcipher_decrypt(&ctx->fb.cipher_req);
+	else
+		ret = crypto_skcipher_encrypt(&ctx->fb.cipher_req);
+
+	return ret;
+}
+
+static void spacc_cipher_cleanup_dma(struct device *dev,
+				     struct skcipher_request *req)
+{
+	struct spacc_crypto_reqctx *ctx = skcipher_request_ctx(req);
+
+	if (req->dst != req->src) {
+		if (ctx->src_nents) {
+			dma_unmap_sg(dev, req->src, ctx->src_nents,
+				     DMA_TO_DEVICE);
+			pdu_ddt_free(&ctx->src);
+		}
+
+		if (ctx->dst_nents) {
+			dma_unmap_sg(dev, req->dst, ctx->dst_nents,
+				     DMA_FROM_DEVICE);
+			pdu_ddt_free(&ctx->dst);
+		}
+	} else {
+		if (ctx->src_nents) {
+			dma_unmap_sg(dev, req->src, ctx->src_nents,
+				     DMA_TO_DEVICE);
+			pdu_ddt_free(&ctx->src);
+		}
+	}
+}
+
+static void spacc_cipher_cb(void *spacc, void *tfm)
+{
+	int err = -1;
+	int ret = 0;
+	struct cipher_cb_data *cb = tfm;
+	struct spacc_device *device = (struct spacc_device *)spacc;
+	struct spacc_crypto_reqctx *ctx = skcipher_request_ctx(cb->req);
+
+	u32 status_reg = readl(cb->spacc->regmap + SPACC_REG_STATUS);
+
+	/*
+	 * Extract RET_CODE field (bits 25:24) from STATUS register to check
+	 * result of the crypto operation.
+ */ + u32 status_ret =3D (status_reg >> 24) & 0x03; + + if (ctx->mode =3D=3D CRYPTO_MODE_DES_CBC || + ctx->mode =3D=3D CRYPTO_MODE_3DES_CBC) { + ret =3D spacc_read_context(cb->spacc, cb->tctx->handle, + SPACC_CRYPTO_OPERATION, NULL, 0, + cb->req->iv, 8); + if (ret) { + dev_err(cb->tctx->dev, "failed to read IV from context"); + err =3D ret; + goto CALLBACK_ERR; + } + } else if (ctx->mode !=3D CRYPTO_MODE_DES_ECB && + ctx->mode !=3D CRYPTO_MODE_3DES_ECB && + ctx->mode !=3D CRYPTO_MODE_SM4_ECB && + ctx->mode !=3D CRYPTO_MODE_AES_ECB && + ctx->mode !=3D CRYPTO_MODE_SM4_XTS && + ctx->mode !=3D CRYPTO_MODE_KASUMI_ECB) { + if (status_ret =3D=3D 0x3) { + err =3D -EINVAL; + goto CALLBACK_ERR; + } + + ret =3D spacc_read_context(cb->spacc, cb->tctx->handle, + SPACC_CRYPTO_OPERATION, NULL, 0, + cb->req->iv, 16); + if (ret) { + dev_err(cb->tctx->dev, "failed to read IV from context"); + err =3D ret; + goto CALLBACK_ERR; + } + } + + if (ctx->mode !=3D CRYPTO_MODE_DES_ECB && + ctx->mode !=3D CRYPTO_MODE_DES_CBC && + ctx->mode !=3D CRYPTO_MODE_3DES_ECB && + ctx->mode !=3D CRYPTO_MODE_3DES_CBC) { + if (status_ret =3D=3D 0x03) { + err =3D -EINVAL; + goto CALLBACK_ERR; + } + } + + if (ctx->mode =3D=3D CRYPTO_MODE_SM4_ECB && status_ret =3D=3D 0x03) { + err =3D -EINVAL; + goto CALLBACK_ERR; + } + + if (cb->req->dst !=3D cb->req->src) + dma_sync_sg_for_cpu(cb->tctx->dev, cb->req->dst, ctx->dst_nents, + DMA_FROM_DEVICE); + + err =3D cb->spacc->job[cb->new_handle].job_err; + +CALLBACK_ERR: + spacc_cipher_cleanup_dma(cb->tctx->dev, cb->req); + spacc_close(cb->spacc, cb->new_handle); + + local_bh_disable(); + skcipher_request_complete(cb->req, err); + local_bh_enable(); + + if (atomic_read(&device->wait_counter) > 0) { + struct spacc_completion *cur_pos, *next_pos; + + /* wake up waitqueue to obtain a context */ + atomic_dec(&device->wait_counter); + if (atomic_read(&device->wait_counter) > 0) { + mutex_lock(&device->spacc_waitq_mutex); + list_for_each_entry_safe(cur_pos, next_pos, + 
&device->spacc_wait_list, + list) { + if (cur_pos && cur_pos->wait_done =3D=3D 1) { + cur_pos->wait_done =3D 0; + complete(&cur_pos->spacc_wait_complete); + list_del(&cur_pos->list); + break; + } + } + mutex_unlock(&device->spacc_waitq_mutex); + } + } +} + +static int spacc_cipher_init_dma(struct device *dev, + struct skcipher_request *req) +{ + struct spacc_crypto_reqctx *ctx =3D skcipher_request_ctx(req); + int rc =3D 0; + + if (req->src =3D=3D req->dst) { + rc =3D spacc_sg_to_ddt(dev, req->src, req->cryptlen, &ctx->src, + DMA_TO_DEVICE); + if (rc < 0) { + pdu_ddt_free(&ctx->src); + return rc; + } + ctx->src_nents =3D rc; + } else { + rc =3D spacc_sg_to_ddt(dev, req->src, req->cryptlen, &ctx->src, + DMA_TO_DEVICE); + if (rc < 0) { + pdu_ddt_free(&ctx->src); + return rc; + } + ctx->src_nents =3D rc; + + rc =3D spacc_sg_to_ddt(dev, req->dst, req->cryptlen, &ctx->dst, + DMA_FROM_DEVICE); + if (rc < 0) { + pdu_ddt_free(&ctx->dst); + return rc; + } + ctx->dst_nents =3D rc; + } + + return 0; +} + +static int spacc_cipher_init_tfm(struct crypto_skcipher *tfm) +{ + const char *name =3D crypto_tfm_alg_name(&tfm->base); + struct spacc_crypto_ctx *ctx =3D crypto_skcipher_ctx(tfm); + const struct spacc_alg *salg =3D spacc_tfm_skcipher(&tfm->base); + + ctx->keylen =3D 0; + ctx->cipher_key =3D NULL; + ctx->handle =3D -1; + ctx->ctx_valid =3D false; + ctx->dev =3D get_device(salg->dev); + + ctx->fb.cipher =3D crypto_alloc_skcipher(name, 0, + CRYPTO_ALG_NEED_FALLBACK); + if (IS_ERR(ctx->fb.cipher)) { + dev_err(ctx->dev, "Error allocating fallback algo %s\n", name); + return PTR_ERR(ctx->fb.cipher); + } + + crypto_skcipher_set_reqsize(tfm, + sizeof(struct spacc_crypto_reqctx) + + crypto_skcipher_reqsize(ctx->fb.cipher)); + + return 0; +} + +static void spacc_cipher_exit_tfm(struct crypto_skcipher *tfm) +{ + struct spacc_crypto_ctx *ctx =3D crypto_skcipher_ctx(tfm); + + crypto_free_skcipher(ctx->fb.cipher); +} + +static int spacc_check_keylen(const struct spacc_alg *salg, unsigned 
int keylen) +{ + unsigned int index; + + for (index =3D 0; index < ARRAY_SIZE(salg->mode->keylen); index++) + if (salg->mode->keylen[index] =3D=3D keylen) + return 0; + return -EINVAL; +} + +static int spacc_cipher_setkey(struct crypto_skcipher *tfm, const u8 *key, + unsigned int keylen) +{ + int ret =3D 0, rc =3D 0, err =3D 0; + const struct spacc_alg *salg =3D spacc_tfm_skcipher(&tfm->base); + struct spacc_crypto_ctx *tctx =3D crypto_skcipher_ctx(tfm); + struct spacc_priv *priv =3D dev_get_drvdata(tctx->dev); + + err =3D spacc_check_keylen(salg, keylen); + if (err) + return err; + + /* close handle since key size may have changed */ + if (tctx->handle >=3D 0) { + spacc_close(&priv->spacc, tctx->handle); + put_device(tctx->dev); + tctx->handle =3D -1; + tctx->dev =3D NULL; + } + + /* reset priv */ + priv =3D dev_get_drvdata(salg->dev); + tctx->dev =3D get_device(salg->dev); + ret =3D spacc_is_mode_keysize_supported(&priv->spacc, salg->mode->id, + keylen, 0); + if (ret) { + /* Grab the spacc context if no one is waiting */ + tctx->handle =3D spacc_open(&priv->spacc, salg->mode->id, + CRYPTO_MODE_NULL, -1, 0, + spacc_cipher_cb, tfm); + + if (tctx->handle < 0) { + put_device(salg->dev); + dev_dbg(salg->dev, "Failed to open SPAcc context\n"); + return -EIO; + } + + } else { + dev_err(salg->dev, "Keylen: %d not enabled for algo: %d\n", + keylen, salg->mode->id); + } + + /* weak key check for DES_ECB */ + if (salg->mode->id =3D=3D CRYPTO_MODE_DES_ECB) { + err =3D verify_skcipher_des_key(tfm, key); + if (err) { + dev_dbg(salg->dev, "setkey: DES ECB failed\n"); + return err; + } + } + + if (salg->mode->id =3D=3D CRYPTO_MODE_SM4_F8 || + salg->mode->id =3D=3D CRYPTO_MODE_AES_F8) { + /* + * F8 mode requires an IV of 128-bits and a key-salt mask, + * equivalent in size to the key. + * AES-F8 or SM4-F8 mode has a SALTKEY prepended to the base + * key.
+ */ + rc =3D spacc_write_context(&priv->spacc, tctx->handle, + SPACC_CRYPTO_OPERATION, key, 16, + NULL, 0); + } else { + rc =3D spacc_write_context(&priv->spacc, tctx->handle, + SPACC_CRYPTO_OPERATION, key, keylen, + NULL, 0); + } + + if (rc < 0) { + dev_err(salg->dev, "Setkey: Failed to write SPAcc context\n"); + return -EINVAL; + } + + ret =3D crypto_skcipher_setkey(tctx->fb.cipher, key, + keylen); + return ret; +} + +static int spacc_cipher_process(struct skcipher_request *req, int enc_dec) +{ + u32 diff; + int i =3D 0; + int j =3D 0; + int rc =3D 0; + u32 diff64; + u8 ivc1[16]; + int ret =3D 0; + u32 num_iv =3D 0; + u64 num_iv64 =3D 0; + unsigned char *name; + unsigned int len =3D 0; + unsigned char chacha20_iv[16]; + struct crypto_skcipher *reqtfm =3D crypto_skcipher_reqtfm(req); + struct spacc_crypto_ctx *tctx =3D crypto_skcipher_ctx(reqtfm); + struct spacc_crypto_reqctx *ctx =3D skcipher_request_ctx(req); + struct spacc_priv *priv =3D dev_get_drvdata(tctx->dev); + const struct spacc_alg *salg =3D spacc_tfm_skcipher(&reqtfm->base); + struct spacc_device *device_h =3D &priv->spacc; + + len =3D ctx->spacc_cipher_cryptlen / 16; + + if (req->cryptlen =3D=3D 0) { + if (salg->mode->id =3D=3D CRYPTO_MODE_SM4_CS3 || + salg->mode->id =3D=3D CRYPTO_MODE_SM4_XTS || + salg->mode->id =3D=3D CRYPTO_MODE_AES_XTS || + salg->mode->id =3D=3D CRYPTO_MODE_AES_CS3) + return -EINVAL; + else + return 0; + } + + /* + * Given IV - <1st 4-bytes as counter value> + * + * Reversing the order of nonce & counter as, + * <1st 12-bytes as nonce> + * + * and then write to HW context, + * ex: + * Given IV - 2a000000000000000000000000000002 + * Reverse order - 0000000000000000000000020000002a + */ + if (salg->mode->id =3D=3D CRYPTO_MODE_CHACHA20_STREAM) { + for (i =3D 4; i < 16; i++) { + chacha20_iv[j] =3D req->iv[i]; + j++; + } + + j =3D j + 3; + + for (i =3D 0; i <=3D 3; i++) { + chacha20_iv[j] =3D req->iv[i]; + j--; + } + memcpy(req->iv, chacha20_iv, 16); + } + + if (salg->mode->id =3D=3D 
CRYPTO_MODE_SM4_CFB) { + if (req->cryptlen % 16 !=3D 0) { + name =3D salg->calg->cra_name; + ret =3D spacc_skcipher_fallback(name, req, enc_dec); + return ret; + } + } + + if (salg->mode->id =3D=3D CRYPTO_MODE_SM4_XTS || + salg->mode->id =3D=3D CRYPTO_MODE_SM4_CS3 || + salg->mode->id =3D=3D CRYPTO_MODE_AES_XTS || + salg->mode->id =3D=3D CRYPTO_MODE_AES_CS3) { + if (req->cryptlen =3D=3D 16) { + name =3D salg->calg->cra_name; + ret =3D spacc_skcipher_fallback(name, req, enc_dec); + return ret; + } + } + if (salg->mode->id =3D=3D CRYPTO_MODE_AES_CTR || + salg->mode->id =3D=3D CRYPTO_MODE_SM4_CTR) { + /* copy the IV to local buffer */ + for (i =3D 0; i < 16; i++) + ivc1[i] =3D req->iv[i]; + + /* 32-bit counter width */ + if (readl(device_h->regmap + SPACC_REG_VERSION_EXT_3) & (0x2)) { + for (i =3D 12; i < 16; i++) { + num_iv <<=3D 8; + num_iv |=3D ivc1[i]; + } + + diff =3D SPACC_CTR_IV_MAX32 - num_iv; + + if (len > diff) { + name =3D salg->calg->cra_name; + ret =3D spacc_skcipher_fallback(name, + req, enc_dec); + return ret; + } + /* 64-bit counter width */ + } else if (readl(device_h->regmap + SPACC_REG_VERSION_EXT_3) + & (0x3)) { + for (i =3D 8; i < 16; i++) { + num_iv64 <<=3D 8; + num_iv64 |=3D ivc1[i]; + } + + diff64 =3D SPACC_CTR_IV_MAX64 - num_iv64; + + if (len > diff64) { + name =3D salg->calg->cra_name; + ret =3D spacc_skcipher_fallback(name, + req, enc_dec); + return ret; + } + /* 16-bit counter width */ + } else if (readl(device_h->regmap + SPACC_REG_VERSION_EXT_3) + & (0x1)) { + for (i =3D 14; i < 16; i++) { + num_iv <<=3D 8; + num_iv |=3D ivc1[i]; + } + + diff =3D SPACC_CTR_IV_MAX16 - num_iv; + + if (len > diff) { + name =3D salg->calg->cra_name; + ret =3D spacc_skcipher_fallback(name, + req, enc_dec); + return ret; + } + /* 8-bit counter width */ + } else if ((readl(device_h->regmap + SPACC_REG_VERSION_EXT_3) + & 0x7) =3D=3D 0) { + for (i =3D 15; i < 16; i++) { + num_iv <<=3D 8; + num_iv |=3D ivc1[i]; + } + + diff =3D SPACC_CTR_IV_MAX8 - num_iv; + + if 
(len > diff) { + name =3D salg->calg->cra_name; + ret =3D spacc_skcipher_fallback(name, + req, enc_dec); + return ret; + } + } + } + + if (salg->mode->id =3D=3D CRYPTO_MODE_DES_CBC || + salg->mode->id =3D=3D CRYPTO_MODE_3DES_CBC) + rc =3D spacc_write_context(&priv->spacc, tctx->handle, + SPACC_CRYPTO_OPERATION, NULL, 0, + req->iv, 8); + else if (salg->mode->id !=3D CRYPTO_MODE_DES_ECB && + salg->mode->id !=3D CRYPTO_MODE_3DES_ECB && + salg->mode->id !=3D CRYPTO_MODE_SM4_ECB && + salg->mode->id !=3D CRYPTO_MODE_AES_ECB && + salg->mode->id !=3D CRYPTO_MODE_KASUMI_ECB) + rc =3D spacc_write_context(&priv->spacc, tctx->handle, + SPACC_CRYPTO_OPERATION, NULL, 0, + req->iv, 16); + + if (rc < 0) + dev_err(salg->dev, "SPAcc write context error\n"); + + /* initialize the DMA */ + rc =3D spacc_cipher_init_dma(tctx->dev, req); + + ctx->ccb.new_handle =3D spacc_clone_handle(&priv->spacc, tctx->handle, + &ctx->ccb); + if (ctx->ccb.new_handle < 0) { + spacc_cipher_cleanup_dma(tctx->dev, req); + dev_err(salg->dev, "Failed to clone handle\n"); + return -EINVAL; + } + + /* copying the data to clone handle */ + ctx->ccb.tctx =3D tctx; + ctx->ccb.ctx =3D ctx; + ctx->ccb.req =3D req; + ctx->ccb.spacc =3D &priv->spacc; + ctx->mode =3D salg->mode->id; + + if (salg->mode->id =3D=3D CRYPTO_MODE_SM4_CS3) { + int handle =3D ctx->ccb.new_handle; + + if (handle < 0 || handle > SPACC_MAX_JOBS) + return -EINVAL; + + device_h->job[handle].auxinfo_cs_mode =3D 3; + } + + if (enc_dec) { /* decrypt */ + rc =3D spacc_set_operation(&priv->spacc, ctx->ccb.new_handle, 1, + ICV_IGNORE, IP_ICV_IGNORE, 0, 0, 0); + spacc_set_key_exp(&priv->spacc, ctx->ccb.new_handle); + } else { /* encrypt */ + rc =3D spacc_set_operation(&priv->spacc, ctx->ccb.new_handle, 0, + ICV_IGNORE, IP_ICV_IGNORE, 0, 0, 0); + } + + rc =3D spacc_packet_enqueue_ddt(&priv->spacc, ctx->ccb.new_handle, + &ctx->src, + (req->dst =3D=3D req->src) ? 
&ctx->src : + &ctx->dst, + req->cryptlen, + 0, 0, 0, 0, 0); + if (rc < 0) { + spacc_cipher_cleanup_dma(tctx->dev, req); + spacc_close(&priv->spacc, ctx->ccb.new_handle); + + if (rc !=3D -EBUSY && rc < 0) { + dev_err(tctx->dev, + "Failed to enqueue job: %d\n", rc); + return rc; + } else if (!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) { + return -EBUSY; + } + } + + priv->spacc.job[tctx->handle].first_use =3D 0; + priv->spacc.job[tctx->handle].ctrl &=3D + ~(1UL << priv->spacc.config.ctrl_map[SPACC_CTRL_KEY_EXP]); + + return -EINPROGRESS; +} + +static int spacc_cipher_encrypt(struct skcipher_request *req) +{ + int rv =3D 0; + struct spacc_crypto_reqctx *ctx =3D skcipher_request_ctx(req); + + ctx->spacc_cipher_cryptlen =3D req->cryptlen; + + /* enc_dec - 0(encrypt), 1(decrypt) */ + rv =3D spacc_cipher_process(req, 0); + + return rv; +} + +static int spacc_cipher_decrypt(struct skcipher_request *req) +{ + int rv =3D 0; + struct spacc_crypto_reqctx *ctx =3D skcipher_request_ctx(req); + + ctx->spacc_cipher_cryptlen =3D req->cryptlen; + + /* enc_dec - 0(encrypt), 1(decrypt) */ + rv =3D spacc_cipher_process(req, 1); + + return rv; +} + +static struct skcipher_alg spacc_skcipher_alg =3D { + .init =3D spacc_cipher_init_tfm, + .exit =3D spacc_cipher_exit_tfm, + .setkey =3D spacc_cipher_setkey, + .encrypt =3D spacc_cipher_encrypt, + .decrypt =3D spacc_cipher_decrypt, + /* + * Chunksize: Equal to the block size except for stream cipher + * such as CTR where it is set to the underlying block size. + * + * Walksize: Equal to the chunk size except in cases where the + * algorithm is considerably more efficient if it can operate on + * multiple chunks in parallel. Should be a multiple of chunksize. 
+ */ + .min_keysize =3D 16, + .max_keysize =3D 64, + .ivsize =3D 16, + .chunksize =3D 16, + .walksize =3D 16, + .base =3D { + .cra_flags =3D CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_TYPE_CIPHER | + CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK | + CRYPTO_ALG_ALLOCATES_MEMORY, + .cra_blocksize =3D 16, + .cra_ctxsize =3D sizeof(struct spacc_crypto_ctx), + .cra_priority =3D 300, + .cra_module =3D THIS_MODULE, + }, +}; + +static void spacc_init_calg(struct crypto_alg *calg, + const struct mode_tab *mode) +{ + strscpy(calg->cra_name, mode->name, sizeof(mode->name) - 1); + calg->cra_name[sizeof(mode->name) - 1] =3D '\0'; + + strscpy(calg->cra_driver_name, "spacc-"); + strcat(calg->cra_driver_name, mode->name); + calg->cra_driver_name[sizeof(calg->cra_driver_name) - 1] =3D '\0'; + calg->cra_blocksize =3D mode->blocklen; +} + +static int spacc_register_cipher(struct spacc_alg *salg, + unsigned int algo_idx) +{ + int rc =3D 0; + + salg->calg =3D &salg->alg.skcipher.base; + salg->alg.skcipher =3D spacc_skcipher_alg; + + /* + * This function will assign mode->name to calg->cra_name & + * calg->cra_driver_name + */ + spacc_init_calg(salg->calg, salg->mode); + salg->alg.skcipher.ivsize =3D salg->mode->ivlen; + salg->alg.skcipher.base.cra_blocksize =3D salg->mode->blocklen; + + salg->alg.skcipher.chunksize =3D possible_ciphers[algo_idx].chunksize; + salg->alg.skcipher.walksize =3D possible_ciphers[algo_idx].walksize; + salg->alg.skcipher.min_keysize =3D possible_ciphers[algo_idx].min_keysize; + salg->alg.skcipher.max_keysize =3D possible_ciphers[algo_idx].max_keysize; + + rc =3D crypto_register_skcipher(&salg->alg.skcipher); + if (rc < 0) + return rc; + + mutex_lock(&spacc_cipher_alg_mutex); + list_add(&salg->list, &spacc_cipher_alg_list); + mutex_unlock(&spacc_cipher_alg_mutex); + return 0; +} + +int spacc_probe_ciphers(struct platform_device *spacc_pdev) +{ + int rc =3D 0; + unsigned int index, y; + int registered =3D 0; + struct spacc_alg *salg; + struct spacc_priv *priv =3D 
dev_get_drvdata(&spacc_pdev->dev); + + for (index =3D 0; index < ARRAY_SIZE(possible_ciphers); index++) { + possible_ciphers[index].valid =3D 0; + possible_ciphers[index].keylen_mask =3D 0; + } + /* compute keylen mask */ + for (index =3D 0; index < ARRAY_SIZE(possible_ciphers); index++) { + for (y =3D 0; y < ARRAY_SIZE(possible_ciphers[index].keylen); y++) { + if (spacc_is_mode_keysize_supported(&priv->spacc, + possible_ciphers[index].id & 0xFF, + possible_ciphers[index].keylen[y], + 0)) { + possible_ciphers[index].keylen_mask |=3D 1u << y; + } + } + } + /* Registering the valid ciphers */ + for (index =3D 0; index < ARRAY_SIZE(possible_ciphers); index++) { + if (!possible_ciphers[index].valid && + possible_ciphers[index].keylen_mask) { + salg =3D kmalloc(sizeof(*salg), GFP_KERNEL); + if (!salg) + return -ENOMEM; + + salg->mode =3D &possible_ciphers[index]; + salg->dev =3D &spacc_pdev->dev; + + rc =3D spacc_register_cipher(salg, index); + if (rc < 0) { + kfree(salg); + continue; + } + + dev_dbg(&spacc_pdev->dev, "Registered %s\n", + possible_ciphers[index].name); + registered++; + possible_ciphers[index].valid =3D 1; + } + } + return registered; +} + +int spacc_unregister_cipher_algs(void) +{ + struct spacc_alg *salg, *tmp; + + mutex_lock(&spacc_cipher_alg_mutex); + + list_for_each_entry_safe(salg, tmp, &spacc_cipher_alg_list, list) { + crypto_unregister_skcipher(&salg->alg.skcipher); + list_del(&salg->list); + kfree(salg); + } + + mutex_unlock(&spacc_cipher_alg_mutex); + + return 0; +} --=20 2.25.1
From: Pavitrakumar Managutte To: linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, herbert@gondor.apana.org.au, robh@kernel.org Cc: krzk+dt@kernel.org, conor+dt@kernel.org,
Ruud.Derwig@synopsys.com, manjunath.hadli@vayavyalabs.com, adityak@vayavyalabs.com, Pavitrakumar Managutte Subject: [PATCH v3 3/6] Add SPAcc AUTODETECT Support Date: Mon, 2 Jun 2025 11:02:28 +0530 Message-Id: <20250602053231.403143-4-pavitrakumarm@vayavyalabs.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20250602053231.403143-1-pavitrakumarm@vayavyalabs.com> References: <20250602053231.403143-1-pavitrakumarm@vayavyalabs.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" SPAcc is configurable and supports the following modes: 1. AUTODETECT configuration - the supported algorithms are detected at probe time. 2. Static configuration - the supported algorithms are defined statically. Co-developed-by: Aditya Kulkarni Signed-off-by: Aditya Kulkarni Signed-off-by: Pavitrakumar Managutte Acked-by: Ruud Derwig --- drivers/crypto/dwc-spacc/spacc_core.c | 1094 +++++++++++++++++++++++++ 1 file changed, 1094 insertions(+) diff --git a/drivers/crypto/dwc-spacc/spacc_core.c b/drivers/crypto/dwc-spacc/spacc_core.c index 2363f2db34ba..be9e84bb5c67 100644 --- a/drivers/crypto/dwc-spacc/spacc_core.c +++ b/drivers/crypto/dwc-spacc/spacc_core.c @@ -200,6 +200,882 @@ static const unsigned char template[] =3D { [CRYPTO_MODE_MAC_SM4_CMAC] =3D 242, }; =20 +#if IS_ENABLED(CONFIG_CRYPTO_DEV_SPACC_AUTODETECT) +static const struct { + unsigned int min_version; + struct { + int outlen; + unsigned char data[64]; + } test[7]; +} testdata[CRYPTO_MODE_LAST] =3D { + /* NULL */ + { .min_version =3D 0x65, + .test[0].outlen =3D 0 + }, + + /* AES_ECB */ + { .min_version =3D 0x65, + .test[2].outlen =3D 16, .test[2].data =3D { 0xc6, 0xa1, 0x3b, 0x37, + 0x87, 0x8f, 0x5b, 0x82, 0x6f, 0x4f, 0x81, 0x62, 0xa1, + 0xc8, 0xd8, 0x79, }, + .test[3].outlen =3D 16, .test[3].data =3D { 0x91, 0x62, 0x51, 0x82, + 0x1c, 0x73, 0xa5, 0x22, 0xc3, 0x96, 0xd6, 0x27, 0x38, + 0x01, 0x96,
0x07, }, + .test[4].outlen =3D 16, .test[4].data =3D { 0xf2, 0x90, 0x00, 0xb6, + 0x2a, 0x49, 0x9f, 0xd0, 0xa9, 0xf3, 0x9a, 0x6a, 0xdd, + 0x2e, 0x77, 0x80, }, + }, + + /* AES_CBC */ + { .min_version =3D 0x65, + .test[2].outlen =3D 16, .test[2].data =3D { 0x0a, 0x94, 0x0b, 0xb5, + 0x41, 0x6e, 0xf0, 0x45, 0xf1, 0xc3, 0x94, 0x58, 0xc6, + 0x53, 0xea, 0x5a, }, + .test[3].outlen =3D 16, .test[3].data =3D { 0x00, 0x60, 0xbf, 0xfe, + 0x46, 0x83, 0x4b, 0xb8, 0xda, 0x5c, 0xf9, 0xa6, 0x1f, + 0xf2, 0x20, 0xae, }, + .test[4].outlen =3D 16, .test[4].data =3D { 0x5a, 0x6e, 0x04, 0x57, + 0x08, 0xfb, 0x71, 0x96, 0xf0, 0x2e, 0x55, 0x3d, 0x02, + 0xc3, 0xa6, 0x92, }, + }, + + /* AES_CTR */ + { .min_version =3D 0x65, + .test[2].outlen =3D 16, .test[2].data =3D { 0x0a, 0x94, 0x0b, 0xb5, + 0x41, 0x6e, 0xf0, 0x45, 0xf1, 0xc3, 0x94, 0x58, 0xc6, + 0x53, 0xea, 0x5a, }, + .test[3].outlen =3D 16, .test[3].data =3D { 0x00, 0x60, 0xbf, 0xfe, + 0x46, 0x83, 0x4b, 0xb8, 0xda, 0x5c, 0xf9, 0xa6, 0x1f, + 0xf2, 0x20, 0xae, }, + .test[4].outlen =3D 16, .test[4].data =3D { 0x5a, 0x6e, 0x04, 0x57, + 0x08, 0xfb, 0x71, 0x96, 0xf0, 0x2e, 0x55, 0x3d, 0x02, + 0xc3, 0xa6, 0x92, }, + }, + + /* AES_CCM */ + { .min_version =3D 0x65, + .test[2].outlen =3D 32, .test[2].data =3D { 0x02, 0x63, 0xec, 0x94, + 0x66, 0x18, 0x72, 0x96, 0x9a, 0xda, 0xfd, 0x0f, 0x4b, + 0xa4, 0x0f, 0xdc, 0xa5, 0x09, 0x92, 0x93, 0xb6, 0xb4, + 0x38, 0x34, 0x63, 0x72, 0x50, 0x4c, 0xfc, 0x8a, 0x63, + 0x02, }, + .test[3].outlen =3D 32, .test[3].data =3D { 0x29, 0xf7, 0x63, 0xe8, + 0xa1, 0x75, 0xc6, 0xbf, 0xa5, 0x54, 0x94, 0x89, 0x12, + 0x84, 0x45, 0xf5, 0x9b, 0x27, 0xeb, 0xb1, 0xa4, 0x65, + 0x93, 0x6e, 0x5a, 0xc0, 0xa2, 0xa3, 0xe2, 0x6c, 0x46, + 0x29, }, + .test[4].outlen =3D 32, .test[4].data =3D { 0x60, 0xf3, 0x10, 0xd5, + 0xc3, 0x85, 0x58, 0x5d, 0x55, 0x16, 0xfb, 0x51, 0x72, + 0xe5, 0x20, 0xcf, 0x8e, 0x87, 0x6d, 0x72, 0xc8, 0x44, + 0xbe, 0x6d, 0xa2, 0xd6, 0xf4, 0xba, 0xec, 0xb4, 0xec, + 0x39, }, + }, + + /* AES_GCM */ + { .min_version =3D 0x65, 
+ .test[2].outlen =3D 32, .test[2].data =3D { 0x93, 0x6c, 0xa7, 0xce, + 0x66, 0x1b, 0xf7, 0x54, 0x4b, 0xd2, 0x61, 0x8a, 0x36, + 0xa3, 0x70, 0x08, 0xc0, 0xd7, 0xd0, 0x77, 0xc5, 0x64, + 0x76, 0xdb, 0x48, 0x4a, 0x53, 0xe3, 0x6c, 0x93, 0x34, + 0x0f, }, + .test[3].outlen =3D 32, .test[3].data =3D { 0xe6, 0xf9, 0x22, 0x9b, + 0x99, 0xb9, 0xc9, 0x0e, 0xd0, 0x33, 0xdc, 0x82, 0xff, + 0xa9, 0xdc, 0x70, 0x4c, 0xcd, 0xc4, 0x1b, 0xa3, 0x5a, + 0x87, 0x5d, 0xd8, 0xef, 0xb6, 0x48, 0xbb, 0x0c, 0x92, + 0x60, }, + .test[4].outlen =3D 32, .test[4].data =3D { 0x47, 0x02, 0xd6, 0x1b, + 0xc5, 0xe5, 0xc2, 0x1b, 0x8d, 0x41, 0x97, 0x8b, 0xb1, + 0xe9, 0x78, 0x6d, 0x48, 0x6f, 0x78, 0x81, 0xc7, 0x98, + 0xcc, 0xf5, 0x28, 0xf1, 0x01, 0x7c, 0xe8, 0xf6, 0x09, + 0x78, }, + }, + + /* AES-F8 */ + { .min_version =3D 0x65, + .test[0].outlen =3D 0 + }, + + /* AES-XTS */ + { .min_version =3D 0x65, + .test[2].outlen =3D 32, .test[2].data =3D { 0xa0, 0x1a, 0x6f, 0x09, + 0xfa, 0xef, 0xd2, 0x72, 0xc3, 0x9b, 0xad, 0x35, 0x52, + 0xfc, 0xa1, 0xcb, 0x33, 0x69, 0x51, 0xc5, 0x23, 0xbe, + 0xac, 0xa5, 0x4a, 0xf2, 0xfc, 0x77, 0x71, 0x6f, 0x9a, + 0x86, }, + .test[4].outlen =3D 32, .test[4].data =3D { 0x05, 0x45, 0x91, 0x86, + 0xf2, 0x2d, 0x97, 0x93, 0xf3, 0xa0, 0xbb, 0x29, 0xc7, + 0x9c, 0xc1, 0x4c, 0x3b, 0x8f, 0xdd, 0x9d, 0xda, 0xc7, + 0xb5, 0xaa, 0xc2, 0x7c, 0x2e, 0x71, 0xce, 0x7f, 0xce, + 0x0e, }, + }, + + /* AES-CFB */ + { .min_version =3D 0x65, + .test[0].outlen =3D 0 + }, + + /* AES-OFB */ + { .min_version =3D 0x65, + .test[0].outlen =3D 0 + }, + + /* AES-CS1 */ + { .min_version =3D 0x65, + .test[2].outlen =3D 31, .test[2].data =3D { 0x0a, 0x94, 0x0b, 0xb5, + 0x41, 0x6e, 0xf0, 0x45, 0xf1, 0xc3, 0x94, 0x58, 0xc6, + 0x53, 0xea, 0xae, 0xe7, 0x1e, 0xa5, 0x41, 0xd7, 0xae, + 0x4b, 0xeb, 0x60, 0xbe, 0xcc, 0x59, 0x3f, 0xb6, 0x63, + }, + .test[3].outlen =3D 31, .test[3].data =3D { 0x00, 0x60, 0xbf, 0xfe, + 0x46, 0x83, 0x4b, 0xb8, 0xda, 0x5c, 0xf9, 0xa6, 0x1f, + 0xf2, 0x20, 0x2e, 0x84, 0xcb, 0x12, 0xa3, 0x59, 0x17, + 0xb0, 
0x9e, 0x25, 0xa2, 0xa2, 0x3d, 0xf1, 0x9f, 0xdc, + }, + .test[4].outlen =3D 31, .test[4].data =3D { 0x5a, 0x6e, 0x04, 0x57, + 0x08, 0xfb, 0x71, 0x96, 0xf0, 0x2e, 0x55, 0x3d, 0x02, + 0xc3, 0xa6, 0xcd, 0xfc, 0x25, 0x35, 0x31, 0x0b, 0xf5, + 0x6b, 0x2e, 0xb7, 0x8a, 0xa2, 0x5a, 0xdd, 0x77, 0x51, + }, + }, + + /* AES-CS2 */ + { .min_version =3D 0x65, + .test[2].outlen =3D 31, .test[2].data =3D { 0xae, 0xe7, 0x1e, 0xa5, + 0x41, 0xd7, 0xae, 0x4b, 0xeb, 0x60, 0xbe, 0xcc, 0x59, + 0x3f, 0xb6, 0x63, 0x0a, 0x94, 0x0b, 0xb5, 0x41, 0x6e, + 0xf0, 0x45, 0xf1, 0xc3, 0x94, 0x58, 0xc6, 0x53, 0xea, + }, + .test[3].outlen =3D 31, .test[3].data =3D { 0x2e, 0x84, 0xcb, 0x12, + 0xa3, 0x59, 0x17, 0xb0, 0x9e, 0x25, 0xa2, 0xa2, 0x3d, + 0xf1, 0x9f, 0xdc, 0x00, 0x60, 0xbf, 0xfe, 0x46, 0x83, + 0x4b, 0xb8, 0xda, 0x5c, 0xf9, 0xa6, 0x1f, 0xf2, 0x20, + }, + .test[4].outlen =3D 31, .test[4].data =3D { 0xcd, 0xfc, 0x25, 0x35, + 0x31, 0x0b, 0xf5, 0x6b, 0x2e, 0xb7, 0x8a, 0xa2, 0x5a, + 0xdd, 0x77, 0x51, 0x5a, 0x6e, 0x04, 0x57, 0x08, 0xfb, + 0x71, 0x96, 0xf0, 0x2e, 0x55, 0x3d, 0x02, 0xc3, 0xa6, + }, + }, + + /* AES-CS3 */ + { .min_version =3D 0x65, + .test[2].outlen =3D 31, .test[2].data =3D { 0xae, 0xe7, 0x1e, 0xa5, + 0x41, 0xd7, 0xae, 0x4b, 0xeb, 0x60, 0xbe, 0xcc, 0x59, + 0x3f, 0xb6, 0x63, 0x0a, 0x94, 0x0b, 0xb5, 0x41, 0x6e, + 0xf0, 0x45, 0xf1, 0xc3, 0x94, 0x58, 0xc6, 0x53, 0xea, + }, + .test[3].outlen =3D 31, .test[3].data =3D { 0x2e, 0x84, 0xcb, 0x12, + 0xa3, 0x59, 0x17, 0xb0, 0x9e, 0x25, 0xa2, 0xa2, 0x3d, + 0xf1, 0x9f, 0xdc, 0x00, 0x60, 0xbf, 0xfe, 0x46, 0x83, + 0x4b, 0xb8, 0xda, 0x5c, 0xf9, 0xa6, 0x1f, 0xf2, 0x20, + }, + .test[4].outlen =3D 31, .test[4].data =3D { 0xcd, 0xfc, 0x25, 0x35, + 0x31, 0x0b, 0xf5, 0x6b, 0x2e, 0xb7, 0x8a, 0xa2, 0x5a, + 0xdd, 0x77, 0x51, 0x5a, 0x6e, 0x04, 0x57, 0x08, 0xfb, + 0x71, 0x96, 0xf0, 0x2e, 0x55, 0x3d, 0x02, 0xc3, 0xa6, + }, + }, + + /* MULTI2 */ + { .min_version =3D 0x65, + .test[0].outlen =3D 0 + }, + { .min_version =3D 0x65, + .test[0].outlen =3D 0 + }, + { 
.min_version =3D 0x65, + .test[0].outlen =3D 0 + }, + { .min_version =3D 0x65, + .test[0].outlen =3D 0 + }, + + /* 3DES_CBC */ + { .min_version =3D 0x65, + .test[3].outlen =3D 16, .test[3].data =3D { 0x58, 0xed, 0x24, 0x8f, + 0x77, 0xf6, 0xb1, 0x9e, 0x47, 0xd9, 0xb7, 0x4a, 0x4f, + 0x5a, 0xe6, 0x6d, } + }, + + /* 3DES_ECB */ + { .min_version =3D 0x65, + .test[3].outlen =3D 16, .test[3].data =3D { 0x89, 0x4b, 0xc3, 0x08, + 0x54, 0x26, 0xa4, 0x41, 0x89, 0x4b, 0xc3, 0x08, 0x54, + 0x26, 0xa4, 0x41, } + }, + + /* DES_CBC */ + { .min_version =3D 0x65, + .test[1].outlen =3D 16, .test[1].data =3D { 0xe1, 0xb2, 0x46, 0xe5, + 0xa7, 0xc7, 0x4c, 0xbc, 0xd5, 0xf0, 0x8e, 0x25, 0x3b, + 0xfa, 0x23, 0x80, } + }, + + /* DES_ECB */ + { .min_version =3D 0x65, + .test[1].outlen =3D 16, .test[1].data =3D { 0xa5, 0x17, 0x3a, + 0xd5, 0x95, 0x7b, 0x43, 0x70, 0xa5, 0x17, 0x3a, 0xd5, + 0x95, 0x7b, 0x43, 0x70, } + }, + + /* KASUMI_ECB */ + { .min_version =3D 0x65, + .test[2].outlen =3D 16, .test[2].data =3D { 0x04, 0x7d, 0x5d, + 0x2c, 0x8c, 0x2e, 0x91, 0xb3, 0x04, 0x7d, 0x5d, 0x2c, + 0x8c, 0x2e, 0x91, 0xb3, } }, + + /* KASUMI_F8 */ + { .min_version =3D 0x65, + .test[2].outlen =3D 16, .test[2].data =3D { 0xfc, 0xf7, 0x45, + 0xee, 0x1d, 0xbb, 0xa4, 0x57, 0xa7, 0x45, 0xdc, 0x6b, + 0x2a, 0x1b, 0x50, 0x88, } + }, + + /* SNOW3G UEA2 */ + { .min_version =3D 0x65, + .test[2].outlen =3D 16, .test[2].data =3D { 0x95, 0xd3, 0xc8, + 0x13, 0xc0, 0x20, 0x24, 0xa3, 0x76, 0x24, 0xd1, 0x98, + 0xb6, 0x67, 0x4d, 0x4c, } + }, + + /* ZUC UEA3 */ + { .min_version =3D 0x65, + .test[2].outlen =3D 16, .test[2].data =3D { 0xda, 0xdf, 0xb6, + 0xa2, 0xac, 0x9d, 0xba, 0xfe, 0x18, 0x9c, 0x0c, 0x75, + 0x79, 0xc6, 0xe0, 0x4e, } + }, + + /* CHACHA20_STREAM */ + { .min_version =3D 0x65, + .test[4].outlen =3D 16, .test[4].data =3D { 0x55, 0xdf, 0x91, + 0xe9, 0x27, 0x01, 0x37, 0x69, 0xdb, 0x38, 0xd4, 0x28, + 0x01, 0x79, 0x76, 0x64 } + }, + + /* CHACHA20_POLY1305 (AEAD) */ + { .min_version =3D 0x65, + .test[4].outlen =3D 16, 
.test[4].data =3D { 0x89, 0xfb, 0x08, + 0x00, 0x29, 0x17, 0xa5, 0x40, 0xb7, 0x83, 0x3f, 0xf3, + 0x98, 0x1d, 0x0e, 0x63 } + }, + + /* SM4_ECB 128 */ + { .min_version =3D 0x65, + .test[2].outlen =3D 16, .test[2].data =3D { 0x1e, 0x96, 0x34, + 0xb7, 0x70, 0xf9, 0xae, 0xba, 0xa9, 0x34, 0x4f, 0x5a, + 0xff, 0x9f, 0x82, 0xa3 } + }, + + /* SM4_CBC 128 */ + { .min_version =3D 0x65, + .test[2].outlen =3D 16, .test[2].data =3D { 0x8f, 0x78, 0x76, + 0x3e, 0xe0, 0x60, 0x13, 0xe0, 0xb7, 0x62, 0x2c, 0x42, + 0x8f, 0xd0, 0x52, 0x8d } + }, + + /* SM4_CFB 128 */ + { .min_version =3D 0x65, + .test[2].outlen =3D 16, .test[2].data =3D { 0x8f, 0x78, 0x76, + 0x3e, 0xe0, 0x60, 0x13, 0xe0, 0xb7, 0x62, 0x2c, 0x42, + 0x8f, 0xd0, 0x52, 0x8d } + }, + + /* SM4_OFB 128 */ + { .min_version =3D 0x65, + .test[2].outlen =3D 16, .test[2].data =3D { 0x8f, 0x78, 0x76, 0x3e, 0xe= 0, + 0x60, 0x13, 0xe0, 0xb7, 0x62, 0x2c, 0x42, 0x8f, 0xd0, 0x52, + 0x8d } + }, + + /* SM4_CTR 128 */ + { .min_version =3D 0x65, + .test[2].outlen =3D 16, .test[2].data =3D { 0x8f, 0x78, 0x76, 0x3e, 0xe= 0, + 0x60, 0x13, 0xe0, 0xb7, 0x62, 0x2c, 0x42, 0x8f, 0xd0, 0x52, + 0x8d } + }, + + /* SM4_CCM 128 */ + { .min_version =3D 0x65, + .test[2].outlen =3D 16, .test[2].data =3D { 0x8e, 0x25, 0x5a, + 0x13, 0xc7, 0x43, 0x4d, 0x95, 0xef, 0x14, 0x15, 0x11, + 0xd0, 0xb9, 0x60, 0x5b } + }, + + /* SM4_GCM 128 */ + { .min_version =3D 0x65, + .test[2].outlen =3D 16, .test[2].data =3D { 0x97, 0x46, 0xde, + 0xfb, 0xc9, 0x6a, 0x85, 0x00, 0xff, 0x9c, 0x74, 0x4d, + 0xd1, 0xbb, 0xf9, 0x66 } + }, + + /* SM4_F8 128 */ + { .min_version =3D 0x65, + .test[2].outlen =3D 16, .test[2].data =3D { 0x77, 0x30, 0xff, + 0x70, 0x46, 0xbc, 0xf4, 0xe3, 0x11, 0xf6, 0x27, 0xe2, + 0xff, 0xd7, 0xc4, 0x2e } + }, + + /* SM4_XTS 128 */ + { .min_version =3D 0x65, + .test[2].outlen =3D 16, .test[2].data =3D { 0x05, 0x3f, 0xb6, + 0xe9, 0xb1, 0xff, 0x09, 0x4f, 0x9d, 0x69, 0x4d, 0xc2, + 0xb6, 0xa1, 0x15, 0xde } + }, + + /* SM4_CS1 128 */ + { .min_version =3D 0x65, + 
.test[2].outlen =3D 16, .test[2].data =3D { 0x8f, 0x78, 0x76, + 0x3e, 0xe0, 0x60, 0x13, 0xe0, 0xb7, 0x62, 0x2c, 0x42, + 0x8f, 0xd0, 0x52, 0xa0 } + }, + + /* SM4_CS2 128 */ + { .min_version =3D 0x65, + .test[2].outlen =3D 16, .test[2].data =3D { 0xa0, 0x1c, 0xfe, + 0x91, 0xaa, 0x7e, 0xf1, 0x75, 0x6a, 0xe8, 0xbc, 0xe1, + 0x55, 0x08, 0xda, 0x71 } + }, + + /* SM4_CS3 128 */ + { .min_version =3D 0x65, + .test[2].outlen =3D 16, .test[2].data =3D { 0xa0, 0x1c, 0xfe, + 0x91, 0xaa, 0x7e, 0xf1, 0x75, 0x6a, 0xe8, 0xbc, 0xe1, + 0x55, 0x08, 0xda, 0x71 } + }, + + /* + * Hashes ... note they use the 2nd keysize + * array so the indecies mean different sizes!!! + */ + + /* MD5 HASH/HMAC */ + { .min_version =3D 0x65, + .test[1].outlen =3D 16, .test[1].data =3D { 0x70, 0xbc, 0x8f, 0x4b, + 0x72, 0xa8, 0x69, 0x21, 0x46, 0x8b, 0xf8, 0xe8, 0x44, + 0x1d, 0xce, 0x51, } + }, + { .min_version =3D 0x65, + .test[1].outlen =3D 16, .test[1].data =3D { 0xb6, 0x39, 0xc8, 0x73, + 0x16, 0x38, 0x61, 0x8b, 0x70, 0x79, 0x72, 0xaa, 0x6e, + 0x96, 0xcf, 0x90, }, + .test[4].outlen =3D 16, .test[4].data =3D { 0xb7, 0x79, 0x68, 0xea, + 0x17, 0x32, 0x1e, 0x32, 0x13, 0x90, 0x6c, 0x2e, 0x9f, + 0xd5, 0xc8, 0xb3, }, + .test[5].outlen =3D 16, .test[5].data =3D { 0x80, 0x3e, 0x0a, 0x2f, + 0x8a, 0xd8, 0x31, 0x8f, 0x8e, 0x12, 0x28, 0x86, 0x22, + 0x59, 0x6b, 0x05, }, + }, + /* SHA1 */ + { .min_version =3D 0x65, + .test[1].outlen =3D 20, .test[1].data =3D { 0xde, 0x8a, 0x84, 0x7b, + 0xff, 0x8c, 0x34, 0x3d, 0x69, 0xb8, 0x53, 0xa2, 0x15, + 0xe6, 0xee, 0x77, 0x5e, 0xf2, 0xef, 0x96, } + }, + { .min_version =3D 0x65, + .test[1].outlen =3D 20, .test[1].data =3D { 0xf8, 0x54, 0x60, 0x50, + 0x49, 0x56, 0xd1, 0xcd, 0x55, 0x5c, 0x5d, 0xcd, 0x24, + 0x33, 0xbf, 0xdc, 0x5c, 0x99, 0x54, 0xc8, }, + .test[4].outlen =3D 20, .test[4].data =3D { 0x66, 0x3f, 0x3a, 0x3c, + 0x08, 0xb6, 0x87, 0xb2, 0xd3, 0x0c, 0x5a, 0xa7, 0xcc, + 0x5c, 0xc3, 0x99, 0xb2, 0xb4, 0x58, 0x55, }, + .test[5].outlen =3D 20, .test[5].data =3D { 0x9a, 0x28, 0x54, 
0x2f, + 0xaf, 0xa7, 0x0b, 0x37, 0xbe, 0x2d, 0x3e, 0xd9, 0xd4, + 0x70, 0xbc, 0xdc, 0x0b, 0x54, 0x20, 0x06, }, + }, + /* SHA224_HASH */ + { .min_version =3D 0x65, + .test[1].outlen =3D 28, .test[1].data =3D { 0xb3, 0x38, 0xc7, 0x6b, + 0xcf, 0xfa, 0x1a, 0x0b, 0x3e, 0xad, 0x8d, 0xe5, 0x8d, + 0xfb, 0xff, 0x47, 0xb6, 0x3a, 0xb1, 0x15, 0x0e, 0x10, + 0xd8, 0xf1, 0x7f, 0x2b, 0xaf, 0xdf, } + }, + { .min_version =3D 0x65, + .test[1].outlen =3D 28, .test[1].data =3D { 0xf3, 0xb4, 0x33, 0x78, + 0x53, 0x4c, 0x0c, 0x4a, 0x1e, 0x31, 0xc2, 0xce, 0xda, + 0xc8, 0xfe, 0x74, 0x4a, 0xd2, 0x9b, 0x7c, 0x1d, 0x2f, + 0x5e, 0xa1, 0xaa, 0x31, 0xb9, 0xf5, }, + .test[4].outlen =3D 28, .test[4].data =3D { 0x4b, 0x6b, 0x3f, 0x9a, + 0x66, 0x47, 0x45, 0xe2, 0x60, 0xc9, 0x53, 0x86, 0x7a, + 0x34, 0x65, 0x7d, 0xe2, 0x24, 0x06, 0xcc, 0xf9, 0x17, + 0x20, 0x5d, 0xc2, 0xb6, 0x97, 0x9a, }, + .test[5].outlen =3D 28, .test[5].data =3D { 0x90, 0xb0, 0x6e, 0xee, + 0x21, 0x57, 0x38, 0xc7, 0x65, 0xbb, 0x9a, 0xf5, 0xb4, + 0x31, 0x0a, 0x0e, 0xe5, 0x64, 0xc4, 0x49, 0x9d, 0xbd, + 0xe9, 0xf7, 0xac, 0x9f, 0xf8, 0x05, }, + }, + + /* SHA256_HASH */ + { .min_version =3D 0x65, + .test[1].outlen =3D 32, .test[1].data =3D { 0x66, 0x68, 0x7a, 0xad, + 0xf8, 0x62, 0xbd, 0x77, 0x6c, 0x8f, 0xc1, 0x8b, 0x8e, + 0x9f, 0x8e, 0x20, 0x08, 0x97, 0x14, 0x85, 0x6e, 0xe2, + 0x33, 0xb3, 0x90, 0x2a, 0x59, 0x1d, 0x0d, 0x5f, 0x29, + 0x25, } + }, + { .min_version =3D 0x65, + .test[1].outlen =3D 32, .test[1].data =3D { 0x75, 0x40, 0x84, 0x49, + 0x54, 0x0a, 0xf9, 0x80, 0x99, 0xeb, 0x93, 0x6b, 0xf6, + 0xd3, 0xff, 0x41, 0x05, 0x47, 0xcc, 0x82, 0x62, 0x76, + 0x32, 0xf3, 0x43, 0x74, 0x70, 0x54, 0xe2, 0x3b, 0xc0, + 0x90, }, + .test[4].outlen =3D 32, .test[4].data =3D { 0x41, 0x6c, 0x53, 0x92, + 0xb9, 0xf3, 0x6d, 0xf1, 0x88, 0xe9, 0x0e, 0xb1, 0x4d, + 0x17, 0xbf, 0x0d, 0xa1, 0x90, 0xbf, 0xdb, 0x7f, 0x1f, + 0x49, 0x56, 0xe6, 0xe5, 0x66, 0xa5, 0x69, 0xc8, 0xb1, + 0x5c, }, + .test[5].outlen =3D 32, .test[5].data =3D { 0x49, 0x1f, 0x58, 0x3b, + 0x05, 0xe2, 
0x3a, 0x72, 0x1d, 0x11, 0x6d, 0xc1, 0x08, + 0xa0, 0x3f, 0x30, 0x37, 0x98, 0x36, 0x8a, 0x49, 0x4c, + 0x21, 0x1d, 0x56, 0xa5, 0x2a, 0xf3, 0x68, 0x28, 0xb7, + 0x69, }, + }, + /* SHA384_HASH */ + { .min_version =3D 0x65, + .test[1].outlen =3D 48, .test[1].data =3D { 0xa3, 0x8f, 0xff, 0x4b, + 0xa2, 0x6c, 0x15, 0xe4, 0xac, 0x9c, 0xde, 0x8c, 0x03, + 0x10, 0x3a, 0xc8, 0x90, 0x80, 0xfd, 0x47, 0x54, 0x5f, + 0xde, 0x94, 0x46, 0xc8, 0xf1, 0x92, 0x72, 0x9e, 0xab, + 0x7b, 0xd0, 0x3a, 0x4d, 0x5c, 0x31, 0x87, 0xf7, 0x5f, + 0xe2, 0xa7, 0x1b, 0x0e, 0xe5, 0x0a, 0x4a, 0x40, } + }, + { .min_version =3D 0x65, + .test[1].outlen =3D 48, .test[1].data =3D { 0x6c, 0xd8, 0x89, 0xa0, + 0xca, 0x54, 0xa6, 0x1d, 0x24, 0xc4, 0x1d, 0xa1, 0x77, + 0x50, 0xd6, 0xf2, 0xf3, 0x43, 0x23, 0x0d, 0xb1, 0xf5, + 0xf7, 0xfc, 0xc0, 0x8c, 0xf6, 0xdf, 0x3c, 0x61, 0xfc, + 0x8a, 0xb9, 0xda, 0x12, 0x75, 0x97, 0xac, 0x51, 0x88, + 0x59, 0x19, 0x44, 0x13, 0xc0, 0x78, 0xa5, 0xa8, }, + .test[4].outlen =3D 48, .test[4].data =3D { 0x0c, 0x91, 0x36, 0x46, + 0xd9, 0x17, 0x81, 0x46, 0x1d, 0x42, 0xb1, 0x00, 0xaa, + 0xfa, 0x26, 0x92, 0x9f, 0x05, 0xc0, 0x91, 0x8e, 0x20, + 0xd7, 0x75, 0x9d, 0xd2, 0xc8, 0x9b, 0x02, 0x18, 0x20, + 0x1f, 0xdd, 0xa3, 0x32, 0xe3, 0x1e, 0xa4, 0x2b, 0xc3, + 0xc8, 0xb9, 0xb1, 0x53, 0x4e, 0x6a, 0x49, 0xd2, }, + .test[5].outlen =3D 48, .test[5].data =3D { 0x84, 0x78, 0xd2, 0xf1, + 0x44, 0x95, 0x6a, 0x22, 0x2d, 0x08, 0x19, 0xe8, 0xea, + 0x61, 0xb4, 0x86, 0xe8, 0xc6, 0xb0, 0x40, 0x51, 0x28, + 0x22, 0x54, 0x48, 0xc0, 0x70, 0x09, 0x81, 0xf9, 0xf5, + 0x47, 0x9e, 0xb3, 0x2c, 0x69, 0x19, 0xd5, 0x8d, 0x03, + 0x5d, 0x24, 0xca, 0x90, 0xa6, 0x9d, 0x80, 0x2a, }, + .test[6].outlen =3D 48, .test[6].data =3D { 0x0e, 0x68, 0x17, 0x31, + 0x01, 0xa8, 0x28, 0x0a, 0x4e, 0x47, 0x22, 0xa6, 0x89, + 0xf0, 0xc6, 0xcd, 0x4e, 0x8c, 0x19, 0x4c, 0x44, 0x3d, + 0xb5, 0xa5, 0xf9, 0xfe, 0xea, 0xc7, 0x84, 0x0b, 0x57, + 0x0d, 0xd4, 0xe4, 0x8a, 0x3f, 0x68, 0x31, 0x20, 0xd9, + 0x1f, 0xc4, 0xa3, 0x76, 0xcf, 0xdd, 0x07, 0xa6, }, + }, + /* 
SHA512_HASH */ + { .min_version =3D 0x65, + .test[1].outlen =3D 64, .test[1].data =3D { 0x50, 0x46, 0xad, 0xc1, + 0xdb, 0xa8, 0x38, 0x86, 0x7b, 0x2b, 0xbb, 0xfd, 0xd0, + 0xc3, 0x42, 0x3e, 0x58, 0xb5, 0x79, 0x70, 0xb5, 0x26, + 0x7a, 0x90, 0xf5, 0x79, 0x60, 0x92, 0x4a, 0x87, 0xf1, + 0x96, 0x0a, 0x6a, 0x85, 0xea, 0xa6, 0x42, 0xda, 0xc8, + 0x35, 0x42, 0x4b, 0x5d, 0x7c, 0x8d, 0x63, 0x7c, 0x00, + 0x40, 0x8c, 0x7a, 0x73, 0xda, 0x67, 0x2b, 0x7f, 0x49, + 0x85, 0x21, 0x42, 0x0b, 0x6d, 0xd3, } + }, + { .min_version =3D 0x65, + .test[1].outlen =3D 64, .test[1].data =3D { 0xec, 0xfd, 0x83, 0x74, + 0xc8, 0xa9, 0x2f, 0xd7, 0x71, 0x94, 0xd1, 0x1e, 0xe7, + 0x0f, 0x0f, 0x5e, 0x11, 0x29, 0x58, 0xb8, 0x36, 0xc6, + 0x39, 0xbc, 0xd6, 0x88, 0x6e, 0xdb, 0xc8, 0x06, 0x09, + 0x30, 0x27, 0xaa, 0x69, 0xb9, 0x2a, 0xd4, 0x67, 0x06, + 0x5c, 0x82, 0x8e, 0x90, 0xe9, 0x3e, 0x55, 0x88, 0x7d, + 0xb2, 0x2b, 0x48, 0xa2, 0x28, 0x92, 0x6c, 0x0f, 0xf1, + 0x57, 0xb5, 0xd0, 0x06, 0x1d, 0xf3, }, + .test[4].outlen =3D 64, .test[4].data =3D { 0x47, 0x88, 0x91, 0xe9, + 0x12, 0x3e, 0xfd, 0xdc, 0x26, 0x29, 0x08, 0xd6, 0x30, + 0x8f, 0xcc, 0xb6, 0x93, 0x30, 0x58, 0x69, 0x4e, 0x81, + 0xee, 0x9d, 0xb6, 0x0f, 0xc5, 0x54, 0xe6, 0x7c, 0x84, + 0xc5, 0xbc, 0x89, 0x99, 0xf0, 0xf3, 0x7f, 0x6f, 0x3f, + 0xf5, 0x04, 0x2c, 0xdf, 0x76, 0x72, 0x6a, 0xbe, 0x28, + 0x3b, 0xb8, 0x05, 0xb3, 0x47, 0x45, 0xf5, 0x7f, 0xb1, + 0x21, 0x2d, 0xe0, 0x8d, 0x1e, 0x29, }, + .test[5].outlen =3D 64, .test[5].data =3D { 0x7e, 0x55, 0xda, 0x88, + 0x28, 0xc1, 0x6e, 0x9a, 0x6a, 0x99, 0xa0, 0x37, 0x68, + 0xf0, 0x28, 0x5e, 0xe2, 0xbe, 0x00, 0xac, 0x76, 0x89, + 0x76, 0xcc, 0x5d, 0x98, 0x1b, 0x32, 0x1a, 0x14, 0xc4, + 0x2e, 0x9c, 0xe4, 0xf3, 0x3f, 0x5f, 0xa0, 0xae, 0x95, + 0x16, 0x0b, 0x14, 0xf5, 0xf5, 0x45, 0x29, 0xd8, 0xc9, + 0x43, 0xf2, 0xa9, 0xbc, 0xdc, 0x03, 0x81, 0x0d, 0x36, + 0x2f, 0xb1, 0x22, 0xe8, 0x13, 0xf8, }, + .test[6].outlen =3D 64, .test[6].data =3D { 0x5d, 0xc4, 0x80, 0x90, + 0x6b, 0x00, 0x17, 0x04, 0x34, 0x63, 0x93, 0xf1, 0xad, + 0x9a, 
0x3e, 0x13, 0x37, 0x6b, 0x86, 0xd7, 0xc4, 0x2b, + 0x22, 0x9c, 0x2e, 0xf2, 0x1d, 0xde, 0x35, 0x39, 0x03, + 0x3f, 0x2b, 0x3a, 0xc3, 0x49, 0xb3, 0x32, 0x86, 0x63, + 0x6b, 0x0f, 0x27, 0x95, 0x97, 0xe5, 0xe7, 0x2b, 0x9b, + 0x80, 0xea, 0x94, 0x4d, 0x84, 0x2e, 0x39, 0x44, 0x8f, + 0x56, 0xe3, 0xcd, 0xa7, 0x12, 0x3e, }, + }, + /* SHA512_224_HASH */ + { .min_version =3D 0x65, + .test[1].outlen =3D 28, .test[1].data =3D { 0x9e, 0x7d, 0x60, 0x80, + 0xde, 0xf4, 0xe1, 0xcc, 0xf4, 0xae, 0xaa, 0xc6, 0xf7, + 0xfa, 0xd0, 0x08, 0xd0, 0x60, 0xa6, 0xcf, 0x87, 0x06, + 0x20, 0x38, 0xd6, 0x16, 0x67, 0x74, } + }, + { .min_version =3D 0x65, + .test[1].outlen =3D 28, .test[1].data =3D { 0xff, 0xfb, 0x43, 0x27, + 0xdd, 0x2e, 0x39, 0xa0, 0x18, 0xa8, 0xaf, 0xde, 0x84, + 0x0b, 0x5d, 0x0f, 0x3d, 0xdc, 0xc6, 0x17, 0xd1, 0xb6, + 0x2f, 0x8c, 0xf8, 0x7e, 0x34, 0x34, }, + .test[4].outlen =3D 28, .test[4].data =3D { 0x00, 0x19, 0xe2, 0x2d, + 0x44, 0x80, 0x2d, 0xd8, 0x1c, 0x57, 0xf5, 0x57, 0x92, + 0x08, 0x13, 0xe7, 0x9d, 0xbb, 0x2b, 0xc2, 0x8d, 0x77, + 0xc1, 0xff, 0x71, 0x4c, 0xf0, 0xa9, }, + .test[5].outlen =3D 28, .test[5].data =3D { 0x6a, 0xc4, 0xa8, 0x73, + 0x21, 0x54, 0xb2, 0x82, 0xee, 0x89, 0x8d, 0x45, 0xd4, + 0xe3, 0x76, 0x3e, 0x04, 0x03, 0xc9, 0x71, 0xee, 0x01, + 0x25, 0xd2, 0x7b, 0xa1, 0x20, 0xc4, }, + .test[6].outlen =3D 28, .test[6].data =3D { 0x0f, 0x98, 0x15, 0x9b, + 0x11, 0xca, 0x60, 0xc7, 0x82, 0x39, 0x1a, 0x50, 0x8c, + 0xe4, 0x79, 0xfa, 0xa8, 0x0e, 0xc7, 0x12, 0xfd, 0x8c, + 0x9c, 0x99, 0x7a, 0xe8, 0x7e, 0x92, }, + }, + /* SHA512_256_HASH */ + { .min_version =3D 0x65, + .test[1].outlen =3D 32, .test[1].data =3D { 0xaf, 0x13, 0xc0, 0x48, + 0x99, 0x12, 0x24, 0xa5, 0xe4, 0xc6, 0x64, 0x44, 0x6b, + 0x68, 0x8a, 0xaf, 0x48, 0xfb, 0x54, 0x56, 0xdb, 0x36, + 0x29, 0x60, 0x1b, 0x00, 0xec, 0x16, 0x0c, 0x74, 0xe5, + 0x54, } + }, + { .min_version =3D 0x65, + .test[1].outlen =3D 32, .test[1].data =3D { 0x3a, 0x2c, 0xd0, 0x2b, + 0xfa, 0xa6, 0x72, 0xe4, 0xf1, 0xab, 0x0a, 0x3e, 0x70, + 0xe4, 0x88, 0x1a, 
0x92, 0xe1, 0x3b, 0x64, 0x5a, 0x9b, + 0xed, 0xb3, 0x97, 0xc0, 0x17, 0x1f, 0xd4, 0x05, 0xf1, + 0x72, }, + .test[4].outlen =3D 32, .test[4].data =3D { 0x6f, 0x2d, 0xae, 0xc6, + 0xe4, 0xa6, 0x5b, 0x52, 0x0f, 0x26, 0x16, 0xf6, 0xa9, + 0xc1, 0x23, 0xc2, 0xb3, 0x67, 0xfc, 0x69, 0xac, 0x73, + 0x87, 0xa2, 0x5b, 0x6c, 0x44, 0xad, 0xc5, 0x26, 0x2b, + 0x10, }, + .test[5].outlen =3D 32, .test[5].data =3D { 0x63, 0xe7, 0xb8, 0xd1, + 0x76, 0x33, 0x56, 0x29, 0xba, 0x99, 0x86, 0x42, 0x0d, + 0x4f, 0xf7, 0x54, 0x8c, 0xb9, 0x39, 0xf2, 0x72, 0x1d, + 0x0e, 0x9d, 0x80, 0x67, 0xd9, 0xab, 0x15, 0xb0, 0x68, + 0x18, }, + .test[6].outlen =3D 32, .test[6].data =3D { 0x64, 0x78, 0x56, 0xd7, + 0xaf, 0x5b, 0x56, 0x08, 0xf1, 0x44, 0xf7, 0x4f, 0xa1, + 0xa1, 0x13, 0x79, 0x6c, 0xb1, 0x31, 0x11, 0xf3, 0x75, + 0xf4, 0x8c, 0xb4, 0x9f, 0xbf, 0xb1, 0x60, 0x38, 0x3d, + 0x28, }, + }, + + /* AESXCBC */ + { .min_version =3D 0x65, + .test[1].outlen =3D 16, .test[1].data =3D { 0x35, 0xd9, 0xdc, 0xdb, + 0x82, 0x9f, 0xec, 0x33, 0x52, 0xe7, 0xbf, 0x10, 0xb8, + 0x4b, 0xe4, 0xa5, }, + .test[3].outlen =3D 16, .test[3].data =3D { 0x39, 0x6f, 0x99, 0xb5, + 0x43, 0x33, 0x67, 0x4e, 0xd4, 0x45, 0x8f, 0x80, 0x77, + 0xe4, 0xd4, 0x14, }, + .test[4].outlen =3D 16, .test[4].data =3D { 0x73, 0xd4, 0x7c, 0x38, + 0x37, 0x4f, 0x73, 0xd0, 0x78, 0xa8, 0xc6, 0xec, 0x05, + 0x67, 0xca, 0x5e, }, + }, + + /* AESCMAC */ + { .min_version =3D 0x65, + .test[1].outlen =3D 16, .test[1].data =3D { 0x15, 0xbe, 0x1b, 0xfd, + 0x8c, 0xbb, 0xaf, 0x8b, 0x51, 0x9a, 0x64, 0x3b, 0x1b, + 0x46, 0xc1, 0x8f, }, + .test[3].outlen =3D 16, .test[3].data =3D { 0x4e, 0x02, 0xd6, 0xec, + 0x92, 0x75, 0x88, 0xb4, 0x3e, 0x83, 0xa7, 0xac, 0x32, + 0xb6, 0x2b, 0xdb, }, + .test[4].outlen =3D 16, .test[4].data =3D { 0xa7, 0x37, 0x01, 0xbe, + 0xe8, 0xce, 0xed, 0x44, 0x49, 0x4a, 0xbb, 0xf6, 0x9e, + 0xd9, 0x31, 0x3e, }, + }, + + /* KASUMIF9 */ + { .min_version =3D 0x65, + .test[1].outlen =3D 4, .test[1].data =3D { 0x5b, 0x26, 0x81, 0x06 + } + }, + + /* SNOW3G UIA2 */ + { 
.min_version =3D 0x65, + .test[1].outlen =3D 4, .test[1].data =3D { 0x08, 0xed, 0x2c, 0x76, + } + }, + + /* ZUC UIA3 */ + { .min_version =3D 0x65, + .test[1].outlen =3D 4, .test[1].data =3D { 0x6a, 0x2b, 0x4c, 0x3a, + } + }, + + /* POLY1305 */ + { .min_version =3D 0x65, + .test[4].outlen =3D 16, .test[4].data =3D { 0xef, 0x91, 0x06, 0x4e, + 0xce, 0x99, 0x9c, 0x4e, 0xfd, 0x05, 0x6a, 0x8c, 0xe6, + 0x18, 0x23, 0xad } + }, + + /* SSLMAC MD5 */ + { .min_version =3D 0x65, + .test[1].outlen =3D 16, .test[1].data =3D { 0x0e, 0xf4, 0xca, 0x32, + 0x32, 0x40, 0x1d, 0x1b, 0xaa, 0xfd, 0x6d, 0xa8, 0x01, + 0x79, 0xed, 0xcd, }, + }, + + /* SSLMAC_SHA1 */ + { .min_version =3D 0x65, + .test[2].outlen =3D 20, .test[2].data =3D { 0x05, 0x9d, 0x99, 0xb4, + 0xf3, 0x03, 0x1e, 0xc5, 0x24, 0xbf, 0xec, 0xdf, 0x64, + 0x8e, 0x37, 0x2e, 0xf0, 0xef, 0x93, 0xa0, }, + }, + + /* CRC32 */ + { .min_version =3D 0x65, + .test[0].outlen =3D 0 + }, + + /* TKIP-MIC */ + { .min_version =3D 0x65, + .test[0].outlen =3D 8, .test[0].data =3D { 0x16, 0xfb, 0xa0, + 0x0e, 0xe2, 0xab, 0x6c, 0x97, } + }, + + /* SHA3-224 */ + { .min_version =3D 0x65, + .test[1].outlen =3D 28, .test[1].data =3D { 0x73, 0xe0, 0x87, + 0xae, 0x12, 0x71, 0xb2, 0xc5, 0xf6, 0x85, 0x46, 0xc9, + 0x3a, 0xb4, 0x25, 0x14, 0xa6, 0x9e, 0xef, 0x25, 0x2b, + 0xfd, 0xd1, 0x37, 0x55, 0x74, 0x8a, 0x00, } + }, + + /* SHA3-256 */ + { .min_version =3D 0x65, + .test[1].outlen =3D 32, .test[1].data =3D { 0x9e, 0x62, 0x91, 0x97, + 0x0c, 0xb4, 0x4d, 0xd9, 0x40, 0x08, 0xc7, 0x9b, 0xca, + 0xf9, 0xd8, 0x6f, 0x18, 0xb4, 0xb4, 0x9b, 0xa5, 0xb2, + 0xa0, 0x47, 0x81, 0xdb, 0x71, 0x99, 0xed, 0x3b, 0x9e, + 0x4e, } + }, + + /* SHA3-384 */ + { .min_version =3D 0x65, + .test[1].outlen =3D 48, .test[1].data =3D { 0x4b, 0xda, 0xab, + 0xf7, 0x88, 0xd3, 0xad, 0x1a, 0xd8, 0x3d, 0x6d, 0x93, + 0xc7, 0xe4, 0x49, 0x37, 0xc2, 0xe6, 0x49, 0x6a, 0xf2, + 0x3b, 0xe3, 0x35, 0x4d, 0x75, 0x69, 0x87, 0xf4, 0x51, + 0x60, 0xfc, 0x40, 0x23, 0xbd, 0xa9, 0x5e, 0xcd, 0xcb, + 0x3c, 0x7e, 0x31, 
0xa6, 0x2f, 0x72, 0x6d, 0x70, 0x2c, + } + }, + + /* SHA3-512 */ + { .min_version =3D 0x65, + .test[1].outlen =3D 64, .test[1].data =3D { 0xad, 0x56, 0xc3, 0x5c, + 0xab, 0x50, 0x63, 0xb9, 0xe7, 0xea, 0x56, 0x83, 0x14, + 0xec, 0x81, 0xc4, 0x0b, 0xa5, 0x77, 0xaa, 0xe6, 0x30, + 0xde, 0x90, 0x20, 0x04, 0x00, 0x9e, 0x88, 0xf1, 0x8d, + 0xa5, 0x7b, 0xbd, 0xfd, 0xaa, 0xa0, 0xfc, 0x18, 0x9c, + 0x66, 0xc8, 0xd8, 0x53, 0x24, 0x8b, 0x6b, 0x11, 0x88, + 0x44, 0xd5, 0x3f, 0x7d, 0x0b, 0xa1, 0x1d, 0xe0, 0xf3, + 0xbf, 0xaf, 0x4c, 0xdd, 0x9b, 0x3f, } + }, + + /* SHAKE128 */ + { .min_version =3D 0x65, + .test[4].outlen =3D 16, .test[4].data =3D { 0x24, 0xa7, 0xca, + 0x4b, 0x75, 0xe3, 0x89, 0x8d, 0x4f, 0x12, 0xe7, 0x4d, + 0xea, 0x8c, 0xbb, 0x65 } + }, + + /* SHAKE256 */ + { .min_version =3D 0x65, + .test[4].outlen =3D 32, .test[4].data =3D { 0xf5, 0x97, 0x7c, + 0x82, 0x83, 0x54, 0x6a, 0x63, 0x72, 0x3b, 0xc3, 0x1d, + 0x26, 0x19, 0x12, 0x4f, + 0x11, 0xdb, 0x46, 0x58, 0x64, 0x33, 0x36, 0x74, 0x1d, + 0xf8, 0x17, 0x57, 0xd5, 0xad, 0x30, 0x62 } + }, + + /* CSHAKE128 */ + { .min_version =3D 0x65, + .test[1].outlen =3D 16, .test[1].data =3D { 0xe0, 0x6f, 0xd8, + 0x50, 0x57, 0x6f, 0xe4, 0xfa, 0x7e, 0x13, 0x42, 0xb5, + 0xf8, 0x13, 0xeb, 0x23 } + }, + + /* CSHAKE256 */ + { .min_version =3D 0x65, + .test[1].outlen =3D 32, .test[1].data =3D { 0xf3, 0xf2, 0xb5, + 0x47, 0xf2, 0x16, 0xba, 0x6f, 0x49, 0x83, 0x3e, 0xad, + 0x1e, 0x46, 0x85, 0x54, + 0xd0, 0xd7, 0xf9, 0xc6, 0x7e, 0xe9, 0x27, 0xc6, 0xc3, + 0xc3, 0xdb, 0x91, 0xdb, 0x97, 0x04, 0x0f } + }, + + /* KMAC128 */ + { .min_version =3D 0x65, + .test[1].outlen =3D 16, .test[1].data =3D { 0x6c, 0x3f, 0x29, + 0xfe, 0x01, 0x96, 0x59, 0x36, 0xb7, 0xae, 0xb7, 0xff, + 0x71, 0xe0, 0x3d, 0xff }, + .test[4].outlen =3D 16, .test[4].data =3D { 0x58, 0xd9, 0x8d, + 0xe8, 0x1f, 0x64, 0xb4, 0xa3, 0x9f, 0x63, 0xaf, 0x21, + 0x99, 0x03, 0x97, 0x06 }, + .test[5].outlen =3D 16, .test[5].data =3D { 0xf8, 0xf9, 0xb7, + 0xa4, 0x05, 0x3d, 0x90, 0x7c, 0xf2, 0xa1, 0x7c, 0x34, + 
0x39, 0xc2, 0x87, 0x4b }, + .test[6].outlen =3D 16, .test[6].data =3D { 0xef, 0x4a, 0xd5, + 0x1d, 0xd7, 0x83, 0x56, 0xd3, 0xa8, 0x3c, 0xf5, 0xf8, + 0xd1, 0x12, 0xf4, 0x44 } + }, + + /* KMAC256 */ + { .min_version =3D 0x65, + .test[1].outlen =3D 32, .test[1].data =3D { 0x0d, 0x86, 0xfa, + 0x92, 0x92, 0xe4, 0x77, 0x24, 0x6a, 0xcc, 0x79, 0xa0, + 0x1e, 0xb4, 0xc3, 0xac, + 0xfc, 0x56, 0xbc, 0x63, 0xcc, 0x1b, 0x6e, 0xf6, 0xc8, + 0x99, 0xa5, 0x3a, 0x38, 0x14, 0xa2, 0x40 }, + .test[4].outlen =3D 32, .test[4].data =3D { 0xad, 0x99, 0xed, + 0x20, 0x1f, 0xbe, 0x45, 0x07, 0x3d, 0xf4, 0xae, 0x9f, + 0xc2, 0xd8, 0x06, 0x18, + 0x31, 0x4e, 0x8c, 0xb6, 0x33, 0xe8, 0x31, 0x36, 0x00, + 0xdd, 0x42, 0x20, 0xda, 0x2b, 0xd5, 0x2b }, + .test[5].outlen =3D 32, .test[5].data =3D { 0xf9, 0xc6, 0x2b, + 0x17, 0xa0, 0x04, 0xd9, 0xf2, 0x6c, 0xbf, 0x5d, 0xa5, + 0x9a, 0xd7, 0x36, 0x1d, + 0xad, 0x66, 0x6b, 0x3d, 0xb1, 0x52, 0xd3, 0x81, 0x39, + 0x20, 0xd4, 0xf0, 0x43, 0x72, 0x2c, 0xb7 }, + .test[6].outlen =3D 32, .test[6].data =3D { 0xcc, 0x89, 0xe4, + 0x05, 0x58, 0x77, 0x38, 0x8b, 0x18, 0xa0, 0x7c, 0x8d, + 0x20, 0x99, 0xea, 0x6e, + 0x6b, 0xe9, 0xf7, 0x0c, 0xe1, 0xe5, 0xce, 0xbc, 0x55, + 0x4c, 0x80, 0xa5, 0xdc, 0xae, 0xf7, 0x94 } + }, + + /* KMAC128XOF */ + { .min_version =3D 0x65, + .test[1].outlen =3D 16, .test[1].data =3D { 0x84, 0x07, 0x89, + 0x29, 0xa7, 0xf4, 0x98, 0x91, 0xf5, 0x64, 0x61, 0x8d, + 0xa5, 0x93, 0x00, 0x31 }, + .test[4].outlen =3D 16, .test[4].data =3D { 0xf0, 0xa4, 0x1b, + 0x98, 0x0f, 0xb3, 0xf2, 0xbd, 0xc3, 0xfc, 0x64, 0x1f, + 0x73, 0x1f, 0xd4, 0x74 }, + .test[5].outlen =3D 16, .test[5].data =3D { 0xa5, 0xc5, 0xad, + 0x25, 0x59, 0xf1, 0x5d, 0xea, 0x5b, 0x18, 0x0a, 0x52, + 0xce, 0x6c, 0xc0, 0x88 }, + .test[6].outlen =3D 16, .test[6].data =3D { 0x1a, 0x81, 0xdd, + 0x81, 0x47, 0x89, 0xf4, 0x15, 0xcc, 0x18, 0x05, 0x81, + 0xe3, 0x95, 0x21, 0xc3 } + }, + + /* KMAC256XOF */ + { .min_version =3D 0x65, + .test[1].outlen =3D 32, .test[1].data =3D { 0xff, 0x85, 0xe9, + 0x61, 0x67, 0x96, 
0x35, 0x58, 0x33, 0x38, 0x2c, 0xe8, + 0x25, 0x77, 0xbe, 0x63, + 0xd5, 0x2c, 0xa7, 0xef, 0xce, 0x9b, 0x63, 0x71, 0xb2, + 0x09, 0x7c, 0xd8, 0x60, 0x4e, 0x5a, 0xfa }, + .test[4].outlen =3D 32, .test[4].data =3D { 0x86, 0x89, 0xc2, + 0x4a, 0xe8, 0x18, 0x46, 0x10, 0x6b, 0xf2, 0x09, 0xd7, + 0x37, 0x83, 0xab, 0x77, + 0xb5, 0xce, 0x7c, 0x96, 0x9c, 0xfa, 0x0f, 0xa0, 0xd8, + 0xde, 0xb5, 0xb7, 0xc6, 0xcd, 0xa9, 0x8f }, + .test[5].outlen =3D 32, .test[5].data =3D { 0x4d, 0x71, 0x81, + 0x5a, 0x5f, 0xac, 0x3b, 0x29, 0xf2, 0x5f, 0xb6, 0x56, + 0xf1, 0x76, 0xcf, 0xdc, + 0x51, 0x56, 0xd7, 0x3c, 0x47, 0xec, 0x6d, 0xea, 0xc6, + 0x3e, 0x54, 0xe7, 0x6f, 0xdc, 0xe8, 0x39 }, + .test[6].outlen =3D 32, .test[6].data =3D { 0x5f, 0xc5, 0xe1, + 0x1e, 0xe7, 0x55, 0x0f, 0x62, 0x71, 0x29, 0xf3, 0x0a, + 0xb3, 0x30, 0x68, 0x06, + 0xea, 0xec, 0xe4, 0x37, 0x17, 0x37, 0x2d, 0x5d, 0x64, + 0x09, 0x70, 0x63, 0x94, 0x80, 0x9b, 0x80 } + }, + + /* HASH SM3 */ + { .min_version =3D 0x65, + .test[1].outlen =3D 32, .test[1].data =3D { 0xe0, 0xba, 0xb8, + 0xf4, 0xd8, 0x17, 0x2b, 0xa2, 0x45, 0x19, 0x0d, 0x13, + 0xc9, 0x41, 0x17, 0xe9, + 0x3b, 0x82, 0x16, 0x6c, 0x25, 0xb2, 0xb6, 0x98, 0x83, + 0x35, 0x0c, 0x19, 0x2c, 0x90, 0x51, 0x40 }, + .test[4].outlen =3D 32, .test[4].data =3D { 0xe0, 0xba, 0xb8, + 0xf4, 0xd8, 0x17, 0x2b, 0xa2, 0x45, 0x19, 0x0d, 0x13, + 0xc9, 0x41, 0x17, 0xe9, + 0x3b, 0x82, 0x16, 0x6c, 0x25, 0xb2, 0xb6, 0x98, 0x83, + 0x35, 0x0c, 0x19, 0x2c, 0x90, 0x51, 0x40 }, + .test[5].outlen =3D 32, .test[5].data =3D { 0xe0, 0xba, 0xb8, + 0xf4, 0xd8, 0x17, 0x2b, 0xa2, 0x45, 0x19, 0x0d, 0x13, + 0xc9, 0x41, 0x17, 0xe9, + 0x3b, 0x82, 0x16, 0x6c, 0x25, 0xb2, 0xb6, 0x98, 0x83, + 0x35, 0x0c, 0x19, 0x2c, 0x90, 0x51, 0x40 }, + .test[6].outlen =3D 32, .test[6].data =3D { 0xe0, 0xba, 0xb8, + 0xf4, 0xd8, 0x17, 0x2b, 0xa2, 0x45, 0x19, 0x0d, 0x13, + 0xc9, 0x41, 0x17, 0xe9, + 0x3b, 0x82, 0x16, 0x6c, 0x25, 0xb2, 0xb6, 0x98, 0x83, + 0x35, 0x0c, 0x19, 0x2c, 0x90, 0x51, 0x40 } + }, + + /* HMAC SM3 */ + { .min_version =3D 0x65, 
+ .test[1].outlen =3D 32, .test[1].data =3D { 0x68, 0xf0, 0x65, + 0xd8, 0xd8, 0xc9, 0xc2, 0x0e, 0x10, 0xfd, 0x52, 0x7c, + 0xf2, 0xd7, 0x42, 0xd3, + 0x08, 0x44, 0x22, 0xbc, 0xf0, 0x9d, 0xcc, 0x34, 0x7b, + 0x76, 0x13, 0x91, 0xba, 0xce, 0x4d, 0x17 }, + .test[4].outlen =3D 32, .test[4].data =3D { 0xd8, 0xab, 0x2a, + 0x7b, 0x56, 0x21, 0xb1, 0x59, 0x64, 0xb2, 0xa3, 0xd6, + 0x72, 0xb3, 0x95, 0x81, + 0xa0, 0xcd, 0x96, 0x47, 0xf0, 0xbc, 0x8c, 0x16, 0x5b, + 0x9b, 0x7d, 0x2f, 0x71, 0x3f, 0x23, 0x19}, + .test[5].outlen =3D 32, .test[5].data =3D { 0xa0, 0xd1, 0xd5, + 0xa0, 0x9e, 0x4c, 0xca, 0x8c, 0x7b, 0xe0, 0x8f, 0x70, + 0x92, 0x2e, 0x3f, 0x4c, + 0xa0, 0xca, 0xef, 0xa1, 0x86, 0x9d, 0xb2, 0xe1, 0xc5, + 0xfa, 0x9d, 0xfa, 0xbc, 0x11, 0xcb, 0x1f }, + .test[6].outlen =3D 32, .test[6].data =3D { 0xa0, 0xd1, 0xd5, + 0xa0, 0x9e, 0x4c, 0xca, 0x8c, 0x7b, 0xe0, 0x8f, 0x70, + 0x92, 0x2e, 0x3f, 0x4c, + 0xa0, 0xca, 0xef, 0xa1, 0x86, 0x9d, 0xb2, 0xe1, 0xc5, + 0xfa, 0x9d, 0xfa, 0xbc, 0x11, 0xcb, 0x1f} + }, + + /* MAC_SM4_XCBC */ + { .min_version =3D 0x65, + .test[1].outlen =3D 16, .test[1].data =3D { 0x69, 0xaf, 0x45, + 0xe6, 0x0c, 0x78, 0x71, 0x7e, 0x44, 0x6c, 0xfe, 0x68, + 0xd4, 0xfe, 0x20, 0x8b }, + .test[4].outlen =3D 16, .test[4].data =3D { 0x69, 0xaf, 0x45, + 0xe6, 0x0c, 0x78, 0x71, 0x7e, 0x44, 0x6c, 0xfe, 0x68, + 0xd4, 0xfe, 0x20, 0x8b }, + .test[5].outlen =3D 16, .test[5].data =3D { 0x69, 0xaf, 0x45, + 0xe6, 0x0c, 0x78, 0x71, 0x7e, 0x44, 0x6c, 0xfe, 0x68, + 0xd4, 0xfe, 0x20, 0x8b }, + .test[6].outlen =3D 16, .test[6].data =3D { 0x69, 0xaf, 0x45, + 0xe6, 0x0c, 0x78, 0x71, 0x7e, 0x44, 0x6c, 0xfe, 0x68, + 0xd4, 0xfe, 0x20, 0x8b } + }, + + /* MAC_SM4_CMAC */ + { .min_version =3D 0x65, + .test[1].outlen =3D 16, .test[1].data =3D { 0x36, 0xbe, 0xec, + 0x03, 0x9c, 0xc7, 0x0c, 0x28, 0x23, 0xdd, 0x71, 0x8b, + 0x3c, 0xbd, 0x7f, 0x37 }, + .test[4].outlen =3D 16, .test[4].data =3D { 0x36, 0xbe, 0xec, + 0x03, 0x9c, 0xc7, 0x0c, 0x28, 0x23, 0xdd, 0x71, 0x8b, + 0x3c, 0xbd, 0x7f, 0x37 }, + 
.test[5].outlen =3D 16, .test[5].data =3D { 0x36, 0xbe, 0xec, + 0x03, 0x9c, 0xc7, 0x0c, 0x28, 0x23, 0xdd, 0x71, 0x8b, + 0x3c, 0xbd, 0x7f, 0x37 }, + .test[6].outlen =3D 16, .test[6].data =3D { 0x36, 0xbe, 0xec, + 0x03, 0x9c, 0xc7, 0x0c, 0x28, 0x23, 0xdd, 0x71, 0x8b, + 0x3c, 0xbd, 0x7f, 0x37 } + }, + +}; +#endif + int spacc_sg_to_ddt(struct device *dev, struct scatterlist *sg, int nbytes, struct pdu_ddt *ddt, int dma_direction) { @@ -560,6 +1436,223 @@ int spacc_close(struct spacc_device *dev, int handle) return spacc_job_release(dev, handle); } =20 +#if IS_ENABLED(CONFIG_CRYPTO_DEV_SPACC_AUTODETECT) +static int spacc_set_auxinfo(struct spacc_device *spacc, int jobid, + u32 direction, u32 bitsize) +{ + int ret =3D 0; + struct spacc_job *job; + + if (jobid < 0 || jobid >=3D SPACC_MAX_JOBS) + return -EINVAL; + + job =3D &spacc->job[jobid]; + if (!job) { + ret =3D -EINVAL; + } else { + job->auxinfo_dir =3D direction; + job->auxinfo_bit_align =3D bitsize; + } + + return ret; +} + +static void spacc_check_modes(struct spacc_device *spacc, int algo_mode, + int keysz_idx, void *virt, char *key, + struct pdu_ddt *ddt) +{ + int enc; + int hash; + int aadlen; + int ivsize; + int proclen; + int rc =3D 0; + int err =3D 0; + bool output_zero_len; + bool output_mismatch; + + if ((template[algo_mode] & (1 << keysz_idx)) =3D=3D 0) + return; + + /* + * Testing keysizes[keysz_idx] with algo 'algo_mode' which + * should match the ENUMs above + */ + + if (template[algo_mode] & 128) { + enc =3D 0; + hash =3D algo_mode; + } else { + enc =3D algo_mode; + hash =3D 0; + } + + rc =3D spacc_open(spacc, enc, hash, -1, 0, NULL, NULL); + if (rc < 0) { + spacc->config.modes[algo_mode] &=3D ~(1 << keysz_idx); + return; + } + + spacc_set_operation(spacc, rc, OP_ENCRYPT, 0, IP_ICV_APPEND, 0, + 0, 0); + + /* if this is a hash or mac */ + if (template[algo_mode] & 128) { + switch (algo_mode) { + case CRYPTO_MODE_HASH_CSHAKE128: + case CRYPTO_MODE_HASH_CSHAKE256: + case CRYPTO_MODE_MAC_KMAC128: + case 
CRYPTO_MODE_MAC_KMAC256: + case CRYPTO_MODE_MAC_KMACXOF128: + case CRYPTO_MODE_MAC_KMACXOF256: + /* + * Special initial bytes to encode + * length for cust strings + */ + key[0] =3D 0x01; + key[1] =3D 0x70; + break; + } + + spacc_write_context(spacc, rc, SPACC_HASH_OPERATION, + key, keysizes[1][keysz_idx] + + (algo_mode =3D=3D CRYPTO_MODE_MAC_XCBC ? 32 : 0), + key, 16); + } else { + u32 keysize; + + ivsize =3D 16; + keysize =3D keysizes[0][keysz_idx]; + switch (algo_mode) { + case CRYPTO_MODE_CHACHA20_STREAM: + case CRYPTO_MODE_AES_CCM: + case CRYPTO_MODE_SM4_CCM: + ivsize =3D 16; + break; + case CRYPTO_MODE_SM4_GCM: + case CRYPTO_MODE_CHACHA20_POLY1305: + case CRYPTO_MODE_AES_GCM: + ivsize =3D 12; + break; + case CRYPTO_MODE_KASUMI_ECB: + case CRYPTO_MODE_KASUMI_F8: + case CRYPTO_MODE_3DES_CBC: + case CRYPTO_MODE_3DES_ECB: + case CRYPTO_MODE_DES_CBC: + case CRYPTO_MODE_DES_ECB: + ivsize =3D 8; + break; + case CRYPTO_MODE_SM4_XTS: + case CRYPTO_MODE_AES_XTS: + keysize <<=3D 1; + break; + } + spacc_write_context(spacc, rc, SPACC_CRYPTO_OPERATION, + key, keysize, key, ivsize); + } + + spacc_set_key_exp(spacc, rc); + + switch (algo_mode) { + case CRYPTO_MODE_ZUC_UEA3: + case CRYPTO_MODE_SNOW3G_UEA2: + case CRYPTO_MODE_MAC_SNOW3G_UIA2: + case CRYPTO_MODE_MAC_ZUC_UIA3: + case CRYPTO_MODE_KASUMI_F8: + spacc_set_auxinfo(spacc, rc, 0, 0); + break; + case CRYPTO_MODE_MAC_KASUMI_F9: + spacc_set_auxinfo(spacc, rc, 0, 8); + break; + } + + memset(virt, 0, 256); + + /* + * 16AAD/16PT or 32AAD/0PT depending on + * whether we're in a hash or not mode + */ + aadlen =3D 16; + proclen =3D 32; + if (!enc) + aadlen +=3D 16; + + switch (algo_mode) { + case CRYPTO_MODE_SM4_CS1: + case CRYPTO_MODE_SM4_CS2: + case CRYPTO_MODE_SM4_CS3: + case CRYPTO_MODE_AES_CS1: + case CRYPTO_MODE_AES_CS2: + case CRYPTO_MODE_AES_CS3: + proclen =3D 31; + fallthrough; + case CRYPTO_MODE_SM4_XTS: + case CRYPTO_MODE_AES_XTS: + aadlen =3D 0; + } + + err =3D spacc_packet_enqueue_ddt(spacc, rc, ddt, ddt, proclen, 
0, + aadlen, 0, 0, 0); + if (err =3D=3D 0) { + do { + err =3D spacc_packet_dequeue(spacc, rc); + } while (err =3D=3D -EINPROGRESS); + } + + output_zero_len =3D !testdata[algo_mode].test[keysz_idx].outlen; + output_mismatch =3D memcmp(testdata[algo_mode].test[keysz_idx].data, virt, + testdata[algo_mode].test[keysz_idx].outlen); + + if (err !=3D 0 || output_zero_len || output_mismatch) + spacc->config.modes[algo_mode] &=3D ~(1 << keysz_idx); + + + spacc_close(spacc, rc); + +} + +int spacc_autodetect(struct spacc_device *spacc) +{ + int x, y; + void *virt; + dma_addr_t dma; + struct pdu_ddt ddt; + unsigned char key[64]; + + /* allocate DMA memory */ + virt =3D dma_alloc_coherent(get_ddt_device(), SPACC_TEST_DMA_BUFF_SIZE, + &dma, GFP_KERNEL); + if (!virt) + return -ENOMEM; + + if (pdu_ddt_init(spacc->dptr, &ddt, 1)) { + dma_free_coherent(get_ddt_device(), SPACC_TEST_DMA_BUFF_SIZE, + virt, dma); + return -EIO; + } + + pdu_ddt_add(spacc->dptr, &ddt, dma, SPACC_TEST_DMA_BUFF_SIZE); + + for (x =3D 0; x < 64; x++) + key[x] =3D x; + + for (x =3D 0; x < ARRAY_SIZE(template); x++) { + spacc->config.modes[x] =3D template[x]; + if (template[x] && spacc->config.version >=3D + testdata[x].min_version) { + for (y =3D 0; y < (ARRAY_SIZE(keysizes[0])); y++) + spacc_check_modes(spacc, x, y, virt, key, &ddt); + } + } + + pdu_ddt_free(&ddt); + dma_free_coherent(get_ddt_device(), SPACC_TEST_DMA_BUFF_SIZE, virt, dma); + + return 0; +} + +#else + static void spacc_static_modes(struct spacc_device *spacc, int x, int y) { /* disable the algos that are not supported here */ @@ -596,6 +1689,223 @@ int spacc_static_config(struct spacc_device *spacc) =20 return 0; } +#endif =20 int spacc_clone_handle(struct spacc_device *spacc, int old_handle, void *cbdata) --=20 2.25.1
From: Pavitrakumar Managutte
To: linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org,
	devicetree@vger.kernel.org, herbert@gondor.apana.org.au,
	robh@kernel.org
Cc: krzk+dt@kernel.org, conor+dt@kernel.org, Ruud.Derwig@synopsys.com,
	manjunath.hadli@vayavyalabs.com, adityak@vayavyalabs.com,
	Pavitrakumar Managutte, Bhoomika Kadabi
Subject: [PATCH v3 4/6] Add SPAcc ahash support
Date: Mon, 2 Jun 2025 11:02:29 +0530
Message-Id: <20250602053231.403143-5-pavitrakumarm@vayavyalabs.com>
In-Reply-To: <20250602053231.403143-1-pavitrakumarm@vayavyalabs.com>
References: <20250602053231.403143-1-pavitrakumarm@vayavyalabs.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Add ahash support to the SPAcc driver. The following hash algorithms
are supported:
- cmac(aes)
- xcbc(aes)
- cmac(sm4)
- xcbc(sm4)
- hmac(md5)
- md5
- hmac(sha1)
- sha1
- sha224
- sha256
- sha384
- sha512
- hmac(sha224)
- hmac(sha256)
- hmac(sha384)
- hmac(sha512)
- sha3-224
- sha3-256
- sha3-384
- sha3-512
- hmac(sm3)
- sm3
- michael_mic

Co-developed-by: Bhoomika Kadabi
Signed-off-by: Bhoomika Kadabi
Signed-off-by: Pavitrakumar Managutte
Signed-off-by: Manjunath Hadli
Acked-by: Ruud Derwig
---
 drivers/crypto/dwc-spacc/spacc_ahash.c | 969 +++++++++++++++++++++++++
 1 file changed, 969 insertions(+)
 create mode 100644 drivers/crypto/dwc-spacc/spacc_ahash.c

diff --git a/drivers/crypto/dwc-spacc/spacc_ahash.c b/drivers/crypto/dwc-spacc/spacc_ahash.c
new file mode 100644
index 000000000000..cffc747ed332
--- /dev/null
+++ b/drivers/crypto/dwc-spacc/spacc_ahash.c
@@ -0,0 +1,969 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "spacc_core.h"
+#include "spacc_device.h"
+
+#define PPP_BUF_SIZE 128
+
+struct sdesc {
+	struct shash_desc shash;
+	char ctx[];
+};
+
+static struct dma_pool *spacc_hash_pool;
+static LIST_HEAD(spacc_hash_alg_list);
+static DEFINE_MUTEX(spacc_hash_alg_mutex);
+
+static struct mode_tab possible_hashes[] = {
+	{ .keylen[0] = 16, MODE_TAB_HASH("cmac(aes)", MAC_CMAC, 16, 16),
+	  .sw_fb = true },
+	{ .keylen[0] = 48 | MODE_TAB_HASH_XCBC, MODE_TAB_HASH("xcbc(aes)",
+	  MAC_XCBC, 16, 16), .sw_fb = true },
+
+	{ MODE_TAB_HASH("cmac(sm4)", MAC_SM4_CMAC, 16, 16), .sw_fb = true },
+	{ .keylen[0] = 32 | MODE_TAB_HASH_XCBC, MODE_TAB_HASH("xcbc(sm4)",
+	  MAC_SM4_XCBC, 16, 16), .sw_fb = true },
+
+	{ MODE_TAB_HASH("hmac(md5)", HMAC_MD5, MD5_DIGEST_SIZE,
+	  MD5_HMAC_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("md5", HASH_MD5, MD5_DIGEST_SIZE,
+	  MD5_HMAC_BLOCK_SIZE), .sw_fb = true },
+
+	{ MODE_TAB_HASH("hmac(sha1)", HMAC_SHA1, SHA1_DIGEST_SIZE,
+	  SHA1_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("sha1", HASH_SHA1, SHA1_DIGEST_SIZE,
+	  SHA1_BLOCK_SIZE), .sw_fb = true },
+
+	{ MODE_TAB_HASH("sha224", HASH_SHA224, SHA224_DIGEST_SIZE,
+	  SHA224_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("sha256", HASH_SHA256, SHA256_DIGEST_SIZE,
+	  SHA256_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("sha384", HASH_SHA384, SHA384_DIGEST_SIZE,
+	  SHA384_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("sha512", HASH_SHA512, SHA512_DIGEST_SIZE,
+	  SHA512_BLOCK_SIZE), .sw_fb = true },
+
+	{ MODE_TAB_HASH("hmac(sha512)", HMAC_SHA512, SHA512_DIGEST_SIZE,
+	  SHA512_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("hmac(sha224)", HMAC_SHA224, SHA224_DIGEST_SIZE,
+	  SHA224_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("hmac(sha256)", HMAC_SHA256, SHA256_DIGEST_SIZE,
+	  SHA256_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("hmac(sha384)", HMAC_SHA384, SHA384_DIGEST_SIZE,
+	  SHA384_BLOCK_SIZE), .sw_fb = true },
+
+	{ MODE_TAB_HASH("sha3-224", HASH_SHA3_224, SHA3_224_DIGEST_SIZE,
+	  SHA3_224_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("sha3-256", HASH_SHA3_256, SHA3_256_DIGEST_SIZE,
+	  SHA3_256_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("sha3-384", HASH_SHA3_384, SHA3_384_DIGEST_SIZE,
+	  SHA3_384_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("sha3-512", HASH_SHA3_512, SHA3_512_DIGEST_SIZE,
+	  SHA3_512_BLOCK_SIZE), .sw_fb = true },
+
+	{ MODE_TAB_HASH("hmac(sm3)", HMAC_SM3, SM3_DIGEST_SIZE,
+	  SM3_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("sm3", HASH_SM3, SM3_DIGEST_SIZE,
+	  SM3_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("michael_mic", MAC_MICHAEL, 8, 8), .sw_fb = true },
+
+};
+
+static void spacc_hash_cleanup_dma_dst(struct spacc_crypto_ctx *tctx,
+				       struct ahash_request *req)
+{
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+
+	pdu_ddt_free(&ctx->dst);
+}
+
+static void spacc_hash_cleanup_dma_src(struct spacc_crypto_ctx *tctx,
+				       struct ahash_request *req)
+{
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+
+	if (tctx->tmp_sgl && tctx->tmp_sgl[0].length != 0) {
+		dma_unmap_sg(tctx->dev, tctx->tmp_sgl, ctx->src_nents,
+			     DMA_TO_DEVICE);
+		kfree(tctx->tmp_sgl_buff);
+		tctx->tmp_sgl_buff = NULL;
+		tctx->tmp_sgl[0].length = 0;
+	} else {
+		dma_unmap_sg(tctx->dev, req->src, ctx->src_nents,
+			     DMA_TO_DEVICE);
+	}
+
+	pdu_ddt_free(&ctx->src);
+}
+
+static void spacc_hash_cleanup_dma(struct device *dev,
+				   struct ahash_request *req)
+{
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+
+	dma_unmap_sg(dev, req->src, ctx->src_nents, DMA_TO_DEVICE);
+	pdu_ddt_free(&ctx->src);
+
+	dma_pool_free(spacc_hash_pool, ctx->digest_buf, ctx->digest_dma);
+	pdu_ddt_free(&ctx->dst);
+}
+
+static void spacc_init_calg(struct crypto_alg *calg,
+			    const struct mode_tab *mode)
+{
+	strscpy(calg->cra_name, mode->name);
+	calg->cra_name[sizeof(mode->name) - 1] = '\0';
+
+	strscpy(calg->cra_driver_name, "spacc-");
+	strcat(calg->cra_driver_name, mode->name);
+	calg->cra_driver_name[sizeof(calg->cra_driver_name) - 1] = '\0';
+
+	calg->cra_blocksize = mode->blocklen;
+}
+
+static int spacc_ctx_clone_handle(struct ahash_request *req)
+{
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(tfm);
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+	struct spacc_priv *priv = dev_get_drvdata(tctx->dev);
+
+	if (tctx->handle < 0)
+		return -EINVAL;
+
+	ctx->acb.new_handle = spacc_clone_handle(&priv->spacc, tctx->handle,
+						 &ctx->acb);
+
+	if (ctx->acb.new_handle < 0) {
+		spacc_hash_cleanup_dma(tctx->dev, req);
+		return -ENOMEM;
+	}
+
+	ctx->acb.tctx = tctx;
+	ctx->acb.ctx = ctx;
+	ctx->acb.req = req;
+	ctx->acb.spacc = &priv->spacc;
+
+	return 0;
+}
+
+static int spacc_hash_init_dma(struct device *dev, struct ahash_request *req)
+{
+	int rc = -1;
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(tfm);
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+
+	gfp_t mflags = GFP_ATOMIC;
+
+	if (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP)
+		mflags = GFP_KERNEL;
+
+	ctx->digest_buf = dma_pool_alloc(spacc_hash_pool, mflags,
+					 &ctx->digest_dma);
+
+	if (!ctx->digest_buf)
+		return -ENOMEM;
+
+	rc = pdu_ddt_init(dev, &ctx->dst, 1 | 0x80000000);
+	if (rc < 0) {
+		dev_err(dev, "ERR: PDU DDT init error\n");
+		rc = -EIO;
+		goto err_free_digest;
+	}
+
+	pdu_ddt_add(dev, &ctx->dst, ctx->digest_dma, SPACC_MAX_DIGEST_SIZE);
+
+	if (ctx->total_nents > 0 && ctx->single_shot) {
+		/* single shot */
+		spacc_ctx_clone_handle(req);
+
+		if (req->nbytes) {
+			rc = spacc_sg_to_ddt(dev, req->src, req->nbytes,
+					     &ctx->src, DMA_TO_DEVICE);
+		} else {
+			memset(tctx->tmp_buffer, '\0', PPP_BUF_SIZE);
+			sg_set_buf(&tctx->tmp_sgl[0], tctx->tmp_buffer,
+				   PPP_BUF_SIZE);
+			rc = spacc_sg_to_ddt(dev, &tctx->tmp_sgl[0],
+					     tctx->tmp_sgl[0].length,
+					     &ctx->src, DMA_TO_DEVICE);
+		}
+	} else if (ctx->total_nents == 0 && req->nbytes == 0) {
+		spacc_ctx_clone_handle(req);
+
+		/* zero length case */
+		memset(tctx->tmp_buffer, '\0', PPP_BUF_SIZE);
+		sg_set_buf(&tctx->tmp_sgl[0], tctx->tmp_buffer, PPP_BUF_SIZE);
+		rc = spacc_sg_to_ddt(dev, &tctx->tmp_sgl[0],
+				     tctx->tmp_sgl[0].length,
+				     &ctx->src, DMA_TO_DEVICE);
+	}
+
+	if (rc < 0)
+		goto err_free_dst;
+
+	ctx->src_nents = rc;
+
+	return rc;
+
+err_free_dst:
+	pdu_ddt_free(&ctx->dst);
+err_free_digest:
+	dma_pool_free(spacc_hash_pool, ctx->digest_buf, ctx->digest_dma);
+
+	return rc;
+}
+
+static void spacc_free_mems(struct spacc_crypto_reqctx *ctx,
+			    struct spacc_crypto_ctx *tctx,
+			    struct ahash_request *req)
+{
+	spacc_hash_cleanup_dma_dst(tctx, req);
+	spacc_hash_cleanup_dma_src(tctx, req);
+
+	if (ctx->single_shot) {
+		kfree(tctx->tmp_sgl);
+		tctx->tmp_sgl = NULL;
+
+		ctx->single_shot = 0;
+		if (ctx->total_nents)
+			ctx->total_nents = 0;
+	}
+}
+
+static void spacc_digest_cb(void *spacc, void *tfm)
+{
+	int dig_sz;
+	int err = -1;
+	struct ahash_cb_data *cb = tfm;
+	struct spacc_device *device = (struct spacc_device *)spacc;
+
+	dig_sz = crypto_ahash_digestsize(crypto_ahash_reqtfm(cb->req));
+
+	if (cb->ctx->single_shot)
+		memcpy(cb->req->result, cb->ctx->digest_buf, dig_sz);
+	else
+		memcpy(cb->tctx->digest_ctx_buf, cb->ctx->digest_buf, dig_sz);
+
+	err = cb->spacc->job[cb->new_handle].job_err;
+
+	dma_pool_free(spacc_hash_pool, cb->ctx->digest_buf,
+		      cb->ctx->digest_dma);
+	spacc_free_mems(cb->ctx, cb->tctx, cb->req);
+	spacc_close(cb->spacc, cb->new_handle);
+
+	if (cb->req->base.complete) {
+		local_bh_disable();
+		ahash_request_complete(cb->req, err);
+		local_bh_enable();
+	}
+
+	if (atomic_read(&device->wait_counter) > 0) {
+		struct spacc_completion *cur_pos, *next_pos;
+
+		/* wake up waitqueue to obtain a context */
+		atomic_dec(&device->wait_counter);
+		if (atomic_read(&device->wait_counter) > 0) {
+			mutex_lock(&device->spacc_waitq_mutex);
+			list_for_each_entry_safe(cur_pos, next_pos,
+						 &device->spacc_wait_list,
+						 list) {
+				if (cur_pos && cur_pos->wait_done == 1) {
+					cur_pos->wait_done = 0;
+					complete(&cur_pos->spacc_wait_complete);
+					list_del(&cur_pos->list);
+					break;
+				}
+			}
+			mutex_unlock(&device->spacc_waitq_mutex);
+		}
+	}
+}
+
+static int do_shash(struct device *dev, unsigned char *name,
+		    unsigned char *result, const u8 *data1,
+		    unsigned int data1_len, const u8 *data2,
+		    unsigned int data2_len, const u8 *key,
+		    unsigned int key_len)
+{
+	int rc = 0;
+	unsigned int size;
+	struct sdesc *sdesc;
+	struct crypto_shash *hash;
+
+	hash = crypto_alloc_shash(name, 0, 0);
+	if (IS_ERR(hash)) {
+		rc = PTR_ERR(hash);
+		dev_err(dev, "ERR: Crypto %s allocation error %d\n", name, rc);
+		return rc;
+	}
+
+	size = sizeof(struct shash_desc) + crypto_shash_descsize(hash);
+	sdesc = kmalloc(size, GFP_KERNEL);
+	if (!sdesc) {
+		rc = -ENOMEM;
+		goto do_shash_err;
+	}
+	sdesc->shash.tfm = hash;
+
+	if (key_len > 0) {
+		rc = crypto_shash_setkey(hash, key, key_len);
+		if (rc) {
+			dev_err(dev, "ERR: Could not setkey %s shash\n", name);
+			goto do_shash_err;
+		}
+	}
+
+	rc = crypto_shash_init(&sdesc->shash);
+	if (rc) {
+		dev_err(dev, "ERR: Could not init %s shash\n", name);
+		goto do_shash_err;
+	}
+
+	rc = crypto_shash_update(&sdesc->shash, data1, data1_len);
+	if (rc) {
+		dev_err(dev, "ERR: Could not update1\n");
+		goto do_shash_err;
+	}
+
+	if (data2 && data2_len) {
+		rc = crypto_shash_update(&sdesc->shash, data2, data2_len);
+		if (rc) {
+			dev_err(dev, "ERR: Could not update2\n");
+			goto do_shash_err;
+		}
+	}
+
+	rc = crypto_shash_final(&sdesc->shash, result);
+	if (rc)
+		dev_err(dev, "ERR: Could not generate %s hash\n", name);
+
+do_shash_err:
+	crypto_free_shash(hash);
+	kfree(sdesc);
+
+	return rc;
+}
+
+static int spacc_hash_setkey(struct crypto_ahash *tfm, const u8 *key,
+			     unsigned int keylen)
+{
+	int rc = 0;
+	int ret = 0;
+	unsigned int block_size;
+	unsigned int digest_size;
+	char hash_alg[CRYPTO_MAX_ALG_NAME];
+	const struct spacc_alg *salg = spacc_tfm_ahash(&tfm->base);
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(tfm);
+	struct spacc_priv *priv = dev_get_drvdata(tctx->dev);
+
+	block_size = crypto_tfm_alg_blocksize(&tfm->base);
+	digest_size = crypto_ahash_digestsize(tfm);
+
+	/*
+	 * We will not use the hardware in case of HMACs
+	 * This was meant for hashes but it works for cmac/xcbc since we
+	 * only intend to support 128-bit keys...
+	 */
+	if (keylen > block_size && salg->mode->id != CRYPTO_MODE_MAC_CMAC) {
+		dev_dbg(salg->dev, "Exceeds keylen: %u\n", keylen);
+		dev_dbg(salg->dev, "Req. keylen hashing %s\n",
+			salg->calg->cra_name);
+
+		memset(hash_alg, 0x00, CRYPTO_MAX_ALG_NAME);
+		switch (salg->mode->id) {
+		case CRYPTO_MODE_HMAC_SHA224:
+			rc = do_shash(salg->dev, "sha224", tctx->ipad, key,
+				      keylen, NULL, 0, NULL, 0);
+			break;
+
+		case CRYPTO_MODE_HMAC_SHA256:
+			rc = do_shash(salg->dev, "sha256", tctx->ipad, key,
+				      keylen, NULL, 0, NULL, 0);
+			break;
+
+		case CRYPTO_MODE_HMAC_SHA384:
+			rc = do_shash(salg->dev, "sha384", tctx->ipad, key,
+				      keylen, NULL, 0, NULL, 0);
+			break;
+
+		case CRYPTO_MODE_HMAC_SHA512:
+			rc = do_shash(salg->dev, "sha512", tctx->ipad, key,
+				      keylen, NULL, 0, NULL, 0);
+			break;
+
+		case CRYPTO_MODE_HMAC_MD5:
+			rc = do_shash(salg->dev, "md5", tctx->ipad, key,
+				      keylen, NULL, 0, NULL, 0);
+			break;
+
+		case CRYPTO_MODE_HMAC_SHA1:
+			rc = do_shash(salg->dev, "sha1", tctx->ipad, key,
+				      keylen, NULL, 0, NULL, 0);
+			break;
+		case CRYPTO_MODE_HMAC_SM3:
+			rc = do_shash(salg->dev, "sm3", tctx->ipad, key,
+				      keylen, NULL, 0, NULL, 0);
+			break;
+
+		default:
+			return -EINVAL;
+		}
+
+		if (rc < 0) {
+			dev_err(salg->dev, "ERR: %d computing shash for %s\n",
+				rc, hash_alg);
+			return -EIO;
+		}
+
+		keylen = digest_size;
+		dev_dbg(salg->dev, "updated keylen: %u\n", keylen);
+
+		tctx->ctx_valid = false;
+
+		if (salg->mode->sw_fb) {
+			rc = crypto_ahash_setkey(tctx->fb.hash,
+						 tctx->ipad, keylen);
+			if (rc < 0)
+				return rc;
+		}
+	} else {
+		memcpy(tctx->ipad, key, keylen);
+		tctx->ctx_valid = false;
+
+		if (salg->mode->sw_fb) {
+			rc = crypto_ahash_setkey(tctx->fb.hash, key, keylen);
+			if (rc < 0)
+				return rc;
+		}
+	}
+
+	/* close handle since key size may have changed */
+	if (tctx->handle >= 0) {
+		spacc_close(&priv->spacc, tctx->handle);
+		put_device(tctx->dev);
+		tctx->handle = -1;
+		tctx->dev = NULL;
+	}
+
+	/* reset priv */
+	priv = NULL;
+	priv = dev_get_drvdata(salg->dev);
+	tctx->dev = get_device(salg->dev);
+	ret = spacc_is_mode_keysize_supported(&priv->spacc, salg->mode->id,
+					      keylen, 1);
+	if (ret) {
+		/* Grab the spacc context if no one is waiting */
+		tctx->handle = spacc_open(&priv->spacc,
+					  CRYPTO_MODE_NULL,
+					  salg->mode->id, -1,
+					  0, spacc_digest_cb, tfm);
+		if (tctx->handle < 0) {
+			dev_err(salg->dev, "ERR: Failed to open SPAcc context\n");
+			put_device(salg->dev);
+			return -EIO;
+		}
+
+	} else {
+		dev_err(salg->dev, "Keylen: %d not enabled for algo: %d",
+			keylen, salg->mode->id);
+	}
+
+	rc = spacc_set_operation(&priv->spacc, tctx->handle, OP_ENCRYPT,
+				 ICV_HASH, IP_ICV_OFFSET, 0, 0, 0);
+	if (rc < 0) {
+		spacc_close(&priv->spacc, tctx->handle);
+		tctx->handle = -1;
+		put_device(tctx->dev);
+		return -EIO;
+	}
+
+	if (salg->mode->id == CRYPTO_MODE_MAC_XCBC ||
+	    salg->mode->id == CRYPTO_MODE_MAC_SM4_XCBC) {
+		rc = spacc_compute_xcbc_key(&priv->spacc, salg->mode->id,
+					    tctx->handle, tctx->ipad,
+					    keylen, tctx->ipad);
+		if (rc < 0) {
+			dev_err(tctx->dev,
+				"Failed to compute XCBC key: %d\n", rc);
+			return -EIO;
+		}
+		rc = spacc_write_context(&priv->spacc, tctx->handle,
+					 SPACC_HASH_OPERATION, tctx->ipad,
+					 32 + keylen, NULL, 0);
+	} else {
+		rc = spacc_write_context(&priv->spacc, tctx->handle,
+					 SPACC_HASH_OPERATION, tctx->ipad,
+					 keylen, NULL, 0);
+	}
+
+	memset(tctx->ipad, 0, sizeof(tctx->ipad));
+	if (rc < 0) {
+		dev_err(tctx->dev, "ERR: Failed to write SPAcc context\n");
+		/* Non-fatal, we continue with the software fallback */
+		return 0;
+	}
+
+	tctx->ctx_valid = true;
+
+	return 0;
+}
+
+static int spacc_set_statesize(struct spacc_alg *salg)
+{
+	unsigned int statesize = 0;
+
+	switch (salg->mode->id) {
+	case CRYPTO_MODE_HMAC_SHA1:
+	case CRYPTO_MODE_HASH_SHA1:
+		statesize = sizeof(struct sha1_state);
+		break;
+	case CRYPTO_MODE_MAC_CMAC:
+	case CRYPTO_MODE_MAC_XCBC:
+		statesize = sizeof(struct crypto_aes_ctx);
+		break;
+	case CRYPTO_MODE_MAC_SM4_CMAC:
+	case CRYPTO_MODE_MAC_SM4_XCBC:
+		statesize = sizeof(struct sm4_ctx);
+		break;
+	case CRYPTO_MODE_HMAC_MD5:
+	case CRYPTO_MODE_HASH_MD5:
+		statesize = sizeof(struct md5_state);
+		break;
+	case CRYPTO_MODE_HASH_SHA224:
+	case CRYPTO_MODE_HASH_SHA256:
+	case CRYPTO_MODE_HMAC_SHA224:
+	case CRYPTO_MODE_HMAC_SHA256:
+		statesize = sizeof(struct sha256_state);
+		break;
+	case CRYPTO_MODE_HMAC_SHA512:
+	case CRYPTO_MODE_HASH_SHA512:
+		statesize = sizeof(struct sha512_state);
+		break;
+	case CRYPTO_MODE_HMAC_SHA384:
+	case CRYPTO_MODE_HASH_SHA384:
+		statesize = sizeof(struct spacc_crypto_reqctx);
+		break;
+	case CRYPTO_MODE_HASH_SHA3_224:
+	case CRYPTO_MODE_HASH_SHA3_256:
+	case CRYPTO_MODE_HASH_SHA3_384:
+	case CRYPTO_MODE_HASH_SHA3_512:
+		statesize = sizeof(struct sha3_state);
+		break;
+	case CRYPTO_MODE_HMAC_SM3:
+	case CRYPTO_MODE_MAC_MICHAEL:
+		statesize = sizeof(struct spacc_crypto_reqctx);
+		break;
+	default:
+		break;
+	}
+
+	return statesize;
+}
+
+static int spacc_hash_init_tfm(struct crypto_ahash *tfm)
+{
+	struct spacc_priv *priv = NULL;
+	const struct spacc_alg *salg = container_of(crypto_ahash_alg(tfm),
+						    struct spacc_alg, alg.hash);
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(tfm);
+
+	tctx->handle = -1;
+	tctx->ctx_valid = false;
+	tctx->dev = get_device(salg->dev);
+	priv = dev_get_drvdata(tctx->dev);
+
+	tctx->fb.hash = crypto_alloc_ahash(crypto_ahash_alg_name(tfm), 0,
+					   CRYPTO_ALG_NEED_FALLBACK);
+	if (IS_ERR(tctx->fb.hash)) {
+		spacc_close(&priv->spacc, tctx->handle);
+		put_device(tctx->dev);
+		return PTR_ERR(tctx->fb.hash);
+	}
+
+	crypto_ahash_set_statesize(tfm,
+				   crypto_ahash_statesize(tctx->fb.hash));
+
+	crypto_ahash_set_reqsize(tfm,
+				 sizeof(struct spacc_crypto_reqctx) +
+				 crypto_ahash_reqsize(tctx->fb.hash));
+
+	return 0;
+}
+
+static void spacc_hash_exit_tfm(struct crypto_ahash *tfm)
+{
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(tfm);
+	struct spacc_priv *priv = dev_get_drvdata(tctx->dev);
+
+	crypto_free_ahash(tctx->fb.hash);
+	tctx->fb.hash = NULL;
+	if (tctx->handle >= 0)
+		spacc_close(&priv->spacc, tctx->handle);
+
+	put_device(tctx->dev);
+}
+
+static int spacc_hash_init(struct ahash_request *req)
+{
+	int rc = 0;
+	struct crypto_ahash *reqtfm = crypto_ahash_reqtfm(req);
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(reqtfm);
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+
+	ahash_request_set_tfm(&ctx->fb.hash_req, tctx->fb.hash);
+
+	ahash_request_set_callback(&ctx->fb.hash_req,
+				   CRYPTO_TFM_REQ_MAY_SLEEP,
+				   req->base.complete,
+				   req->base.data);
+
+	rc = crypto_ahash_init(&ctx->fb.hash_req);
+
+	return rc;
+}
+
+static int spacc_hash_update(struct ahash_request *req)
+{
+	int rc = 0;
+	int nbytes = req->nbytes;
+
+	struct crypto_ahash *reqtfm = crypto_ahash_reqtfm(req);
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(reqtfm);
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+
+	if (!nbytes)
+		return 0;
+
+	ahash_request_set_tfm(&ctx->fb.hash_req, tctx->fb.hash);
+
+	ahash_request_set_callback(&ctx->fb.hash_req,
+				   CRYPTO_TFM_REQ_MAY_SLEEP,
+				   req->base.complete,
+				   req->base.data);
+
+	ctx->fb.hash_req.nbytes = nbytes;
+	ctx->fb.hash_req.src = req->src;
+
+	rc = crypto_ahash_update(&ctx->fb.hash_req);
+
+	return rc;
+}
+
+static int spacc_hash_final(struct ahash_request *req)
+{
+	int rc = 0;
+	struct crypto_ahash *reqtfm = crypto_ahash_reqtfm(req);
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(reqtfm);
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+
+	ahash_request_set_tfm(&ctx->fb.hash_req, tctx->fb.hash);
+
+	ahash_request_set_callback(&ctx->fb.hash_req,
+				   CRYPTO_TFM_REQ_MAY_SLEEP,
+				   req->base.complete,
+				   req->base.data);
+
+	ctx->fb.hash_req.result = req->result;
+
+	rc = crypto_ahash_final(&ctx->fb.hash_req);
+
+	return rc;
+}
+
+static int spacc_hash_digest(struct ahash_request *req)
+{
+	int rc = 0;
+	struct crypto_ahash *reqtfm = crypto_ahash_reqtfm(req);
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(reqtfm);
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+	struct spacc_priv *priv = dev_get_drvdata(tctx->dev);
+	const struct spacc_alg *salg = spacc_tfm_ahash(&reqtfm->base);
+
+	/* direct single shot digest call */
+	ctx->single_shot = 1;
+	ctx->total_nents = sg_nents(req->src);
+
+	/* alloc tmp_sgl */
+	tctx->tmp_sgl = kmalloc(sizeof(*tctx->tmp_sgl) * 2, GFP_KERNEL);
+
+	if (!tctx->tmp_sgl)
+		return -ENOMEM;
+
+	sg_init_table(tctx->tmp_sgl, 2);
+	tctx->tmp_sgl[0].length = 0;
+
+	if (tctx->handle < 0 || !tctx->ctx_valid) {
+		priv = NULL;
+		priv = dev_get_drvdata(salg->dev);
+		tctx->dev = get_device(salg->dev);
+
+		rc = spacc_is_mode_keysize_supported(&priv->spacc,
+						     salg->mode->id, 0, 1);
+		if (rc)
+			tctx->handle = spacc_open(&priv->spacc,
+						  CRYPTO_MODE_NULL,
+						  salg->mode->id, -1, 0,
+						  spacc_digest_cb,
+						  reqtfm);
+		if (tctx->handle < 0) {
+			put_device(salg->dev);
+			dev_dbg(salg->dev,
+				"Digest:failed to open spacc context\n");
+			goto fallback;
+		}
+
+		rc = spacc_set_operation(&priv->spacc, tctx->handle,
+					 OP_ENCRYPT, ICV_HASH, IP_ICV_OFFSET,
+					 0, 0, 0);
+		if (rc < 0) {
+			spacc_close(&priv->spacc, tctx->handle);
+			dev_dbg(salg->dev,
+				"ERR: Failed to set operation\n");
+			tctx->handle = -1;
+			put_device(tctx->dev);
+			goto fallback;
+		}
+		tctx->ctx_valid = true;
+	}
+
+	rc = spacc_hash_init_dma(tctx->dev, req);
+	if (rc < 0)
+		goto fallback;
+
+	if (rc == 0) {
+		kfree(tctx->tmp_sgl);
+		tctx->tmp_sgl = NULL;
+		return 0;
+	}
+
+	rc = spacc_packet_enqueue_ddt(&priv->spacc, ctx->acb.new_handle,
+				      &ctx->src, &ctx->dst, req->nbytes,
+				      0, req->nbytes, 0, 0, 0);
+
+	if (rc < 0) {
+		spacc_hash_cleanup_dma(tctx->dev, req);
+		spacc_close(&priv->spacc, ctx->acb.new_handle);
+
+		if (rc != -EBUSY) {
+			dev_err(salg->dev, "ERR: Failed to enqueue job: %d\n",
+				rc);
+			kfree(tctx->tmp_sgl);
+			tctx->tmp_sgl = NULL;
+			return rc;
+		}
+
+		if (!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
+			return -EBUSY;
+
+		goto fallback;
+	}
+
+	return -EINPROGRESS;
+
+fallback:
+	kfree(tctx->tmp_sgl);
+	tctx->tmp_sgl = NULL;
+
+	/* start from scratch as init is not called before digest */
+	ahash_request_set_tfm(&ctx->fb.hash_req, tctx->fb.hash);
+
+	ahash_request_set_callback(&ctx->fb.hash_req,
+				   CRYPTO_TFM_REQ_MAY_SLEEP,
+				   req->base.complete,
+				   req->base.data);
+
+	ctx->fb.hash_req.nbytes = req->nbytes;
+	ctx->fb.hash_req.src = req->src;
+	ctx->fb.hash_req.result = req->result;
+
+	rc = crypto_ahash_digest(&ctx->fb.hash_req);
+
+	return rc;
+}
+
+static int spacc_hash_finup(struct ahash_request *req)
+{
+	int rc = 0;
+	struct crypto_ahash *reqtfm = crypto_ahash_reqtfm(req);
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(reqtfm);
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+
+	ahash_request_set_tfm(&ctx->fb.hash_req, tctx->fb.hash);
+
+	ahash_request_set_callback(&ctx->fb.hash_req,
+				   CRYPTO_TFM_REQ_MAY_SLEEP,
+				   req->base.complete,
+				   req->base.data);
+
+	ctx->fb.hash_req.nbytes = req->nbytes;
+	ctx->fb.hash_req.src = req->src;
+	ctx->fb.hash_req.result = req->result;
+
+	rc = crypto_ahash_finup(&ctx->fb.hash_req);
+
+	return rc;
+}
+
+static int spacc_hash_import(struct ahash_request *req, const void *in)
+{
+	int rc = 0;
+	struct crypto_ahash *reqtfm = crypto_ahash_reqtfm(req);
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(reqtfm);
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+
+	ahash_request_set_tfm(&ctx->fb.hash_req, tctx->fb.hash);
+
+	ahash_request_set_callback(&ctx->fb.hash_req,
+				   CRYPTO_TFM_REQ_MAY_SLEEP,
+				   req->base.complete,
+				   req->base.data);
+
+	rc = crypto_ahash_import(&ctx->fb.hash_req, in);
+
+	return rc;
+}
+
+static int spacc_hash_export(struct ahash_request *req, void *out)
+{
+	int rc = 0;
+	struct crypto_ahash *reqtfm = crypto_ahash_reqtfm(req);
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(reqtfm);
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+
+	ahash_request_set_tfm(&ctx->fb.hash_req, tctx->fb.hash);
+
+	ahash_request_set_callback(&ctx->fb.hash_req,
+				   CRYPTO_TFM_REQ_MAY_SLEEP,
+				   req->base.complete,
+				   req->base.data);
+
+	rc = crypto_ahash_export(&ctx->fb.hash_req, out);
+
+	return rc;
+}
+
+static const struct ahash_alg spacc_hash_template = {
+	.init = spacc_hash_init,
+	.update = spacc_hash_update,
+	.final = spacc_hash_final,
+	.finup = spacc_hash_finup,
+	.digest = spacc_hash_digest,
+	.setkey = spacc_hash_setkey,
+	.export = spacc_hash_export,
+	.import = spacc_hash_import,
+	.init_tfm = spacc_hash_init_tfm,
+	.exit_tfm = spacc_hash_exit_tfm,
+
+	.halg.base = {
+		.cra_priority = 300,
+		.cra_module = THIS_MODULE,
+		.cra_ctxsize = sizeof(struct spacc_crypto_ctx),
+		.cra_flags = CRYPTO_ALG_TYPE_AHASH |
+			     CRYPTO_ALG_ASYNC |
+			     CRYPTO_ALG_NEED_FALLBACK |
+			     CRYPTO_ALG_OPTIONAL_KEY
+	},
+};
+
+static int spacc_register_hash(struct spacc_alg *salg)
+{
+	int rc = 0;
+
+	salg->calg = &salg->alg.hash.halg.base;
+	salg->alg.hash = spacc_hash_template;
+
+	spacc_init_calg(salg->calg, salg->mode);
+	salg->alg.hash.halg.digestsize = salg->mode->hashlen;
+	salg->alg.hash.halg.statesize = spacc_set_statesize(salg);
+
+	rc = crypto_register_ahash(&salg->alg.hash);
+	if (rc < 0)
+		return rc;
+
+	mutex_lock(&spacc_hash_alg_mutex);
+	list_add(&salg->list, &spacc_hash_alg_list);
+	mutex_unlock(&spacc_hash_alg_mutex);
+
+	return 0;
+}
+
+int spacc_probe_hashes(struct platform_device *spacc_pdev)
+{
+	int rc = 0;
+	unsigned int index;
+	int registered = 0;
+	struct spacc_alg *salg;
+	struct spacc_priv *priv = dev_get_drvdata(&spacc_pdev->dev);
+
+	spacc_hash_pool = dma_pool_create("spacc-digest", &spacc_pdev->dev,
+					  SPACC_MAX_DIGEST_SIZE,
+					  SPACC_DMA_ALIGN, SPACC_DMA_BOUNDARY);
+
+	if (!spacc_hash_pool)
+		return -ENOMEM;
+
+	for (index = 0; index < ARRAY_SIZE(possible_hashes); index++)
+		possible_hashes[index].valid = 0;
+
+	for (index = 0; index < ARRAY_SIZE(possible_hashes); index++) {
+		if (possible_hashes[index].valid == 0 &&
+		    spacc_is_mode_keysize_supported(&priv->spacc,
+				possible_hashes[index].id & 0xFF,
+				possible_hashes[index].hashlen, 1)) {
+			salg = kmalloc(sizeof(*salg), GFP_KERNEL);
+			if (!salg)
+				return -ENOMEM;
+
+			salg->mode = &possible_hashes[index];
+
+			/* Copy all dev's over to the salg */
+			salg->dev = &spacc_pdev->dev;
+
+			rc = spacc_register_hash(salg);
+			if (rc < 0) {
+				kfree(salg);
+				continue;
+			}
+
+			registered++;
+			possible_hashes[index].valid = 1;
+		}
+	}
+
+	return registered;
+}
+
+int spacc_unregister_hash_algs(void)
+{
+	struct spacc_alg *salg, *tmp;
+
+	mutex_lock(&spacc_hash_alg_mutex);
+	list_for_each_entry_safe(salg, tmp, &spacc_hash_alg_list, list) {
+		crypto_unregister_alg(salg->calg);
+		list_del(&salg->list);
+		kfree(salg);
+	}
+	mutex_unlock(&spacc_hash_alg_mutex);
+
+	dma_pool_destroy(spacc_hash_pool);
+
+	return 0;
+}
-- 
2.25.1

From nobody Mon Feb 9 01:17:09 2026
From: Pavitrakumar Managutte
To: linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org,
	devicetree@vger.kernel.org, herbert@gondor.apana.org.au,
	robh@kernel.org
Cc: krzk+dt@kernel.org, conor+dt@kernel.org, Ruud.Derwig@synopsys.com,
 manjunath.hadli@vayavyalabs.com, adityak@vayavyalabs.com, Pavitrakumar Managutte
Subject: [PATCH v3 5/6] Add SPAcc AEAD support
Date: Mon, 2 Jun 2025 11:02:30 +0530
Message-Id: <20250602053231.403143-6-pavitrakumarm@vayavyalabs.com>
In-Reply-To: <20250602053231.403143-1-pavitrakumarm@vayavyalabs.com>
References: <20250602053231.403143-1-pavitrakumarm@vayavyalabs.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Add AEAD support to the SPAcc driver. Below are the supported AEAD algos:
- ccm(sm4)
- ccm(aes)
- gcm(sm4)
- gcm(aes)
- rfc7539(chacha20,poly1305)

Signed-off-by: Pavitrakumar Managutte
Signed-off-by: Manjunath Hadli
Acked-by: Ruud Derwig
---
 drivers/crypto/dwc-spacc/spacc_aead.c | 1297 +++++++++++++++++++++++++
 1 file changed, 1297 insertions(+)
 create mode 100755 drivers/crypto/dwc-spacc/spacc_aead.c

diff --git a/drivers/crypto/dwc-spacc/spacc_aead.c b/drivers/crypto/dwc-spacc/spacc_aead.c
new file mode 100755
index 000000000000..9d7589239861
--- /dev/null
+++ b/drivers/crypto/dwc-spacc/spacc_aead.c
@@ -0,0 +1,1297 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "spacc_device.h"
+#include "spacc_core.h"
+
+static LIST_HEAD(spacc_aead_alg_list);
+static DEFINE_MUTEX(spacc_aead_alg_mutex);
+
+#define ADATA_CCM_BUF		144
+#define SPACC_B0_SIZE		16
+#define SET_IV_IN_SRCBUF	0x80000000
+#define SET_IV_IN_CONTEXT	0x0
+#define AAD_BUF_SIZE		(4096 * 4)
+#define ADATA_BUF_SIZE		(AAD_BUF_SIZE + SPACC_B0_SIZE +\
+				 SPACC_MAX_IV_SIZE)
+
+struct spacc_iv_buf {
+	unsigned char iv[SPACC_MAX_IV_SIZE];
+	unsigned char spacc_adata[ADATA_BUF_SIZE];
+	struct scatterlist sg[2], spacc_adata_sg[2];
+	struct scatterlist *spacc_ptextsg, temp_aad[2];
+};
+
+static struct kmem_cache *spacc_iv_pool;
+
+static struct mode_tab possible_aeads[] = {
+	{ MODE_TAB_AEAD("rfc7539(chacha20,poly1305)",
+			CRYPTO_MODE_CHACHA20_POLY1305, CRYPTO_MODE_NULL,
+			16, 12, 1), .keylen = { 16, 24, 32 }
+	},
+	{ MODE_TAB_AEAD("gcm(aes)",
+			CRYPTO_MODE_AES_GCM, CRYPTO_MODE_NULL,
+			16, 12, 1), .keylen = { 16, 24, 32 }
+	},
+	{ MODE_TAB_AEAD("gcm(sm4)",
+			CRYPTO_MODE_SM4_GCM, CRYPTO_MODE_NULL,
+			16, 12, 1), .keylen = { 16 }
+	},
+	{ MODE_TAB_AEAD("ccm(aes)",
+			CRYPTO_MODE_AES_CCM, CRYPTO_MODE_NULL,
+			16, 16, 1), .keylen = { 16, 24, 32 }
+	},
+	{ MODE_TAB_AEAD("ccm(sm4)",
+			CRYPTO_MODE_SM4_CCM, CRYPTO_MODE_NULL,
+			16, 16, 1), .keylen = { 16, 24, 32 }
+	},
+};
+
+static void spacc_init_aead_alg(struct crypto_alg *calg,
+				const struct mode_tab *mode)
+{
+	strscpy(calg->cra_name, mode->name, sizeof(mode->name) - 1);
+	calg->cra_name[sizeof(mode->name) - 1] = '\0';
+
+	strscpy(calg->cra_driver_name, "spacc-");
+	strcat(calg->cra_driver_name, mode->name);
+	calg->cra_driver_name[sizeof(calg->cra_driver_name) - 1] = '\0';
+
+	calg->cra_blocksize = mode->blocklen;
+}
+
+static int ccm_16byte_aligned_len(int in_len)
+{
+	int len;
+	int computed_mod;
+
+	if (in_len > 0) {
+		computed_mod = in_len % 16;
+		if (computed_mod)
+			len = in_len - computed_mod + 16;
+		else
+			len = in_len;
+	} else {
+		len = in_len;
+	}
+
+	return len;
+}
+
+/* Taken from crypto/ccm.c */
+static int spacc_aead_format_adata(u8 *adata, unsigned int a)
+{
+	int len = 0;
+
+	/*
+	 * Add control info for associated data
+	 * RFC 3610 and NIST Special Publication 800-38C
+	 */
+	if (a < 65280) {
+		*(__be16 *)adata = cpu_to_be16(a);
+		len = 2;
+	} else {
+		*(__be16 *)adata = cpu_to_be16(0xfffe);
+		*(__be32 *)&adata[2] = cpu_to_be32(a);
+		len = 6;
+	}
+
+	return len;
+}
+
+/* Taken from crypto/ccm.c */
+static int spacc_aead_set_msg_len(u8 *block, unsigned int msglen, int csize)
+{
+	__be32 data;
+
+	memset(block, 0, csize);
+	block += csize;
+
+	if (csize >= 4)
+		csize = 4;
+	else if (msglen > (unsigned int)(1 << (8 * csize)))
+		return -EOVERFLOW;
+
+	data = cpu_to_be32(msglen);
+	memcpy(block - csize, (u8 *)&data + 4 - csize, csize);
+
+	return 0;
+}
+
+static int spacc_aead_init_dma(struct device *dev, struct aead_request *req,
+			       u64 seq, uint32_t icvlen, int encrypt, int *alen)
+{
+	struct crypto_aead *reqtfm = crypto_aead_reqtfm(req);
+	struct spacc_crypto_ctx *tctx = crypto_aead_ctx(reqtfm);
+	struct spacc_crypto_reqctx *ctx = aead_request_ctx(req);
+
+	int B0len;
+	int rc = 0;
+	struct spacc_iv_buf *iv;
+	int ccm_aad_16b_len = 0;
+	int payload_len, spacc_adata_sg_buf_len;
+	unsigned int ivsize = crypto_aead_ivsize(reqtfm);
+	gfp_t mflags = GFP_ATOMIC;
+
+	/* always have 1 byte of IV */
+	if (!ivsize)
+		ivsize = 1;
+
+	if (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP)
+		mflags = GFP_KERNEL;
+
+	ctx->iv_buf = kmem_cache_alloc(spacc_iv_pool, mflags);
+	if (!ctx->iv_buf)
+		return -ENOMEM;
+	iv = ctx->iv_buf;
+
+	sg_init_table(iv->sg, ARRAY_SIZE(iv->sg));
+	sg_init_table(iv->spacc_adata_sg, ARRAY_SIZE(iv->spacc_adata_sg));
+
+	B0len = 0;
+	ctx->aead_nents = 0;
+
+	memset(iv->iv, 0, SPACC_MAX_IV_SIZE);
+	memset(iv->spacc_adata, 0, ADATA_BUF_SIZE);
+
+	/* copy the IV out for AAD */
+	memcpy(iv->iv, req->iv, ivsize);
+
+	/*
+	 * Now we need to figure out the cipher IV which may or
+	 * may not be "req->iv" depending on the mode we are in
+	 */
+	if (tctx->mode & SPACC_MANGLE_IV_FLAG) {
+		switch (tctx->mode & 0x7F00) {
+		case SPACC_MANGLE_IV_RFC3686:
+		case SPACC_MANGLE_IV_RFC4106:
+		case SPACC_MANGLE_IV_RFC4543:
+		{
+			unsigned char *odata = iv->spacc_adata;
+			/*
+			 * We're in RFC3686 mode so the last
+			 * 4 bytes of the key are the SALT
+			 */
+			memcpy(odata, tctx->csalt, 4);
+			memcpy(odata + 4, req->iv, ivsize);
+
+			odata[12] = 0;
+			odata[13] = 0;
+			odata[14] = 0;
+			odata[15] = 1;
+		}
+		break;
+		case SPACC_MANGLE_IV_RFC4309:
+		{
+			unsigned char *odata = iv->spacc_adata;
+			int L, M;
+			u32 lm = req->cryptlen;
+
+			/*
+			 * CCM mode
+			 * p[0..15] is the CTR IV
+			 * p[16..31] is the CBC-MAC B0 block
+			 */
+			B0len = SPACC_B0_SIZE;
+			/* IPsec requires L=4 */
+			L = 4;
+			M = tctx->auth_size;
+
+			/* CTR block */
+			odata[0] = L - 1;
+			memcpy(odata + 1, tctx->csalt, 3);
+			memcpy(odata + 4, req->iv, ivsize);
+			odata[12] = 0;
+			odata[13] = 0;
+			odata[14] = 0;
+			odata[15] = 1;
+
+			/* store B0 block at p[16..31] */
+			odata[16] = (1 << 6) | (((M - 2) >> 1) << 3)
+				    | (L - 1);
+			memcpy(odata + 1 + 16, tctx->csalt, 3);
+			memcpy(odata + 4 + 16, req->iv, ivsize);
+
+			/* now store length */
+			odata[16 + 12 + 0] = (lm >> 24) & 0xFF;
+			odata[16 + 12 + 1] = (lm >> 16) & 0xFF;
+			odata[16 + 12 + 2] = (lm >> 8) & 0xFF;
+			odata[16 + 12 + 3] = (lm) & 0xFF;
+
+			/* now store the pre-formatted AAD */
+			odata[32] = (req->assoclen >> 8) & 0xFF;
+			odata[33] = (req->assoclen) & 0xFF;
+			/* we added 2 byte header to the AAD */
+			B0len += 2;
+		}
+		break;
+		}
+	} else if (tctx->mode == CRYPTO_MODE_AES_CCM ||
+		   tctx->mode == CRYPTO_MODE_SM4_CCM) {
+		unsigned char *odata = iv->spacc_adata;
+		u8 *orig_iv = req->iv;
+		int L, M;
+
+		u32 lm = (encrypt) ?
+			 req->cryptlen :
+			 req->cryptlen - tctx->auth_size;
+
+		/* to avoid the stale data from previous ccm operations. */
+		memset(iv->spacc_adata, 0, ADATA_CCM_BUF);
+		iv->spacc_ptextsg = req->src;
+		/*
+		 * CCM mode
+		 * p[0..15] is the CTR IV
+		 * p[16..31] is the CBC-MAC B0 block
+		 */
+		B0len = SPACC_B0_SIZE;
+
+		/* IPsec requires L=4 */
+		L = req->iv[0] + 1;
+		M = tctx->auth_size;
+
+		/*
+		 * Note: rfc 3610 and NIST 800-38C require counter of
+		 * zero to encrypt auth tag.
+		 */
+		memset(orig_iv + 15 - orig_iv[0], 0, orig_iv[0] + 1);
+
+		/* CTR block */
+		memcpy(odata, req->iv, ivsize);
+		memcpy(odata + 16, req->iv, ivsize);
+
+		/*
+		 * Taken from ccm.c
+		 * Note: rfc 3610 and NIST 800-38C require counter of
+		 * zero to encrypt auth tag.
+		 */
+
+		/* store B0 block at p[16..31] */
+		odata[16] |= (8 * ((M - 2) / 2));
+
+		/* set adata if assoclen > 0 */
+		if (req->assoclen)
+			odata[16] |= 64;
+
+		/*
+		 * Now store length, this is L size starts from 16-L
+		 * to 16 of B0
+		 */
+		spacc_aead_set_msg_len(odata + 16 + 16 - L, lm, L);
+
+		if (req->assoclen) {
+			/* store pre-formatted AAD: AAD_LEN + AAD + PAD */
+			*alen = spacc_aead_format_adata(&odata[32],
+							req->assoclen);
+
+			ccm_aad_16b_len =
+				ccm_16byte_aligned_len(req->assoclen + *alen);
+
+			/* adding the rest of AAD from req->src */
+			scatterwalk_map_and_copy(odata + 32 + *alen,
+						 req->src, 0,
+						 req->assoclen, 0);
+
+			/* copy AAD to req->dst */
+			scatterwalk_map_and_copy(odata + 32 + *alen, req->dst,
+						 0, req->assoclen, 1);
+
+			iv->spacc_ptextsg = scatterwalk_ffwd(iv->temp_aad,
+							     req->src,
+							     req->assoclen);
+		}
+
+		/*
+		 * Default is to copy the iv over since the
+		 * cipher and protocol IV are the same
+		 */
+		memcpy(iv->spacc_adata, req->iv, ivsize);
+	}
+
+	/* this is part of the AAD */
+	sg_set_buf(iv->sg, iv->iv, ivsize);
+
+	/* GCM and CCM don't include the IV in the AAD */
+	switch (tctx->mode) {
+	case CRYPTO_MODE_AES_GCM_RFC4106:
+	case CRYPTO_MODE_AES_GCM:
+	case CRYPTO_MODE_SM4_GCM_RFC8998:
+	case CRYPTO_MODE_CHACHA20_POLY1305:
+	case CRYPTO_MODE_NULL:
+
+		payload_len = req->cryptlen + icvlen + req->assoclen;
+		spacc_adata_sg_buf_len = SPACC_MAX_IV_SIZE + B0len;
+
+		/*
+		 * This is the actual IV getting fed to the core
+		 * (via IV IMPORT)
+		 */
+
+		sg_set_buf(iv->spacc_adata_sg, iv->spacc_adata,
+			   spacc_adata_sg_buf_len);
+
+		sg_chain(iv->spacc_adata_sg,
+			 sg_nents_for_len(iv->spacc_adata_sg,
+					  spacc_adata_sg_buf_len) + 1, req->src);
+
+		rc = spacc_sg_to_ddt(dev, iv->spacc_adata_sg,
+				     spacc_adata_sg_buf_len + payload_len,
+				     &ctx->src, DMA_TO_DEVICE);
+
+		if (rc < 0)
+			goto err_free_iv;
+		ctx->aead_nents = rc;
+		break;
+	case CRYPTO_MODE_AES_CCM:
+	case CRYPTO_MODE_AES_CCM_RFC4309:
+	case CRYPTO_MODE_SM4_CCM:
+
+		if (encrypt)
+			payload_len =
+				ccm_16byte_aligned_len(req->cryptlen + icvlen);
+		else
+			payload_len =
+				ccm_16byte_aligned_len(req->cryptlen);
+
+		spacc_adata_sg_buf_len = SPACC_MAX_IV_SIZE + B0len +
+					 ccm_aad_16b_len;
+
+		/*
+		 * This is the actual IV getting fed to the core (via IV IMPORT)
+		 * This has CTR IV + B0 + AAD(B1, B2, ...)
+		 */
+		sg_set_buf(iv->spacc_adata_sg, iv->spacc_adata,
+			   spacc_adata_sg_buf_len);
+		sg_chain(iv->spacc_adata_sg,
+			 sg_nents_for_len(iv->spacc_adata_sg,
+					  spacc_adata_sg_buf_len) + 1,
+			 iv->spacc_ptextsg);
+
+		rc = spacc_sg_to_ddt(dev, iv->spacc_adata_sg,
+				     spacc_adata_sg_buf_len + payload_len,
+				     &ctx->src, DMA_TO_DEVICE);
+		if (rc < 0)
+			goto err_free_iv;
+		ctx->aead_nents = rc;
+		break;
+	default:
+
+		/*
+		 * This is the actual IV getting fed to the core (via IV IMPORT)
+		 * This has CTR IV + B0 + AAD(B1, B2, ...)
+		 */
+		payload_len = req->cryptlen + icvlen + req->assoclen;
+		spacc_adata_sg_buf_len = SPACC_MAX_IV_SIZE + B0len;
+		sg_set_buf(iv->spacc_adata_sg, iv->spacc_adata,
+			   spacc_adata_sg_buf_len);
+
+		sg_chain(iv->spacc_adata_sg,
+			 sg_nents_for_len(iv->spacc_adata_sg,
+					  spacc_adata_sg_buf_len) + 1,
+			 req->src);
+
+		rc = spacc_sg_to_ddt(dev, iv->spacc_adata_sg,
+				     spacc_adata_sg_buf_len + payload_len,
+				     &ctx->src, DMA_TO_DEVICE);
+
+		if (rc < 0)
+			goto err_free_iv;
+		ctx->aead_nents = rc;
+	}
+
+	/*
+	 * Putting in req->dst is good since it won't overwrite anything
+	 * even in case of CCM this is fine condition
+	 */
+	if (req->dst != req->src) {
+		switch (tctx->mode) {
+		case CRYPTO_MODE_AES_CCM:
+		case CRYPTO_MODE_AES_CCM_RFC4309:
+		case CRYPTO_MODE_SM4_CCM:
+			/*
+			 * If req->dst buffer len is not-positive,
+			 * then skip setting up of DMA
+			 */
+			if (req->dst->length <= 0) {
+				ctx->dst_nents = 0;
+				return 0;
+			}
+
+			if (encrypt)
+				payload_len = req->cryptlen + icvlen +
+					      req->assoclen;
+			else
+				payload_len = req->cryptlen - tctx->auth_size +
+					      req->assoclen;
+			/*
+			 * For corner cases where PTlen=AADlen=0, we set default
+			 * to 16
+			 */
+			rc = spacc_sg_to_ddt(dev, req->dst,
+					     payload_len > 0 ? payload_len : 16,
+					     &ctx->dst, DMA_FROM_DEVICE);
+			if (rc < 0)
+				goto err_free_src;
+			ctx->dst_nents = rc;
+			break;
+		default:
+
+			/*
+			 * If req->dst buffer len is not-positive,
+			 * then skip setting up of DMA
+			 */
+			if (req->dst->length <= 0) {
+				ctx->dst_nents = 0;
+				return 0;
+			}
+
+			if (encrypt) {
+				payload_len = SPACC_MAX_IV_SIZE + req->cryptlen +
+					      icvlen + req->assoclen;
+			} else {
+				payload_len = req->cryptlen - tctx->auth_size +
+					      req->assoclen;
+				if (payload_len <= 0)
+					return -EBADMSG;
+			}
+
+			rc = spacc_sg_to_ddt(dev, req->dst,
+					     payload_len > 0 ? payload_len : 16,
+					     &ctx->dst, DMA_FROM_DEVICE);
+			if (rc < 0)
+				goto err_free_src;
+			ctx->dst_nents = rc;
+		}
+	}
+
+	return 0;
+
+err_free_src:
+	if (ctx->aead_nents) {
+		dma_unmap_sg(dev, iv->spacc_adata_sg, ctx->aead_nents,
+			     DMA_TO_DEVICE);
+
+		pdu_ddt_free(&ctx->src);
+	}
+
+err_free_iv:
+	kmem_cache_free(spacc_iv_pool, ctx->iv_buf);
+
+	return rc;
+}
+
+static void spacc_aead_cleanup_dma(struct device *dev, struct aead_request *req)
+{
+	struct spacc_crypto_reqctx *ctx = aead_request_ctx(req);
+	struct spacc_iv_buf *iv = ctx->iv_buf;
+
+	if (req->src != req->dst && ctx->dst_nents > 0) {
+		dma_unmap_sg(dev, req->dst, ctx->dst_nents,
+			     DMA_FROM_DEVICE);
+		pdu_ddt_free(&ctx->dst);
+	}
+
+	if (ctx->aead_nents) {
+		dma_unmap_sg(dev, iv->spacc_adata_sg, ctx->aead_nents,
+			     DMA_TO_DEVICE);
+
+		pdu_ddt_free(&ctx->src);
+	}
+
+	kmem_cache_free(spacc_iv_pool, ctx->iv_buf);
+}
+
+static bool spacc_check_keylen(const struct spacc_alg *salg,
+			       unsigned int keylen)
+{
+	unsigned int bit_position, mask = salg->keylen_mask;
+
+	if (mask > (1ul << ARRAY_SIZE(salg->mode->keylen)) - 1)
+		return false;
+
+	for (bit_position = 0; mask; bit_position++, mask >>= 1) {
+		if (mask & 1 && salg->mode->keylen[bit_position] == keylen)
+			return true;
+	}
+
+	return false;
+}
+
+static void spacc_aead_cb(void *spacc, void *tfm)
+{
+	int err = -1;
+	struct aead_cb_data *cb = tfm;
+	struct spacc_device *device = (struct spacc_device *)spacc;
+
+	u32 status_reg = readl(cb->spacc->regmap + SPACC_REG_STATUS);
+
+	/*
+	 * Extract RET_CODE field (bits 25:24) from STATUS register to check
+	 * result of the crypto operation.
+	 */
+	u32 status_ret = (status_reg >> 24) & 0x3;
+
+	dma_sync_sg_for_cpu(cb->tctx->dev, cb->req->dst,
+			    cb->ctx->dst_nents, DMA_FROM_DEVICE);
+
+	/* ICV mismatch send bad msg */
+	if (status_ret == 0x1) {
+		err = -EBADMSG;
+		goto REQ_DST_CP_SKIP;
+	}
+	err = cb->spacc->job[cb->new_handle].job_err;
+
+REQ_DST_CP_SKIP:
+	spacc_aead_cleanup_dma(cb->tctx->dev, cb->req);
+	spacc_close(cb->spacc, cb->new_handle);
+
+	/* call complete */
+	local_bh_disable();
+	aead_request_complete(cb->req, err);
+	local_bh_enable();
+
+	if (atomic_read(&device->wait_counter) > 0) {
+		struct spacc_completion *cur_pos, *next_pos;
+
+		/* wake up waitqueue to obtain a context */
+		atomic_dec(&device->wait_counter);
+		if (atomic_read(&device->wait_counter) > 0) {
+			mutex_lock(&device->spacc_waitq_mutex);
+			list_for_each_entry_safe(cur_pos, next_pos,
+						 &device->spacc_wait_list,
+						 list) {
+				if (cur_pos && cur_pos->wait_done == 1) {
+					cur_pos->wait_done = 0;
+					complete(&cur_pos->spacc_wait_complete);
+					list_del(&cur_pos->list);
+					break;
+				}
+			}
+			mutex_unlock(&device->spacc_waitq_mutex);
+		}
+	}
+}
+
+static int spacc_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+			     unsigned int keylen)
+{
+	int err = 0;
+	int ret = 0;
+	int singlekey = 0;
+	unsigned char xcbc[64];
+	unsigned int enckeylen;
+	unsigned int authkeylen;
+	const unsigned char *authkey, *enckey;
+	struct spacc_crypto_ctx *ctx = crypto_aead_ctx(tfm);
+	const struct spacc_alg *salg = spacc_tfm_aead(&tfm->base);
+	struct crypto_authenc_keys authenc_keys;
+	struct spacc_priv *priv;
+
+	/* are keylens valid? */
+	ctx->ctx_valid = false;
+
+	switch (ctx->mode & 0xFF) {
+	case CRYPTO_MODE_SM4_GCM:
+	case CRYPTO_MODE_SM4_CCM:
+	case CRYPTO_MODE_NULL:
+	case CRYPTO_MODE_AES_GCM:
+	case CRYPTO_MODE_AES_CCM:
+	case CRYPTO_MODE_CHACHA20_POLY1305:
+		authkey = key;
+		authkeylen = 0;
+		enckey = key;
+		enckeylen = keylen;
+		ctx->keylen = keylen;
+		singlekey = 1;
+		goto skipover;
+	}
+
+	err = crypto_authenc_extractkeys(&authenc_keys, key, keylen);
+	if (err)
+		return err;
+
+	authkeylen = authenc_keys.authkeylen;
+	authkey = authenc_keys.authkey;
+	enckeylen = authenc_keys.enckeylen;
+	enckey = authenc_keys.enckey;
+
+skipover:
+	/* detect RFC3686/4106 and trim from enckeylen(and copy salt..) */
+	if (ctx->mode & SPACC_MANGLE_IV_FLAG) {
+		switch (ctx->mode & 0x7F00) {
+		case SPACC_MANGLE_IV_RFC3686:
+		case SPACC_MANGLE_IV_RFC4106:
+		case SPACC_MANGLE_IV_RFC4543:
+			memcpy(ctx->csalt, enckey + enckeylen - 4, 4);
+			enckeylen -= 4;
+			break;
+		case SPACC_MANGLE_IV_RFC4309:
+			memcpy(ctx->csalt, enckey + enckeylen - 3, 3);
+			enckeylen -= 3;
+			break;
+		}
+	}
+
+	if (!singlekey) {
+		if (authkeylen > salg->mode->hashlen) {
+			dev_warn(ctx->dev, "Auth key size of %u is not valid\n",
+				 authkeylen);
+			return -EINVAL;
+		}
+	}
+
+	if (!spacc_check_keylen(salg, enckeylen)) {
+		dev_dbg(ctx->dev, "Enc key size of %u is not valid\n",
+			enckeylen);
+		return -EINVAL;
+	}
+
+	/*
+	 * if we're already open close the handle since
+	 * the size may have changed
+	 */
+	if (ctx->handle != -1) {
+		priv = dev_get_drvdata(ctx->dev);
+		spacc_close(&priv->spacc, ctx->handle);
+		put_device(ctx->dev);
+		ctx->handle = -1;
+	}
+
+	/* reset priv */
+	priv = NULL;
+	priv = dev_get_drvdata(salg->dev);
+
+	/* increase reference */
+	ctx->dev = get_device(salg->dev);
+
+	/* check if its a valid mode */
+	ret = (spacc_is_mode_keysize_supported(&priv->spacc,
+					       salg->mode->aead.ciph & 0xFF,
+					       enckeylen, 0) &&
+	       spacc_is_mode_keysize_supported
+	       (&priv->spacc,
+		salg->mode->aead.hash & 0xFF,
+		authkeylen, 0));
+
+	if (ret) {
+		/* try to open spacc handle */
+		ctx->handle = spacc_open(&priv->spacc,
+					 salg->mode->aead.ciph & 0xFF,
+					 salg->mode->aead.hash & 0xFF,
+					 -1, 0, spacc_aead_cb, tfm);
+		if (ctx->handle < 0) {
+			put_device(salg->dev);
+			dev_dbg(salg->dev, "Failed to open SPAcc context\n");
+			return -EIO;
+		}
+	} else {
+		dev_err(salg->dev, " Keylen: %d not enabled for algo: %d",
+			keylen, salg->mode->id);
+	}
+
+	/* setup XCBC key */
+	if (salg->mode->aead.hash == CRYPTO_MODE_MAC_XCBC) {
+		err = spacc_compute_xcbc_key(&priv->spacc,
+					     salg->mode->aead.hash,
+					     ctx->handle, authkey,
+					     authkeylen, xcbc);
+		if (err < 0) {
+			dev_warn(ctx->dev, "Failed to compute XCBC key: %d\n",
+				 err);
+			return -EIO;
+		}
+		authkey = xcbc;
+		authkeylen = 48;
+	}
+
+	/* handle zero key/zero len DEC condition for SM4/AES GCM mode */
+	ctx->zero_key = 0;
+	if (!key[0]) {
+		int index, val = 0;
+
+		for (index = 0; index < keylen; index++)
+			val += key[index];
+
+		if (val == 0)
+			ctx->zero_key = 1;
+	}
+
+	err = spacc_write_context(&priv->spacc, ctx->handle,
+				  SPACC_CRYPTO_OPERATION, enckey,
+				  enckeylen, NULL, 0);
+
+	if (err) {
+		dev_warn(ctx->dev,
+			 "Could not write cipher context: %d\n", err);
+		return -EIO;
+	}
+
+	if (!singlekey) {
+		err = spacc_write_context(&priv->spacc, ctx->handle,
+					  SPACC_HASH_OPERATION, authkey,
+					  authkeylen, NULL, 0);
+		if (err) {
+			dev_warn(ctx->dev,
+				 "Could not write hashing context: %d\n", err);
+			return -EIO;
+		}
+	}
+
+	/* set expand key */
+	spacc_set_key_exp(&priv->spacc, ctx->handle);
+	ctx->ctx_valid = true;
+
+	memset(xcbc, 0, sizeof(xcbc));
+
+	/* copy key to ctx for fallback */
+	memcpy(ctx->key, key, keylen);
+
+	return 0;
+}
+
+static int spacc_aead_setauthsize(struct crypto_aead *tfm,
+				  unsigned int authsize)
+{
+	struct spacc_crypto_ctx *ctx = crypto_aead_ctx(tfm);
+
+	ctx->auth_size = authsize;
+
+	/* Taken from crypto/ccm.c */
+	switch (ctx->mode) {
+	case CRYPTO_MODE_SM4_GCM:
+	case CRYPTO_MODE_AES_GCM:
+		switch (authsize) {
+		case 4:
+		case 8:
+		case 12:
+		case 13:
+		case 14:
+		case 15:
+		case 16:
+			break;
+		default:
+			return -EINVAL;
+		}
+		break;
+
+	case CRYPTO_MODE_AES_CCM:
+	case CRYPTO_MODE_SM4_CCM:
+		switch (authsize) {
+		case 4:
+		case 6:
+		case 8:
+		case 10:
+		case 12:
+		case 14:
+		case 16:
+			break;
+		default:
+			return -EINVAL;
+		}
+		break;
+
+	case CRYPTO_MODE_CHACHA20_POLY1305:
+		switch (authsize) {
+		case 16:
+			break;
+		default:
+			return -EINVAL;
+		}
+		break;
+	}
+
+	return 0;
+}
+
+static int spacc_aead_fallback(struct aead_request *req,
+			       struct spacc_crypto_ctx *ctx, int encrypt)
+{
+	int ret = 0;
+	struct aead_request *subreq = aead_request_ctx(req);
+	struct crypto_aead *reqtfm = crypto_aead_reqtfm(req);
+	struct aead_alg *alg = crypto_aead_alg(reqtfm);
+	const char *aead_name = alg->base.cra_name;
+
+	ctx->fb.aead = crypto_alloc_aead(aead_name, 0,
+					 CRYPTO_ALG_NEED_FALLBACK |
+					 CRYPTO_ALG_ASYNC);
+	if (IS_ERR(ctx->fb.aead)) {
+		dev_err(ctx->dev, "Spacc aead fallback tfm is NULL!\n");
+		return PTR_ERR(ctx->fb.aead);
+	}
+
+	subreq = aead_request_alloc(ctx->fb.aead, GFP_KERNEL);
+	if (!subreq)
+		return -ENOMEM;
+
+	ret = crypto_aead_setkey(ctx->fb.aead, ctx->key, ctx->keylen);
+	if (ret)
+		dev_err(ctx->dev, "fallback aead setkey() returned:%d\n", ret);
+
+	crypto_aead_setauthsize(ctx->fb.aead, ctx->auth_size);
+
+	aead_request_set_tfm(subreq, ctx->fb.aead);
+	aead_request_set_callback(subreq, req->base.flags,
+				  req->base.complete, req->base.data);
+	aead_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
+			       req->iv);
+	aead_request_set_ad(subreq, req->assoclen);
+
+	if (encrypt)
+		ret = crypto_aead_encrypt(subreq);
+	else
+		ret = crypto_aead_decrypt(subreq);
+
+	aead_request_free(subreq);
+	crypto_free_aead(ctx->fb.aead);
+	ctx->fb.aead = NULL;
+
+	return ret;
+}
+
+static int spacc_aead_process(struct aead_request *req, u64 seq, int encrypt)
+{
+	int rc = 0;
+	int B0len;
+	int alen = 0;
+	u32 dstoff;
+	int icvremove;
+	int ivaadsize;
+	int ptaadsize = 0;
+	int iv_to_context;
+	int spacc_proc_len;
+	u32 spacc_icv_offset = 0;
+	int spacc_pre_aad_size;
+	int ccm_aad_16b_len = 0;
+	struct crypto_aead *reqtfm = crypto_aead_reqtfm(req);
+	int ivsize = crypto_aead_ivsize(reqtfm);
+	struct spacc_crypto_ctx *tctx = crypto_aead_ctx(reqtfm);
+	struct spacc_crypto_reqctx *ctx = aead_request_ctx(req);
+	struct spacc_priv *priv = dev_get_drvdata(tctx->dev);
+
+	ctx->encrypt_op = encrypt;
+
+	if (tctx->handle < 0 || !tctx->ctx_valid || (req->cryptlen +
+	    req->assoclen) > priv->max_msg_len)
+		return -EINVAL;
+
+	/* IV is programmed to context by default */
+	iv_to_context = SET_IV_IN_CONTEXT;
+
+	if (encrypt) {
+		switch (tctx->mode & 0xFF) {
+		case CRYPTO_MODE_AES_GCM:
+		case CRYPTO_MODE_SM4_GCM:
+		case CRYPTO_MODE_CHACHA20_POLY1305:
+			/* For cryptlen = 0 */
+			if (req->cryptlen + req->assoclen == 0)
+				return spacc_aead_fallback(req, tctx, encrypt);
+			break;
+		case CRYPTO_MODE_AES_CCM:
+		case CRYPTO_MODE_SM4_CCM:
+
+			if (req->cryptlen + req->assoclen == 0)
+				return spacc_aead_fallback(req, tctx, encrypt);
+
+			/*
+			 * verify that msglen can in fact be represented
+			 * in L bytes
+			 *
+			 * 2 <= L <= 8, so 1 <= L' <= 7
+			 */
+			if (req->iv[0] < 1 || req->iv[0] > 7)
+				return -EINVAL;
+
+			break;
+		default:
+			return -EINVAL;
+		}
+	} else {
+		/* handle decryption */
+		switch (tctx->mode & 0xFF) {
+		case CRYPTO_MODE_AES_GCM:
+		case CRYPTO_MODE_SM4_GCM:
+		case CRYPTO_MODE_CHACHA20_POLY1305:
+			/* for assoclen = 0 */
+			if (req->assoclen == 0 &&
+			    (req->cryptlen - tctx->auth_size == 0))
+				return spacc_aead_fallback(req, tctx, encrypt);
+			break;
+		case CRYPTO_MODE_AES_CCM:
+		case CRYPTO_MODE_SM4_CCM:
+			if (req->assoclen == 0 &&
+			    (req->cryptlen - tctx->auth_size == 0))
+				return spacc_aead_fallback(req, tctx, encrypt);
+			/* 2 <= L <= 8, so 1 <= L' <= 7 */
+			if (req->iv[0] < 1 || req->iv[0] > 7)
+				return -EINVAL;
+			break;
+		default:
+			return -EINVAL;
+		}
+	}
+
+	icvremove = (encrypt) ? 0 : tctx->auth_size;
+
+	rc = spacc_aead_init_dma(tctx->dev, req, seq, (encrypt) ?
+				 tctx->auth_size : 0, encrypt, &alen);
+	if (rc < 0)
+		return -EINVAL;
+
+	if (req->assoclen)
+		ccm_aad_16b_len = ccm_16byte_aligned_len(req->assoclen + alen);
+
+	/* Note: This won't work if IV_IMPORT has been disabled */
+	ctx->cb.new_handle = spacc_clone_handle(&priv->spacc, tctx->handle,
+						&ctx->cb);
+	if (ctx->cb.new_handle < 0) {
+		spacc_aead_cleanup_dma(tctx->dev, req);
+		return -EINVAL;
+	}
+
+	ctx->cb.tctx = tctx;
+	ctx->cb.ctx = ctx;
+	ctx->cb.req = req;
+	ctx->cb.spacc = &priv->spacc;
+
+	/*
+	 * Write IV to the spacc-context
+	 * IV can be written to context or as part of the input src buffer
+	 * IV in case of CCM is going in the input src buff.
+	 * IV for GCM is written to the context.
+	 */
+	if (tctx->mode == CRYPTO_MODE_AES_GCM_RFC4106 ||
+	    tctx->mode == CRYPTO_MODE_AES_GCM ||
+	    tctx->mode == CRYPTO_MODE_SM4_GCM_RFC8998 ||
+	    tctx->mode == CRYPTO_MODE_CHACHA20_POLY1305 ||
+	    tctx->mode == CRYPTO_MODE_NULL) {
+		iv_to_context = SET_IV_IN_CONTEXT;
+		rc = spacc_write_context(&priv->spacc, ctx->cb.new_handle,
+					 SPACC_CRYPTO_OPERATION, NULL, 0,
+					 req->iv, ivsize);
+
+		if (rc < 0) {
+			spacc_aead_cleanup_dma(tctx->dev, req);
+			spacc_close(&priv->spacc, ctx->cb.new_handle);
+			return -EINVAL;
+		}
+	}
+
+	/* CCM and GCM don't include the IV in the AAD */
+	if (tctx->mode == CRYPTO_MODE_AES_GCM_RFC4106 ||
+	    tctx->mode == CRYPTO_MODE_AES_CCM_RFC4309 ||
+	    tctx->mode == CRYPTO_MODE_AES_GCM ||
+	    tctx->mode == CRYPTO_MODE_AES_CCM ||
+	    tctx->mode == CRYPTO_MODE_SM4_CCM ||
+	    tctx->mode == CRYPTO_MODE_SM4_GCM_RFC8998 ||
+	    tctx->mode == CRYPTO_MODE_CHACHA20_POLY1305 ||
+	    tctx->mode == CRYPTO_MODE_NULL) {
+		ivaadsize = 0;
+	} else {
+		ivaadsize = ivsize;
+	}
+	/* CCM requires an extra block of AAD */
+	if (tctx->mode == CRYPTO_MODE_AES_CCM_RFC4309 ||
+	    tctx->mode == CRYPTO_MODE_AES_CCM ||
+	    tctx->mode == CRYPTO_MODE_SM4_CCM)
+		B0len = SPACC_B0_SIZE;
+	else
+		B0len = 0;
+
+	/*
+	 * GMAC mode uses AAD for the entire message.
+	 * So does NULL cipher
+	 */
+	if (tctx->mode == CRYPTO_MODE_AES_GCM_RFC4543 ||
+	    tctx->mode == CRYPTO_MODE_NULL) {
+		if (req->cryptlen >= icvremove)
+			ptaadsize = req->cryptlen - icvremove;
+	}
+
+	/*
+	 * Calculate and set the below, important parameters
+	 * spacc icv offset - spacc_icv_offset
+	 * destination offset - dstoff
+	 * IV to context - This is set for CCM, not set for GCM
+	 */
+	if (req->dst == req->src) {
+		dstoff = ((uint32_t)(SPACC_MAX_IV_SIZE + B0len +
+				     req->assoclen + ivaadsize));
+
+		/* CCM case */
+		if (tctx->mode == CRYPTO_MODE_AES_CCM_RFC4309 ||
+		    tctx->mode == CRYPTO_MODE_AES_CCM ||
+		    tctx->mode == CRYPTO_MODE_SM4_CCM) {
+			iv_to_context = SET_IV_IN_SRCBUF;
+			dstoff = ((uint32_t)(SPACC_MAX_IV_SIZE + B0len +
+					     ccm_aad_16b_len + ivaadsize));
+		}
+	} else {
+		dstoff = ((uint32_t)(req->assoclen + ivaadsize));
+
+		/* CCM case */
+		if (tctx->mode == CRYPTO_MODE_AES_CCM_RFC4309 ||
+		    tctx->mode == CRYPTO_MODE_AES_CCM ||
+		    tctx->mode == CRYPTO_MODE_SM4_CCM) {
+			iv_to_context = SET_IV_IN_SRCBUF;
+			dstoff = ((uint32_t)(req->assoclen + ivaadsize));
+		}
+	}
+
+	/*
+	 * Calculate and set the below, important parameters
+	 * spacc proc_len - spacc_proc_len
+	 * pre-AAD size - spacc_pre_aad_size
+	 */
+	if (tctx->mode == CRYPTO_MODE_AES_CCM ||
+	    tctx->mode == CRYPTO_MODE_SM4_CCM ||
+	    tctx->mode == CRYPTO_MODE_AES_CCM_RFC4309 ||
+	    tctx->mode == CRYPTO_MODE_SM4_CCM_RFC8998) {
+		spacc_proc_len = B0len + ccm_aad_16b_len +
+				 req->cryptlen + ivaadsize
+				 - icvremove;
+		spacc_pre_aad_size = B0len + ccm_aad_16b_len +
+				     ivaadsize + ptaadsize;
+	} else {
+		spacc_proc_len = B0len + req->assoclen +
+				 req->cryptlen - icvremove +
+				 ivaadsize;
+		spacc_pre_aad_size = B0len + req->assoclen +
+				     ivaadsize + ptaadsize;
+	}
+
+	rc = spacc_set_operation(&priv->spacc,
+				 ctx->cb.new_handle,
+				 encrypt ? OP_ENCRYPT : OP_DECRYPT,
+				 ICV_ENCRYPT_HASH, IP_ICV_APPEND,
+				 spacc_icv_offset,
+				 tctx->auth_size, 0);
+
+	rc = spacc_packet_enqueue_ddt(&priv->spacc, ctx->cb.new_handle,
+				      &ctx->src,
+				      (req->dst == req->src) ? &ctx->src :
+				      &ctx->dst, spacc_proc_len,
+				      (dstoff << SPACC_OFFSET_DST_O) |
+				      SPACC_MAX_IV_SIZE,
+				      spacc_pre_aad_size,
+				      0, iv_to_context, 0);
+
+	if (rc < 0) {
+		spacc_aead_cleanup_dma(tctx->dev, req);
+		spacc_close(&priv->spacc, ctx->cb.new_handle);
+
+		if (rc != -EBUSY) {
+			dev_err(tctx->dev, " failed to enqueue job, ERR: %d\n",
+				rc);
+		}
+
+		if (!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
+			return -EBUSY;
+
+		return -EINVAL;
+	}
+
+	/*
+	 * At this point the job is in flight to the engine ... remove first use
+	 * so subsequent calls don't expand the key again... ideally we would
+	 * pump a dummy job through the engine to pre-expand the key so that by
+	 * the time setkey was done we wouldn't have to do this.
+	 */
+	priv->spacc.job[tctx->handle].first_use = 0;
+	priv->spacc.job[tctx->handle].ctrl &= ~(1UL
+		<< priv->spacc.config.ctrl_map[SPACC_CTRL_KEY_EXP]);
+
+	return -EINPROGRESS;
+}
+
+static int spacc_aead_encrypt(struct aead_request *req)
+{
+	return spacc_aead_process(req, 0ULL, 1);
+}
+
+static int spacc_aead_decrypt(struct aead_request *req)
+{
+	return spacc_aead_process(req, 0ULL, 0);
+}
+
+static int spacc_aead_init(struct crypto_aead *tfm)
+{
+	struct spacc_crypto_ctx *ctx = crypto_aead_ctx(tfm);
+	const struct spacc_alg *salg = spacc_tfm_aead(&tfm->base);
+
+	ctx->zero_key = 0;
+	ctx->fb.aead = NULL;
+	ctx->handle = -1;
+	ctx->mode = salg->mode->aead.ciph;
+	ctx->dev = get_device(salg->dev);
+
+	crypto_aead_set_reqsize(tfm, sizeof(struct spacc_crypto_reqctx));
+
+	return 0;
+}
+
+static void spacc_aead_exit(struct crypto_aead *tfm)
+{
+	struct spacc_crypto_ctx *ctx = crypto_aead_ctx(tfm);
+	struct spacc_priv *priv = dev_get_drvdata(ctx->dev);
+
+	ctx->fb.aead = NULL;
+	/* close spacc handle */
+	if (ctx->handle >= 0) {
+		spacc_close(&priv->spacc, ctx->handle);
+		ctx->handle = -1;
+	}
+
+	put_device(ctx->dev);
+}
+
+static struct aead_alg spacc_aead_algs = {
+	.setkey = spacc_aead_setkey,
+	.setauthsize = spacc_aead_setauthsize,
+	.encrypt = spacc_aead_encrypt,
+	.decrypt = spacc_aead_decrypt,
+	.init = spacc_aead_init,
+	.exit = spacc_aead_exit,
+
+	.base.cra_priority = 300,
+	.base.cra_module = THIS_MODULE,
+	.base.cra_ctxsize = sizeof(struct spacc_crypto_ctx),
+	.base.cra_flags = CRYPTO_ALG_TYPE_AEAD |
+			  CRYPTO_ALG_ASYNC |
+			  CRYPTO_ALG_NEED_FALLBACK |
+			  CRYPTO_ALG_KERN_DRIVER_ONLY |
+			  CRYPTO_ALG_OPTIONAL_KEY
+};
+
+static int spacc_register_aead(unsigned int aead_mode,
+			       struct platform_device *spacc_pdev)
+{
+	int rc = 0;
+	struct spacc_alg *salg;
+
+	salg = kmalloc(sizeof(*salg), GFP_KERNEL);
+	if (!salg)
+		return -ENOMEM;
+
+	salg->mode = &possible_aeads[aead_mode];
+	salg->dev = &spacc_pdev->dev;
+	salg->calg = &salg->alg.aead.base;
+	salg->alg.aead = spacc_aead_algs;
+
+	spacc_init_aead_alg(salg->calg, salg->mode);
+
+	salg->alg.aead.ivsize = salg->mode->ivlen;
+	salg->alg.aead.maxauthsize = salg->mode->hashlen;
+	salg->alg.aead.base.cra_blocksize = salg->mode->blocklen;
+
+	salg->keylen_mask = possible_aeads[aead_mode].keylen_mask;
+
+	if (salg->mode->aead.ciph & SPACC_MANGLE_IV_FLAG) {
+		switch (salg->mode->aead.ciph & 0x7F00) {
+		case SPACC_MANGLE_IV_RFC3686: /* CTR */
+		case SPACC_MANGLE_IV_RFC4106: /* GCM */
+		case SPACC_MANGLE_IV_RFC4543: /* GMAC */
+		case SPACC_MANGLE_IV_RFC4309: /* CCM */
+		case SPACC_MANGLE_IV_RFC8998: /* GCM/CCM */
+			salg->alg.aead.ivsize = 12;
+			break;
+		}
+	}
+
+	rc = crypto_register_aead(&salg->alg.aead);
+	if (rc < 0) {
+		kfree(salg);
+		return rc;
+	}
+
+	mutex_lock(&spacc_aead_alg_mutex);
+	list_add(&salg->list, &spacc_aead_alg_list);
+	mutex_unlock(&spacc_aead_alg_mutex);
+
+	return 0;
+}
+
+int spacc_probe_aeads(struct platform_device *spacc_pdev)
+{
+	int err = 0;
+	unsigned int x, y;
+	struct spacc_priv *priv = NULL;
+
+	size_t alloc_size = max_t(unsigned long,
+				  roundup_pow_of_two(sizeof(struct spacc_iv_buf)),
+				  dma_get_cache_alignment());
+
+	spacc_iv_pool = kmem_cache_create("spacc-aead-iv", alloc_size,
+					  alloc_size, 0, NULL);
+
+	if (!spacc_iv_pool)
+		return -ENOMEM;
+
+	for (x = 0; x < ARRAY_SIZE(possible_aeads); x++) {
+		possible_aeads[x].keylen_mask = 0;
+		possible_aeads[x].valid = 0;
+	}
+
+	/* compute cipher key masks */
+	priv = dev_get_drvdata(&spacc_pdev->dev);
+
+	for (x = 0; x < ARRAY_SIZE(possible_aeads); x++) {
+		for (y = 0; y < ARRAY_SIZE(possible_aeads[x].keylen); y++) {
+			if (spacc_is_mode_keysize_supported(&priv->spacc,
+					possible_aeads[x].aead.ciph & 0xFF,
+					possible_aeads[x].keylen[y], 0))
+				possible_aeads[x].keylen_mask |= 1u << y;
+		}
+	}
+
+	/* scan for combined modes */
+	priv = dev_get_drvdata(&spacc_pdev->dev);
+
+	for (x = 0; x < ARRAY_SIZE(possible_aeads); x++) {
+		if (!possible_aeads[x].valid && possible_aeads[x].keylen_mask &&
+		    spacc_is_mode_keysize_supported(&priv->spacc,
+				possible_aeads[x].aead.hash & 0xFF,
+				possible_aeads[x].hashlen, 0)) {
+			possible_aeads[x].valid = 1;
+			err = spacc_register_aead(x, spacc_pdev);
+			if (err < 0)
+				goto error;
+		}
+	}
+
+	return 0;
+
+error:
+	return err;
+}
+
+int spacc_unregister_aead_algs(void)
+{
+	struct spacc_alg *salg, *tmp;
+
+	mutex_lock(&spacc_aead_alg_mutex);
+
+	list_for_each_entry_safe(salg, tmp, &spacc_aead_alg_list, list) {
+		crypto_unregister_alg(salg->calg);
+		list_del(&salg->list);
+		kfree(salg);
+	}
+
+	mutex_unlock(&spacc_aead_alg_mutex);
+
+	kmem_cache_destroy(spacc_iv_pool);
+
+	return 0;
+}
-- 
2.25.1
From: Pavitrakumar Managutte
To: linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org,
	devicetree@vger.kernel.org, herbert@gondor.apana.org.au,
	robh@kernel.org
Cc: krzk+dt@kernel.org, conor+dt@kernel.org, Ruud.Derwig@synopsys.com,
	manjunath.hadli@vayavyalabs.com, adityak@vayavyalabs.com,
	Pavitrakumar Managutte
Subject: [PATCH v3 6/6] Add SPAcc Kconfig and Makefile
Date: Mon, 2 Jun 2025 11:02:31 +0530
Message-Id: <20250602053231.403143-7-pavitrakumarm@vayavyalabs.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20250602053231.403143-1-pavitrakumarm@vayavyalabs.com>
References: <20250602053231.403143-1-pavitrakumarm@vayavyalabs.com>

Add Makefile and Kconfig for the SPAcc driver.
Signed-off-by: Pavitrakumar Managutte
Acked-by: Ruud Derwig
---
 drivers/crypto/Kconfig            |   1 +
 drivers/crypto/Makefile           |   1 +
 drivers/crypto/dwc-spacc/Kconfig  | 103 ++++++++++++++++++++++++++++++
 drivers/crypto/dwc-spacc/Makefile |  16 +++++
 4 files changed, 121 insertions(+)
 create mode 100644 drivers/crypto/dwc-spacc/Kconfig
 create mode 100644 drivers/crypto/dwc-spacc/Makefile

diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index 5686369779be..f3074218a4de 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -754,6 +754,7 @@ config CRYPTO_DEV_BCM_SPU
 	  ahash, and aead algorithms with the kernel cryptographic API.
 
 source "drivers/crypto/stm32/Kconfig"
+source "drivers/crypto/dwc-spacc/Kconfig"
 
 config CRYPTO_DEV_SAFEXCEL
 	tristate "Inside Secure's SafeXcel cryptographic engine driver"
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 22eadcc8f4a2..c933b309e359 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -43,6 +43,7 @@ obj-$(CONFIG_CRYPTO_DEV_BCM_SPU) += bcm/
 obj-y += inside-secure/
 obj-$(CONFIG_CRYPTO_DEV_ARTPEC6) += axis/
 obj-y += xilinx/
+obj-y += dwc-spacc/
 obj-y += hisilicon/
 obj-$(CONFIG_CRYPTO_DEV_AMLOGIC_GXL) += amlogic/
 obj-y += intel/
diff --git a/drivers/crypto/dwc-spacc/Kconfig b/drivers/crypto/dwc-spacc/Kconfig
new file mode 100644
index 000000000000..e43309fd76a3
--- /dev/null
+++ b/drivers/crypto/dwc-spacc/Kconfig
@@ -0,0 +1,103 @@
+# SPDX-License-Identifier: GPL-2.0-only
+
+config CRYPTO_DEV_SPACC
+	tristate "Support for dw_spacc Security Protocol Accelerator"
+	depends on HAS_DMA
+	default n
+	help
+	  This enables support for the SPAcc hardware accelerator.
+
+config CRYPTO_DEV_SPACC_CIPHER
+	bool "Enable CIPHER functionality"
+	depends on CRYPTO_DEV_SPACC
+	default y
+	select CRYPTO_SKCIPHER
+	select CRYPTO_LIB_DES
+	select CRYPTO_AES
+	select CRYPTO_CBC
+	select CRYPTO_ECB
+	select CRYPTO_CTR
+	select CRYPTO_XTS
+	select CRYPTO_CTS
+	select CRYPTO_OFB
+	select CRYPTO_CFB
+	select CRYPTO_SM4_GENERIC
+	select CRYPTO_CHACHA20
+	help
+	  Say y to enable the cipher functionality of the SPAcc.
+
+config CRYPTO_DEV_SPACC_HASH
+	bool "Enable HASH functionality"
+	depends on CRYPTO_DEV_SPACC
+	default y
+	select CRYPTO_HASH
+	select CRYPTO_SHA1
+	select CRYPTO_MD5
+	select CRYPTO_SHA256
+	select CRYPTO_SHA512
+	select CRYPTO_HMAC
+	select CRYPTO_SM3
+	select CRYPTO_CMAC
+	select CRYPTO_MICHAEL_MIC
+	select CRYPTO_XCBC
+	select CRYPTO_AES
+	select CRYPTO_SM4_GENERIC
+	help
+	  Say y to enable the hash functionality of the SPAcc.
+
+config CRYPTO_DEV_SPACC_AEAD
+	bool "Enable AEAD functionality"
+	depends on CRYPTO_DEV_SPACC
+	default y
+	select CRYPTO_AEAD
+	select CRYPTO_AUTHENC
+	select CRYPTO_AES
+	select CRYPTO_SM4_GENERIC
+	select CRYPTO_CHACHAPOLY1305
+	select CRYPTO_GCM
+	select CRYPTO_CCM
+	help
+	  Say y to enable the AEAD functionality of the SPAcc.
+
+config CRYPTO_DEV_SPACC_AUTODETECT
+	bool "Enable Autodetect functionality"
+	depends on CRYPTO_DEV_SPACC
+	default y
+	help
+	  Say y to enable the autodetect functionality of the SPAcc.
+
+config CRYPTO_DEV_SPACC_DEBUG_TRACE_IO
+	bool "Enable tracing of MMIO read/write stats"
+	depends on CRYPTO_DEV_SPACC
+	default n
+	help
+	  Say y to enable MMIO read/write statistics, used to debug and
+	  trace IO register read/write operations.
+
+config CRYPTO_DEV_SPACC_DEBUG_TRACE_DDT
+	bool "Enable tracing of DDT entry stats"
+	default n
+	depends on CRYPTO_DEV_SPACC
+	help
+	  Say y to enable DDT entry statistics, used to debug and trace
+	  DDT operations.
+
+config CRYPTO_DEV_SPACC_SECURE_MODE
+	bool "Enable SPAcc secure mode stats"
+	default n
+	depends on CRYPTO_DEV_SPACC
+	help
+	  Say y to enable SPAcc secure mode statistics.
+
+config CRYPTO_DEV_SPACC_PRIORITY
+	int "VSPACC priority value"
+	depends on CRYPTO_DEV_SPACC
+	range 0 15
+	default 1
+	help
+	  Default arbitration priority weight for this virtual SPAcc
+	  instance. Hardware resets this to 1; higher values mean
+	  higher priority.
diff --git a/drivers/crypto/dwc-spacc/Makefile b/drivers/crypto/dwc-spacc/Makefile
new file mode 100644
index 000000000000..bf46c8e13a31
--- /dev/null
+++ b/drivers/crypto/dwc-spacc/Makefile
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: GPL-2.0-only
+obj-$(CONFIG_CRYPTO_DEV_SPACC) += snps-spacc.o
+snps-spacc-objs = spacc_hal.o spacc_core.o \
+		  spacc_manager.o spacc_interrupt.o spacc_device.o
+
+ifeq ($(CONFIG_CRYPTO_DEV_SPACC_HASH),y)
+snps-spacc-objs += spacc_ahash.o
+endif
+
+ifeq ($(CONFIG_CRYPTO_DEV_SPACC_CIPHER),y)
+snps-spacc-objs += spacc_skcipher.o
+endif
+
+ifeq ($(CONFIG_CRYPTO_DEV_SPACC_AEAD),y)
+snps-spacc-objs += spacc_aead.o
+endif
-- 
2.25.1
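[Editor's note: the three ifeq blocks above can also be written with Kbuild's conditional-object idiom, where the config symbol is interpolated into the composite-object variable name. A sketch of an equivalent Makefile under that convention (same object names as the patch; not part of the submitted series):]

```make
# SPDX-License-Identifier: GPL-2.0-only
obj-$(CONFIG_CRYPTO_DEV_SPACC) += snps-spacc.o
snps-spacc-objs := spacc_hal.o spacc_core.o \
		   spacc_manager.o spacc_interrupt.o spacc_device.o

# For a bool symbol, $(CONFIG_...) expands to "y" when enabled and to the
# empty string otherwise, so a disabled option appends to the unused
# variable "snps-spacc-" and the object is simply never linked in.
snps-spacc-$(CONFIG_CRYPTO_DEV_SPACC_HASH)   += spacc_ahash.o
snps-spacc-$(CONFIG_CRYPTO_DEV_SPACC_CIPHER) += spacc_skcipher.o
snps-spacc-$(CONFIG_CRYPTO_DEV_SPACC_AEAD)   += spacc_aead.o
```

This matches the pattern used elsewhere in drivers/crypto and avoids the explicit ifeq/endif boilerplate.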