From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Damian Muszynski,
 Giovanni Cabiddu, Herbert Xu, Sasha Levin
Subject: [PATCH 5.19 520/717] crypto: qat - fix DMA transfer direction
Date: Sat, 22 Oct 2022 09:26:39 +0200
Message-Id: <20221022072521.301947506@linuxfoundation.org>
In-Reply-To: <20221022072415.034382448@linuxfoundation.org>
References: <20221022072415.034382448@linuxfoundation.org>

From: Damian Muszynski

[ Upstream commit cf5bb835b7c8a5fee7f26455099cca7feb57f5e9 ]

When CONFIG_DMA_API_DEBUG is selected, running the crypto self test on
the QAT crypto algorithms causes the function add_dma_entry() to report
a warning similar to the one below, saying that overlapping mappings
are not supported. This occurs in tests where the input and the output
scatter lists point to the same buffers (i.e. two different scatter
lists which point to the same chunks of memory).

The logic that implements the mapping uses the flag DMA_BIDIRECTIONAL
for both the input and the output scatter lists, which leads to
overlapping write mappings. These are not supported by the DMA layer.

Fix by specifying the correct DMA transfer directions when mapping
buffers. For in-place operations, where the input scatter list matches
the output scatter list, buffers are mapped once with
DMA_BIDIRECTIONAL; otherwise, input buffers are mapped with
DMA_TO_DEVICE and output buffers are mapped with DMA_FROM_DEVICE.
Overlapping a read mapping with a write mapping is a valid case for
DMA-coherent devices like QAT. The function that frees and unmaps the
buffers, qat_alg_free_bufl(), has been updated to match the changes to
the mapping function. (A simplified sketch of this direction-selection
rule follows the patch below.)
  DMA-API: 4xxx 0000:06:00.0: cacheline tracking EEXIST, overlapping mappings aren't supported
  WARNING: CPU: 53 PID: 4362 at kernel/dma/debug.c:570 add_dma_entry+0x1e9/0x270
  ...
  Call Trace:
   dma_map_page_attrs+0x82/0x2d0
   ? preempt_count_add+0x6a/0xa0
   qat_alg_sgl_to_bufl+0x45b/0x990 [intel_qat]
   qat_alg_aead_dec+0x71/0x250 [intel_qat]
   crypto_aead_decrypt+0x3d/0x70
   test_aead_vec_cfg+0x649/0x810
   ? number+0x310/0x3a0
   ? vsnprintf+0x2a3/0x550
   ? scnprintf+0x42/0x70
   ? valid_sg_divisions.constprop.0+0x86/0xa0
   ? test_aead_vec+0xdf/0x120
   test_aead_vec+0xdf/0x120
   alg_test_aead+0x185/0x400
   alg_test+0x3d8/0x500
   ? crypto_acomp_scomp_free_ctx+0x30/0x30
   ? __schedule+0x32a/0x12a0
   ? ttwu_queue_wakelist+0xbf/0x110
   ? _raw_spin_unlock_irqrestore+0x23/0x40
   ? try_to_wake_up+0x83/0x570
   ? _raw_spin_unlock_irqrestore+0x23/0x40
   ? __set_cpus_allowed_ptr_locked+0xea/0x1b0
   ? crypto_acomp_scomp_free_ctx+0x30/0x30
   cryptomgr_test+0x27/0x50
   kthread+0xe6/0x110
   ? kthread_complete_and_exit+0x20/0x20
   ret_from_fork+0x1f/0x30

Fixes: d370cec ("crypto: qat - Intel(R) QAT crypto interface")
Link: https://lore.kernel.org/linux-crypto/20220223080400.139367-1-gilad@benyossef.com/
Signed-off-by: Damian Muszynski
Signed-off-by: Giovanni Cabiddu
Signed-off-by: Herbert Xu
Signed-off-by: Sasha Levin
---
 drivers/crypto/qat/qat_common/qat_algs.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
index 148edbe379e3..0828d856d6b0 100644
--- a/drivers/crypto/qat/qat_common/qat_algs.c
+++ b/drivers/crypto/qat/qat_common/qat_algs.c
@@ -673,11 +673,14 @@ static void qat_alg_free_bufl(struct qat_crypto_instance *inst,
 	dma_addr_t blpout = qat_req->buf.bloutp;
 	size_t sz = qat_req->buf.sz;
 	size_t sz_out = qat_req->buf.sz_out;
+	int bl_dma_dir;
 	int i;
 
+	bl_dma_dir = blp != blpout ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL;
+
 	for (i = 0; i < bl->num_bufs; i++)
 		dma_unmap_single(dev, bl->bufers[i].addr,
-				 bl->bufers[i].len, DMA_BIDIRECTIONAL);
+				 bl->bufers[i].len, bl_dma_dir);
 
 	dma_unmap_single(dev, blp, sz, DMA_TO_DEVICE);
 
@@ -691,7 +694,7 @@ static void qat_alg_free_bufl(struct qat_crypto_instance *inst,
 		for (i = bufless; i < blout->num_bufs; i++) {
 			dma_unmap_single(dev, blout->bufers[i].addr,
 					 blout->bufers[i].len,
-					 DMA_BIDIRECTIONAL);
+					 DMA_FROM_DEVICE);
 		}
 		dma_unmap_single(dev, blpout, sz_out, DMA_TO_DEVICE);
 
@@ -716,6 +719,7 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
 	struct scatterlist *sg;
 	size_t sz_out, sz = struct_size(bufl, bufers, n);
 	int node = dev_to_node(&GET_DEV(inst->accel_dev));
+	int bufl_dma_dir;
 
 	if (unlikely(!n))
 		return -EINVAL;
@@ -733,6 +737,8 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
 		qat_req->buf.sgl_src_valid = true;
 	}
 
+	bufl_dma_dir = sgl != sglout ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL;
+
 	for_each_sg(sgl, sg, n, i)
 		bufl->bufers[i].addr = DMA_MAPPING_ERROR;
 
@@ -744,7 +750,7 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
 
 		bufl->bufers[y].addr = dma_map_single(dev, sg_virt(sg),
 						      sg->length,
-						      DMA_BIDIRECTIONAL);
+						      bufl_dma_dir);
 		bufl->bufers[y].len = sg->length;
 		if (unlikely(dma_mapping_error(dev, bufl->bufers[y].addr)))
 			goto err_in;
@@ -787,7 +793,7 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
 
 			bufers[y].addr = dma_map_single(dev, sg_virt(sg),
 							sg->length,
-							DMA_BIDIRECTIONAL);
+							DMA_FROM_DEVICE);
 			if (unlikely(dma_mapping_error(dev, bufers[y].addr)))
 				goto err_out;
 			bufers[y].len = sg->length;
@@ -817,7 +823,7 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
 		if (!dma_mapping_error(dev, buflout->bufers[i].addr))
 			dma_unmap_single(dev, buflout->bufers[i].addr,
 					 buflout->bufers[i].len,
-					 DMA_BIDIRECTIONAL);
+					 DMA_FROM_DEVICE);
 
 	if (!qat_req->buf.sgl_dst_valid)
 		kfree(buflout);
@@ -831,7 +837,7 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
 		if (!dma_mapping_error(dev, bufl->bufers[i].addr))
 			dma_unmap_single(dev, bufl->bufers[i].addr,
 					 bufl->bufers[i].len,
-					 DMA_BIDIRECTIONAL);
+					 bufl_dma_dir);
 
 	if (!qat_req->buf.sgl_src_valid)
 		kfree(bufl);
-- 
2.35.1
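
For reference, the rule the patch applies can be reduced to the short
sketch below. This is simplified, illustrative code, not the driver
source: the helper names (src_dir, map_one, unmap_one) and their
parameters are invented for the example; only the DMA-API calls
themselves are real kernel interfaces.

	#include <linux/dma-mapping.h>
	#include <linux/scatterlist.h>

	/*
	 * Pick the direction for mapping a *source* buffer. An in-place
	 * request (source and destination scatterlists are the same) needs
	 * a single mapping that the device both reads and writes; an
	 * out-of-place source is only read by the device.
	 */
	static enum dma_data_direction src_dir(struct scatterlist *sgl,
					       struct scatterlist *sglout)
	{
		return sgl != sglout ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL;
	}

	static int map_one(struct device *dev, struct scatterlist *sg,
			   struct scatterlist *sgl, struct scatterlist *sglout,
			   dma_addr_t *addr)
	{
		*addr = dma_map_single(dev, sg_virt(sg), sg->length,
				       src_dir(sgl, sglout));
		return dma_mapping_error(dev, *addr) ? -ENOMEM : 0;
	}

	static void unmap_one(struct device *dev, dma_addr_t addr,
			      unsigned int len, struct scatterlist *sgl,
			      struct scatterlist *sglout)
	{
		/* The unmap must use the same direction as the map. */
		dma_unmap_single(dev, addr, len, src_dir(sgl, sglout));
	}

Destination buffers of out-of-place requests are mapped and unmapped
with DMA_FROM_DEVICE in the same fashion, which is why
qat_alg_free_bufl() has to make the same in-place/out-of-place
distinction as the mapping path.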