From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Olga Kornievskaia, Bernard Metzler, Leon Romanovsky, Sasha Levin
Subject: [PATCH 5.15 310/530] RDMA/siw: Always consume all skbuf data in sk_data_ready() upcall.
Date: Mon, 24 Oct 2022 13:30:54 +0200
Message-Id: <20221024113059.083190389@linuxfoundation.org>
In-Reply-To: <20221024113044.976326639@linuxfoundation.org>
References: <20221024113044.976326639@linuxfoundation.org>

From: Bernard Metzler

[ Upstream commit 754209850df8367c954ac1de7671c7430b1f342c ]

For header and trailer/padding processing, siw did not consume new skb
data until the minimum amount needed to fill the current header or
trailer structure, including potential payload padding, was present.
Not consuming any data during an upcall may cause a receive stall,
since tcp_read_sock() does not issue another upcall if no new data
arrive.

An NFSoRDMA client got stuck at RDMA Write reception of unaligned
payload when the current skb contained only the expected 3 padding
bytes, but not the 4-byte CRC trailer. The 4 missing bytes had already
arrived in another skb; not consuming those 3 padding bytes in the
current upcall left the Write incomplete, waiting for the CRC forever.
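To illustrate the pattern of the fix (a minimal user-space sketch, not the
driver code itself; the names "stream" and "consume_part" are made up for
this example): the old logic returned -EAGAIN without touching the skb when
fewer bytes than the full part were available, while the fixed logic always
copies and accounts for whatever is available and only then reports -EAGAIN
for the remainder.

#include <stdio.h>
#include <string.h>

#define EAGAIN 11

/* Hypothetical, simplified receive-stream state (not struct siw_rx_stream). */
struct stream {
	unsigned char part[8];	/* buffer for the header/trailer part being assembled */
	int part_rem;		/* bytes still missing for the current part */
	int part_rcvd;		/* bytes of the current part already received */
};

/*
 * Sketch of the fixed behaviour: consume whatever is available from the
 * current buffer, update the bookkeeping, and only then signal -EAGAIN if
 * the part is still incomplete.  The old code returned -EAGAIN up front
 * when avail < part_rem, consuming nothing, which could stall reception.
 */
static int consume_part(struct stream *s, const unsigned char *data, int avail)
{
	int take = avail < s->part_rem ? avail : s->part_rem;

	memcpy(s->part + s->part_rcvd, data, take);
	s->part_rcvd += take;
	s->part_rem  -= take;

	return s->part_rem ? -EAGAIN : 0;
}

int main(void)
{
	struct stream s = { .part_rem = 7, .part_rcvd = 0 };
	const unsigned char first[]  = { 1, 2, 3 };		/* e.g. only the padding arrived */
	const unsigned char second[] = { 4, 5, 6, 7 };		/* e.g. the CRC arrives later */

	/* First upcall: 3 of 7 bytes present -> consumed, but still -EAGAIN. */
	printf("first:  %d (part_rem=%d)\n", consume_part(&s, first, 3), s.part_rem);
	/* Second upcall: the remaining 4 bytes complete the part. */
	printf("second: %d (part_rem=%d)\n", consume_part(&s, second, 4), s.part_rem);
	return 0;
}

This mirrors how siw_get_trailer() below now subtracts the available byte
count from its counters before deciding whether to return -EAGAIN.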
Fixes: 8b6a361b8c48 ("rdma/siw: receive path")
Reported-by: Olga Kornievskaia
Tested-by: Olga Kornievskaia
Signed-off-by: Bernard Metzler
Link: https://lore.kernel.org/r/20220920081202.223629-1-bmt@zurich.ibm.com
Signed-off-by: Leon Romanovsky
Signed-off-by: Sasha Levin
---
 drivers/infiniband/sw/siw/siw_qp_rx.c | 27 +++++++++++++++------------
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/drivers/infiniband/sw/siw/siw_qp_rx.c b/drivers/infiniband/sw/siw/siw_qp_rx.c
index 875ea6f1b04a..fd721cc19682 100644
--- a/drivers/infiniband/sw/siw/siw_qp_rx.c
+++ b/drivers/infiniband/sw/siw/siw_qp_rx.c
@@ -961,27 +961,28 @@ int siw_proc_terminate(struct siw_qp *qp)
 static int siw_get_trailer(struct siw_qp *qp, struct siw_rx_stream *srx)
 {
 	struct sk_buff *skb = srx->skb;
+	int avail = min(srx->skb_new, srx->fpdu_part_rem);
 	u8 *tbuf = (u8 *)&srx->trailer.crc - srx->pad;
 	__wsum crc_in, crc_own = 0;
 
 	siw_dbg_qp(qp, "expected %d, available %d, pad %u\n",
 		   srx->fpdu_part_rem, srx->skb_new, srx->pad);
 
-	if (srx->skb_new < srx->fpdu_part_rem)
-		return -EAGAIN;
-
-	skb_copy_bits(skb, srx->skb_offset, tbuf, srx->fpdu_part_rem);
+	skb_copy_bits(skb, srx->skb_offset, tbuf, avail);
 
-	if (srx->mpa_crc_hd && srx->pad)
-		crypto_shash_update(srx->mpa_crc_hd, tbuf, srx->pad);
+	srx->skb_new -= avail;
+	srx->skb_offset += avail;
+	srx->skb_copied += avail;
+	srx->fpdu_part_rem -= avail;
 
-	srx->skb_new -= srx->fpdu_part_rem;
-	srx->skb_offset += srx->fpdu_part_rem;
-	srx->skb_copied += srx->fpdu_part_rem;
+	if (srx->fpdu_part_rem)
+		return -EAGAIN;
 
 	if (!srx->mpa_crc_hd)
 		return 0;
 
+	if (srx->pad)
+		crypto_shash_update(srx->mpa_crc_hd, tbuf, srx->pad);
 	/*
 	 * CRC32 is computed, transmitted and received directly in NBO,
 	 * so there's never a reason to convert byte order.
@@ -1083,10 +1084,9 @@ static int siw_get_hdr(struct siw_rx_stream *srx)
 	 * completely received.
 	 */
 	if (iwarp_pktinfo[opcode].hdr_len > sizeof(struct iwarp_ctrl_tagged)) {
-		bytes = iwarp_pktinfo[opcode].hdr_len - MIN_DDP_HDR;
+		int hdrlen = iwarp_pktinfo[opcode].hdr_len;
 
-		if (srx->skb_new < bytes)
-			return -EAGAIN;
+		bytes = min_t(int, hdrlen - MIN_DDP_HDR, srx->skb_new);
 
 		skb_copy_bits(skb, srx->skb_offset,
 			      (char *)c_hdr + srx->fpdu_part_rcvd, bytes);
@@ -1096,6 +1096,9 @@ static int siw_get_hdr(struct siw_rx_stream *srx)
 		srx->skb_new -= bytes;
 		srx->skb_offset += bytes;
 		srx->skb_copied += bytes;
+
+		if (srx->fpdu_part_rcvd < hdrlen)
+			return -EAGAIN;
 	}
 
 	/*
-- 
2.35.1