From nobody Fri Dec 19 13:05:05 2025
From: Daniel Vacek
To: Jason Gunthorpe, Leon Romanovsky
Cc: linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, Daniel Vacek, Yuya Fujita-bishamonten
Subject: [PATCH 1/2] IB/ipoib: Fix mcast list locking
Date: Mon, 11 Dec 2023 14:04:24 +0100
Message-ID: <20231211130426.1500427-2-neelx@redhat.com>
In-Reply-To: <20231211130426.1500427-1-neelx@redhat.com>
References: <20231211130426.1500427-1-neelx@redhat.com>

We need additional protection against list removal between
ipoib_mcast_join_task() and ipoib_mcast_dev_flush(), because
&priv->lock needs to be dropped while iterating &priv->multicast_list
in ipoib_mcast_join_task(). If the mcast is removed while the lock is
dropped, the for loop spins forever, resulting in a hard lockup (as was
reported on the RHEL 4.18.0-372.75.1.el8_6 kernel):

Task A (kworker/u72:2 below)       | Task B (kworker/u72:0 below)
-----------------------------------+-----------------------------------
ipoib_mcast_join_task(work)        | ipoib_ib_dev_flush_light(work)
  spin_lock_irq(&priv->lock)       | __ipoib_ib_dev_flush(priv, ...)
  list_for_each_entry(mcast,       |   ipoib_mcast_dev_flush(dev = priv->dev)
      &priv->multicast_list, list) |     mutex_lock(&priv->mcast_mutex)
    ipoib_mcast_join(dev, mcast)   |
      spin_unlock_irq(&priv->lock) |
                                   |     spin_lock_irqsave(&priv->lock, flags)
                                   |     list_for_each_entry_safe(mcast, tmcast,
                                   |             &priv->multicast_list, list)
                                   |       list_del(&mcast->list);
                                   |       list_add_tail(&mcast->list, &remove_list)
                                   |     spin_unlock_irqrestore(&priv->lock, flags)
      spin_lock_irq(&priv->lock)   |
                                   |     ipoib_mcast_remove_list(&remove_list)
(Here, mcast is no longer on the   |       list_for_each_entry_safe(mcast, tmcast,
 &priv->multicast_list and we keep |               remove_list, list)
 spinning on the &remove_list of   |   >>>   wait_for_completion(&mcast->done)
 the other thread, which is blocked|
 and whose list is still valid on  |
 its stack.)                       |     mutex_unlock(&priv->mcast_mutex)

Fix this by taking mutex_lock(&priv->mcast_mutex) in
ipoib_mcast_join_task(). Unfortunately we could not reproduce the
lockup to confirm this fix, but based on code review I believe it
addresses such lockups.
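
For illustration, the locking order after this change is roughly the
following (a simplified sketch of the join task, not the exact driver
code):

    ipoib_mcast_join_task(work)
        mutex_lock(&priv->mcast_mutex);   /* serializes with ipoib_mcast_dev_flush() */
        spin_lock_irq(&priv->lock);
        list_for_each_entry(mcast, &priv->multicast_list, list)
                ipoib_mcast_join(dev, mcast);  /* may drop and re-take priv->lock */
        spin_unlock_irq(&priv->lock);
        mutex_unlock(&priv->mcast_mutex);

With &priv->mcast_mutex held, ipoib_mcast_dev_flush() cannot empty
&priv->multicast_list while the join task has temporarily dropped
&priv->lock, so the iteration always terminates.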

crash> bc 31
PID: 747     TASK: ff1c6a1a007e8000  CPU: 31   COMMAND: "kworker/u72:2"
--
    [exception RIP: ipoib_mcast_join_task+0x1b1]
    RIP: ffffffffc0944ac1  RSP: ff646f199a8c7e00  RFLAGS: 00000002
    RAX: 0000000000000000  RBX: ff1c6a1a04dc82f8  RCX: 0000000000000000
                           work (&priv->mcast_task{,.work})
    RDX: ff1c6a192d60ac68  RSI: 0000000000000286  RDI: ff1c6a1a04dc8000
         &mcast->list
    RBP: ff646f199a8c7e90   R8: ff1c699980019420   R9: ff1c6a1920c9a000
    R10: ff646f199a8c7e00  R11: ff1c6a191a7d9800  R12: ff1c6a192d60ac00
                                                       mcast
    R13: ff1c6a1d82200000  R14: ff1c6a1a04dc8000  R15: ff1c6a1a04dc82d8
         dev                    priv (&priv->lock)     &priv->multicast_list (aka head)
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018

Reported-by: Yuya Fujita-bishamonten
---
---
 #5 [ff646f199a8c7e00] ipoib_mcast_join_task+0x1b1 at ffffffffc0944ac1 [ib_ipoib]
 #6 [ff646f199a8c7e98] process_one_work+0x1a7 at ffffffff9bf10967

crash> rx ff646f199a8c7e68
ff646f199a8c7e68:  ff1c6a1a04dc82f8   <<< work = &priv->mcast_task.work

crash> list -hO ipoib_dev_priv.multicast_list ff1c6a1a04dc8000
(empty)

crash> ipoib_dev_priv.mcast_task.work.func,mcast_mutex.owner.counter ff1c6a1a04dc8000
  mcast_task.work.func = 0xffffffffc0944910,
  mcast_mutex.owner.counter = 0xff1c69998efec000

crash> b 8
PID: 8       TASK: ff1c69998efec000  CPU: 33   COMMAND: "kworker/u72:0"
--
 #3 [ff646f1980153d50] wait_for_completion+0x96 at ffffffff9c7d7646
 #4 [ff646f1980153d90] ipoib_mcast_remove_list+0x56 at ffffffffc0944dc6 [ib_ipoib]
 #5 [ff646f1980153de8] ipoib_mcast_dev_flush+0x1a7 at ffffffffc09455a7 [ib_ipoib]
 #6 [ff646f1980153e58] __ipoib_ib_dev_flush+0x1a4 at ffffffffc09431a4 [ib_ipoib]
 #7 [ff646f1980153e98] process_one_work+0x1a7 at ffffffff9bf10967

crash> rx ff646f1980153e68
ff646f1980153e68:  ff1c6a1a04dc83f0   <<< work = &priv->flush_light

crash> ipoib_dev_priv.flush_light.func,broadcast ff1c6a1a04dc8000
  flush_light.func = 0xffffffffc0943820,
  broadcast = 0x0,

The mcast(s) on the &remove_list (the remaining part of the ex &priv->multicast_list):

crash> list -s ipoib_mcast.done.done ipoib_mcast.list -H ff646f1980153e10 | paste - -
ff1c6a192bd0c200  done.done = 0x0,
ff1c6a192d60ac00  done.done = 0x0,

Reported-by: Yuya Fujita-bishamonten
Signed-off-by: Daniel Vacek
---
 drivers/infiniband/ulp/ipoib/ipoib_multicast.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
index 5b3154503bf4..8e4f2c8839be 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
@@ -580,6 +580,7 @@ void ipoib_mcast_join_task(struct work_struct *work)
 	}
 	netif_addr_unlock_bh(dev);
 
+	mutex_lock(&priv->mcast_mutex);
 	spin_lock_irq(&priv->lock);
 	if (!test_bit(IPOIB_FLAG_OPER_UP, &priv->flags))
 		goto out;
@@ -634,6 +635,7 @@ void ipoib_mcast_join_task(struct work_struct *work)
 			/* Found the next unjoined group */
 			if (ipoib_mcast_join(dev, mcast)) {
 				spin_unlock_irq(&priv->lock);
+				mutex_unlock(&priv->mcast_mutex);
 				return;
 			}
 		} else if (!delay_until ||
@@ -655,6 +657,7 @@ void ipoib_mcast_join_task(struct work_struct *work)
 		ipoib_mcast_join(dev, mcast);
 
 	spin_unlock_irq(&priv->lock);
+	mutex_unlock(&priv->mcast_mutex);
 }
 
 void ipoib_mcast_start_thread(struct net_device *dev)
-- 
2.43.0

From nobody Fri Dec 19 13:05:05 2025
From: Daniel Vacek
To: Jason Gunthorpe, Leon Romanovsky
Cc: linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, Daniel Vacek
Subject: [PATCH 2/2] IB/ipoib: Clean up redundant netif_addr_lock
Date: Mon, 11 Dec 2023 14:04:25 +0100
Message-ID: <20231211130426.1500427-3-neelx@redhat.com>
In-Reply-To: <20231211130426.1500427-1-neelx@redhat.com>
References: <20231211130426.1500427-1-neelx@redhat.com>

A single memory load does not need to be protected by any lock. The
same priv->flags field is already fetched about 15 lines earlier
without any locking.
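
After this cleanup the check reduces to a plain atomic bit test (shown
here for reference only; the full diff follows below):

	priv->local_lid = port_attr.lid;

	/* test_bit() is a single atomic read of priv->flags; no
	 * netif_addr_lock_bh() is needed around it. */
	if (!test_bit(IPOIB_FLAG_DEV_ADDR_SET, &priv->flags))
		return;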

Signed-off-by: Daniel Vacek
Reported-by: Yuya Fujita
---
 drivers/infiniband/ulp/ipoib/ipoib_multicast.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
index 8e4f2c8839be..f54e0d212630 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
@@ -572,13 +572,9 @@ void ipoib_mcast_join_task(struct work_struct *work)
 		return;
 	}
 	priv->local_lid = port_attr.lid;
-	netif_addr_lock_bh(dev);
 
-	if (!test_bit(IPOIB_FLAG_DEV_ADDR_SET, &priv->flags)) {
-		netif_addr_unlock_bh(dev);
+	if (!test_bit(IPOIB_FLAG_DEV_ADDR_SET, &priv->flags))
 		return;
-	}
-	netif_addr_unlock_bh(dev);
 
 	mutex_lock(&priv->mcast_mutex);
 	spin_lock_irq(&priv->lock);
-- 
2.43.0

From nobody Fri Dec 19 13:05:05 2025
From: Daniel Vacek
To: Jason Gunthorpe, Leon Romanovsky
Cc: linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, Daniel Vacek, Yuya Fujita-bishamonten
Subject: [PATCH v2] IB/ipoib: Fix mcast list locking
Date: Tue, 12 Dec 2023 09:07:45 +0100
Message-ID: <20231212080746.1528802-1-neelx@redhat.com>
In-Reply-To: <20231211130426.1500427-1-neelx@redhat.com>
References: <20231211130426.1500427-1-neelx@redhat.com>

Releasing the `priv->lock` while iterating the `priv->multicast_list`
in `ipoib_mcast_join_task()` opens a window for `ipoib_mcast_dev_flush()`
to remove items in the middle of the iteration. If the mcast is removed
while the lock is dropped, the for loop spins forever, resulting in a
hard lockup (as was reported on the RHEL 4.18.0-372.75.1.el8_6 kernel):

Task A (kworker/u72:2 below)       | Task B (kworker/u72:0 below)
-----------------------------------+-----------------------------------
ipoib_mcast_join_task(work)        | ipoib_ib_dev_flush_light(work)
  spin_lock_irq(&priv->lock)       | __ipoib_ib_dev_flush(priv, ...)
  list_for_each_entry(mcast,       |   ipoib_mcast_dev_flush(dev = priv->dev)
      &priv->multicast_list, list) |
    ipoib_mcast_join(dev, mcast)   |
      spin_unlock_irq(&priv->lock) |
                                   |     spin_lock_irqsave(&priv->lock, flags)
                                   |     list_for_each_entry_safe(mcast, tmcast,
                                   |             &priv->multicast_list, list)
                                   |       list_del(&mcast->list);
                                   |       list_add_tail(&mcast->list, &remove_list)
                                   |     spin_unlock_irqrestore(&priv->lock, flags)
      spin_lock_irq(&priv->lock)   |
                                   |     ipoib_mcast_remove_list(&remove_list)
(Here, `mcast` is no longer on the |       list_for_each_entry_safe(mcast, tmcast,
 `priv->multicast_list` and we keep|               remove_list, list)
 spinning on the `remove_list` of  |   >>>   wait_for_completion(&mcast->done)
 the other thread, which is blocked|
 and whose list is still valid on  |
 its stack.)                       |

Fix this by keeping the lock held during the whole iteration and
switching the SA join allocation to GFP_ATOMIC so that nothing can
sleep while the lock is held. Unfortunately we could not reproduce the
lockup to confirm this fix, but based on code review I believe it
addresses such lockups.
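
For illustration, the resulting flow is roughly (a simplified sketch,
not the exact driver code):

    ipoib_mcast_join_task(work)
        spin_lock_irq(&priv->lock);
        list_for_each_entry(mcast, &priv->multicast_list, list)
                ipoib_mcast_join(dev, mcast);
                    /* priv->lock is no longer dropped here, hence the
                     * SA join allocation must not sleep */
                    ib_sa_join_multicast(..., GFP_ATOMIC, ...);
        spin_unlock_irq(&priv->lock);

Since `priv->lock` is never released during the list walk,
`ipoib_mcast_dev_flush()` can only run before or after the whole
iteration, so the loop can no longer end up spinning on the other
thread's `remove_list`.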

crash> bc 31
PID: 747     TASK: ff1c6a1a007e8000  CPU: 31   COMMAND: "kworker/u72:2"
--
    [exception RIP: ipoib_mcast_join_task+0x1b1]
    RIP: ffffffffc0944ac1  RSP: ff646f199a8c7e00  RFLAGS: 00000002
    RAX: 0000000000000000  RBX: ff1c6a1a04dc82f8  RCX: 0000000000000000
                           work (&priv->mcast_task{,.work})
    RDX: ff1c6a192d60ac68  RSI: 0000000000000286  RDI: ff1c6a1a04dc8000
         &mcast->list
    RBP: ff646f199a8c7e90   R8: ff1c699980019420   R9: ff1c6a1920c9a000
    R10: ff646f199a8c7e00  R11: ff1c6a191a7d9800  R12: ff1c6a192d60ac00
                                                       mcast
    R13: ff1c6a1d82200000  R14: ff1c6a1a04dc8000  R15: ff1c6a1a04dc82d8
         dev                    priv (&priv->lock)     &priv->multicast_list (aka head)
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018

Reported-by: Yuya Fujita-bishamonten
---
---
 #5 [ff646f199a8c7e00] ipoib_mcast_join_task+0x1b1 at ffffffffc0944ac1 [ib_ipoib]
 #6 [ff646f199a8c7e98] process_one_work+0x1a7 at ffffffff9bf10967

crash> rx ff646f199a8c7e68
ff646f199a8c7e68:  ff1c6a1a04dc82f8   <<< work = &priv->mcast_task.work

crash> list -hO ipoib_dev_priv.multicast_list ff1c6a1a04dc8000
(empty)

crash> ipoib_dev_priv.mcast_task.work.func,mcast_mutex.owner.counter ff1c6a1a04dc8000
  mcast_task.work.func = 0xffffffffc0944910,
  mcast_mutex.owner.counter = 0xff1c69998efec000

crash> b 8
PID: 8       TASK: ff1c69998efec000  CPU: 33   COMMAND: "kworker/u72:0"
--
 #3 [ff646f1980153d50] wait_for_completion+0x96 at ffffffff9c7d7646
 #4 [ff646f1980153d90] ipoib_mcast_remove_list+0x56 at ffffffffc0944dc6 [ib_ipoib]
 #5 [ff646f1980153de8] ipoib_mcast_dev_flush+0x1a7 at ffffffffc09455a7 [ib_ipoib]
 #6 [ff646f1980153e58] __ipoib_ib_dev_flush+0x1a4 at ffffffffc09431a4 [ib_ipoib]
 #7 [ff646f1980153e98] process_one_work+0x1a7 at ffffffff9bf10967

crash> rx ff646f1980153e68
ff646f1980153e68:  ff1c6a1a04dc83f0   <<< work = &priv->flush_light

crash> ipoib_dev_priv.flush_light.func,broadcast ff1c6a1a04dc8000
  flush_light.func = 0xffffffffc0943820,
  broadcast = 0x0,

The mcast(s) on the `remove_list` (the remaining part of the ex `priv->multicast_list`):

crash> list -s ipoib_mcast.done.done ipoib_mcast.list -H ff646f1980153e10 | paste - -
ff1c6a192bd0c200  done.done = 0x0,
ff1c6a192d60ac00  done.done = 0x0,

Reported-by: Yuya Fujita-bishamonten
Signed-off-by: Daniel Vacek
---
 drivers/infiniband/ulp/ipoib/ipoib_multicast.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
index 5b3154503bf4..bca80fe07584 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
@@ -531,21 +531,17 @@ static int ipoib_mcast_join(struct net_device *dev, struct ipoib_mcast *mcast)
 		if (test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags))
 			rec.join_state = SENDONLY_FULLMEMBER_JOIN;
 	}
-	spin_unlock_irq(&priv->lock);
 
 	multicast = ib_sa_join_multicast(&ipoib_sa_client, priv->ca, priv->port,
-					 &rec, comp_mask, GFP_KERNEL,
+					 &rec, comp_mask, GFP_ATOMIC,
 					 ipoib_mcast_join_complete, mcast);
-	spin_lock_irq(&priv->lock);
 	if (IS_ERR(multicast)) {
 		ret = PTR_ERR(multicast);
 		ipoib_warn(priv, "ib_sa_join_multicast failed, status %d\n", ret);
 		/* Requeue this join task with a backoff delay */
 		__ipoib_mcast_schedule_join_thread(priv, mcast, 1);
 		clear_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags);
-		spin_unlock_irq(&priv->lock);
 		complete(&mcast->done);
-		spin_lock_irq(&priv->lock);
 	}
 	return 0;
 }
-- 
2.43.0