From: Suman Ghosh
To:
Cc: Suman Ghosh
Subject: [net-next PATCH] octeontx2-af: Fix multicast/mirror group lock/unlock issue
Date: Tue, 12 Dec 2023 14:45:58 +0530
Message-ID: <20231212091558.49579-1-sumang@marvell.com>
X-Mailer: git-send-email 2.25.1

In the existing implementation there is a race between finding a
multicast/mirror group entry and deleting that entry: the group lock was
taken and released independently inside rvu_nix_mcast_find_grp_elem(),
which is incorrect. The group lock must be held across the entire group
update/deletion operation. Fix this by removing the lock/unlock from
rvu_nix_mcast_find_grp_elem() and taking the group lock in its callers.
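With this change the lookup helper no longer touches the lock, and every
caller holds mcast_grp_lock across both the lookup and the use of the
returned element. As a simplified sketch of the resulting pattern
(mirroring the rvu_nix_mcast_get_mce_index() hunk below, not new code):

	mcast_grp = &nix_hw->mcast_grp;

	mutex_lock(&mcast_grp->mcast_grp_lock);
	elem = rvu_nix_mcast_find_grp_elem(mcast_grp, mcast_grp_idx);
	if (!elem)
		ret = NIX_AF_ERR_INVALID_MCAST_GRP;	/* group not found */
	else
		ret = elem->mce_start_index;		/* safe: lock still held */
	mutex_unlock(&mcast_grp->mcast_grp_lock);

	return ret;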
Fixes: 51b2804c19cd ("octeontx2-af: Add new mbox to support multicast/mirror offload")
Signed-off-by: Suman Ghosh
---
Note: This is a follow up of
https://git.kernel.org/netdev/net-next/c/51b2804c19cd

 .../ethernet/marvell/octeontx2/af/rvu_nix.c   | 58 +++++++++++++------
 1 file changed, 40 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index b01503acd520..0ab5626380c5 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -6142,14 +6142,12 @@ static struct nix_mcast_grp_elem *rvu_nix_mcast_find_grp_elem(struct nix_mcast_g
 	struct nix_mcast_grp_elem *iter;
 	bool is_found = false;
 
-	mutex_lock(&mcast_grp->mcast_grp_lock);
 	list_for_each_entry(iter, &mcast_grp->mcast_grp_head, list) {
 		if (iter->mcast_grp_idx == mcast_grp_idx) {
 			is_found = true;
 			break;
 		}
 	}
-	mutex_unlock(&mcast_grp->mcast_grp_lock);
 
 	if (is_found)
 		return iter;
@@ -6162,7 +6160,7 @@ int rvu_nix_mcast_get_mce_index(struct rvu *rvu, u16 pcifunc, u32 mcast_grp_idx)
 	struct nix_mcast_grp_elem *elem;
 	struct nix_mcast_grp *mcast_grp;
 	struct nix_hw *nix_hw;
-	int blkaddr;
+	int blkaddr, ret;
 
 	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
 	nix_hw = get_nix_hw(rvu->hw, blkaddr);
@@ -6170,11 +6168,15 @@ int rvu_nix_mcast_get_mce_index(struct rvu *rvu, u16 pcifunc, u32 mcast_grp_idx)
 		return NIX_AF_ERR_INVALID_NIXBLK;
 
 	mcast_grp = &nix_hw->mcast_grp;
+	mutex_lock(&mcast_grp->mcast_grp_lock);
 	elem = rvu_nix_mcast_find_grp_elem(mcast_grp, mcast_grp_idx);
 	if (!elem)
-		return NIX_AF_ERR_INVALID_MCAST_GRP;
+		ret = NIX_AF_ERR_INVALID_MCAST_GRP;
+	else
+		ret = elem->mce_start_index;
 
-	return elem->mce_start_index;
+	mutex_unlock(&mcast_grp->mcast_grp_lock);
+	return ret;
 }
 
 void rvu_nix_mcast_flr_free_entries(struct rvu *rvu, u16 pcifunc)
@@ -6238,7 +6240,7 @@ int rvu_nix_mcast_update_mcam_entry(struct rvu *rvu, u16 pcifunc,
 	struct nix_mcast_grp_elem *elem;
 	struct nix_mcast_grp *mcast_grp;
 	struct nix_hw *nix_hw;
-	int blkaddr;
+	int blkaddr, ret = 0;
 
 	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
 	nix_hw = get_nix_hw(rvu->hw, blkaddr);
@@ -6246,13 +6248,15 @@ int rvu_nix_mcast_update_mcam_entry(struct rvu *rvu, u16 pcifunc,
 		return NIX_AF_ERR_INVALID_NIXBLK;
 
 	mcast_grp = &nix_hw->mcast_grp;
+	mutex_lock(&mcast_grp->mcast_grp_lock);
 	elem = rvu_nix_mcast_find_grp_elem(mcast_grp, mcast_grp_idx);
 	if (!elem)
-		return NIX_AF_ERR_INVALID_MCAST_GRP;
-
-	elem->mcam_index = mcam_index;
+		ret = NIX_AF_ERR_INVALID_MCAST_GRP;
+	else
+		elem->mcam_index = mcam_index;
 
-	return 0;
+	mutex_unlock(&mcast_grp->mcast_grp_lock);
+	return ret;
 }
 
 int rvu_mbox_handler_nix_mcast_grp_create(struct rvu *rvu,
@@ -6306,6 +6310,13 @@ int rvu_mbox_handler_nix_mcast_grp_destroy(struct rvu *rvu,
 		return err;
 
 	mcast_grp = &nix_hw->mcast_grp;
+
+	/* If AF is requesting for the deletion,
+	 * then AF is already taking the lock
+	 */
+	if (!req->is_af)
+		mutex_lock(&mcast_grp->mcast_grp_lock);
+
 	elem = rvu_nix_mcast_find_grp_elem(mcast_grp, req->mcast_grp_idx);
 	if (!elem)
 		return NIX_AF_ERR_INVALID_MCAST_GRP;
@@ -6333,12 +6344,6 @@ int rvu_mbox_handler_nix_mcast_grp_destroy(struct rvu *rvu,
 	mutex_unlock(&mcast->mce_lock);
 
 delete_grp:
-	/* If AF is requesting for the deletion,
-	 * then AF is already taking the lock
-	 */
-	if (!req->is_af)
-		mutex_lock(&mcast_grp->mcast_grp_lock);
-
 	list_del(&elem->list);
 	kfree(elem);
 	mcast_grp->count--;
@@ -6370,9 +6375,20 @@ int rvu_mbox_handler_nix_mcast_grp_update(struct rvu *rvu,
 		return err;
 
 	mcast_grp = &nix_hw->mcast_grp;
+
+	/* If AF is requesting for the updation,
+	 * then AF is already taking the lock
+	 */
+	if (!req->is_af)
+		mutex_lock(&mcast_grp->mcast_grp_lock);
+
 	elem = rvu_nix_mcast_find_grp_elem(mcast_grp, req->mcast_grp_idx);
-	if (!elem)
+	if (!elem) {
+		if (!req->is_af)
+			mutex_unlock(&mcast_grp->mcast_grp_lock);
+
 		return NIX_AF_ERR_INVALID_MCAST_GRP;
+	}
 
 	/* If any pcifunc matches the group's pcifunc, then we can
 	 * delete the entire group.
@@ -6383,8 +6399,11 @@ int rvu_mbox_handler_nix_mcast_grp_update(struct rvu *rvu,
 				/* Delete group */
 				dreq.hdr.pcifunc = elem->pcifunc;
 				dreq.mcast_grp_idx = elem->mcast_grp_idx;
-				dreq.is_af = req->is_af;
+				dreq.is_af = 1;
 				rvu_mbox_handler_nix_mcast_grp_destroy(rvu, &dreq, NULL);
+				if (!req->is_af)
+					mutex_unlock(&mcast_grp->mcast_grp_lock);
+
 				return 0;
 			}
 		}
@@ -6467,5 +6486,8 @@ int rvu_mbox_handler_nix_mcast_grp_update(struct rvu *rvu,
 
 done:
 	mutex_unlock(&mcast->mce_lock);
+	if (!req->is_af)
+		mutex_unlock(&mcast_grp->mcast_grp_lock);
+
 	return ret;
 }
-- 
2.25.1