From nobody Sun Dec 14 19:20:10 2025
From: Geetha sowjanya <gakula@marvell.com>
To:
CC:
Subject: [net-next PATCH v7 05/10] octeontx2-af: Add packet path between representor and VF
Date: Fri, 28 Jun 2024 19:05:12 +0530
Message-ID: <20240628133517.8591-6-gakula@marvell.com>
In-Reply-To: <20240628133517.8591-1-gakula@marvell.com>
References: <20240628133517.8591-1-gakula@marvell.com>
MIME-Version: 1.0
The current HW does not support an in-built switch that forwards packets
between a representee and its representor. When a representor is placed
under a bridge and packets need to be sent to the representee, packets
from the representor are sent on a HW internal loopback channel, which
punts them back to the ingress packet parser. This patch installs MCAM
filters/rules that match these packets and forward them to the
representee. The installed rules implement a basic
representor <=> representee path, similar to a Tun/TAP between a VM and
the host.

Signed-off-by: Geetha sowjanya
Reviewed-by: Simon Horman
---
 .../net/ethernet/marvell/octeontx2/af/mbox.h  |   7 +
 .../net/ethernet/marvell/octeontx2/af/rvu.h   |   7 +-
 .../marvell/octeontx2/af/rvu_devlink.c        |   6 +
 .../ethernet/marvell/octeontx2/af/rvu_nix.c   |   7 +-
 .../ethernet/marvell/octeontx2/af/rvu_rep.c   | 247 ++++++++++++++++++
 .../marvell/octeontx2/af/rvu_switch.c         |  18 +-
 .../net/ethernet/marvell/octeontx2/nic/rep.c  |  18 ++
 7 files changed, 303 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index befb327e8aff..a7c32f1cc924 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -145,6 +145,7 @@ M(LMTST_TBL_SETUP, 0x00a, lmtst_tbl_setup, lmtst_tbl_setup_req, \
 M(SET_VF_PERM, 0x00b, set_vf_perm, set_vf_perm, msg_rsp) \
 M(PTP_GET_CAP, 0x00c, ptp_get_cap, msg_req, ptp_get_cap_rsp) \
 M(GET_REP_CNT, 0x00d, get_rep_cnt, msg_req, get_rep_cnt_rsp) \
+M(ESW_CFG, 0x00e, esw_cfg, esw_cfg_req, msg_rsp) \
 /* CGX mbox IDs (range 0x200 - 0x3FF) */ \
 M(CGX_START_RXTX, 0x200, cgx_start_rxtx, msg_req, msg_rsp) \
 M(CGX_STOP_RXTX, 0x201, cgx_stop_rxtx, msg_req, msg_rsp) \
@@ -1533,6 +1534,12 @@ struct get_rep_cnt_rsp {
 	u64 rsvd;
 };
 
+struct esw_cfg_req {
+	struct mbox_msghdr hdr;
+	u8 ena;
+	u64 rsvd;
+};
+
 struct flow_msg {
 	unsigned char dmac[6];
 	unsigned char smac[6];
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index cbdc7aeaccfc..f7f8b96a6208 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -597,6 +597,7 @@ struct rvu {
 	u16 rep_pcifunc;
 	int rep_cnt;
 	u16 *rep2pfvf_map;
+	u8 rep_mode;
 };
 
 static inline void rvu_write64(struct rvu *rvu, u64 block, u64 offset, u64 val)
@@ -1026,7 +1027,7 @@ int rvu_ndc_fix_locked_cacheline(struct rvu *rvu, int blkaddr);
 /* RVU Switch */
 void rvu_switch_enable(struct rvu *rvu);
 void rvu_switch_disable(struct rvu *rvu);
-void rvu_switch_update_rules(struct rvu *rvu, u16 pcifunc);
+void rvu_switch_update_rules(struct rvu *rvu, u16 pcifunc, bool ena);
 void rvu_switch_enable_lbk_link(struct rvu *rvu, u16 pcifunc, bool ena);
 
 int rvu_npc_set_parse_mode(struct rvu *rvu, u16 pcifunc, u64 mode, u8 dir,
@@ -1040,4 +1041,8 @@ int rvu_mcs_flr_handler(struct rvu *rvu, u16 pcifunc);
 void rvu_mcs_ptp_cfg(struct rvu *rvu, u8 rpm_id, u8 lmac_id, bool ena);
 void rvu_mcs_exit(struct rvu *rvu);
 
+/* Representor APIs */
+int rvu_rep_pf_init(struct rvu *rvu);
+int rvu_rep_install_mcam_rules(struct rvu *rvu);
+void rvu_rep_update_rules(struct rvu *rvu, u16 pcifunc, bool ena);
 #endif /* RVU_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c
index 7498ab429963..4d29c509ef6b 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c
@@ -1468,6 +1468,9 @@ static int rvu_devlink_eswitch_mode_get(struct devlink *devlink, u16 *mode)
 	struct rvu *rvu = rvu_dl->rvu;
 	struct rvu_switch *rswitch;
 
+	if (rvu->rep_mode)
+		return -EOPNOTSUPP;
+
 	rswitch = &rvu->rswitch;
 	*mode = rswitch->mode;
 
@@ -1481,6 +1484,9 @@ static int rvu_devlink_eswitch_mode_set(struct devlink *devlink, u16 mode,
 	struct rvu *rvu = rvu_dl->rvu;
 	struct rvu_switch *rswitch;
 
+	if (rvu->rep_mode)
+		return -EOPNOTSUPP;
+
 	rswitch = &rvu->rswitch;
 	switch (mode) {
 	case DEVLINK_ESWITCH_MODE_LEGACY:
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index 02d83c4958d9..d84b6214e714 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -2743,7 +2743,7 @@ void rvu_nix_tx_tl2_cfg(struct rvu *rvu, int blkaddr, u16 pcifunc,
 	int schq;
 	u64 cfg;
 
-	if (!is_pf_cgxmapped(rvu, pf))
+	if (!is_pf_cgxmapped(rvu, pf) && !is_rep_dev(rvu, pcifunc))
 		return;
 
 	cfg = enable ? (BIT_ULL(12) | RVU_SWITCH_LBK_CHAN) : 0;
@@ -4374,8 +4374,6 @@ int rvu_mbox_handler_nix_set_mac_addr(struct rvu *rvu,
 	if (test_bit(PF_SET_VF_TRUSTED, &pfvf->flags) && from_vf)
 		ether_addr_copy(pfvf->default_mac, req->mac_addr);
 
-	rvu_switch_update_rules(rvu, pcifunc);
-
 	return 0;
 }
 
@@ -5167,7 +5165,7 @@ int rvu_mbox_handler_nix_lf_start_rx(struct rvu *rvu, struct msg_req *req,
 	pfvf = rvu_get_pfvf(rvu, pcifunc);
 	set_bit(NIXLF_INITIALIZED, &pfvf->flags);
 
-	rvu_switch_update_rules(rvu, pcifunc);
+	rvu_switch_update_rules(rvu, pcifunc, true);
 
 	return rvu_cgx_start_stop_io(rvu, pcifunc, true);
 }
@@ -5195,6 +5193,7 @@ int rvu_mbox_handler_nix_lf_stop_rx(struct rvu *rvu, struct msg_req *req,
 	if (err)
 		return err;
 
+	rvu_switch_update_rules(rvu, pcifunc, false);
 	rvu_cgx_tx_enable(rvu, pcifunc, true);
 
 	return 0;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
index cf13c5f0a3c5..5f2e2cbd165a 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
@@ -13,6 +13,253 @@
 #include "rvu.h"
 #include "rvu_reg.h"
 
+static u16 rvu_rep_get_vlan_id(struct rvu *rvu, u16 pcifunc)
+{
+	int id;
+
+	for (id = 0; id < rvu->rep_cnt; id++)
+		if (rvu->rep2pfvf_map[id] == pcifunc)
+			return id;
+	return 0;
+}
+
+static int rvu_rep_tx_vlan_cfg(struct rvu *rvu, u16 pcifunc,
+			       u16 vlan_tci, int *vidx)
+{
+	struct nix_vtag_config_rsp rsp = {};
+	struct nix_vtag_config req = {};
+	u64 etype = ETH_P_8021Q;
+	int err;
+
+	/* Insert vlan tag */
+	req.hdr.pcifunc = pcifunc;
+	req.vtag_size = VTAGSIZE_T4;
+	req.cfg_type = 0; /* tx vlan cfg */
+	req.tx.cfg_vtag0 = true;
+	req.tx.vtag0 = FIELD_PREP(NIX_VLAN_ETYPE_MASK, etype) | vlan_tci;
+
+	err = rvu_mbox_handler_nix_vtag_cfg(rvu, &req, &rsp);
+	if (err) {
+		dev_err(rvu->dev, "Tx vlan config failed\n");
+		return err;
+	}
+	*vidx = rsp.vtag0_idx;
+	return 0;
+}
+
+static int rvu_rep_rx_vlan_cfg(struct rvu *rvu, u16 pcifunc)
+{
+	struct nix_vtag_config req = {};
+	struct nix_vtag_config_rsp rsp;
+
+	/* config strip, capture and size */
+	req.hdr.pcifunc = pcifunc;
+	req.vtag_size = VTAGSIZE_T4;
+	req.cfg_type = 1; /* rx vlan cfg */
+	req.rx.vtag_type = NIX_AF_LFX_RX_VTAG_TYPE0;
+	req.rx.strip_vtag = true;
+	req.rx.capture_vtag = false;
+
+	return rvu_mbox_handler_nix_vtag_cfg(rvu, &req, &rsp);
+}
+
+static int rvu_rep_install_rx_rule(struct rvu *rvu, u16 pcifunc,
+				   u16 entry, bool rte)
+{
+	struct npc_install_flow_req req = {};
+	struct npc_install_flow_rsp rsp = {};
+	struct rvu_pfvf *pfvf;
+	u16 vlan_tci, rep_id;
+
+	pfvf = rvu_get_pfvf(rvu, pcifunc);
+
+	/* To steer the traffic from Representee to Representor */
+	rep_id = rvu_rep_get_vlan_id(rvu, pcifunc);
+	if (rte) {
+		vlan_tci = rep_id | BIT_ULL(8);
+		req.vf = rvu->rep_pcifunc;
+		req.op = NIX_RX_ACTIONOP_UCAST;
+		req.index = rep_id;
+	} else {
+		vlan_tci = rep_id;
+		req.vf = pcifunc;
+		req.op = NIX_RX_ACTION_DEFAULT;
+	}
+
+	rvu_rep_rx_vlan_cfg(rvu, req.vf);
+	req.entry = entry;
+	req.hdr.pcifunc = 0; /* AF is requester */
+	req.features = BIT_ULL(NPC_OUTER_VID) | BIT_ULL(NPC_VLAN_ETYPE_CTAG);
+	req.vtag0_valid = true;
+	req.vtag0_type = NIX_AF_LFX_RX_VTAG_TYPE0;
+	req.packet.vlan_etype = cpu_to_be16(ETH_P_8021Q);
+	req.mask.vlan_etype = cpu_to_be16(ETH_P_8021Q);
+	req.packet.vlan_tci = cpu_to_be16(vlan_tci);
+	req.mask.vlan_tci = cpu_to_be16(0xffff);
+
+	req.channel = RVU_SWITCH_LBK_CHAN;
+	req.chan_mask = 0xffff;
+	req.intf = pfvf->nix_rx_intf;
+
+	return rvu_mbox_handler_npc_install_flow(rvu, &req, &rsp);
+}
+
+static int rvu_rep_install_tx_rule(struct rvu *rvu, u16 pcifunc, u16 entry,
+				   bool rte)
+{
+	struct npc_install_flow_req req = {};
+	struct npc_install_flow_rsp rsp = {};
+	struct rvu_pfvf *pfvf;
+	int vidx, err;
+	u16 vlan_tci;
+	u8 lbkid;
+
+	pfvf = rvu_get_pfvf(rvu, pcifunc);
+	vlan_tci = rvu_rep_get_vlan_id(rvu, pcifunc);
+	if (rte)
+		vlan_tci |= BIT_ULL(8);
+
+	err = rvu_rep_tx_vlan_cfg(rvu, pcifunc, vlan_tci, &vidx);
+	if (err)
+		return err;
+
+	lbkid = pfvf->nix_blkaddr == BLKADDR_NIX0 ? 0 : 1;
+	req.hdr.pcifunc = 0; /* AF is requester */
+	if (rte) {
+		req.vf = pcifunc;
+	} else {
+		req.vf = rvu->rep_pcifunc;
+		req.packet.sq_id = vlan_tci;
+		req.mask.sq_id = 0xffff;
+	}
+
+	req.entry = entry;
+	req.intf = pfvf->nix_tx_intf;
+	req.op = NIX_TX_ACTIONOP_UCAST_CHAN;
+	req.index = (lbkid << 8) | RVU_SWITCH_LBK_CHAN;
+	req.set_cntr = 1;
+	req.vtag0_def = vidx;
+	req.vtag0_op = 1;
+	return rvu_mbox_handler_npc_install_flow(rvu, &req, &rsp);
+}
+
+int rvu_rep_install_mcam_rules(struct rvu *rvu)
+{
+	struct rvu_switch *rswitch = &rvu->rswitch;
+	u16 start = rswitch->start_entry;
+	struct rvu_hwinfo *hw = rvu->hw;
+	u16 pcifunc, entry = 0;
+	int pf, vf, numvfs;
+	int err, nixlf, i;
+	u8 rep;
+
+	for (pf = 1; pf < hw->total_pfs; pf++) {
+		if (!is_pf_cgxmapped(rvu, pf))
+			continue;
+
+		pcifunc = pf << RVU_PFVF_PF_SHIFT;
+		rvu_get_nix_blkaddr(rvu, pcifunc);
+		rep = true;
+		for (i = 0; i < 2; i++) {
+			err = rvu_rep_install_rx_rule(rvu, pcifunc,
+						      start + entry, rep);
+			if (err)
+				return err;
+			rswitch->entry2pcifunc[entry++] = pcifunc;
+
+			err = rvu_rep_install_tx_rule(rvu, pcifunc,
+						      start + entry, rep);
+			if (err)
+				return err;
+			rswitch->entry2pcifunc[entry++] = pcifunc;
+			rep = false;
+		}
+
+		rvu_get_pf_numvfs(rvu, pf, &numvfs, NULL);
+		for (vf = 0; vf < numvfs; vf++) {
+			pcifunc = pf << RVU_PFVF_PF_SHIFT |
+				  ((vf + 1) & RVU_PFVF_FUNC_MASK);
+			rvu_get_nix_blkaddr(rvu, pcifunc);
+
+			/* Skip installing rules if nixlf is not attached */
+			err = nix_get_nixlf(rvu, pcifunc, &nixlf, NULL);
+			if (err)
+				continue;
+			rep = true;
+			for (i = 0; i < 2; i++) {
+				err = rvu_rep_install_rx_rule(rvu, pcifunc,
+							      start + entry,
+							      rep);
+				if (err)
+					return err;
+				rswitch->entry2pcifunc[entry++] = pcifunc;
+
+				err = rvu_rep_install_tx_rule(rvu, pcifunc,
+							      start + entry,
+							      rep);
+				if (err)
+					return err;
+				rswitch->entry2pcifunc[entry++] = pcifunc;
+				rep = false;
+			}
+		}
+	}
+	return 0;
+}
+
+void rvu_rep_update_rules(struct rvu *rvu, u16 pcifunc, bool ena)
+{
+	struct rvu_switch *rswitch = &rvu->rswitch;
+	struct npc_mcam *mcam = &rvu->hw->mcam;
+	u32 max = rswitch->used_entries;
+	int blkaddr;
+	u16 entry;
+
+	if (!rswitch->used_entries)
+		return;
+
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
+
+	if (blkaddr < 0)
+		return;
+
+	rvu_switch_enable_lbk_link(rvu, pcifunc, ena);
+	mutex_lock(&mcam->lock);
+	for (entry = 0; entry < max; entry++) {
+		if (rswitch->entry2pcifunc[entry] == pcifunc)
+			npc_enable_mcam_entry(rvu, mcam, blkaddr, entry, ena);
+	}
+	mutex_unlock(&mcam->lock);
+}
+
+int rvu_rep_pf_init(struct rvu *rvu)
+{
+	u16 pcifunc = rvu->rep_pcifunc;
+	struct rvu_pfvf *pfvf;
+
+	pfvf = rvu_get_pfvf(rvu, pcifunc);
+	set_bit(NIXLF_INITIALIZED, &pfvf->flags);
+	rvu_switch_enable_lbk_link(rvu, pcifunc, true);
+	rvu_rep_rx_vlan_cfg(rvu, pcifunc);
+	return 0;
+}
+
+int rvu_mbox_handler_esw_cfg(struct rvu *rvu, struct esw_cfg_req *req,
+			     struct msg_rsp *rsp)
+{
+	if (req->hdr.pcifunc != rvu->rep_pcifunc)
+		return 0;
+
+	rvu->rep_mode = req->ena;
+
+	if (req->ena)
+		rvu_switch_enable(rvu);
+	else
+		rvu_switch_disable(rvu);
+
+	return 0;
+}
+
 int rvu_mbox_handler_get_rep_cnt(struct rvu *rvu, struct msg_req *req,
 				 struct get_rep_cnt_rsp *rsp)
 {
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c
index ceb81eebf65e..268efb7c1c15 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c
@@ -166,6 +166,8 @@ void rvu_switch_enable(struct rvu *rvu)
 
 	alloc_req.contig = true;
 	alloc_req.count = rvu->cgx_mapped_pfs + rvu->cgx_mapped_vfs;
+	if (rvu->rep_mode)
+		alloc_req.count = alloc_req.count * 4;
 	ret = rvu_mbox_handler_npc_mcam_alloc_entry(rvu, &alloc_req, &alloc_rsp);
 	if (ret) {
@@ -189,7 +191,12 @@ void rvu_switch_enable(struct rvu *rvu)
 	rswitch->used_entries = alloc_rsp.count;
 	rswitch->start_entry = alloc_rsp.entry;
 
-	ret = rvu_switch_install_rules(rvu);
+	if (rvu->rep_mode) {
+		rvu_rep_pf_init(rvu);
+		ret = rvu_rep_install_mcam_rules(rvu);
+	} else {
+		ret = rvu_switch_install_rules(rvu);
+	}
 	if (ret)
 		goto uninstall_rules;
 
@@ -222,6 +229,9 @@ void rvu_switch_disable(struct rvu *rvu)
 	if (!rswitch->used_entries)
 		return;
 
+	if (rvu->rep_mode)
+		goto free_ents;
+
 	for (pf = 1; pf < hw->total_pfs; pf++) {
 		if (!is_pf_cgxmapped(rvu, pf))
 			continue;
@@ -249,6 +259,7 @@ void rvu_switch_disable(struct rvu *rvu)
 		}
 	}
 
+free_ents:
 	uninstall_req.start = rswitch->start_entry;
 	uninstall_req.end = rswitch->start_entry + rswitch->used_entries - 1;
 	free_req.all = 1;
@@ -258,12 +269,15 @@ void rvu_switch_disable(struct rvu *rvu)
 	kfree(rswitch->entry2pcifunc);
 }
 
-void rvu_switch_update_rules(struct rvu *rvu, u16 pcifunc)
+void rvu_switch_update_rules(struct rvu *rvu, u16 pcifunc, bool ena)
 {
 	struct rvu_switch *rswitch = &rvu->rswitch;
 	u32 max = rswitch->used_entries;
 	u16 entry;
 
+	if (rvu->rep_mode)
+		return rvu_rep_update_rules(rvu, pcifunc, ena);
+
 	if (!rswitch->used_entries)
 		return;
 
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
index a021350fe83a..b993b03622dd 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
@@ -28,6 +28,22 @@ MODULE_DESCRIPTION(DRV_STRING);
 MODULE_LICENSE("GPL");
 MODULE_DEVICE_TABLE(pci, rvu_rep_id_table);
 
+static int rvu_eswitch_config(struct otx2_nic *priv, u8 ena)
+{
+	struct esw_cfg_req *req;
+
+	mutex_lock(&priv->mbox.lock);
+	req = otx2_mbox_alloc_msg_esw_cfg(&priv->mbox);
+	if (!req) {
+		mutex_unlock(&priv->mbox.lock);
+		return -ENOMEM;
+	}
+	req->ena = ena;
+	otx2_sync_mbox_msg(&priv->mbox);
+	mutex_unlock(&priv->mbox.lock);
+	return 0;
+}
+
 static netdev_tx_t rvu_rep_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct rep_dev *rep = netdev_priv(dev);
@@ -161,6 +177,7 @@ void rvu_rep_destroy(struct otx2_nic *priv)
 	struct rep_dev *rep;
 	int rep_id;
 
+	rvu_eswitch_config(priv, false);
 	priv->flags |= OTX2_FLAG_INTF_DOWN;
 	rvu_rep_free_cq_rsrc(priv);
 	for (rep_id = 0; rep_id < priv->rep_cnt; rep_id++) {
@@ -221,6 +238,7 @@ int rvu_rep_create(struct otx2_nic *priv, struct netlink_ext_ack *extack)
 	if (err)
 		goto exit;
 
+	rvu_eswitch_config(priv, true);
 	return 0;
 exit:
 	while (--rep_id >= 0) {
-- 
2.25.1