From nobody Mon Dec 15 21:44:00 2025
From: Suman Ghosh
Subject: [net-next PATCH v5 1/6] octeontx2-pf: use xdp_return_frame() to free xdp buffers
Date: Thu, 6 Feb 2025 14:20:29 +0530
Message-ID: <20250206085034.1978172-2-sumang@marvell.com>
In-Reply-To: <20250206085034.1978172-1-sumang@marvell.com>
References: <20250206085034.1978172-1-sumang@marvell.com>
Use xdp_return_frame() to free the xdp frames and return their associated
pages to the page pool.

Signed-off-by: Geetha sowjanya
Signed-off-by: Suman Ghosh
---
 .../marvell/octeontx2/nic/otx2_common.h       |  4 +-
 .../ethernet/marvell/octeontx2/nic/otx2_pf.c  |  7 ++-
 .../marvell/octeontx2/nic/otx2_txrx.c         | 49 ++++++++++++-------
 .../marvell/octeontx2/nic/otx2_txrx.h         |  1 +
 4 files changed, 39 insertions(+), 22 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index 65814e3dc93f..d5fbccb289df 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include

 #include
 #include
@@ -1094,7 +1095,8 @@ int otx2_del_macfilter(struct net_device *netdev, const u8 *mac);
 int otx2_add_macfilter(struct net_device *netdev, const u8 *mac);
 int otx2_enable_rxvlan(struct otx2_nic *pf, bool enable);
 int otx2_install_rxvlan_offload_flow(struct otx2_nic *pfvf);
-bool otx2_xdp_sq_append_pkt(struct otx2_nic *pfvf, u64 iova, int len, u16 qidx);
+bool otx2_xdp_sq_append_pkt(struct otx2_nic *pfvf, struct xdp_frame *xdpf,
+			    u64 iova, int len, u16 qidx, u16 flags);
 u16 otx2_get_max_mtu(struct otx2_nic *pfvf);
 int otx2_handle_ntuple_tc_features(struct net_device *netdev,
				    netdev_features_t features);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index e1dde93e8af8..4347a3c95350 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -2691,7 +2691,6 @@ static int otx2_get_vf_config(struct net_device *netdev, int vf,
 static int otx2_xdp_xmit_tx(struct otx2_nic *pf, struct xdp_frame *xdpf,
			    int qidx)
 {
-	struct page *page;
	u64 dma_addr;
	int err = 0;

@@ -2701,11 +2700,11 @@ static int otx2_xdp_xmit_tx(struct otx2_nic *pf, struct xdp_frame *xdpf,
	if (dma_mapping_error(pf->dev, dma_addr))
		return -ENOMEM;

-	err = otx2_xdp_sq_append_pkt(pf, dma_addr, xdpf->len, qidx);
+	err = otx2_xdp_sq_append_pkt(pf, xdpf, dma_addr, xdpf->len,
+				     qidx, XDP_REDIRECT);
	if (!err) {
		otx2_dma_unmap_page(pf, dma_addr, xdpf->len, DMA_TO_DEVICE);
-		page = virt_to_page(xdpf->data);
-		put_page(page);
+		xdp_return_frame(xdpf);
		return -ENOMEM;
	}
	return 0;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
index 224cef938927..d46f05993d3f 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
@@ -96,20 +96,22 @@ static unsigned int frag_num(unsigned int i)

 static void otx2_xdp_snd_pkt_handler(struct otx2_nic *pfvf,
				     struct otx2_snd_queue *sq,
-				 struct nix_cqe_tx_s *cqe)
+				     struct nix_cqe_tx_s *cqe)
 {
	struct nix_send_comp_s *snd_comp = &cqe->comp;
	struct sg_list *sg;
	struct page *page;
-	u64 pa;
+	u64 pa, iova;

	sg = &sq->sg[snd_comp->sqe_id];

-	pa = otx2_iova_to_phys(pfvf->iommu_domain, sg->dma_addr[0]);
-	otx2_dma_unmap_page(pfvf, sg->dma_addr[0],
-			    sg->size[0], DMA_TO_DEVICE);
+	iova = sg->dma_addr[0];
+	pa = otx2_iova_to_phys(pfvf->iommu_domain, iova);
	page = virt_to_page(phys_to_virt(pa));
-	put_page(page);
+	if (sg->flags & XDP_REDIRECT)
+		otx2_dma_unmap_page(pfvf, sg->dma_addr[0], sg->size[0], DMA_TO_DEVICE);
+	xdp_return_frame((struct xdp_frame *)sg->skb);
+	sg->skb = (u64)NULL;
 }

 static void otx2_snd_pkt_handler(struct otx2_nic *pfvf,
@@ -1359,8 +1361,9 @@ void otx2_free_pending_sqe(struct otx2_nic *pfvf)
	}
 }

-static void otx2_xdp_sqe_add_sg(struct otx2_snd_queue *sq, u64 dma_addr,
-				int len, int *offset)
+static void otx2_xdp_sqe_add_sg(struct otx2_snd_queue *sq,
+				struct xdp_frame *xdpf,
+				u64 dma_addr, int len, int *offset, u16 flags)
 {
	struct nix_sqe_sg_s *sg = NULL;
	u64 *iova = NULL;
@@ -1377,9 +1380,12 @@ static void otx2_xdp_sqe_add_sg(struct otx2_snd_queue *sq, u64 dma_addr,
	sq->sg[sq->head].dma_addr[0] = dma_addr;
	sq->sg[sq->head].size[0] = len;
	sq->sg[sq->head].num_segs = 1;
+	sq->sg[sq->head].flags = flags;
+	sq->sg[sq->head].skb = (u64)xdpf;
 }

-bool otx2_xdp_sq_append_pkt(struct otx2_nic *pfvf, u64 iova, int len, u16 qidx)
+bool otx2_xdp_sq_append_pkt(struct otx2_nic *pfvf, struct xdp_frame *xdpf,
+			    u64 iova, int len, u16 qidx, u16 flags)
 {
	struct nix_sqe_hdr_s *sqe_hdr;
	struct otx2_snd_queue *sq;
@@ -1405,7 +1411,7 @@ bool otx2_xdp_sq_append_pkt(struct otx2_nic *pfvf, u64 iova, int len, u16 qidx)

	offset = sizeof(*sqe_hdr);

-	otx2_xdp_sqe_add_sg(sq, iova, len, &offset);
+	otx2_xdp_sqe_add_sg(sq, xdpf, iova, len, &offset, flags);
	sqe_hdr->sizem1 = (offset / 16) - 1;
	pfvf->hw_ops->sqe_flush(pfvf, sq, offset, qidx);

@@ -1419,6 +1425,8 @@ static bool otx2_xdp_rcv_pkt_handler(struct otx2_nic *pfvf,
				     bool *need_xdp_flush)
 {
	unsigned char *hard_start;
+	struct otx2_pool *pool;
+	struct xdp_frame *xdpf;
	int qidx = cq->cq_idx;
	struct xdp_buff xdp;
	struct page *page;
@@ -1426,6 +1434,7 @@ static bool otx2_xdp_rcv_pkt_handler(struct otx2_nic *pfvf,
	u32 act;
	int err;

+	pool = &pfvf->qset.pool[qidx];
	iova = cqe->sg.seg_addr - OTX2_HEAD_ROOM;
	pa = otx2_iova_to_phys(pfvf->iommu_domain, iova);
	page = virt_to_page(phys_to_virt(pa));
@@ -1444,19 +1453,21 @@ static bool otx2_xdp_rcv_pkt_handler(struct otx2_nic *pfvf,
	case XDP_TX:
		qidx += pfvf->hw.tx_queues;
		cq->pool_ptrs++;
-		return otx2_xdp_sq_append_pkt(pfvf, iova,
-					      cqe->sg.seg_size, qidx);
+		xdpf = xdp_convert_buff_to_frame(&xdp);
+		return otx2_xdp_sq_append_pkt(pfvf, xdpf, cqe->sg.seg_addr,
+					      cqe->sg.seg_size, qidx, XDP_TX);
	case XDP_REDIRECT:
		cq->pool_ptrs++;
		err = xdp_do_redirect(pfvf->netdev, &xdp, prog);
-
-		otx2_dma_unmap_page(pfvf, iova, pfvf->rbsize,
-				    DMA_FROM_DEVICE);
		if (!err) {
			*need_xdp_flush = true;
			return true;
		}
-		put_page(page);
+
+		otx2_dma_unmap_page(pfvf, iova, pfvf->rbsize,
+				    DMA_FROM_DEVICE);
+		xdpf = xdp_convert_buff_to_frame(&xdp);
+		xdp_return_frame(xdpf);
		break;
	default:
		bpf_warn_invalid_xdp_action(pfvf->netdev, prog, act);
@@ -1465,10 +1476,14 @@ static bool otx2_xdp_rcv_pkt_handler(struct otx2_nic *pfvf,
		trace_xdp_exception(pfvf->netdev, prog, act);
		break;
	case XDP_DROP:
+		cq->pool_ptrs++;
+		if (page->pp) {
+			page_pool_recycle_direct(pool->page_pool, page);
+			return true;
+		}
		otx2_dma_unmap_page(pfvf, iova, pfvf->rbsize,
				    DMA_FROM_DEVICE);
		put_page(page);
-		cq->pool_ptrs++;
		return true;
	}
	return false;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
index d23810963fdb..92e1e84cad75 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
@@ -76,6 +76,7 @@ struct otx2_rcv_queue {

 struct sg_list {
	u16 num_segs;
+	u16 flags;
	u64 skb;
	u64 size[OTX2_MAX_FRAGS_IN_SQE];
	u64 dma_addr[OTX2_MAX_FRAGS_IN_SQE];
-- 
2.25.1

From nobody Mon Dec 15 21:44:00 2025
From: Suman Ghosh
Subject: [net-next PATCH v5 2/6] octeontx2-pf: Add AF_XDP non-zero copy support
Date: Thu, 6 Feb 2025 14:20:30 +0530
Message-ID: <20250206085034.1978172-3-sumang@marvell.com>
In-Reply-To: <20250206085034.1978172-1-sumang@marvell.com>
References: <20250206085034.1978172-1-sumang@marvell.com>

Set the xdp rx ring memory type to MEM_TYPE_PAGE_POOL for AF_XDP to work.
This is needed since xdp_return_frame() internally uses the page pool.
Fixes: 06059a1a9a4a ("octeontx2-pf: Add XDP support to netdev PF")
Signed-off-by: Suman Ghosh
---
 drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index 2b49bfec7869..161cf33ef89e 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -1047,6 +1047,7 @@ static int otx2_cq_init(struct otx2_nic *pfvf, u16 qidx)
	int err, pool_id, non_xdp_queues;
	struct nix_aq_enq_req *aq;
	struct otx2_cq_queue *cq;
+	struct otx2_pool *pool;

	cq = &qset->cq[qidx];
	cq->cq_idx = qidx;
@@ -1055,8 +1056,13 @@ static int otx2_cq_init(struct otx2_nic *pfvf, u16 qidx)
		cq->cq_type = CQ_RX;
		cq->cint_idx = qidx;
		cq->cqe_cnt = qset->rqe_cnt;
-		if (pfvf->xdp_prog)
+		if (pfvf->xdp_prog) {
+			pool = &qset->pool[qidx];
			xdp_rxq_info_reg(&cq->xdp_rxq, pfvf->netdev, qidx, 0);
+			xdp_rxq_info_reg_mem_model(&cq->xdp_rxq,
+						   MEM_TYPE_PAGE_POOL,
+						   pool->page_pool);
+		}
	} else if (qidx < non_xdp_queues) {
		cq->cq_type = CQ_TX;
		cq->cint_idx = qidx - pfvf->hw.rx_queues;
-- 
2.25.1

From nobody Mon Dec 15 21:44:00 2025
From: Suman Ghosh
Subject: [net-next PATCH v5 3/6] octeontx2-pf: AF_XDP zero copy receive support
Date: Thu, 6 Feb 2025 14:20:31 +0530
Message-ID: <20250206085034.1978172-4-sumang@marvell.com>
In-Reply-To: <20250206085034.1978172-1-sumang@marvell.com>
References: <20250206085034.1978172-1-sumang@marvell.com>

This patch adds AF_XDP zero-copy support for CN10K, specifically the
receive side. In this approach, once an xdp program with zero-copy
support is enabled on a specific rx queue, that receive queue is
disabled/detached from the existing kernel queue and re-assigned to
the umem memory.
Signed-off-by: Suman Ghosh --- .../ethernet/marvell/octeontx2/nic/Makefile | 2 +- .../ethernet/marvell/octeontx2/nic/cn10k.c | 6 +- .../marvell/octeontx2/nic/otx2_common.c | 112 ++++++++--- .../marvell/octeontx2/nic/otx2_common.h | 6 +- .../ethernet/marvell/octeontx2/nic/otx2_pf.c | 25 ++- .../marvell/octeontx2/nic/otx2_txrx.c | 81 ++++++-- .../marvell/octeontx2/nic/otx2_txrx.h | 6 + .../ethernet/marvell/octeontx2/nic/otx2_vf.c | 12 +- .../ethernet/marvell/octeontx2/nic/otx2_xsk.c | 182 ++++++++++++++++++ .../ethernet/marvell/octeontx2/nic/otx2_xsk.h | 21 ++ .../ethernet/marvell/octeontx2/nic/qos_sq.c | 2 +- 11 files changed, 392 insertions(+), 63 deletions(-) create mode 100644 drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.c create mode 100644 drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.h diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile b/drivers/= net/ethernet/marvell/octeontx2/nic/Makefile index cb6513ab35e7..69e0778f9ac1 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile +++ b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile @@ -9,7 +9,7 @@ obj-$(CONFIG_RVU_ESWITCH) +=3D rvu_rep.o =20 rvu_nicpf-y :=3D otx2_pf.o otx2_common.o otx2_txrx.o otx2_ethtool.o \ otx2_flows.o otx2_tc.o cn10k.o otx2_dmac_flt.o \ - otx2_devlink.o qos_sq.o qos.o + otx2_devlink.o qos_sq.o qos.o otx2_xsk.o rvu_nicvf-y :=3D otx2_vf.o rvu_rep-y :=3D rep.o =20 diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c b/drivers/n= et/ethernet/marvell/octeontx2/nic/cn10k.c index a15cc86635d6..9a2865c60850 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c @@ -112,9 +112,12 @@ int cn10k_refill_pool_ptrs(void *dev, struct otx2_cq_q= ueue *cq) struct otx2_nic *pfvf =3D dev; int cnt =3D cq->pool_ptrs; u64 ptrs[NPA_MAX_BURST]; + struct otx2_pool *pool; dma_addr_t bufptr; int num_ptrs =3D 1; =20 + pool =3D &pfvf->qset.pool[cq->cq_idx]; + /* Refill pool with new buffers */ while 
(cq->pool_ptrs) { if (otx2_alloc_buffer(pfvf, cq, &bufptr)) { @@ -124,7 +127,8 @@ int cn10k_refill_pool_ptrs(void *dev, struct otx2_cq_qu= eue *cq) break; } cq->pool_ptrs--; - ptrs[num_ptrs] =3D (u64)bufptr + OTX2_HEAD_ROOM; + ptrs[num_ptrs] =3D pool->xsk_pool ? (u64)bufptr : (u64)bufptr + OTX2_HEA= D_ROOM; + num_ptrs++; if (num_ptrs =3D=3D NPA_MAX_BURST || cq->pool_ptrs =3D=3D 0) { __cn10k_aura_freeptr(pfvf, cq->cq_idx, ptrs, diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/dri= vers/net/ethernet/marvell/octeontx2/nic/otx2_common.c index 161cf33ef89e..b31eccb03cc3 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c @@ -17,6 +17,7 @@ #include "otx2_common.h" #include "otx2_struct.h" #include "cn10k.h" +#include "otx2_xsk.h" =20 static bool otx2_is_pfc_enabled(struct otx2_nic *pfvf) { @@ -549,10 +550,13 @@ static int otx2_alloc_pool_buf(struct otx2_nic *pfvf,= struct otx2_pool *pool, } =20 static int __otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool, - dma_addr_t *dma) + dma_addr_t *dma, int qidx, int idx) { u8 *buf; =20 + if (pool->xsk_pool) + return otx2_xsk_pool_alloc_buf(pfvf, pool, dma, idx); + if (pool->page_pool) return otx2_alloc_pool_buf(pfvf, pool, dma); =20 @@ -571,12 +575,12 @@ static int __otx2_alloc_rbuf(struct otx2_nic *pfvf, s= truct otx2_pool *pool, } =20 int otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool, - dma_addr_t *dma) + dma_addr_t *dma, int qidx, int idx) { int ret; =20 local_bh_disable(); - ret =3D __otx2_alloc_rbuf(pfvf, pool, dma); + ret =3D __otx2_alloc_rbuf(pfvf, pool, dma, qidx, idx); local_bh_enable(); return ret; } @@ -584,7 +588,8 @@ int otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_= pool *pool, int otx2_alloc_buffer(struct otx2_nic *pfvf, struct otx2_cq_queue *cq, dma_addr_t *dma) { - if (unlikely(__otx2_alloc_rbuf(pfvf, cq->rbpool, dma))) + if (unlikely(__otx2_alloc_rbuf(pfvf, cq->rbpool, dma, + 
cq->cq_idx, cq->pool_ptrs - 1))) return -ENOMEM; return 0; } @@ -884,7 +889,7 @@ void otx2_sqb_flush(struct otx2_nic *pfvf) #define RQ_PASS_LVL_AURA (255 - ((95 * 256) / 100)) /* RED when 95% is ful= l */ #define RQ_DROP_LVL_AURA (255 - ((99 * 256) / 100)) /* Drop when 99% is fu= ll */ =20 -static int otx2_rq_init(struct otx2_nic *pfvf, u16 qidx, u16 lpb_aura) +int otx2_rq_init(struct otx2_nic *pfvf, u16 qidx, u16 lpb_aura) { struct otx2_qset *qset =3D &pfvf->qset; struct nix_aq_enq_req *aq; @@ -1041,7 +1046,7 @@ int otx2_sq_init(struct otx2_nic *pfvf, u16 qidx, u16= sqb_aura) =20 } =20 -static int otx2_cq_init(struct otx2_nic *pfvf, u16 qidx) +int otx2_cq_init(struct otx2_nic *pfvf, u16 qidx) { struct otx2_qset *qset =3D &pfvf->qset; int err, pool_id, non_xdp_queues; @@ -1057,11 +1062,18 @@ static int otx2_cq_init(struct otx2_nic *pfvf, u16 = qidx) cq->cint_idx =3D qidx; cq->cqe_cnt =3D qset->rqe_cnt; if (pfvf->xdp_prog) { - pool =3D &qset->pool[qidx]; xdp_rxq_info_reg(&cq->xdp_rxq, pfvf->netdev, qidx, 0); - xdp_rxq_info_reg_mem_model(&cq->xdp_rxq, - MEM_TYPE_PAGE_POOL, - pool->page_pool); + pool =3D &qset->pool[qidx]; + if (pool->xsk_pool) { + xdp_rxq_info_reg_mem_model(&cq->xdp_rxq, + MEM_TYPE_XSK_BUFF_POOL, + NULL); + xsk_pool_set_rxq_info(pool->xsk_pool, &cq->xdp_rxq); + } else if (pool->page_pool) { + xdp_rxq_info_reg_mem_model(&cq->xdp_rxq, + MEM_TYPE_PAGE_POOL, + pool->page_pool); + } } } else if (qidx < non_xdp_queues) { cq->cq_type =3D CQ_TX; @@ -1281,9 +1293,10 @@ void otx2_free_bufs(struct otx2_nic *pfvf, struct ot= x2_pool *pool, =20 pa =3D otx2_iova_to_phys(pfvf->iommu_domain, iova); page =3D virt_to_head_page(phys_to_virt(pa)); - if (pool->page_pool) { page_pool_put_full_page(pool->page_pool, page, true); + } else if (pool->xsk_pool) { + /* Note: No way of identifying xdp_buff */ } else { dma_unmap_page_attrs(pfvf->dev, iova, size, DMA_FROM_DEVICE, @@ -1298,6 +1311,7 @@ void otx2_free_aura_ptr(struct otx2_nic *pfvf, int ty= pe) int pool_id, 
pool_start =3D 0, pool_end =3D 0, size =3D 0; struct otx2_pool *pool; u64 iova; + int idx; =20 if (type =3D=3D AURA_NIX_SQ) { pool_start =3D otx2_get_pool_idx(pfvf, type, 0); @@ -1312,8 +1326,8 @@ void otx2_free_aura_ptr(struct otx2_nic *pfvf, int ty= pe) =20 /* Free SQB and RQB pointers from the aura pool */ for (pool_id =3D pool_start; pool_id < pool_end; pool_id++) { - iova =3D otx2_aura_allocptr(pfvf, pool_id); pool =3D &pfvf->qset.pool[pool_id]; + iova =3D otx2_aura_allocptr(pfvf, pool_id); while (iova) { if (type =3D=3D AURA_NIX_RQ) iova -=3D OTX2_HEAD_ROOM; @@ -1322,6 +1336,13 @@ void otx2_free_aura_ptr(struct otx2_nic *pfvf, int t= ype) =20 iova =3D otx2_aura_allocptr(pfvf, pool_id); } + + for (idx =3D 0 ; idx < pool->xdp_cnt; idx++) { + if (!pool->xdp[idx]) + continue; + + xsk_buff_free(pool->xdp[idx]); + } } } =20 @@ -1338,7 +1359,8 @@ void otx2_aura_pool_free(struct otx2_nic *pfvf) qmem_free(pfvf->dev, pool->stack); qmem_free(pfvf->dev, pool->fc_addr); page_pool_destroy(pool->page_pool); - pool->page_pool =3D NULL; + devm_kfree(pfvf->dev, pool->xdp); + pool->xsk_pool =3D NULL; } devm_kfree(pfvf->dev, pfvf->qset.pool); pfvf->qset.pool =3D NULL; @@ -1425,6 +1447,7 @@ int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id, int stack_pages, int numptrs, int buf_size, int type) { struct page_pool_params pp_params =3D { 0 }; + struct xsk_buff_pool *xsk_pool; struct npa_aq_enq_req *aq; struct otx2_pool *pool; int err; @@ -1468,21 +1491,35 @@ int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_= id, aq->ctype =3D NPA_AQ_CTYPE_POOL; aq->op =3D NPA_AQ_INSTOP_INIT; =20 - if (type !=3D AURA_NIX_RQ) { - pool->page_pool =3D NULL; + if (type !=3D AURA_NIX_RQ) + return 0; + + if (!test_bit(pool_id, pfvf->af_xdp_zc_qidx)) { + pp_params.order =3D get_order(buf_size); + pp_params.flags =3D PP_FLAG_DMA_MAP; + pp_params.pool_size =3D min(OTX2_PAGE_POOL_SZ, numptrs); + pp_params.nid =3D NUMA_NO_NODE; + pp_params.dev =3D pfvf->dev; + pp_params.dma_dir =3D DMA_FROM_DEVICE; + 
 	pool->page_pool = page_pool_create(&pp_params);
+	if (IS_ERR(pool->page_pool)) {
+		netdev_err(pfvf->netdev, "Creation of page pool failed\n");
+		return PTR_ERR(pool->page_pool);
+	}
 	return 0;
 }
 
-	pp_params.order = get_order(buf_size);
-	pp_params.flags = PP_FLAG_DMA_MAP;
-	pp_params.pool_size = min(OTX2_PAGE_POOL_SZ, numptrs);
-	pp_params.nid = NUMA_NO_NODE;
-	pp_params.dev = pfvf->dev;
-	pp_params.dma_dir = DMA_FROM_DEVICE;
-	pool->page_pool = page_pool_create(&pp_params);
-	if (IS_ERR(pool->page_pool)) {
-		netdev_err(pfvf->netdev, "Creation of page pool failed\n");
-		return PTR_ERR(pool->page_pool);
+	/* Set XSK pool to support AF_XDP zero-copy */
+	xsk_pool = xsk_get_pool_from_qid(pfvf->netdev, pool_id);
+	if (xsk_pool) {
+		pool->xsk_pool = xsk_pool;
+		pool->xdp_cnt = numptrs;
+		pool->xdp = devm_kcalloc(pfvf->dev,
+					 numptrs, sizeof(struct xdp_buff *), GFP_KERNEL);
+		if (IS_ERR(pool->xdp)) {
+			netdev_err(pfvf->netdev, "Creation of xsk pool failed\n");
+			return PTR_ERR(pool->xdp);
+		}
 	}
 
 	return 0;
@@ -1543,9 +1580,18 @@ int otx2_sq_aura_pool_init(struct otx2_nic *pfvf)
 	}
 
 	for (ptr = 0; ptr < num_sqbs; ptr++) {
-		err = otx2_alloc_rbuf(pfvf, pool, &bufptr);
-		if (err)
+		err = otx2_alloc_rbuf(pfvf, pool, &bufptr, pool_id, ptr);
+		if (err) {
+			if (pool->xsk_pool) {
+				ptr--;
+				while (ptr >= 0) {
+					xsk_buff_free(pool->xdp[ptr]);
+					ptr--;
+				}
+			}
 			goto err_mem;
+		}
+
 		pfvf->hw_ops->aura_freeptr(pfvf, pool_id, bufptr);
 		sq->sqb_ptrs[sq->sqb_count++] = (u64)bufptr;
 	}
@@ -1595,11 +1641,19 @@ int otx2_rq_aura_pool_init(struct otx2_nic *pfvf)
 	/* Allocate pointers and free them to aura/pool */
 	for (pool_id = 0; pool_id < hw->rqpool_cnt; pool_id++) {
 		pool = &pfvf->qset.pool[pool_id];
+
 		for (ptr = 0; ptr < num_ptrs; ptr++) {
-			err = otx2_alloc_rbuf(pfvf, pool, &bufptr);
-			if (err)
+			err = otx2_alloc_rbuf(pfvf, pool, &bufptr, pool_id, ptr);
+			if (err) {
+				if (pool->xsk_pool) {
+					while (ptr)
+						xsk_buff_free(pool->xdp[--ptr]);
+				}
 				return -ENOMEM;
+			}
+
+			pfvf->hw_ops->aura_freeptr(pfvf, pool_id,
+						   pool->xsk_pool ? bufptr : bufptr + OTX2_HEAD_ROOM);
 		}
 	}
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index d5fbccb289df..60508971b62f 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -532,6 +532,8 @@ struct otx2_nic {
 
 	/* Inline ipsec */
 	struct cn10k_ipsec ipsec;
+	/* af_xdp zero-copy */
+	unsigned long *af_xdp_zc_qidx;
 };
 
 static inline bool is_otx2_lbkvf(struct pci_dev *pdev)
@@ -1003,7 +1005,7 @@ void otx2_txschq_free_one(struct otx2_nic *pfvf, u16 lvl, u16 schq);
 void otx2_free_pending_sqe(struct otx2_nic *pfvf);
 void otx2_sqb_flush(struct otx2_nic *pfvf);
 int otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
-		    dma_addr_t *dma);
+		    dma_addr_t *dma, int qidx, int idx);
 int otx2_rxtx_enable(struct otx2_nic *pfvf, bool enable);
 void otx2_ctx_disable(struct mbox *mbox, int type, bool npa);
 int otx2_nix_config_bp(struct otx2_nic *pfvf, bool enable);
@@ -1033,6 +1035,8 @@ void otx2_pfaf_mbox_destroy(struct otx2_nic *pf);
 void otx2_disable_mbox_intr(struct otx2_nic *pf);
 void otx2_disable_napi(struct otx2_nic *pf);
 irqreturn_t otx2_cq_intr_handler(int irq, void *cq_irq);
+int otx2_rq_init(struct otx2_nic *pfvf, u16 qidx, u16 lpb_aura);
+int otx2_cq_init(struct otx2_nic *pfvf, u16 qidx);
 
 /* RSS configuration APIs*/
 int otx2_rss_init(struct otx2_nic *pfvf);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index 4347a3c95350..188ab6b6fb16 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -27,6 +27,7 @@
 #include "qos.h"
 #include
 #include "cn10k_ipsec.h"
+#include "otx2_xsk.h"
 
 #define DRV_NAME	"rvu_nicpf"
 #define DRV_STRING	"Marvell RVU NIC Physical Function Driver"
@@ -1662,9 +1663,7 @@ void otx2_free_hw_resources(struct otx2_nic *pf)
 	struct nix_lf_free_req *free_req;
 	struct mbox *mbox = &pf->mbox;
 	struct otx2_cq_queue *cq;
-	struct otx2_pool *pool;
 	struct msg_req *req;
-	int pool_id;
 	int qidx;
 
 	/* Ensure all SQE are processed */
@@ -1705,13 +1704,6 @@ void otx2_free_hw_resources(struct otx2_nic *pf)
 	/* Free RQ buffer pointers*/
 	otx2_free_aura_ptr(pf, AURA_NIX_RQ);
 
-	for (qidx = 0; qidx < pf->hw.rx_queues; qidx++) {
-		pool_id = otx2_get_pool_idx(pf, AURA_NIX_RQ, qidx);
-		pool = &pf->qset.pool[pool_id];
-		page_pool_destroy(pool->page_pool);
-		pool->page_pool = NULL;
-	}
-
 	otx2_free_cq_res(pf);
 
 	/* Free all ingress bandwidth profiles allocated */
@@ -2788,6 +2780,8 @@ static int otx2_xdp(struct net_device *netdev, struct netdev_bpf *xdp)
 	switch (xdp->command) {
 	case XDP_SETUP_PROG:
 		return otx2_xdp_setup(pf, xdp->prog);
+	case XDP_SETUP_XSK_POOL:
+		return otx2_xsk_pool_setup(pf, xdp->xsk.pool, xdp->xsk.queue_id);
 	default:
 		return -EINVAL;
 	}
@@ -2865,6 +2859,7 @@ static const struct net_device_ops otx2_netdev_ops = {
 	.ndo_set_vf_vlan	= otx2_set_vf_vlan,
 	.ndo_get_vf_config	= otx2_get_vf_config,
 	.ndo_bpf		= otx2_xdp,
+	.ndo_xsk_wakeup		= otx2_xsk_wakeup,
 	.ndo_xdp_xmit		= otx2_xdp_xmit,
 	.ndo_setup_tc		= otx2_setup_tc,
 	.ndo_set_vf_trust	= otx2_ndo_set_vf_trust,
@@ -3203,16 +3198,26 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	/* Enable link notifications */
 	otx2_cgx_config_linkevents(pf, true);
 
+	pf->af_xdp_zc_qidx = bitmap_zalloc(qcount, GFP_KERNEL);
+	if (!pf->af_xdp_zc_qidx) {
+		err = -ENOMEM;
+		goto err_af_xdp_zc;
+	}
+
 #ifdef CONFIG_DCB
 	err = otx2_dcbnl_set_ops(netdev);
 	if (err)
-		goto err_pf_sriov_init;
+		goto err_dcbnl_set_ops;
 #endif
 
 	otx2_qos_init(pf, qos_txqs);
 
 	return 0;
 
+err_dcbnl_set_ops:
+	bitmap_free(pf->af_xdp_zc_qidx);
+err_af_xdp_zc:
+	otx2_sriov_vfcfg_cleanup(pf);
 err_pf_sriov_init:
 	otx2_shutdown_tc(pf);
err_mcam_flow_del:
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
index d46f05993d3f..44137160bdf6 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 
 #include "otx2_reg.h"
 #include "otx2_common.h"
@@ -529,9 +530,10 @@ static void otx2_adjust_adaptive_coalese(struct otx2_nic *pfvf, struct otx2_cq_p
 int otx2_napi_handler(struct napi_struct *napi, int budget)
 {
 	struct otx2_cq_queue *rx_cq = NULL;
+	struct otx2_cq_queue *cq = NULL;
 	struct otx2_cq_poll *cq_poll;
 	int workdone = 0, cq_idx, i;
-	struct otx2_cq_queue *cq;
+	struct otx2_pool *pool;
 	struct otx2_qset *qset;
 	struct otx2_nic *pfvf;
 	int filled_cnt = -1;
@@ -556,6 +558,7 @@ int otx2_napi_handler(struct napi_struct *napi, int budget)
 
 	if (rx_cq && rx_cq->pool_ptrs)
 		filled_cnt = pfvf->hw_ops->refill_pool_ptrs(pfvf, rx_cq);
+
 	/* Clear the IRQ */
 	otx2_write64(pfvf, NIX_LF_CINTX_INT(cq_poll->cint_idx), BIT_ULL(0));
 
@@ -568,20 +571,31 @@ int otx2_napi_handler(struct napi_struct *napi, int budget)
 	if (pfvf->flags & OTX2_FLAG_ADPTV_INT_COAL_ENABLED)
 		otx2_adjust_adaptive_coalese(pfvf, cq_poll);
 
+	if (likely(cq))
+		pool = &pfvf->qset.pool[cq->cq_idx];
+
 	if (unlikely(!filled_cnt)) {
 		struct refill_work *work;
 		struct delayed_work *dwork;
 
-		work = &pfvf->refill_wrk[cq->cq_idx];
-		dwork = &work->pool_refill_work;
-		/* Schedule a task if no other task is running */
-		if (!cq->refill_task_sched) {
-			work->napi = napi;
-			cq->refill_task_sched = true;
-			schedule_delayed_work(dwork,
-					      msecs_to_jiffies(100));
+		if (likely(cq)) {
+			work = &pfvf->refill_wrk[cq->cq_idx];
+			dwork = &work->pool_refill_work;
+			/* Schedule a task if no other task is running */
+			if (!cq->refill_task_sched) {
+				work->napi = napi;
+				cq->refill_task_sched = true;
+				schedule_delayed_work(dwork,
+						      msecs_to_jiffies(100));
+			}
 		}
+		/* Call for wake-up for not able to fill buffers */
+		if (pool->xsk_pool)
+			xsk_set_rx_need_wakeup(pool->xsk_pool);
 	} else {
+		/* Clear wake-up, since buffers are filled successfully */
+		if (pool->xsk_pool)
+			xsk_clear_rx_need_wakeup(pool->xsk_pool);
 		/* Re-enable interrupts */
 		otx2_write64(pfvf,
 			     NIX_LF_CINTX_ENA_W1S(cq_poll->cint_idx),
@@ -1232,15 +1246,19 @@ void otx2_cleanup_rx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq, int q
 	u16 pool_id;
 	u64 iova;
 
-	if (pfvf->xdp_prog)
+	pool_id = otx2_get_pool_idx(pfvf, AURA_NIX_RQ, qidx);
+	pool = &pfvf->qset.pool[pool_id];
+
+	if (pfvf->xdp_prog) {
+		if (pool->page_pool)
+			xdp_rxq_info_unreg_mem_model(&cq->xdp_rxq);
+
 		xdp_rxq_info_unreg(&cq->xdp_rxq);
+	}
 
 	if (otx2_nix_cq_op_status(pfvf, cq) || !cq->pend_cqe)
 		return;
 
-	pool_id = otx2_get_pool_idx(pfvf, AURA_NIX_RQ, qidx);
-	pool = &pfvf->qset.pool[pool_id];
-
 	while (cq->pend_cqe) {
 		cqe = (struct nix_cqe_rx_s *)otx2_get_next_cqe(cq);
 		processed_cqe++;
@@ -1424,17 +1442,28 @@ static bool otx2_xdp_rcv_pkt_handler(struct otx2_nic *pfvf,
 				     struct otx2_cq_queue *cq,
 				     bool *need_xdp_flush)
 {
+	struct xdp_buff xdp, *xsk_buff = NULL;
 	unsigned char *hard_start;
 	struct otx2_pool *pool;
 	struct xdp_frame *xdpf;
 	int qidx = cq->cq_idx;
-	struct xdp_buff xdp;
 	struct page *page;
 	u64 iova, pa;
 	u32 act;
 	int err;
 
 	pool = &pfvf->qset.pool[qidx];
+
+	if (pool->xsk_pool) {
+		xsk_buff = pool->xdp[--cq->rbpool->xdp_top];
+		if (!xsk_buff)
+			return false;
+
+		xsk_buff->data_end = xsk_buff->data + cqe->sg.seg_size;
+		act = bpf_prog_run_xdp(prog, xsk_buff);
+		goto handle_xdp_verdict;
+	}
+
 	iova = cqe->sg.seg_addr - OTX2_HEAD_ROOM;
 	pa = otx2_iova_to_phys(pfvf->iommu_domain, iova);
 	page = virt_to_page(phys_to_virt(pa));
@@ -1447,6 +1476,7 @@ static bool otx2_xdp_rcv_pkt_handler(struct otx2_nic *pfvf,
 
 	act = bpf_prog_run_xdp(prog, &xdp);
 
+handle_xdp_verdict:
 	switch (act) {
 	case XDP_PASS:
 		break;
@@ -1458,6 +1488,15 @@ static bool otx2_xdp_rcv_pkt_handler(struct otx2_nic *pfvf,
 					    cqe->sg.seg_size, qidx, XDP_TX);
 	case XDP_REDIRECT:
 		cq->pool_ptrs++;
+		if (xsk_buff) {
+			err = xdp_do_redirect(pfvf->netdev, xsk_buff, prog);
+			if (!err) {
+				*need_xdp_flush = true;
+				return true;
+			}
+			return false;
+		}
+
 		err = xdp_do_redirect(pfvf->netdev, &xdp, prog);
 		if (!err) {
 			*need_xdp_flush = true;
@@ -1473,17 +1512,21 @@ static bool otx2_xdp_rcv_pkt_handler(struct otx2_nic *pfvf,
 		bpf_warn_invalid_xdp_action(pfvf->netdev, prog, act);
 		break;
 	case XDP_ABORTED:
+		if (xsk_buff)
+			xsk_buff_free(xsk_buff);
 		trace_xdp_exception(pfvf->netdev, prog, act);
 		break;
 	case XDP_DROP:
 		cq->pool_ptrs++;
-		if (page->pp) {
+		if (xsk_buff) {
+			xsk_buff_free(xsk_buff);
+		} else if (page->pp) {
 			page_pool_recycle_direct(pool->page_pool, page);
-			return true;
+		} else {
+			otx2_dma_unmap_page(pfvf, iova, pfvf->rbsize,
+					    DMA_FROM_DEVICE);
+			put_page(page);
 		}
-		otx2_dma_unmap_page(pfvf, iova, pfvf->rbsize,
-				    DMA_FROM_DEVICE);
-		put_page(page);
 		return true;
 	}
 	return false;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
index 92e1e84cad75..8f346fbc8221 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 
 #define LBK_CHAN_BASE	0x000
 #define SDP_CHAN_BASE	0x700
@@ -128,7 +129,11 @@ struct otx2_pool {
 	struct qmem *stack;
 	struct qmem *fc_addr;
 	struct page_pool *page_pool;
+	struct xsk_buff_pool *xsk_pool;
+	struct xdp_buff **xdp;
+	u16 xdp_cnt;
 	u16 rbsize;
+	u16 xdp_top;
};
 
 struct otx2_cq_queue {
@@ -145,6 +150,7 @@ struct otx2_cq_queue {
 	void *cqe_base;
 	struct qmem *cqe;
 	struct otx2_pool *rbpool;
+	bool xsk_zc_en;
 	struct xdp_rxq_info xdp_rxq;
 } ____cacheline_aligned_in_smp;
 
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
index e926c6ce96cf..e43ecfb633f8 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
@@ -722,15 +722,25 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (err)
 		goto err_shutdown_tc;
 
+	vf->af_xdp_zc_qidx = bitmap_zalloc(qcount, GFP_KERNEL);
+	if (!vf->af_xdp_zc_qidx) {
+		err = -ENOMEM;
+		goto err_af_xdp_zc;
+	}
+
 #ifdef CONFIG_DCB
 	err = otx2_dcbnl_set_ops(netdev);
 	if (err)
-		goto err_shutdown_tc;
+		goto err_dcbnl_set_ops;
 #endif
 	otx2_qos_init(vf, qos_txqs);
 
 	return 0;
 
+err_dcbnl_set_ops:
+	bitmap_free(vf->af_xdp_zc_qidx);
+err_af_xdp_zc:
+	otx2_unregister_dl(vf);
 err_shutdown_tc:
 	otx2_shutdown_tc(vf);
 err_unreg_netdev:
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.c
new file mode 100644
index 000000000000..69098c6a6fed
--- /dev/null
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.c
@@ -0,0 +1,182 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Marvell RVU Ethernet driver
+ *
+ * Copyright (C) 2024 Marvell.
+ *
+ */
+
+#include
+#include
+#include
+#include
+
+#include "otx2_common.h"
+#include "otx2_xsk.h"
+
+int otx2_xsk_pool_alloc_buf(struct otx2_nic *pfvf, struct otx2_pool *pool,
+			    dma_addr_t *dma, int idx)
+{
+	struct xdp_buff *xdp;
+	int delta;
+
+	xdp = xsk_buff_alloc(pool->xsk_pool);
+	if (!xdp)
+		return -ENOMEM;
+
+	pool->xdp[pool->xdp_top++] = xdp;
+	*dma = OTX2_DATA_ALIGN(xsk_buff_xdp_get_dma(xdp));
+	/* Adjust xdp->data for unaligned addresses */
+	delta = *dma - xsk_buff_xdp_get_dma(xdp);
+	xdp->data += delta;
+
+	return 0;
+}
+
+static int otx2_xsk_ctx_disable(struct otx2_nic *pfvf, u16 qidx, int aura_id)
+{
+	struct nix_cn10k_aq_enq_req *cn10k_rq_aq;
+	struct npa_aq_enq_req *aura_aq;
+	struct npa_aq_enq_req *pool_aq;
+	struct nix_aq_enq_req *rq_aq;
+
+	if (test_bit(CN10K_LMTST, &pfvf->hw.cap_flag)) {
+		cn10k_rq_aq = otx2_mbox_alloc_msg_nix_cn10k_aq_enq(&pfvf->mbox);
+		if (!cn10k_rq_aq)
+			return -ENOMEM;
+		cn10k_rq_aq->qidx = qidx;
+		cn10k_rq_aq->rq.ena = 0;
+		cn10k_rq_aq->rq_mask.ena = 1;
+		cn10k_rq_aq->ctype = NIX_AQ_CTYPE_RQ;
+		cn10k_rq_aq->op = NIX_AQ_INSTOP_WRITE;
+	} else {
+		rq_aq = otx2_mbox_alloc_msg_nix_aq_enq(&pfvf->mbox);
+		if (!rq_aq)
+			return -ENOMEM;
+		rq_aq->qidx = qidx;
+		rq_aq->sq.ena = 0;
+		rq_aq->sq_mask.ena = 1;
+		rq_aq->ctype = NIX_AQ_CTYPE_RQ;
+		rq_aq->op = NIX_AQ_INSTOP_WRITE;
+	}
+
+	aura_aq = otx2_mbox_alloc_msg_npa_aq_enq(&pfvf->mbox);
+	if (!aura_aq) {
+		otx2_mbox_reset(&pfvf->mbox.mbox, 0);
+		return -ENOMEM;
+	}
+
+	aura_aq->aura_id = aura_id;
+	aura_aq->aura.ena = 0;
+	aura_aq->aura_mask.ena = 1;
+	aura_aq->ctype = NPA_AQ_CTYPE_AURA;
+	aura_aq->op = NPA_AQ_INSTOP_WRITE;
+
+	pool_aq = otx2_mbox_alloc_msg_npa_aq_enq(&pfvf->mbox);
+	if (!pool_aq) {
+		otx2_mbox_reset(&pfvf->mbox.mbox, 0);
+		return -ENOMEM;
+	}
+
+	pool_aq->aura_id = aura_id;
+	pool_aq->pool.ena = 0;
+	pool_aq->pool_mask.ena = 1;
+
+	pool_aq->ctype = NPA_AQ_CTYPE_POOL;
+	pool_aq->op = NPA_AQ_INSTOP_WRITE;
+
+	return otx2_sync_mbox_msg(&pfvf->mbox);
+}
+
+static void otx2_clean_up_rq(struct otx2_nic *pfvf, int qidx)
+{
+	struct otx2_qset *qset = &pfvf->qset;
+	struct otx2_cq_queue *cq;
+	struct otx2_pool *pool;
+	u64 iova;
+
+	/* If the DOWN flag is set SQs are already freed */
+	if (pfvf->flags & OTX2_FLAG_INTF_DOWN)
+		return;
+
+	cq = &qset->cq[qidx];
+	if (cq)
+		otx2_cleanup_rx_cqes(pfvf, cq, qidx);
+
+	pool = &pfvf->qset.pool[qidx];
+	iova = otx2_aura_allocptr(pfvf, qidx);
+	while (iova) {
+		iova -= OTX2_HEAD_ROOM;
+		otx2_free_bufs(pfvf, pool, iova, pfvf->rbsize);
+		iova = otx2_aura_allocptr(pfvf, qidx);
+	}
+
+	mutex_lock(&pfvf->mbox.lock);
+	otx2_xsk_ctx_disable(pfvf, qidx, qidx);
+	mutex_unlock(&pfvf->mbox.lock);
+}
+
+int otx2_xsk_pool_enable(struct otx2_nic *pf, struct xsk_buff_pool *pool, u16 qidx)
+{
+	u16 rx_queues = pf->hw.rx_queues;
+	u16 tx_queues = pf->hw.tx_queues;
+	int err;
+
+	if (qidx >= rx_queues || qidx >= tx_queues)
+		return -EINVAL;
+
+	err = xsk_pool_dma_map(pool, pf->dev, DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
+	if (err)
+		return err;
+
+	set_bit(qidx, pf->af_xdp_zc_qidx);
+	otx2_clean_up_rq(pf, qidx);
+	/* Kick start the NAPI context so that receiving will start */
+	return otx2_xsk_wakeup(pf->netdev, qidx, XDP_WAKEUP_RX);
+}
+
+int otx2_xsk_pool_disable(struct otx2_nic *pf, u16 qidx)
+{
+	struct net_device *netdev = pf->netdev;
+	struct xsk_buff_pool *pool;
+
+	pool = xsk_get_pool_from_qid(netdev, qidx);
+	if (!pool)
+		return -EINVAL;
+
+	otx2_clean_up_rq(pf, qidx);
+	clear_bit(qidx, pf->af_xdp_zc_qidx);
+	xsk_pool_dma_unmap(pool, DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
+
+	return 0;
+}
+
+int otx2_xsk_pool_setup(struct otx2_nic *pf, struct xsk_buff_pool *pool, u16 qidx)
+{
+	if (pool)
+		return otx2_xsk_pool_enable(pf, pool, qidx);
+
+	return otx2_xsk_pool_disable(pf, qidx);
+}
+
+int otx2_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags)
+{
+	struct otx2_nic *pf = netdev_priv(dev);
+	struct otx2_cq_poll *cq_poll = NULL;
+	struct otx2_qset *qset = &pf->qset;
+
+	if (pf->flags & OTX2_FLAG_INTF_DOWN)
+		return -ENETDOWN;
+
+	if (queue_id >= pf->hw.rx_queues)
+		return -EINVAL;
+
+	cq_poll = &qset->napi[queue_id];
+	if (!cq_poll)
+		return -EINVAL;
+
+	/* Trigger interrupt */
+	if (!napi_if_scheduled_mark_missed(&cq_poll->napi))
+		otx2_write64(pf, NIX_LF_CINTX_ENA_W1S(cq_poll->cint_idx), BIT_ULL(0));
+
+	return 0;
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.h
new file mode 100644
index 000000000000..022b3433edbb
--- /dev/null
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Marvell RVU PF/VF Netdev Devlink
+ *
+ * Copyright (C) 2024 Marvell.
+ *
+ */
+
+#ifndef OTX2_XSK_H
+#define OTX2_XSK_H
+
+struct otx2_nic;
+struct xsk_buff_pool;
+
+int otx2_xsk_pool_setup(struct otx2_nic *pf, struct xsk_buff_pool *pool, u16 qid);
+int otx2_xsk_pool_enable(struct otx2_nic *pf, struct xsk_buff_pool *pool, u16 qid);
+int otx2_xsk_pool_disable(struct otx2_nic *pf, u16 qid);
+int otx2_xsk_pool_alloc_buf(struct otx2_nic *pfvf, struct otx2_pool *pool,
+			    dma_addr_t *dma, int idx);
+int otx2_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags);
+
+#endif /* OTX2_XSK_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c b/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
index 9d887bfc3108..c5dbae0e513b 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
@@ -82,7 +82,7 @@ static int otx2_qos_sq_aura_pool_init(struct otx2_nic *pfvf, int qidx)
 	}
 
 	for (ptr = 0; ptr < num_sqbs; ptr++) {
-		err = otx2_alloc_rbuf(pfvf, pool, &bufptr);
+		err = otx2_alloc_rbuf(pfvf, pool, &bufptr, pool_id, ptr);
 		if (err)
 			goto sqb_free;
 		pfvf->hw_ops->aura_freeptr(pfvf, pool_id, bufptr);
-- 
2.25.1

From nobody Mon Dec 15 21:44:00 2025
From: Suman Ghosh
CC: Suman Ghosh
Subject: [net-next PATCH v5 4/6] octeontx2-pf: Reconfigure RSS table after enabling AF_XDP zerocopy on rx queue
Date: Thu, 6 Feb 2025 14:20:32 +0530
Message-ID: <20250206085034.1978172-5-sumang@marvell.com>
In-Reply-To: <20250206085034.1978172-1-sumang@marvell.com>
References: <20250206085034.1978172-1-sumang@marvell.com>
X-Mailing-List: linux-kernel@vger.kernel.org
The RSS table needs to be reconfigured once an RX queue is enabled or
disabled for AF_XDP zerocopy support. After enabling UMEM on an RX queue,
that queue must no longer be selected by the RSS queue-selection algorithm;
similarly, the queue should be considered again after UMEM is disabled.

Signed-off-by: Suman Ghosh
---
 drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c  | 4 ++++
 drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c | 6 +++++-
 drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.c     | 4 ++++
 3 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index b31eccb03cc3..ec8fc2813443 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -331,6 +331,10 @@ int otx2_set_rss_table(struct otx2_nic *pfvf, int ctx_id)
 	rss_ctx = rss->rss_ctx[ctx_id];
 	/* Get memory to put this msg */
 	for (idx = 0; idx < rss->rss_size; idx++) {
+		/* Ignore the queue if AF_XDP zero copy is enabled */
+		if (test_bit(rss_ctx->ind_tbl[idx], pfvf->af_xdp_zc_qidx))
+			continue;
+
 		aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
 		if (!aq) {
 			/* The shared memory buffer can be full.
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
index 2d53dc77ef1e..010385b29988 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
@@ -910,8 +910,12 @@ static int otx2_get_rxfh(struct net_device *dev,
 		return -ENOENT;
 
 	if (indir) {
-		for (idx = 0; idx < rss->rss_size; idx++)
+		for (idx = 0; idx < rss->rss_size; idx++) {
+			/* Ignore if the rx queue is AF_XDP zero copy enabled */
+			if (test_bit(rss_ctx->ind_tbl[idx], pfvf->af_xdp_zc_qidx))
+				continue;
 			indir[idx] = rss_ctx->ind_tbl[idx];
+		}
 	}
 	if (rxfh->key)
 		memcpy(rxfh->key, rss->key, sizeof(rss->key));
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.c
index 69098c6a6fed..13dcbbe6112d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.c
@@ -130,6 +130,8 @@ int otx2_xsk_pool_enable(struct otx2_nic *pf, struct xsk_buff_pool *pool, u16 qi
 
 	set_bit(qidx, pf->af_xdp_zc_qidx);
 	otx2_clean_up_rq(pf, qidx);
+	/* Reconfigure RSS table as 'qidx' cannot be part of RSS now */
+	otx2_set_rss_table(pf, DEFAULT_RSS_CONTEXT_GROUP);
 	/* Kick start the NAPI context so that receiving will start */
 	return otx2_xsk_wakeup(pf->netdev, qidx, XDP_WAKEUP_RX);
 }
@@ -146,6 +148,8 @@ int otx2_xsk_pool_disable(struct otx2_nic *pf, u16 qidx)
 	otx2_clean_up_rq(pf, qidx);
 	clear_bit(qidx, pf->af_xdp_zc_qidx);
 	xsk_pool_dma_unmap(pool, DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
+	/* Reconfigure RSS table as 'qidx' needs to be part of RSS again */
+	otx2_set_rss_table(pf, DEFAULT_RSS_CONTEXT_GROUP);
 
 	return 0;
 }
-- 
2.25.1

From nobody Mon Dec 15 21:44:00 2025
From: Suman Ghosh
CC: Suman Ghosh
Subject: [net-next PATCH v5 5/6] octeontx2-pf: Prepare for AF_XDP
Date: Thu, 6 Feb 2025 14:20:33 +0530
Message-ID: <20250206085034.1978172-6-sumang@marvell.com>
In-Reply-To: <20250206085034.1978172-1-sumang@marvell.com>
References: <20250206085034.1978172-1-sumang@marvell.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Implement the APIs necessary for AF_XDP transmit.

Signed-off-by: Hariprasad Kelam
Signed-off-by: Suman Ghosh
---
 .../marvell/octeontx2/nic/otx2_common.h |  1 +
 .../marvell/octeontx2/nic/otx2_txrx.c   | 25 +++++++++++++++++--
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index 60508971b62f..19e9e2e72233 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -1181,4 +1181,5 @@ static inline int mcam_entry_cmp(const void *a, const void *b)
 dma_addr_t otx2_dma_map_skb_frag(struct otx2_nic *pfvf,
 				 struct sk_buff *skb, int seg, int *len);
 void otx2_dma_unmap_skb_frags(struct otx2_nic *pfvf, struct sg_list *sg);
+int otx2_read_free_sqe(struct otx2_nic *pfvf, u16 qidx);
 #endif /* OTX2_COMMON_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
index 44137160bdf6..b012d8794f18 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
@@ -22,6 +22,12 @@
 #include "cn10k.h"
 
 #define CQE_ADDR(CQ, idx) ((CQ)->cqe_base + ((CQ)->cqe_size * (idx)))
+#define READ_FREE_SQE(SQ, free_sqe)					\
+	do {								\
+		typeof(SQ) _SQ = (SQ);					\
+		free_sqe = (((_SQ)->cons_head - (_SQ)->head - 1 + (_SQ)->sqe_cnt) \
+			    & ((_SQ)->sqe_cnt - 1));			\
+	} while (0)
 #define PTP_PORT	0x13F
 /* PTPv2 header Original Timestamp starts at byte offset 34 and
  * contains 6 byte seconds field and 4 byte nano seconds field.
@@ -1163,7 +1169,7 @@ bool otx2_sq_append_skb(void *dev, struct netdev_queue *txq,
 	/* Check if there is enough room between producer
 	 * and consumer index.
 */
-	free_desc = (sq->cons_head - sq->head - 1 + sq->sqe_cnt) & (sq->sqe_cnt - 1);
+	READ_FREE_SQE(sq, free_desc);
 	if (free_desc < sq->sqe_thresh)
 		return false;

@@ -1402,6 +1408,21 @@ static void otx2_xdp_sqe_add_sg(struct otx2_snd_queue *sq,
 	sq->sg[sq->head].skb = (u64)xdpf;
 }

+int otx2_read_free_sqe(struct otx2_nic *pfvf, u16 qidx)
+{
+	struct otx2_snd_queue *sq;
+	int free_sqe;
+
+	sq = &pfvf->qset.sq[qidx];
+	READ_FREE_SQE(sq, free_sqe);
+	if (free_sqe < sq->sqe_thresh) {
+		netdev_warn(pfvf->netdev, "No free sqe for Send queue%d\n", qidx);
+		return 0;
+	}
+
+	return free_sqe - sq->sqe_thresh;
+}
+
 bool otx2_xdp_sq_append_pkt(struct otx2_nic *pfvf, struct xdp_frame *xdpf,
 			    u64 iova, int len, u16 qidx, u16 flags)
 {
@@ -1410,7 +1431,7 @@ bool otx2_xdp_sq_append_pkt(struct otx2_nic *pfvf, struct xdp_frame *xdpf,
 	int offset, free_sqe;

 	sq = &pfvf->qset.sq[qidx];
-	free_sqe = (sq->num_sqbs - *sq->aura_fc_addr) * sq->sqe_per_sqb;
+	READ_FREE_SQE(sq, free_sqe);
 	if (free_sqe < sq->sqe_thresh)
 		return false;

-- 
2.25.1

From nobody Mon Dec 15 21:44:00 2025
From: Suman Ghosh
CC: Suman Ghosh
Subject: [net-next PATCH v5 6/6] octeontx2-pf: AF_XDP zero copy transmit support
Date: Thu, 6 Feb 2025 14:20:34 +0530
Message-ID: <20250206085034.1978172-7-sumang@marvell.com>
In-Reply-To: <20250206085034.1978172-1-sumang@marvell.com>
References: <20250206085034.1978172-1-sumang@marvell.com>
X-Mailing-List: linux-kernel@vger.kernel.org

This patch implements the following changes:
1. Use the XDP queues for AF_XDP transmit, to avoid concurrency with
   normal traffic.
2. Since XDP and AF_XDP frames can land on the same queue, use separate
   flags to handle their DMA buffers.
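[Editor's illustration] The buffer-ownership dispatch described in point 2 can be sketched in plain C. This is a simplified, hypothetical userspace model: the enum values mirror the driver's otx2_xdp_action bits, but `complete_tx()` and its return strings are invented for illustration and are not part of the driver.

```c
#include <assert.h>
#include <string.h>

/* Illustrative copies of the driver's otx2_xdp_action bits. */
enum sketch_xdp_action {
	SKETCH_XDP_TX       = 1u << 0,
	SKETCH_XDP_REDIRECT = 1u << 1,
	SKETCH_AF_XDP_FRAME = 1u << 2,
};

/* Model of the TX-completion dispatch: AF_XDP zero-copy frames are only
 * counted (the xsk pool reclaims them later in one batch), XDP_REDIRECT
 * frames additionally need their DMA mapping removed, and plain XDP_TX
 * frames are simply returned. */
static const char *complete_tx(unsigned int flags, int *xsk_frames)
{
	if (flags & SKETCH_AF_XDP_FRAME) {
		(*xsk_frames)++;	/* batched: no per-frame free here */
		return "count-for-xsk-pool";
	}
	if (flags & SKETCH_XDP_REDIRECT)
		return "dma-unmap-then-return-frame";
	return "return-frame";
}
```

The reason the series introduces dedicated OTX2_* flag bits rather than reusing the XDP verdict values is visible here: an AF_XDP frame that shares a queue with XDP traffic must not be freed with xdp_return_frame(), so the completion handler needs an unambiguous marker per buffer.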
Signed-off-by: Hariprasad Kelam
Signed-off-by: Suman Ghosh
---
 .../marvell/octeontx2/nic/otx2_common.c       |  4 ++
 .../marvell/octeontx2/nic/otx2_common.h       |  6 +++
 .../ethernet/marvell/octeontx2/nic/otx2_pf.c  |  2 +-
 .../marvell/octeontx2/nic/otx2_txrx.c         | 54 ++++++++++++++-----
 .../marvell/octeontx2/nic/otx2_txrx.h         |  2 +
 .../ethernet/marvell/octeontx2/nic/otx2_xsk.c | 43 ++++++++++++++-
 .../ethernet/marvell/octeontx2/nic/otx2_xsk.h |  3 ++
 7 files changed, 97 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index ec8fc2813443..75c45c06cfb1 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -1037,6 +1037,10 @@ int otx2_sq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura)

 	sq->stats.bytes = 0;
 	sq->stats.pkts = 0;
+	/* Attach XSK_BUFF_POOL to XDP queue */
+	if (qidx > pfvf->hw.xdp_queues)
+		otx2_attach_xsk_buff(pfvf, sq, (qidx - pfvf->hw.xdp_queues));
+
 	chan_offset = qidx % pfvf->hw.tx_chan_cnt;
 	err = pfvf->hw_ops->sq_aq_init(pfvf, qidx, chan_offset, sqb_aura);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index 19e9e2e72233..1e88422825be 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -129,6 +129,12 @@ enum otx2_errcodes_re {
 	ERRCODE_IL4_CSUM = 0x22,
 };

+enum otx2_xdp_action {
+	OTX2_XDP_TX	  = BIT(0),
+	OTX2_XDP_REDIRECT = BIT(1),
+	OTX2_AF_XDP_FRAME = BIT(2),
+};
+
 struct otx2_dev_stats {
 	u64 rx_bytes;
 	u64 rx_frames;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index 188ab6b6fb16..47b05a9c3db5 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -2693,7 +2693,7 @@ static int otx2_xdp_xmit_tx(struct otx2_nic *pf, struct xdp_frame *xdpf,
 		return -ENOMEM;

 	err = otx2_xdp_sq_append_pkt(pf, xdpf, dma_addr, xdpf->len,
-				     qidx, XDP_REDIRECT);
+				     qidx, OTX2_XDP_REDIRECT);
 	if (!err) {
 		otx2_dma_unmap_page(pf, dma_addr, xdpf->len, DMA_TO_DEVICE);
 		xdp_return_frame(xdpf);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
index b012d8794f18..ded0d76a8f37 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
@@ -20,6 +20,7 @@
 #include "otx2_txrx.h"
 #include "otx2_ptp.h"
 #include "cn10k.h"
+#include "otx2_xsk.h"

 #define CQE_ADDR(CQ, idx) ((CQ)->cqe_base + ((CQ)->cqe_size * (idx)))
 #define READ_FREE_SQE(SQ, free_sqe) \
@@ -103,19 +104,22 @@ static unsigned int frag_num(unsigned int i)

 static void otx2_xdp_snd_pkt_handler(struct otx2_nic *pfvf,
 				     struct otx2_snd_queue *sq,
-				     struct nix_cqe_tx_s *cqe)
+				     struct nix_cqe_tx_s *cqe,
+				     int *xsk_frames)
 {
 	struct nix_send_comp_s *snd_comp = &cqe->comp;
 	struct sg_list *sg;
-	struct page *page;
-	u64 pa, iova;
+	u64 iova;

 	sg = &sq->sg[snd_comp->sqe_id];

+	if (sg->flags & OTX2_AF_XDP_FRAME) {
+		(*xsk_frames)++;
+		return;
+	}
+
 	iova = sg->dma_addr[0];
-	pa = otx2_iova_to_phys(pfvf->iommu_domain, iova);
-	page = virt_to_page(phys_to_virt(pa));
-	if (sg->flags & XDP_REDIRECT)
+	if (sg->flags & OTX2_XDP_REDIRECT)
 		otx2_dma_unmap_page(pfvf, sg->dma_addr[0], sg->size[0], DMA_TO_DEVICE);
 	xdp_return_frame((struct xdp_frame *)sg->skb);
 	sg->skb = (u64)NULL;
@@ -440,6 +444,18 @@ int otx2_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq)
 	return cnt - cq->pool_ptrs;
 }

+static void otx2_zc_submit_pkts(struct otx2_nic *pfvf, struct xsk_buff_pool *xsk_pool,
+				int *xsk_frames, int qidx, int budget)
+{
+	if (*xsk_frames)
+		xsk_tx_completed(xsk_pool, *xsk_frames);
+
+	if (xsk_uses_need_wakeup(xsk_pool))
+		xsk_set_tx_need_wakeup(xsk_pool);
+
+	otx2_zc_napi_handler(pfvf, xsk_pool, qidx, budget);
+}
+
 static int otx2_tx_napi_handler(struct otx2_nic *pfvf,
 				struct otx2_cq_queue *cq, int budget)
 {
@@ -448,16 +464,22 @@ static int otx2_tx_napi_handler(struct otx2_nic *pfvf,
 	struct nix_cqe_tx_s *cqe;
 	struct net_device *ndev;
 	int processed_cqe = 0;
+	int xsk_frames = 0;
+
+	qidx = cq->cq_idx - pfvf->hw.rx_queues;
+	sq = &pfvf->qset.sq[qidx];

 	if (cq->pend_cqe >= budget)
 		goto process_cqe;

-	if (otx2_nix_cq_op_status(pfvf, cq) || !cq->pend_cqe)
+	if (otx2_nix_cq_op_status(pfvf, cq) || !cq->pend_cqe) {
+		if (sq->xsk_pool)
+			otx2_zc_submit_pkts(pfvf, sq->xsk_pool, &xsk_frames,
+					    qidx, budget);
 		return 0;
+	}

 process_cqe:
-	qidx = cq->cq_idx - pfvf->hw.rx_queues;
-	sq = &pfvf->qset.sq[qidx];

 	while (likely(processed_cqe < budget) && cq->pend_cqe) {
 		cqe = (struct nix_cqe_tx_s *)otx2_get_next_cqe(cq);
@@ -467,10 +489,8 @@ static int otx2_tx_napi_handler(struct otx2_nic *pfvf,
 			break;
 		}

-		qidx = cq->cq_idx - pfvf->hw.rx_queues;
-
 		if (cq->cq_type == CQ_XDP)
-			otx2_xdp_snd_pkt_handler(pfvf, sq, cqe);
+			otx2_xdp_snd_pkt_handler(pfvf, sq, cqe, &xsk_frames);
 		else
 			otx2_snd_pkt_handler(pfvf, cq, &pfvf->qset.sq[qidx],
 					     cqe, budget, &tx_pkts, &tx_bytes);
@@ -511,6 +531,10 @@ static int otx2_tx_napi_handler(struct otx2_nic *pfvf,
 		    netif_carrier_ok(ndev))
 			netif_tx_wake_queue(txq);
 	}
+
+	if (sq->xsk_pool)
+		otx2_zc_submit_pkts(pfvf, sq->xsk_pool, &xsk_frames, qidx, budget);
+
 	return 0;
 }

@@ -1505,8 +1529,10 @@ static bool otx2_xdp_rcv_pkt_handler(struct otx2_nic *pfvf,
 		qidx += pfvf->hw.tx_queues;
 		cq->pool_ptrs++;
 		xdpf = xdp_convert_buff_to_frame(&xdp);
-		return otx2_xdp_sq_append_pkt(pfvf, xdpf, cqe->sg.seg_addr,
-					      cqe->sg.seg_size, qidx, XDP_TX);
+		return otx2_xdp_sq_append_pkt(pfvf, xdpf,
+					      cqe->sg.seg_addr,
+					      cqe->sg.seg_size,
+					      qidx, OTX2_XDP_TX);
 	case XDP_REDIRECT:
 		cq->pool_ptrs++;
 		if (xsk_buff) {
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
index 8f346fbc8221..acf259d72008 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
@@ -106,6 +106,8 @@ struct otx2_snd_queue {
 	/* SQE ring and CPT response queue for Inline IPSEC */
 	struct qmem *sqe_ring;
 	struct qmem *cpt_resp;
+	/* Buffer pool for af_xdp zero-copy */
+	struct xsk_buff_pool *xsk_pool;
 } ____cacheline_aligned_in_smp;

 enum cq_type {
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.c
index 13dcbbe6112d..40a539a122d9 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.c
@@ -140,11 +140,14 @@ int otx2_xsk_pool_disable(struct otx2_nic *pf, u16 qidx)
 {
 	struct net_device *netdev = pf->netdev;
 	struct xsk_buff_pool *pool;
+	struct otx2_snd_queue *sq;

 	pool = xsk_get_pool_from_qid(netdev, qidx);
 	if (!pool)
 		return -EINVAL;

+	sq = &pf->qset.sq[qidx + pf->hw.tx_queues];
+	sq->xsk_pool = NULL;
 	otx2_clean_up_rq(pf, qidx);
 	clear_bit(qidx, pf->af_xdp_zc_qidx);
 	xsk_pool_dma_unmap(pool, DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
@@ -171,7 +174,7 @@ int otx2_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags)
 	if (pf->flags & OTX2_FLAG_INTF_DOWN)
 		return -ENETDOWN;

-	if (queue_id >= pf->hw.rx_queues)
+	if (queue_id >= pf->hw.rx_queues || queue_id >= pf->hw.tx_queues)
 		return -EINVAL;

 	cq_poll = &qset->napi[queue_id];
@@ -179,8 +182,44 @@ int otx2_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags)
 		return -EINVAL;

 	/* Trigger interrupt */
-	if (!napi_if_scheduled_mark_missed(&cq_poll->napi))
+	if (!napi_if_scheduled_mark_missed(&cq_poll->napi)) {
 		otx2_write64(pf, NIX_LF_CINTX_ENA_W1S(cq_poll->cint_idx), BIT_ULL(0));
+		otx2_write64(pf, NIX_LF_CINTX_INT_W1S(cq_poll->cint_idx), BIT_ULL(0));
+	}

 	return 0;
 }
+
+void otx2_attach_xsk_buff(struct otx2_nic *pfvf, struct otx2_snd_queue *sq, int qidx)
+{
+	if (test_bit(qidx, pfvf->af_xdp_zc_qidx))
+		sq->xsk_pool = xsk_get_pool_from_qid(pfvf->netdev, qidx);
+}
+
+void otx2_zc_napi_handler(struct otx2_nic *pfvf, struct xsk_buff_pool *pool,
+			  int queue, int budget)
+{
+	struct xdp_desc *xdp_desc = pool->tx_descs;
+	int err, i, work_done = 0, batch;
+
+	budget = min(budget, otx2_read_free_sqe(pfvf, queue));
+	batch = xsk_tx_peek_release_desc_batch(pool, budget);
+	if (!batch)
+		return;
+
+	for (i = 0; i < batch; i++) {
+		dma_addr_t dma_addr;
+
+		dma_addr = xsk_buff_raw_get_dma(pool, xdp_desc[i].addr);
+		err = otx2_xdp_sq_append_pkt(pfvf, NULL, dma_addr, xdp_desc[i].len,
+					     queue, OTX2_AF_XDP_FRAME);
+		if (!err) {
+			netdev_err(pfvf->netdev, "AF_XDP: Unable to transfer packet err%d\n", err);
+			break;
+		}
+		work_done++;
+	}
+
+	if (work_done)
+		xsk_tx_release(pool);
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.h
index 022b3433edbb..8047fafee8fe 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.h
@@ -17,5 +17,8 @@ int otx2_xsk_pool_disable(struct otx2_nic *pf, u16 qid);
 int otx2_xsk_pool_alloc_buf(struct otx2_nic *pfvf, struct otx2_pool *pool,
 			    dma_addr_t *dma, int idx);
 int otx2_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags);
+void otx2_zc_napi_handler(struct otx2_nic *pfvf, struct xsk_buff_pool *pool,
+			  int queue, int budget);
+void otx2_attach_xsk_buff(struct otx2_nic *pfvf, struct otx2_snd_queue *sq, int qidx);

 #endif /* OTX2_XSK_H */
-- 
2.25.1
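[Editor's illustration] Patch 1/6 of this series consolidates the free-descriptor computation behind the READ_FREE_SQE macro, whose expression is the standard free-count formula for a power-of-two ring with one slot kept in reserve. A minimal userspace sketch of that arithmetic (the function name is hypothetical; only the expression is taken from the patch):

```c
#include <assert.h>

/* Free entries in a ring of power-of-two size cnt, with producer index
 * head and consumer index cons_head. One slot stays unused so that
 * head == cons_head unambiguously means "empty". */
static unsigned int ring_free_entries(unsigned int cons_head,
				      unsigned int head,
				      unsigned int cnt)
{
	return (cons_head - head - 1 + cnt) & (cnt - 1);
}
```

With cnt = 8: an empty ring (head == cons_head) reports 7 free slots, a ring with three entries outstanding reports 4, and a full ring (producer one slot behind the consumer, modulo 8) reports 0. The `& (cnt - 1)` mask is why the descriptor count must be a power of two.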