From nobody Wed Apr 1 14:00:21 2026
From: Subbaraya Sundeep
To: 
CC: "Subbaraya Sundeep"
Subject: [net-next PATCH v4 4/4] octeontx2-pf: cn20k: Use unified Halo context
Date: Tue, 31 Mar 2026 12:38:39 +0530
Message-ID: <1774940919-1599-5-git-send-email-sbhatta@marvell.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1774940919-1599-1-git-send-email-sbhatta@marvell.com>
References: <1774940919-1599-1-git-send-email-sbhatta@marvell.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id: 
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Use the unified Halo context present in CN20K hardware for octeontx2
netdevs instead of the separate aura and pool contexts.
Signed-off-by: Subbaraya Sundeep
---
 .../ethernet/marvell/octeontx2/nic/cn20k.c    | 213 +++++++++---------
 .../ethernet/marvell/octeontx2/nic/cn20k.h    |   3 +
 .../marvell/octeontx2/nic/otx2_common.h       |   3 +
 .../ethernet/marvell/octeontx2/nic/otx2_pf.c  |   8 +
 4 files changed, 124 insertions(+), 103 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn20k.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn20k.c
index a5a8f4558717..08033858c59d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn20k.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn20k.c
@@ -242,15 +242,6 @@ int cn20k_register_pfvf_mbox_intr(struct otx2_nic *pf, int numvfs)
 
 #define RQ_BP_LVL_AURA (255 - ((85 * 256) / 100)) /* BP when 85% is full */
 
-static u8 cn20k_aura_bpid_idx(struct otx2_nic *pfvf, int aura_id)
-{
-#ifdef CONFIG_DCB
-	return pfvf->queue_to_pfc_map[aura_id];
-#else
-	return 0;
-#endif
-}
-
 static int cn20k_tc_get_entry_index(struct otx2_flow_config *flow_cfg,
 				    struct otx2_tc_flow *node)
 {
@@ -517,84 +508,7 @@ int cn20k_tc_alloc_entry(struct otx2_nic *nic,
 	return 0;
 }
 
-static int cn20k_aura_aq_init(struct otx2_nic *pfvf, int aura_id,
-			      int pool_id, int numptrs)
-{
-	struct npa_cn20k_aq_enq_req *aq;
-	struct otx2_pool *pool;
-	u8 bpid_idx;
-	int err;
-
-	pool = &pfvf->qset.pool[pool_id];
-
-	/* Allocate memory for HW to update Aura count.
-	 * Alloc one cache line, so that it fits all FC_STYPE modes.
-	 */
-	if (!pool->fc_addr) {
-		err = qmem_alloc(pfvf->dev, &pool->fc_addr, 1, OTX2_ALIGN);
-		if (err)
-			return err;
-	}
-
-	/* Initialize this aura's context via AF */
-	aq = otx2_mbox_alloc_msg_npa_cn20k_aq_enq(&pfvf->mbox);
-	if (!aq) {
-		/* Shared mbox memory buffer is full, flush it and retry */
-		err = otx2_sync_mbox_msg(&pfvf->mbox);
-		if (err)
-			return err;
-		aq = otx2_mbox_alloc_msg_npa_cn20k_aq_enq(&pfvf->mbox);
-		if (!aq)
-			return -ENOMEM;
-	}
-
-	aq->aura_id = aura_id;
-
-	/* Will be filled by AF with correct pool context address */
-	aq->aura.pool_addr = pool_id;
-	aq->aura.pool_caching = 1;
-	aq->aura.shift = ilog2(numptrs) - 8;
-	aq->aura.count = numptrs;
-	aq->aura.limit = numptrs;
-	aq->aura.avg_level = 255;
-	aq->aura.ena = 1;
-	aq->aura.fc_ena = 1;
-	aq->aura.fc_addr = pool->fc_addr->iova;
-	aq->aura.fc_hyst_bits = 0; /* Store count on all updates */
-
-	/* Enable backpressure for RQ aura */
-	if (aura_id < pfvf->hw.rqpool_cnt && !is_otx2_lbkvf(pfvf->pdev)) {
-		aq->aura.bp_ena = 0;
-		/* If NIX1 LF is attached then specify NIX1_RX.
-		 *
-		 * Below NPA_AURA_S[BP_ENA] is set according to the
-		 * NPA_BPINTF_E enumeration given as:
-		 * 0x0 + a*0x1 where 'a' is 0 for NIX0_RX and 1 for NIX1_RX so
-		 * NIX0_RX is 0x0 + 0*0x1 = 0
-		 * NIX1_RX is 0x0 + 1*0x1 = 1
-		 * But in HRM it is given that
-		 * "NPA_AURA_S[BP_ENA](w1[33:32]) - Enable aura backpressure to
-		 * NIX-RX based on [BP] level. One bit per NIX-RX; index
-		 * enumerated by NPA_BPINTF_E."
-		 */
-		if (pfvf->nix_blkaddr == BLKADDR_NIX1)
-			aq->aura.bp_ena = 1;
-
-		bpid_idx = cn20k_aura_bpid_idx(pfvf, aura_id);
-		aq->aura.bpid = pfvf->bpid[bpid_idx];
-
-		/* Set backpressure level for RQ's Aura */
-		aq->aura.bp = RQ_BP_LVL_AURA;
-	}
-
-	/* Fill AQ info */
-	aq->ctype = NPA_AQ_CTYPE_AURA;
-	aq->op = NPA_AQ_INSTOP_INIT;
-
-	return 0;
-}
-
-static int cn20k_pool_aq_init(struct otx2_nic *pfvf, u16 pool_id,
+static int cn20k_halo_aq_init(struct otx2_nic *pfvf, u16 pool_id,
 			      int stack_pages, int numptrs, int buf_size,
 			      int type)
 {
@@ -610,36 +524,57 @@ static int cn20k_pool_aq_init(struct otx2_nic *pfvf, u16 pool_id,
 	if (err)
 		return err;
 
+	/* Allocate memory for HW to update Aura count.
+	 * Alloc one cache line, so that it fits all FC_STYPE modes.
+	 */
+	if (!pool->fc_addr) {
+		err = qmem_alloc(pfvf->dev, &pool->fc_addr, 1, OTX2_ALIGN);
+		if (err) {
+			qmem_free(pfvf->dev, pool->stack);
+			return err;
+		}
+	}
+
 	pool->rbsize = buf_size;
 
-	/* Initialize this pool's context via AF */
+	/* Initialize this aura's context via AF */
 	aq = otx2_mbox_alloc_msg_npa_cn20k_aq_enq(&pfvf->mbox);
 	if (!aq) {
 		/* Shared mbox memory buffer is full, flush it and retry */
 		err = otx2_sync_mbox_msg(&pfvf->mbox);
-		if (err) {
-			qmem_free(pfvf->dev, pool->stack);
-			return err;
-		}
+		if (err)
+			goto free_mem;
 		aq = otx2_mbox_alloc_msg_npa_cn20k_aq_enq(&pfvf->mbox);
 		if (!aq) {
-			qmem_free(pfvf->dev, pool->stack);
-			return -ENOMEM;
+			err = -ENOMEM;
+			goto free_mem;
 		}
 	}
 
 	aq->aura_id = pool_id;
-	aq->pool.stack_base = pool->stack->iova;
-	aq->pool.stack_caching = 1;
-	aq->pool.ena = 1;
-	aq->pool.buf_size = buf_size / 128;
-	aq->pool.stack_max_pages = stack_pages;
-	aq->pool.shift = ilog2(numptrs) - 8;
-	aq->pool.ptr_start = 0;
-	aq->pool.ptr_end = ~0ULL;
+
+	aq->halo.stack_base = pool->stack->iova;
+	aq->halo.stack_caching = 1;
+	aq->halo.ena = 1;
+	aq->halo.buf_size = buf_size / 128;
+	aq->halo.stack_max_pages = stack_pages;
+	aq->halo.shift = ilog2(numptrs) - 8;
+	aq->halo.ptr_start = 0;
+	aq->halo.ptr_end = ~0ULL;
+
+	aq->halo.avg_level = 255;
+	aq->halo.fc_ena = 1;
+	aq->halo.fc_addr = pool->fc_addr->iova;
+	aq->halo.fc_hyst_bits = 0; /* Store count on all updates */
+
+	if (pfvf->npa_dpc_valid) {
+		aq->halo.op_dpc_ena = 1;
+		aq->halo.op_dpc_set = pfvf->npa_dpc;
+	}
+	aq->halo.unified_ctx = 1;
 
 	/* Fill AQ info */
-	aq->ctype = NPA_AQ_CTYPE_POOL;
+	aq->ctype = NPA_AQ_CTYPE_HALO;
 	aq->op = NPA_AQ_INSTOP_INIT;
 
 	if (type != AURA_NIX_RQ) {
@@ -661,6 +596,78 @@ static int cn20k_pool_aq_init(struct otx2_nic *pfvf, u16 pool_id,
 	}
 
 	return 0;
+
+free_mem:
+	qmem_free(pfvf->dev, pool->stack);
+	qmem_free(pfvf->dev, pool->fc_addr);
+	return err;
+}
+
+static int cn20k_aura_aq_init(struct otx2_nic *pfvf, int aura_id,
+			      int pool_id, int numptrs)
+{
+	return 0;
+}
+
+static int cn20k_pool_aq_init(struct otx2_nic *pfvf, u16 pool_id,
+			      int stack_pages, int numptrs, int buf_size,
+			      int type)
+{
+	return cn20k_halo_aq_init(pfvf, pool_id, stack_pages,
+				  numptrs, buf_size, type);
+}
+
+int cn20k_npa_alloc_dpc(struct otx2_nic *nic)
+{
+	struct npa_cn20k_dpc_alloc_req *req;
+	struct npa_cn20k_dpc_alloc_rsp *rsp;
+	int err;
+
+	req = otx2_mbox_alloc_msg_npa_cn20k_dpc_alloc(&nic->mbox);
+	if (!req)
+		return -ENOMEM;
+
+	/* Count successful ALLOC requests only */
+	req->dpc_conf = 1ULL << 4;
+
+	err = otx2_sync_mbox_msg(&nic->mbox);
+	if (err)
+		return err;
+
+	rsp = (struct npa_cn20k_dpc_alloc_rsp *)otx2_mbox_get_rsp(&nic->mbox.mbox,
+								  0, &req->hdr);
+	if (IS_ERR(rsp))
+		return PTR_ERR(rsp);
+
+	nic->npa_dpc = rsp->cntr_id;
+	nic->npa_dpc_valid = true;
+
+	return 0;
+}
+
+int cn20k_npa_free_dpc(struct otx2_nic *nic)
+{
+	struct npa_cn20k_dpc_free_req *req;
+	int err;
+
+	if (!nic->npa_dpc_valid)
+		return 0;
+
+	mutex_lock(&nic->mbox.lock);
+
+	req = otx2_mbox_alloc_msg_npa_cn20k_dpc_free(&nic->mbox);
+	if (!req) {
+		mutex_unlock(&nic->mbox.lock);
+		return -ENOMEM;
+	}
+
+	req->cntr_id = nic->npa_dpc;
+
+	err = otx2_sync_mbox_msg(&nic->mbox);
+
+	mutex_unlock(&nic->mbox.lock);
+
+	return err;
 }
 
 static int cn20k_sq_aq_init(void *dev, u16 qidx, u8 chan_offset, u16 sqb_aura)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn20k.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn20k.h
index b5e527f6d7eb..16a69d84ea79 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn20k.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn20k.h
@@ -28,4 +28,7 @@ int cn20k_tc_alloc_entry(struct otx2_nic *nic,
 			 struct otx2_tc_flow *new_node,
 			 struct npc_install_flow_req *dummy);
 int cn20k_tc_free_mcam_entry(struct otx2_nic *nic, u16 entry);
+int cn20k_npa_alloc_dpc(struct otx2_nic *nic);
+int cn20k_npa_free_dpc(struct otx2_nic *nic);
+
 #endif /* CN20K_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index eecee612b7b2..f997dfc0fedd 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -592,6 +592,9 @@ struct otx2_nic {
 	struct cn10k_ipsec ipsec;
 	/* af_xdp zero-copy */
 	unsigned long *af_xdp_zc_qidx;
+
+	bool npa_dpc_valid;
+	u8 npa_dpc; /* NPA DPC counter id */
 };
 
 static inline bool is_otx2_lbkvf(struct pci_dev *pdev)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index ee623476e5ff..2b5fe67d297c 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -1651,6 +1651,9 @@ int otx2_init_hw_resources(struct otx2_nic *pf)
 	if (!is_otx2_lbkvf(pf->pdev))
 		otx2_nix_config_bp(pf, true);
 
+	if (is_cn20k(pf->pdev))
+		cn20k_npa_alloc_dpc(pf);
+
 	/* Init Auras and pools used by NIX RQ, for free buffer ptrs */
 	err = otx2_rq_aura_pool_init(pf);
 	if (err) {
@@ -1726,6 +1729,8 @@ int otx2_init_hw_resources(struct otx2_nic *pf)
 	otx2_ctx_disable(mbox, NPA_AQ_CTYPE_AURA, true);
 	otx2_aura_pool_free(pf);
 err_free_nix_lf:
+	if (pf->npa_dpc_valid)
+		cn20k_npa_free_dpc(pf);
 	mutex_lock(&mbox->lock);
 	free_req = otx2_mbox_alloc_msg_nix_lf_free(mbox);
 	if (free_req) {
@@ -1790,6 +1795,9 @@ void otx2_free_hw_resources(struct otx2_nic *pf)
 
 	otx2_free_sq_res(pf);
 
+	if (is_cn20k(pf->pdev))
+		cn20k_npa_free_dpc(pf);
+
 	/* Free RQ buffer pointers*/
 	otx2_free_aura_ptr(pf, AURA_NIX_RQ);
 
-- 
2.48.1