From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin, Michal Kubiak, Maciej Fijalkowski, Tony Nguyen,
	Przemek Kitszel, Andrew Lunn, "David S. Miller", Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Simon Horman, nxne.cnse.osdt.itp.upstreaming@intel.com,
	bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH iwl-next v2 04/12] idpf: add 4-byte completion descriptor definition
Date: Tue, 24 Jun 2025 18:45:07 +0200
Message-ID: <20250624164515.2663137-5-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250624164515.2663137-1-aleksander.lobakin@intel.com>
References: <20250624164515.2663137-1-aleksander.lobakin@intel.com>

From: Michal Kubiak

In the queue-based scheduling mode, the Tx completion descriptor is
4 bytes, compared to 8 bytes in the flow-based mode. Add a definition
for it and allocate the corresponding amount of memory for the
descriptors during completion queue creation.

This does not include handling 4-byte completions during Tx polling:
for now, the only user of queue-based scheduling will be XDP, which
has its own routines.

Signed-off-by: Michal Kubiak
Signed-off-by: Alexander Lobakin
---
 .../net/ethernet/intel/idpf/idpf_lan_txrx.h |  6 +++-
 drivers/net/ethernet/intel/idpf/idpf_txrx.h | 11 ++++--
 drivers/net/ethernet/intel/idpf/idpf_txrx.c | 36 ++++++++++---------
 3 files changed, 34 insertions(+), 19 deletions(-)
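Not part of this series, but a quick sanity sketch of the new layout: the
split is purely structural, with the 4-byte descriptor embedded as the
leading member of the 8-byte one, so hypothetical build-time checks along
these lines should hold:

/* Hypothetical checks, not included in this patch: the 4-byte descriptor
 * must stay 4 bytes and remain the leading member of the 8-byte one, so
 * both views of the completion ring stay layout-compatible.
 */
static_assert(sizeof(struct idpf_splitq_4b_tx_compl_desc) == 4);
static_assert(sizeof(struct idpf_splitq_tx_compl_desc) == 8);
static_assert(offsetof(struct idpf_splitq_tx_compl_desc, common) == 0);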
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Simon Horman , nxne.cnse.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH iwl-next v2 04/12] idpf: add 4-byte completion descriptor definition Date: Tue, 24 Jun 2025 18:45:07 +0200 Message-ID: <20250624164515.2663137-5-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250624164515.2663137-1-aleksander.lobakin@intel.com> References: <20250624164515.2663137-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Michal Kubiak In the queue-based scheduling mode, Tx completion descriptor is 4 bytes comparing to 8 bytes in flow-based. Add definition for it and allocate the corresponding amount of memory for the descriptors during the completion queue creation. This does not include handling 4-byte completions during Tx polling, as for now, the only user of QB will be XDP, which has its own routines. Signed-off-by: Michal Kubiak Signed-off-by: Alexander Lobakin --- .../net/ethernet/intel/idpf/idpf_lan_txrx.h | 6 +++- drivers/net/ethernet/intel/idpf/idpf_txrx.h | 11 ++++-- drivers/net/ethernet/intel/idpf/idpf_txrx.c | 36 ++++++++++--------- 3 files changed, 34 insertions(+), 19 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf_lan_txrx.h b/drivers/net/= ethernet/intel/idpf/idpf_lan_txrx.h index 7492d1713243..20d5af64e750 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_lan_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_lan_txrx.h @@ -186,13 +186,17 @@ struct idpf_base_tx_desc { __le64 qw1; /* type_cmd_offset_bsz_l2tag1 */ }; /* read used with buffer queues */ =20 -struct idpf_splitq_tx_compl_desc { +struct idpf_splitq_4b_tx_compl_desc { /* qid=3D[10:0] comptype=3D[13:11] rsvd=3D[14] gen=3D[15] */ __le16 qid_comptype_gen; union { __le16 q_head; /* Queue head */ __le16 compl_tag; /* Completion tag */ } q_head_compl_tag; +}; /* writeback used with completion queues */ + +struct idpf_splitq_tx_compl_desc { + struct idpf_splitq_4b_tx_compl_desc common; u8 ts[3]; u8 rsvd; /* Reserved */ }; /* writeback used with completion queues */ diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethe= rnet/intel/idpf/idpf_txrx.h index 36a0f828a6f8..f593d1539ce7 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -755,7 +755,9 @@ libeth_cacheline_set_assert(struct idpf_buf_queue, 64, = 24, 32); =20 /** * struct idpf_compl_queue - software structure representing a completion = queue - * @comp: completion descriptor array + * @comp: 8-byte completion descriptor array + * @comp_4b: 4-byte completion descriptor array + * @desc_ring: virtual descriptor ring address * @txq_grp: See struct idpf_txq_group * @flags: See enum idpf_queue_flags_t * @desc_count: Number of descriptors @@ -775,7 +777,12 @@ libeth_cacheline_set_assert(struct idpf_buf_queue, 64,= 24, 32); */ struct idpf_compl_queue { __cacheline_group_begin_aligned(read_mostly); - struct idpf_splitq_tx_compl_desc *comp; + union { + struct idpf_splitq_tx_compl_desc *comp; + struct idpf_splitq_4b_tx_compl_desc *comp_4b; + + void *desc_ring; + }; struct idpf_txq_group *txq_grp; =20 DECLARE_BITMAP(flags, __IDPF_Q_FLAGS_NBITS); diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c 
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 8128bd33ef45..a605134c7c6d 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -157,8 +157,8 @@ static void idpf_compl_desc_rel(struct idpf_compl_queue *complq)
 		return;
 
 	dma_free_coherent(complq->netdev->dev.parent, complq->size,
-			  complq->comp, complq->dma);
-	complq->comp = NULL;
+			  complq->desc_ring, complq->dma);
+	complq->desc_ring = NULL;
 	complq->next_to_use = 0;
 	complq->next_to_clean = 0;
 }
@@ -285,12 +285,16 @@ static int idpf_tx_desc_alloc(const struct idpf_vport *vport,
 static int idpf_compl_desc_alloc(const struct idpf_vport *vport,
 				 struct idpf_compl_queue *complq)
 {
-	complq->size = array_size(complq->desc_count, sizeof(*complq->comp));
+	u32 desc_size;
 
-	complq->comp = dma_alloc_coherent(complq->netdev->dev.parent,
-					  complq->size, &complq->dma,
-					  GFP_KERNEL);
-	if (!complq->comp)
+	desc_size = idpf_queue_has(FLOW_SCH_EN, complq) ?
+		    sizeof(*complq->comp) : sizeof(*complq->comp_4b);
+	complq->size = array_size(complq->desc_count, desc_size);
+
+	complq->desc_ring = dma_alloc_coherent(complq->netdev->dev.parent,
+					       complq->size, &complq->dma,
+					       GFP_KERNEL);
+	if (!complq->desc_ring)
 		return -ENOMEM;
 
 	complq->next_to_use = 0;
@@ -1991,13 +1995,13 @@ static void idpf_tx_handle_rs_completion(struct idpf_tx_queue *txq,
 	u16 compl_tag;
 
 	if (!idpf_queue_has(FLOW_SCH_EN, txq)) {
-		u16 head = le16_to_cpu(desc->q_head_compl_tag.q_head);
+		u16 head = le16_to_cpu(desc->common.q_head_compl_tag.q_head);
 
 		idpf_tx_splitq_clean(txq, head, budget, cleaned, false);
 		return;
 	}
 
-	compl_tag = le16_to_cpu(desc->q_head_compl_tag.compl_tag);
+	compl_tag = le16_to_cpu(desc->common.q_head_compl_tag.compl_tag);
 
 	/* If we didn't clean anything on the ring, this packet must be
 	 * in the hash table. Go clean it there.
@@ -2031,19 +2035,19 @@ static bool idpf_tx_clean_complq(struct idpf_compl_queue *complq, int budget,
 	do {
 		struct libeth_sq_napi_stats cleaned_stats = { };
 		struct idpf_tx_queue *tx_q;
+		__le16 hw_head;
 		int rel_tx_qid;
-		u16 hw_head;
 		u8 ctype;	/* completion type */
 		u16 gen;
 
 		/* if the descriptor isn't done, no work yet to do */
-		gen = le16_get_bits(tx_desc->qid_comptype_gen,
+		gen = le16_get_bits(tx_desc->common.qid_comptype_gen,
 				    IDPF_TXD_COMPLQ_GEN_M);
 		if (idpf_queue_has(GEN_CHK, complq) != gen)
 			break;
 
 		/* Find necessary info of TX queue to clean buffers */
-		rel_tx_qid = le16_get_bits(tx_desc->qid_comptype_gen,
+		rel_tx_qid = le16_get_bits(tx_desc->common.qid_comptype_gen,
 					   IDPF_TXD_COMPLQ_QID_M);
 		if (rel_tx_qid >= complq->txq_grp->num_txq ||
 		    !complq->txq_grp->txqs[rel_tx_qid]) {
@@ -2053,14 +2057,14 @@ static bool idpf_tx_clean_complq(struct idpf_compl_queue *complq, int budget,
 		tx_q = complq->txq_grp->txqs[rel_tx_qid];
 
 		/* Determine completion type */
-		ctype = le16_get_bits(tx_desc->qid_comptype_gen,
+		ctype = le16_get_bits(tx_desc->common.qid_comptype_gen,
 				      IDPF_TXD_COMPLQ_COMPL_TYPE_M);
 		switch (ctype) {
 		case IDPF_TXD_COMPLT_RE:
-			hw_head = le16_to_cpu(tx_desc->q_head_compl_tag.q_head);
+			hw_head = tx_desc->common.q_head_compl_tag.q_head;
 
-			idpf_tx_splitq_clean(tx_q, hw_head, budget,
-					     &cleaned_stats, true);
+			idpf_tx_splitq_clean(tx_q, le16_to_cpu(hw_head),
+					     budget, &cleaned_stats, true);
 			break;
 		case IDPF_TXD_COMPLT_RS:
 			idpf_tx_handle_rs_completion(tx_q, tx_desc,
-- 
2.49.0