From nobody Thu Oct 2 19:28:31 2025 From: Alexander Lobakin To: intel-wired-lan@lists.osuosl.org Cc: Alexander Lobakin , Michal Kubiak , Maciej Fijalkowski , Tony Nguyen , Przemek Kitszel , Andrew Lunn , "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Simon Horman , nxne.cnse.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH iwl-next 1/5] idpf: add virtchnl functions to manage selected queues Date: Thu, 11 Sep 2025 18:22:29 +0200 Message-ID: <20250911162233.1238034-2-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250911162233.1238034-1-aleksander.lobakin@intel.com> References: <20250911162233.1238034-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Michal Kubiak Implement VC functions dedicated to enabling, disabling and configuring not all but only selected queues. Also, refactor the existing implementation to make the code more modular. Introduce new generic functions for sending VC messages consisting of chunks, in order to isolate the sending algorithm and its implementation for specific VC messages. Finally, rewrite the function for mapping queues to q_vectors using the new modular approach to avoid copying the code that implements the VC message sending algorithm. Signed-off-by: Michal Kubiak Co-developed-by: Alexander Lobakin Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/idpf/idpf_txrx.h | 3 + .../net/ethernet/intel/idpf/idpf_virtchnl.h | 32 +- drivers/net/ethernet/intel/idpf/idpf_txrx.c | 1 + .../net/ethernet/intel/idpf/idpf_virtchnl.c | 1160 +++++++++++------ 4 files changed, 767 insertions(+), 429 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethe= rnet/intel/idpf/idpf_txrx.h index 39a9c6bd6055..88dc3db488b1 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -614,6 +614,7 @@ libeth_cacheline_set_assert(struct idpf_rx_queue, * @dma: Physical address of ring * @q_vector: Backreference to associated vector * @buf_pool_size: Total number of idpf_tx_buf + * @rel_q_id: relative virtchnl queue index */ struct idpf_tx_queue { __cacheline_group_begin_aligned(read_mostly); @@ -684,7 +685,9 @@ struct idpf_tx_queue { dma_addr_t dma; =20 struct idpf_q_vector *q_vector; + u32 buf_pool_size; + u32 rel_q_id; __cacheline_group_end_aligned(cold); }; libeth_cacheline_set_assert(struct idpf_tx_queue, 64, diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h b/drivers/net/= ethernet/intel/idpf/idpf_virtchnl.h index d714ff0eaca0..eac3d15daa42 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h @@ -4,6 +4,8 @@ #ifndef _IDPF_VIRTCHNL_H_ #define _IDPF_VIRTCHNL_H_ =20 +#include "virtchnl2.h" + #define IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC (60 * 1000) #define IDPF_VC_XN_IDX_M GENMASK(7, 0) #define IDPF_VC_XN_SALT_M GENMASK(15, 8) @@ -114,6 +116,33 @@ int idpf_recv_mb_msg(struct idpf_adapter *adapter); int idpf_send_mb_msg(struct idpf_adapter *adapter, u32 op, u16 msg_size, u8 *msg, u16 cookie); =20 +struct idpf_queue_ptr { + enum virtchnl2_queue_type type; + union { + struct idpf_rx_queue *rxq; + struct idpf_tx_queue *txq; + struct idpf_buf_queue *bufq; + struct idpf_compl_queue *complq; + }; +}; + +struct idpf_queue_set { + struct idpf_vport *vport; + + u32 num; + struct idpf_queue_ptr qs[] __counted_by(num); +}; + +struct idpf_queue_set *idpf_alloc_queue_set(struct idpf_vport *vport, u32 
= num); + +int idpf_send_enable_queue_set_msg(const struct idpf_queue_set *qs); +int idpf_send_disable_queue_set_msg(const struct idpf_queue_set *qs); +int idpf_send_config_queue_set_msg(const struct idpf_queue_set *qs); + +int idpf_send_disable_queues_msg(struct idpf_vport *vport); +int idpf_send_config_queues_msg(struct idpf_vport *vport); +int idpf_send_enable_queues_msg(struct idpf_vport *vport); + void idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *ma= x_q); u32 idpf_get_vport_id(struct idpf_vport *vport); int idpf_send_create_vport_msg(struct idpf_adapter *adapter, @@ -130,9 +159,6 @@ void idpf_vport_dealloc_max_qs(struct idpf_adapter *ada= pter, int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q, u16 num_complq, u16 num_rx_q, u16 num_rx_bufq); int idpf_send_delete_queues_msg(struct idpf_vport *vport); -int idpf_send_enable_queues_msg(struct idpf_vport *vport); -int idpf_send_disable_queues_msg(struct idpf_vport *vport); -int idpf_send_config_queues_msg(struct idpf_vport *vport); =20 int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport); int idpf_get_vec_ids(struct idpf_adapter *adapter, diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethe= rnet/intel/idpf/idpf_txrx.c index 1a756cc0ccd6..81b6646dd3fc 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -1365,6 +1365,7 @@ static int idpf_txq_group_alloc(struct idpf_vport *vp= ort, u16 num_txq) q->tx_min_pkt_len =3D idpf_get_min_tx_pkt_len(adapter); q->netdev =3D vport->netdev; q->txq_grp =3D tx_qgrp; + q->rel_q_id =3D j; =20 if (!split) { q->clean_budget =3D vport->compln_clean_budget; diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/= ethernet/intel/idpf/idpf_virtchnl.c index 31b5dbfcbc39..e46b88f7ce37 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c @@ -716,30 +716,145 @@ int idpf_recv_mb_msg(struct idpf_adapter *adapter) return err; } =20 +struct idpf_chunked_msg_params { + u32 (*prepare_msg)(const struct idpf_vport *vport, + void *buf, const void *pos, + u32 num); + + const void *chunks; + u32 num_chunks; + + u32 chunk_sz; + u32 config_sz; + + u32 vc_op; +}; + +struct idpf_queue_set *idpf_alloc_queue_set(struct idpf_vport *vport, u32 = num) +{ + struct idpf_queue_set *qp; + + qp =3D kzalloc(struct_size(qp, qs, num), GFP_KERNEL); + if (!qp) + return NULL; + + qp->vport =3D vport; + qp->num =3D num; + + return qp; +} + /** - * idpf_wait_for_marker_event - wait for software marker response + * idpf_send_chunked_msg - send VC message consisting of chunks * @vport: virtual port data structure + * @params: message params * - * Returns 0 success, negative on failure. - **/ -static int idpf_wait_for_marker_event(struct idpf_vport *vport) + * Helper function for preparing a message describing queues to be enabled + * or disabled. + * + * Return: the total size of the prepared message. 
+ */ +static int idpf_send_chunked_msg(struct idpf_vport *vport, + const struct idpf_chunked_msg_params *params) { + struct idpf_vc_xn_params xn_params =3D { + .vc_op =3D params->vc_op, + .timeout_ms =3D IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC, + }; + const void *pos =3D params->chunks; + u32 num_chunks, num_msgs, buf_sz; + void *buf __free(kfree) =3D NULL; + u32 totqs =3D params->num_chunks; + + num_chunks =3D min(IDPF_NUM_CHUNKS_PER_MSG(params->config_sz, + params->chunk_sz), totqs); + num_msgs =3D DIV_ROUND_UP(totqs, num_chunks); + + buf_sz =3D params->config_sz + num_chunks * params->chunk_sz; + buf =3D kzalloc(buf_sz, GFP_KERNEL); + if (!buf) + return -ENOMEM; + + xn_params.send_buf.iov_base =3D buf; + + for (u32 i =3D 0; i < num_msgs; i++) { + ssize_t reply_sz; + + memset(buf, 0, buf_sz); + xn_params.send_buf.iov_len =3D buf_sz; + + if (params->prepare_msg(vport, buf, pos, num_chunks) !=3D buf_sz) + return -EINVAL; + + reply_sz =3D idpf_vc_xn_exec(vport->adapter, &xn_params); + if (reply_sz < 0) + return reply_sz; + + pos +=3D num_chunks * params->chunk_sz; + totqs -=3D num_chunks; + + num_chunks =3D min(num_chunks, totqs); + buf_sz =3D params->config_sz + num_chunks * params->chunk_sz; + } + + return 0; +} + +/** + * idpf_wait_for_marker_event_set - wait for software marker response for + * selected Tx queues + * @qs: set of the Tx queues + * + * Return: 0 success, -errno on failure. + */ +static int idpf_wait_for_marker_event_set(const struct idpf_queue_set *qs) +{ + struct idpf_tx_queue *txq; bool markers_rcvd =3D true; =20 - for (u32 i =3D 0; i < vport->num_txq; i++) { - struct idpf_tx_queue *txq =3D vport->txqs[i]; + for (u32 i =3D 0; i < qs->num; i++) { + switch (qs->qs[i].type) { + case VIRTCHNL2_QUEUE_TYPE_TX: + txq =3D qs->qs[i].txq; =20 - idpf_queue_set(SW_MARKER, txq); - idpf_wait_for_sw_marker_completion(txq); - markers_rcvd &=3D !idpf_queue_has(SW_MARKER, txq); + idpf_queue_set(SW_MARKER, txq); + idpf_wait_for_sw_marker_completion(txq); + markers_rcvd &=3D !idpf_queue_has(SW_MARKER, txq); + break; + default: + break; + } } =20 - if (markers_rcvd) - return 0; + if (!markers_rcvd) { + netdev_warn(qs->vport->netdev, + "Failed to receive marker packets\n"); + return -ETIMEDOUT; + } =20 - dev_warn(&vport->adapter->pdev->dev, "Failed to receive marker packets\n"= ); + return 0; +} =20 - return -ETIMEDOUT; +/** + * idpf_wait_for_marker_event - wait for software marker response + * @vport: virtual port data structure + * + * Return: 0 success, negative on failure. 
+ **/ +static int idpf_wait_for_marker_event(struct idpf_vport *vport) +{ + struct idpf_queue_set *qs __free(kfree) =3D NULL; + + qs =3D idpf_alloc_queue_set(vport, vport->num_txq); + if (!qs) + return -ENOMEM; + + for (u32 i =3D 0; i < qs->num; i++) { + qs->qs[i].type =3D VIRTCHNL2_QUEUE_TYPE_TX; + qs->qs[i].txq =3D vport->txqs[i]; + } + + return idpf_wait_for_marker_event_set(qs); } =20 /** @@ -1571,234 +1686,361 @@ int idpf_send_disable_vport_msg(struct idpf_vport= *vport) } =20 /** - * idpf_send_config_tx_queues_msg - Send virtchnl config tx queues message + * idpf_fill_txq_config_chunk - fill chunk describing the Tx queue + * @vport: virtual port data structure + * @q: Tx queue to be inserted into VC chunk + * @qi: pointer to the buffer containing the VC chunk + */ +static void idpf_fill_txq_config_chunk(const struct idpf_vport *vport, + const struct idpf_tx_queue *q, + struct virtchnl2_txq_info *qi) +{ + u32 val; + + qi->queue_id =3D cpu_to_le32(q->q_id); + qi->model =3D cpu_to_le16(vport->txq_model); + qi->type =3D cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX); + qi->ring_len =3D cpu_to_le16(q->desc_count); + qi->dma_ring_addr =3D cpu_to_le64(q->dma); + qi->relative_queue_id =3D cpu_to_le16(q->rel_q_id); + + if (!idpf_is_queue_model_split(vport->txq_model)) { + qi->sched_mode =3D cpu_to_le16(VIRTCHNL2_TXQ_SCHED_MODE_QUEUE); + return; + } + + if (idpf_queue_has(XDP, q)) + val =3D q->complq->q_id; + else + val =3D q->txq_grp->complq->q_id; + + qi->tx_compl_queue_id =3D cpu_to_le16(val); + + if (idpf_queue_has(FLOW_SCH_EN, q)) + val =3D VIRTCHNL2_TXQ_SCHED_MODE_FLOW; + else + val =3D VIRTCHNL2_TXQ_SCHED_MODE_QUEUE; + + qi->sched_mode =3D cpu_to_le16(val); +} + +/** + * idpf_fill_complq_config_chunk - fill chunk describing the completion qu= eue * @vport: virtual port data structure + * @q: completion queue to be inserted into VC chunk + * @qi: pointer to the buffer containing the VC chunk + */ +static void idpf_fill_complq_config_chunk(const struct idpf_vport *vport, + const struct idpf_compl_queue *q, + struct virtchnl2_txq_info *qi) +{ + u32 val; + + qi->queue_id =3D cpu_to_le32(q->q_id); + qi->model =3D cpu_to_le16(vport->txq_model); + qi->type =3D cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION); + qi->ring_len =3D cpu_to_le16(q->desc_count); + qi->dma_ring_addr =3D cpu_to_le64(q->dma); + + if (idpf_queue_has(FLOW_SCH_EN, q)) + val =3D VIRTCHNL2_TXQ_SCHED_MODE_FLOW; + else + val =3D VIRTCHNL2_TXQ_SCHED_MODE_QUEUE; + + qi->sched_mode =3D cpu_to_le16(val); +} + +/** + * idpf_prepare_cfg_txqs_msg - prepare message to configure selected Tx qu= eues + * @vport: virtual port data structure + * @buf: buffer containing the message + * @pos: pointer to the first chunk describing the tx queue + * @num_chunks: number of chunks in the message * - * Send config tx queues virtchnl message. Returns 0 on success, negative = on - * failure. + * Helper function for preparing the message describing configuration of + * Tx queues. + * + * Return: the total size of the prepared message. 
*/ -static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport) +static u32 idpf_prepare_cfg_txqs_msg(const struct idpf_vport *vport, + void *buf, const void *pos, + u32 num_chunks) +{ + struct virtchnl2_config_tx_queues *ctq =3D buf; + + ctq->vport_id =3D cpu_to_le32(vport->vport_id); + ctq->num_qinfo =3D cpu_to_le16(num_chunks); + memcpy(ctq->qinfo, pos, num_chunks * sizeof(*ctq->qinfo)); + + return struct_size(ctq, qinfo, num_chunks); +} + +/** + * idpf_send_config_tx_queue_set_msg - send virtchnl config Tx queues + * message for selected queues + * @qs: set of the Tx queues to configure + * + * Send config queues virtchnl message for queues contained in the @qs arr= ay. + * The @qs array can contain Tx queues (or completion queues) only. + * + * Return: 0 on success, -errno on failure. + */ +static int idpf_send_config_tx_queue_set_msg(const struct idpf_queue_set *= qs) { - struct virtchnl2_config_tx_queues *ctq __free(kfree) =3D NULL; struct virtchnl2_txq_info *qi __free(kfree) =3D NULL; - struct idpf_vc_xn_params xn_params =3D {}; - u32 config_sz, chunk_sz, buf_sz; - int totqs, num_msgs, num_chunks; - ssize_t reply_sz; - int i, k =3D 0; + struct idpf_chunked_msg_params params =3D { + .vc_op =3D VIRTCHNL2_OP_CONFIG_TX_QUEUES, + .prepare_msg =3D idpf_prepare_cfg_txqs_msg, + .config_sz =3D sizeof(struct virtchnl2_config_tx_queues), + .chunk_sz =3D sizeof(*qi), + }; =20 - totqs =3D vport->num_txq + vport->num_complq; - qi =3D kcalloc(totqs, sizeof(struct virtchnl2_txq_info), GFP_KERNEL); + qi =3D kcalloc(qs->num, sizeof(*qi), GFP_KERNEL); if (!qi) return -ENOMEM; =20 - /* Populate the queue info buffer with all queue context info */ - for (i =3D 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *tx_qgrp =3D &vport->txq_grps[i]; - int j, sched_mode; - - for (j =3D 0; j < tx_qgrp->num_txq; j++, k++) { - qi[k].queue_id =3D - cpu_to_le32(tx_qgrp->txqs[j]->q_id); - qi[k].model =3D - cpu_to_le16(vport->txq_model); - qi[k].type =3D - cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX); - qi[k].ring_len =3D - cpu_to_le16(tx_qgrp->txqs[j]->desc_count); - qi[k].dma_ring_addr =3D - cpu_to_le64(tx_qgrp->txqs[j]->dma); - if (idpf_is_queue_model_split(vport->txq_model)) { - struct idpf_tx_queue *q =3D tx_qgrp->txqs[j]; - - qi[k].tx_compl_queue_id =3D - cpu_to_le16(tx_qgrp->complq->q_id); - qi[k].relative_queue_id =3D cpu_to_le16(j); - - if (idpf_queue_has(FLOW_SCH_EN, q)) - qi[k].sched_mode =3D - cpu_to_le16(VIRTCHNL2_TXQ_SCHED_MODE_FLOW); - else - qi[k].sched_mode =3D - cpu_to_le16(VIRTCHNL2_TXQ_SCHED_MODE_QUEUE); - } else { - qi[k].sched_mode =3D - cpu_to_le16(VIRTCHNL2_TXQ_SCHED_MODE_QUEUE); - } - } + params.chunks =3D qi; =20 - if (!idpf_is_queue_model_split(vport->txq_model)) - continue; + for (u32 i =3D 0; i < qs->num; i++) { + if (qs->qs[i].type =3D=3D VIRTCHNL2_QUEUE_TYPE_TX) + idpf_fill_txq_config_chunk(qs->vport, qs->qs[i].txq, + &qi[params.num_chunks++]); + else if (qs->qs[i].type =3D=3D VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION) + idpf_fill_complq_config_chunk(qs->vport, + qs->qs[i].complq, + &qi[params.num_chunks++]); + } =20 - qi[k].queue_id =3D cpu_to_le32(tx_qgrp->complq->q_id); - qi[k].model =3D cpu_to_le16(vport->txq_model); - qi[k].type =3D cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION); - qi[k].ring_len =3D cpu_to_le16(tx_qgrp->complq->desc_count); - qi[k].dma_ring_addr =3D cpu_to_le64(tx_qgrp->complq->dma); + return idpf_send_chunked_msg(qs->vport, ¶ms); +} =20 - if (idpf_queue_has(FLOW_SCH_EN, tx_qgrp->complq)) - sched_mode =3D VIRTCHNL2_TXQ_SCHED_MODE_FLOW; - else - sched_mode =3D 
VIRTCHNL2_TXQ_SCHED_MODE_QUEUE; - qi[k].sched_mode =3D cpu_to_le16(sched_mode); +/** + * idpf_send_config_tx_queues_msg - send virtchnl config Tx queues message + * @vport: virtual port data structure + * + * Return: 0 on success, -errno on failure. + */ +static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport) +{ + struct idpf_queue_set *qs __free(kfree) =3D NULL; + u32 totqs =3D vport->num_txq + vport->num_complq; + u32 k =3D 0; + + qs =3D idpf_alloc_queue_set(vport, totqs); + if (!qs) + return -ENOMEM; + + /* Populate the queue info buffer with all queue context info */ + for (u32 i =3D 0; i < vport->num_txq_grp; i++) { + const struct idpf_txq_group *tx_qgrp =3D &vport->txq_grps[i]; + + for (u32 j =3D 0; j < tx_qgrp->num_txq; j++) { + qs->qs[k].type =3D VIRTCHNL2_QUEUE_TYPE_TX; + qs->qs[k++].txq =3D tx_qgrp->txqs[j]; + } =20 - k++; + if (idpf_is_queue_model_split(vport->txq_model)) { + qs->qs[k].type =3D VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION; + qs->qs[k++].complq =3D tx_qgrp->complq; + } } =20 /* Make sure accounting agrees */ if (k !=3D totqs) return -EINVAL; =20 - /* Chunk up the queue contexts into multiple messages to avoid - * sending a control queue message buffer that is too large - */ - config_sz =3D sizeof(struct virtchnl2_config_tx_queues); - chunk_sz =3D sizeof(struct virtchnl2_txq_info); + return idpf_send_config_tx_queue_set_msg(qs); +} =20 - num_chunks =3D min_t(u32, IDPF_NUM_CHUNKS_PER_MSG(config_sz, chunk_sz), - totqs); - num_msgs =3D DIV_ROUND_UP(totqs, num_chunks); +/** + * idpf_fill_rxq_config_chunk - fill chunk describing then Rx queue + * @vport: virtual port data structure + * @q: Rx queue to be inserted into VC chunk + * @qi: pointer to the buffer containing the VC chunk + */ +static void idpf_fill_rxq_config_chunk(const struct idpf_vport *vport, + struct idpf_rx_queue *q, + struct virtchnl2_rxq_info *qi) +{ + const struct idpf_bufq_set *sets; + + qi->queue_id =3D cpu_to_le32(q->q_id); + qi->model =3D cpu_to_le16(vport->rxq_model); + qi->type =3D cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX); + qi->ring_len =3D cpu_to_le16(q->desc_count); + qi->dma_ring_addr =3D cpu_to_le64(q->dma); + qi->max_pkt_size =3D cpu_to_le32(q->rx_max_pkt_size); + qi->rx_buffer_low_watermark =3D cpu_to_le16(q->rx_buffer_low_watermark); + qi->qflags =3D cpu_to_le16(VIRTCHNL2_RX_DESC_SIZE_32BYTE); + if (idpf_is_feature_ena(vport, NETIF_F_GRO_HW)) + qi->qflags |=3D cpu_to_le16(VIRTCHNL2_RXQ_RSC); + + if (!idpf_is_queue_model_split(vport->rxq_model)) { + qi->data_buffer_size =3D cpu_to_le32(q->rx_buf_size); + qi->desc_ids =3D cpu_to_le64(q->rxdids); =20 - buf_sz =3D struct_size(ctq, qinfo, num_chunks); - ctq =3D kzalloc(buf_sz, GFP_KERNEL); - if (!ctq) - return -ENOMEM; + return; + } =20 - xn_params.vc_op =3D VIRTCHNL2_OP_CONFIG_TX_QUEUES; - xn_params.timeout_ms =3D IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC; + sets =3D q->bufq_sets; =20 - for (i =3D 0, k =3D 0; i < num_msgs; i++) { - memset(ctq, 0, buf_sz); - ctq->vport_id =3D cpu_to_le32(vport->vport_id); - ctq->num_qinfo =3D cpu_to_le16(num_chunks); - memcpy(ctq->qinfo, &qi[k], chunk_sz * num_chunks); + /* + * In splitq mode, RxQ buffer size should be set to that of the first + * buffer queue associated with this RxQ. 
+ */ + q->rx_buf_size =3D sets[0].bufq.rx_buf_size; + qi->data_buffer_size =3D cpu_to_le32(q->rx_buf_size); =20 - xn_params.send_buf.iov_base =3D ctq; - xn_params.send_buf.iov_len =3D buf_sz; - reply_sz =3D idpf_vc_xn_exec(vport->adapter, &xn_params); - if (reply_sz < 0) - return reply_sz; + qi->rx_bufq1_id =3D cpu_to_le16(sets[0].bufq.q_id); + if (vport->num_bufqs_per_qgrp > IDPF_SINGLE_BUFQ_PER_RXQ_GRP) { + qi->bufq2_ena =3D IDPF_BUFQ2_ENA; + qi->rx_bufq2_id =3D cpu_to_le16(sets[1].bufq.q_id); + } =20 - k +=3D num_chunks; - totqs -=3D num_chunks; - num_chunks =3D min(num_chunks, totqs); - /* Recalculate buffer size */ - buf_sz =3D struct_size(ctq, qinfo, num_chunks); + q->rx_hbuf_size =3D sets[0].bufq.rx_hbuf_size; + + if (idpf_queue_has(HSPLIT_EN, q)) { + qi->qflags |=3D cpu_to_le16(VIRTCHNL2_RXQ_HDR_SPLIT); + qi->hdr_buffer_size =3D cpu_to_le16(q->rx_hbuf_size); } =20 - return 0; + qi->desc_ids =3D cpu_to_le64(VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M); } =20 /** - * idpf_send_config_rx_queues_msg - Send virtchnl config rx queues message + * idpf_fill_bufq_config_chunk - fill chunk describing the buffer queue * @vport: virtual port data structure + * @q: buffer queue to be inserted into VC chunk + * @qi: pointer to the buffer containing the VC chunk + */ +static void idpf_fill_bufq_config_chunk(const struct idpf_vport *vport, + const struct idpf_buf_queue *q, + struct virtchnl2_rxq_info *qi) +{ + qi->queue_id =3D cpu_to_le32(q->q_id); + qi->model =3D cpu_to_le16(vport->rxq_model); + qi->type =3D cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX_BUFFER); + qi->ring_len =3D cpu_to_le16(q->desc_count); + qi->dma_ring_addr =3D cpu_to_le64(q->dma); + qi->data_buffer_size =3D cpu_to_le32(q->rx_buf_size); + qi->rx_buffer_low_watermark =3D cpu_to_le16(q->rx_buffer_low_watermark); + qi->desc_ids =3D cpu_to_le64(VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M); + qi->buffer_notif_stride =3D IDPF_RX_BUF_STRIDE; + if (idpf_is_feature_ena(vport, NETIF_F_GRO_HW)) + qi->qflags =3D cpu_to_le16(VIRTCHNL2_RXQ_RSC); + + if (idpf_queue_has(HSPLIT_EN, q)) { + qi->qflags |=3D cpu_to_le16(VIRTCHNL2_RXQ_HDR_SPLIT); + qi->hdr_buffer_size =3D cpu_to_le16(q->rx_hbuf_size); + } +} + +/** + * idpf_prepare_cfg_rxqs_msg - prepare message to configure selected Rx qu= eues + * @vport: virtual port data structure + * @buf: buffer containing the message + * @pos: pointer to the first chunk describing the rx queue + * @num_chunks: number of chunks in the message * - * Send config rx queues virtchnl message. Returns 0 on success, negative= on - * failure. + * Helper function for preparing the message describing configuration of + * Rx queues. + * + * Return: the total size of the prepared message. */ -static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport) +static u32 idpf_prepare_cfg_rxqs_msg(const struct idpf_vport *vport, + void *buf, const void *pos, + u32 num_chunks) +{ + struct virtchnl2_config_rx_queues *crq =3D buf; + + crq->vport_id =3D cpu_to_le32(vport->vport_id); + crq->num_qinfo =3D cpu_to_le16(num_chunks); + memcpy(crq->qinfo, pos, num_chunks * sizeof(*crq->qinfo)); + + return struct_size(crq, qinfo, num_chunks); +} + +/** + * idpf_send_config_rx_queue_set_msg - send virtchnl config Rx queues mess= age + * for selected queues. + * @qs: set of the Rx queues to configure + * + * Send config queues virtchnl message for queues contained in the @qs arr= ay. + * The @qs array can contain Rx queues (or buffer queues) only. + * + * Return: 0 on success, -errno on failure. 
+ */ +static int idpf_send_config_rx_queue_set_msg(const struct idpf_queue_set *= qs) { - struct virtchnl2_config_rx_queues *crq __free(kfree) =3D NULL; struct virtchnl2_rxq_info *qi __free(kfree) =3D NULL; - struct idpf_vc_xn_params xn_params =3D {}; - u32 config_sz, chunk_sz, buf_sz; - int totqs, num_msgs, num_chunks; - ssize_t reply_sz; - int i, k =3D 0; + struct idpf_chunked_msg_params params =3D { + .vc_op =3D VIRTCHNL2_OP_CONFIG_RX_QUEUES, + .prepare_msg =3D idpf_prepare_cfg_rxqs_msg, + .config_sz =3D sizeof(struct virtchnl2_config_rx_queues), + .chunk_sz =3D sizeof(*qi), + }; =20 - totqs =3D vport->num_rxq + vport->num_bufq; - qi =3D kcalloc(totqs, sizeof(struct virtchnl2_rxq_info), GFP_KERNEL); + qi =3D kcalloc(qs->num, sizeof(*qi), GFP_KERNEL); if (!qi) return -ENOMEM; =20 - /* Populate the queue info buffer with all queue context info */ - for (i =3D 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rx_qgrp =3D &vport->rxq_grps[i]; - u16 num_rxq; - int j; - - if (!idpf_is_queue_model_split(vport->rxq_model)) - goto setup_rxqs; - - for (j =3D 0; j < vport->num_bufqs_per_qgrp; j++, k++) { - struct idpf_buf_queue *bufq =3D - &rx_qgrp->splitq.bufq_sets[j].bufq; - - qi[k].queue_id =3D cpu_to_le32(bufq->q_id); - qi[k].model =3D cpu_to_le16(vport->rxq_model); - qi[k].type =3D - cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX_BUFFER); - qi[k].desc_ids =3D cpu_to_le64(VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M); - qi[k].ring_len =3D cpu_to_le16(bufq->desc_count); - qi[k].dma_ring_addr =3D cpu_to_le64(bufq->dma); - qi[k].data_buffer_size =3D cpu_to_le32(bufq->rx_buf_size); - qi[k].buffer_notif_stride =3D IDPF_RX_BUF_STRIDE; - qi[k].rx_buffer_low_watermark =3D - cpu_to_le16(bufq->rx_buffer_low_watermark); - if (idpf_is_feature_ena(vport, NETIF_F_GRO_HW)) - qi[k].qflags |=3D cpu_to_le16(VIRTCHNL2_RXQ_RSC); - } + params.chunks =3D qi; =20 -setup_rxqs: - if (idpf_is_queue_model_split(vport->rxq_model)) - num_rxq =3D rx_qgrp->splitq.num_rxq_sets; - else - num_rxq =3D rx_qgrp->singleq.num_rxq; + for (u32 i =3D 0; i < qs->num; i++) { + if (qs->qs[i].type =3D=3D VIRTCHNL2_QUEUE_TYPE_RX) + idpf_fill_rxq_config_chunk(qs->vport, qs->qs[i].rxq, + &qi[params.num_chunks++]); + else if (qs->qs[i].type =3D=3D VIRTCHNL2_QUEUE_TYPE_RX_BUFFER) + idpf_fill_bufq_config_chunk(qs->vport, qs->qs[i].bufq, + &qi[params.num_chunks++]); + } =20 - for (j =3D 0; j < num_rxq; j++, k++) { - const struct idpf_bufq_set *sets; - struct idpf_rx_queue *rxq; - u32 rxdids; + return idpf_send_chunked_msg(qs->vport, ¶ms); +} =20 - if (!idpf_is_queue_model_split(vport->rxq_model)) { - rxq =3D rx_qgrp->singleq.rxqs[j]; - rxdids =3D rxq->rxdids; +/** + * idpf_send_config_rx_queues_msg - send virtchnl config Rx queues message + * @vport: virtual port data structure + * + * Return: 0 on success, -errno on failure. 
+ */ +static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport) +{ + bool splitq =3D idpf_is_queue_model_split(vport->rxq_model); + struct idpf_queue_set *qs __free(kfree) =3D NULL; + u32 totqs =3D vport->num_rxq + vport->num_bufq; + u32 k =3D 0; =20 - goto common_qi_fields; - } + qs =3D idpf_alloc_queue_set(vport, totqs); + if (!qs) + return -ENOMEM; =20 - rxq =3D &rx_qgrp->splitq.rxq_sets[j]->rxq; - sets =3D rxq->bufq_sets; + /* Populate the queue info buffer with all queue context info */ + for (u32 i =3D 0; i < vport->num_rxq_grp; i++) { + const struct idpf_rxq_group *rx_qgrp =3D &vport->rxq_grps[i]; + u32 num_rxq; =20 - /* In splitq mode, RXQ buffer size should be - * set to that of the first buffer queue - * associated with this RXQ. - */ - rxq->rx_buf_size =3D sets[0].bufq.rx_buf_size; + if (!splitq) { + num_rxq =3D rx_qgrp->singleq.num_rxq; + goto rxq; + } =20 - qi[k].rx_bufq1_id =3D cpu_to_le16(sets[0].bufq.q_id); - if (vport->num_bufqs_per_qgrp > IDPF_SINGLE_BUFQ_PER_RXQ_GRP) { - qi[k].bufq2_ena =3D IDPF_BUFQ2_ENA; - qi[k].rx_bufq2_id =3D - cpu_to_le16(sets[1].bufq.q_id); - } - qi[k].rx_buffer_low_watermark =3D - cpu_to_le16(rxq->rx_buffer_low_watermark); - if (idpf_is_feature_ena(vport, NETIF_F_GRO_HW)) - qi[k].qflags |=3D cpu_to_le16(VIRTCHNL2_RXQ_RSC); - - rxq->rx_hbuf_size =3D sets[0].bufq.rx_hbuf_size; - - if (idpf_queue_has(HSPLIT_EN, rxq)) { - qi[k].qflags |=3D - cpu_to_le16(VIRTCHNL2_RXQ_HDR_SPLIT); - qi[k].hdr_buffer_size =3D - cpu_to_le16(rxq->rx_hbuf_size); - } + for (u32 j =3D 0; j < vport->num_bufqs_per_qgrp; j++) { + qs->qs[k].type =3D VIRTCHNL2_QUEUE_TYPE_RX_BUFFER; + qs->qs[k++].bufq =3D &rx_qgrp->splitq.bufq_sets[j].bufq; + } =20 - rxdids =3D VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M; - -common_qi_fields: - qi[k].queue_id =3D cpu_to_le32(rxq->q_id); - qi[k].model =3D cpu_to_le16(vport->rxq_model); - qi[k].type =3D cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX); - qi[k].ring_len =3D cpu_to_le16(rxq->desc_count); - qi[k].dma_ring_addr =3D cpu_to_le64(rxq->dma); - qi[k].max_pkt_size =3D cpu_to_le32(rxq->rx_max_pkt_size); - qi[k].data_buffer_size =3D cpu_to_le32(rxq->rx_buf_size); - qi[k].qflags |=3D - cpu_to_le16(VIRTCHNL2_RX_DESC_SIZE_32BYTE); - qi[k].desc_ids =3D cpu_to_le64(rxdids); + num_rxq =3D rx_qgrp->splitq.num_rxq_sets; + +rxq: + for (u32 j =3D 0; j < num_rxq; j++) { + qs->qs[k].type =3D VIRTCHNL2_QUEUE_TYPE_RX; + + if (splitq) + qs->qs[k++].rxq =3D + &rx_qgrp->splitq.rxq_sets[j]->rxq; + else + qs->qs[k++].rxq =3D rx_qgrp->singleq.rxqs[j]; } } =20 @@ -1806,329 +2048,395 @@ static int idpf_send_config_rx_queues_msg(struct = idpf_vport *vport) if (k !=3D totqs) return -EINVAL; =20 - /* Chunk up the queue contexts into multiple messages to avoid - * sending a control queue message buffer that is too large - */ - config_sz =3D sizeof(struct virtchnl2_config_rx_queues); - chunk_sz =3D sizeof(struct virtchnl2_rxq_info); + return idpf_send_config_rx_queue_set_msg(qs); +} =20 - num_chunks =3D min_t(u32, IDPF_NUM_CHUNKS_PER_MSG(config_sz, chunk_sz), - totqs); - num_msgs =3D DIV_ROUND_UP(totqs, num_chunks); +/** + * idpf_prepare_ena_dis_qs_msg - prepare message to enable/disable selected + * queues + * @vport: virtual port data structure + * @buf: buffer containing the message + * @pos: pointer to the first chunk describing the queue + * @num_chunks: number of chunks in the message + * + * Helper function for preparing the message describing queues to be enabl= ed + * or disabled. + * + * Return: the total size of the prepared message. 
+ */ +static u32 idpf_prepare_ena_dis_qs_msg(const struct idpf_vport *vport, + void *buf, const void *pos, + u32 num_chunks) +{ + struct virtchnl2_del_ena_dis_queues *eq =3D buf; + + eq->vport_id =3D cpu_to_le32(vport->vport_id); + eq->chunks.num_chunks =3D cpu_to_le16(num_chunks); + memcpy(eq->chunks.chunks, pos, + num_chunks * sizeof(*eq->chunks.chunks)); + + return struct_size(eq, chunks.chunks, num_chunks); +} =20 - buf_sz =3D struct_size(crq, qinfo, num_chunks); - crq =3D kzalloc(buf_sz, GFP_KERNEL); - if (!crq) +/** + * idpf_send_ena_dis_queue_set_msg - send virtchnl enable or disable queues + * message for selected queues + * @qs: set of the queues to enable or disable + * @en: whether to enable or disable queues + * + * Send enable or disable queues virtchnl message for queues contained + * in the @qs array. + * The @qs array can contain pointers to both Rx and Tx queues. + * + * Return: 0 on success, -errno on failure. + */ +static int idpf_send_ena_dis_queue_set_msg(const struct idpf_queue_set *qs, + bool en) +{ + struct virtchnl2_queue_chunk *qc __free(kfree) =3D NULL; + struct idpf_chunked_msg_params params =3D { + .vc_op =3D en ? VIRTCHNL2_OP_ENABLE_QUEUES : + VIRTCHNL2_OP_DISABLE_QUEUES, + .prepare_msg =3D idpf_prepare_ena_dis_qs_msg, + .config_sz =3D sizeof(struct virtchnl2_del_ena_dis_queues), + .chunk_sz =3D sizeof(*qc), + .num_chunks =3D qs->num, + }; + + qc =3D kcalloc(qs->num, sizeof(*qc), GFP_KERNEL); + if (!qc) return -ENOMEM; =20 - xn_params.vc_op =3D VIRTCHNL2_OP_CONFIG_RX_QUEUES; - xn_params.timeout_ms =3D IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC; + params.chunks =3D qc; =20 - for (i =3D 0, k =3D 0; i < num_msgs; i++) { - memset(crq, 0, buf_sz); - crq->vport_id =3D cpu_to_le32(vport->vport_id); - crq->num_qinfo =3D cpu_to_le16(num_chunks); - memcpy(crq->qinfo, &qi[k], chunk_sz * num_chunks); + for (u32 i =3D 0; i < qs->num; i++) { + const struct idpf_queue_ptr *q =3D &qs->qs[i]; + u32 qid; =20 - xn_params.send_buf.iov_base =3D crq; - xn_params.send_buf.iov_len =3D buf_sz; - reply_sz =3D idpf_vc_xn_exec(vport->adapter, &xn_params); - if (reply_sz < 0) - return reply_sz; + qc[i].type =3D cpu_to_le32(q->type); + qc[i].num_queues =3D cpu_to_le32(IDPF_NUMQ_PER_CHUNK); =20 - k +=3D num_chunks; - totqs -=3D num_chunks; - num_chunks =3D min(num_chunks, totqs); - /* Recalculate buffer size */ - buf_sz =3D struct_size(crq, qinfo, num_chunks); + switch (q->type) { + case VIRTCHNL2_QUEUE_TYPE_RX: + qid =3D q->rxq->q_id; + break; + case VIRTCHNL2_QUEUE_TYPE_TX: + qid =3D q->txq->q_id; + break; + case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: + qid =3D q->bufq->q_id; + break; + case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION: + qid =3D q->complq->q_id; + break; + default: + return -EINVAL; + } + + qc[i].start_queue_id =3D cpu_to_le32(qid); } =20 - return 0; + return idpf_send_chunked_msg(qs->vport, ¶ms); } =20 /** - * idpf_send_ena_dis_queues_msg - Send virtchnl enable or disable - * queues message + * idpf_send_ena_dis_queues_msg - send virtchnl enable or disable queues + * message * @vport: virtual port data structure - * @ena: if true enable, false disable + * @en: whether to enable or disable queues * - * Send enable or disable queues virtchnl message. Returns 0 on success, - * negative on failure. + * Return: 0 on success, -errno on failure. 
*/ -static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, bool ena) +static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, bool en) { - struct virtchnl2_del_ena_dis_queues *eq __free(kfree) =3D NULL; - struct virtchnl2_queue_chunk *qc __free(kfree) =3D NULL; - u32 num_msgs, num_chunks, num_txq, num_rxq, num_q; - struct idpf_vc_xn_params xn_params =3D { - .timeout_ms =3D IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC, - }; - struct virtchnl2_queue_chunks *qcs; - u32 config_sz, chunk_sz, buf_sz; - ssize_t reply_sz; - int i, j, k =3D 0; + struct idpf_queue_set *qs __free(kfree) =3D NULL; + u32 num_txq, num_q, k =3D 0; + bool split; =20 num_txq =3D vport->num_txq + vport->num_complq; - num_rxq =3D vport->num_rxq + vport->num_bufq; - num_q =3D num_txq + num_rxq; - buf_sz =3D sizeof(struct virtchnl2_queue_chunk) * num_q; - qc =3D kzalloc(buf_sz, GFP_KERNEL); - if (!qc) + num_q =3D num_txq + vport->num_rxq + vport->num_bufq; + + qs =3D idpf_alloc_queue_set(vport, num_q); + if (!qs) return -ENOMEM; =20 - for (i =3D 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *tx_qgrp =3D &vport->txq_grps[i]; + split =3D idpf_is_queue_model_split(vport->txq_model); =20 - for (j =3D 0; j < tx_qgrp->num_txq; j++, k++) { - qc[k].type =3D cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX); - qc[k].start_queue_id =3D cpu_to_le32(tx_qgrp->txqs[j]->q_id); - qc[k].num_queues =3D cpu_to_le32(IDPF_NUMQ_PER_CHUNK); - } - } - if (vport->num_txq !=3D k) - return -EINVAL; + for (u32 i =3D 0; i < vport->num_txq_grp; i++) { + const struct idpf_txq_group *tx_qgrp =3D &vport->txq_grps[i]; =20 - if (!idpf_is_queue_model_split(vport->txq_model)) - goto setup_rx; + for (u32 j =3D 0; j < tx_qgrp->num_txq; j++) { + qs->qs[k].type =3D VIRTCHNL2_QUEUE_TYPE_TX; + qs->qs[k++].txq =3D tx_qgrp->txqs[j]; + } =20 - for (i =3D 0; i < vport->num_txq_grp; i++, k++) { - struct idpf_txq_group *tx_qgrp =3D &vport->txq_grps[i]; + if (!split) + continue; =20 - qc[k].type =3D cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION); - qc[k].start_queue_id =3D cpu_to_le32(tx_qgrp->complq->q_id); - qc[k].num_queues =3D cpu_to_le32(IDPF_NUMQ_PER_CHUNK); + qs->qs[k].type =3D VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION; + qs->qs[k++].complq =3D tx_qgrp->complq; } - if (vport->num_complq !=3D (k - vport->num_txq)) + + if (k !=3D num_txq) return -EINVAL; =20 -setup_rx: - for (i =3D 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rx_qgrp =3D &vport->rxq_grps[i]; + split =3D idpf_is_queue_model_split(vport->rxq_model); =20 - if (idpf_is_queue_model_split(vport->rxq_model)) + for (u32 i =3D 0; i < vport->num_rxq_grp; i++) { + const struct idpf_rxq_group *rx_qgrp =3D &vport->rxq_grps[i]; + u32 num_rxq; + + if (split) num_rxq =3D rx_qgrp->splitq.num_rxq_sets; else num_rxq =3D rx_qgrp->singleq.num_rxq; =20 - for (j =3D 0; j < num_rxq; j++, k++) { - if (idpf_is_queue_model_split(vport->rxq_model)) { - qc[k].start_queue_id =3D - cpu_to_le32(rx_qgrp->splitq.rxq_sets[j]->rxq.q_id); - qc[k].type =3D - cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX); - } else { - qc[k].start_queue_id =3D - cpu_to_le32(rx_qgrp->singleq.rxqs[j]->q_id); - qc[k].type =3D - cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX); - } - qc[k].num_queues =3D cpu_to_le32(IDPF_NUMQ_PER_CHUNK); - } - } - if (vport->num_rxq !=3D k - (vport->num_txq + vport->num_complq)) - return -EINVAL; - - if (!idpf_is_queue_model_split(vport->rxq_model)) - goto send_msg; + for (u32 j =3D 0; j < num_rxq; j++) { + qs->qs[k].type =3D VIRTCHNL2_QUEUE_TYPE_RX; =20 - for (i =3D 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rx_qgrp =3D 
&vport->rxq_grps[i]; + if (split) + qs->qs[k++].rxq =3D + &rx_qgrp->splitq.rxq_sets[j]->rxq; + else + qs->qs[k++].rxq =3D rx_qgrp->singleq.rxqs[j]; + } =20 - for (j =3D 0; j < vport->num_bufqs_per_qgrp; j++, k++) { - const struct idpf_buf_queue *q; + if (!split) + continue; =20 - q =3D &rx_qgrp->splitq.bufq_sets[j].bufq; - qc[k].type =3D - cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX_BUFFER); - qc[k].start_queue_id =3D cpu_to_le32(q->q_id); - qc[k].num_queues =3D cpu_to_le32(IDPF_NUMQ_PER_CHUNK); + for (u32 j =3D 0; j < vport->num_bufqs_per_qgrp; j++) { + qs->qs[k].type =3D VIRTCHNL2_QUEUE_TYPE_RX_BUFFER; + qs->qs[k++].bufq =3D &rx_qgrp->splitq.bufq_sets[j].bufq; } } - if (vport->num_bufq !=3D k - (vport->num_txq + - vport->num_complq + - vport->num_rxq)) - return -EINVAL; - -send_msg: - /* Chunk up the queue info into multiple messages */ - config_sz =3D sizeof(struct virtchnl2_del_ena_dis_queues); - chunk_sz =3D sizeof(struct virtchnl2_queue_chunk); - - num_chunks =3D min_t(u32, IDPF_NUM_CHUNKS_PER_MSG(config_sz, chunk_sz), - num_q); - num_msgs =3D DIV_ROUND_UP(num_q, num_chunks); =20 - buf_sz =3D struct_size(eq, chunks.chunks, num_chunks); - eq =3D kzalloc(buf_sz, GFP_KERNEL); - if (!eq) - return -ENOMEM; - - if (ena) - xn_params.vc_op =3D VIRTCHNL2_OP_ENABLE_QUEUES; - else - xn_params.vc_op =3D VIRTCHNL2_OP_DISABLE_QUEUES; + if (k !=3D num_q) + return -EINVAL; =20 - for (i =3D 0, k =3D 0; i < num_msgs; i++) { - memset(eq, 0, buf_sz); - eq->vport_id =3D cpu_to_le32(vport->vport_id); - eq->chunks.num_chunks =3D cpu_to_le16(num_chunks); - qcs =3D &eq->chunks; - memcpy(qcs->chunks, &qc[k], chunk_sz * num_chunks); + return idpf_send_ena_dis_queue_set_msg(qs, en); +} =20 - xn_params.send_buf.iov_base =3D eq; - xn_params.send_buf.iov_len =3D buf_sz; - reply_sz =3D idpf_vc_xn_exec(vport->adapter, &xn_params); - if (reply_sz < 0) - return reply_sz; +/** + * idpf_prep_map_unmap_queue_set_vector_msg - prepare message to map or un= map + * queue set to the interrupt vector + * @vport: virtual port data structure + * @buf: buffer containing the message + * @pos: pointer to the first chunk describing the vector mapping + * @num_chunks: number of chunks in the message + * + * Helper function for preparing the message describing mapping queues to + * q_vectors. + * + * Return: the total size of the prepared message. + */ +static u32 +idpf_prep_map_unmap_queue_set_vector_msg(const struct idpf_vport *vport, + void *buf, const void *pos, + u32 num_chunks) +{ + struct virtchnl2_queue_vector_maps *vqvm =3D buf; =20 - k +=3D num_chunks; - num_q -=3D num_chunks; - num_chunks =3D min(num_chunks, num_q); - /* Recalculate buffer size */ - buf_sz =3D struct_size(eq, chunks.chunks, num_chunks); - } + vqvm->vport_id =3D cpu_to_le32(vport->vport_id); + vqvm->num_qv_maps =3D cpu_to_le16(num_chunks); + memcpy(vqvm->qv_maps, pos, num_chunks * sizeof(*vqvm->qv_maps)); =20 - return 0; + return struct_size(vqvm, qv_maps, num_chunks); } =20 /** - * idpf_send_map_unmap_queue_vector_msg - Send virtchnl map or unmap queue - * vector message - * @vport: virtual port data structure + * idpf_send_map_unmap_queue_set_vector_msg - send virtchnl map or unmap + * queue set vector message + * @qs: set of the queues to map or unmap * @map: true for map and false for unmap * - * Send map or unmap queue vector virtchnl message. Returns 0 on success, - * negative on failure. + * Return: 0 on success, -errno on failure. 
*/ -int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool ma= p) +static int +idpf_send_map_unmap_queue_set_vector_msg(const struct idpf_queue_set *qs, + bool map) { - struct virtchnl2_queue_vector_maps *vqvm __free(kfree) =3D NULL; struct virtchnl2_queue_vector *vqv __free(kfree) =3D NULL; - struct idpf_vc_xn_params xn_params =3D { - .timeout_ms =3D IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC, + struct idpf_chunked_msg_params params =3D { + .vc_op =3D map ? VIRTCHNL2_OP_MAP_QUEUE_VECTOR : + VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR, + .prepare_msg =3D idpf_prep_map_unmap_queue_set_vector_msg, + .config_sz =3D sizeof(struct virtchnl2_queue_vector_maps), + .chunk_sz =3D sizeof(*vqv), + .num_chunks =3D qs->num, }; - u32 config_sz, chunk_sz, buf_sz; - u32 num_msgs, num_chunks, num_q; - ssize_t reply_sz; - int i, j, k =3D 0; - - num_q =3D vport->num_txq + vport->num_rxq; + bool split; =20 - buf_sz =3D sizeof(struct virtchnl2_queue_vector) * num_q; - vqv =3D kzalloc(buf_sz, GFP_KERNEL); + vqv =3D kcalloc(qs->num, sizeof(*vqv), GFP_KERNEL); if (!vqv) return -ENOMEM; =20 - for (i =3D 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *tx_qgrp =3D &vport->txq_grps[i]; + params.chunks =3D vqv; =20 - for (j =3D 0; j < tx_qgrp->num_txq; j++, k++) { - const struct idpf_tx_queue *txq =3D tx_qgrp->txqs[j]; - const struct idpf_q_vector *vec; - u32 v_idx, tx_itr_idx; + split =3D idpf_is_queue_model_split(qs->vport->txq_model); =20 - vqv[k].queue_type =3D - cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX); - vqv[k].queue_id =3D cpu_to_le32(txq->q_id); + for (u32 i =3D 0; i < qs->num; i++) { + const struct idpf_queue_ptr *q =3D &qs->qs[i]; + const struct idpf_q_vector *vec; + u32 qid, v_idx, itr_idx; =20 - if (idpf_queue_has(NOIRQ, txq)) + vqv[i].queue_type =3D cpu_to_le32(q->type); + + switch (q->type) { + case VIRTCHNL2_QUEUE_TYPE_RX: + qid =3D q->rxq->q_id; + + if (idpf_queue_has(NOIRQ, q->rxq)) + vec =3D NULL; + else + vec =3D q->rxq->q_vector; + + if (vec) { + v_idx =3D vec->v_idx; + itr_idx =3D vec->rx_itr_idx; + } else { + v_idx =3D qs->vport->noirq_v_idx; + itr_idx =3D VIRTCHNL2_ITR_IDX_0; + } + break; + case VIRTCHNL2_QUEUE_TYPE_TX: + qid =3D q->txq->q_id; + + if (idpf_queue_has(NOIRQ, q->txq)) vec =3D NULL; - else if (idpf_queue_has(XDP, txq)) - vec =3D txq->complq->q_vector; - else if (idpf_is_queue_model_split(vport->txq_model)) - vec =3D txq->txq_grp->complq->q_vector; + else if (idpf_queue_has(XDP, q->txq)) + vec =3D q->txq->complq->q_vector; + else if (split) + vec =3D q->txq->txq_grp->complq->q_vector; else - vec =3D txq->q_vector; + vec =3D q->txq->q_vector; =20 if (vec) { v_idx =3D vec->v_idx; - tx_itr_idx =3D vec->tx_itr_idx; + itr_idx =3D vec->tx_itr_idx; } else { - v_idx =3D vport->noirq_v_idx; - tx_itr_idx =3D VIRTCHNL2_ITR_IDX_1; + v_idx =3D qs->vport->noirq_v_idx; + itr_idx =3D VIRTCHNL2_ITR_IDX_1; } + break; + default: + return -EINVAL; + } + + vqv[i].queue_id =3D cpu_to_le32(qid); + vqv[i].vector_id =3D cpu_to_le16(v_idx); + vqv[i].itr_idx =3D cpu_to_le32(itr_idx); + } + + return idpf_send_chunked_msg(qs->vport, ¶ms); +} + +/** + * idpf_send_map_unmap_queue_vector_msg - send virtchnl map or unmap queue + * vector message + * @vport: virtual port data structure + * @map: true for map and false for unmap + * + * Return: 0 on success, -errno on failure. 
+ */ +int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool ma= p) +{ + struct idpf_queue_set *qs __free(kfree) =3D NULL; + u32 num_q =3D vport->num_txq + vport->num_rxq; + u32 k =3D 0; + + qs =3D idpf_alloc_queue_set(vport, num_q); + if (!qs) + return -ENOMEM; =20 - vqv[k].vector_id =3D cpu_to_le16(v_idx); - vqv[k].itr_idx =3D cpu_to_le32(tx_itr_idx); + for (u32 i =3D 0; i < vport->num_txq_grp; i++) { + const struct idpf_txq_group *tx_qgrp =3D &vport->txq_grps[i]; + + for (u32 j =3D 0; j < tx_qgrp->num_txq; j++) { + qs->qs[k].type =3D VIRTCHNL2_QUEUE_TYPE_TX; + qs->qs[k++].txq =3D tx_qgrp->txqs[j]; } } =20 - for (i =3D 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rx_qgrp =3D &vport->rxq_grps[i]; - u16 num_rxq; + if (k !=3D vport->num_txq) + return -EINVAL; + + for (u32 i =3D 0; i < vport->num_rxq_grp; i++) { + const struct idpf_rxq_group *rx_qgrp =3D &vport->rxq_grps[i]; + u32 num_rxq; =20 if (idpf_is_queue_model_split(vport->rxq_model)) num_rxq =3D rx_qgrp->splitq.num_rxq_sets; else num_rxq =3D rx_qgrp->singleq.num_rxq; =20 - for (j =3D 0; j < num_rxq; j++, k++) { - struct idpf_rx_queue *rxq; - u32 v_idx, rx_itr_idx; + for (u32 j =3D 0; j < num_rxq; j++) { + qs->qs[k].type =3D VIRTCHNL2_QUEUE_TYPE_RX; =20 if (idpf_is_queue_model_split(vport->rxq_model)) - rxq =3D &rx_qgrp->splitq.rxq_sets[j]->rxq; + qs->qs[k++].rxq =3D + &rx_qgrp->splitq.rxq_sets[j]->rxq; else - rxq =3D rx_qgrp->singleq.rxqs[j]; - - vqv[k].queue_type =3D - cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX); - vqv[k].queue_id =3D cpu_to_le32(rxq->q_id); - - if (idpf_queue_has(NOIRQ, rxq)) { - v_idx =3D vport->noirq_v_idx; - rx_itr_idx =3D VIRTCHNL2_ITR_IDX_0; - } else { - v_idx =3D rxq->q_vector->v_idx; - rx_itr_idx =3D rxq->q_vector->rx_itr_idx; - } - - vqv[k].vector_id =3D cpu_to_le16(v_idx); - vqv[k].itr_idx =3D cpu_to_le32(rx_itr_idx); + qs->qs[k++].rxq =3D rx_qgrp->singleq.rxqs[j]; } } =20 if (k !=3D num_q) return -EINVAL; =20 - /* Chunk up the vector info into multiple messages */ - config_sz =3D sizeof(struct virtchnl2_queue_vector_maps); - chunk_sz =3D sizeof(struct virtchnl2_queue_vector); + return idpf_send_map_unmap_queue_set_vector_msg(qs, map); +} =20 - num_chunks =3D min_t(u32, IDPF_NUM_CHUNKS_PER_MSG(config_sz, chunk_sz), - num_q); - num_msgs =3D DIV_ROUND_UP(num_q, num_chunks); +/** + * idpf_send_enable_queue_set_msg - send enable queues virtchnl message for + * selected queues + * @qs: set of the queues + * + * Send enable queues virtchnl message for queues contained in the @qs arr= ay. + * + * Return: 0 on success, -errno on failure. + */ +int idpf_send_enable_queue_set_msg(const struct idpf_queue_set *qs) +{ + return idpf_send_ena_dis_queue_set_msg(qs, true); +} =20 - buf_sz =3D struct_size(vqvm, qv_maps, num_chunks); - vqvm =3D kzalloc(buf_sz, GFP_KERNEL); - if (!vqvm) - return -ENOMEM; +/** + * idpf_send_disable_queue_set_msg - send disable queues virtchnl message = for + * selected queues + * @qs: set of the queues + * + * Return: 0 on success, -errno on failure. 
+ */ +int idpf_send_disable_queue_set_msg(const struct idpf_queue_set *qs) +{ + int err; =20 - if (map) - xn_params.vc_op =3D VIRTCHNL2_OP_MAP_QUEUE_VECTOR; - else - xn_params.vc_op =3D VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR; + err =3D idpf_send_ena_dis_queue_set_msg(qs, false); + if (err) + return err; =20 - for (i =3D 0, k =3D 0; i < num_msgs; i++) { - memset(vqvm, 0, buf_sz); - xn_params.send_buf.iov_base =3D vqvm; - xn_params.send_buf.iov_len =3D buf_sz; - vqvm->vport_id =3D cpu_to_le32(vport->vport_id); - vqvm->num_qv_maps =3D cpu_to_le16(num_chunks); - memcpy(vqvm->qv_maps, &vqv[k], chunk_sz * num_chunks); + return idpf_wait_for_marker_event_set(qs); +} =20 - reply_sz =3D idpf_vc_xn_exec(vport->adapter, &xn_params); - if (reply_sz < 0) - return reply_sz; +/** + * idpf_send_config_queue_set_msg - send virtchnl config queues message for + * selected queues + * @qs: set of the queues + * + * Send config queues virtchnl message for queues contained in the @qs arr= ay. + * The @qs array can contain both Rx or Tx queues. + * + * Return: 0 on success, -errno on failure. + */ +int idpf_send_config_queue_set_msg(const struct idpf_queue_set *qs) +{ + int err; =20 - k +=3D num_chunks; - num_q -=3D num_chunks; - num_chunks =3D min(num_chunks, num_q); - /* Recalculate buffer size */ - buf_sz =3D struct_size(vqvm, qv_maps, num_chunks); + err =3D idpf_send_config_tx_queue_set_msg(qs); + if (err) + return err; =20 - return 0; + return idpf_send_config_rx_queue_set_msg(qs); } =20 /** --=20 2.51.0 From nobody Thu Oct 2 19:28:31 2025
From: Alexander Lobakin To: intel-wired-lan@lists.osuosl.org Cc: Alexander Lobakin , Michal Kubiak , Maciej Fijalkowski , Tony Nguyen , Przemek Kitszel , Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Simon Horman , nxne.cnse.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH iwl-next 2/5] idpf: add XSk pool initialization Date: Thu, 11 Sep 2025 18:22:30 +0200 Message-ID: <20250911162233.1238034-3-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250911162233.1238034-1-aleksander.lobakin@intel.com> References: <20250911162233.1238034-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Michal Kubiak Add functionality to setup an XSk buffer pool, including ability to stop, reconfig and start only selected queues, not the whole device. Pool DMA mapping is managed by libeth_xdp. 
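
For illustration only (not part of the diff below): a rough sketch of how a pool setup path could use the per-queue-pair start/stop helper added by this series. idpf_qp_switch() is the helper declared in this patch; the function name example_xsk_pool_setup(), the exact use of the netdev_bpf XSK parameters, and the placement of the libeth_xdp pool (un)mapping step are assumptions, and the real idpf_xsk_pool_setup() in xsk.c may differ.

	/* Illustrative sketch, not the actual implementation from this patch. */
	static int example_xsk_pool_setup(struct idpf_vport *vport,
					  struct netdev_bpf *xdp)
	{
		u32 qid = xdp->xsk.queue_id;
		bool restart = netif_running(vport->netdev);
		int err;

		if (restart) {
			/* Stop only the affected queue pair, not the whole vport */
			err = idpf_qp_switch(vport, qid, false);
			if (err)
				return err;
		}

		/*
		 * Attach or detach xdp->xsk.pool for queue @qid here;
		 * the pool DMA mapping itself is managed by libeth_xdp.
		 */

		return restart ? idpf_qp_switch(vport, qid, true) : 0;
	}
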
Signed-off-by: Michal Kubiak Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/idpf/Makefile | 1 + drivers/net/ethernet/intel/idpf/idpf_txrx.h | 7 + drivers/net/ethernet/intel/idpf/xdp.h | 2 + drivers/net/ethernet/intel/idpf/xsk.h | 14 + .../net/ethernet/intel/idpf/idpf_ethtool.c | 8 +- drivers/net/ethernet/intel/idpf/idpf_txrx.c | 299 ++++++++++++++++++ drivers/net/ethernet/intel/idpf/xdp.c | 14 + drivers/net/ethernet/intel/idpf/xsk.c | 57 ++++ 8 files changed, 398 insertions(+), 4 deletions(-) create mode 100644 drivers/net/ethernet/intel/idpf/xsk.h create mode 100644 drivers/net/ethernet/intel/idpf/xsk.c diff --git a/drivers/net/ethernet/intel/idpf/Makefile b/drivers/net/etherne= t/intel/idpf/Makefile index 0840c3bef371..651ddee942bd 100644 --- a/drivers/net/ethernet/intel/idpf/Makefile +++ b/drivers/net/ethernet/intel/idpf/Makefile @@ -23,3 +23,4 @@ idpf-$(CONFIG_PTP_1588_CLOCK) +=3D idpf_ptp.o idpf-$(CONFIG_PTP_1588_CLOCK) +=3D idpf_virtchnl_ptp.o =20 idpf-y +=3D xdp.o +idpf-y +=3D xsk.o diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethe= rnet/intel/idpf/idpf_txrx.h index 88dc3db488b1..8faf33786747 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -1050,6 +1050,13 @@ int idpf_config_rss(struct idpf_vport *vport); int idpf_init_rss(struct idpf_vport *vport); void idpf_deinit_rss(struct idpf_vport *vport); int idpf_rx_bufs_init_all(struct idpf_vport *vport); + +struct idpf_q_vector *idpf_find_rxq_vec(const struct idpf_vport *vport, + u32 q_num); +struct idpf_q_vector *idpf_find_txq_vec(const struct idpf_vport *vport, + u32 q_num); +int idpf_qp_switch(struct idpf_vport *vport, u32 qid, bool en); + void idpf_tx_buf_hw_update(struct idpf_tx_queue *tx_q, u32 val, bool xmit_more); unsigned int idpf_size_to_txd_count(unsigned int size); diff --git a/drivers/net/ethernet/intel/idpf/xdp.h b/drivers/net/ethernet/i= ntel/idpf/xdp.h index 66ad83a0e85e..59c0391317c2 100644 --- a/drivers/net/ethernet/intel/idpf/xdp.h +++ b/drivers/net/ethernet/intel/idpf/xdp.h @@ -8,7 +8,9 @@ =20 #include "idpf_txrx.h" =20 +int idpf_xdp_rxq_info_init(struct idpf_rx_queue *rxq); int idpf_xdp_rxq_info_init_all(const struct idpf_vport *vport); +void idpf_xdp_rxq_info_deinit(struct idpf_rx_queue *rxq, u32 model); void idpf_xdp_rxq_info_deinit_all(const struct idpf_vport *vport); void idpf_xdp_copy_prog_to_rqs(const struct idpf_vport *vport, struct bpf_prog *xdp_prog); diff --git a/drivers/net/ethernet/intel/idpf/xsk.h b/drivers/net/ethernet/i= ntel/idpf/xsk.h new file mode 100644 index 000000000000..dc42268ba8e0 --- /dev/null +++ b/drivers/net/ethernet/intel/idpf/xsk.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2025 Intel Corporation */ + +#ifndef _IDPF_XSK_H_ +#define _IDPF_XSK_H_ + +#include + +struct idpf_vport; +struct netdev_bpf; + +int idpf_xsk_pool_setup(struct idpf_vport *vport, struct netdev_bpf *xdp); + +#endif /* !_IDPF_XSK_H_ */ diff --git a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c b/drivers/net/e= thernet/intel/idpf/idpf_ethtool.c index 0eb812ac19c2..24b042f80362 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c +++ b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c @@ -1245,8 +1245,8 @@ static void idpf_get_ethtool_stats(struct net_device = *netdev, * * returns pointer to rx vector */ -static struct idpf_q_vector *idpf_find_rxq_vec(const struct idpf_vport *vp= ort, - int q_num) +struct idpf_q_vector *idpf_find_rxq_vec(const struct idpf_vport *vport, + u32 
q_num) { int q_grp, q_idx; =20 @@ -1266,8 +1266,8 @@ static struct idpf_q_vector *idpf_find_rxq_vec(const = struct idpf_vport *vport, * * returns pointer to tx vector */ -static struct idpf_q_vector *idpf_find_txq_vec(const struct idpf_vport *vp= ort, - int q_num) +struct idpf_q_vector *idpf_find_txq_vec(const struct idpf_vport *vport, + u32 q_num) { int q_grp; =20 diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethe= rnet/intel/idpf/idpf_txrx.c index 81b6646dd3fc..542e09a83bc0 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -922,6 +922,305 @@ static int idpf_rx_desc_alloc_all(struct idpf_vport *= vport) return err; } =20 +static int idpf_init_queue_set(const struct idpf_queue_set *qs) +{ + const struct idpf_vport *vport =3D qs->vport; + bool splitq; + int err; + + splitq =3D idpf_is_queue_model_split(vport->rxq_model); + + for (u32 i =3D 0; i < qs->num; i++) { + const struct idpf_queue_ptr *q =3D &qs->qs[i]; + struct idpf_buf_queue *bufq; + + switch (q->type) { + case VIRTCHNL2_QUEUE_TYPE_RX: + err =3D idpf_rx_desc_alloc(vport, q->rxq); + if (err) + break; + + err =3D idpf_xdp_rxq_info_init(q->rxq); + if (err) + break; + + if (!splitq) + err =3D idpf_rx_bufs_init_singleq(q->rxq); + + break; + case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: + bufq =3D q->bufq; + + err =3D idpf_bufq_desc_alloc(vport, bufq); + if (err) + break; + + for (u32 j =3D 0; j < bufq->q_vector->num_bufq; j++) { + struct idpf_buf_queue * const *bufqs; + enum libeth_fqe_type type; + u32 ts; + + bufqs =3D bufq->q_vector->bufq; + if (bufqs[j] !=3D bufq) + continue; + + if (j) { + type =3D LIBETH_FQE_SHORT; + ts =3D bufqs[j - 1]->truesize >> 1; + } else { + type =3D LIBETH_FQE_MTU; + ts =3D 0; + } + + bufq->truesize =3D ts; + + err =3D idpf_rx_bufs_init(bufq, type); + break; + } + + break; + case VIRTCHNL2_QUEUE_TYPE_TX: + err =3D idpf_tx_desc_alloc(vport, q->txq); + break; + case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION: + err =3D idpf_compl_desc_alloc(vport, q->complq); + break; + default: + continue; + } + + if (err) + return err; + } + + return 0; +} + +static void idpf_clean_queue_set(const struct idpf_queue_set *qs) +{ + const struct idpf_vport *vport =3D qs->vport; + struct device *dev =3D vport->netdev->dev.parent; + + for (u32 i =3D 0; i < qs->num; i++) { + const struct idpf_queue_ptr *q =3D &qs->qs[i]; + + switch (q->type) { + case VIRTCHNL2_QUEUE_TYPE_RX: + idpf_xdp_rxq_info_deinit(q->rxq, vport->rxq_model); + idpf_rx_desc_rel(q->rxq, dev, vport->rxq_model); + break; + case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: + idpf_rx_desc_rel_bufq(q->bufq, dev); + break; + case VIRTCHNL2_QUEUE_TYPE_TX: + idpf_tx_desc_rel(q->txq); + + if (idpf_queue_has(XDP, q->txq)) { + q->txq->pending =3D 0; + q->txq->xdp_tx =3D 0; + } else { + q->txq->txq_grp->num_completions_pending =3D 0; + } + + writel(q->txq->next_to_use, q->txq->tail); + break; + case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION: + idpf_compl_desc_rel(q->complq); + q->complq->num_completions =3D 0; + break; + default: + break; + } + } +} + +static void idpf_qvec_ena_irq(struct idpf_q_vector *qv) +{ + if (qv->num_txq) { + u32 itr; + + if (IDPF_ITR_IS_DYNAMIC(qv->tx_intr_mode)) + itr =3D qv->vport->tx_itr_profile[qv->tx_dim.profile_ix]; + else + itr =3D qv->tx_itr_value; + + idpf_vport_intr_write_itr(qv, itr, true); + } + + if (qv->num_rxq) { + u32 itr; + + if (IDPF_ITR_IS_DYNAMIC(qv->rx_intr_mode)) + itr =3D qv->vport->rx_itr_profile[qv->rx_dim.profile_ix]; + else + itr =3D qv->rx_itr_value; + + 
idpf_vport_intr_write_itr(qv, itr, false); + } + + if (qv->num_txq || qv->num_rxq) + idpf_vport_intr_update_itr_ena_irq(qv); +} + +/** + * idpf_vector_to_queue_set - create a queue set associated with the given + * queue vector + * @qv: queue vector corresponding to the queue pair + * + * Returns a pointer to a dynamically allocated array of pointers to all + * queues associated with a given queue vector (@qv). + * Please note that the caller is responsible to free the memory allocated + * by this function using kfree(). + * + * Return: &idpf_queue_set on success, %NULL in case of error. + */ +static struct idpf_queue_set * +idpf_vector_to_queue_set(struct idpf_q_vector *qv) +{ + bool xdp =3D qv->vport->xdp_txq_offset; + struct idpf_vport *vport =3D qv->vport; + struct idpf_queue_set *qs; + u32 num; + + num =3D qv->num_rxq + qv->num_bufq + qv->num_txq + qv->num_complq; + num +=3D xdp ? qv->num_rxq * 2 : 0; + if (!num) + return NULL; + + qs =3D idpf_alloc_queue_set(vport, num); + if (!qs) + return NULL; + + num =3D 0; + + for (u32 i =3D 0; i < qv->num_bufq; i++) { + qs->qs[num].type =3D VIRTCHNL2_QUEUE_TYPE_RX_BUFFER; + qs->qs[num++].bufq =3D qv->bufq[i]; + } + + for (u32 i =3D 0; i < qv->num_rxq; i++) { + qs->qs[num].type =3D VIRTCHNL2_QUEUE_TYPE_RX; + qs->qs[num++].rxq =3D qv->rx[i]; + } + + for (u32 i =3D 0; i < qv->num_txq; i++) { + qs->qs[num].type =3D VIRTCHNL2_QUEUE_TYPE_TX; + qs->qs[num++].txq =3D qv->tx[i]; + } + + for (u32 i =3D 0; i < qv->num_complq; i++) { + qs->qs[num].type =3D VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION; + qs->qs[num++].complq =3D qv->complq[i]; + } + + if (!vport->xdp_txq_offset) + goto finalize; + + if (xdp) { + for (u32 i =3D 0; i < qv->num_rxq; i++) { + u32 idx =3D vport->xdp_txq_offset + qv->rx[i]->idx; + + qs->qs[num].type =3D VIRTCHNL2_QUEUE_TYPE_TX; + qs->qs[num++].txq =3D vport->txqs[idx]; + + qs->qs[num].type =3D VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION; + qs->qs[num++].complq =3D vport->txqs[idx]->complq; + } + } + +finalize: + if (num !=3D qs->num) { + kfree(qs); + return NULL; + } + + return qs; +} + +static int idpf_qp_enable(const struct idpf_queue_set *qs, u32 qid) +{ + struct idpf_vport *vport =3D qs->vport; + struct idpf_q_vector *q_vector; + int err; + + q_vector =3D idpf_find_rxq_vec(vport, qid); + + err =3D idpf_init_queue_set(qs); + if (err) { + netdev_err(vport->netdev, "Could not initialize queues in pair %u: %pe\n= ", + qid, ERR_PTR(err)); + return err; + } + + err =3D idpf_send_config_queue_set_msg(qs); + if (err) { + netdev_err(vport->netdev, "Could not configure queues in pair %u: %pe\n", + qid, ERR_PTR(err)); + return err; + } + + err =3D idpf_send_enable_queue_set_msg(qs); + if (err) { + netdev_err(vport->netdev, "Could not enable queues in pair %u: %pe\n", + qid, ERR_PTR(err)); + return err; + } + + napi_enable(&q_vector->napi); + idpf_qvec_ena_irq(q_vector); + + netif_start_subqueue(vport->netdev, qid); + + return 0; +} + +static int idpf_qp_disable(const struct idpf_queue_set *qs, u32 qid) +{ + struct idpf_vport *vport =3D qs->vport; + struct idpf_q_vector *q_vector; + int err; + + q_vector =3D idpf_find_rxq_vec(vport, qid); + netif_stop_subqueue(vport->netdev, qid); + + writel(0, q_vector->intr_reg.dyn_ctl); + napi_disable(&q_vector->napi); + + err =3D idpf_send_disable_queue_set_msg(qs); + if (err) { + netdev_err(vport->netdev, "Could not disable queues in pair %u: %pe\n", + qid, ERR_PTR(err)); + return err; + } + + idpf_clean_queue_set(qs); + + return 0; +} + +/** + * idpf_qp_switch - enable or disable queues associated with queue pair + * 
@vport: vport to switch the pair for + * @qid: index of the queue pair to switch + * @en: whether to enable or disable the pair + * + * Return: 0 on success, -errno on failure. + */ +int idpf_qp_switch(struct idpf_vport *vport, u32 qid, bool en) +{ + struct idpf_q_vector *q_vector =3D idpf_find_rxq_vec(vport, qid); + struct idpf_queue_set *qs __free(kfree) =3D NULL; + + if (idpf_find_txq_vec(vport, qid) !=3D q_vector) + return -EINVAL; + + qs =3D idpf_vector_to_queue_set(q_vector); + if (!qs) + return -ENOMEM; + + return en ? idpf_qp_enable(qs, qid) : idpf_qp_disable(qs, qid); +} + /** * idpf_txq_group_rel - Release all resources for txq groups * @vport: vport to release txq groups on diff --git a/drivers/net/ethernet/intel/idpf/xdp.c b/drivers/net/ethernet/i= ntel/idpf/xdp.c index 89d5735f42f2..180335beaae1 100644 --- a/drivers/net/ethernet/intel/idpf/xdp.c +++ b/drivers/net/ethernet/intel/idpf/xdp.c @@ -4,6 +4,7 @@ #include "idpf.h" #include "idpf_virtchnl.h" #include "xdp.h" +#include "xsk.h" =20 static int idpf_rxq_for_each(const struct idpf_vport *vport, int (*fn)(struct idpf_rx_queue *rxq, void *arg), @@ -66,6 +67,11 @@ static int __idpf_xdp_rxq_info_init(struct idpf_rx_queue= *rxq, void *arg) return 0; } =20 +int idpf_xdp_rxq_info_init(struct idpf_rx_queue *rxq) +{ + return __idpf_xdp_rxq_info_init(rxq, NULL); +} + int idpf_xdp_rxq_info_init_all(const struct idpf_vport *vport) { return idpf_rxq_for_each(vport, __idpf_xdp_rxq_info_init, NULL); @@ -84,6 +90,11 @@ static int __idpf_xdp_rxq_info_deinit(struct idpf_rx_que= ue *rxq, void *arg) return 0; } =20 +void idpf_xdp_rxq_info_deinit(struct idpf_rx_queue *rxq, u32 model) +{ + __idpf_xdp_rxq_info_deinit(rxq, (void *)(size_t)model); +} + void idpf_xdp_rxq_info_deinit_all(const struct idpf_vport *vport) { idpf_rxq_for_each(vport, __idpf_xdp_rxq_info_deinit, @@ -442,6 +453,9 @@ int idpf_xdp(struct net_device *dev, struct netdev_bpf = *xdp) case XDP_SETUP_PROG: ret =3D idpf_xdp_setup_prog(vport, xdp); break; + case XDP_SETUP_XSK_POOL: + ret =3D idpf_xsk_pool_setup(vport, xdp); + break; default: notsupp: ret =3D -EOPNOTSUPP; diff --git a/drivers/net/ethernet/intel/idpf/xsk.c b/drivers/net/ethernet/i= ntel/idpf/xsk.c new file mode 100644 index 000000000000..2098bf160df7 --- /dev/null +++ b/drivers/net/ethernet/intel/idpf/xsk.c @@ -0,0 +1,57 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2025 Intel Corporation */ + +#include + +#include "idpf.h" +#include "xsk.h" + +int idpf_xsk_pool_setup(struct idpf_vport *vport, struct netdev_bpf *bpf) +{ + struct xsk_buff_pool *pool =3D bpf->xsk.pool; + u32 qid =3D bpf->xsk.queue_id; + bool restart; + int ret; + + restart =3D idpf_xdp_enabled(vport) && netif_running(vport->netdev); + if (!restart) + goto pool; + + ret =3D idpf_qp_switch(vport, qid, false); + if (ret) { + NL_SET_ERR_MSG_FMT_MOD(bpf->extack, + "%s: failed to disable queue pair %u: %pe", + netdev_name(vport->netdev), qid, + ERR_PTR(ret)); + return ret; + } + +pool: + ret =3D libeth_xsk_setup_pool(vport->netdev, qid, pool); + if (ret) { + NL_SET_ERR_MSG_FMT_MOD(bpf->extack, + "%s: failed to configure XSk pool for pair %u: %pe", + netdev_name(vport->netdev), qid, + ERR_PTR(ret)); + return ret; + } + + if (!restart) + return 0; + + ret =3D idpf_qp_switch(vport, qid, true); + if (ret) { + NL_SET_ERR_MSG_FMT_MOD(bpf->extack, + "%s: failed to enable queue pair %u: %pe", + netdev_name(vport->netdev), qid, + ERR_PTR(ret)); + goto err_dis; + } + + return 0; + +err_dis: + libeth_xsk_setup_pool(vport->netdev, qid, false); + + return 
ret; +} --=20 2.51.0 From nobody Thu Oct 2 19:28:31 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.9]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CD184362081; Thu, 11 Sep 2025 16:23:33 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.9 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757607816; cv=none; b=gqJd4z5DXLuKx95Pkk2aMpLee5pW+6peoffnsPzcC9O9nLKcZvWCXDS4dRhiytGpbsXY5x16xG+BEOlaIihRuZNECFg61TecKQe/G+2MlCpEZc7sQZ7IydfF3JKhnovHyasqYCM4g6Q1nukBMZ4mWh6yv0L0zHZ7co8TxS5r3e8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757607816; c=relaxed/simple; bh=MgxYE9Txu4q24Yg7jW/8Yz/uMzgpZekTVhhatndLkOY=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=EBsGSIhuyUJ8R6ZHNopnwxdRyajU2mQpD6PAcLr9ZUmTlBY+kgjl8+ORxtXB9mfqPT8X38PpHEOPD4egIEMWo7upmR4JYsCmCXZsV8y3A9YsRki3eSdpfQleJdHsPl8nQq88oAkNphvO0Nm6vYxSIkbG8qREWU/jC4I3pPm1/Mc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=lZO6FTqN; arc=none smtp.client-ip=192.198.163.9 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="lZO6FTqN" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1757607814; x=1789143814; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=MgxYE9Txu4q24Yg7jW/8Yz/uMzgpZekTVhhatndLkOY=; b=lZO6FTqNrPtSRKGfB98vNFx8vIAC2Yo8vdqq33wlmqbhVC3/3pXyl16t U57qFodkkQZKy21gH07VKKRxXU4UObRVrTakzM8/IlRO26nvq3iMszeME Bcp7RhYZ5eIqM+9/rn2n07bKM4gnMwMkYsBCl7GNf6pRtwawRMqqE1skH 2lv35dKC3zjgwr3WNhmZV2b5/25qGIm2vxTjygC/t7lToo6NEap5fNnG/ svZj6ABwhhp1vgHJrpaDRwIE3vKOSCjzpH74hJbtSOt+3esYXoCyNn4t2 MLPoFT9jUBMbRS1M46xzsVdl3WZhUDD+3mwOgaYe1gHQ/jfQQx5Ryjk91 Q==; X-CSE-ConnectionGUID: hp1mqnRuTjqul45oGn7M0g== X-CSE-MsgGUID: hD12yZ+RTMWX6js6J0DZYg== X-IronPort-AV: E=McAfee;i="6800,10657,11549"; a="70635193" X-IronPort-AV: E=Sophos;i="6.18,257,1751266800"; d="scan'208";a="70635193" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by fmvoesa103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 11 Sep 2025 09:23:33 -0700 X-CSE-ConnectionGUID: 5cP969sPSPeBjpjfF5y3gg== X-CSE-MsgGUID: r4PtEvbaRFy0XpbKULnvoA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,257,1751266800"; d="scan'208";a="173284660" Received: from newjersey.igk.intel.com ([10.102.20.203]) by orviesa009.jf.intel.com with ESMTP; 11 Sep 2025 09:23:29 -0700 From: Alexander Lobakin To: intel-wired-lan@lists.osuosl.org Cc: Alexander Lobakin , Michal Kubiak , Maciej Fijalkowski , Tony Nguyen , Przemek Kitszel , Andrew Lunn , "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Simon Horman , nxne.cnse.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH iwl-next 3/5] idpf: implement XSk xmit Date: Thu, 11 Sep 2025 18:22:31 +0200 Message-ID: <20250911162233.1238034-4-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250911162233.1238034-1-aleksander.lobakin@intel.com> References: <20250911162233.1238034-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Implement the XSk transmit path using the libeth (libeth_xdp) XSk infra. When the NAPI poll is called, XSk Tx queues are polled first, before regular Tx and Rx. They're generally faster to serve and have higher priority comparing to regular traffic. Co-developed-by: Michal Kubiak Signed-off-by: Michal Kubiak Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/idpf/idpf_txrx.h | 14 +- drivers/net/ethernet/intel/idpf/xdp.h | 1 + drivers/net/ethernet/intel/idpf/xsk.h | 9 + drivers/net/ethernet/intel/idpf/idpf_txrx.c | 117 ++++++++-- drivers/net/ethernet/intel/idpf/xdp.c | 2 +- drivers/net/ethernet/intel/idpf/xsk.c | 232 ++++++++++++++++++++ 6 files changed, 354 insertions(+), 21 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethe= rnet/intel/idpf/idpf_txrx.h index 8faf33786747..e8e63027425c 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -285,6 +285,7 @@ struct idpf_ptype_state { * queue * @__IDPF_Q_NOIRQ: queue is polling-driven and has no interrupt * @__IDPF_Q_XDP: this is an XDP queue + * @__IDPF_Q_XSK: the queue has an XSk pool installed * @__IDPF_Q_FLAGS_NBITS: Must be last */ enum idpf_queue_flags_t { @@ -297,6 +298,7 @@ enum idpf_queue_flags_t { __IDPF_Q_PTP, __IDPF_Q_NOIRQ, __IDPF_Q_XDP, + __IDPF_Q_XSK, =20 __IDPF_Q_FLAGS_NBITS, }; @@ -363,10 +365,12 @@ struct idpf_intr_reg { * @num_txq: Number of TX queues * @num_bufq: Number of buffer queues * @num_complq: number of completion queues + * @num_xsksq: number of XSk send queues * @rx: Array of RX queues to service * @tx: Array of TX queues to service * @bufq: Array of buffer queues to service * @complq: array of completion queues + * @xsksq: array of XSk send queues * @intr_reg: See struct idpf_intr_reg * @napi: napi handler * @total_events: Number of interrupts processed @@ -389,10 +393,12 @@ struct idpf_q_vector { u16 num_txq; u16 num_bufq; u16 num_complq; + u16 num_xsksq; struct idpf_rx_queue **rx; struct idpf_tx_queue **tx; struct idpf_buf_queue **bufq; struct idpf_compl_queue **complq; + struct idpf_tx_queue **xsksq; =20 struct idpf_intr_reg intr_reg; __cacheline_group_end_aligned(read_mostly); @@ -418,7 +424,7 @@ struct idpf_q_vector { =20 __cacheline_group_end_aligned(cold); }; -libeth_cacheline_set_assert(struct idpf_q_vector, 120, +libeth_cacheline_set_assert(struct idpf_q_vector, 136, 24 + sizeof(struct napi_struct) + 2 * sizeof(struct dim), 8); @@ -578,6 +584,7 @@ libeth_cacheline_set_assert(struct idpf_rx_queue, * @txq_grp: See struct idpf_txq_group * @complq: corresponding completion queue in XDP mode * @dev: Device back pointer for DMA mapping + * @pool: corresponding XSk pool if installed * @tail: Tail offset. 
Used for both queue models single and split * @flags: See enum idpf_queue_flags_t * @idx: For TX queue, it is used as index to map between TX queue group a= nd @@ -631,7 +638,10 @@ struct idpf_tx_queue { struct idpf_txq_group *txq_grp; struct idpf_compl_queue *complq; }; - struct device *dev; + union { + struct device *dev; + struct xsk_buff_pool *pool; + }; void __iomem *tail; =20 DECLARE_BITMAP(flags, __IDPF_Q_FLAGS_NBITS); diff --git a/drivers/net/ethernet/intel/idpf/xdp.h b/drivers/net/ethernet/i= ntel/idpf/xdp.h index 59c0391317c2..479f5ef3c604 100644 --- a/drivers/net/ethernet/intel/idpf/xdp.h +++ b/drivers/net/ethernet/intel/idpf/xdp.h @@ -18,6 +18,7 @@ void idpf_xdp_copy_prog_to_rqs(const struct idpf_vport *v= port, int idpf_xdpsqs_get(const struct idpf_vport *vport); void idpf_xdpsqs_put(const struct idpf_vport *vport); =20 +u32 idpf_xdpsq_poll(struct idpf_tx_queue *xdpsq, u32 budget); bool idpf_xdp_tx_flush_bulk(struct libeth_xdp_tx_bulk *bq, u32 flags); =20 /** diff --git a/drivers/net/ethernet/intel/idpf/xsk.h b/drivers/net/ethernet/i= ntel/idpf/xsk.h index dc42268ba8e0..d08fd51b0fc6 100644 --- a/drivers/net/ethernet/intel/idpf/xsk.h +++ b/drivers/net/ethernet/intel/idpf/xsk.h @@ -6,9 +6,18 @@ =20 #include =20 +enum virtchnl2_queue_type; +struct idpf_tx_queue; struct idpf_vport; struct netdev_bpf; =20 +void idpf_xsk_setup_queue(const struct idpf_vport *vport, void *q, + enum virtchnl2_queue_type type); +void idpf_xsk_clear_queue(void *q, enum virtchnl2_queue_type type); + +void idpf_xsksq_clean(struct idpf_tx_queue *xdpq); +bool idpf_xsk_xmit(struct idpf_tx_queue *xsksq); + int idpf_xsk_pool_setup(struct idpf_vport *vport, struct netdev_bpf *xdp); =20 #endif /* !_IDPF_XSK_H_ */ diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethe= rnet/intel/idpf/idpf_txrx.c index 542e09a83bc0..64d5211f6e51 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -5,6 +5,7 @@ #include "idpf_ptp.h" #include "idpf_virtchnl.h" #include "xdp.h" +#include "xsk.h" =20 #define idpf_tx_buf_next(buf) (*(u32 *)&(buf)->priv) LIBETH_SQE_CHECK_PRIV(u32); @@ -53,11 +54,7 @@ void idpf_tx_timeout(struct net_device *netdev, unsigned= int txqueue) } } =20 -/** - * idpf_tx_buf_rel_all - Free any empty Tx buffers - * @txq: queue to be cleaned - */ -static void idpf_tx_buf_rel_all(struct idpf_tx_queue *txq) +static void idpf_tx_buf_clean(struct idpf_tx_queue *txq) { struct libeth_sq_napi_stats ss =3D { }; struct xdp_frame_bulk bq; @@ -66,19 +63,30 @@ static void idpf_tx_buf_rel_all(struct idpf_tx_queue *t= xq) .bq =3D &bq, .ss =3D &ss, }; - u32 i; - - /* Buffers already cleared, nothing to do */ - if (!txq->tx_buf) - return; =20 xdp_frame_bulk_init(&bq); =20 /* Free all the Tx buffer sk_buffs */ - for (i =3D 0; i < txq->buf_pool_size; i++) + for (u32 i =3D 0; i < txq->buf_pool_size; i++) libeth_tx_complete_any(&txq->tx_buf[i], &cp); =20 xdp_flush_frame_bulk(&bq); +} + +/** + * idpf_tx_buf_rel_all - Free any empty Tx buffers + * @txq: queue to be cleaned + */ +static void idpf_tx_buf_rel_all(struct idpf_tx_queue *txq) +{ + /* Buffers already cleared, nothing to do */ + if (!txq->tx_buf) + return; + + if (idpf_queue_has(XSK, txq)) + idpf_xsksq_clean(txq); + else + idpf_tx_buf_clean(txq); =20 kfree(txq->tx_buf); txq->tx_buf =3D NULL; @@ -102,6 +110,8 @@ static void idpf_tx_desc_rel(struct idpf_tx_queue *txq) if (!xdp) netdev_tx_reset_subqueue(txq->netdev, txq->idx); =20 + idpf_xsk_clear_queue(txq, VIRTCHNL2_QUEUE_TYPE_TX); + if 
(!txq->desc_ring) return; =20 @@ -122,6 +132,8 @@ static void idpf_tx_desc_rel(struct idpf_tx_queue *txq) */ static void idpf_compl_desc_rel(struct idpf_compl_queue *complq) { + idpf_xsk_clear_queue(complq, VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION); + if (!complq->comp) return; =20 @@ -214,6 +226,8 @@ static int idpf_tx_desc_alloc(const struct idpf_vport *= vport, tx_q->next_to_clean =3D 0; idpf_queue_set(GEN_CHK, tx_q); =20 + idpf_xsk_setup_queue(vport, tx_q, VIRTCHNL2_QUEUE_TYPE_TX); + if (!idpf_queue_has(FLOW_SCH_EN, tx_q)) return 0; =20 @@ -273,6 +287,9 @@ static int idpf_compl_desc_alloc(const struct idpf_vpor= t *vport, complq->next_to_clean =3D 0; idpf_queue_set(GEN_CHK, complq); =20 + idpf_xsk_setup_queue(vport, complq, + VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION); + return 0; } =20 @@ -1077,13 +1094,13 @@ static void idpf_qvec_ena_irq(struct idpf_q_vector = *qv) static struct idpf_queue_set * idpf_vector_to_queue_set(struct idpf_q_vector *qv) { - bool xdp =3D qv->vport->xdp_txq_offset; + bool xdp =3D qv->vport->xdp_txq_offset && !qv->num_xsksq; struct idpf_vport *vport =3D qv->vport; struct idpf_queue_set *qs; u32 num; =20 num =3D qv->num_rxq + qv->num_bufq + qv->num_txq + qv->num_complq; - num +=3D xdp ? qv->num_rxq * 2 : 0; + num +=3D xdp ? qv->num_rxq * 2 : qv->num_xsksq * 2; if (!num) return NULL; =20 @@ -1126,6 +1143,14 @@ idpf_vector_to_queue_set(struct idpf_q_vector *qv) qs->qs[num].type =3D VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION; qs->qs[num++].complq =3D vport->txqs[idx]->complq; } + } else { + for (u32 i =3D 0; i < qv->num_xsksq; i++) { + qs->qs[num].type =3D VIRTCHNL2_QUEUE_TYPE_TX; + qs->qs[num++].txq =3D qv->xsksq[i]; + + qs->qs[num].type =3D VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION; + qs->qs[num++].complq =3D qv->xsksq[i]->complq; + } } =20 finalize: @@ -1152,6 +1177,29 @@ static int idpf_qp_enable(const struct idpf_queue_se= t *qs, u32 qid) return err; } =20 + if (!vport->xdp_txq_offset) + goto config; + + q_vector->xsksq =3D kcalloc(DIV_ROUND_UP(vport->num_rxq_grp, + vport->num_q_vectors), + sizeof(*q_vector->xsksq), GFP_KERNEL); + if (!q_vector->xsksq) + return -ENOMEM; + + for (u32 i =3D 0; i < qs->num; i++) { + const struct idpf_queue_ptr *q =3D &qs->qs[i]; + + if (q->type !=3D VIRTCHNL2_QUEUE_TYPE_TX) + continue; + + if (!idpf_queue_has(XSK, q->txq)) + continue; + + q->txq->q_vector =3D q_vector; + q_vector->xsksq[q_vector->num_xsksq++] =3D q->txq; + } + +config: err =3D idpf_send_config_queue_set_msg(qs); if (err) { netdev_err(vport->netdev, "Could not configure queues in pair %u: %pe\n", @@ -1195,6 +1243,9 @@ static int idpf_qp_disable(const struct idpf_queue_se= t *qs, u32 qid) =20 idpf_clean_queue_set(qs); =20 + kfree(q_vector->xsksq); + q_vector->num_xsksq =3D 0; + return 0; } =20 @@ -3690,7 +3741,7 @@ static irqreturn_t idpf_vport_intr_clean_queues(int _= _always_unused irq, struct idpf_q_vector *q_vector =3D (struct idpf_q_vector *)data; =20 q_vector->total_events++; - napi_schedule(&q_vector->napi); + napi_schedule_irqoff(&q_vector->napi); =20 return IRQ_HANDLED; } @@ -3731,6 +3782,8 @@ void idpf_vport_intr_rel(struct idpf_vport *vport) for (u32 v_idx =3D 0; v_idx < vport->num_q_vectors; v_idx++) { struct idpf_q_vector *q_vector =3D &vport->q_vectors[v_idx]; =20 + kfree(q_vector->xsksq); + q_vector->xsksq =3D NULL; kfree(q_vector->complq); q_vector->complq =3D NULL; kfree(q_vector->bufq); @@ -4214,7 +4267,7 @@ static int idpf_vport_splitq_napi_poll(struct napi_st= ruct *napi, int budget) { struct idpf_q_vector *q_vector =3D container_of(napi, struct idpf_q_vector, napi); - 
bool clean_complete; + bool clean_complete =3D true; int work_done =3D 0; =20 /* Handle case where we are called by netpoll with a budget of 0 */ @@ -4224,8 +4277,13 @@ static int idpf_vport_splitq_napi_poll(struct napi_s= truct *napi, int budget) return 0; } =20 - clean_complete =3D idpf_rx_splitq_clean_all(q_vector, budget, &work_done); - clean_complete &=3D idpf_tx_splitq_clean_all(q_vector, budget, &work_done= ); + for (u32 i =3D 0; i < q_vector->num_xsksq; i++) + clean_complete &=3D idpf_xsk_xmit(q_vector->xsksq[i]); + + clean_complete &=3D idpf_tx_splitq_clean_all(q_vector, budget, + &work_done); + clean_complete &=3D idpf_rx_splitq_clean_all(q_vector, budget, + &work_done); =20 /* If work not completed, return budget and polling will return */ if (!clean_complete) { @@ -4238,7 +4296,7 @@ static int idpf_vport_splitq_napi_poll(struct napi_st= ruct *napi, int budget) /* Exit the polling mode, but don't re-enable interrupts if stack might * poll us due to busy-polling */ - if (likely(napi_complete_done(napi, work_done))) + if (napi_complete_done(napi, work_done)) idpf_vport_intr_update_itr_ena_irq(q_vector); else idpf_vport_intr_set_wb_on_itr(q_vector); @@ -4331,6 +4389,20 @@ static void idpf_vport_intr_map_vector_to_qs(struct = idpf_vport *vport) =20 qv_idx++; } + + for (i =3D 0; i < vport->num_xdp_txq; i++) { + struct idpf_tx_queue *xdpsq; + struct idpf_q_vector *qv; + + xdpsq =3D vport->txqs[vport->xdp_txq_offset + i]; + if (!idpf_queue_has(XSK, xdpsq)) + continue; + + qv =3D idpf_find_rxq_vec(vport, i); + + xdpsq->q_vector =3D qv; + qv->xsksq[qv->num_xsksq++] =3D xdpsq; + } } =20 /** @@ -4468,6 +4540,15 @@ int idpf_vport_intr_alloc(struct idpf_vport *vport) GFP_KERNEL); if (!q_vector->complq) goto error; + + if (!vport->xdp_txq_offset) + continue; + + q_vector->xsksq =3D kcalloc(rxqs_per_vector, + sizeof(*q_vector->xsksq), + GFP_KERNEL); + if (!q_vector->xsksq) + goto error; } =20 return 0; diff --git a/drivers/net/ethernet/intel/idpf/xdp.c b/drivers/net/ethernet/i= ntel/idpf/xdp.c index 180335beaae1..2b8b75a16804 100644 --- a/drivers/net/ethernet/intel/idpf/xdp.c +++ b/drivers/net/ethernet/intel/idpf/xdp.c @@ -225,7 +225,7 @@ static int idpf_xdp_parse_cqe(const struct idpf_splitq_= 4b_tx_compl_desc *desc, return upper_16_bits(val); } =20 -static u32 idpf_xdpsq_poll(struct idpf_tx_queue *xdpsq, u32 budget) +u32 idpf_xdpsq_poll(struct idpf_tx_queue *xdpsq, u32 budget) { struct idpf_compl_queue *cq =3D xdpsq->complq; u32 tx_ntc =3D xdpsq->next_to_clean; diff --git a/drivers/net/ethernet/intel/idpf/xsk.c b/drivers/net/ethernet/i= ntel/idpf/xsk.c index 2098bf160df7..5c9f6f0a9fae 100644 --- a/drivers/net/ethernet/intel/idpf/xsk.c +++ b/drivers/net/ethernet/intel/idpf/xsk.c @@ -4,8 +4,240 @@ #include =20 #include "idpf.h" +#include "xdp.h" #include "xsk.h" =20 +static void idpf_xsk_tx_timer(struct work_struct *work); + +static void idpf_xsk_setup_txq(const struct idpf_vport *vport, + struct idpf_tx_queue *txq) +{ + struct xsk_buff_pool *pool; + u32 qid; + + idpf_queue_clear(XSK, txq); + + if (!idpf_queue_has(XDP, txq)) + return; + + qid =3D txq->idx - vport->xdp_txq_offset; + + pool =3D xsk_get_pool_from_qid(vport->netdev, qid); + if (!pool || !pool->dev) + return; + + txq->pool =3D pool; + libeth_xdpsq_init_timer(txq->timer, txq, &txq->xdp_lock, + idpf_xsk_tx_timer); + + idpf_queue_assign(NOIRQ, txq, xsk_uses_need_wakeup(pool)); + idpf_queue_set(XSK, txq); +} + +static void idpf_xsk_setup_complq(const struct idpf_vport *vport, + struct idpf_compl_queue *complq) +{ + const struct 
xsk_buff_pool *pool; + u32 qid; + + idpf_queue_clear(XSK, complq); + + if (!idpf_queue_has(XDP, complq)) + return; + + qid =3D complq->txq_grp->txqs[0]->idx - vport->xdp_txq_offset; + + pool =3D xsk_get_pool_from_qid(vport->netdev, qid); + if (!pool || !pool->dev) + return; + + idpf_queue_set(XSK, complq); +} + +void idpf_xsk_setup_queue(const struct idpf_vport *vport, void *q, + enum virtchnl2_queue_type type) +{ + if (!idpf_xdp_enabled(vport)) + return; + + switch (type) { + case VIRTCHNL2_QUEUE_TYPE_TX: + idpf_xsk_setup_txq(vport, q); + break; + case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION: + idpf_xsk_setup_complq(vport, q); + break; + default: + break; + } +} + +void idpf_xsk_clear_queue(void *q, enum virtchnl2_queue_type type) +{ + struct idpf_compl_queue *complq; + struct idpf_tx_queue *txq; + + switch (type) { + case VIRTCHNL2_QUEUE_TYPE_TX: + txq =3D q; + if (!idpf_queue_has_clear(XSK, txq)) + return; + + idpf_queue_set(NOIRQ, txq); + txq->dev =3D txq->netdev->dev.parent; + break; + case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION: + complq =3D q; + idpf_queue_clear(XSK, complq); + break; + default: + break; + } +} + +void idpf_xsksq_clean(struct idpf_tx_queue *xdpsq) +{ + struct libeth_xdpsq_napi_stats ss =3D { }; + u32 ntc =3D xdpsq->next_to_clean; + struct xdp_frame_bulk bq; + struct libeth_cq_pp cp =3D { + .dev =3D xdpsq->pool->dev, + .bq =3D &bq, + .xss =3D &ss, + }; + u32 xsk_frames =3D 0; + + xdp_frame_bulk_init(&bq); + + while (ntc !=3D xdpsq->next_to_use) { + struct libeth_sqe *sqe =3D &xdpsq->tx_buf[ntc]; + + if (sqe->type) + libeth_xdp_complete_tx(sqe, &cp); + else + xsk_frames++; + + if (unlikely(++ntc =3D=3D xdpsq->desc_count)) + ntc =3D 0; + } + + xdp_flush_frame_bulk(&bq); + + if (xsk_frames) + xsk_tx_completed(xdpsq->pool, xsk_frames); +} + +static noinline u32 idpf_xsksq_complete_slow(struct idpf_tx_queue *xdpsq, + u32 done) +{ + struct libeth_xdpsq_napi_stats ss =3D { }; + u32 ntc =3D xdpsq->next_to_clean; + u32 cnt =3D xdpsq->desc_count; + struct xdp_frame_bulk bq; + struct libeth_cq_pp cp =3D { + .dev =3D xdpsq->pool->dev, + .bq =3D &bq, + .xss =3D &ss, + .napi =3D true, + }; + u32 xsk_frames =3D 0; + + xdp_frame_bulk_init(&bq); + + for (u32 i =3D 0; likely(i < done); i++) { + struct libeth_sqe *sqe =3D &xdpsq->tx_buf[ntc]; + + if (sqe->type) + libeth_xdp_complete_tx(sqe, &cp); + else + xsk_frames++; + + if (unlikely(++ntc =3D=3D cnt)) + ntc =3D 0; + } + + xdp_flush_frame_bulk(&bq); + + xdpsq->next_to_clean =3D ntc; + xdpsq->xdp_tx -=3D cp.xdp_tx; + + return xsk_frames; +} + +static __always_inline u32 idpf_xsksq_complete(void *_xdpsq, u32 budget) +{ + struct idpf_tx_queue *xdpsq =3D _xdpsq; + u32 tx_ntc =3D xdpsq->next_to_clean; + u32 tx_cnt =3D xdpsq->desc_count; + u32 done_frames; + u32 xsk_frames; + + done_frames =3D idpf_xdpsq_poll(xdpsq, budget); + if (unlikely(!done_frames)) + return 0; + + if (likely(!xdpsq->xdp_tx)) { + tx_ntc +=3D done_frames; + if (tx_ntc >=3D tx_cnt) + tx_ntc -=3D tx_cnt; + + xdpsq->next_to_clean =3D tx_ntc; + xsk_frames =3D done_frames; + + goto finalize; + } + + xsk_frames =3D idpf_xsksq_complete_slow(xdpsq, done_frames); + if (xsk_frames) +finalize: + xsk_tx_completed(xdpsq->pool, xsk_frames); + + xdpsq->pending -=3D done_frames; + + return done_frames; +} + +static u32 idpf_xsk_xmit_prep(void *_xdpsq, struct libeth_xdpsq *sq) +{ + struct idpf_tx_queue *xdpsq =3D _xdpsq; + + *sq =3D (struct libeth_xdpsq){ + .pool =3D xdpsq->pool, + .sqes =3D xdpsq->tx_buf, + .descs =3D xdpsq->desc_ring, + .count =3D xdpsq->desc_count, + .lock =3D 
&xdpsq->xdp_lock, + .ntu =3D &xdpsq->next_to_use, + .pending =3D &xdpsq->pending, + }; + + /* + * The queue is cleaned, the budget is already known, optimize out + * the second min() by passing the type limit. + */ + return U32_MAX; +} + +bool idpf_xsk_xmit(struct idpf_tx_queue *xsksq) +{ + u32 free; + + libeth_xdpsq_lock(&xsksq->xdp_lock); + + free =3D xsksq->desc_count - xsksq->pending; + if (free < xsksq->thresh) + free +=3D idpf_xsksq_complete(xsksq, xsksq->thresh); + + return libeth_xsk_xmit_do_bulk(xsksq->pool, xsksq, + min(free - 1, xsksq->thresh), + libeth_xsktmo, idpf_xsk_xmit_prep, + idpf_xdp_tx_xmit, idpf_xdp_tx_finalize); +} + +LIBETH_XDP_DEFINE_START(); +LIBETH_XDP_DEFINE_TIMER(static idpf_xsk_tx_timer, idpf_xsksq_complete); +LIBETH_XDP_DEFINE_END(); + int idpf_xsk_pool_setup(struct idpf_vport *vport, struct netdev_bpf *bpf) { struct xsk_buff_pool *pool =3D bpf->xsk.pool; --=20 2.51.0 From nobody Thu Oct 2 19:28:31 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.9]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CE0DD368086; Thu, 11 Sep 2025 16:23:39 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.9 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757607822; cv=none; b=FHx2JEBkozHbLz+/eOOxXaJvI8WzUEOlBzvDNdXuJAy2bPNPFuKaBa2kO2fwL+dejmxC5gWoI5XAnQbzqYrLcGEapfc/INmw2ZURNCOqc6xX12VX+ozy+DVNcaSX+AmL2jxfAV9K2hOupLp+IM27y0QWypEqABYSL2qOB5euzBE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757607822; c=relaxed/simple; bh=3hFUxq8ZVcWDxBI/HUF57ZS6qwDLKZ0fyM+WgQTxXsc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=O0Aa2jZN/xcG1LhdKSJ024DD2GyMk1yMEcKYpDycn4wHMFZaLgCp2jMgEdM27bb5vJQAuxNouYLDnnINDm4/TGxWcW1Mgq3QKHGfNPAO4qPSeBsWonaqjm79mOh2YCbyN/Y6Ke3mlT6WTwYOQCyTvLJ758Ab76y06P3AvgWXBrQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=SUiXXDEE; arc=none smtp.client-ip=192.198.163.9 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="SUiXXDEE" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1757607820; x=1789143820; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=3hFUxq8ZVcWDxBI/HUF57ZS6qwDLKZ0fyM+WgQTxXsc=; b=SUiXXDEESnK9taCcu1D7Gb+D2g0qRcjZDe6Q2uRozKN8JTI7JD4R5nzz ya7s3mQv2p2tIkGkpJrsgxYJfAP+MFGXKyE2kywWkndT0AIbTNPXc9p+3 IPDi7LK5xsYvsen60YdtOdSSr9tAqrRrz5Dln0Rg9BCrbwgsHWmZO3yuI JGGaOKZYgl9MvXvFeymhjPlOzYKyFoFSU1Ho8CrCpTHajmrOdCHfLMF8m tj5C2SP5piNYWAA4p/rKdiTCotIiGfWapFLJnv2viV2+6hlTbkZGBxD2Q JQw+/3QRcRtbZCJRPxXXVbexdynzGIQFUCwUU5m1xHNIBHcIYe7FDCf7B g==; X-CSE-ConnectionGUID: SG5KWcNmT6uKNCKALeF0+A== X-CSE-MsgGUID: B09Bxr/XTiii3z9cXFqiBA== X-IronPort-AV: E=McAfee;i="6800,10657,11549"; a="70635206" X-IronPort-AV: E=Sophos;i="6.18,257,1751266800"; d="scan'208";a="70635206" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by fmvoesa103.fm.intel.com with 
ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 11 Sep 2025 09:23:39 -0700 X-CSE-ConnectionGUID: aeSbfqXKQ/WxAekIpE2K1A== X-CSE-MsgGUID: cEMZx5jYRKOKCfQKN5bmfg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,257,1751266800"; d="scan'208";a="173284700" Received: from newjersey.igk.intel.com ([10.102.20.203]) by orviesa009.jf.intel.com with ESMTP; 11 Sep 2025 09:23:33 -0700 From: Alexander Lobakin To: intel-wired-lan@lists.osuosl.org Cc: Alexander Lobakin , Michal Kubiak , Maciej Fijalkowski , Tony Nguyen , Przemek Kitszel , Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Simon Horman , nxne.cnse.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH iwl-next 4/5] idpf: implement Rx path for AF_XDP Date: Thu, 11 Sep 2025 18:22:32 +0200 Message-ID: <20250911162233.1238034-5-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250911162233.1238034-1-aleksander.lobakin@intel.com> References: <20250911162233.1238034-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Implement Rx packet processing specific to AF_XDP ZC using the libeth XSk infra. Initialize queue registers before allocating buffers to avoid redundant ifs when updating the queue tail. Co-developed-by: Michal Kubiak Signed-off-by: Michal Kubiak Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/idpf/idpf_txrx.h | 38 ++- drivers/net/ethernet/intel/idpf/xsk.h | 6 + drivers/net/ethernet/intel/idpf/idpf_lib.c | 8 +- drivers/net/ethernet/intel/idpf/idpf_txrx.c | 35 ++- drivers/net/ethernet/intel/idpf/xdp.c | 24 +- drivers/net/ethernet/intel/idpf/xsk.c | 315 ++++++++++++++++++++ 6 files changed, 405 insertions(+), 21 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethe= rnet/intel/idpf/idpf_txrx.h index e8e63027425c..a42aa4669c3c 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -141,6 +141,8 @@ do { \ #define IDPF_TX_FLAGS_TUNNEL BIT(3) #define IDPF_TX_FLAGS_TSYN BIT(4) =20 +struct libeth_rq_napi_stats; + union idpf_tx_flex_desc { struct idpf_flex_tx_desc q; /* queue based scheduling */ struct idpf_flex_tx_sched_desc flow; /* flow based scheduling */ @@ -491,6 +493,8 @@ struct idpf_tx_queue_stats { * @next_to_clean: Next descriptor to clean * @next_to_alloc: RX buffer to allocate at * @xdp: XDP buffer with the current frame + * @xsk: current XDP buffer in XSk mode + * @pool: XSk pool if installed * @cached_phc_time: Cached PHC time for the Rx queue * @stats_sync: See struct u64_stats_sync * @q_stats: See union idpf_rx_queue_stats @@ -546,7 +550,13 @@ struct idpf_rx_queue { u32 next_to_clean; u32 next_to_alloc; =20 - struct libeth_xdp_buff_stash xdp; + union { + struct libeth_xdp_buff_stash xdp; + struct { + struct libeth_xdp_buff *xsk; + struct xsk_buff_pool *pool; + }; + }; u64 cached_phc_time; =20 struct u64_stats_sync stats_sync; @@ -711,16 +721,20 @@ libeth_cacheline_set_assert(struct idpf_tx_queue, 64, /** * struct idpf_buf_queue - software structure representing a buffer queue * @split_buf: buffer descriptor array - * @hdr_buf: &libeth_fqe for header buffers - * @hdr_pp: &page_pool for header buffers * @buf: &libeth_fqe for data buffers * @pp: &page_pool for data buffers + * @xsk_buf: 
&xdp_buff for XSk Rx buffers + * @pool: &xsk_buff_pool on XSk queues + * @hdr_buf: &libeth_fqe for header buffers + * @hdr_pp: &page_pool for header buffers * @tail: Tail offset * @flags: See enum idpf_queue_flags_t * @desc_count: Number of descriptors + * @thresh: refill threshold in XSk * @next_to_use: Next descriptor to use * @next_to_clean: Next descriptor to clean * @next_to_alloc: RX buffer to allocate at + * @pending: number of buffers to refill (Xsk) * @hdr_truesize: truesize for buffer headers * @truesize: truesize for data buffers * @q_id: Queue id @@ -734,14 +748,24 @@ libeth_cacheline_set_assert(struct idpf_tx_queue, 64, struct idpf_buf_queue { __cacheline_group_begin_aligned(read_mostly); struct virtchnl2_splitq_rx_buf_desc *split_buf; + union { + struct { + struct libeth_fqe *buf; + struct page_pool *pp; + }; + struct { + struct libeth_xdp_buff **xsk_buf; + struct xsk_buff_pool *pool; + }; + }; struct libeth_fqe *hdr_buf; struct page_pool *hdr_pp; - struct libeth_fqe *buf; - struct page_pool *pp; void __iomem *tail; =20 DECLARE_BITMAP(flags, __IDPF_Q_FLAGS_NBITS); u32 desc_count; + + u32 thresh; __cacheline_group_end_aligned(read_mostly); =20 __cacheline_group_begin_aligned(read_write); @@ -749,6 +773,7 @@ struct idpf_buf_queue { u32 next_to_clean; u32 next_to_alloc; =20 + u32 pending; u32 hdr_truesize; u32 truesize; __cacheline_group_end_aligned(read_write); @@ -1079,6 +1104,9 @@ netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb, netdev_tx_t idpf_tx_start(struct sk_buff *skb, struct net_device *netdev); bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_rx_queue *rxq, u16 cleaned_count); +bool idpf_rx_process_skb_fields(struct sk_buff *skb, + const struct libeth_xdp_buff *xdp, + struct libeth_rq_napi_stats *rs); int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off); =20 void idpf_wait_for_sw_marker_completion(const struct idpf_tx_queue *txq); diff --git a/drivers/net/ethernet/intel/idpf/xsk.h b/drivers/net/ethernet/i= ntel/idpf/xsk.h index d08fd51b0fc6..d5338cbef8bd 100644 --- a/drivers/net/ethernet/intel/idpf/xsk.h +++ b/drivers/net/ethernet/intel/idpf/xsk.h @@ -7,6 +7,8 @@ #include =20 enum virtchnl2_queue_type; +struct idpf_buf_queue; +struct idpf_rx_queue; struct idpf_tx_queue; struct idpf_vport; struct netdev_bpf; @@ -15,7 +17,11 @@ void idpf_xsk_setup_queue(const struct idpf_vport *vport= , void *q, enum virtchnl2_queue_type type); void idpf_xsk_clear_queue(void *q, enum virtchnl2_queue_type type); =20 +int idpf_xskfq_init(struct idpf_buf_queue *bufq); +void idpf_xskfq_rel(struct idpf_buf_queue *bufq); void idpf_xsksq_clean(struct idpf_tx_queue *xdpq); + +int idpf_xskrq_poll(struct idpf_rx_queue *rxq, u32 budget); bool idpf_xsk_xmit(struct idpf_tx_queue *xsksq); =20 int idpf_xsk_pool_setup(struct idpf_vport *vport, struct netdev_bpf *xdp); diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ether= net/intel/idpf/idpf_lib.c index 0559f1da88a9..9b8f7a6d65d6 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_lib.c +++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c @@ -1424,16 +1424,16 @@ static int idpf_vport_open(struct idpf_vport *vport= , bool rtnl) goto queues_rel; } =20 - err =3D idpf_rx_bufs_init_all(vport); + err =3D idpf_queue_reg_init(vport); if (err) { - dev_err(&adapter->pdev->dev, "Failed to initialize RX buffers for vport = %u: %d\n", + dev_err(&adapter->pdev->dev, "Failed to initialize queue registers for v= port %u: %d\n", vport->vport_id, err); goto queues_rel; } =20 - err =3D idpf_queue_reg_init(vport); + err =3D 
idpf_rx_bufs_init_all(vport); if (err) { - dev_err(&adapter->pdev->dev, "Failed to initialize queue registers for v= port %u: %d\n", + dev_err(&adapter->pdev->dev, "Failed to initialize RX buffers for vport = %u: %d\n", vport->vport_id, err); goto queues_rel; } diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethe= rnet/intel/idpf/idpf_txrx.c index 64d5211f6e51..67963c0f4541 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -389,6 +389,11 @@ static void idpf_rx_buf_rel_bufq(struct idpf_buf_queue= *bufq) if (!bufq->buf) return; =20 + if (idpf_queue_has(XSK, bufq)) { + idpf_xskfq_rel(bufq); + return; + } + /* Free all the bufs allocated and given to hw on Rx queue */ for (u32 i =3D 0; i < bufq->desc_count; i++) idpf_rx_page_rel(&bufq->buf[i]); @@ -437,11 +442,14 @@ static void idpf_rx_desc_rel(struct idpf_rx_queue *rx= q, struct device *dev, if (!rxq) return; =20 - libeth_xdp_return_stash(&rxq->xdp); + if (!idpf_queue_has(XSK, rxq)) + libeth_xdp_return_stash(&rxq->xdp); =20 if (!idpf_is_queue_model_split(model)) idpf_rx_buf_rel_all(rxq); =20 + idpf_xsk_clear_queue(rxq, VIRTCHNL2_QUEUE_TYPE_RX); + rxq->next_to_alloc =3D 0; rxq->next_to_clean =3D 0; rxq->next_to_use =3D 0; @@ -464,6 +472,7 @@ static void idpf_rx_desc_rel_bufq(struct idpf_buf_queue= *bufq, return; =20 idpf_rx_buf_rel_bufq(bufq); + idpf_xsk_clear_queue(bufq, VIRTCHNL2_QUEUE_TYPE_RX_BUFFER); =20 bufq->next_to_alloc =3D 0; bufq->next_to_clean =3D 0; @@ -751,6 +760,9 @@ static int idpf_rx_bufs_init(struct idpf_buf_queue *buf= q, }; int ret; =20 + if (idpf_queue_has(XSK, bufq)) + return idpf_xskfq_init(bufq); + ret =3D libeth_rx_fq_create(&fq, &bufq->q_vector->napi); if (ret) return ret; @@ -846,6 +858,8 @@ static int idpf_rx_desc_alloc(const struct idpf_vport *= vport, rxq->next_to_use =3D 0; idpf_queue_set(GEN_CHK, rxq); =20 + idpf_xsk_setup_queue(vport, rxq, VIRTCHNL2_QUEUE_TYPE_RX); + return 0; } =20 @@ -871,9 +885,10 @@ static int idpf_bufq_desc_alloc(const struct idpf_vpor= t *vport, bufq->next_to_alloc =3D 0; bufq->next_to_clean =3D 0; bufq->next_to_use =3D 0; - idpf_queue_set(GEN_CHK, bufq); =20 + idpf_xsk_setup_queue(vport, bufq, VIRTCHNL2_QUEUE_TYPE_RX_BUFFER); + return 0; } =20 @@ -3381,9 +3396,9 @@ __idpf_rx_process_skb_fields(struct idpf_rx_queue *rx= q, struct sk_buff *skb, return 0; } =20 -static bool idpf_rx_process_skb_fields(struct sk_buff *skb, - const struct libeth_xdp_buff *xdp, - struct libeth_rq_napi_stats *rs) +bool idpf_rx_process_skb_fields(struct sk_buff *skb, + const struct libeth_xdp_buff *xdp, + struct libeth_rq_napi_stats *rs) { struct idpf_rx_queue *rxq; =20 @@ -4242,7 +4257,9 @@ static bool idpf_rx_splitq_clean_all(struct idpf_q_ve= ctor *q_vec, int budget, struct idpf_rx_queue *rxq =3D q_vec->rx[i]; int pkts_cleaned_per_q; =20 - pkts_cleaned_per_q =3D idpf_rx_splitq_clean(rxq, budget_per_q); + pkts_cleaned_per_q =3D idpf_queue_has(XSK, rxq) ? 
+ idpf_xskrq_poll(rxq, budget_per_q) : + idpf_rx_splitq_clean(rxq, budget_per_q); /* if we clean as many as budgeted, we must not be done */ if (pkts_cleaned_per_q >=3D budget_per_q) clean_complete =3D false; @@ -4252,8 +4269,10 @@ static bool idpf_rx_splitq_clean_all(struct idpf_q_v= ector *q_vec, int budget, =20 nid =3D numa_mem_id(); =20 - for (i =3D 0; i < q_vec->num_bufq; i++) - idpf_rx_clean_refillq_all(q_vec->bufq[i], nid); + for (i =3D 0; i < q_vec->num_bufq; i++) { + if (!idpf_queue_has(XSK, q_vec->bufq[i])) + idpf_rx_clean_refillq_all(q_vec->bufq[i], nid); + } =20 return clean_complete; } diff --git a/drivers/net/ethernet/intel/idpf/xdp.c b/drivers/net/ethernet/i= ntel/idpf/xdp.c index 2b8b75a16804..cde6d56553d2 100644 --- a/drivers/net/ethernet/intel/idpf/xdp.c +++ b/drivers/net/ethernet/intel/idpf/xdp.c @@ -46,7 +46,6 @@ static int __idpf_xdp_rxq_info_init(struct idpf_rx_queue = *rxq, void *arg) { const struct idpf_vport *vport =3D rxq->q_vector->vport; bool split =3D idpf_is_queue_model_split(vport->rxq_model); - const struct page_pool *pp; int err; =20 err =3D __xdp_rxq_info_reg(&rxq->xdp_rxq, vport->netdev, rxq->idx, @@ -55,8 +54,18 @@ static int __idpf_xdp_rxq_info_init(struct idpf_rx_queue= *rxq, void *arg) if (err) return err; =20 - pp =3D split ? rxq->bufq_sets[0].bufq.pp : rxq->pp; - xdp_rxq_info_attach_page_pool(&rxq->xdp_rxq, pp); + if (idpf_queue_has(XSK, rxq)) { + err =3D xdp_rxq_info_reg_mem_model(&rxq->xdp_rxq, + MEM_TYPE_XSK_BUFF_POOL, + rxq->pool); + if (err) + goto unreg; + } else { + const struct page_pool *pp; + + pp =3D split ? rxq->bufq_sets[0].bufq.pp : rxq->pp; + xdp_rxq_info_attach_page_pool(&rxq->xdp_rxq, pp); + } =20 if (!split) return 0; @@ -65,6 +74,11 @@ static int __idpf_xdp_rxq_info_init(struct idpf_rx_queue= *rxq, void *arg) rxq->num_xdp_txq =3D vport->num_xdp_txq; =20 return 0; + +unreg: + xdp_rxq_info_unreg(&rxq->xdp_rxq); + + return err; } =20 int idpf_xdp_rxq_info_init(struct idpf_rx_queue *rxq) @@ -84,7 +98,9 @@ static int __idpf_xdp_rxq_info_deinit(struct idpf_rx_queu= e *rxq, void *arg) rxq->num_xdp_txq =3D 0; } =20 - xdp_rxq_info_detach_mem_model(&rxq->xdp_rxq); + if (!idpf_queue_has(XSK, rxq)) + xdp_rxq_info_detach_mem_model(&rxq->xdp_rxq); + xdp_rxq_info_unreg(&rxq->xdp_rxq); =20 return 0; diff --git a/drivers/net/ethernet/intel/idpf/xsk.c b/drivers/net/ethernet/i= ntel/idpf/xsk.c index 5c9f6f0a9fae..ba35dca946d5 100644 --- a/drivers/net/ethernet/intel/idpf/xsk.c +++ b/drivers/net/ethernet/intel/idpf/xsk.c @@ -9,6 +9,47 @@ =20 static void idpf_xsk_tx_timer(struct work_struct *work); =20 +static void idpf_xsk_setup_rxq(const struct idpf_vport *vport, + struct idpf_rx_queue *rxq) +{ + struct xsk_buff_pool *pool; + + pool =3D xsk_get_pool_from_qid(vport->netdev, rxq->idx); + if (!pool || !pool->dev || !xsk_buff_can_alloc(pool, 1)) + return; + + rxq->pool =3D pool; + + idpf_queue_set(XSK, rxq); +} + +static void idpf_xsk_setup_bufq(const struct idpf_vport *vport, + struct idpf_buf_queue *bufq) +{ + struct xsk_buff_pool *pool; + u32 qid =3D U32_MAX; + + for (u32 i =3D 0; i < vport->num_rxq_grp; i++) { + const struct idpf_rxq_group *grp =3D &vport->rxq_grps[i]; + + for (u32 j =3D 0; j < vport->num_bufqs_per_qgrp; j++) { + if (&grp->splitq.bufq_sets[j].bufq =3D=3D bufq) { + qid =3D grp->splitq.rxq_sets[0]->rxq.idx; + goto setup; + } + } + } + +setup: + pool =3D xsk_get_pool_from_qid(vport->netdev, qid); + if (!pool || !pool->dev || !xsk_buff_can_alloc(pool, 1)) + return; + + bufq->pool =3D pool; + + idpf_queue_set(XSK, bufq); +} + static void 
idpf_xsk_setup_txq(const struct idpf_vport *vport, struct idpf_tx_queue *txq) { @@ -61,6 +102,12 @@ void idpf_xsk_setup_queue(const struct idpf_vport *vpor= t, void *q, return; =20 switch (type) { + case VIRTCHNL2_QUEUE_TYPE_RX: + idpf_xsk_setup_rxq(vport, q); + break; + case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: + idpf_xsk_setup_bufq(vport, q); + break; case VIRTCHNL2_QUEUE_TYPE_TX: idpf_xsk_setup_txq(vport, q); break; @@ -75,9 +122,25 @@ void idpf_xsk_setup_queue(const struct idpf_vport *vpor= t, void *q, void idpf_xsk_clear_queue(void *q, enum virtchnl2_queue_type type) { struct idpf_compl_queue *complq; + struct idpf_buf_queue *bufq; + struct idpf_rx_queue *rxq; struct idpf_tx_queue *txq; =20 switch (type) { + case VIRTCHNL2_QUEUE_TYPE_RX: + rxq =3D q; + if (!idpf_queue_has_clear(XSK, rxq)) + return; + + rxq->pool =3D NULL; + break; + case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: + bufq =3D q; + if (!idpf_queue_has_clear(XSK, bufq)) + return; + + bufq->pool =3D NULL; + break; case VIRTCHNL2_QUEUE_TYPE_TX: txq =3D q; if (!idpf_queue_has_clear(XSK, txq)) @@ -197,6 +260,31 @@ static __always_inline u32 idpf_xsksq_complete(void *_= xdpsq, u32 budget) return done_frames; } =20 +static u32 idpf_xsk_tx_prep(void *_xdpsq, struct libeth_xdpsq *sq) +{ + struct idpf_tx_queue *xdpsq =3D _xdpsq; + u32 free; + + libeth_xdpsq_lock(&xdpsq->xdp_lock); + + free =3D xdpsq->desc_count - xdpsq->pending; + if (free < xdpsq->thresh) + free +=3D idpf_xsksq_complete(xdpsq, xdpsq->thresh); + + *sq =3D (struct libeth_xdpsq){ + .pool =3D xdpsq->pool, + .sqes =3D xdpsq->tx_buf, + .descs =3D xdpsq->desc_ring, + .count =3D xdpsq->desc_count, + .lock =3D &xdpsq->xdp_lock, + .ntu =3D &xdpsq->next_to_use, + .pending =3D &xdpsq->pending, + .xdp_tx =3D &xdpsq->xdp_tx, + }; + + return free; +} + static u32 idpf_xsk_xmit_prep(void *_xdpsq, struct libeth_xdpsq *sq) { struct idpf_tx_queue *xdpsq =3D _xdpsq; @@ -236,8 +324,225 @@ bool idpf_xsk_xmit(struct idpf_tx_queue *xsksq) =20 LIBETH_XDP_DEFINE_START(); LIBETH_XDP_DEFINE_TIMER(static idpf_xsk_tx_timer, idpf_xsksq_complete); +LIBETH_XSK_DEFINE_FLUSH_TX(static idpf_xsk_tx_flush_bulk, idpf_xsk_tx_prep, + idpf_xdp_tx_xmit); +LIBETH_XSK_DEFINE_RUN(static idpf_xsk_run_pass, idpf_xsk_run_prog, + idpf_xsk_tx_flush_bulk, idpf_rx_process_skb_fields); +LIBETH_XSK_DEFINE_FINALIZE(static idpf_xsk_finalize_rx, idpf_xsk_tx_flush_= bulk, + idpf_xdp_tx_finalize); LIBETH_XDP_DEFINE_END(); =20 +static void idpf_xskfqe_init(const struct libeth_xskfq_fp *fq, u32 i) +{ + struct virtchnl2_splitq_rx_buf_desc *desc =3D fq->descs; + + desc =3D &desc[i]; +#ifdef __LIBETH_WORD_ACCESS + *(u64 *)&desc->qword0 =3D i; +#else + desc->qword0.buf_id =3D cpu_to_le16(i); +#endif + desc->pkt_addr =3D cpu_to_le64(libeth_xsk_buff_xdp_get_dma(fq->fqes[i])); +} + +static bool idpf_xskfq_refill_thresh(struct idpf_buf_queue *bufq, u32 coun= t) +{ + struct libeth_xskfq_fp fq =3D { + .pool =3D bufq->pool, + .fqes =3D bufq->xsk_buf, + .descs =3D bufq->split_buf, + .ntu =3D bufq->next_to_use, + .count =3D bufq->desc_count, + }; + u32 done; + + done =3D libeth_xskfqe_alloc(&fq, count, idpf_xskfqe_init); + writel(fq.ntu, bufq->tail); + + bufq->next_to_use =3D fq.ntu; + bufq->pending -=3D done; + + return done =3D=3D count; +} + +static bool idpf_xskfq_refill(struct idpf_buf_queue *bufq) +{ + u32 count, rx_thresh =3D bufq->thresh; + + count =3D ALIGN_DOWN(bufq->pending - 1, rx_thresh); + + for (u32 i =3D 0; i < count; i +=3D rx_thresh) { + if (unlikely(!idpf_xskfq_refill_thresh(bufq, rx_thresh))) + return false; + } + + return true; +} 
+ +int idpf_xskfq_init(struct idpf_buf_queue *bufq) +{ + struct libeth_xskfq fq =3D { + .pool =3D bufq->pool, + .count =3D bufq->desc_count, + .nid =3D idpf_q_vector_to_mem(bufq->q_vector), + }; + int ret; + + ret =3D libeth_xskfq_create(&fq); + if (ret) + return ret; + + bufq->xsk_buf =3D fq.fqes; + bufq->pending =3D fq.pending; + bufq->thresh =3D fq.thresh; + bufq->rx_buf_size =3D fq.buf_len; + + if (!idpf_xskfq_refill(bufq)) + netdev_err(bufq->pool->netdev, + "failed to allocate XSk buffers for qid %d\n", + bufq->pool->queue_id); + + bufq->next_to_alloc =3D bufq->next_to_use; + + idpf_queue_clear(HSPLIT_EN, bufq); + bufq->rx_hbuf_size =3D 0; + + return 0; +} + +void idpf_xskfq_rel(struct idpf_buf_queue *bufq) +{ + struct libeth_xskfq fq =3D { + .fqes =3D bufq->xsk_buf, + }; + + libeth_xskfq_destroy(&fq); + + bufq->rx_buf_size =3D fq.buf_len; + bufq->thresh =3D fq.thresh; + bufq->pending =3D fq.pending; +} + +struct idpf_xskfq_refill_set { + struct { + struct idpf_buf_queue *q; + u32 buf_id; + u32 pending; + } bufqs[IDPF_MAX_BUFQS_PER_RXQ_GRP]; +}; + +static bool idpf_xskfq_refill_set(const struct idpf_xskfq_refill_set *set) +{ + bool ret =3D true; + + for (u32 i =3D 0; i < ARRAY_SIZE(set->bufqs); i++) { + struct idpf_buf_queue *bufq =3D set->bufqs[i].q; + u32 ntc; + + if (!bufq) + continue; + + ntc =3D set->bufqs[i].buf_id; + if (unlikely(++ntc =3D=3D bufq->desc_count)) + ntc =3D 0; + + bufq->next_to_clean =3D ntc; + bufq->pending +=3D set->bufqs[i].pending; + + if (bufq->pending > bufq->thresh) + ret &=3D idpf_xskfq_refill(bufq); + } + + return ret; +} + +int idpf_xskrq_poll(struct idpf_rx_queue *rxq, u32 budget) +{ + struct idpf_xskfq_refill_set set =3D { }; + struct libeth_rq_napi_stats rs =3D { }; + bool wake, gen, fail =3D false; + u32 ntc =3D rxq->next_to_clean; + struct libeth_xdp_buff *xdp; + LIBETH_XDP_ONSTACK_BULK(bq); + u32 cnt =3D rxq->desc_count; + + wake =3D xsk_uses_need_wakeup(rxq->pool); + if (wake) + xsk_clear_rx_need_wakeup(rxq->pool); + + gen =3D idpf_queue_has(GEN_CHK, rxq); + + libeth_xsk_tx_init_bulk(&bq, rxq->xdp_prog, rxq->xdp_rxq.dev, + rxq->xdpsqs, rxq->num_xdp_txq); + xdp =3D rxq->xsk; + + while (likely(rs.packets < budget)) { + const struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc; + struct idpf_xdp_rx_desc desc __uninitialized; + struct idpf_buf_queue *bufq; + u32 bufq_id, buf_id; + + rx_desc =3D &rxq->rx[ntc].flex_adv_nic_3_wb; + + idpf_xdp_get_qw0(&desc, rx_desc); + if (idpf_xdp_rx_gen(&desc) !=3D gen) + break; + + dma_rmb(); + + bufq_id =3D idpf_xdp_rx_bufq(&desc); + bufq =3D set.bufqs[bufq_id].q; + if (!bufq) { + bufq =3D &rxq->bufq_sets[bufq_id].bufq; + set.bufqs[bufq_id].q =3D bufq; + } + + idpf_xdp_get_qw1(&desc, rx_desc); + buf_id =3D idpf_xdp_rx_buf(&desc); + + set.bufqs[bufq_id].buf_id =3D buf_id; + set.bufqs[bufq_id].pending++; + + xdp =3D libeth_xsk_process_buff(xdp, bufq->xsk_buf[buf_id], + idpf_xdp_rx_len(&desc)); + + if (unlikely(++ntc =3D=3D cnt)) { + ntc =3D 0; + gen =3D !gen; + idpf_queue_change(GEN_CHK, rxq); + } + + if (!idpf_xdp_rx_eop(&desc) || unlikely(!xdp)) + continue; + + fail =3D !idpf_xsk_run_pass(xdp, &bq, rxq->napi, &rs, rx_desc); + xdp =3D NULL; + + if (fail) + break; + } + + idpf_xsk_finalize_rx(&bq); + + rxq->next_to_clean =3D ntc; + rxq->xsk =3D xdp; + + fail |=3D !idpf_xskfq_refill_set(&set); + + u64_stats_update_begin(&rxq->stats_sync); + u64_stats_add(&rxq->q_stats.packets, rs.packets); + u64_stats_add(&rxq->q_stats.bytes, rs.bytes); + u64_stats_update_end(&rxq->stats_sync); + + if (!wake) + return unlikely(fail) ? 
budget : rs.packets; + + if (unlikely(fail)) + xsk_set_rx_need_wakeup(rxq->pool); + + return rs.packets; +} + int idpf_xsk_pool_setup(struct idpf_vport *vport, struct netdev_bpf *bpf) { struct xsk_buff_pool *pool =3D bpf->xsk.pool; @@ -245,6 +550,16 @@ int idpf_xsk_pool_setup(struct idpf_vport *vport, stru= ct netdev_bpf *bpf) bool restart; int ret; =20 + if (pool && !IS_ALIGNED(xsk_pool_get_rx_frame_size(pool), + LIBETH_RX_BUF_STRIDE)) { + NL_SET_ERR_MSG_FMT_MOD(bpf->extack, + "%s: HW doesn't support frames sizes not aligned to %u (qid %u:= %u)", + netdev_name(vport->netdev), + LIBETH_RX_BUF_STRIDE, qid, + xsk_pool_get_rx_frame_size(pool)); + return -EINVAL; + } + restart =3D idpf_xdp_enabled(vport) && netif_running(vport->netdev); if (!restart) goto pool; --=20 2.51.0 From nobody Thu Oct 2 19:28:31 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.9]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4DDA036CDE6; Thu, 11 Sep 2025 16:23:43 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.9 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757607825; cv=none; b=XKM/ME0Je5aZwjfmeVvWLaRtj4XPrjTjl7o0qbezV+u3s+RkkafFW7aGU2uphy7QorjgAEEjTHuu+QH6pxAXij5L1dbLL8sj6TdeARUYFiJp3Oq47ybJLpsf1zNlNhSjP6Dh9B7+SFHjpvyoh8H5kCn9Mk+KcqKDwmuXP6aVGZw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757607825; c=relaxed/simple; bh=ZbzwFksgb7H1D0gKuZCJsoT2wb4x1APi3k7nuAZfsng=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=HJ9FLqX0ALHZ0NpSe+4tRmsq2f0MtKCkGxlMw5LDCIWZwNJdOCtD9nj+KDfj0vF88tMKVGO0xNayCOzn2IecuqeMp5vnQ0RsUW2+TNaMsHnMXalhiyEXAYCI0RkHBNKHXJjA533mkYqAEqOxYWWQIpIkthQnipBnjvVfD+r5oDc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=U1f1WM85; arc=none smtp.client-ip=192.198.163.9 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="U1f1WM85" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1757607823; x=1789143823; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=ZbzwFksgb7H1D0gKuZCJsoT2wb4x1APi3k7nuAZfsng=; b=U1f1WM85yQKQwYY+y8RMtkrG8IsakMUilozjaQJYOiutRpaYGKx6pKSL IPy8iQTwQlXu3oMyRL0HDPAgYm2bvXAGen0bozuW6gUMPOPPXI6EbbYAI LM/wGtB0roGJ0eCd6txA+mZRPtHjDAGWMDJ2DaoUdZ8LeGbKaCZaYd2vY rYk2v7xki+jEC48TxyWjTfTo18+5xs5judtNmGV4h0v6e050deSqhsSTG inEjrWx9j3+O152MO/WWJoCYhczzi8kzjx50HvLFGUsnSBYbVkm3yl30h sO7kzPZ2KmT1l+NZlloPYV6+i5XKlHlaEVo+gWMRqFox8YAJY3yld7OBO g==; X-CSE-ConnectionGUID: NCstvL1wT82bf3CGc1retw== X-CSE-MsgGUID: XRKs40YQQOaco0k265bHIQ== X-IronPort-AV: E=McAfee;i="6800,10657,11549"; a="70635216" X-IronPort-AV: E=Sophos;i="6.18,257,1751266800"; d="scan'208";a="70635216" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by fmvoesa103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 11 Sep 2025 09:23:43 -0700 X-CSE-ConnectionGUID: BYgh7milQWyE4Mf5GqIn9A== X-CSE-MsgGUID: iWVXcq3mT628kEEr6mXlZw== X-ExtLoop1: 1 
From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin, Michal Kubiak, Maciej Fijalkowski, Tony Nguyen,
	Przemek Kitszel, Andrew Lunn, "David S. Miller", Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Simon Horman, nxne.cnse.osdt.itp.upstreaming@intel.com,
	bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH iwl-next 5/5] idpf: enable XSk features and ndo_xsk_wakeup
Date: Thu, 11 Sep 2025 18:22:33 +0200
Message-ID: <20250911162233.1238034-6-aleksander.lobakin@intel.com>
In-Reply-To: <20250911162233.1238034-1-aleksander.lobakin@intel.com>
References: <20250911162233.1238034-1-aleksander.lobakin@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Now that the AF_XDP functionality is fully implemented, advertise the
XSk XDP feature and add the .ndo_xsk_wakeup() callback so that
applications can actually use it with this driver.

Co-developed-by: Michal Kubiak
Signed-off-by: Michal Kubiak
Signed-off-by: Alexander Lobakin
---
 drivers/net/ethernet/intel/idpf/idpf.h      |  7 +++++
 drivers/net/ethernet/intel/idpf/idpf_txrx.h | 10 ++++---
 drivers/net/ethernet/intel/idpf/xsk.h       |  4 +++
 drivers/net/ethernet/intel/idpf/idpf_lib.c  |  2 ++
 drivers/net/ethernet/intel/idpf/idpf_txrx.c |  3 +++
 drivers/net/ethernet/intel/idpf/xdp.c       |  4 ++-
 drivers/net/ethernet/intel/idpf/xsk.c       | 29 +++++++++++++++++++++
 7 files changed, 55 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h
index 6e79fa8556e9..c5ede00c5b2e 100644
--- a/drivers/net/ethernet/intel/idpf/idpf.h
+++ b/drivers/net/ethernet/intel/idpf/idpf.h
@@ -978,6 +978,13 @@ static inline void idpf_vport_ctrl_unlock(struct net_device *netdev)
 	mutex_unlock(&np->adapter->vport_ctrl_lock);
 }
 
+static inline bool idpf_vport_ctrl_is_locked(struct net_device *netdev)
+{
+	struct idpf_netdev_priv *np = netdev_priv(netdev);
+
+	return mutex_is_locked(&np->adapter->vport_ctrl_lock);
+}
+
 void idpf_statistics_task(struct work_struct *work);
 void idpf_init_task(struct work_struct *work);
 void idpf_service_task(struct work_struct *work);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index a42aa4669c3c..75b977094741 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -374,9 +374,10 @@ struct idpf_intr_reg {
  * @complq: array of completion queues
  * @xsksq: array of XSk send queues
  * @intr_reg: See struct idpf_intr_reg
- * @napi: napi handler
+ * @csd: XSk wakeup CSD
  * @total_events: Number of interrupts processed
  * @wb_on_itr: whether WB on ITR is enabled
+ * @napi: napi handler
 * @tx_dim: Data for TX net_dim algorithm
 * @tx_itr_value: TX interrupt throttling rate
 * @tx_intr_mode: Dynamic ITR or not
@@ -406,10 +407,13 @@ struct idpf_q_vector {
 	__cacheline_group_end_aligned(read_mostly);
 
 	__cacheline_group_begin_aligned(read_write);
-	struct napi_struct napi;
+	call_single_data_t csd;
+
 	u16 total_events;
 	bool wb_on_itr;
 
+	struct napi_struct napi;
+
 	struct dim tx_dim;
 	u16 tx_itr_value;
 	bool tx_intr_mode;
@@ -427,7 +431,7 @@ struct idpf_q_vector {
 	__cacheline_group_end_aligned(cold);
 };
 libeth_cacheline_set_assert(struct idpf_q_vector, 136,
-			    24 + sizeof(struct napi_struct) +
+			    56 + sizeof(struct napi_struct) +
 			    2 * sizeof(struct dim),
 			    8);
 
diff --git a/drivers/net/ethernet/intel/idpf/xsk.h b/drivers/net/ethernet/intel/idpf/xsk.h
index d5338cbef8bd..b622d08c03e8 100644
--- a/drivers/net/ethernet/intel/idpf/xsk.h
+++ b/drivers/net/ethernet/intel/idpf/xsk.h
@@ -8,14 +8,17 @@
 
 enum virtchnl2_queue_type;
 struct idpf_buf_queue;
+struct idpf_q_vector;
 struct idpf_rx_queue;
 struct idpf_tx_queue;
 struct idpf_vport;
+struct net_device;
 struct netdev_bpf;
 
 void idpf_xsk_setup_queue(const struct idpf_vport *vport, void *q,
			  enum virtchnl2_queue_type type);
 void idpf_xsk_clear_queue(void *q, enum virtchnl2_queue_type type);
+void idpf_xsk_init_wakeup(struct idpf_q_vector *qv);
 
 int idpf_xskfq_init(struct idpf_buf_queue *bufq);
 void idpf_xskfq_rel(struct idpf_buf_queue *bufq);
@@ -25,5 +28,6 @@ int idpf_xskrq_poll(struct idpf_rx_queue *rxq, u32 budget);
 bool idpf_xsk_xmit(struct idpf_tx_queue *xsksq);
 
 int idpf_xsk_pool_setup(struct idpf_vport *vport, struct netdev_bpf *xdp);
+int idpf_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags);
 
 #endif /* !_IDPF_XSK_H_ */
diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
index 9b8f7a6d65d6..8a941f0fb048 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
@@ -5,6 +5,7 @@
 #include "idpf_virtchnl.h"
 #include "idpf_ptp.h"
 #include "xdp.h"
+#include "xsk.h"
 
 static const struct net_device_ops idpf_netdev_ops;
 
@@ -2618,4 +2619,5 @@ static const struct net_device_ops idpf_netdev_ops = {
 	.ndo_hwtstamp_set	= idpf_hwtstamp_set,
 	.ndo_bpf		= idpf_xdp,
 	.ndo_xdp_xmit		= idpf_xdp_xmit,
+	.ndo_xsk_wakeup		= idpf_xsk_wakeup,
 };
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 67963c0f4541..828f7c444d30 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -1210,6 +1210,8 @@ static int idpf_qp_enable(const struct idpf_queue_set *qs, u32 qid)
 		if (!idpf_queue_has(XSK, q->txq))
 			continue;
 
+		idpf_xsk_init_wakeup(q_vector);
+
 		q->txq->q_vector = q_vector;
 		q_vector->xsksq[q_vector->num_xsksq++] = q->txq;
 	}
@@ -4418,6 +4420,7 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport)
 			continue;
 
 		qv = idpf_find_rxq_vec(vport, i);
+		idpf_xsk_init_wakeup(qv);
 
 		xdpsq->q_vector = qv;
 		qv->xsksq[qv->num_xsksq++] = xdpsq;
diff --git a/drivers/net/ethernet/intel/idpf/xdp.c b/drivers/net/ethernet/intel/idpf/xdp.c
index cde6d56553d2..21ce25b0567f 100644
--- a/drivers/net/ethernet/intel/idpf/xdp.c
+++ b/drivers/net/ethernet/intel/idpf/xdp.c
@@ -400,7 +400,9 @@ void idpf_xdp_set_features(const struct idpf_vport *vport)
 	if (!idpf_is_queue_model_split(vport->rxq_model))
 		return;
 
-	libeth_xdp_set_features_noredir(vport->netdev, &idpf_xdpmo);
+	libeth_xdp_set_features_noredir(vport->netdev, &idpf_xdpmo,
+					idpf_get_max_tx_bufs(vport->adapter),
+					libeth_xsktmo);
 }
 
 static int idpf_xdp_setup_prog(struct idpf_vport *vport,
diff --git a/drivers/net/ethernet/intel/idpf/xsk.c b/drivers/net/ethernet/intel/idpf/xsk.c
index ba35dca946d5..fd2cc43ab43c 100644
--- a/drivers/net/ethernet/intel/idpf/xsk.c
+++ b/drivers/net/ethernet/intel/idpf/xsk.c
@@ -158,6 +158,11 @@ void idpf_xsk_clear_queue(void *q, enum virtchnl2_queue_type type)
 	}
 }
 
+void idpf_xsk_init_wakeup(struct idpf_q_vector *qv)
+{
+	libeth_xsk_init_wakeup(&qv->csd, &qv->napi);
+}
+
 void idpf_xsksq_clean(struct idpf_tx_queue *xdpsq)
 {
 	struct libeth_xdpsq_napi_stats ss = { };
@@ -602,3 +607,27 @@ int idpf_xsk_pool_setup(struct idpf_vport *vport, struct netdev_bpf *bpf)
 
 	return ret;
 }
+
+int idpf_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags)
+{
+	const struct idpf_netdev_priv *np = netdev_priv(dev);
+	const struct idpf_vport *vport = np->vport;
+	struct idpf_q_vector *q_vector;
+
+	if (unlikely(idpf_vport_ctrl_is_locked(dev)))
+		return -EBUSY;
+
+	if (unlikely(!vport->link_up))
+		return -ENETDOWN;
+
+	if (unlikely(!vport->num_xdp_txq))
+		return -ENXIO;
+
+	q_vector = idpf_find_rxq_vec(vport, qid);
+	if (unlikely(!q_vector->xsksq))
+		return -ENXIO;
+
+	libeth_xsk_wakeup(&q_vector->csd, qid);
+
+	return 0;
+}
-- 
2.51.0
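
For readers unfamiliar with the need_wakeup protocol that .ndo_xsk_wakeup()
serves, below is a minimal userspace-side sketch of how an AF_XDP
application typically ends up calling into the driver path added above. It
is not part of this series: it assumes a socket bound with the
XDP_USE_NEED_WAKEUP flag and the libxdp xsk helpers (<xdp/xsk.h>), and the
helper names kick_tx_if_needed() and wait_for_rx_if_needed() are purely
illustrative.

/*
 * Illustrative only. When the kernel sets the need_wakeup flag on a ring,
 * the application issues a syscall on the XSk fd; that syscall is what
 * reaches the driver's .ndo_xsk_wakeup() callback.
 */
#include <poll.h>
#include <sys/socket.h>
#include <xdp/xsk.h>

static void kick_tx_if_needed(struct xsk_socket *xsk, struct xsk_ring_prod *tx)
{
	/* Kernel asked for a TX kick: a zero-length sendto() is enough. */
	if (xsk_ring_prod__needs_wakeup(tx))
		sendto(xsk_socket__fd(xsk), NULL, 0, MSG_DONTWAIT, NULL, 0);
}

static void wait_for_rx_if_needed(struct xsk_socket *xsk, struct xsk_ring_prod *fq)
{
	struct pollfd pfd = {
		.fd	= xsk_socket__fd(xsk),
		.events	= POLLIN,
	};

	/* Fill-queue wakeups also funnel into the driver's wakeup callback. */
	if (xsk_ring_prod__needs_wakeup(fq))
		poll(&pfd, 1, 1000);
}

When the flag is clear, no syscall is made and the driver keeps processing
the queues from its NAPI context; the per-vector CSD introduced in this
patch presumably lets the wakeup be delivered to the CPU owning that NAPI
without taking the vport control path.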