From nobody Tue Apr 7 01:03:01 2026 Received: from smtpbgbr2.qq.com (smtpbgbr2.qq.com [54.207.22.56]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 76F0E34E741; Fri, 3 Apr 2026 02:59:24 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=mucse.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=mucse.com Received: from localhost.localdomain ( [203.174.112.180]) by bizesmtp.qq.com (ESMTP) with id ; Fri, 03 Apr 2026 10:58:07 +0800 (CST) From: Dong Yibo To: andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, danishanwar@ti.com, vadim.fedorenko@linux.dev, horms@kernel.org Cc: 
linux-kernel@vger.kernel.org, netdev@vger.kernel.org, dong100@mucse.com Subject: [PATCH net-next v2 1/4] net: rnpgbe: Add interrupt handling Date: Fri, 3 Apr 2026 10:57:10 +0800 Message-Id: <20260403025713.527841-2-dong100@mucse.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20260403025713.527841-1-dong100@mucse.com> References: <20260403025713.527841-1-dong100@mucse.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Add comprehensive interrupt handling for the RNPGBE driver: - Implement MSI-X/MSI interrupt configuration and management - Create library functions for interrupt registration and 
cleanup This infrastructure enables proper interrupt handling for the RNPGBE driver. Signed-off-by: Dong Yibo --- drivers/net/ethernet/mucse/rnpgbe/Makefile | 3 +- drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h | 47 ++ .../net/ethernet/mucse/rnpgbe/rnpgbe_chip.c | 4 + drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h | 2 + .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.c | 592 ++++++++++++++++++ .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.h | 33 + .../net/ethernet/mucse/rnpgbe/rnpgbe_main.c | 46 +- 7 files changed, 724 insertions(+), 3 deletions(-) create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h diff --git a/drivers/net/ethernet/mucse/rnpgbe/Makefile b/drivers/net/ether= net/mucse/rnpgbe/Makefile index de8bcb7772ab..17574cad392a 100644 --- a/drivers/net/ethernet/mucse/rnpgbe/Makefile +++ b/drivers/net/ethernet/mucse/rnpgbe/Makefile @@ -8,4 +8,5 @@ obj-$(CONFIG_MGBE) +=3D rnpgbe.o rnpgbe-objs :=3D rnpgbe_main.o\ rnpgbe_chip.o\ rnpgbe_mbx.o\ - rnpgbe_mbx_fw.o + rnpgbe_mbx_fw.o\ + rnpgbe_lib.o diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h b/drivers/net/ether= net/mucse/rnpgbe/rnpgbe.h index 5b024f9f7e17..ea4e5b13564d 100644 --- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h +++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h @@ -6,6 +6,10 @@ =20 #include #include +#include +#include + +#include "rnpgbe_hw.h" =20 enum rnpgbe_boards { board_n500, @@ -35,21 +39,62 @@ enum { =20 struct mucse_hw { void __iomem *hw_addr; + void __iomem *ring_msix_base; struct pci_dev *pdev; struct mucse_mbx_info mbx; int port; u8 pfvfnum; }; =20 +struct mucse_ring { + struct mucse_ring *next; + struct mucse_q_vector *q_vector; + void __iomem *ring_addr; + void __iomem *irq_mask; + void __iomem *trig; + u8 queue_index; + /* hw ring idx */ + u8 rnpgbe_queue_idx; +} ____cacheline_internodealigned_in_smp; + +struct mucse_ring_container { + struct mucse_ring *ring; + u16 count; +}; + +struct mucse_q_vector { + struct mucse 
*mucse; + int v_idx; + struct mucse_ring_container rx, tx; + struct napi_struct napi; + char name[IFNAMSIZ + 18]; + /* for dynamic allocation of rings associated with this q_vector */ + struct mucse_ring ring[0] ____cacheline_internodealigned_in_smp; +}; + struct mucse_stats { u64 tx_dropped; }; =20 +#define MAX_Q_VECTORS 8 + struct mucse { struct net_device *netdev; struct pci_dev *pdev; struct mucse_hw hw; struct mucse_stats stats; +#define M_FLAG_MSI_EN BIT(0) +#define M_FLAG_MSIX_SINGLE_EN BIT(1) +#define M_FLAG_MSIX_EN BIT(2) + u32 flags; + struct mucse_ring *tx_ring[RNPGBE_MAX_QUEUES] + ____cacheline_aligned_in_smp; + struct mucse_ring *rx_ring[RNPGBE_MAX_QUEUES] + ____cacheline_aligned_in_smp; + struct mucse_q_vector *q_vector[MAX_Q_VECTORS]; + int num_tx_queues; + int num_q_vectors; + int num_rx_queues; }; =20 int rnpgbe_get_permanent_mac(struct mucse_hw *hw, u8 *perm_addr); @@ -68,4 +113,6 @@ int rnpgbe_init_hw(struct mucse_hw *hw, int board_type); =20 #define mucse_hw_wr32(hw, reg, val) \ writel((val), (hw)->hw_addr + (reg)) +#define mucse_hw_rd32(hw, reg) \ + readl((hw)->hw_addr + (reg)) #endif /* _RNPGBE_H */ diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c b/drivers/net/= ethernet/mucse/rnpgbe/rnpgbe_chip.c index ebc7b3750157..921cc325a991 100644 --- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c +++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c @@ -89,6 +89,8 @@ static void rnpgbe_init_n500(struct mucse_hw *hw) { struct mucse_mbx_info *mbx =3D &hw->mbx; =20 + hw->ring_msix_base =3D hw->hw_addr + MUCSE_N500_RING_MSIX_BASE; + mbx->fwpf_ctrl_base =3D MUCSE_N500_FWPF_CTRL_BASE; mbx->fwpf_shm_base =3D MUCSE_N500_FWPF_SHM_BASE; } @@ -104,6 +106,8 @@ static void rnpgbe_init_n210(struct mucse_hw *hw) { struct mucse_mbx_info *mbx =3D &hw->mbx; =20 + hw->ring_msix_base =3D hw->hw_addr + MUCSE_N210_RING_MSIX_BASE; + mbx->fwpf_ctrl_base =3D MUCSE_N210_FWPF_CTRL_BASE; mbx->fwpf_shm_base =3D MUCSE_N210_FWPF_SHM_BASE; } diff --git 
a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h b/drivers/net/et= hernet/mucse/rnpgbe/rnpgbe_hw.h index e77e6bc3d3e3..0dce78e4a91b 100644 --- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h +++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h @@ -6,10 +6,12 @@ =20 #define MUCSE_N500_FWPF_CTRL_BASE 0x28b00 #define MUCSE_N500_FWPF_SHM_BASE 0x2d000 +#define MUCSE_N500_RING_MSIX_BASE 0x28700 #define MUCSE_GBE_PFFW_MBX_CTRL_OFFSET 0x5500 #define MUCSE_GBE_FWPF_MBX_MASK_OFFSET 0x5700 #define MUCSE_N210_FWPF_CTRL_BASE 0x29400 #define MUCSE_N210_FWPF_SHM_BASE 0x2d900 +#define MUCSE_N210_RING_MSIX_BASE 0x29000 =20 #define RNPGBE_DMA_AXI_EN 0x0010 =20 diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c b/drivers/net/e= thernet/mucse/rnpgbe/rnpgbe_lib.c new file mode 100644 index 000000000000..ae6131032d43 --- /dev/null +++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c @@ -0,0 +1,592 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright(c) 2020 - 2025 Mucse Corporation. */ + +#include +#include + +#include "rnpgbe_lib.h" +#include "rnpgbe.h" + +/** + * rnpgbe_msix_other - Other irq handler + * @irq: irq num + * @data: private data + * + * @return: IRQ_HANDLED + **/ +static irqreturn_t rnpgbe_msix_other(int irq, void *data) +{ + return IRQ_HANDLED; +} + +static void rnpgbe_irq_disable_queues(struct mucse_q_vector *q_vector) +{ + struct mucse_ring *ring; + + /* tx/rx use one register, different bit */ + mucse_for_each_ring(ring, q_vector->tx) { + writel(INT_VALID, ring->trig); + writel((RX_INT_MASK | TX_INT_MASK), ring->irq_mask); + } +} + +static void rnpgbe_irq_enable_queues(struct mucse_q_vector *q_vector) +{ + struct mucse_ring *ring; + + /* tx/rx use one register, different bit */ + mucse_for_each_ring(ring, q_vector->tx) { + writel(0, ring->irq_mask); + writel(INT_VALID | TX_INT_MASK | RX_INT_MASK, ring->trig); + } +} + +/** + * rnpgbe_poll - NAPI Rx polling callback + * @napi: structure for representing this polling device + * @budget: how many packets driver 
is allowed to clean + * + * This function is used for legacy and MSI NAPI mode. + * + * @return: work done in this call + **/ +static int rnpgbe_poll(struct napi_struct *napi, int budget) +{ + struct mucse_q_vector *q_vector =3D + container_of(napi, struct mucse_q_vector, napi); + int work_done =3D 0; + + if (likely(napi_complete_done(napi, work_done))) + rnpgbe_irq_enable_queues(q_vector); + + return min(work_done, budget - 1); +} + +/** + * register_mbx_irq - Register mbx irq routine + * @mucse: pointer to private structure + * + * @return: 0 on success, negative on failure + **/ +int register_mbx_irq(struct mucse *mucse) +{ + struct net_device *netdev =3D mucse->netdev; + struct pci_dev *pdev =3D mucse->pdev; + int err =3D 0; + + if (mucse->flags & M_FLAG_MSIX_EN) { + err =3D request_irq(pci_irq_vector(pdev, 0), + rnpgbe_msix_other, 0, netdev->name, + mucse); + } + + return err; +} + +/** + * remove_mbx_irq - Remove mbx irq routine + * @mucse: pointer to private structure + **/ +void remove_mbx_irq(struct mucse *mucse) +{ + struct pci_dev *pdev =3D mucse->pdev; + + if (mucse->flags & M_FLAG_MSIX_EN) + free_irq(pci_irq_vector(pdev, 0), mucse); +} + +/** + * rnpgbe_set_num_queues - Set the number of queues for the device, featur= e dependent + * @mucse: pointer to private structure + * + * Determine the number of tx/rx queues + **/ +static void rnpgbe_set_num_queues(struct mucse *mucse) +{ + /* start from 1 queue */ + mucse->num_tx_queues =3D 1; + mucse->num_rx_queues =3D 1; +} + +/** + * rnpgbe_set_interrupt_capability - Set MSI-X or MSI if supported + * @mucse: pointer to private structure + * + * Attempt to configure the interrupts using the best available + * capabilities of the hardware. 
+ * + * @return: 0 on success, negative on failure + **/ +static int rnpgbe_set_interrupt_capability(struct mucse *mucse) +{ + int v_budget; + + v_budget =3D min_t(int, mucse->num_tx_queues, mucse->num_rx_queues); + v_budget =3D min_t(int, v_budget, MAX_Q_VECTORS); + v_budget =3D min_t(int, v_budget, num_online_cpus()); + /* add one vector for mbx */ + v_budget +=3D 1; + v_budget =3D pci_alloc_irq_vectors(mucse->pdev, 1, v_budget, + PCI_IRQ_MSI | PCI_IRQ_MSIX); + if (v_budget < 0) + return v_budget; + + if (mucse->pdev->msix_enabled) { + /* q_vectors do not include the mbx vector */ + if (v_budget > 1) { + mucse->flags |=3D M_FLAG_MSIX_EN; + mucse->num_q_vectors =3D v_budget - 1; + } else { + mucse->flags |=3D M_FLAG_MSIX_SINGLE_EN; + mucse->num_q_vectors =3D 1; + } + } else { + /* msi uses only 1 irq */ + mucse->num_q_vectors =3D 1; + mucse->flags |=3D M_FLAG_MSI_EN; + } + + return 0; +} + +/** + * mucse_add_ring - Add ring to ring container + * @ring: ring to be added + * @head: ring container + **/ +static void mucse_add_ring(struct mucse_ring *ring, + struct mucse_ring_container *head) +{ + ring->next =3D head->ring; + head->ring =3D ring; + head->count++; +} + +/** + * rnpgbe_alloc_q_vector - Allocate memory for a single interrupt vector + * @mucse: pointer to private structure + * @eth_queue_idx: starting queue_index for this q_vector + * @v_idx: index of vector used for this q_vector + * @r_idx: starting hw ring index for this q_vector + * @r_count: number of tx (and rx) rings to allocate + * @step: stride between consecutive hw ring indexes + * + * @return: 0 on success, -ENOMEM on allocation failure. 
+ **/ +static int rnpgbe_alloc_q_vector(struct mucse *mucse, + int eth_queue_idx, int v_idx, int r_idx, + int r_count, int step) +{ + int rxr_idx =3D r_idx, txr_idx =3D r_idx; + struct mucse_hw *hw =3D &mucse->hw; + struct mucse_q_vector *q_vector; + int txr_count, rxr_count, idx; + struct mucse_ring *ring; + int ring_count, size; + + txr_count =3D r_count; + rxr_count =3D r_count; + ring_count =3D txr_count + rxr_count; + size =3D sizeof(struct mucse_q_vector) + + (sizeof(struct mucse_ring) * ring_count); + + q_vector =3D kzalloc(size, GFP_KERNEL); + if (!q_vector) + return -ENOMEM; + + netif_napi_add(mucse->netdev, &q_vector->napi, rnpgbe_poll); + /* tie q_vector and mucse together */ + mucse->q_vector[v_idx] =3D q_vector; + q_vector->mucse =3D mucse; + q_vector->v_idx =3D v_idx; + /* if mbx use separate irq, we should add 1 */ + if (mucse->flags & M_FLAG_MSIX_EN) + q_vector->v_idx++; + + ring =3D q_vector->ring; + + for (idx =3D 0; idx < txr_count; idx++) { + mucse_add_ring(ring, &q_vector->tx); + ring->queue_index =3D eth_queue_idx + idx; + ring->rnpgbe_queue_idx =3D txr_idx; + ring->ring_addr =3D hw->hw_addr + RING_OFFSET(txr_idx); + ring->irq_mask =3D ring->ring_addr + RNPGBE_DMA_INT_MASK; + ring->trig =3D ring->ring_addr + RNPGBE_DMA_INT_TRIG; + mucse->tx_ring[ring->queue_index] =3D ring; + txr_idx +=3D step; + ring++; + } + + for (idx =3D 0; idx < rxr_count; idx++) { + mucse_add_ring(ring, &q_vector->rx); + ring->queue_index =3D eth_queue_idx + idx; + ring->rnpgbe_queue_idx =3D rxr_idx; + ring->ring_addr =3D hw->hw_addr + RING_OFFSET(rxr_idx); + ring->irq_mask =3D ring->ring_addr + RNPGBE_DMA_INT_MASK; + ring->trig =3D ring->ring_addr + RNPGBE_DMA_INT_TRIG; + mucse->rx_ring[ring->queue_index] =3D ring; + rxr_idx +=3D step; + ring++; + } + + return 0; +} + +/** + * rnpgbe_free_q_vector - Free memory allocated for specific interrupt vec= tor + * @mucse: pointer to private structure + * @v_idx: index of vector to be freed + * + * This function frees the memory 
allocated to the q_vector. In addition = if + * NAPI is enabled it will delete any references to the NAPI struct prior + * to freeing the q_vector. + **/ +static void rnpgbe_free_q_vector(struct mucse *mucse, int v_idx) +{ + struct mucse_q_vector *q_vector =3D mucse->q_vector[v_idx]; + struct mucse_ring *ring; + + mucse_for_each_ring(ring, q_vector->tx) + mucse->tx_ring[ring->queue_index] =3D NULL; + mucse_for_each_ring(ring, q_vector->rx) + mucse->rx_ring[ring->queue_index] =3D NULL; + mucse->q_vector[v_idx] =3D NULL; + netif_napi_del(&q_vector->napi); + kfree(q_vector); +} + +/** + * rnpgbe_alloc_q_vectors - Allocate memory for interrupt vectors + * @mucse: pointer to private structure + * + * @return: 0 on success, -ENOMEM if allocation fails. + **/ +static int rnpgbe_alloc_q_vectors(struct mucse *mucse) +{ + int err, ring_cnt, v_remaining =3D mucse->num_q_vectors; + int r_remaining =3D min_t(int, mucse->num_tx_queues, + mucse->num_rx_queues); + int q_vector_nums =3D 0; + int eth_queue_idx =3D 0; + int ring_step =3D 1; + int ring_idx =3D 0; + int v_idx =3D 0; + + for (; r_remaining > 0 && v_remaining > 0; v_remaining--) { + ring_cnt =3D DIV_ROUND_UP(r_remaining, v_remaining); + err =3D rnpgbe_alloc_q_vector(mucse, eth_queue_idx, + v_idx, ring_idx, ring_cnt, + ring_step); + if (err) + goto err_free_q_vector; + ring_idx +=3D ring_step * ring_cnt; + eth_queue_idx +=3D ring_cnt; + r_remaining -=3D ring_cnt; + q_vector_nums++; + v_idx++; + } + /* Record the number of q_vectors actually used */ + mucse->num_q_vectors =3D q_vector_nums; + + return 0; + +err_free_q_vector: + mucse->num_tx_queues =3D 0; + mucse->num_rx_queues =3D 0; + mucse->num_q_vectors =3D 0; + + while (v_idx--) + rnpgbe_free_q_vector(mucse, v_idx); + + return err; +} + +/** + * rnpgbe_reset_interrupt_capability - Reset irq capability setup + * @mucse: pointer to private structure + **/ +static void rnpgbe_reset_interrupt_capability(struct mucse *mucse) +{ + pci_free_irq_vectors(mucse->pdev); + mucse->flags &=3D 
~(M_FLAG_MSIX_EN | + M_FLAG_MSIX_SINGLE_EN | + M_FLAG_MSI_EN); +} + +/** + * rnpgbe_init_interrupt_scheme - Determine proper interrupt scheme + * @mucse: pointer to private structure + * + * We determine which interrupt scheme to use based on... + * - Hardware queue count + * - cpu numbers + * - irq mode (msi/legacy force 1) + * + * @return: 0 on success, negative on failure + **/ +int rnpgbe_init_interrupt_scheme(struct mucse *mucse) +{ + int err; + + rnpgbe_set_num_queues(mucse); + + err =3D rnpgbe_set_interrupt_capability(mucse); + if (err) + return err; + + err =3D rnpgbe_alloc_q_vectors(mucse); + if (err) { + rnpgbe_reset_interrupt_capability(mucse); + return err; + } + + return 0; +} + +/** + * rnpgbe_free_q_vectors - Free memory allocated for interrupt vectors + * @mucse: pointer to private structure + * + * This function frees the memory allocated to the q_vectors. In addition= if + * NAPI is enabled it will delete any references to the NAPI struct prior + * to freeing the q_vector. 
+ **/ +static void rnpgbe_free_q_vectors(struct mucse *mucse) +{ + int v_idx =3D mucse->num_q_vectors; + + mucse->num_rx_queues =3D 0; + mucse->num_tx_queues =3D 0; + mucse->num_q_vectors =3D 0; + + while (v_idx--) + rnpgbe_free_q_vector(mucse, v_idx); +} + +/** + * rnpgbe_clear_interrupt_scheme - Clear the current interrupt scheme sett= ings + * @mucse: pointer to private structure + * + * Clear interrupt specific resources and reset the structure + **/ +void rnpgbe_clear_interrupt_scheme(struct mucse *mucse) +{ + mucse->num_tx_queues =3D 0; + mucse->num_rx_queues =3D 0; + rnpgbe_free_q_vectors(mucse); + rnpgbe_reset_interrupt_capability(mucse); +} + +/** + * rnpgbe_msix_clean_rings - MSI-X irq handler for ring irqs + * @irq: irq num + * @data: private data + * + * rnpgbe_msix_clean_rings handles irqs from the rings and schedules napi + * @return: IRQ_HANDLED + **/ +static irqreturn_t rnpgbe_msix_clean_rings(int irq, void *data) +{ + struct mucse_q_vector *q_vector =3D (struct mucse_q_vector *)data; + + rnpgbe_irq_disable_queues(q_vector); + if (q_vector->rx.ring || q_vector->tx.ring) + napi_schedule_irqoff(&q_vector->napi); + + return IRQ_HANDLED; +} + +/** + * rnpgbe_int_single - MSI-X single/MSI irq handler + * @irq: irq num + * @data: private data + * @return: IRQ_HANDLED + **/ +static irqreturn_t rnpgbe_int_single(int irq, void *data) +{ + struct mucse *mucse =3D (struct mucse *)data; + struct mucse_q_vector *q_vector; + + q_vector =3D mucse->q_vector[0]; + rnpgbe_irq_disable_queues(q_vector); + if (q_vector->rx.ring || q_vector->tx.ring) + napi_schedule_irqoff(&q_vector->napi); + + return IRQ_HANDLED; +} + +/** + * rnpgbe_request_irq - Initialize interrupts + * @mucse: pointer to private structure + * + * Attempts to configure interrupts using the best available + * capabilities of the hardware and kernel. 
+ * + * @return: 0 on success, negative value on failure + **/ +int rnpgbe_request_irq(struct mucse *mucse) +{ + struct net_device *netdev =3D mucse->netdev; + struct pci_dev *pdev =3D mucse->pdev; + struct mucse_q_vector *q_vector; + int err, i; + + if (mucse->flags & M_FLAG_MSIX_EN) { + for (i =3D 0; i < mucse->num_q_vectors; i++) { + q_vector =3D mucse->q_vector[i]; + + snprintf(q_vector->name, sizeof(q_vector->name) - 1, + "%s-%s-%d", netdev->name, "TxRx", i); + + err =3D request_irq(pci_irq_vector(pdev, i + 1), + rnpgbe_msix_clean_rings, 0, + q_vector->name, + q_vector); + if (err) { + dev_err(&pdev->dev, "MSI-X req err %d: %d\n", + i + 1, err); + goto err_free_irqs; + } + } + } else { + /* msi/msix_single */ + err =3D request_irq(pci_irq_vector(pdev, 0), + rnpgbe_int_single, 0, netdev->name, + mucse); + if (err) + return err; + } + + return 0; +err_free_irqs: + while (i--) { + q_vector =3D mucse->q_vector[i]; + synchronize_irq(pci_irq_vector(pdev, i + 1)); + free_irq(pci_irq_vector(pdev, i + 1), q_vector); + } + + return err; +} + +/** + * rnpgbe_free_irq - Free interrupts + * @mucse: pointer to private structure + * + * Attempts to free interrupts according to the initialized type. 
+ **/ +void rnpgbe_free_irq(struct mucse *mucse) +{ + struct pci_dev *pdev =3D mucse->pdev; + struct mucse_q_vector *q_vector; + + if (mucse->flags & M_FLAG_MSIX_EN) { + for (int i =3D 0; i < mucse->num_q_vectors; i++) { + q_vector =3D mucse->q_vector[i]; + if (!q_vector) + continue; + + free_irq(pci_irq_vector(pdev, i + 1), q_vector); + } + } else { + free_irq(pci_irq_vector(pdev, 0), mucse); + } +} + +/** + * rnpgbe_set_ring_vector - Set the ring_vector registers, + * mapping interrupt causes to vectors + * @mucse: pointer to private structure + * @queue: queue to map the corresponding interrupt to + * @msix_vector: the vector num to map to the corresponding queue + * + */ +static void rnpgbe_set_ring_vector(struct mucse *mucse, + u8 queue, u8 msix_vector) +{ + struct mucse_hw *hw =3D &mucse->hw; + u32 data; + + data =3D hw->pfvfnum << 24; + data |=3D (msix_vector << 8); + data |=3D msix_vector; + writel(data, hw->ring_msix_base + RING_VECTOR(queue)); +} + +/** + * rnpgbe_configure_msix - Configure MSI-X hardware + * @mucse: pointer to private structure + * + * rnpgbe_configure_msix sets up the hardware to properly generate MSI-X + * interrupts. 
+ **/ +static void rnpgbe_configure_msix(struct mucse *mucse) +{ + struct mucse_q_vector *q_vector; + + if (!(mucse->flags & (M_FLAG_MSIX_EN | M_FLAG_MSIX_SINGLE_EN))) + return; + + for (int i =3D 0; i < mucse->num_q_vectors; i++) { + struct mucse_ring *ring; + + q_vector =3D mucse->q_vector[i]; + /* tx/rx use one register, different bit */ + mucse_for_each_ring(ring, q_vector->tx) { + rnpgbe_set_ring_vector(mucse, ring->rnpgbe_queue_idx, + q_vector->v_idx); + } + } +} + +static void rnpgbe_irq_enable(struct mucse *mucse) +{ + for (int i =3D 0; i < mucse->num_q_vectors; i++) + rnpgbe_irq_enable_queues(mucse->q_vector[i]); +} + +/** + * rnpgbe_irq_disable - Mask off interrupt generation on the NIC + * @mucse: board private structure + **/ +void rnpgbe_irq_disable(struct mucse *mucse) +{ + struct pci_dev *pdev =3D mucse->pdev; + + if (mucse->flags & M_FLAG_MSIX_EN) { + for (int i =3D 0; i < mucse->num_q_vectors; i++) { + rnpgbe_irq_disable_queues(mucse->q_vector[i]); + synchronize_irq(pci_irq_vector(pdev, i + 1)); + } + } else { + rnpgbe_irq_disable_queues(mucse->q_vector[0]); + synchronize_irq(pci_irq_vector(pdev, 0)); + } +} + +static void rnpgbe_napi_enable_all(struct mucse *mucse) +{ + for (int i =3D 0; i < mucse->num_q_vectors; i++) + napi_enable(&mucse->q_vector[i]->napi); +} + +static void rnpgbe_napi_disable_all(struct mucse *mucse) +{ + for (int i =3D 0; i < mucse->num_q_vectors; i++) + napi_disable(&mucse->q_vector[i]->napi); +} + +void rnpgbe_down(struct mucse *mucse) +{ + rnpgbe_irq_disable(mucse); + rnpgbe_napi_disable_all(mucse); +} + +/** + * rnpgbe_up_complete - Final step for port up + * @mucse: pointer to private structure + **/ +void rnpgbe_up_complete(struct mucse *mucse) +{ + rnpgbe_configure_msix(mucse); + rnpgbe_napi_enable_all(mucse); + rnpgbe_irq_enable(mucse); +} diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h b/drivers/net/e= thernet/mucse/rnpgbe/rnpgbe_lib.h new file mode 100644 index 000000000000..8e8234209840 --- /dev/null 
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright(c) 2020 - 2025 Mucse Corporation. */ + +#ifndef _RNPGBE_LIB_H +#define _RNPGBE_LIB_H + +struct mucse; + +#define RING_OFFSET(n) (0x1000 + 0x100 * (n)) +#define RNPGBE_DMA_INT_MASK 0x24 +#define TX_INT_MASK BIT(1) +#define RX_INT_MASK BIT(0) +#define INT_VALID (BIT(16) | BIT(17)) +#define RNPGBE_DMA_INT_TRIG 0x2c +/* | 31:24 | .... | 15:8 | 7:0 | */ +/* | pfvfnum | | tx vector | rx vector | */ +#define RING_VECTOR(n) (0x04 * (n)) + +#define mucse_for_each_ring(pos, head)\ + for (typeof((head).ring) __pos =3D (head).ring;\ + __pos ? ({ pos =3D __pos; 1; }) : 0;\ + __pos =3D __pos->next) + +int rnpgbe_init_interrupt_scheme(struct mucse *mucse); +void rnpgbe_clear_interrupt_scheme(struct mucse *mucse); +int register_mbx_irq(struct mucse *mucse); +void remove_mbx_irq(struct mucse *mucse); +int rnpgbe_request_irq(struct mucse *mucse); +void rnpgbe_free_irq(struct mucse *mucse); +void rnpgbe_irq_disable(struct mucse *mucse); +void rnpgbe_down(struct mucse *mucse); +void rnpgbe_up_complete(struct mucse *mucse); +#endif diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c b/drivers/net/= ethernet/mucse/rnpgbe/rnpgbe_main.c index 316f941629d4..78b8fefd8d60 100644 --- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c +++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c @@ -7,6 +7,7 @@ =20 #include "rnpgbe.h" #include "rnpgbe_hw.h" +#include "rnpgbe_lib.h" #include "rnpgbe_mbx_fw.h" =20 static const char rnpgbe_driver_name[] =3D "rnpgbe"; @@ -32,11 +33,28 @@ static struct pci_device_id rnpgbe_pci_tbl[] =3D { * The open entry point is called when a network interface is made * active by the system (IFF_UP). 
* - * Return: 0 + * Return: 0 on success, negative value on failure **/ static int rnpgbe_open(struct net_device *netdev) { + struct mucse *mucse =3D netdev_priv(netdev); + int err; + + err =3D rnpgbe_request_irq(mucse); + if (err) + return err; + + err =3D netif_set_real_num_queues(netdev, mucse->num_tx_queues, + mucse->num_rx_queues); + if (err) + goto err_free_irqs; + + rnpgbe_up_complete(mucse); + return 0; +err_free_irqs: + rnpgbe_free_irq(mucse); + return err; } =20 /** @@ -50,6 +68,11 @@ static int rnpgbe_open(struct net_device *netdev) **/ static int rnpgbe_close(struct net_device *netdev) { + struct mucse *mucse =3D netdev_priv(netdev); + + rnpgbe_down(mucse); + rnpgbe_free_irq(mucse); + return 0; } =20 @@ -166,11 +189,28 @@ static int rnpgbe_add_adapter(struct pci_dev *pdev, goto err_powerdown; } =20 + err =3D rnpgbe_init_interrupt_scheme(mucse); + if (err) { + dev_err(&pdev->dev, "init interrupt failed %d\n", err); + goto err_powerdown; + } + err =3D register_netdev(netdev); if (err) - goto err_powerdown; + goto err_clear_interrupt; + + err =3D register_mbx_irq(mucse); + if (err) { + dev_err(&pdev->dev, "register mbx irq failed %d\n", err); + goto err_unregister_netdev; + } =20 return 0; + +err_unregister_netdev: + unregister_netdev(netdev); +err_clear_interrupt: + rnpgbe_clear_interrupt_scheme(mucse); err_powerdown: /* notify powerdown only powerup ok */ if (!err_notify) { @@ -252,10 +292,12 @@ static void rnpgbe_rm_adapter(struct pci_dev *pdev) if (!mucse) return; netdev =3D mucse->netdev; + remove_mbx_irq(mucse); unregister_netdev(netdev); err =3D rnpgbe_send_notify(hw, false, mucse_fw_powerup); if (err) dev_warn(&pdev->dev, "Send powerdown to hw failed %d\n", err); + rnpgbe_clear_interrupt_scheme(mucse); free_netdev(netdev); } =20 --=20 2.25.1 From nobody Tue Apr 7 01:03:01 2026 Received: from smtpbgeu2.qq.com (smtpbgeu2.qq.com [18.194.254.142]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) 
by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4EA0A2848BE; Fri, 3 Apr 2026 02:59:26 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=mucse.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=mucse.com Received: from localhost.localdomain ( [203.174.112.180]) by bizesmtp.qq.com (ESMTP) with id ; Fri, 03 Apr 2026 10:58:10 +0800 (CST) From: Dong Yibo To: andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, danishanwar@ti.com, vadim.fedorenko@linux.dev, horms@kernel.org Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, dong100@mucse.com Subject: [PATCH net-next v2 2/4] net: rnpgbe: Add basic TX packet transmission support Date: Fri, 3 Apr 2026 10:57:11 +0800 Message-Id: 
<20260403025713.527841-3-dong100@mucse.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20260403025713.527841-1-dong100@mucse.com> References: <20260403025713.527841-1-dong100@mucse.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Implement basic transmit path for the RNPGBE driver: - Add TX descriptor structure (rnpgbe_tx_desc) and TX buffer management - Implement rnpgbe_xmit_frame_ring() for packet transmission - Add TX ring resource allocation and cleanup functions - Implement TX completion handling via rnpgbe_clean_tx_irq() - Implement statistics collection for TX 
packets/bytes.

This enables basic packet transmission functionality for the RNPGBE driver.

Signed-off-by: Dong Yibo
---
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h    |  73 +++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_chip.c   |   4 +
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h |   3 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.c    | 582 ++++++++++++++++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.h    |  26 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_main.c   |  33 +-
 6 files changed, 717 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
index ea4e5b13564d..2049cf457992 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
@@ -43,20 +43,84 @@ struct mucse_hw {
 	struct pci_dev *pdev;
 	struct mucse_mbx_info mbx;
 	int port;
+	u16 cycles_per_us;
 	u8 pfvfnum;
 };
 
+struct rnpgbe_tx_desc {
+	__le64 pkt_addr; /* Packet buffer address */
+	union {
+		__le64 vlan_cmd_bsz;
+		struct {
+			__le32 blen_mac_ip_len;
+			__le32 vlan_cmd; /* vlan & cmd status */
+		};
+	};
+#define M_TXD_CMD_RS 0x040000 /* Report Status */
+#define M_TXD_STAT_DD 0x020000 /* Descriptor Done */
+#define M_TXD_CMD_EOP 0x010000 /* End of Packet */
+};
+
+#define M_TX_DESC(R, i) (&(((struct rnpgbe_tx_desc *)((R)->desc))[i]))
+
+struct mucse_tx_buffer {
+	struct rnpgbe_tx_desc *next_to_watch;
+	struct sk_buff *skb;
+	unsigned int bytecount;
+	unsigned short gso_segs;
+	DEFINE_DMA_UNMAP_ADDR(dma);
+	DEFINE_DMA_UNMAP_LEN(len);
+	bool mapped_as_page; /* true if dma was mapped with dma_map_page */
+};
+
+struct mucse_queue_stats {
+	u64 packets;
+	u64 bytes;
+};
+
 struct mucse_ring {
 	struct mucse_ring *next;
 	struct mucse_q_vector *q_vector;
+	struct net_device *netdev;
+	struct device *dev;
+	void *desc;
+	struct mucse_tx_buffer *tx_buffer_info;
 	void __iomem *ring_addr;
+	void __iomem *tail;
 	void __iomem *irq_mask;
 	void __iomem *trig;
 	u8 queue_index; /* hw ring idx */
 	u8 rnpgbe_queue_idx;
+	u8 pfvfnum;
+	u16
count;
+	u16 next_to_use;
+	u16 next_to_clean;
+	dma_addr_t dma;
+	unsigned int size;
+	struct mucse_queue_stats stats;
+	struct u64_stats_sync syncp;
 } ____cacheline_internodealigned_in_smp;
 
+static inline u16 mucse_desc_unused(struct mucse_ring *ring)
+{
+	u16 ntc = ring->next_to_clean;
+	u16 ntu = ring->next_to_use;
+
+	return ((ntc > ntu) ? 0 : ring->count) + ntc - ntu - 1;
+}
+
+static inline __le64 build_ctob(u32 vlan_cmd, u32 mac_ip_len, u32 size)
+{
+	return cpu_to_le64(((u64)vlan_cmd << 32) | ((u64)mac_ip_len << 16) |
+			   ((u64)size));
+}
+
+static inline struct netdev_queue *txring_txq(const struct mucse_ring *ring)
+{
+	return netdev_get_tx_queue(ring->netdev, ring->queue_index);
+}
+
 struct mucse_ring_container {
 	struct mucse_ring *ring;
 	u16 count;
@@ -78,6 +142,9 @@ struct mucse_stats {
 
 #define MAX_Q_VECTORS 8
 
+#define M_DEFAULT_TXD 512
+#define M_DEFAULT_TX_WORK 256
+
 struct mucse {
 	struct net_device *netdev;
 	struct pci_dev *pdev;
@@ -92,6 +159,8 @@ struct mucse {
 	struct mucse_ring *rx_ring[RNPGBE_MAX_QUEUES]
 		____cacheline_aligned_in_smp;
 	struct mucse_q_vector *q_vector[MAX_Q_VECTORS];
+	int tx_ring_item_count;
+	int tx_work_limit;
 	int num_tx_queues;
 	int num_q_vectors;
 	int num_rx_queues;
@@ -115,4 +184,8 @@ int rnpgbe_init_hw(struct mucse_hw *hw, int board_type);
 	writel((val), (hw)->hw_addr + (reg))
 #define mucse_hw_rd32(hw, reg) \
 	readl((hw)->hw_addr + (reg))
+#define mucse_ring_wr32(ring, reg, val) \
+	writel((val), (ring)->ring_addr + (reg))
+#define mucse_ring_rd32(ring, reg) \
+	readl((ring)->ring_addr + (reg))
 #endif /* _RNPGBE_H */
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
index 921cc325a991..291e77d573fe 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
@@ -93,6 +93,8 @@ static void rnpgbe_init_n500(struct mucse_hw *hw)
 
 	mbx->fwpf_ctrl_base = MUCSE_N500_FWPF_CTRL_BASE;
 	mbx->fwpf_shm_base
= MUCSE_N500_FWPF_SHM_BASE;
+
+	hw->cycles_per_us = M_DEFAULT_N500_MHZ;
 }
 
 /**
@@ -110,6 +112,8 @@ static void rnpgbe_init_n210(struct mucse_hw *hw)
 
 	mbx->fwpf_ctrl_base = MUCSE_N210_FWPF_CTRL_BASE;
 	mbx->fwpf_shm_base = MUCSE_N210_FWPF_SHM_BASE;
+
+	hw->cycles_per_us = M_DEFAULT_N210_MHZ;
 }
 
 /**
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
index 0dce78e4a91b..cbc593902030 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
@@ -7,12 +7,15 @@
 #define MUCSE_N500_FWPF_CTRL_BASE 0x28b00
 #define MUCSE_N500_FWPF_SHM_BASE 0x2d000
 #define MUCSE_N500_RING_MSIX_BASE 0x28700
+#define M_DEFAULT_N500_MHZ 125
 #define MUCSE_GBE_PFFW_MBX_CTRL_OFFSET 0x5500
 #define MUCSE_GBE_FWPF_MBX_MASK_OFFSET 0x5700
 #define MUCSE_N210_FWPF_CTRL_BASE 0x29400
 #define MUCSE_N210_FWPF_SHM_BASE 0x2d900
 #define MUCSE_N210_RING_MSIX_BASE 0x29000
+#define M_DEFAULT_N210_MHZ 62
 
+#define TX_AXI_RW_EN 0xc
 #define RNPGBE_DMA_AXI_EN 0x0010
 
 #define RNPGBE_MAX_QUEUES 8
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
index ae6131032d43..daa061578976 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
@@ -3,6 +3,7 @@
 
 #include
 #include
+#include
 
 #include "rnpgbe_lib.h"
 #include "rnpgbe.h"
@@ -41,6 +42,120 @@ static void rnpgbe_irq_enable_queues(struct mucse_q_vector *q_vector)
 	}
 }
 
+/**
+ * rnpgbe_clean_tx_irq - Reclaim resources after transmit completes
+ * @q_vector: structure containing interrupt and ring information
+ * @tx_ring: tx ring to clean
+ * @napi_budget: Used to determine if we are in netpoll
+ *
+ * @return: true if work completed within budget, otherwise false
+ **/
+static bool rnpgbe_clean_tx_irq(struct mucse_q_vector *q_vector,
+				struct mucse_ring *tx_ring,
+				int napi_budget)
+{
+	int budget =
q_vector->mucse->tx_work_limit; + u64 total_bytes =3D 0, total_packets =3D 0; + struct mucse_tx_buffer *tx_buffer; + struct rnpgbe_tx_desc *tx_desc; + int i =3D tx_ring->next_to_clean; + + tx_buffer =3D &tx_ring->tx_buffer_info[i]; + tx_desc =3D M_TX_DESC(tx_ring, i); + i -=3D tx_ring->count; + + do { + struct rnpgbe_tx_desc *eop_desc =3D tx_buffer->next_to_watch; + + /* if next_to_watch is not set then there is no work pending */ + if (!eop_desc) + break; + + /* prevent any other reads prior to eop_desc */ + rmb(); + + /* if eop DD is not set pending work has not been completed */ + if (!(eop_desc->vlan_cmd & cpu_to_le32(M_TXD_STAT_DD))) + break; + /* clear next_to_watch to prevent false hangs */ + tx_buffer->next_to_watch =3D NULL; + total_bytes +=3D tx_buffer->bytecount; + total_packets +=3D tx_buffer->gso_segs; + napi_consume_skb(tx_buffer->skb, napi_budget); + if (tx_buffer->mapped_as_page) { + dma_unmap_page(tx_ring->dev, + dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + DMA_TO_DEVICE); + } else { + dma_unmap_single(tx_ring->dev, + dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + DMA_TO_DEVICE); + } + tx_buffer->skb =3D NULL; + dma_unmap_len_set(tx_buffer, len, 0); + + /* unmap remaining buffers */ + while (tx_desc !=3D eop_desc) { + tx_buffer++; + tx_desc++; + i++; + if (unlikely(!i)) { + i -=3D tx_ring->count; + tx_buffer =3D tx_ring->tx_buffer_info; + tx_desc =3D M_TX_DESC(tx_ring, 0); + } + + /* unmap any remaining paged data */ + if (dma_unmap_len(tx_buffer, len)) { + dma_unmap_page(tx_ring->dev, + dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + DMA_TO_DEVICE); + dma_unmap_len_set(tx_buffer, len, 0); + } + } + + /* move us one more past the eop_desc for start of next pkt */ + tx_buffer++; + tx_desc++; + i++; + if (unlikely(!i)) { + i -=3D tx_ring->count; + tx_buffer =3D tx_ring->tx_buffer_info; + tx_desc =3D M_TX_DESC(tx_ring, 0); + } + + prefetch(tx_desc); + budget--; + } while (likely(budget > 
0)); + netdev_tx_completed_queue(txring_txq(tx_ring), total_packets, + total_bytes); + i +=3D tx_ring->count; + tx_ring->next_to_clean =3D i; + u64_stats_update_begin(&tx_ring->syncp); + tx_ring->stats.bytes +=3D total_bytes; + tx_ring->stats.packets +=3D total_packets; + u64_stats_update_end(&tx_ring->syncp); + +#define TX_WAKE_THRESHOLD (DESC_NEEDED * 2) + if (likely(netif_carrier_ok(tx_ring->netdev) && + (mucse_desc_unused(tx_ring) >=3D TX_WAKE_THRESHOLD))) { + /* Make sure that anybody stopping the queue after this + * sees the new next_to_clean. + */ + smp_mb(); + if (__netif_subqueue_stopped(tx_ring->netdev, + tx_ring->queue_index)) { + netif_wake_subqueue(tx_ring->netdev, + tx_ring->queue_index); + } + } + + return !!budget; +} + /** * rnpgbe_poll - NAPI Rx polling callback * @napi: structure for representing this polling device @@ -53,8 +168,18 @@ static int rnpgbe_poll(struct napi_struct *napi, int bu= dget) { struct mucse_q_vector *q_vector =3D container_of(napi, struct mucse_q_vector, napi); + bool clean_complete =3D true; + struct mucse_ring *ring; int work_done =3D 0; =20 + mucse_for_each_ring(ring, q_vector->tx) { + if (!rnpgbe_clean_tx_irq(q_vector, ring, budget)) + clean_complete =3D false; + } + + if (!clean_complete) + return budget; + if (likely(napi_complete_done(napi, work_done))) rnpgbe_irq_enable_queues(q_vector); =20 @@ -205,12 +330,17 @@ static int rnpgbe_alloc_q_vector(struct mucse *mucse, ring =3D q_vector->ring; =20 for (idx =3D 0; idx < txr_count; idx++) { + ring->dev =3D &mucse->pdev->dev; mucse_add_ring(ring, &q_vector->tx); + ring->count =3D mucse->tx_ring_item_count; + ring->netdev =3D mucse->netdev; ring->queue_index =3D eth_queue_idx + idx; ring->rnpgbe_queue_idx =3D txr_idx; ring->ring_addr =3D hw->hw_addr + RING_OFFSET(txr_idx); ring->irq_mask =3D ring->ring_addr + RNPGBE_DMA_INT_MASK; ring->trig =3D ring->ring_addr + RNPGBE_DMA_INT_TRIG; + ring->pfvfnum =3D hw->pfvfnum; + u64_stats_init(&ring->syncp); 
mucse->tx_ring[ring->queue_index] =3D ring; txr_idx +=3D step; ring++; @@ -574,10 +704,88 @@ static void rnpgbe_napi_disable_all(struct mucse *muc= se) napi_disable(&mucse->q_vector[i]->napi); } =20 +/** + * rnpgbe_clean_tx_ring - Free Tx Buffers + * @tx_ring: ring to be cleaned + **/ +static void rnpgbe_clean_tx_ring(struct mucse_ring *tx_ring) +{ + u16 i =3D tx_ring->next_to_clean; + struct mucse_tx_buffer *tx_buffer =3D &tx_ring->tx_buffer_info[i]; + unsigned long size; + + /* ring already cleared, nothing to do */ + if (!tx_ring->tx_buffer_info) + return; + + while (i !=3D tx_ring->next_to_use) { + struct rnpgbe_tx_desc *eop_desc, *tx_desc; + + dev_kfree_skb_any(tx_buffer->skb); + /* unmap skb header data */ + if (dma_unmap_len(tx_buffer, len)) { + dma_unmap_single(tx_ring->dev, + dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + DMA_TO_DEVICE); + } + eop_desc =3D tx_buffer->next_to_watch; + tx_desc =3D M_TX_DESC(tx_ring, i); + /* unmap remaining buffers */ + while (tx_desc !=3D eop_desc) { + tx_buffer++; + tx_desc++; + i++; + if (unlikely(i =3D=3D tx_ring->count)) { + i =3D 0; + tx_buffer =3D tx_ring->tx_buffer_info; + tx_desc =3D M_TX_DESC(tx_ring, 0); + } + + /* unmap any remaining paged data */ + if (dma_unmap_len(tx_buffer, len)) + dma_unmap_page(tx_ring->dev, + dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + DMA_TO_DEVICE); + } + /* move us one more past the eop_desc for start of next pkt */ + tx_buffer++; + i++; + if (unlikely(i =3D=3D tx_ring->count)) { + i =3D 0; + tx_buffer =3D tx_ring->tx_buffer_info; + } + } + + netdev_tx_reset_queue(txring_txq(tx_ring)); + size =3D sizeof(struct mucse_tx_buffer) * tx_ring->count; + memset(tx_ring->tx_buffer_info, 0, size); + /* Zero out the descriptor ring */ + memset(tx_ring->desc, 0, tx_ring->size); + tx_ring->next_to_use =3D 0; + tx_ring->next_to_clean =3D 0; +} + +/** + * rnpgbe_clean_all_tx_rings - Free Tx Buffers for all queues + * @mucse: board private structure + **/ 
+static void rnpgbe_clean_all_tx_rings(struct mucse *mucse) +{ + for (int i =3D 0; i < mucse->num_tx_queues; i++) + rnpgbe_clean_tx_ring(mucse->tx_ring[i]); +} + void rnpgbe_down(struct mucse *mucse) { + struct net_device *netdev =3D mucse->netdev; + + netif_tx_stop_all_queues(netdev); rnpgbe_irq_disable(mucse); + netif_tx_disable(netdev); rnpgbe_napi_disable_all(mucse); + rnpgbe_clean_all_tx_rings(mucse); } =20 /** @@ -586,7 +794,381 @@ void rnpgbe_down(struct mucse *mucse) **/ void rnpgbe_up_complete(struct mucse *mucse) { + struct net_device *netdev =3D mucse->netdev; + rnpgbe_configure_msix(mucse); rnpgbe_napi_enable_all(mucse); rnpgbe_irq_enable(mucse); + netif_tx_start_all_queues(netdev); +} + +/** + * rnpgbe_free_tx_resources - Free Tx Resources per Queue + * @tx_ring: tx descriptor ring for a specific queue + * + * Free all transmit software resources + **/ +static void rnpgbe_free_tx_resources(struct mucse_ring *tx_ring) +{ + rnpgbe_clean_tx_ring(tx_ring); + vfree(tx_ring->tx_buffer_info); + tx_ring->tx_buffer_info =3D NULL; + /* if not set, then don't free */ + if (!tx_ring->desc) + return; + + dma_free_coherent(tx_ring->dev, tx_ring->size, tx_ring->desc, + tx_ring->dma); + tx_ring->desc =3D NULL; +} + +/** + * rnpgbe_setup_tx_resources - allocate Tx resources (Descriptors) + * @tx_ring: tx descriptor ring (for a specific queue) to setup + * @mucse: pointer to private structure + * + * @return: 0 on success, negative on failure + **/ +static int rnpgbe_setup_tx_resources(struct mucse_ring *tx_ring, + struct mucse *mucse) +{ + struct device *dev =3D tx_ring->dev; + int size; + + size =3D sizeof(struct mucse_tx_buffer) * tx_ring->count; + + tx_ring->tx_buffer_info =3D vzalloc(size); + if (!tx_ring->tx_buffer_info) + goto err_return; + /* round up to nearest 4K */ + tx_ring->size =3D tx_ring->count * sizeof(struct rnpgbe_tx_desc); + tx_ring->size =3D ALIGN(tx_ring->size, 4096); + tx_ring->desc =3D dma_alloc_coherent(dev, tx_ring->size, &tx_ring->dma, + 
GFP_KERNEL); + if (!tx_ring->desc) + goto err_free_buffer; + + tx_ring->next_to_use =3D 0; + tx_ring->next_to_clean =3D 0; + + return 0; + +err_free_buffer: + vfree(tx_ring->tx_buffer_info); +err_return: + tx_ring->tx_buffer_info =3D NULL; + return -ENOMEM; +} + +/** + * rnpgbe_configure_tx_ring - Configure Tx ring after Reset + * @mucse: pointer to private structure + * @ring: structure containing ring specific data + * + * Configure the Tx descriptor ring after a reset. + **/ +static void rnpgbe_configure_tx_ring(struct mucse *mucse, + struct mucse_ring *ring) +{ + struct mucse_hw *hw =3D &mucse->hw; + + mucse_ring_wr32(ring, RNPGBE_TX_START, 0); + mucse_ring_wr32(ring, RNPGBE_TX_BASE_ADDR_LO, (u32)ring->dma); + mucse_ring_wr32(ring, RNPGBE_TX_BASE_ADDR_HI, + (u32)(((u64)ring->dma) >> 32) | (hw->pfvfnum << 24)); + mucse_ring_wr32(ring, RNPGBE_TX_LEN, ring->count); + ring->next_to_clean =3D mucse_ring_rd32(ring, RNPGBE_TX_HEAD); + ring->next_to_use =3D ring->next_to_clean; + ring->tail =3D ring->ring_addr + RNPGBE_TX_TAIL; + writel(ring->next_to_use, ring->tail); + mucse_ring_wr32(ring, RNPGBE_TX_FETCH_CTRL, M_DEFAULT_TX_FETCH); + mucse_ring_wr32(ring, RNPGBE_TX_INT_TIMER, + M_DEFAULT_INT_TIMER * hw->cycles_per_us); + mucse_ring_wr32(ring, RNPGBE_TX_INT_PKTCNT, M_DEFAULT_INT_PKTCNT); + /* Ensure all config is written before enabling queue */ + wmb(); + mucse_ring_wr32(ring, RNPGBE_TX_START, 1); +} + +/** + * rnpgbe_configure_tx - Configure Transmit Unit after Reset + * @mucse: pointer to private structure + * + * Configure the Tx DMA after a reset. 
+ **/ +void rnpgbe_configure_tx(struct mucse *mucse) +{ + struct mucse_hw *hw =3D &mucse->hw; + u32 i, dma_axi_ctl; + + dma_axi_ctl =3D mucse_hw_rd32(hw, RNPGBE_DMA_AXI_EN); + dma_axi_ctl |=3D TX_AXI_RW_EN; + mucse_hw_wr32(hw, RNPGBE_DMA_AXI_EN, dma_axi_ctl); + /* Setup the HW Tx Head and Tail descriptor pointers */ + for (i =3D 0; i < mucse->num_tx_queues; i++) + rnpgbe_configure_tx_ring(mucse, mucse->tx_ring[i]); +} + +/** + * rnpgbe_setup_all_tx_resources - allocate all queues Tx resources + * @mucse: pointer to private structure + * + * Allocate memory for tx_ring. + * + * @return: 0 on success, negative on failure + **/ +int rnpgbe_setup_all_tx_resources(struct mucse *mucse) +{ + int i, err =3D 0; + + for (i =3D 0; i < mucse->num_tx_queues; i++) { + err =3D rnpgbe_setup_tx_resources(mucse->tx_ring[i], mucse); + if (!err) + continue; + + goto err_free_res; + } + + return 0; +err_free_res: + while (i--) + rnpgbe_free_tx_resources(mucse->tx_ring[i]); + return err; +} + +/** + * rnpgbe_free_all_tx_resources - Free Tx Resources for All Queues + * @mucse: pointer to private structure + * + * Free all transmit software resources + **/ +void rnpgbe_free_all_tx_resources(struct mucse *mucse) +{ + for (int i =3D 0; i < (mucse->num_tx_queues); i++) + rnpgbe_free_tx_resources(mucse->tx_ring[i]); +} + +static int rnpgbe_tx_map(struct mucse_ring *tx_ring, + struct mucse_tx_buffer *first, u32 mac_ip_len, + u32 tx_flags) +{ + /* hw need this in high 8 bytes desc */ + u64 fun_id =3D ((u64)(tx_ring->pfvfnum) << (56)); + struct mucse_tx_buffer *tx_buffer; + struct sk_buff *skb =3D first->skb; + struct rnpgbe_tx_desc *tx_desc; + u16 i =3D tx_ring->next_to_use; + unsigned int data_len, size; + skb_frag_t *frag; + dma_addr_t dma; + + tx_desc =3D M_TX_DESC(tx_ring, i); + size =3D skb_headlen(skb); + data_len =3D skb->data_len; + frag =3D &skb_shinfo(skb)->frags[0]; + + if (size) { + dma =3D dma_map_single(tx_ring->dev, skb->data, size, DMA_TO_DEVICE); + first->mapped_as_page =3D 
false; + } else if (data_len) { + size =3D skb_frag_size(frag); + dma =3D skb_frag_dma_map(tx_ring->dev, frag, 0, + size, DMA_TO_DEVICE); + first->mapped_as_page =3D true; + data_len -=3D size; + frag++; + } else { + goto err_unmap; + } + + tx_buffer =3D first; + + dma_unmap_len_set(tx_buffer, len, 0); + dma_unmap_addr_set(tx_buffer, dma, 0); + + for (;; frag++) { + if (dma_mapping_error(tx_ring->dev, dma)) + goto err_unmap; + + /* record length, and DMA address */ + dma_unmap_len_set(tx_buffer, len, size); + dma_unmap_addr_set(tx_buffer, dma, dma); + + tx_desc->pkt_addr =3D cpu_to_le64(dma | fun_id); + + while (unlikely(size > M_MAX_DATA_PER_TXD)) { + tx_desc->vlan_cmd_bsz =3D build_ctob(tx_flags, + mac_ip_len, + M_MAX_DATA_PER_TXD); + i++; + tx_desc++; + if (i =3D=3D tx_ring->count) { + tx_desc =3D M_TX_DESC(tx_ring, 0); + i =3D 0; + } + dma +=3D M_MAX_DATA_PER_TXD; + size -=3D M_MAX_DATA_PER_TXD; + tx_desc->pkt_addr =3D cpu_to_le64(dma | fun_id); + } + + if (likely(!data_len)) + break; + tx_desc->vlan_cmd_bsz =3D build_ctob(tx_flags, mac_ip_len, size); + i++; + tx_desc++; + if (i =3D=3D tx_ring->count) { + tx_desc =3D M_TX_DESC(tx_ring, 0); + i =3D 0; + } + + size =3D skb_frag_size(frag); + data_len -=3D size; + dma =3D skb_frag_dma_map(tx_ring->dev, frag, 0, size, + DMA_TO_DEVICE); + tx_buffer =3D &tx_ring->tx_buffer_info[i]; + tx_buffer->mapped_as_page =3D true; + } + + /* write last descriptor with RS and EOP bits */ + tx_desc->vlan_cmd_bsz =3D build_ctob(tx_flags | M_TXD_CMD_EOP | + M_TXD_CMD_RS, + mac_ip_len, size); + + /* + * Force memory writes to complete before letting h/w know there + * are new descriptors to fetch. (Only applicable for weak-ordered + * memory model archs, such as IA-64). + * + * We also need this memory barrier to make certain all of the + * status bits have been updated before next_to_watch is written. 
+ */ + wmb(); + /* set next_to_watch value indicating a packet is present */ + first->next_to_watch =3D tx_desc; + i++; + if (i =3D=3D tx_ring->count) + i =3D 0; + tx_ring->next_to_use =3D i; + skb_tx_timestamp(skb); + netdev_tx_sent_queue(txring_txq(tx_ring), first->bytecount); + /* notify HW of packet */ + writel(i, tx_ring->tail); + + return 0; +err_unmap: + for (;;) { + tx_buffer =3D &tx_ring->tx_buffer_info[i]; + if (dma_unmap_len(tx_buffer, len)) { + if (tx_buffer->mapped_as_page) { + dma_unmap_page(tx_ring->dev, + dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + DMA_TO_DEVICE); + } else { + dma_unmap_single(tx_ring->dev, + dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + DMA_TO_DEVICE); + } + } + dma_unmap_len_set(tx_buffer, len, 0); + dma_unmap_addr_set(tx_buffer, dma, 0); + if (tx_buffer =3D=3D first) + break; + if (i =3D=3D 0) + i +=3D tx_ring->count; + i--; + } + dev_kfree_skb_any(first->skb); + first->skb =3D NULL; + tx_ring->next_to_use =3D i; + + return -ENOMEM; +} + +static int rnpgbe_maybe_stop_tx(struct mucse_ring *tx_ring, u16 size) +{ + if (likely(mucse_desc_unused(tx_ring) >=3D size)) + return 0; + + netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index); + /* Herbert's original patch had: + * smp_mb__after_netif_stop_queue(); + * but since that doesn't exist yet, just open code it. + */ + smp_mb(); + + /* We need to check again in a case another CPU has just + * made room available. + */ + if (likely(mucse_desc_unused(tx_ring) < size)) + return -EBUSY; + + /* A reprieve! 
- use start_queue because it doesn't call schedule */ + netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index); + + return 0; +} + +netdev_tx_t rnpgbe_xmit_frame_ring(struct sk_buff *skb, + struct mucse_ring *tx_ring) +{ + u16 count =3D TXD_USE_COUNT(skb_headlen(skb)); + /* hw requires it not zero */ + u32 mac_ip_len =3D M_DEFAULT_MAC_IP_LEN; + struct mucse_tx_buffer *first; + u32 tx_flags =3D 0; + unsigned short f; + + for (f =3D 0; f < skb_shinfo(skb)->nr_frags; f++) { + skb_frag_t *frag_temp =3D &skb_shinfo(skb)->frags[f]; + + count +=3D TXD_USE_COUNT(skb_frag_size(frag_temp)); + } + + if (rnpgbe_maybe_stop_tx(tx_ring, count + 3)) + return NETDEV_TX_BUSY; + + /* record the location of the first descriptor for this packet */ + first =3D &tx_ring->tx_buffer_info[tx_ring->next_to_use]; + first->skb =3D skb; + first->bytecount =3D skb->len; + first->gso_segs =3D 1; + + if (rnpgbe_tx_map(tx_ring, first, mac_ip_len, tx_flags)) + goto out; + + rnpgbe_maybe_stop_tx(tx_ring, DESC_NEEDED); +out: + return NETDEV_TX_OK; +} + +/** + * rnpgbe_get_stats64 - Get stats for this netdev + * @netdev: network interface device structure + * @stats: stats data + **/ +void rnpgbe_get_stats64(struct net_device *netdev, + struct rtnl_link_stats64 *stats) +{ + struct mucse *mucse =3D netdev_priv(netdev); + int i; + + rcu_read_lock(); + for (i =3D 0; i < mucse->num_tx_queues; i++) { + struct mucse_ring *ring =3D READ_ONCE(mucse->tx_ring[i]); + u64 bytes, packets; + unsigned int start; + + if (ring) { + do { + start =3D u64_stats_fetch_begin(&ring->syncp); + packets =3D ring->stats.packets; + bytes =3D ring->stats.bytes; + } while (u64_stats_fetch_retry(&ring->syncp, start)); + stats->tx_packets +=3D packets; + stats->tx_bytes +=3D bytes; + } + } + rcu_read_unlock(); } diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h b/drivers/net/e= thernet/mucse/rnpgbe/rnpgbe_lib.h index 8e8234209840..2c2796764c2d 100644 --- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h +++ 
b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h
@@ -5,17 +5,36 @@
 #define _RNPGBE_LIB_H
 
 struct mucse;
+struct mucse_ring;
 
 #define RING_OFFSET(n) (0x1000 + 0x100 * (n))
+#define RNPGBE_TX_START 0x18
 #define RNPGBE_DMA_INT_MASK 0x24
 #define TX_INT_MASK BIT(1)
 #define RX_INT_MASK BIT(0)
 #define INT_VALID (BIT(16) | BIT(17))
+#define RNPGBE_TX_BASE_ADDR_HI 0x60
+#define RNPGBE_TX_BASE_ADDR_LO 0x64
+#define RNPGBE_TX_LEN 0x68
+#define RNPGBE_TX_HEAD 0x6c
+#define RNPGBE_TX_TAIL 0x70
+#define M_DEFAULT_TX_FETCH 0x80008
+#define RNPGBE_TX_FETCH_CTRL 0x74
+#define M_DEFAULT_INT_TIMER 100
+#define RNPGBE_TX_INT_TIMER 0x78
+#define M_DEFAULT_INT_PKTCNT 48
+#define RNPGBE_TX_INT_PKTCNT 0x7c
 #define RNPGBE_DMA_INT_TRIG 0x2c
 /* | 31:24   | .... | 15:8      | 7:0       | */
 /* | pfvfnum |      | tx vector | rx vector | */
 #define RING_VECTOR(n) (0x04 * (n))
 
+#define M_MAX_TXD_PWR 12
+#define M_MAX_DATA_PER_TXD (0x1 << M_MAX_TXD_PWR)
+#define TXD_USE_COUNT(S) DIV_ROUND_UP((S), M_MAX_DATA_PER_TXD)
+#define DESC_NEEDED (MAX_SKB_FRAGS + 4)
+/* hw requires this to be non-zero */
+#define M_DEFAULT_MAC_IP_LEN 20
 #define mucse_for_each_ring(pos, head)\
 	for (typeof((head).ring) __pos = (head).ring;\
	     __pos ?
({ pos = __pos; 1; }) : 0;\
@@ -30,4 +49,11 @@ void rnpgbe_free_irq(struct mucse *mucse);
 void rnpgbe_irq_disable(struct mucse *mucse);
 void rnpgbe_down(struct mucse *mucse);
 void rnpgbe_up_complete(struct mucse *mucse);
+void rnpgbe_configure_tx(struct mucse *mucse);
+int rnpgbe_setup_all_tx_resources(struct mucse *mucse);
+void rnpgbe_free_all_tx_resources(struct mucse *mucse);
+netdev_tx_t rnpgbe_xmit_frame_ring(struct sk_buff *skb,
+				   struct mucse_ring *tx_ring);
+void rnpgbe_get_stats64(struct net_device *netdev,
+			struct rtnl_link_stats64 *stats);
 #endif
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
index 78b8fefd8d60..4f9cd0065a4b 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
@@ -26,6 +26,17 @@ static struct pci_device_id rnpgbe_pci_tbl[] = {
 	{0, },
 };
 
+/**
+ * rnpgbe_configure - Configure info to hw
+ * @mucse: pointer to private structure
+ *
+ * rnpgbe_configure configures mac, tx and rx registers to hw
+ **/
+static void rnpgbe_configure(struct mucse *mucse)
+{
+	rnpgbe_configure_tx(mucse);
+}
+
 /**
  * rnpgbe_open - Called when a network interface is made active
  * @netdev: network interface device structure
@@ -49,6 +60,11 @@ static int rnpgbe_open(struct net_device *netdev)
 	if (err)
 		goto err_free_irqs;
 
+	err = rnpgbe_setup_all_tx_resources(mucse);
+	if (err)
+		goto err_free_irqs;
+
+	rnpgbe_configure(mucse);
 	rnpgbe_up_complete(mucse);
 
 	return 0;
@@ -72,6 +88,7 @@ static int rnpgbe_close(struct net_device *netdev)
 
 	rnpgbe_down(mucse);
 	rnpgbe_free_irq(mucse);
+	rnpgbe_free_all_tx_resources(mucse);
 
 	return 0;
 }
@@ -81,25 +98,32 @@ static int rnpgbe_close(struct net_device *netdev)
  * @skb: skb structure to be sent
  * @netdev: network interface device structure
  *
- * Return: NETDEV_TX_OK
+ * Return: NETDEV_TX_OK or NETDEV_TX_BUSY when insufficient descriptors
 **/
static netdev_tx_t rnpgbe_xmit_frame(struct sk_buff
*skb,
				     struct net_device *netdev)
 {
 	struct mucse *mucse = netdev_priv(netdev);
+	struct mucse_ring *tx_ring;
 
-	dev_kfree_skb_any(skb);
-	mucse->stats.tx_dropped++;
+	tx_ring = mucse->tx_ring[skb_get_queue_mapping(skb)];
 
-	return NETDEV_TX_OK;
+	return rnpgbe_xmit_frame_ring(skb, tx_ring);
 }
 
 static const struct net_device_ops rnpgbe_netdev_ops = {
 	.ndo_open = rnpgbe_open,
 	.ndo_stop = rnpgbe_close,
 	.ndo_start_xmit = rnpgbe_xmit_frame,
+	.ndo_get_stats64 = rnpgbe_get_stats64,
 };
 
+static void rnpgbe_sw_init(struct mucse *mucse)
+{
+	mucse->tx_ring_item_count = M_DEFAULT_TXD;
+	mucse->tx_work_limit = M_DEFAULT_TX_WORK;
+}
+
 /**
  * rnpgbe_add_adapter - Add netdev for this pci_dev
  * @pdev: PCI device information structure
@@ -172,6 +196,7 @@ static int rnpgbe_add_adapter(struct pci_dev *pdev,
 	}
 
 	netdev->netdev_ops = &rnpgbe_netdev_ops;
+	rnpgbe_sw_init(mucse);
 	err = rnpgbe_reset_hw(hw);
 	if (err) {
 		dev_err(&pdev->dev, "Hw reset failed %d\n", err);
-- 
2.25.1

From nobody Tue Apr 7 01:03:01 2026
From: Dong Yibo
To: andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, danishanwar@ti.com, vadim.fedorenko@linux.dev, horms@kernel.org
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, dong100@mucse.com
Subject: [PATCH net-next v2 3/4] net: rnpgbe: Add RX packet reception support
Date: Fri, 3 Apr 2026 10:57:12 +0800
Message-Id: <20260403025713.527841-4-dong100@mucse.com>
In-Reply-To: <20260403025713.527841-1-dong100@mucse.com>
References: <20260403025713.527841-1-dong100@mucse.com>
Add basic RX packet reception infrastructure to the rnpgbe driver:

- Add RX descriptor structure (union rnpgbe_rx_desc) with write-back format for hardware status
- Add RX buffer management using page_pool for efficient page recycling
- Implement NAPI poll callback (rnpgbe_poll) for RX processing
- Add RX ring setup and cleanup functions
- Implement packet building from page buffer
- Add RX statistics tracking

Signed-off-by: Dong Yibo
---
 drivers/net/ethernet/mucse/Kconfig            |   1 +
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h    |  49 +-
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h |   1 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.c    | 605 ++++++++++++++++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.h    |  32 +-
 .../net/ethernet/mucse/rnpgbe/rnpgbe_main.c   |   9 +
 6 files changed, 695 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mucse/Kconfig b/drivers/net/ethernet/mucse/Kconfig
index 0b3e853d625f..be0fdf268484 100644
---
a/drivers/net/ethernet/mucse/Kconfig +++ b/drivers/net/ethernet/mucse/Kconfig @@ -19,6 +19,7 @@ if NET_VENDOR_MUCSE config MGBE tristate "Mucse(R) 1GbE PCI Express adapters support" depends on PCI + select PAGE_POOL help This driver supports Mucse(R) 1GbE PCI Express family of adapters. diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h b/drivers/net/ether= net/mucse/rnpgbe/rnpgbe.h index 2049cf457992..87cb9e9e3f0f 100644 --- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h +++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h @@ -61,7 +61,32 @@ struct rnpgbe_tx_desc { #define M_TXD_CMD_EOP 0x010000 /* End of Packet */ }; =20 +union rnpgbe_rx_desc { + struct { + __le64 pkt_addr; /* Packet buffer address */ + __le64 resv_cmd; /* cmd status */ + }; + struct { + __le32 rss_hash; /* RSS HASH */ + __le16 mark; /* mark info */ + __le16 rev1; + __le16 len; /* Packet length */ + __le16 padding_len; + __le16 vlan; /* VLAN tag */ + __le16 cmd; /* cmd status */ +#define M_RXD_STAT_DD BIT(1) /* Descriptor Done */ +#define M_RXD_STAT_EOP BIT(0) /* End of Packet */ + } wb; +}; + #define M_TX_DESC(R, i) (&(((struct rnpgbe_tx_desc *)((R)->desc))[i])) +#define M_RX_DESC(R, i) (&(((union rnpgbe_rx_desc *)((R)->desc))[i])) + +static inline __le16 rnpgbe_test_staterr(union rnpgbe_rx_desc *rx_desc, + const u16 stat_err_bits) +{ + return rx_desc->wb.cmd & cpu_to_le16(stat_err_bits); +} =20 struct mucse_tx_buffer { struct rnpgbe_tx_desc *next_to_watch; @@ -78,13 +103,24 @@ struct mucse_queue_stats { u64 bytes; }; =20 +struct mucse_rx_buffer { + struct sk_buff *skb; + dma_addr_t dma; + struct page *page; + u32 page_offset; +}; + struct mucse_ring { struct mucse_ring *next; struct mucse_q_vector *q_vector; struct net_device *netdev; struct device *dev; + struct page_pool *page_pool; void *desc; - struct mucse_tx_buffer *tx_buffer_info; + union { + struct mucse_tx_buffer *tx_buffer_info; + struct mucse_rx_buffer *rx_buffer_info; + }; void __iomem *ring_addr; void __iomem *tail; void __iomem 
*irq_mask; @@ -110,6 +146,15 @@ static inline u16 mucse_desc_unused(struct mucse_ring = *ring) return ((ntc > ntu) ? 0 : ring->count) + ntc - ntu - 1; } =20 +static inline u16 mucse_desc_unused_rx(struct mucse_ring *ring) +{ + u16 ntc =3D ring->next_to_clean; + u16 ntu =3D ring->next_to_use; + + /* 16 * 16 =3D 256 tlp-max-payload size */ + return ((ntc > ntu) ? 0 : ring->count) + ntc - ntu - 16; +} + static inline __le64 build_ctob(u32 vlan_cmd, u32 mac_ip_len, u32 size) { return cpu_to_le64(((u64)vlan_cmd << 32) | ((u64)mac_ip_len << 16) | @@ -143,6 +188,7 @@ struct mucse_stats { #define MAX_Q_VECTORS 8 =20 #define M_DEFAULT_TXD 512 +#define M_DEFAULT_RXD 512 #define M_DEFAULT_TX_WORK 256 =20 struct mucse { @@ -163,6 +209,7 @@ struct mucse { int tx_work_limit; int num_tx_queues; int num_q_vectors; + int rx_ring_item_count; int num_rx_queues; }; =20 diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h b/drivers/net/et= hernet/mucse/rnpgbe/rnpgbe_hw.h index cbc593902030..03688586b447 100644 --- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h +++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h @@ -16,6 +16,7 @@ #define M_DEFAULT_N210_MHZ 62 =20 #define TX_AXI_RW_EN 0xc +#define RX_AXI_RW_EN 0x03 #define RNPGBE_DMA_AXI_EN 0x0010 =20 #define RNPGBE_MAX_QUEUES 8 diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c b/drivers/net/e= thernet/mucse/rnpgbe/rnpgbe_lib.c index daa061578976..7e057e2f948b 100644 --- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c +++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c @@ -3,7 +3,9 @@ =20 #include #include +#include #include +#include =20 #include "rnpgbe_lib.h" #include "rnpgbe.h" @@ -156,6 +158,340 @@ static bool rnpgbe_clean_tx_irq(struct mucse_q_vector= *q_vector, return !!budget; } =20 +static bool mucse_alloc_mapped_page(struct mucse_ring *rx_ring, + struct mucse_rx_buffer *bi) +{ + struct page *page =3D bi->page; + dma_addr_t dma; + + if (page) + return true; + + page =3D 
page_pool_dev_alloc_pages(rx_ring->page_pool); + if (unlikely(!page)) + return false; + dma =3D page_pool_get_dma_addr(page); + + bi->dma =3D dma; + bi->page =3D page; + bi->page_offset =3D RNPGBE_SKB_PAD; + + return true; +} + +static void mucse_update_rx_tail(struct mucse_ring *rx_ring, + u32 val) +{ + rx_ring->next_to_use =3D val; + /* + * Force memory writes to complete before letting h/w + * know there are new descriptors to fetch. (Only + * applicable for weak-ordered memory model archs, + * such as IA-64). + */ + wmb(); + writel(val, rx_ring->tail); +} + +/** + * rnpgbe_alloc_rx_buffers - Replace used receive buffers + * @rx_ring: ring to place buffers on + * @cleaned_count: number of buffers to replace + * @return: true if alloc failed + **/ +static bool rnpgbe_alloc_rx_buffers(struct mucse_ring *rx_ring, + u16 cleaned_count) +{ + u64 fun_id =3D ((u64)(rx_ring->pfvfnum) << 56); + union rnpgbe_rx_desc *rx_desc; + u16 i =3D rx_ring->next_to_use; + struct mucse_rx_buffer *bi; + bool err =3D false; + /* nothing to do */ + if (!cleaned_count) + return err; + + rx_desc =3D M_RX_DESC(rx_ring, i); + bi =3D &rx_ring->rx_buffer_info[i]; + i -=3D rx_ring->count; + + do { + if (!mucse_alloc_mapped_page(rx_ring, bi)) { + err =3D true; + break; + } + + rx_desc->pkt_addr =3D cpu_to_le64(bi->dma + bi->page_offset + + fun_id); + /* clean dd */ + rx_desc->resv_cmd =3D 0; + rx_desc++; + bi++; + i++; + if (unlikely(!i)) { + rx_desc =3D M_RX_DESC(rx_ring, 0); + bi =3D rx_ring->rx_buffer_info; + i -=3D rx_ring->count; + } + cleaned_count--; + } while (cleaned_count); + + i +=3D rx_ring->count; + + if (rx_ring->next_to_use !=3D i) + mucse_update_rx_tail(rx_ring, i); + + return err; +} + +/** + * rnpgbe_get_buffer - Get the rx_buffer to be used + * @rx_ring: pointer to rx ring + * @skb: pointer skb for this packet + * @size: data size in this desc + * @return: rx_buffer. 
+ **/
+static struct mucse_rx_buffer *rnpgbe_get_buffer(struct mucse_ring *rx_ring,
+						 struct sk_buff **skb,
+						 const unsigned int size)
+{
+	struct mucse_rx_buffer *rx_buffer;
+
+	rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean];
+	*skb = rx_buffer->skb;
+	prefetchw(page_address(rx_buffer->page) + rx_buffer->page_offset);
+	/* we are reusing so sync this buffer for CPU use */
+	dma_sync_single_range_for_cpu(rx_ring->dev, rx_buffer->dma,
+				      rx_buffer->page_offset, size,
+				      DMA_FROM_DEVICE);
+
+	return rx_buffer;
+}
+
+/**
+ * rnpgbe_add_rx_frag - Add non-linear data to the skb
+ * @rx_buffer: pointer to rx_buffer
+ * @skb: pointer skb for this packet
+ * @size: data size in this desc
+ **/
+static void rnpgbe_add_rx_frag(struct mucse_rx_buffer *rx_buffer,
+			       struct sk_buff *skb,
+			       unsigned int size)
+{
+	unsigned int truesize = PAGE_SIZE;
+
+	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page,
+			rx_buffer->page_offset, size, truesize);
+}
+
+/**
+ * rnpgbe_build_skb - Try to build an skb based on rx_buffer
+ * @rx_buffer: pointer to rx_buffer
+ * @size: data size in this desc
+ * @return: skb for this rx_buffer
+ **/
+static struct sk_buff *rnpgbe_build_skb(struct mucse_rx_buffer *rx_buffer,
+					unsigned int size)
+{
+	void *va = page_address(rx_buffer->page) + rx_buffer->page_offset;
+	unsigned int truesize = PAGE_SIZE;
+	struct sk_buff *skb;
+
+	net_prefetch(va);
+	/* build an skb around the page buffer */
+	skb = build_skb(va - RNPGBE_SKB_PAD, truesize);
+	if (unlikely(!skb))
+		return NULL;
+	/* update pointers within the skb to store the data */
+	skb_reserve(skb, RNPGBE_SKB_PAD);
+	__skb_put(skb, size);
+	skb_mark_for_recycle(skb);
+
+	return skb;
+}
+
+/**
+ * rnpgbe_pull_tail - Pull header to linear portion of buffer
+ * @skb: current socket buffer containing buffer in progress
+ **/
+static void rnpgbe_pull_tail(struct sk_buff *skb)
+{
+	skb_frag_t *frag = &skb_shinfo(skb)->frags[0];
+	unsigned int pull_len;
+	unsigned char *va;
+
+	va = skb_frag_address(frag);
+	pull_len = eth_get_headlen(skb->dev, va, M_RX_HDR_SIZE);
+	/* align pull length to size of long to optimize memcpy performance */
+	skb_copy_to_linear_data(skb, va, ALIGN(pull_len, sizeof(long)));
+	/* update all of the pointers */
+	skb_frag_size_sub(frag, pull_len);
+	skb_frag_off_add(frag, pull_len);
+	skb->data_len -= pull_len;
+	skb->tail += pull_len;
+}
+
+/**
+ * rnpgbe_is_non_eop - Process handling of non-EOP buffers
+ * @rx_ring: rx ring being processed
+ * @rx_desc: rx descriptor for current buffer
+ * @skb: current socket buffer containing buffer in progress
+ *
+ * This function updates next to clean. If the buffer is an EOP buffer
+ * this function exits returning false, otherwise it will place the
+ * sk_buff in the next buffer to be chained and return true indicating
+ * that this is in fact a non-EOP buffer.
+ *
+ * @return: true for not end of packet
+ **/
+static bool rnpgbe_is_non_eop(struct mucse_ring *rx_ring,
+			      union rnpgbe_rx_desc *rx_desc,
+			      struct sk_buff *skb)
+{
+	u32 ntc = rx_ring->next_to_clean + 1;
+
+	/* fetch, update, and store next to clean */
+	ntc = (ntc < rx_ring->count) ? ntc : 0;
+	rx_ring->next_to_clean = ntc;
+	prefetch(M_RX_DESC(rx_ring, ntc));
+	/* if we are the last buffer then there is nothing else to do */
+	if (likely(rnpgbe_test_staterr(rx_desc, M_RXD_STAT_EOP)))
+		return false;
+	if (skb_shinfo(skb)->nr_frags < MAX_SKB_FRAGS) {
+		/* place skb in next buffer to be received */
+		rx_ring->rx_buffer_info[ntc].skb = skb;
+		/* we should clean it since we used all info in it */
+		rx_desc->wb.cmd = 0;
+	} else {
+		/* too many frags, force free */
+		dev_kfree_skb_any(skb);
+	}
+
+	return true;
+}
+
+/**
+ * rnpgbe_cleanup_headers - Correct corrupted or empty headers
+ * @skb: current socket buffer containing buffer in progress
+ * @return: true if an error was encountered and skb was freed.
+ **/
+static bool rnpgbe_cleanup_headers(struct sk_buff *skb)
+{
+	if (IS_ERR(skb))
+		return true;
+	/* place header in linear portion of buffer */
+	if (!skb_headlen(skb))
+		rnpgbe_pull_tail(skb);
+	/* if eth_skb_pad returns an error the skb was freed */
+	if (eth_skb_pad(skb))
+		return true;
+
+	return false;
+}
+
+/**
+ * rnpgbe_process_skb_fields - Setup skb header fields from desc
+ * @rx_ring: structure containing ring specific data
+ * @skb: skb currently being received and modified
+ *
+ * rnpgbe_process_skb_fields checks the ring, descriptor information
+ * in order to setup the hash, chksum, vlan, protocol, and other
+ * fields within the skb.
+ **/
+static void rnpgbe_process_skb_fields(struct mucse_ring *rx_ring,
+				      struct sk_buff *skb)
+{
+	struct net_device *dev = rx_ring->netdev;
+
+	skb_record_rx_queue(skb, rx_ring->queue_index);
+	skb->protocol = eth_type_trans(skb, dev);
+}
+
+/**
+ * rnpgbe_clean_rx_irq - Clean completed descriptors from Rx ring
+ * @q_vector: structure containing interrupt and ring information
+ * @rx_ring: rx descriptor ring to transact packets on
+ * @budget: total limit on number of packets to process
+ *
+ * rnpgbe_clean_rx_irq checks the DD bit in each descriptor and
+ * handles the descriptor when the bit is set, which means the data
+ * has been written back by hardware.
+ *
+ * @return: amount of work completed.
+ **/
+static int rnpgbe_clean_rx_irq(struct mucse_q_vector *q_vector,
+			       struct mucse_ring *rx_ring,
+			       int budget)
+{
+	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
+	u16 cleaned_count = mucse_desc_unused_rx(rx_ring);
+	bool fail_alloc = false;
+
+	while (likely(total_rx_packets < budget)) {
+		struct mucse_rx_buffer *rx_buffer;
+		union rnpgbe_rx_desc *rx_desc;
+		struct sk_buff *skb;
+		unsigned int size;
+
+		if (cleaned_count >= M_RX_BUFFER_WRITE) {
+			if (rnpgbe_alloc_rx_buffers(rx_ring, cleaned_count)) {
+				fail_alloc = true;
+				cleaned_count = mucse_desc_unused_rx(rx_ring);
+			} else {
+				cleaned_count = 0;
+			}
+		}
+		rx_desc = M_RX_DESC(rx_ring, rx_ring->next_to_clean);
+
+		if (!rnpgbe_test_staterr(rx_desc, M_RXD_STAT_DD))
+			break;
+
+		/* This memory barrier is needed to keep us from reading
+		 * any other fields out of the rx_desc until we know the
+		 * descriptor has been written back
+		 */
+		dma_rmb();
+		size = le16_to_cpu(rx_desc->wb.len);
+		rx_buffer = rnpgbe_get_buffer(rx_ring, &skb, size);
+
+		if (skb)
+			rnpgbe_add_rx_frag(rx_buffer, skb, size);
+		else
+			skb = rnpgbe_build_skb(rx_buffer, size);
+		/* exit if we failed to retrieve a buffer */
+		if (!skb)
+			break;
+
+		rx_buffer->page = NULL;
+		rx_buffer->skb = NULL;
+		cleaned_count++;
+
+		if (rnpgbe_is_non_eop(rx_ring, rx_desc, skb))
+			continue;
+
+		/* verify the packet layout is correct */
+		if (rnpgbe_cleanup_headers(skb)) {
+			/* we should clean it since we used all info in it */
+			rx_desc->wb.cmd = 0;
+			continue;
+		}
+
+		/* probably a little skewed due to removing CRC */
+		total_rx_bytes += skb->len;
+		rnpgbe_process_skb_fields(rx_ring, skb);
+		rx_desc->wb.cmd = 0;
+		napi_gro_receive(&q_vector->napi, skb);
+		/* update budget accounting */
+		total_rx_packets++;
+	}
+
+	u64_stats_update_begin(&rx_ring->syncp);
+	rx_ring->stats.packets += total_rx_packets;
+	rx_ring->stats.bytes += total_rx_bytes;
+	u64_stats_update_end(&rx_ring->syncp);
+	/* keep polling if alloc mem failed */
+	return fail_alloc ? budget : total_rx_packets;
+}
+
 /**
  * rnpgbe_poll - NAPI Rx polling callback
  * @napi: structure for representing this polling device
@@ -170,6 +506,7 @@ static int rnpgbe_poll(struct napi_struct *napi, int budget)
 		container_of(napi, struct mucse_q_vector, napi);
 	bool clean_complete = true;
 	struct mucse_ring *ring;
+	int per_ring_budget;
 	int work_done = 0;
 
 	mucse_for_each_ring(ring, q_vector->tx) {
@@ -177,6 +514,20 @@ static int rnpgbe_poll(struct napi_struct *napi, int budget)
 		clean_complete = false;
 	}
 
+	if (q_vector->rx.count > 1)
+		per_ring_budget = max(budget / q_vector->rx.count, 1);
+	else
+		per_ring_budget = budget;
+
+	mucse_for_each_ring(ring, q_vector->rx) {
+		int cleaned = 0;
+
+		cleaned = rnpgbe_clean_rx_irq(q_vector, ring, per_ring_budget);
+		work_done += cleaned;
+		if (cleaned >= per_ring_budget)
+			clean_complete = false;
+	}
+
 	if (!clean_complete)
 		return budget;
 
@@ -347,12 +698,17 @@ static int rnpgbe_alloc_q_vector(struct mucse *mucse,
 	}
 
 	for (idx = 0; idx < rxr_count; idx++) {
+		ring->dev = &mucse->pdev->dev;
 		mucse_add_ring(ring, &q_vector->rx);
+		ring->count = mucse->rx_ring_item_count;
+		ring->netdev = mucse->netdev;
 		ring->queue_index = eth_queue_idx + idx;
 		ring->rnpgbe_queue_idx = rxr_idx;
 		ring->ring_addr = hw->hw_addr + RING_OFFSET(rxr_idx);
 		ring->irq_mask = ring->ring_addr + RNPGBE_DMA_INT_MASK;
 		ring->trig = ring->ring_addr + RNPGBE_DMA_INT_TRIG;
+		ring->pfvfnum = hw->pfvfnum;
+		u64_stats_init(&ring->syncp);
 		mucse->rx_ring[ring->queue_index] = ring;
 		rxr_idx += step;
 		ring++;
@@ -777,6 +1133,16 @@ static void rnpgbe_clean_all_tx_rings(struct mucse *mucse)
 		rnpgbe_clean_tx_ring(mucse->tx_ring[i]);
 }
 
+/**
+ * rnpgbe_clean_all_rx_rings - Free Rx Buffers for all queues
+ * @mucse: board private structure
+ **/
+static void rnpgbe_clean_all_rx_rings(struct mucse *mucse)
+{
+	for (int i = 0; i < mucse->num_rx_queues; i++)
+		rnpgbe_clean_rx_ring(mucse->rx_ring[i]);
+}
+
 void rnpgbe_down(struct mucse *mucse)
 {
 	struct net_device *netdev = mucse->netdev;
@@ -786,6 +1152,7 @@ void rnpgbe_down(struct mucse *mucse)
 	netif_tx_disable(netdev);
 	rnpgbe_napi_disable_all(mucse);
 	rnpgbe_clean_all_tx_rings(mucse);
+	rnpgbe_clean_all_rx_rings(mucse);
 }
 
 /**
@@ -800,6 +1167,8 @@ void rnpgbe_up_complete(struct mucse *mucse)
 	rnpgbe_napi_enable_all(mucse);
 	rnpgbe_irq_enable(mucse);
 	netif_tx_start_all_queues(netdev);
+	for (int i = 0; i < mucse->num_rx_queues; i++)
+		mucse_ring_wr32(mucse->rx_ring[i], RNPGBE_RX_START, 1);
 }
 
 /**
@@ -1170,5 +1539,241 @@ void rnpgbe_get_stats64(struct net_device *netdev,
 			stats->tx_bytes += bytes;
 		}
 	}
+
+	for (i = 0; i < mucse->num_rx_queues; i++) {
+		struct mucse_ring *ring = READ_ONCE(mucse->rx_ring[i]);
+		u64 bytes, packets;
+		unsigned int start;
+
+		if (ring) {
+			do {
+				start = u64_stats_fetch_begin(&ring->syncp);
+				packets = ring->stats.packets;
+				bytes = ring->stats.bytes;
+			} while (u64_stats_fetch_retry(&ring->syncp, start));
+			stats->rx_packets += packets;
+			stats->rx_bytes += bytes;
+		}
+	}
 	rcu_read_unlock();
 }
+
+static int mucse_alloc_page_pool(struct mucse_ring *rx_ring)
+{
+	int ret = 0;
+
+	struct page_pool_params pp_params = {
+		.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
+		.order = 0,
+		.pool_size = rx_ring->count,
+		.nid = dev_to_node(rx_ring->dev),
+		.dev = rx_ring->dev,
+		.dma_dir = DMA_FROM_DEVICE,
+		.offset = 0,
+		.max_len = PAGE_SIZE,
+	};
+
+	rx_ring->page_pool = page_pool_create(&pp_params);
+	if (IS_ERR(rx_ring->page_pool)) {
+		ret = PTR_ERR(rx_ring->page_pool);
+		rx_ring->page_pool = NULL;
+	}
+
+	return ret;
+}
+
+/**
+ * rnpgbe_setup_rx_resources - allocate Rx resources (Descriptors)
+ * @rx_ring: rx descriptor ring (for a specific queue) to setup
+ * @mucse: pointer to private structure
+ *
+ * @return: 0 on success, negative on failure
+ **/
+static int rnpgbe_setup_rx_resources(struct mucse_ring *rx_ring,
+				     struct mucse *mucse)
+{
+	struct device *dev = rx_ring->dev;
+	int size;
+
+	size = sizeof(struct mucse_rx_buffer) * rx_ring->count;
+
+	rx_ring->rx_buffer_info = vzalloc(size);
+
+	if (!rx_ring->rx_buffer_info)
+		goto err_return;
+	/* Round up to nearest 4K */
+	rx_ring->size = rx_ring->count * sizeof(union rnpgbe_rx_desc);
+	rx_ring->size = ALIGN(rx_ring->size, 4096);
+	rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size, &rx_ring->dma,
+					   GFP_KERNEL);
+	if (!rx_ring->desc)
+		goto err_free_buffer;
+
+	rx_ring->next_to_clean = 0;
+	rx_ring->next_to_use = 0;
+
+	if (mucse_alloc_page_pool(rx_ring))
+		goto err_free_desc;
+
+	return 0;
+err_free_desc:
+	dma_free_coherent(dev, rx_ring->size, rx_ring->desc,
+			  rx_ring->dma);
+	rx_ring->desc = NULL;
+err_free_buffer:
+	vfree(rx_ring->rx_buffer_info);
+err_return:
+	rx_ring->rx_buffer_info = NULL;
+	return -ENOMEM;
+}
+
+/**
+ * rnpgbe_clean_rx_ring - Free Rx Buffers per Queue
+ * @rx_ring: ring to free buffers from
+ **/
+void rnpgbe_clean_rx_ring(struct mucse_ring *rx_ring)
+{
+	u16 i = rx_ring->next_to_clean;
+	struct mucse_rx_buffer *rx_buffer = &rx_ring->rx_buffer_info[i];
+
+	/* Free all the Rx ring sk_buffs */
+	while (i != rx_ring->next_to_use) {
+		if (rx_buffer->skb) {
+			struct sk_buff *skb = rx_buffer->skb;
+
+			dev_kfree_skb(skb);
+			rx_buffer->skb = NULL;
+		}
+		dma_sync_single_range_for_cpu(rx_ring->dev, rx_buffer->dma,
+					      rx_buffer->page_offset,
+					      mucse_rx_bufsz(rx_ring),
+					      DMA_FROM_DEVICE);
+		if (rx_buffer->page) {
+			page_pool_put_full_page(rx_ring->page_pool,
+						rx_buffer->page, false);
+			rx_buffer->page = NULL;
+		}
+		i++;
+		rx_buffer++;
+		if (i == rx_ring->count) {
+			i = 0;
+			rx_buffer = rx_ring->rx_buffer_info;
+		}
+	}
+
+	rx_ring->next_to_clean = 0;
+	rx_ring->next_to_use = 0;
+}
+
+/**
+ * rnpgbe_free_rx_resources - Free Rx Resources
+ * @rx_ring: ring to clean the resources from
+ *
+ * Free all receive software resources
+ **/
+static void rnpgbe_free_rx_resources(struct mucse_ring *rx_ring)
+{
+	rnpgbe_clean_rx_ring(rx_ring);
+	vfree(rx_ring->rx_buffer_info);
+	rx_ring->rx_buffer_info = NULL;
+	/* if not set, then don't free */
+	if (!rx_ring->desc)
+		return;
+
+	dma_free_coherent(rx_ring->dev, rx_ring->size, rx_ring->desc,
+			  rx_ring->dma);
+	rx_ring->desc = NULL;
+	if (rx_ring->page_pool) {
+		page_pool_destroy(rx_ring->page_pool);
+		rx_ring->page_pool = NULL;
+	}
+}
+
+/**
+ * rnpgbe_setup_all_rx_resources - allocate all queues Rx resources
+ * @mucse: pointer to private structure
+ *
+ * @return: 0 on success, negative on failure
+ **/
+int rnpgbe_setup_all_rx_resources(struct mucse *mucse)
+{
+	int i, err = 0;
+
+	for (i = 0; i < mucse->num_rx_queues; i++) {
+		err = rnpgbe_setup_rx_resources(mucse->rx_ring[i], mucse);
+		if (!err)
+			continue;
+
+		goto err_setup_rx;
+	}
+
+	return 0;
+err_setup_rx:
+	while (i--)
+		rnpgbe_free_rx_resources(mucse->rx_ring[i]);
+	return err;
+}
+
+/**
+ * rnpgbe_free_all_rx_resources - Free Rx Resources for All Queues
+ * @mucse: pointer to private structure
+ *
+ * Free all receive software resources
+ **/
+void rnpgbe_free_all_rx_resources(struct mucse *mucse)
+{
+	for (int i = 0; i < (mucse->num_rx_queues); i++) {
+		if (mucse->rx_ring[i]->desc)
+			rnpgbe_free_rx_resources(mucse->rx_ring[i]);
+	}
+}
+
+/**
+ * rnpgbe_configure_rx_ring - Configure Rx ring info to hw
+ * @mucse: pointer to private structure
+ * @ring: structure containing ring specific data
+ *
+ * Configure the Rx descriptor ring after a reset.
+ **/
+static void rnpgbe_configure_rx_ring(struct mucse *mucse,
+				     struct mucse_ring *ring)
+{
+	struct mucse_hw *hw = &mucse->hw;
+
+	/* disable queue to avoid issues while updating state */
+	mucse_ring_wr32(ring, RNPGBE_RX_START, 0);
+	/* set descriptor registers */
+	mucse_ring_wr32(ring, RNPGBE_RX_BASE_ADDR_LO, (u32)ring->dma);
+	mucse_ring_wr32(ring, RNPGBE_RX_BASE_ADDR_HI,
+			(u32)((u64)ring->dma >> 32) | (hw->pfvfnum << 24));
+	mucse_ring_wr32(ring, RNPGBE_RX_LEN, ring->count);
+	ring->tail = ring->ring_addr + RNPGBE_RX_TAIL;
+	ring->next_to_clean = mucse_ring_rd32(ring, RNPGBE_RX_HEAD);
+	ring->next_to_use = ring->next_to_clean;
+	mucse_ring_wr32(ring, RNPGBE_RX_SG_LEN, M_DEFAULT_SG);
+	mucse_ring_wr32(ring, RNPGBE_RX_FETCH, M_DEFAULT_RX_FETCH);
+	mucse_ring_wr32(ring, RNPGBE_RX_TIMEOUT_TH, 0);
+	mucse_ring_wr32(ring, RNPGBE_RX_INT_TIMER,
+			M_DEFAULT_INT_TIMER_R * hw->cycles_per_us);
+	mucse_ring_wr32(ring, RNPGBE_RX_INT_PKTCNT, M_DEFAULT_RX_INT_PKTCNT);
+	rnpgbe_alloc_rx_buffers(ring, mucse_desc_unused_rx(ring));
+}
+
+/**
+ * rnpgbe_configure_rx - Configure Receive Unit after Reset
+ * @mucse: pointer to private structure
+ *
+ * Configure the Rx unit after a reset.
+ **/
+void rnpgbe_configure_rx(struct mucse *mucse)
+{
+	struct mucse_hw *hw = &mucse->hw;
+	u32 dma_axi_ctl;
+
+	for (int i = 0; i < mucse->num_rx_queues; i++)
+		rnpgbe_configure_rx_ring(mucse, mucse->rx_ring[i]);
+
+	dma_axi_ctl = mucse_hw_rd32(hw, RNPGBE_DMA_AXI_EN);
+	dma_axi_ctl |= RX_AXI_RW_EN;
+	mucse_hw_wr32(hw, RNPGBE_DMA_AXI_EN, dma_axi_ctl);
+}
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h
index 2c2796764c2d..29520ad716ca 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h
@@ -8,11 +8,27 @@ struct mucse;
 struct mucse_ring;
 
 #define RING_OFFSET(n) (0x1000 + 0x100 * (n))
+#define RNPGBE_RX_START 0x10
 #define RNPGBE_TX_START 0x18
 #define RNPGBE_DMA_INT_MASK 0x24
 #define TX_INT_MASK BIT(1)
 #define RX_INT_MASK BIT(0)
 #define INT_VALID (BIT(16) | BIT(17))
+#define RNPGBE_RX_BASE_ADDR_HI 0x30
+#define RNPGBE_RX_BASE_ADDR_LO 0x34
+#define RNPGBE_RX_LEN 0x38
+#define RNPGBE_RX_HEAD 0x3c
+#define RNPGBE_RX_TAIL 0x40
+#define M_DEFAULT_RX_FETCH 0x100020
+#define RNPGBE_RX_FETCH 0x44
+#define M_DEFAULT_INT_TIMER_R 30
+#define RNPGBE_RX_INT_TIMER 0x48
+#define M_DEFAULT_RX_INT_PKTCNT 64
+#define RNPGBE_RX_INT_PKTCNT 0x4c
+#define RNPGBE_RX_ARB_DEF_LVL 0x50
+#define RNPGBE_RX_TIMEOUT_TH 0x54
+#define M_DEFAULT_SG 96 /* unit 16b, 1536 bytes */
+#define RNPGBE_RX_SG_LEN 0x58
 #define RNPGBE_TX_BASE_ADDR_HI 0x60
 #define RNPGBE_TX_BASE_ADDR_LO 0x64
 #define RNPGBE_TX_LEN 0x68
@@ -33,13 +49,23 @@ struct mucse_ring;
 #define M_MAX_DATA_PER_TXD (0x1 << M_MAX_TXD_PWR)
 #define TXD_USE_COUNT(S) DIV_ROUND_UP((S), M_MAX_DATA_PER_TXD)
 #define DESC_NEEDED (MAX_SKB_FRAGS + 4)
+#define RNPGBE_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN)
+#define M_RXBUFFER_1536 1536
+#define M_RX_BUFFER_WRITE 16
+#define M_RX_HDR_SIZE 256
+
+static inline unsigned int mucse_rx_bufsz(struct mucse_ring *ring)
+{
+	/* 1536 is enough for mtu 1500 packets */
+	return (M_RXBUFFER_1536 - NET_IP_ALIGN);
+}
+
 /* hw require this not zero */
 #define M_DEFAULT_MAC_IP_LEN 20
 #define mucse_for_each_ring(pos, head)\
 	for (typeof((head).ring) __pos = (head).ring;\
 	     __pos ? ({ pos = __pos; 1; }) : 0;\
 	     __pos = __pos->next)
-
 int rnpgbe_init_interrupt_scheme(struct mucse *mucse);
 void rnpgbe_clear_interrupt_scheme(struct mucse *mucse);
 int register_mbx_irq(struct mucse *mucse);
@@ -50,10 +76,14 @@ void rnpgbe_irq_disable(struct mucse *mucse);
 void rnpgbe_down(struct mucse *mucse);
 void rnpgbe_up_complete(struct mucse *mucse);
 void rnpgbe_configure_tx(struct mucse *mucse);
+void rnpgbe_configure_rx(struct mucse *mucse);
 int rnpgbe_setup_all_tx_resources(struct mucse *mucse);
 void rnpgbe_free_all_tx_resources(struct mucse *mucse);
 netdev_tx_t rnpgbe_xmit_frame_ring(struct sk_buff *skb,
 				   struct mucse_ring *tx_ring);
 void rnpgbe_get_stats64(struct net_device *netdev,
 			struct rtnl_link_stats64 *stats);
+void rnpgbe_clean_rx_ring(struct mucse_ring *rx_ring);
+int rnpgbe_setup_all_rx_resources(struct mucse *mucse);
+void rnpgbe_free_all_rx_resources(struct mucse *mucse);
 #endif
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
index 4f9cd0065a4b..f31547a04b3d 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
@@ -35,6 +35,7 @@ static struct pci_device_id rnpgbe_pci_tbl[] = {
 static void rnpgbe_configure(struct mucse *mucse)
 {
 	rnpgbe_configure_tx(mucse);
+	rnpgbe_configure_rx(mucse);
 }
 
 /**
@@ -63,11 +64,17 @@ static int rnpgbe_open(struct net_device *netdev)
 	err = rnpgbe_setup_all_tx_resources(mucse);
 	if (err)
 		goto err_free_irqs;
+	err = rnpgbe_setup_all_rx_resources(mucse);
+	if (err)
+		goto err_free_tx;
+
 	rnpgbe_configure(mucse);
 	rnpgbe_up_complete(mucse);
 
 	return 0;
+err_free_tx:
+	rnpgbe_free_all_tx_resources(mucse);
 err_free_irqs:
 	rnpgbe_free_irq(mucse);
 	return err;
@@ -89,6 +96,7 @@ static int rnpgbe_close(struct net_device *netdev)
 	rnpgbe_down(mucse);
 	rnpgbe_free_irq(mucse);
 	rnpgbe_free_all_tx_resources(mucse);
+	rnpgbe_free_all_rx_resources(mucse);
 
 	return 0;
 }
@@ -121,6 +129,7 @@ static const struct net_device_ops rnpgbe_netdev_ops = {
 static void rnpgbe_sw_init(struct mucse *mucse)
 {
 	mucse->tx_ring_item_count = M_DEFAULT_TXD;
+	mucse->rx_ring_item_count = M_DEFAULT_RXD;
 	mucse->tx_work_limit = M_DEFAULT_TX_WORK;
 }
 
-- 
2.25.1

From nobody Tue Apr 7 01:03:01 2026
From: Dong Yibo <dong100@mucse.com>
To: andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com,
	kuba@kernel.org, pabeni@redhat.com, danishanwar@ti.com,
	vadim.fedorenko@linux.dev, horms@kernel.org
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, dong100@mucse.com
Subject: [PATCH net-next v2 4/4] net: rnpgbe: Add link status handling support
Date: Fri, 3 Apr 2026 10:57:13 +0800
Message-Id: <20260403025713.527841-5-dong100@mucse.com>
In-Reply-To: <20260403025713.527841-1-dong100@mucse.com>
References: <20260403025713.527841-1-dong100@mucse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Add link status management infrastructure to the rnpgbe driver:

- Add link status related data structures (speed, duplex, link state)
- Implement firmware link event handling via mailbox
- Add service task for periodic link status monitoring
- Implement carrier status management (netif_carrier_on/off)
- Add port up/down notification to firmware

This enables the driver to properly track and report link status changes.

Signed-off-by: Dong Yibo <dong100@mucse.com>
---
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h      |  17 +-
 .../net/ethernet/mucse/rnpgbe/rnpgbe_chip.c    |  31 +++-
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h  |  12 ++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.c     | 150 ++++++++++++++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.h     |   1 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_main.c    |   5 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx.c     |  20 +++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h     |   1 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c  | 166 ++++++++++++++++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h  |  38 ++++
 10 files changed, 437 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
index 87cb9e9e3f0f..9bd173c7f44f 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
@@ -30,11 +30,10 @@ struct mucse_mbx_info {
 	u32 fwpf_ctrl_base;
 };
 
-/* Enum for firmware notification modes,
- * more modes (e.g., portup, link_report) will be added in future
- **/
 enum {
 	mucse_fw_powerup,
+	mucse_fw_portup,
+	mucse_fw_link_report_en,
 };
 
 struct mucse_hw {
@@ -43,8 +42,11 @@ struct mucse_hw {
 	struct pci_dev *pdev;
 	struct mucse_mbx_info mbx;
 	int port;
+	int speed;
+	bool link;
 	u16 cycles_per_us;
 	u8 pfvfnum;
+	u8 duplex;
 };
 
 struct rnpgbe_tx_desc {
@@ -191,6 +193,10 @@ struct mucse_stats {
 #define M_DEFAULT_RXD 512
 #define M_DEFAULT_TX_WORK 256
 
+enum mucse_state_t {
+	__MUCSE_DOWN,
+};
+
 struct mucse {
 	struct net_device *netdev;
 	struct pci_dev *pdev;
@@ -199,6 +205,7 @@ struct mucse {
 #define M_FLAG_MSI_EN BIT(0)
 #define M_FLAG_MSIX_SINGLE_EN BIT(1)
 #define M_FLAG_MSIX_EN BIT(2)
+#define M_FLAG_NEED_LINK_UPDATE BIT(3)
 	u32 flags;
 	struct mucse_ring *tx_ring[RNPGBE_MAX_QUEUES]
 		____cacheline_aligned_in_smp;
@@ -211,6 +218,9 @@ struct mucse {
 	int num_q_vectors;
 	int rx_ring_item_count;
 	int num_rx_queues;
+	unsigned long state;
+	struct delayed_work serv_task;
+	spinlock_t link_lock; /* spinlock for link update */
 };
 
 int rnpgbe_get_permanent_mac(struct mucse_hw *hw, u8 *perm_addr);
@@ -219,6 +229,7 @@ int rnpgbe_send_notify(struct mucse_hw *hw,
 		       bool enable,
 		       int mode);
 int rnpgbe_init_hw(struct mucse_hw *hw, int board_type);
+void rnpgbe_set_rx(struct mucse_hw *hw, bool enable);
 
 /* Device IDs */
 #define PCI_VENDOR_ID_MUCSE 0x8848
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
index 291e77d573fe..902c8a801ba3 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
@@ -66,11 +66,17 @@ int rnpgbe_send_notify(struct mucse_hw *hw,
 		       int mode)
 {
 	int err;
-	/* Keep switch struct to support more modes in the future */
+
 	switch (mode) {
 	case mucse_fw_powerup:
 		err = mucse_mbx_powerup(hw, enable);
 		break;
+	case mucse_fw_portup:
+		err = mucse_mbx_phyup(hw, enable);
+		break;
+	case mucse_fw_link_report_en:
+		err = mucse_mbx_link_report(hw, enable);
+		break;
 	default:
 		err = -EINVAL;
 	}
@@ -149,3 +155,26 @@ int rnpgbe_init_hw(struct mucse_hw *hw, int board_type)
 
 	return 0;
 }
+
+/**
+ * rnpgbe_set_rx - Setup rx state
+ * @hw: hw information structure
+ * @enable: set rx on or off
+ *
+ * rnpgbe_set_rx sets up rx enable
+ *
+ **/
+void rnpgbe_set_rx(struct mucse_hw *hw, bool enable)
+{
+	u32 value = mucse_hw_rd32(hw, GMAC_CONTROL);
+
+	if (enable)
+		value |= GMAC_CONTROL_RE;
+	else
+		value &= ~GMAC_CONTROL_RE;
+
+	mucse_hw_wr32(hw, GMAC_CONTROL, value);
+
+	value = mucse_hw_rd32(hw, GMAC_FRAME_FILTER);
+	mucse_hw_wr32(hw, GMAC_FRAME_FILTER, value | BIT(0));
+}
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
index 03688586b447..77aef019acbf 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
@@ -17,7 +17,19 @@
 
 #define TX_AXI_RW_EN 0xc
 #define RX_AXI_RW_EN 0x03
+/* mask all valid info */
+#define M_ST_MASK 0x0f000f11
+/* 31:28 set 0xa to valid it is a driver set info */
+#define M_DEFAULT_ST 0xa0000000
+/* driver setup this by own info */
+/* bit: 25:24 | 11:8  | 4      | 0       */
+/* fun: pause | speed | duplex | up/down */
+#define RNPGBE_LINK_ST 0x000c
 #define RNPGBE_DMA_AXI_EN 0x0010
 
+#define MUCSE_GMAC_OFF(_n) (0x20000 + (_n))
+#define GMAC_CONTROL_RE 0x00000004
+#define GMAC_CONTROL MUCSE_GMAC_OFF(0)
+#define GMAC_FRAME_FILTER MUCSE_GMAC_OFF(0x4)
 #define RNPGBE_MAX_QUEUES 8
 #endif /* _RNPGBE_HW_H */
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
index 7e057e2f948b..5d0c170b8c9e 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
@@ -9,6 +9,7 @@
 
 #include "rnpgbe_lib.h"
 #include "rnpgbe.h"
+#include "rnpgbe_mbx_fw.h"
 
 /**
  * rnpgbe_msix_other - Other irq handler
@@ -19,6 +20,10 @@
  **/
 static irqreturn_t rnpgbe_msix_other(int irq, void *data)
 {
+	struct mucse *mucse = (struct mucse *)data;
+
+	mucse_fw_irq_handler(&mucse->hw);
+
 	return IRQ_HANDLED;
 }
 
@@ -528,6 +533,9 @@ static int rnpgbe_poll(struct napi_struct *napi, int budget)
 		clean_complete = false;
 	}
 
+	if (test_bit(__MUCSE_DOWN, &q_vector->mucse->state))
+		clean_complete = true;
+
 	if (!clean_complete)
 		return budget;
 
@@ -892,6 +900,7 @@ static irqreturn_t rnpgbe_int_single(int irq, void *data)
 	struct mucse *mucse = (struct mucse *)data;
 	struct mucse_q_vector *q_vector;
 
+	mucse_fw_irq_handler(&mucse->hw);
 	q_vector = mucse->q_vector[0];
 	rnpgbe_irq_disable_queues(q_vector);
 	if (q_vector->rx.ring || q_vector->tx.ring)
@@ -1146,7 +1155,26 @@ static void rnpgbe_clean_all_rx_rings(struct mucse *mucse)
 void rnpgbe_down(struct mucse *mucse)
 {
 	struct net_device *netdev = mucse->netdev;
+	struct mucse_hw *hw = &mucse->hw;
+	int err;
+
+	set_bit(__MUCSE_DOWN, &mucse->state);
+	cancel_delayed_work_sync(&mucse->serv_task);
+
+	err = rnpgbe_send_notify(hw, false, mucse_fw_link_report_en);
+	if (err) {
+		dev_warn(&hw->pdev->dev, "Send link report to hw failed %d\n",
+			 err);
+		dev_warn(&hw->pdev->dev, "Fw will still report link event\n");
+	}
 
+	err = rnpgbe_send_notify(hw, false, mucse_fw_portup);
+	if (err) {
+		dev_warn(&hw->pdev->dev, "Send port down to hw failed %d\n",
+			 err);
+		dev_warn(&hw->pdev->dev, "Port is not truly down\n");
+	}
+	netif_carrier_off(netdev);
 	netif_tx_stop_all_queues(netdev);
 	rnpgbe_irq_disable(mucse);
 	netif_tx_disable(netdev);
@@ -1162,6 +1190,8 @@ void rnpgbe_down(struct mucse *mucse)
 void rnpgbe_up_complete(struct mucse *mucse)
 {
 	struct net_device *netdev = mucse->netdev;
+	struct mucse_hw *hw = &mucse->hw;
+	int err;
 
 	rnpgbe_configure_msix(mucse);
 	rnpgbe_napi_enable_all(mucse);
@@ -1169,6 +1199,22 @@ void rnpgbe_up_complete(struct mucse *mucse)
 	netif_tx_start_all_queues(netdev);
 	for (int i = 0; i < mucse->num_rx_queues; i++)
 		mucse_ring_wr32(mucse->rx_ring[i], RNPGBE_RX_START, 1);
+
+	err = rnpgbe_send_notify(hw, true, mucse_fw_portup);
+	if (err) {
+		dev_warn(&hw->pdev->dev, "Send portup to hw failed %d\n", err);
+		dev_warn(&hw->pdev->dev, "Port is not truly up\n");
+	}
+
+ err =3D rnpgbe_send_notify(hw, true, mucse_fw_link_report_en); + if (err) { + dev_warn(&hw->pdev->dev, "Send link report to hw failed %d\n", + err); + dev_warn(&hw->pdev->dev, "Fw will not report link event\n"); + } + clear_bit(__MUCSE_DOWN, &mucse->state); + queue_delayed_work(system_wq, &mucse->serv_task, + msecs_to_jiffies(500)); } =20 /** @@ -1777,3 +1823,107 @@ void rnpgbe_configure_rx(struct mucse *mucse) dma_axi_ctl |=3D RX_AXI_RW_EN; mucse_hw_wr32(hw, RNPGBE_DMA_AXI_EN, dma_axi_ctl); } + +/** + * rnpgbe_watchdog_update_link - Update the link status + * @mucse: pointer to the device private structure + **/ +static void rnpgbe_watchdog_update_link(struct mucse *mucse) +{ + struct net_device *netdev =3D mucse->netdev; + struct mucse_hw *hw =3D &mucse->hw; + unsigned long flags; + bool link; + int speed; + u8 duplex; + + if (!(mucse->flags & M_FLAG_NEED_LINK_UPDATE)) + return; + + spin_lock_irqsave(&mucse->link_lock, flags); + + link =3D hw->link; + speed =3D hw->speed; + duplex =3D hw->duplex; + + mucse->flags &=3D ~M_FLAG_NEED_LINK_UPDATE; + spin_unlock_irqrestore(&mucse->link_lock, flags); + + if (link) { + netdev_info(netdev, "NIC Link is Up %d Mbps, %s Duplex\n", + speed, + duplex ? 
"Full" : "Half"); + } +} + +/** + * rnpgbe_watchdog_link_is_up - Update netif_carrier status and + * print link up message + * @mucse: pointer to the device private structure + **/ +static void rnpgbe_watchdog_link_is_up(struct mucse *mucse) +{ + struct net_device *netdev =3D mucse->netdev; + struct mucse_hw *hw =3D &mucse->hw; + + /* Only continue if link was previously down */ + if (netif_carrier_ok(netdev)) + return; + rnpgbe_set_rx(hw, true); + netif_carrier_on(netdev); + netif_tx_wake_all_queues(netdev); +} + +/** + * rnpgbe_watchdog_link_is_down - Update netif_carrier status and + * print link down message + * @mucse: pointer to the private structure + **/ +static void rnpgbe_watchdog_link_is_down(struct mucse *mucse) +{ + struct net_device *netdev =3D mucse->netdev; + struct mucse_hw *hw =3D &mucse->hw; + + /* Only continue if link was up previously */ + if (!netif_carrier_ok(netdev)) + return; + netdev_info(netdev, "NIC Link is Down\n"); + rnpgbe_set_rx(hw, false); + netif_carrier_off(netdev); + netif_tx_stop_all_queues(netdev); +} + +/** + * rnpgbe_watchdog_subtask - Check and bring link up + * @mucse: pointer to the device private structure + **/ +static void rnpgbe_watchdog_subtask(struct mucse *mucse) +{ + struct mucse_hw *hw =3D &mucse->hw; + /* if interface is down do nothing */ + if (test_bit(__MUCSE_DOWN, &mucse->state)) + return; + + rnpgbe_watchdog_update_link(mucse); + if (hw->link) + rnpgbe_watchdog_link_is_up(mucse); + else + rnpgbe_watchdog_link_is_down(mucse); +} + +/** + * rnpgbe_service_task - Manages and runs subtasks + * @work: pointer to work_struct containing our data + **/ +void rnpgbe_service_task(struct work_struct *work) +{ + struct mucse *mucse =3D container_of(work, struct mucse, serv_task.work); + + if (test_bit(__MUCSE_DOWN, &mucse->state)) + return; + + rnpgbe_watchdog_subtask(mucse); + + queue_delayed_work(system_wq, &mucse->serv_task, + msecs_to_jiffies(500)); +} diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h 
b/drivers/net/e= thernet/mucse/rnpgbe/rnpgbe_lib.h index 29520ad716ca..74b4b0ab0b89 100644 --- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h +++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h @@ -86,4 +86,5 @@ void rnpgbe_get_stats64(struct net_device *netdev, void rnpgbe_clean_rx_ring(struct mucse_ring *rx_ring); int rnpgbe_setup_all_rx_resources(struct mucse *mucse); void rnpgbe_free_all_rx_resources(struct mucse *mucse); +void rnpgbe_service_task(struct work_struct *work); #endif diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c b/drivers/net/= ethernet/mucse/rnpgbe/rnpgbe_main.c index f31547a04b3d..4fc6244f247c 100644 --- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c +++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c @@ -52,6 +52,7 @@ static int rnpgbe_open(struct net_device *netdev) struct mucse *mucse =3D netdev_priv(netdev); int err; =20 + netif_carrier_off(netdev); err =3D rnpgbe_request_irq(mucse); if (err) return err; @@ -181,6 +182,7 @@ static int rnpgbe_add_adapter(struct pci_dev *pdev, dev_err(&pdev->dev, "Init hw err %d\n", err); goto err_free_net; } + /* Step 1: Send power-up notification to firmware (no response expected) * This informs firmware to initialize hardware power state, but * firmware only acknowledges receipt without returning data. 
Must be @@ -223,6 +225,9 @@ static int rnpgbe_add_adapter(struct pci_dev *pdev, goto err_powerdown; } =20 + INIT_DELAYED_WORK(&mucse->serv_task, rnpgbe_service_task); + spin_lock_init(&mucse->link_lock); + err =3D rnpgbe_init_interrupt_scheme(mucse); if (err) { dev_err(&pdev->dev, "init interrupt failed %d\n", err); diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.c b/drivers/net/e= thernet/mucse/rnpgbe/rnpgbe_mbx.c index de5e29230b3c..1d4e2ae78154 100644 --- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.c +++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.c @@ -247,6 +247,26 @@ int mucse_poll_and_read_mbx(struct mucse_hw *hw, u32 *= msg, u16 size) return mucse_read_mbx_pf(hw, msg, size); } =20 +/** + * mucse_check_and_read_mbx - check if there is notification and receive m= essage + * @hw: pointer to the HW structure + * @msg: the message buffer + * @size: length of buffer + * + * Return: 0 if it successfully received a message notification and + * copied it into the receive buffer, negative errno on failure + **/ +int mucse_check_and_read_mbx(struct mucse_hw *hw, u32 *msg, u16 size) +{ + int err; + + err =3D mucse_check_for_msg_pf(hw); + if (err) + return err; + + return mucse_read_mbx_pf(hw, msg, size); +} + /** * mucse_mbx_get_fwack - Read fw ack from reg * @mbx: pointer to the MBX structure diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h b/drivers/net/e= thernet/mucse/rnpgbe/rnpgbe_mbx.h index e6fcc8d1d3ca..cba54a07a7fa 100644 --- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h +++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h @@ -17,4 +17,5 @@ int mucse_write_and_wait_ack_mbx(struct mucse_hw *hw, u32 *msg, u16 size); void mucse_init_mbx_params_pf(struct mucse_hw *hw); int mucse_poll_and_read_mbx(struct mucse_hw *hw, u32 *msg, u16 size); +int mucse_check_and_read_mbx(struct mucse_hw *hw, u32 *msg, u16 size); #endif /* _RNPGBE_MBX_H */ diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c b/drivers/ne= 
t/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c index 8c8bd5e8e1db..3eef31f4289a 100644 --- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c +++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c @@ -3,6 +3,7 @@ =20 #include #include +#include =20 #include "rnpgbe.h" #include "rnpgbe_mbx.h" @@ -189,3 +190,168 @@ int mucse_mbx_get_macaddr(struct mucse_hw *hw, int pf= vfnum, =20 return 0; } + +/** + * mucse_mbx_phyup - Echo fw let the phy up + * @hw: pointer to the HW structure + * @is_phyup: true for up, false for down + * + * mucse_mbx_phyup echo fw to change phy status + * + * Return: 0 on success, negative errno on failure + **/ +int mucse_mbx_phyup(struct mucse_hw *hw, bool is_phyup) +{ + struct mbx_fw_cmd_req req =3D { + .datalen =3D cpu_to_le16(sizeof(req.phy_status) + + MUCSE_MBX_REQ_HDR_LEN), + .opcode =3D cpu_to_le16(SET_PHY_UP), + .phy_status =3D { + .port_mask =3D cpu_to_le32(BIT(hw->port)), + .status =3D cpu_to_le32(is_phyup ? 1 : 0), + }, + }; + int len, err; + + len =3D le16_to_cpu(req.datalen); + mutex_lock(&hw->mbx.lock); + err =3D mucse_write_and_wait_ack_mbx(hw, (u32 *)&req, len); + mutex_unlock(&hw->mbx.lock); + + return err; +} + +/** + * mucse_mbx_link_report - Echo fw report link change event or not + * @hw: pointer to the HW structure + * @is_report: true for report, false for no + * + * mucse_mbx_link_eventup echo fw to change event report state + * + * Return: 0 on success, negative errno on failure + **/ +int mucse_mbx_link_report(struct mucse_hw *hw, bool is_report) +{ + struct mbx_fw_cmd_req req =3D { + .datalen =3D cpu_to_le16(sizeof(req. report_status) + + MUCSE_MBX_REQ_HDR_LEN), + .opcode =3D cpu_to_le16(LINK_REPORT_EN), + .report_status =3D { + .port_mask =3D cpu_to_le16(BIT(hw->port)), + .status =3D cpu_to_le16(is_report ? 
1 : 0), + }, + }; + int len, err; + + len =3D le16_to_cpu(req.datalen); + mutex_lock(&hw->mbx.lock); + err =3D mucse_write_and_wait_ack_mbx(hw, (u32 *)&req, len); + mutex_unlock(&hw->mbx.lock); + + return err; +} + +/** + * mucse_update_link_status_reg - update driver speed inf to reg + * @hw: pointer to the HW structure + * @req: pointer to req data + * + * mucse_update_link_status_reg update reg according to driver info, + * fw will send irq if status is differ with reg + * + **/ +static void mucse_update_link_status_reg(struct mucse_hw *hw, + struct mbx_fw_cmd_req *req) +{ + u16 status =3D le16_to_cpu(req->link_stat.st.status); + u32 value; + + value =3D mucse_hw_rd32(hw, RNPGBE_LINK_ST); + value &=3D ~M_ST_MASK; + value |=3D M_DEFAULT_ST; + + if (le16_to_cpu(req->link_stat.port_status)) { + value |=3D BIT(0); + switch (hw->speed) { + case 10: + value |=3D (mucse_speed_10 << 8); + break; + case 100: + value |=3D (mucse_speed_100 << 8); + break; + case 1000: + value |=3D (mucse_speed_1000 << 8); + break; + default: + /* invalid speed do nothing */ + break; + } + + value |=3D FIELD_PREP(BIT(4), !!hw->duplex); + value |=3D FIELD_PREP(GENMASK_U32(25, 24), status & GENMASK(1, 0)); + } else { + value &=3D ~BIT(0); + } + + if (status & ST_STATUS_LLDP_STATUS_MASK) + value |=3D BIT(6); + else + value &=3D ~BIT(6); + + mucse_hw_wr32(hw, RNPGBE_LINK_ST, value); +} + +/** + * mucse_mbx_fw_req_handler - Handle fw req + * @hw: pointer to the HW structure + * @req: pointer to req data + * + * rnpgbe_mbx_fw_req_handler handler fw req, such as a link event req. 
+ * + * @return: 0 on success, negative on failure + **/ +static void mucse_mbx_fw_req_handler(struct mucse_hw *hw, + struct mbx_fw_cmd_req *req) +{ + struct mucse *mucse =3D container_of(hw, struct mucse, hw); + u32 magic =3D le32_to_cpu(req->link_stat.port_magic); + unsigned long flags; + + if (le16_to_cpu(req->opcode) =3D=3D LINK_CHANGE_EVT) { + spin_lock_irqsave(&mucse->link_lock, flags); + + if (le16_to_cpu(req->link_stat.port_status)) + hw->link =3D true; + else + hw->link =3D false; + + if (magic =3D=3D ST_VALID_MAGIC) { + hw->speed =3D le16_to_cpu(req->link_stat.st.speed); + hw->duplex =3D req->link_stat.st.flags & DUPLEX_BIT; + /* update regs to notify link info is received from fw */ + mucse_update_link_status_reg(hw, req); + } else { + hw->speed =3D 0; + hw->duplex =3D 0; + } + mucse->flags |=3D M_FLAG_NEED_LINK_UPDATE; + spin_unlock_irqrestore(&mucse->link_lock, flags); + } +} + +/** + * mucse_fw_irq_handler - Try to handle a req from hw + * @hw: pointer to the HW structure + **/ +void mucse_fw_irq_handler(struct mucse_hw *hw) +{ + struct mbx_fw_cmd_req req =3D {}; + + /* try to check and read fw req */ + if (mucse_check_and_read_mbx(hw, (u32 *)&req, sizeof(req))) + return; + + /* handle it if is a req from fw */ + if (!(le16_to_cpu(req.flags) & FLAGS_REPLY)) + mucse_mbx_fw_req_handler(hw, &req); +} diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h b/drivers/ne= t/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h index fb24fc12b613..691ee8f39cac 100644 --- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h +++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h @@ -14,6 +14,9 @@ enum MUCSE_FW_CMD { GET_HW_INFO =3D 0x0601, GET_MAC_ADDRESS =3D 0x0602, RESET_HW =3D 0x0603, + LINK_CHANGE_EVT =3D 0x0608, + LINK_REPORT_EN =3D 0x0613, + SET_PHY_UP =3D 0x0800, POWER_UP =3D 0x0803, }; =20 @@ -36,6 +39,16 @@ struct mucse_hw_info { __le32 ext_info; } __packed; =20 +#define ST_STATUS_LLDP_STATUS_MASK BIT(12) + +#define DUPLEX_BIT BIT(0) +struct st_status { + u8 
phyid; + u8 flags; + __le16 speed; + __le16 status; +} __packed; + struct mbx_fw_cmd_req { __le16 flags; __le16 opcode; @@ -55,10 +68,26 @@ struct mbx_fw_cmd_req { __le32 port_mask; __le32 pfvf_num; } get_mac_addr; + struct { + __le32 port_mask; + __le32 status; + } phy_status; + struct { + __le16 status; + __le16 port_mask; + } report_status; + struct { + __le16 changed_lanes; + __le16 port_status; + __le32 port_magic; +#define ST_VALID_MAGIC 0xa4a6a8a9 + struct st_status st; + } link_stat; }; } __packed; =20 struct mbx_fw_cmd_reply { +#define FLAGS_REPLY BIT(0) __le16 flags; __le16 opcode; __le16 error_code; @@ -80,9 +109,18 @@ struct mbx_fw_cmd_reply { }; } __packed; =20 +enum mucse_speed { + mucse_speed_10, + mucse_speed_100, + mucse_speed_1000, +}; + int mucse_mbx_sync_fw(struct mucse_hw *hw); int mucse_mbx_powerup(struct mucse_hw *hw, bool is_powerup); int mucse_mbx_reset_hw(struct mucse_hw *hw); int mucse_mbx_get_macaddr(struct mucse_hw *hw, int pfvfnum, u8 *mac_addr, int port); +int mucse_mbx_phyup(struct mucse_hw *hw, bool is_phyup); +int mucse_mbx_link_report(struct mucse_hw *hw, bool is_report); +void mucse_fw_irq_handler(struct mucse_hw *hw); #endif /* _RNPGBE_MBX_FW_H */ --=20 2.25.1