From nobody Wed Apr 1 12:32:17 2026
From: Fan Gong
To: Fan Gong, Zhu Yikai, "David S. Miller", Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, Simon Horman, Andrew Lunn, Ioana Ciornei
CC: luosifu, Xin Guo, Zhou Shuai, Wu Like, Shi Jing, Zheng Jiezhen,
 Maxime Chevallier
Subject: [PATCH net-next v03 1/6] hinic3: Add ethtool queue ops
Date: Tue, 31 Mar 2026 15:56:20 +0800
Message-ID: <17b3bbb0c917ee09ddd7a164cdefca6eb63793ec.1774940117.git.zhuyikai1@h-partners.com>

Implement the following ethtool callbacks:

.get_ringparam
.set_ringparam

These callbacks allow users to utilize ethtool for detailed queue depth
configuration and monitoring.
Co-developed-by: Zhu Yikai
Signed-off-by: Zhu Yikai
Signed-off-by: Fan Gong
---
 .../ethernet/huawei/hinic3/hinic3_ethtool.c   |  94 ++++++++++++++++
 .../net/ethernet/huawei/hinic3/hinic3_irq.c   |  10 +-
 .../net/ethernet/huawei/hinic3/hinic3_main.c  |  11 ++
 .../huawei/hinic3/hinic3_netdev_ops.c         | 101 +++++++++++++++++-
 .../ethernet/huawei/hinic3/hinic3_nic_dev.h   |  16 +++
 .../ethernet/huawei/hinic3/hinic3_nic_io.h    |   4 +
 6 files changed, 231 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c b/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c
index 90fc16288de9..d78aff802a20 100644
--- a/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c
@@ -409,6 +409,98 @@ hinic3_get_link_ksettings(struct net_device *netdev,
 	return 0;
 }
 
+static void hinic3_get_ringparam(struct net_device *netdev,
+				 struct ethtool_ringparam *ring,
+				 struct kernel_ethtool_ringparam *kernel_ring,
+				 struct netlink_ext_ack *extack)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+	ring->rx_max_pending = HINIC3_MAX_RX_QUEUE_DEPTH;
+	ring->tx_max_pending = HINIC3_MAX_TX_QUEUE_DEPTH;
+	ring->rx_pending = nic_dev->rxqs[0].q_depth;
+	ring->tx_pending = nic_dev->txqs[0].q_depth;
+}
+
+static void hinic3_update_qp_depth(struct net_device *netdev,
+				   u32 sq_depth, u32 rq_depth)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	u16 i;
+
+	nic_dev->q_params.sq_depth = sq_depth;
+	nic_dev->q_params.rq_depth = rq_depth;
+	for (i = 0; i < nic_dev->max_qps; i++) {
+		nic_dev->txqs[i].q_depth = sq_depth;
+		nic_dev->txqs[i].q_mask = sq_depth - 1;
+		nic_dev->rxqs[i].q_depth = rq_depth;
+		nic_dev->rxqs[i].q_mask = rq_depth - 1;
+	}
+}
+
+static int hinic3_check_ringparam_valid(struct net_device *netdev,
+					const struct ethtool_ringparam *ring)
+{
+	if (ring->rx_jumbo_pending || ring->rx_mini_pending) {
+		netdev_err(netdev, "Unsupported rx_jumbo_pending/rx_mini_pending\n");
+		return -EINVAL;
+	}
+
+	if (ring->tx_pending > HINIC3_MAX_TX_QUEUE_DEPTH ||
+	    ring->tx_pending < HINIC3_MIN_QUEUE_DEPTH ||
+	    ring->rx_pending > HINIC3_MAX_RX_QUEUE_DEPTH ||
+	    ring->rx_pending < HINIC3_MIN_QUEUE_DEPTH) {
+		netdev_err(netdev,
+			   "Queue depth out of range tx[%d-%d] rx[%d-%d]\n",
+			   HINIC3_MIN_QUEUE_DEPTH, HINIC3_MAX_TX_QUEUE_DEPTH,
+			   HINIC3_MIN_QUEUE_DEPTH, HINIC3_MAX_RX_QUEUE_DEPTH);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int hinic3_set_ringparam(struct net_device *netdev,
+				struct ethtool_ringparam *ring,
+				struct kernel_ethtool_ringparam *kernel_ring,
+				struct netlink_ext_ack *extack)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	struct hinic3_dyna_txrxq_params q_params = {};
+	u32 new_sq_depth, new_rq_depth;
+	int err;
+
+	err = hinic3_check_ringparam_valid(netdev, ring);
+	if (err)
+		return err;
+
+	new_sq_depth = 1U << ilog2(ring->tx_pending);
+	new_rq_depth = 1U << ilog2(ring->rx_pending);
+	if (new_sq_depth == nic_dev->q_params.sq_depth &&
+	    new_rq_depth == nic_dev->q_params.rq_depth)
+		return 0;
+
+	netdev_dbg(netdev, "Change Tx/Rx ring depth from %u/%u to %u/%u\n",
+		   nic_dev->q_params.sq_depth, nic_dev->q_params.rq_depth,
+		   new_sq_depth, new_rq_depth);
+
+	if (!netif_running(netdev)) {
+		hinic3_update_qp_depth(netdev, new_sq_depth, new_rq_depth);
+	} else {
+		q_params = nic_dev->q_params;
+		q_params.sq_depth = new_sq_depth;
+		q_params.rq_depth = new_rq_depth;
+
+		err = hinic3_change_channel_settings(netdev, &q_params);
+		if (err) {
+			netdev_err(netdev, "Failed to change channel settings\n");
+			return err;
+		}
+	}
+
+	return 0;
+}
+
 static const struct ethtool_ops hinic3_ethtool_ops = {
 	.supported_coalesce_params = ETHTOOL_COALESCE_USECS |
				     ETHTOOL_COALESCE_PKT_RATE_RX_USECS,
@@ -417,6 +509,8 @@ static const struct ethtool_ops hinic3_ethtool_ops = {
 	.get_msglevel = hinic3_get_msglevel,
 	.set_msglevel = hinic3_set_msglevel,
 	.get_link = ethtool_op_get_link,
+	.get_ringparam = hinic3_get_ringparam,
+	.set_ringparam = hinic3_set_ringparam,
 };
 
 void hinic3_set_ethtool_ops(struct net_device *netdev)
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_irq.c b/drivers/net/ethernet/huawei/hinic3/hinic3_irq.c
index e7d6c2033b45..d3b3927b5408 100644
--- a/drivers/net/ethernet/huawei/hinic3/hinic3_irq.c
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_irq.c
@@ -135,10 +135,16 @@ static int hinic3_set_interrupt_moder(struct net_device *netdev, u16 q_id,
 {
 	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
 	struct hinic3_interrupt_info info = {};
+	unsigned long flags;
 	int err;
 
-	if (q_id >= nic_dev->q_params.num_qps)
+	spin_lock_irqsave(&nic_dev->channel_res_lock, flags);
+
+	if (!HINIC3_CHANNEL_RES_VALID(nic_dev) ||
+	    q_id >= nic_dev->q_params.num_qps) {
+		spin_unlock_irqrestore(&nic_dev->channel_res_lock, flags);
 		return 0;
+	}
 
 	info.interrupt_coalesc_set = 1;
 	info.coalesc_timer_cfg = coalesc_timer_cfg;
@@ -147,6 +153,8 @@ static int hinic3_set_interrupt_moder(struct net_device *netdev, u16 q_id,
 	info.resend_timer_cfg = nic_dev->intr_coalesce[q_id].resend_timer_cfg;
 
+	spin_unlock_irqrestore(&nic_dev->channel_res_lock, flags);
+
 	err = hinic3_set_interrupt_cfg(nic_dev->hwdev, info);
 	if (err) {
 		netdev_err(netdev,
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_main.c b/drivers/net/ethernet/huawei/hinic3/hinic3_main.c
index 0a888fe4c975..3b470978714a 100644
--- a/drivers/net/ethernet/huawei/hinic3/hinic3_main.c
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_main.c
@@ -179,6 +179,8 @@ static int hinic3_sw_init(struct net_device *netdev)
 	int err;
 
 	mutex_init(&nic_dev->port_state_mutex);
+	mutex_init(&nic_dev->channel_cfg_lock);
+	spin_lock_init(&nic_dev->channel_res_lock);
 
 	nic_dev->q_params.sq_depth = HINIC3_SQ_DEPTH;
 	nic_dev->q_params.rq_depth = HINIC3_RQ_DEPTH;
@@ -314,6 +316,15 @@ static void hinic3_link_status_change(struct net_device *netdev,
					      bool link_status_up)
 {
 	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	unsigned long flags;
+	bool valid;
+
+	spin_lock_irqsave(&nic_dev->channel_res_lock, flags);
+	valid = HINIC3_CHANNEL_RES_VALID(nic_dev);
+	spin_unlock_irqrestore(&nic_dev->channel_res_lock, flags);
+
+	if (!valid)
+		return;
 
 	if (link_status_up) {
 		if (netif_carrier_ok(netdev))
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_netdev_ops.c b/drivers/net/ethernet/huawei/hinic3/hinic3_netdev_ops.c
index da73811641a9..ae485afeb14e 100644
--- a/drivers/net/ethernet/huawei/hinic3/hinic3_netdev_ops.c
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_netdev_ops.c
@@ -428,6 +428,82 @@ static void hinic3_vport_down(struct net_device *netdev)
 	}
 }
 
+int
+hinic3_change_channel_settings(struct net_device *netdev,
+			       struct hinic3_dyna_txrxq_params *trxq_params)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	struct hinic3_dyna_qp_params new_qp_params = {};
+	struct hinic3_dyna_qp_params cur_qp_params = {};
+	bool need_teardown = false;
+	unsigned long flags;
+	int err;
+
+	mutex_lock(&nic_dev->channel_cfg_lock);
+
+	hinic3_config_num_qps(netdev, trxq_params);
+
+	err = hinic3_alloc_channel_resources(netdev, &new_qp_params,
+					     trxq_params);
+	if (err) {
+		netdev_err(netdev, "Failed to alloc channel resources\n");
+		mutex_unlock(&nic_dev->channel_cfg_lock);
+		return err;
+	}
+
+	spin_lock_irqsave(&nic_dev->channel_res_lock, flags);
+	if (!test_and_set_bit(HINIC3_CHANGE_RES_INVALID, &nic_dev->flags))
+		need_teardown = true;
+	spin_unlock_irqrestore(&nic_dev->channel_res_lock, flags);
+
+	if (need_teardown) {
+		hinic3_vport_down(netdev);
+		hinic3_close_channel(netdev);
+		hinic3_uninit_qps(nic_dev, &cur_qp_params);
+		hinic3_free_channel_resources(netdev, &cur_qp_params,
+					      &nic_dev->q_params);
+	}
+
+	if (nic_dev->num_qp_irq > trxq_params->num_qps)
+		hinic3_qp_irq_change(netdev, trxq_params->num_qps);
+
+	spin_lock_irqsave(&nic_dev->channel_res_lock, flags);
+	nic_dev->q_params = *trxq_params;
+	spin_unlock_irqrestore(&nic_dev->channel_res_lock, flags);
+
+	hinic3_init_qps(nic_dev, &new_qp_params);
+
+	err = hinic3_open_channel(netdev);
+	if (err)
+		goto err_uninit_qps;
+
+	err = hinic3_vport_up(netdev);
+	if (err)
+		goto err_close_channel;
+
+	spin_lock_irqsave(&nic_dev->channel_res_lock, flags);
+	clear_bit(HINIC3_CHANGE_RES_INVALID, &nic_dev->flags);
+	spin_unlock_irqrestore(&nic_dev->channel_res_lock, flags);
+
+	mutex_unlock(&nic_dev->channel_cfg_lock);
+
+	return 0;
+
+err_close_channel:
+	hinic3_close_channel(netdev);
+err_uninit_qps:
+	spin_lock_irqsave(&nic_dev->channel_res_lock, flags);
+	memset(&nic_dev->q_params, 0, sizeof(nic_dev->q_params));
+	spin_unlock_irqrestore(&nic_dev->channel_res_lock, flags);
+
+	hinic3_uninit_qps(nic_dev, &new_qp_params);
+	hinic3_free_channel_resources(netdev, &new_qp_params, trxq_params);
+
+	mutex_unlock(&nic_dev->channel_cfg_lock);
+
+	return err;
+}
+
 static int hinic3_open(struct net_device *netdev)
 {
 	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
@@ -487,16 +563,33 @@ static int hinic3_close(struct net_device *netdev)
 {
 	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
 	struct hinic3_dyna_qp_params qp_params;
+	bool need_teardown = false;
+	unsigned long flags;
 
 	if (!test_and_clear_bit(HINIC3_INTF_UP, &nic_dev->flags)) {
 		netdev_dbg(netdev, "Netdev already close, do nothing\n");
 		return 0;
 	}
 
-	hinic3_vport_down(netdev);
-	hinic3_close_channel(netdev);
-	hinic3_uninit_qps(nic_dev, &qp_params);
-	hinic3_free_channel_resources(netdev, &qp_params, &nic_dev->q_params);
+	mutex_lock(&nic_dev->channel_cfg_lock);
+
+	spin_lock_irqsave(&nic_dev->channel_res_lock, flags);
+	if (!test_and_set_bit(HINIC3_CHANGE_RES_INVALID, &nic_dev->flags))
+		need_teardown = true;
+	spin_unlock_irqrestore(&nic_dev->channel_res_lock, flags);
+
+	if (need_teardown) {
+		hinic3_vport_down(netdev);
+		hinic3_close_channel(netdev);
+		hinic3_uninit_qps(nic_dev, &qp_params);
+		hinic3_free_channel_resources(netdev, &qp_params,
+					      &nic_dev->q_params);
+	}
+
+	hinic3_free_nicio_res(nic_dev);
+	hinic3_destroy_num_qps(netdev);
+
+	mutex_unlock(&nic_dev->channel_cfg_lock);
 
 	return 0;
 }
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dev.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dev.h
index 9502293ff710..55b280888ad8 100644
--- a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dev.h
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dev.h
@@ -10,6 +10,9 @@
 #include "hinic3_hw_cfg.h"
 #include "hinic3_hwdev.h"
 #include "hinic3_mgmt_interface.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
 
 #define HINIC3_VLAN_BITMAP_BYTE_SIZE(nic_dev)  (sizeof(*(nic_dev)->vlan_bitmap))
 #define HINIC3_VLAN_BITMAP_SIZE(nic_dev) \
@@ -20,8 +23,13 @@ enum hinic3_flags {
 	HINIC3_MAC_FILTER_CHANGED,
 	HINIC3_RSS_ENABLE,
 	HINIC3_UPDATE_MAC_FILTER,
+	HINIC3_CHANGE_RES_INVALID,
 };
 
+#define HINIC3_CHANNEL_RES_VALID(nic_dev) \
+	(test_bit(HINIC3_INTF_UP, &(nic_dev)->flags) && \
+	 !test_bit(HINIC3_CHANGE_RES_INVALID, &(nic_dev)->flags))
+
 enum hinic3_event_work_flags {
 	HINIC3_EVENT_WORK_TX_TIMEOUT,
 };
@@ -129,6 +137,10 @@ struct hinic3_nic_dev {
 	struct work_struct rx_mode_work;
 	/* lock for enable/disable port */
 	struct mutex port_state_mutex;
+	/* lock for channel configuration */
+	struct mutex channel_cfg_lock;
+	/* lock for channel resources */
+	spinlock_t channel_res_lock;
 
 	struct list_head uc_filter_list;
 	struct list_head mc_filter_list;
@@ -143,6 +155,10 @@ struct hinic3_nic_dev {
 
 void hinic3_set_netdev_ops(struct net_device *netdev);
 int hinic3_set_hw_features(struct net_device *netdev);
+int
+hinic3_change_channel_settings(struct net_device *netdev,
+			       struct hinic3_dyna_txrxq_params *trxq_params);
+
 int hinic3_qps_irq_init(struct net_device *netdev);
 void hinic3_qps_irq_uninit(struct net_device *netdev);
 
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.h
index 12eefabcf1db..3791b9bc865b 100644
--- a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.h
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.h
@@ -14,6 +14,10 @@ struct hinic3_nic_dev;
 #define HINIC3_RQ_WQEBB_SHIFT          3
 #define HINIC3_SQ_WQEBB_SIZE           BIT(HINIC3_SQ_WQEBB_SHIFT)
 
+#define HINIC3_MAX_TX_QUEUE_DEPTH      65536
+#define HINIC3_MAX_RX_QUEUE_DEPTH      16384
+#define HINIC3_MIN_QUEUE_DEPTH         128
+
 /* ******************** RQ_CTRL ******************** */
 enum hinic3_rq_wqe_type {
 	HINIC3_NORMAL_RQ_WQE = 1,
-- 
2.43.0

From nobody Wed Apr 1 12:32:17 2026
From: Fan Gong
To: Fan Gong, Zhu Yikai, "David S. Miller", Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, Simon Horman, Andrew Lunn, Ioana Ciornei
CC: luosifu, Xin Guo, Zhou Shuai, Wu Like, Shi Jing, Zheng Jiezhen,
 Maxime Chevallier
Subject: [PATCH net-next v03 2/6] hinic3: Add ethtool statistic ops
Date: Tue, 31 Mar 2026 15:56:21 +0800

Add PF/VF statistics functions in TX and RX processing.
Implement the following ethtool callbacks:

.get_sset_count
.get_ethtool_stats
.get_strings
.get_eth_phy_stats
.get_eth_mac_stats
.get_eth_ctrl_stats
.get_rmon_stats
.get_pause_stats

These callbacks allow users to utilize ethtool for detailed TX and RX
netdev statistics monitoring.

Co-developed-by: Zhu Yikai
Signed-off-by: Zhu Yikai
Signed-off-by: Fan Gong
---
 .../ethernet/huawei/hinic3/hinic3_ethtool.c   | 493 ++++++++++++++++++
 .../ethernet/huawei/hinic3/hinic3_hw_intf.h   |  13 +-
 .../net/ethernet/huawei/hinic3/hinic3_main.c  |   1 +
 .../huawei/hinic3/hinic3_mgmt_interface.h     |  37 ++
 .../ethernet/huawei/hinic3/hinic3_nic_cfg.c   |  64 +++
 .../ethernet/huawei/hinic3/hinic3_nic_cfg.h   | 109 ++++
 .../ethernet/huawei/hinic3/hinic3_nic_dev.h   |   8 +
 .../net/ethernet/huawei/hinic3/hinic3_rx.c    |  59 ++-
 .../net/ethernet/huawei/hinic3/hinic3_rx.h    |  14 +
 .../net/ethernet/huawei/hinic3/hinic3_tx.c    |  80 ++-
 .../net/ethernet/huawei/hinic3/hinic3_tx.h    |   2 +
 11 files changed, 871 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c b/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c
index d78aff802a20..7fd8ad053c6e 100644
--- a/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c
@@ -501,6 +501,491 @@ static int hinic3_set_ringparam(struct net_device *netdev,
 	return 0;
 }
 
+struct hinic3_stats {
+	char name[ETH_GSTRING_LEN];
+	u32 size;
+	int offset;
+};
+
+#define HINIC3_NIC_STAT(_stat_item) { \
+	.name = #_stat_item, \
+	.size = sizeof_field(struct hinic3_nic_stats, _stat_item), \
+	.offset = offsetof(struct hinic3_nic_stats, _stat_item) \
+}
+
+#define HINIC3_RXQ_STAT(_stat_item) { \
+	.name = "rxq%d_"#_stat_item, \
+	.size = sizeof_field(struct hinic3_rxq_stats, _stat_item), \
+	.offset = offsetof(struct hinic3_rxq_stats, _stat_item) \
+}
+
+#define HINIC3_TXQ_STAT(_stat_item) { \
+	.name = "txq%d_"#_stat_item, \
+	.size = sizeof_field(struct hinic3_txq_stats, _stat_item), \
+	.offset = offsetof(struct hinic3_txq_stats, _stat_item) \
+}
+
+static struct hinic3_stats hinic3_rx_queue_stats[] = {
+	HINIC3_RXQ_STAT(csum_errors),
+	HINIC3_RXQ_STAT(other_errors),
+	HINIC3_RXQ_STAT(rx_buf_empty),
+	HINIC3_RXQ_STAT(alloc_skb_err),
+	HINIC3_RXQ_STAT(alloc_rx_buf_err),
+	HINIC3_RXQ_STAT(restore_drop_sge),
+};
+
+static struct hinic3_stats hinic3_tx_queue_stats[] = {
+	HINIC3_TXQ_STAT(busy),
+	HINIC3_TXQ_STAT(skb_pad_err),
+	HINIC3_TXQ_STAT(frag_len_overflow),
+	HINIC3_TXQ_STAT(offload_cow_skb_err),
+	HINIC3_TXQ_STAT(map_frag_err),
+	HINIC3_TXQ_STAT(unknown_tunnel_pkt),
+	HINIC3_TXQ_STAT(frag_size_err),
+};
+
+#define HINIC3_FUNC_STAT(_stat_item) { \
+	.name = #_stat_item, \
+	.size = sizeof_field(struct l2nic_vport_stats, _stat_item), \
+	.offset = offsetof(struct l2nic_vport_stats, _stat_item) \
+}
+
+static struct hinic3_stats hinic3_function_stats[] = {
+	HINIC3_FUNC_STAT(tx_unicast_pkts_vport),
+	HINIC3_FUNC_STAT(tx_unicast_bytes_vport),
+	HINIC3_FUNC_STAT(tx_multicast_pkts_vport),
+	HINIC3_FUNC_STAT(tx_multicast_bytes_vport),
+	HINIC3_FUNC_STAT(tx_broadcast_pkts_vport),
+	HINIC3_FUNC_STAT(tx_broadcast_bytes_vport),
+
+	HINIC3_FUNC_STAT(rx_unicast_pkts_vport),
+	HINIC3_FUNC_STAT(rx_unicast_bytes_vport),
+	HINIC3_FUNC_STAT(rx_multicast_pkts_vport),
+	HINIC3_FUNC_STAT(rx_multicast_bytes_vport),
+	HINIC3_FUNC_STAT(rx_broadcast_pkts_vport),
+	HINIC3_FUNC_STAT(rx_broadcast_bytes_vport),
+
+	HINIC3_FUNC_STAT(tx_discard_vport),
+	HINIC3_FUNC_STAT(rx_discard_vport),
+	HINIC3_FUNC_STAT(tx_err_vport),
+	HINIC3_FUNC_STAT(rx_err_vport),
+};
+
+#define HINIC3_PORT_STAT(_stat_item) { \
+	.name = #_stat_item, \
+	.size = sizeof_field(struct mag_cmd_port_stats, _stat_item), \
+	.offset = offsetof(struct mag_cmd_port_stats, _stat_item) \
+}
+
+static struct hinic3_stats hinic3_port_stats[] = {
+	HINIC3_PORT_STAT(mac_tx_fragment_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_undersize_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_undermin_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_1519_max_bad_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_1519_max_good_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_oversize_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_jabber_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_bad_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_bad_oct_num),
+	HINIC3_PORT_STAT(mac_tx_good_oct_num),
+	HINIC3_PORT_STAT(mac_tx_total_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_uni_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_pfc_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_pfc_pri0_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_pfc_pri1_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_pfc_pri2_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_pfc_pri3_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_pfc_pri4_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_pfc_pri5_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_pfc_pri6_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_pfc_pri7_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_err_all_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_from_app_good_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_from_app_bad_pkt_num),
+
+	HINIC3_PORT_STAT(mac_rx_undermin_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_1519_max_bad_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_1519_max_good_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_bad_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_bad_oct_num),
+	HINIC3_PORT_STAT(mac_rx_good_oct_num),
+	HINIC3_PORT_STAT(mac_rx_total_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_uni_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_pfc_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_pfc_pri0_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_pfc_pri1_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_pfc_pri2_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_pfc_pri3_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_pfc_pri4_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_pfc_pri5_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_pfc_pri6_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_pfc_pri7_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_send_app_good_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_send_app_bad_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_unfilter_pkt_num),
+};
+
+static int hinic3_get_sset_count(struct net_device *netdev, int sset)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	int count, q_num;
+
+	switch (sset) {
+	case ETH_SS_STATS:
+		q_num = nic_dev->q_params.num_qps;
+		count = ARRAY_SIZE(hinic3_function_stats) +
+			(ARRAY_SIZE(hinic3_tx_queue_stats) +
+			 ARRAY_SIZE(hinic3_rx_queue_stats)) *
+			q_num;
+
+		if (!HINIC3_IS_VF(nic_dev->hwdev))
+			count += ARRAY_SIZE(hinic3_port_stats);
+
+		return count;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static u64 get_val_of_ptr(u32 size, const void *ptr)
+{
+	u64 ret = size == sizeof(u64) ? *(u64 *)ptr :
+		  size == sizeof(u32) ? *(u32 *)ptr :
+		  size == sizeof(u16) ? *(u16 *)ptr :
+		  *(u8 *)ptr;
+
+	return ret;
+}
+
+static void hinic3_get_drv_queue_stats(struct net_device *netdev, u64 *data)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	struct hinic3_txq_stats txq_stats = {};
+	struct hinic3_rxq_stats rxq_stats = {};
+	u16 i = 0, j, qid;
+	char *p;
+
+	u64_stats_init(&txq_stats.syncp);
+	u64_stats_init(&rxq_stats.syncp);
+
+	for (qid = 0; qid < nic_dev->q_params.num_qps; qid++) {
+		if (!nic_dev->txqs)
+			break;
+
+		hinic3_txq_get_stats(&nic_dev->txqs[qid], &txq_stats);
+		for (j = 0; j < ARRAY_SIZE(hinic3_tx_queue_stats); j++, i++) {
+			p = (char *)&txq_stats +
+			    hinic3_tx_queue_stats[j].offset;
+			data[i] = get_val_of_ptr(hinic3_tx_queue_stats[j].size,
+						 p);
+		}
+	}
+
+	for (qid = 0; qid < nic_dev->q_params.num_qps; qid++) {
+		if (!nic_dev->rxqs)
+			break;
+
+		hinic3_rxq_get_stats(&nic_dev->rxqs[qid], &rxq_stats);
+		for (j = 0; j < ARRAY_SIZE(hinic3_rx_queue_stats); j++, i++) {
+			p = (char *)&rxq_stats +
+			    hinic3_rx_queue_stats[j].offset;
+			data[i] = get_val_of_ptr(hinic3_rx_queue_stats[j].size,
+						 p);
+		}
+	}
+}
+
+static u16 hinic3_get_ethtool_port_stats(struct net_device *netdev, u64 *data)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	struct mag_cmd_port_stats *ps;
+	u16 i = 0, j;
+	char *p;
+	int err;
+
+	ps = kmalloc_obj(*ps);
+	if (!ps)
+		goto err_zero_stats;
+
+	err = hinic3_get_phy_port_stats(nic_dev->hwdev, ps);
+	if (err) {
+		kfree(ps);
+		netdev_err(netdev, "Failed to get port stats from fw\n");
+		goto err_zero_stats;
+	}
+
+	for (j = 0; j < ARRAY_SIZE(hinic3_port_stats); j++, i++) {
+		p = (char *)ps + hinic3_port_stats[j].offset;
+		data[i] = get_val_of_ptr(hinic3_port_stats[j].size, p);
+	}
+
+	kfree(ps);
+
+	return i;
+
+err_zero_stats:
+	memset(&data[i], 0, ARRAY_SIZE(hinic3_port_stats) * sizeof(*data));
+
+	return i + ARRAY_SIZE(hinic3_port_stats);
+}
+
+static void hinic3_get_ethtool_stats(struct net_device *netdev,
+				     struct ethtool_stats *stats, u64 *data)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	struct l2nic_vport_stats vport_stats = {};
+	u16 i = 0, j;
+	char *p;
+	int err;
+
+	err = hinic3_get_vport_stats(nic_dev->hwdev,
+				     hinic3_global_func_id(nic_dev->hwdev),
+				     &vport_stats);
+	if (err)
+		netdev_err(netdev, "Failed to get function stats from fw\n");
+
+	for (j = 0; j < ARRAY_SIZE(hinic3_function_stats); j++, i++) {
+		p = (char *)&vport_stats + hinic3_function_stats[j].offset;
+		data[i] = get_val_of_ptr(hinic3_function_stats[j].size, p);
+	}
+
+	if (!HINIC3_IS_VF(nic_dev->hwdev))
+		i += hinic3_get_ethtool_port_stats(netdev, data + i);
+
+	hinic3_get_drv_queue_stats(netdev, data + i);
+}
+
+static u16 hinic3_get_hw_stats_strings(struct net_device *netdev, char *p)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	u16 i, cnt = 0;
+
+	for (i = 0; i < ARRAY_SIZE(hinic3_function_stats); i++) {
+		memcpy(p, hinic3_function_stats[i].name, ETH_GSTRING_LEN);
+		p += ETH_GSTRING_LEN;
+		cnt++;
+	}
+
+	if (!HINIC3_IS_VF(nic_dev->hwdev)) {
+		for (i = 0; i < ARRAY_SIZE(hinic3_port_stats); i++) {
+			memcpy(p, hinic3_port_stats[i].name, ETH_GSTRING_LEN);
+			p += ETH_GSTRING_LEN;
+			cnt++;
+		}
+	}
+
+	return cnt;
+}
+
+static void hinic3_get_qp_stats_strings(const struct net_device *netdev,
+					char *p)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	u8 *data = p;
+	u16 i, j;
+
+	for (i = 0; i < nic_dev->q_params.num_qps; i++) {
+		for (j = 0; j < ARRAY_SIZE(hinic3_tx_queue_stats); j++)
+			ethtool_sprintf(&data,
+					hinic3_tx_queue_stats[j].name, i);
+	}
+
+	for (i = 0; i < nic_dev->q_params.num_qps; i++) {
+		for (j = 0; j < ARRAY_SIZE(hinic3_rx_queue_stats); j++)
+			ethtool_sprintf(&data,
+					hinic3_rx_queue_stats[j].name, i);
+	}
+}
+
+static void hinic3_get_strings(struct net_device *netdev,
+			       u32 stringset, u8 *data)
+{
+	char *p = (char *)data;
+	u16 offset;
+
+	switch (stringset) {
+	case ETH_SS_STATS:
+		offset = hinic3_get_hw_stats_strings(netdev, p);
+		hinic3_get_qp_stats_strings(netdev,
+					    p + offset * ETH_GSTRING_LEN);
+
+		return;
+	default:
+		netdev_err(netdev, "Invalid string set %u.\n", stringset);
+		return;
+	}
+}
+
+static void hinic3_get_eth_phy_stats(struct net_device *netdev,
+				     struct ethtool_eth_phy_stats *phy_stats)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	struct mag_cmd_port_stats *ps;
+	int err;
+
+	ps = kmalloc_obj(*ps);
+	if (!ps)
+		return;
+
+	err = hinic3_get_phy_port_stats(nic_dev->hwdev, ps);
+	if (err) {
+		kfree(ps);
+		netdev_err(netdev, "Failed to get eth phy stats from fw\n");
+		return;
+	}
+
+	phy_stats->SymbolErrorDuringCarrier = ps->mac_rx_sym_err_pkt_num;
+
+	kfree(ps);
+}
+
+static void hinic3_get_eth_mac_stats(struct net_device *netdev,
+				     struct ethtool_eth_mac_stats *mac_stats)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	struct mag_cmd_port_stats *ps;
+	int err;
+
+	ps = kmalloc_obj(*ps);
+	if (!ps)
+		return;
+
+	err = hinic3_get_phy_port_stats(nic_dev->hwdev, ps);
+	if (err) {
+		kfree(ps);
+		netdev_err(netdev, "Failed to get eth mac stats from fw\n");
+		return;
+	}
+
+	mac_stats->FramesTransmittedOK = ps->mac_tx_good_pkt_num;
+	mac_stats->FramesReceivedOK = ps->mac_rx_good_pkt_num;
+	mac_stats->FrameCheckSequenceErrors = ps->mac_rx_fcs_err_pkt_num;
+	mac_stats->OctetsTransmittedOK = ps->mac_tx_total_oct_num;
+	mac_stats->OctetsReceivedOK = ps->mac_rx_total_oct_num;
+	mac_stats->MulticastFramesXmittedOK = ps->mac_tx_multi_pkt_num;
+	mac_stats->BroadcastFramesXmittedOK = ps->mac_tx_broad_pkt_num;
+	mac_stats->MulticastFramesReceivedOK = ps->mac_rx_multi_pkt_num;
+	mac_stats->BroadcastFramesReceivedOK = ps->mac_rx_broad_pkt_num;
+
+	kfree(ps);
+}
+
+static void hinic3_get_eth_ctrl_stats(struct net_device *netdev,
+				      struct ethtool_eth_ctrl_stats *ctrl_stats)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	struct mag_cmd_port_stats *ps;
+	int err;
+
+	ps = kmalloc_obj(*ps);
+	if (!ps)
+		return;
+
+	err = hinic3_get_phy_port_stats(nic_dev->hwdev, ps);
+	if (err) {
+		kfree(ps);
+		netdev_err(netdev, "Failed to get eth ctrl stats from fw\n");
+		return;
+	}
+
+	ctrl_stats->MACControlFramesTransmitted = ps->mac_tx_control_pkt_num;
+	ctrl_stats->MACControlFramesReceived = ps->mac_rx_control_pkt_num;
+
+	kfree(ps);
+}
+
+static const struct ethtool_rmon_hist_range hinic3_rmon_ranges[] = {
+	{ 0, 64 },
+	{ 65, 127 },
+	{ 128, 255 },
+	{ 256, 511 },
+	{ 512, 1023 },
+	{ 1024, 1518 },
+	{ 1519, 2047 },
+	{ 2048, 4095 },
+	{ 4096, 8191 },
+	{ 8192, 9216 },
+	{ 9217, 12287 },
+	{}
+};
+
+static void hinic3_get_rmon_stats(struct net_device *netdev,
+				  struct ethtool_rmon_stats *rmon_stats,
+				  const struct ethtool_rmon_hist_range **ranges)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	struct mag_cmd_port_stats *ps;
+	int err;
+
+	ps = kmalloc_obj(*ps);
+	if (!ps)
+		return;
+
+	err = hinic3_get_phy_port_stats(nic_dev->hwdev, ps);
+	if (err) {
+		kfree(ps);
+		netdev_err(netdev, "Failed to get eth rmon stats from fw\n");
+		return;
+	}
+
+	rmon_stats->undersize_pkts = ps->mac_rx_undersize_pkt_num;
+	rmon_stats->oversize_pkts = ps->mac_rx_oversize_pkt_num;
+	rmon_stats->fragments = ps->mac_rx_fragment_pkt_num;
+	rmon_stats->jabbers = ps->mac_rx_jabber_pkt_num;
+
+	rmon_stats->hist[0] = ps->mac_rx_64_oct_pkt_num;
+	rmon_stats->hist[1] = ps->mac_rx_65_127_oct_pkt_num;
+	rmon_stats->hist[2] = ps->mac_rx_128_255_oct_pkt_num;
+	rmon_stats->hist[3] = ps->mac_rx_256_511_oct_pkt_num;
+	rmon_stats->hist[4] = ps->mac_rx_512_1023_oct_pkt_num;
+	rmon_stats->hist[5] = ps->mac_rx_1024_1518_oct_pkt_num;
+	rmon_stats->hist[6] = ps->mac_rx_1519_2047_oct_pkt_num;
+	rmon_stats->hist[7] = ps->mac_rx_2048_4095_oct_pkt_num;
+	rmon_stats->hist[8] = ps->mac_rx_4096_8191_oct_pkt_num;
+	rmon_stats->hist[9] = ps->mac_rx_8192_9216_oct_pkt_num;
+	rmon_stats->hist[10] = ps->mac_rx_9217_12287_oct_pkt_num;
+
+	rmon_stats->hist_tx[0] = ps->mac_tx_64_oct_pkt_num;
+	rmon_stats->hist_tx[1] = ps->mac_tx_65_127_oct_pkt_num;
+	rmon_stats->hist_tx[2] = ps->mac_tx_128_255_oct_pkt_num;
+	rmon_stats->hist_tx[3] = ps->mac_tx_256_511_oct_pkt_num;
+	rmon_stats->hist_tx[4] = ps->mac_tx_512_1023_oct_pkt_num;
+	rmon_stats->hist_tx[5] = ps->mac_tx_1024_1518_oct_pkt_num;
+	rmon_stats->hist_tx[6] = ps->mac_tx_1519_2047_oct_pkt_num;
+	rmon_stats->hist_tx[7] = ps->mac_tx_2048_4095_oct_pkt_num;
+	rmon_stats->hist_tx[8] = ps->mac_tx_4096_8191_oct_pkt_num;
+	rmon_stats->hist_tx[9] = ps->mac_tx_8192_9216_oct_pkt_num;
+	rmon_stats->hist_tx[10] = ps->mac_tx_9217_12287_oct_pkt_num;
+
+	*ranges = hinic3_rmon_ranges;
+
+	kfree(ps);
+}
+
+static void hinic3_get_pause_stats(struct net_device *netdev,
+				   struct ethtool_pause_stats *pause_stats)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	struct mag_cmd_port_stats *ps;
+	int err;
+
+	ps = kmalloc_obj(*ps);
+	if (!ps)
+		return;
+
+	err = hinic3_get_phy_port_stats(nic_dev->hwdev, ps);
+	if (err) {
+		kfree(ps);
+		netdev_err(netdev, "Failed to get eth pause stats from fw\n");
+		return;
+	}
+
+	pause_stats->tx_pause_frames = ps->mac_tx_pause_num;
+	pause_stats->rx_pause_frames = ps->mac_rx_pause_num;
+
+	kfree(ps);
+}
+
 static const struct ethtool_ops hinic3_ethtool_ops = {
 	.supported_coalesce_params = ETHTOOL_COALESCE_USECS |
				     ETHTOOL_COALESCE_PKT_RATE_RX_USECS,
@@ -511,6 +996,14 @@ static const struct ethtool_ops hinic3_ethtool_ops = {
 	.get_link = ethtool_op_get_link,
 	.get_ringparam = hinic3_get_ringparam,
 	.set_ringparam = hinic3_set_ringparam,
+	.get_sset_count = hinic3_get_sset_count,
+	.get_ethtool_stats = hinic3_get_ethtool_stats,
+	.get_strings = hinic3_get_strings,
+	.get_eth_phy_stats = hinic3_get_eth_phy_stats,
+	.get_eth_mac_stats = hinic3_get_eth_mac_stats,
+	.get_eth_ctrl_stats = hinic3_get_eth_ctrl_stats,
+	.get_rmon_stats = hinic3_get_rmon_stats,
+	.get_pause_stats = hinic3_get_pause_stats,
 };
 
 void hinic3_set_ethtool_ops(struct net_device *netdev)
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_hw_intf.h b/drivers/net/ethernet/huawei/hinic3/hinic3_hw_intf.h
index cfc9daa3034f..0b2ebef04c02 100644
--- a/drivers/net/ethernet/huawei/hinic3/hinic3_hw_intf.h
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_hw_intf.h
@@ -51,7 +51,18 @@ static inline void mgmt_msg_params_init_default(struct mgmt_msg_params *msg_para
 	msg_params->in_size = buf_size;
 	msg_params->expected_out_size = buf_size;
 	msg_params->timeout_ms = 0;
-}
+};
+
+static inline void
+mgmt_msg_params_init_in_out(struct mgmt_msg_params *msg_params, void *in_buf,
+			    void *out_buf, u32 in_buf_size, u32 out_buf_size)
+{
+	msg_params->buf_in = in_buf;
+	msg_params->buf_out = out_buf;
+	msg_params->in_size = in_buf_size;
+	msg_params->expected_out_size = out_buf_size;
+	msg_params->timeout_ms = 0;
+};
 
 enum cfg_cmd {
 	CFG_CMD_GET_DEV_CAP = 0,
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_main.c b/drivers/net/ethernet/huawei/hinic3/hinic3_main.c
index 3b470978714a..60834f8dffcd 100644
--- a/drivers/net/ethernet/huawei/hinic3/hinic3_main.c
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_main.c
@@ -153,6 +153,7 @@ static int hinic3_init_nic_dev(struct net_device *netdev,
 		return -ENOMEM;
 
 	nic_dev->nic_svc_cap = hwdev->cfg_mgmt->cap.nic_svc_cap;
+
u64_stats_init(&nic_dev->stats.syncp); =20 nic_dev->workq =3D create_singlethread_workqueue(HINIC3_NIC_DEV_WQ_NAME); if (!nic_dev->workq) { diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h b/d= rivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h index c5bca3c4af96..76c691f82703 100644 --- a/drivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h +++ b/drivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h @@ -143,6 +143,41 @@ struct l2nic_cmd_set_dcb_state { u8 rsvd[7]; }; =20 +struct l2nic_port_stats_info { + struct mgmt_msg_head msg_head; + u16 func_id; + u16 rsvd1; +}; + +struct l2nic_vport_stats { + u64 tx_unicast_pkts_vport; + u64 tx_unicast_bytes_vport; + u64 tx_multicast_pkts_vport; + u64 tx_multicast_bytes_vport; + u64 tx_broadcast_pkts_vport; + u64 tx_broadcast_bytes_vport; + + u64 rx_unicast_pkts_vport; + u64 rx_unicast_bytes_vport; + u64 rx_multicast_pkts_vport; + u64 rx_multicast_bytes_vport; + u64 rx_broadcast_pkts_vport; + u64 rx_broadcast_bytes_vport; + + u64 tx_discard_vport; + u64 rx_discard_vport; + u64 tx_err_vport; + u64 rx_err_vport; +}; + +struct l2nic_cmd_vport_stats { + struct mgmt_msg_head msg_head; + u32 stats_size; + u32 rsvd1; + struct l2nic_vport_stats stats; + u64 rsvd2[6]; +}; + struct l2nic_cmd_lro_config { struct mgmt_msg_head msg_head; u16 func_id; @@ -234,6 +269,7 @@ enum l2nic_cmd { L2NIC_CMD_SET_VPORT_ENABLE =3D 6, L2NIC_CMD_SET_RX_MODE =3D 7, L2NIC_CMD_SET_SQ_CI_ATTR =3D 8, + L2NIC_CMD_GET_VPORT_STAT =3D 9, L2NIC_CMD_CLEAR_QP_RESOURCE =3D 11, L2NIC_CMD_CFG_RX_LRO =3D 13, L2NIC_CMD_CFG_LRO_TIMER =3D 14, @@ -272,6 +308,7 @@ enum mag_cmd { MAG_CMD_SET_PORT_ENABLE =3D 6, MAG_CMD_GET_LINK_STATUS =3D 7, =20 + MAG_CMD_GET_PORT_STAT =3D 151, MAG_CMD_GET_PORT_INFO =3D 153, }; =20 diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.c b/drivers/= net/ethernet/huawei/hinic3/hinic3_nic_cfg.c index de5a7984d2cb..1b14dc824ce1 100644 --- a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.c +++ 
b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.c @@ -639,6 +639,42 @@ int hinic3_get_link_status(struct hinic3_hwdev *hwdev,= bool *link_status_up) return 0; } =20 +int hinic3_get_phy_port_stats(struct hinic3_hwdev *hwdev, + struct mag_cmd_port_stats *stats) +{ + struct mag_cmd_port_stats_info stats_info =3D {}; + struct mag_cmd_get_port_stat *ps; + struct mgmt_msg_params msg_params =3D {}; + int err; + + ps =3D kzalloc_obj(*ps); + if (!ps) + return -ENOMEM; + + stats_info.port_id =3D hinic3_physical_port_id(hwdev); + + mgmt_msg_params_init_in_out(&msg_params, &stats_info, ps, + sizeof(stats_info), sizeof(*ps)); + + err =3D hinic3_send_mbox_to_mgmt(hwdev, MGMT_MOD_HILINK, + MAG_CMD_GET_PORT_STAT, &msg_params); + + if (err || ps->head.status) { + dev_err(hwdev->dev, + "Failed to get port statistics, err: %d, status: 0x%x\n", + err, ps->head.status); + err =3D -EFAULT; + goto out; + } + + memcpy(stats, &ps->counter, sizeof(*stats)); + +out: + kfree(ps); + + return err; +} + int hinic3_get_port_info(struct hinic3_hwdev *hwdev, struct hinic3_nic_port_info *port_info) { @@ -738,3 +774,31 @@ int hinic3_get_pause_info(struct hinic3_nic_dev *nic_d= ev, return hinic3_cfg_hw_pause(nic_dev->hwdev, MGMT_MSG_CMD_OP_GET, nic_pause); } + +int hinic3_get_vport_stats(struct hinic3_hwdev *hwdev, u16 func_id, + struct l2nic_vport_stats *stats) +{ + struct l2nic_cmd_vport_stats vport_stats =3D {}; + struct l2nic_port_stats_info stats_info =3D {}; + struct mgmt_msg_params msg_params =3D {}; + int err; + + stats_info.func_id =3D func_id; + + mgmt_msg_params_init_in_out(&msg_params, &stats_info, &vport_stats, + sizeof(stats_info), sizeof(vport_stats)); + + err =3D hinic3_send_mbox_to_mgmt(hwdev, MGMT_MOD_L2NIC, + L2NIC_CMD_GET_VPORT_STAT, &msg_params); + + if (err || vport_stats.msg_head.status) { + dev_err(hwdev->dev, + "Failed to get function statistics, err: %d, status: 0x%x\n", + err, vport_stats.msg_head.status); + return -EFAULT; + } + + memcpy(stats, &vport_stats.stats, 
sizeof(*stats)); + + return 0; +} diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.h b/drivers/= net/ethernet/huawei/hinic3/hinic3_nic_cfg.h index 5d52202a8d4e..80573c121539 100644 --- a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.h +++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.h @@ -129,6 +129,110 @@ struct mag_cmd_get_xsfp_present { u8 rsvd[2]; }; =20 +struct mag_cmd_port_stats { + u64 mac_tx_fragment_pkt_num; + u64 mac_tx_undersize_pkt_num; + u64 mac_tx_undermin_pkt_num; + u64 mac_tx_64_oct_pkt_num; + u64 mac_tx_65_127_oct_pkt_num; + u64 mac_tx_128_255_oct_pkt_num; + u64 mac_tx_256_511_oct_pkt_num; + u64 mac_tx_512_1023_oct_pkt_num; + u64 mac_tx_1024_1518_oct_pkt_num; + u64 mac_tx_1519_2047_oct_pkt_num; + u64 mac_tx_2048_4095_oct_pkt_num; + u64 mac_tx_4096_8191_oct_pkt_num; + u64 mac_tx_8192_9216_oct_pkt_num; + u64 mac_tx_9217_12287_oct_pkt_num; + u64 mac_tx_12288_16383_oct_pkt_num; + u64 mac_tx_1519_max_bad_pkt_num; + u64 mac_tx_1519_max_good_pkt_num; + u64 mac_tx_oversize_pkt_num; + u64 mac_tx_jabber_pkt_num; + u64 mac_tx_bad_pkt_num; + u64 mac_tx_bad_oct_num; + u64 mac_tx_good_pkt_num; + u64 mac_tx_good_oct_num; + u64 mac_tx_total_pkt_num; + u64 mac_tx_total_oct_num; + u64 mac_tx_uni_pkt_num; + u64 mac_tx_multi_pkt_num; + u64 mac_tx_broad_pkt_num; + u64 mac_tx_pause_num; + u64 mac_tx_pfc_pkt_num; + u64 mac_tx_pfc_pri0_pkt_num; + u64 mac_tx_pfc_pri1_pkt_num; + u64 mac_tx_pfc_pri2_pkt_num; + u64 mac_tx_pfc_pri3_pkt_num; + u64 mac_tx_pfc_pri4_pkt_num; + u64 mac_tx_pfc_pri5_pkt_num; + u64 mac_tx_pfc_pri6_pkt_num; + u64 mac_tx_pfc_pri7_pkt_num; + u64 mac_tx_control_pkt_num; + u64 mac_tx_err_all_pkt_num; + u64 mac_tx_from_app_good_pkt_num; + u64 mac_tx_from_app_bad_pkt_num; + + u64 mac_rx_fragment_pkt_num; + u64 mac_rx_undersize_pkt_num; + u64 mac_rx_undermin_pkt_num; + u64 mac_rx_64_oct_pkt_num; + u64 mac_rx_65_127_oct_pkt_num; + u64 mac_rx_128_255_oct_pkt_num; + u64 mac_rx_256_511_oct_pkt_num; + u64 mac_rx_512_1023_oct_pkt_num; + u64 
mac_rx_1024_1518_oct_pkt_num; + u64 mac_rx_1519_2047_oct_pkt_num; + u64 mac_rx_2048_4095_oct_pkt_num; + u64 mac_rx_4096_8191_oct_pkt_num; + u64 mac_rx_8192_9216_oct_pkt_num; + u64 mac_rx_9217_12287_oct_pkt_num; + u64 mac_rx_12288_16383_oct_pkt_num; + u64 mac_rx_1519_max_bad_pkt_num; + u64 mac_rx_1519_max_good_pkt_num; + u64 mac_rx_oversize_pkt_num; + u64 mac_rx_jabber_pkt_num; + u64 mac_rx_bad_pkt_num; + u64 mac_rx_bad_oct_num; + u64 mac_rx_good_pkt_num; + u64 mac_rx_good_oct_num; + u64 mac_rx_total_pkt_num; + u64 mac_rx_total_oct_num; + u64 mac_rx_uni_pkt_num; + u64 mac_rx_multi_pkt_num; + u64 mac_rx_broad_pkt_num; + u64 mac_rx_pause_num; + u64 mac_rx_pfc_pkt_num; + u64 mac_rx_pfc_pri0_pkt_num; + u64 mac_rx_pfc_pri1_pkt_num; + u64 mac_rx_pfc_pri2_pkt_num; + u64 mac_rx_pfc_pri3_pkt_num; + u64 mac_rx_pfc_pri4_pkt_num; + u64 mac_rx_pfc_pri5_pkt_num; + u64 mac_rx_pfc_pri6_pkt_num; + u64 mac_rx_pfc_pri7_pkt_num; + u64 mac_rx_control_pkt_num; + u64 mac_rx_sym_err_pkt_num; + u64 mac_rx_fcs_err_pkt_num; + u64 mac_rx_send_app_good_pkt_num; + u64 mac_rx_send_app_bad_pkt_num; + u64 mac_rx_unfilter_pkt_num; +}; + +struct mag_cmd_port_stats_info { + struct mgmt_msg_head head; + + u8 port_id; + u8 rsvd0[3]; +}; + +struct mag_cmd_get_port_stat { + struct mgmt_msg_head head; + + struct mag_cmd_port_stats counter; + u64 rsvd1[15]; +}; + enum link_err_type { LINK_ERR_MODULE_UNRECOGENIZED, LINK_ERR_NUM, @@ -209,6 +313,11 @@ int hinic3_get_port_info(struct hinic3_hwdev *hwdev, struct hinic3_nic_port_info *port_info); int hinic3_set_vport_enable(struct hinic3_hwdev *hwdev, u16 func_id, bool enable); +int hinic3_get_phy_port_stats(struct hinic3_hwdev *hwdev, + struct mag_cmd_port_stats *stats); +int hinic3_get_vport_stats(struct hinic3_hwdev *hwdev, u16 func_id, + struct l2nic_vport_stats *stats); + int hinic3_add_vlan(struct hinic3_hwdev *hwdev, u16 vlan_id, u16 func_id); int hinic3_del_vlan(struct hinic3_hwdev *hwdev, u16 vlan_id, u16 func_id); =20 diff --git 
a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dev.h b/drivers/= net/ethernet/huawei/hinic3/hinic3_nic_dev.h index 55b280888ad8..8f6e0914c31e 100644 --- a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dev.h +++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dev.h @@ -34,6 +34,13 @@ enum hinic3_event_work_flags { HINIC3_EVENT_WORK_TX_TIMEOUT, }; =20 +struct hinic3_nic_stats { + /* Subdivision statistics show in private tool */ + u64 tx_carrier_off_drop; + u64 tx_invalid_qid; + struct u64_stats_sync syncp; +}; + enum hinic3_rx_mode_state { HINIC3_HW_PROMISC_ON, HINIC3_HW_ALLMULTI_ON, @@ -120,6 +127,7 @@ struct hinic3_nic_dev { struct hinic3_dyna_txrxq_params q_params; struct hinic3_txq *txqs; struct hinic3_rxq *rxqs; + struct hinic3_nic_stats stats; =20 enum hinic3_rss_hash_type rss_hash_type; struct hinic3_rss_type rss_type; diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_rx.c b/drivers/net/e= thernet/huawei/hinic3/hinic3_rx.c index 309ab5901379..8951df172f0e 100644 --- a/drivers/net/ethernet/huawei/hinic3/hinic3_rx.c +++ b/drivers/net/ethernet/huawei/hinic3/hinic3_rx.c @@ -29,7 +29,7 @@ #define HINIC3_LRO_PKT_HDR_LEN_IPV4 66 #define HINIC3_LRO_PKT_HDR_LEN_IPV6 86 #define HINIC3_LRO_PKT_HDR_LEN(cqe) \ - (RQ_CQE_OFFOLAD_TYPE_GET((cqe)->offload_type, IP_TYPE) =3D=3D \ + (RQ_CQE_OFFOLAD_TYPE_GET(le32_to_cpu((cqe)->offload_type), IP_TYPE) =3D= =3D \ HINIC3_RX_IPV6_PKT ? 
HINIC3_LRO_PKT_HDR_LEN_IPV6 : \ HINIC3_LRO_PKT_HDR_LEN_IPV4) =20 @@ -155,8 +155,12 @@ static u32 hinic3_rx_fill_buffers(struct hinic3_rxq *r= xq) =20 err =3D rx_alloc_mapped_page(rxq->page_pool, rx_info, rxq->buf_len); - if (unlikely(err)) + if (unlikely(err)) { + u64_stats_update_begin(&rxq->rxq_stats.syncp); + rxq->rxq_stats.alloc_rx_buf_err++; + u64_stats_update_end(&rxq->rxq_stats.syncp); break; + } =20 dma_addr =3D page_pool_get_dma_addr(rx_info->page) + rx_info->page_offset; @@ -170,6 +174,10 @@ static u32 hinic3_rx_fill_buffers(struct hinic3_rxq *r= xq) rxq->next_to_update << HINIC3_NORMAL_RQ_WQE); rxq->delta -=3D i; rxq->next_to_alloc =3D rxq->next_to_update; + } else if (free_wqebbs =3D=3D rxq->q_depth - 1) { + u64_stats_update_begin(&rxq->rxq_stats.syncp); + rxq->rxq_stats.rx_buf_empty++; + u64_stats_update_end(&rxq->rxq_stats.syncp); } =20 return i; @@ -330,11 +338,23 @@ static void hinic3_rx_csum(struct hinic3_rxq *rxq, u3= 2 offload_type, struct net_device *netdev =3D rxq->netdev; bool l2_tunnel; =20 + if (unlikely(csum_err =3D=3D HINIC3_RX_CSUM_IPSU_OTHER_ERR)) { + u64_stats_update_begin(&rxq->rxq_stats.syncp); + rxq->rxq_stats.other_errors++; + u64_stats_update_end(&rxq->rxq_stats.syncp); + } + if (!(netdev->features & NETIF_F_RXCSUM)) return; =20 if (unlikely(csum_err)) { /* pkt type is recognized by HW, and csum is wrong */ + if (!(csum_err & (HINIC3_RX_CSUM_HW_CHECK_NONE | + HINIC3_RX_CSUM_IPSU_OTHER_ERR))) { + u64_stats_update_begin(&rxq->rxq_stats.syncp); + rxq->rxq_stats.csum_errors++; + u64_stats_update_end(&rxq->rxq_stats.syncp); + } skb->ip_summed =3D CHECKSUM_NONE; return; } @@ -387,8 +407,12 @@ static int recv_one_pkt(struct hinic3_rxq *rxq, struct= hinic3_rq_cqe *rx_cqe, u16 num_lro; =20 skb =3D hinic3_fetch_rx_buffer(rxq, pkt_len); - if (unlikely(!skb)) + if (unlikely(!skb)) { + u64_stats_update_begin(&rxq->rxq_stats.syncp); + rxq->rxq_stats.alloc_skb_err++; + u64_stats_update_end(&rxq->rxq_stats.syncp); return -ENOMEM; + } =20 /* place 
header in linear portion of buffer */ if (skb_is_nonlinear(skb)) @@ -550,11 +574,29 @@ int hinic3_configure_rxqs(struct net_device *netdev, = u16 num_rq, return 0; } =20 +void hinic3_rxq_get_stats(struct hinic3_rxq *rxq, + struct hinic3_rxq_stats *stats) +{ + struct hinic3_rxq_stats *rxq_stats =3D &rxq->rxq_stats; + unsigned int start; + + do { + start =3D u64_stats_fetch_begin(&rxq_stats->syncp); + stats->csum_errors =3D rxq_stats->csum_errors; + stats->other_errors =3D rxq_stats->other_errors; + stats->rx_buf_empty =3D rxq_stats->rx_buf_empty; + stats->alloc_skb_err =3D rxq_stats->alloc_skb_err; + stats->alloc_rx_buf_err =3D rxq_stats->alloc_rx_buf_err; + stats->restore_drop_sge =3D rxq_stats->restore_drop_sge; + } while (u64_stats_fetch_retry(&rxq_stats->syncp, start)); +} + int hinic3_rx_poll(struct hinic3_rxq *rxq, int budget) { struct hinic3_nic_dev *nic_dev =3D netdev_priv(rxq->netdev); u32 sw_ci, status, pkt_len, vlan_len; struct hinic3_rq_cqe *rx_cqe; + u64 rx_bytes =3D 0; u32 num_wqe =3D 0; int nr_pkts =3D 0; u16 num_lro; @@ -574,10 +616,14 @@ int hinic3_rx_poll(struct hinic3_rxq *rxq, int budget) if (recv_one_pkt(rxq, rx_cqe, pkt_len, vlan_len, status)) break; =20 + rx_bytes +=3D pkt_len; nr_pkts++; num_lro =3D RQ_CQE_STATUS_GET(status, NUM_LRO); - if (num_lro) + if (num_lro) { + rx_bytes +=3D (num_lro - 1) * + HINIC3_LRO_PKT_HDR_LEN(rx_cqe); num_wqe +=3D hinic3_get_sge_num(rxq, pkt_len); + } =20 rx_cqe->status =3D 0; =20 @@ -588,5 +634,10 @@ int hinic3_rx_poll(struct hinic3_rxq *rxq, int budget) if (rxq->delta >=3D HINIC3_RX_BUFFER_WRITE) hinic3_rx_fill_buffers(rxq); =20 + u64_stats_update_begin(&rxq->rxq_stats.syncp); + rxq->rxq_stats.packets +=3D (u64)nr_pkts; + rxq->rxq_stats.bytes +=3D rx_bytes; + u64_stats_update_end(&rxq->rxq_stats.syncp); + return nr_pkts; } diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_rx.h b/drivers/net/e= thernet/huawei/hinic3/hinic3_rx.h index 06d1b3299e7c..cd2dcaab6cf7 100644 --- 
a/drivers/net/ethernet/huawei/hinic3/hinic3_rx.h +++ b/drivers/net/ethernet/huawei/hinic3/hinic3_rx.h @@ -8,6 +8,17 @@ #include #include =20 +/* rx cqe checksum err */ +#define HINIC3_RX_CSUM_IP_CSUM_ERR BIT(0) +#define HINIC3_RX_CSUM_TCP_CSUM_ERR BIT(1) +#define HINIC3_RX_CSUM_UDP_CSUM_ERR BIT(2) +#define HINIC3_RX_CSUM_IGMP_CSUM_ERR BIT(3) +#define HINIC3_RX_CSUM_ICMPV4_CSUM_ERR BIT(4) +#define HINIC3_RX_CSUM_ICMPV6_CSUM_ERR BIT(5) +#define HINIC3_RX_CSUM_SCTP_CRC_ERR BIT(6) +#define HINIC3_RX_CSUM_HW_CHECK_NONE BIT(7) +#define HINIC3_RX_CSUM_IPSU_OTHER_ERR BIT(8) + #define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_MASK GENMASK(4, 0) #define RQ_CQE_OFFOLAD_TYPE_IP_TYPE_MASK GENMASK(6, 5) #define RQ_CQE_OFFOLAD_TYPE_TUNNEL_PKT_FORMAT_MASK GENMASK(11, 8) @@ -123,6 +134,9 @@ void hinic3_free_rxqs_res(struct net_device *netdev, u1= 6 num_rq, u32 rq_depth, struct hinic3_dyna_rxq_res *rxqs_res); int hinic3_configure_rxqs(struct net_device *netdev, u16 num_rq, u32 rq_depth, struct hinic3_dyna_rxq_res *rxqs_res); + +void hinic3_rxq_get_stats(struct hinic3_rxq *rxq, + struct hinic3_rxq_stats *stats); int hinic3_rx_poll(struct hinic3_rxq *rxq, int budget); =20 #endif diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_tx.c b/drivers/net/e= thernet/huawei/hinic3/hinic3_tx.c index 9306bf0020ca..58c1f1f40f5c 100644 --- a/drivers/net/ethernet/huawei/hinic3/hinic3_tx.c +++ b/drivers/net/ethernet/huawei/hinic3/hinic3_tx.c @@ -97,8 +97,12 @@ static int hinic3_tx_map_skb(struct net_device *netdev, = struct sk_buff *skb, =20 dma_info[0].dma =3D dma_map_single(&pdev->dev, skb->data, skb_headlen(skb), DMA_TO_DEVICE); - if (dma_mapping_error(&pdev->dev, dma_info[0].dma)) + if (dma_mapping_error(&pdev->dev, dma_info[0].dma)) { + u64_stats_update_begin(&txq->txq_stats.syncp); + txq->txq_stats.map_frag_err++; + u64_stats_update_end(&txq->txq_stats.syncp); return -EFAULT; + } =20 dma_info[0].len =3D skb_headlen(skb); =20 @@ -117,6 +121,9 @@ static int hinic3_tx_map_skb(struct net_device *netdev,= 
struct sk_buff *skb, skb_frag_size(frag), DMA_TO_DEVICE); if (dma_mapping_error(&pdev->dev, dma_info[idx].dma)) { + u64_stats_update_begin(&txq->txq_stats.syncp); + txq->txq_stats.map_frag_err++; + u64_stats_update_end(&txq->txq_stats.syncp); err =3D -EFAULT; goto err_unmap_page; } @@ -260,6 +267,9 @@ static int hinic3_tx_csum(struct hinic3_txq *txq, struc= t hinic3_sq_task *task, if (l4_proto !=3D IPPROTO_UDP || ((struct udphdr *)skb_transport_header(skb))->dest !=3D VXLAN_OFFLOAD_PORT_LE) { + u64_stats_update_begin(&txq->txq_stats.syncp); + txq->txq_stats.unknown_tunnel_pkt++; + u64_stats_update_end(&txq->txq_stats.syncp); /* Unsupported tunnel packet, disable csum offload */ skb_checksum_help(skb); return 0; @@ -433,6 +443,27 @@ static u32 hinic3_tx_offload(struct sk_buff *skb, stru= ct hinic3_sq_task *task, return offload; } =20 +static void hinic3_get_pkt_stats(struct hinic3_txq *txq, struct sk_buff *s= kb) +{ + u32 hdr_len, tx_bytes; + unsigned short pkts; + + if (skb_is_gso(skb)) { + hdr_len =3D (skb_shinfo(skb)->gso_segs - 1) * + skb_tcp_all_headers(skb); + tx_bytes =3D skb->len + hdr_len; + pkts =3D skb_shinfo(skb)->gso_segs; + } else { + tx_bytes =3D skb->len > ETH_ZLEN ? 
skb->len : ETH_ZLEN; + pkts =3D 1; + } + + u64_stats_update_begin(&txq->txq_stats.syncp); + txq->txq_stats.bytes +=3D tx_bytes; + txq->txq_stats.packets +=3D pkts; + u64_stats_update_end(&txq->txq_stats.syncp); +} + static u16 hinic3_get_and_update_sq_owner(struct hinic3_io_queue *sq, u16 curr_pi, u16 wqebb_cnt) { @@ -539,8 +570,12 @@ static netdev_tx_t hinic3_send_one_skb(struct sk_buff = *skb, int err; =20 if (unlikely(skb->len < MIN_SKB_LEN)) { - if (skb_pad(skb, MIN_SKB_LEN - skb->len)) + if (skb_pad(skb, MIN_SKB_LEN - skb->len)) { + u64_stats_update_begin(&txq->txq_stats.syncp); + txq->txq_stats.skb_pad_err++; + u64_stats_update_end(&txq->txq_stats.syncp); goto err_out; + } =20 skb->len =3D MIN_SKB_LEN; } @@ -595,6 +630,7 @@ static netdev_tx_t hinic3_send_one_skb(struct sk_buff *= skb, txq->tx_stop_thrs, txq->tx_start_thrs); =20 + hinic3_get_pkt_stats(txq, skb); hinic3_prepare_sq_ctrl(&wqe_combo, queue_info, num_sge, owner); hinic3_write_db(txq->sq, 0, DB_CFLAG_DP_SQ, hinic3_get_sq_local_pi(txq->sq)); @@ -604,6 +640,10 @@ static netdev_tx_t hinic3_send_one_skb(struct sk_buff = *skb, err_drop_pkt: dev_kfree_skb_any(skb); err_out: + u64_stats_update_begin(&txq->txq_stats.syncp); + txq->txq_stats.dropped++; + u64_stats_update_end(&txq->txq_stats.syncp); + return NETDEV_TX_OK; } =20 @@ -611,12 +651,26 @@ netdev_tx_t hinic3_xmit_frame(struct sk_buff *skb, st= ruct net_device *netdev) { struct hinic3_nic_dev *nic_dev =3D netdev_priv(netdev); u16 q_id =3D skb_get_queue_mapping(skb); + struct hinic3_txq *txq; =20 - if (unlikely(!netif_carrier_ok(netdev))) + if (unlikely(!netif_carrier_ok(netdev))) { + u64_stats_update_begin(&nic_dev->stats.syncp); + nic_dev->stats.tx_carrier_off_drop++; + u64_stats_update_end(&nic_dev->stats.syncp); goto err_drop_pkt; + } + + if (unlikely(q_id >=3D nic_dev->q_params.num_qps)) { + txq =3D &nic_dev->txqs[0]; + u64_stats_update_begin(&txq->txq_stats.syncp); + txq->txq_stats.dropped++; + u64_stats_update_end(&txq->txq_stats.syncp); =20 - 
if (unlikely(q_id >=3D nic_dev->q_params.num_qps)) + u64_stats_update_begin(&nic_dev->stats.syncp); + nic_dev->stats.tx_invalid_qid++; + u64_stats_update_end(&nic_dev->stats.syncp); goto err_drop_pkt; + } =20 return hinic3_send_one_skb(skb, netdev, &nic_dev->txqs[q_id]); =20 @@ -754,6 +808,24 @@ int hinic3_configure_txqs(struct net_device *netdev, u= 16 num_sq, return 0; } =20 +void hinic3_txq_get_stats(struct hinic3_txq *txq, + struct hinic3_txq_stats *stats) +{ + struct hinic3_txq_stats *txq_stats =3D &txq->txq_stats; + unsigned int start; + + do { + start =3D u64_stats_fetch_begin(&txq_stats->syncp); + stats->busy =3D txq_stats->busy; + stats->skb_pad_err =3D txq_stats->skb_pad_err; + stats->frag_len_overflow =3D txq_stats->frag_len_overflow; + stats->offload_cow_skb_err =3D txq_stats->offload_cow_skb_err; + stats->map_frag_err =3D txq_stats->map_frag_err; + stats->unknown_tunnel_pkt =3D txq_stats->unknown_tunnel_pkt; + stats->frag_size_err =3D txq_stats->frag_size_err; + } while (u64_stats_fetch_retry(&txq_stats->syncp, start)); +} + bool hinic3_tx_poll(struct hinic3_txq *txq, int budget) { struct net_device *netdev =3D txq->netdev; diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_tx.h b/drivers/net/e= thernet/huawei/hinic3/hinic3_tx.h index 00194f2a1bcc..0a21c423618f 100644 --- a/drivers/net/ethernet/huawei/hinic3/hinic3_tx.h +++ b/drivers/net/ethernet/huawei/hinic3/hinic3_tx.h @@ -157,6 +157,8 @@ int hinic3_configure_txqs(struct net_device *netdev, u1= 6 num_sq, u32 sq_depth, struct hinic3_dyna_txq_res *txqs_res); =20 netdev_tx_t hinic3_xmit_frame(struct sk_buff *skb, struct net_device *netd= ev); +void hinic3_txq_get_stats(struct hinic3_txq *txq, + struct hinic3_txq_stats *stats); bool hinic3_tx_poll(struct hinic3_txq *txq, int budget); void hinic3_flush_txqs(struct net_device *netdev); =20 --=20 2.43.0 From nobody Wed Apr 1 12:32:17 2026 Received: from canpmsgout07.his.huawei.com (canpmsgout07.his.huawei.com [113.46.200.222]) (using TLSv1.2 with 
cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 209843D890C; Tue, 31 Mar 2026 07:56:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=113.46.200.222 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1774943812; cv=none; b=RX5EIAg/aQE0naEgUXpEfCfAejrDfzmlgQm7XXcXr35qOsbci40zRNMLx+89dBI36TviJxX28UnZmphhKdzybqrpo8nMZSVqCm0c/MIEaL7tacfQUxJ3WVVe0vnB5X+Y9gKijL4hbQt6sTLSReToAoPrtDH0vfpKi5d1WCBYvKE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1774943812; c=relaxed/simple; bh=cna6hxyszkmCJv2oV4ywCnP32140SL1uJrvdVs2lxew=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=XcnhYHbzdZmAZVeK2dZO2Lpr+ZiDvr7APHfl3R8kYm2VRIChZxfzW0Q6u3bVvyjWTrAzlg1ocR8hR3g516leL4bzfsBO0zg+cd2lWk0CzG6jbZwfCY49nFkg7s1pVvJfvk5H9ftI0JDZG7JJSAIa3Bu7qyZuyo0o1dXVYwZmBEM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; dkim=pass (1024-bit key) header.d=huawei.com header.i=@huawei.com header.b=evoA1A3p; arc=none smtp.client-ip=113.46.200.222 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=huawei.com header.i=@huawei.com header.b="evoA1A3p" dkim-signature: v=1; a=rsa-sha256; d=huawei.com; s=dkim; c=relaxed/relaxed; q=dns/txt; h=From; bh=0sJgBkvmlXENL4UZo2O0u5UDW2/sKbbMDz7Ua6C9GGQ=; b=evoA1A3poZWbeWYJSrUTyZSvVymur7TYgzbb5Rv9XuG9rVPncHVXDCS0nkpXIVjxUbXu8NFnP ZYMKkvQumkBatAofHuBKva98GHqLXMS/P/l43rK0Q5+WyHAtoxvaVqQV/1rOz2bjaoxWdWZcG0k Dw8wa2X726VqVsc9zSyBBEw= Received: from mail.maildlp.com (unknown [172.19.163.200]) by canpmsgout07.his.huawei.com (SkyGuard) 
with ESMTPS id 4flKzY20TDzLltx; Tue, 31 Mar 2026 15:50:33 +0800 (CST) Received: from kwepemf100013.china.huawei.com (unknown [7.202.181.12]) by mail.maildlp.com (Postfix) with ESMTPS id A1F1D40563; Tue, 31 Mar 2026 15:56:41 +0800 (CST) Received: from DESKTOP-62GVMTR.china.huawei.com (10.174.189.124) by kwepemf100013.china.huawei.com (7.202.181.12) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.36; Tue, 31 Mar 2026 15:56:40 +0800 From: Fan Gong To: Fan Gong , Zhu Yikai , , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Simon Horman , Andrew Lunn , Ioana Ciornei CC: , , luosifu , Xin Guo , Zhou Shuai , Wu Like , Shi Jing , Zheng Jiezhen , Maxime Chevallier Subject: [PATCH net-next v03 3/6] hinic3: Add ethtool coalesce ops Date: Tue, 31 Mar 2026 15:56:22 +0800 Message-ID: X-Mailer: git-send-email 2.51.0.windows.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: kwepems200001.china.huawei.com (7.221.188.67) To kwepemf100013.china.huawei.com (7.202.181.12) Content-Type: text/plain; charset="utf-8" Implement following ethtool callback function: .get_coalesce .set_coalesce These callbacks allow users to utilize ethtool for detailed RX coalesce configuration and monitoring. 
Co-developed-by: Zhu Yikai
Signed-off-by: Zhu Yikai
Signed-off-by: Fan Gong
---
 .../ethernet/huawei/hinic3/hinic3_ethtool.c   | 233 +++++++++++++++++-
 1 file changed, 231 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c b/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c
index 7fd8ad053c6e..a9599a63696f 100644
--- a/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c
@@ -17,6 +17,11 @@
 #include "hinic3_nic_cfg.h"
 
 #define HINIC3_MGMT_VERSION_MAX_LEN 32
+/* Coalesce time properties in microseconds */
+#define COALESCE_PENDING_LIMIT_UNIT 8
+#define COALESCE_TIMER_CFG_UNIT 5
+#define COALESCE_MAX_PENDING_LIMIT (255 * COALESCE_PENDING_LIMIT_UNIT)
+#define COALESCE_MAX_TIMER_CFG (255 * COALESCE_TIMER_CFG_UNIT)
 
 static void hinic3_get_drvinfo(struct net_device *netdev,
 			       struct ethtool_drvinfo *info)
@@ -986,9 +991,231 @@ static void hinic3_get_pause_stats(struct net_device *netdev,
 	kfree(ps);
 }
 
+static int hinic3_set_queue_coalesce(struct net_device *netdev, u16 q_id,
+				     struct hinic3_intr_coal_info *coal)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	struct hinic3_intr_coal_info *intr_coal;
+	struct hinic3_interrupt_info info = {};
+	int err;
+
+	intr_coal = &nic_dev->intr_coalesce[q_id];
+
+	intr_coal->coalesce_timer_cfg = coal->coalesce_timer_cfg;
+	intr_coal->pending_limit = coal->pending_limit;
+	intr_coal->rx_pending_limit_low = coal->rx_pending_limit_low;
+	intr_coal->rx_pending_limit_high = coal->rx_pending_limit_high;
+
+	if (!test_bit(HINIC3_INTF_UP, &nic_dev->flags) ||
+	    q_id >= nic_dev->q_params.num_qps || nic_dev->adaptive_rx_coal)
+		return 0;
+
+	info.msix_index = nic_dev->q_params.irq_cfg[q_id].msix_entry_idx;
+	info.interrupt_coalesc_set = 1;
+	info.coalesc_timer_cfg = intr_coal->coalesce_timer_cfg;
+	info.pending_limit = intr_coal->pending_limit;
+	info.resend_timer_cfg = intr_coal->resend_timer_cfg;
+	err = hinic3_set_interrupt_cfg(nic_dev->hwdev, info);
+	if (err) {
+		netdev_warn(netdev, "Failed to set queue%u coalesce\n", q_id);
+		return err;
+	}
+
+	return 0;
+}
+
+static int is_coalesce_exceed_limit(struct net_device *netdev,
+				    const struct ethtool_coalesce *coal)
+{
+	const struct {
+		const char *name;
+		u32 value;
+		u32 limit;
+	} coalesce_limits[] = {
+		{"rx_coalesce_usecs",
+		 coal->rx_coalesce_usecs,
+		 COALESCE_MAX_TIMER_CFG},
+		{"rx_max_coalesced_frames",
+		 coal->rx_max_coalesced_frames,
+		 COALESCE_MAX_PENDING_LIMIT},
+		{"rx_max_coalesced_frames_low",
+		 coal->rx_max_coalesced_frames_low,
+		 COALESCE_MAX_PENDING_LIMIT},
+		{"rx_max_coalesced_frames_high",
+		 coal->rx_max_coalesced_frames_high,
+		 COALESCE_MAX_PENDING_LIMIT},
+	};
+
+	for (int i = 0; i < ARRAY_SIZE(coalesce_limits); i++) {
+		if (coalesce_limits[i].value > coalesce_limits[i].limit) {
+			netdev_err(netdev, "%s out of range %d-%d\n",
+				   coalesce_limits[i].name, 0,
+				   coalesce_limits[i].limit);
+			return -EOPNOTSUPP;
+		}
+	}
+
+	return 0;
+}
+
+static int is_coalesce_legal(struct net_device *netdev,
+			     const struct ethtool_coalesce *coal)
+{
+	int err;
+
+	err = is_coalesce_exceed_limit(netdev, coal);
+	if (err)
+		return err;
+
+	if (coal->rx_max_coalesced_frames_low >=
+	    coal->rx_max_coalesced_frames_high &&
+	    coal->rx_max_coalesced_frames_high > 0) {
+		netdev_err(netdev, "invalid coalesce frame high %u, low %u, unit %d\n",
+			   coal->rx_max_coalesced_frames_high,
+			   coal->rx_max_coalesced_frames_low,
+			   COALESCE_PENDING_LIMIT_UNIT);
+		return -EOPNOTSUPP;
+	}
+
+	return 0;
+}
+
+static void check_coalesce_align(struct net_device *netdev,
+				 u32 item, u32 unit, const char *str)
+{
+	if (item % unit)
+		netdev_warn(netdev, "%s in %d units, change to %u\n",
+			    str, unit, item - item % unit);
+}
+
+#define CHECK_COALESCE_ALIGN(member, unit) \
+	check_coalesce_align(netdev, member, unit, #member)
+
+static void check_coalesce_changed(struct net_device *netdev,
+				   u32 item, u32 unit, u32 ori_val,
+				   const char *obj_str, const char *str)
+{
+	if ((item / unit) != ori_val)
+		netdev_dbg(netdev, "Change %s from %d to %u %s\n",
+			   str, ori_val * unit, item - item % unit, obj_str);
+}
+
+#define CHECK_COALESCE_CHANGED(member, unit, ori_val, obj_str) \
+	check_coalesce_changed(netdev, member, unit, ori_val, obj_str, #member)
+
+static int hinic3_set_hw_coal_param(struct net_device *netdev,
+				    struct hinic3_intr_coal_info *intr_coal)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	int err;
+	u16 i;
+
+	for (i = 0; i < nic_dev->max_qps; i++) {
+		err = hinic3_set_queue_coalesce(netdev, i, intr_coal);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static int hinic3_get_coalesce(struct net_device *netdev,
+			       struct ethtool_coalesce *coal,
+			       struct kernel_ethtool_coalesce *kernel_coal,
+			       struct netlink_ext_ack *extack)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	struct hinic3_intr_coal_info *interrupt_info;
+
+	interrupt_info = &nic_dev->intr_coalesce[0];
+
+	/* TX/RX uses the same interrupt.
+	 * So we only declare RX ethtool_coalesce parameters.
+ */ + coal->rx_coalesce_usecs =3D interrupt_info->coalesce_timer_cfg * + COALESCE_TIMER_CFG_UNIT; + coal->rx_max_coalesced_frames =3D interrupt_info->pending_limit * + COALESCE_PENDING_LIMIT_UNIT; + + coal->use_adaptive_rx_coalesce =3D nic_dev->adaptive_rx_coal; + + coal->rx_max_coalesced_frames_high =3D + interrupt_info->rx_pending_limit_high * + COALESCE_PENDING_LIMIT_UNIT; + + coal->rx_max_coalesced_frames_low =3D + interrupt_info->rx_pending_limit_low * + COALESCE_PENDING_LIMIT_UNIT; + + return 0; +} + +static int hinic3_set_coalesce(struct net_device *netdev, + struct ethtool_coalesce *coal, + struct kernel_ethtool_coalesce *kernel_coal, + struct netlink_ext_ack *extack) +{ + struct hinic3_nic_dev *nic_dev =3D netdev_priv(netdev); + struct hinic3_intr_coal_info *ori_intr_coal; + struct hinic3_intr_coal_info intr_coal =3D {}; + char obj_str[32]; + int err; + + err =3D is_coalesce_legal(netdev, coal); + if (err) + return err; + + CHECK_COALESCE_ALIGN(coal->rx_coalesce_usecs, COALESCE_TIMER_CFG_UNIT); + CHECK_COALESCE_ALIGN(coal->rx_max_coalesced_frames, + COALESCE_PENDING_LIMIT_UNIT); + CHECK_COALESCE_ALIGN(coal->rx_max_coalesced_frames_high, + COALESCE_PENDING_LIMIT_UNIT); + CHECK_COALESCE_ALIGN(coal->rx_max_coalesced_frames_low, + COALESCE_PENDING_LIMIT_UNIT); + + ori_intr_coal =3D &nic_dev->intr_coalesce[0]; + snprintf(obj_str, sizeof(obj_str), "for netdev"); + + CHECK_COALESCE_CHANGED(coal->rx_coalesce_usecs, COALESCE_TIMER_CFG_UNIT, + ori_intr_coal->coalesce_timer_cfg, obj_str); + CHECK_COALESCE_CHANGED(coal->rx_max_coalesced_frames, + COALESCE_PENDING_LIMIT_UNIT, + ori_intr_coal->pending_limit, obj_str); + CHECK_COALESCE_CHANGED(coal->rx_max_coalesced_frames_high, + COALESCE_PENDING_LIMIT_UNIT, + ori_intr_coal->rx_pending_limit_high, obj_str); + CHECK_COALESCE_CHANGED(coal->rx_max_coalesced_frames_low, + COALESCE_PENDING_LIMIT_UNIT, + ori_intr_coal->rx_pending_limit_low, obj_str); + + intr_coal.coalesce_timer_cfg =3D + (u8)(coal->rx_coalesce_usecs / 
COALESCE_TIMER_CFG_UNIT); + intr_coal.pending_limit =3D (u8)(coal->rx_max_coalesced_frames / + COALESCE_PENDING_LIMIT_UNIT); + + nic_dev->adaptive_rx_coal =3D coal->use_adaptive_rx_coalesce; + + intr_coal.rx_pending_limit_high =3D + (u8)(coal->rx_max_coalesced_frames_high / + COALESCE_PENDING_LIMIT_UNIT); + + intr_coal.rx_pending_limit_low =3D + (u8)(coal->rx_max_coalesced_frames_low / + COALESCE_PENDING_LIMIT_UNIT); + + /* coalesce timer or pending set to zero will disable coalesce */ + if (!nic_dev->adaptive_rx_coal && + (!intr_coal.coalesce_timer_cfg || !intr_coal.pending_limit)) + netdev_warn(netdev, "Coalesce will be disabled\n"); + + return hinic3_set_hw_coal_param(netdev, &intr_coal); +} + static const struct ethtool_ops hinic3_ethtool_ops =3D { - .supported_coalesce_params =3D ETHTOOL_COALESCE_USECS | - ETHTOOL_COALESCE_PKT_RATE_RX_USECS, + .supported_coalesce_params =3D ETHTOOL_COALESCE_RX_USECS | + ETHTOOL_COALESCE_RX_MAX_FRAMES | + ETHTOOL_COALESCE_USE_ADAPTIVE_RX | + ETHTOOL_COALESCE_RX_MAX_FRAMES_LOW | + ETHTOOL_COALESCE_RX_MAX_FRAMES_HIGH, .get_link_ksettings =3D hinic3_get_link_ksettings, .get_drvinfo =3D hinic3_get_drvinfo, .get_msglevel =3D hinic3_get_msglevel, @@ -1004,6 +1231,8 @@ static const struct ethtool_ops hinic3_ethtool_ops = =3D { .get_eth_ctrl_stats =3D hinic3_get_eth_ctrl_stats, .get_rmon_stats =3D hinic3_get_rmon_stats, .get_pause_stats =3D hinic3_get_pause_stats, + .get_coalesce =3D hinic3_get_coalesce, + .set_coalesce =3D hinic3_set_coalesce, }; =20 void hinic3_set_ethtool_ops(struct net_device *netdev) --=20 2.43.0 From nobody Wed Apr 1 12:32:17 2026 Received: from canpmsgout08.his.huawei.com (canpmsgout08.his.huawei.com [113.46.200.223]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1F2453D524B; Tue, 31 Mar 2026 07:56:44 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=113.46.200.223 
From: Fan Gong
Subject: [PATCH net-next v03 4/6] hinic3: Add ethtool rss ops
Date: Tue, 31 Mar 2026 15:56:23 +0800

Implement the following ethtool callback functions:

.get_rxnfc
.set_rxnfc
.get_channels
.set_channels
.get_rxfh_indir_size
.get_rxfh_key_size
.get_rxfh
.set_rxfh

These callbacks allow users to configure and monitor detailed RSS
parameters through ethtool.
Co-developed-by: Zhu Yikai
Signed-off-by: Zhu Yikai
Signed-off-by: Fan Gong
---
 .../ethernet/huawei/hinic3/hinic3_ethtool.c   |   9 +
 .../huawei/hinic3/hinic3_mgmt_interface.h     |   2 +
 .../net/ethernet/huawei/hinic3/hinic3_rss.c   | 487 +++++++++++++++++-
 .../net/ethernet/huawei/hinic3/hinic3_rss.h   |  19 +
 4 files changed, 515 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c b/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c
index a9599a63696f..c29ed438dd27 100644
--- a/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c
@@ -15,6 +15,7 @@
 #include "hinic3_hw_comm.h"
 #include "hinic3_nic_dev.h"
 #include "hinic3_nic_cfg.h"
+#include "hinic3_rss.h"
 
 #define HINIC3_MGMT_VERSION_MAX_LEN 32
 /* Coalesce time properties in microseconds */
@@ -1233,6 +1234,14 @@ static const struct ethtool_ops hinic3_ethtool_ops = {
 	.get_pause_stats = hinic3_get_pause_stats,
 	.get_coalesce = hinic3_get_coalesce,
 	.set_coalesce = hinic3_set_coalesce,
+	.get_rxnfc = hinic3_get_rxnfc,
+	.set_rxnfc = hinic3_set_rxnfc,
+	.get_channels = hinic3_get_channels,
+	.set_channels = hinic3_set_channels,
+	.get_rxfh_indir_size = hinic3_get_rxfh_indir_size,
+	.get_rxfh_key_size = hinic3_get_rxfh_key_size,
+	.get_rxfh = hinic3_get_rxfh,
+	.set_rxfh = hinic3_set_rxfh,
 };
 
 void hinic3_set_ethtool_ops(struct net_device *netdev)
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h b/drivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h
index 76c691f82703..3c1263ff99ff 100644
--- a/drivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h
@@ -282,6 +282,7 @@ enum l2nic_cmd {
 	L2NIC_CMD_SET_VLAN_FILTER_EN = 26,
 	L2NIC_CMD_SET_RX_VLAN_OFFLOAD = 27,
 	L2NIC_CMD_CFG_RSS = 60,
+	L2NIC_CMD_GET_RSS_CTX_TBL = 62,
 	L2NIC_CMD_CFG_RSS_HASH_KEY = 63,
 	L2NIC_CMD_CFG_RSS_HASH_ENGINE = 64,
 	L2NIC_CMD_SET_RSS_CTX_TBL = 65,
@@ -301,6 +302,7 @@ enum l2nic_ucode_cmd {
 	L2NIC_UCODE_CMD_MODIFY_QUEUE_CTX = 0,
 	L2NIC_UCODE_CMD_CLEAN_QUEUE_CTX = 1,
 	L2NIC_UCODE_CMD_SET_RSS_INDIR_TBL = 4,
+	L2NIC_UCODE_CMD_GET_RSS_INDIR_TBL = 6,
 };
 
 /* hilink mac group command */
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_rss.c b/drivers/net/ethernet/huawei/hinic3/hinic3_rss.c
index 25db74d8c7dd..1c8aea9d8887 100644
--- a/drivers/net/ethernet/huawei/hinic3/hinic3_rss.c
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_rss.c
@@ -155,7 +155,7 @@ static int hinic3_set_rss_type(struct hinic3_hwdev *hwdev,
 				       L2NIC_CMD_SET_RSS_CTX_TBL, &msg_params);
 
 	if (ctx_tbl.msg_head.status == MGMT_STATUS_CMD_UNSUPPORTED) {
-		return MGMT_STATUS_CMD_UNSUPPORTED;
+		return -EOPNOTSUPP;
 	} else if (err || ctx_tbl.msg_head.status) {
 		dev_err(hwdev->dev, "mgmt Failed to set rss context offload, err: %d, status: 0x%x\n",
 			err, ctx_tbl.msg_head.status);
@@ -165,6 +165,39 @@ static int hinic3_set_rss_type(struct hinic3_hwdev *hwdev,
 	return 0;
 }
 
+static int hinic3_get_rss_type(struct hinic3_hwdev *hwdev,
+			       struct hinic3_rss_type *rss_type)
+{
+	struct l2nic_cmd_rss_ctx_tbl ctx_tbl = {};
+	struct mgmt_msg_params msg_params = {};
+	int err;
+
+	ctx_tbl.func_id = hinic3_global_func_id(hwdev);
+
+	mgmt_msg_params_init_default(&msg_params, &ctx_tbl, sizeof(ctx_tbl));
+
+	err = hinic3_send_mbox_to_mgmt(hwdev, MGMT_MOD_L2NIC,
+				       L2NIC_CMD_GET_RSS_CTX_TBL,
+				       &msg_params);
+	if (err || ctx_tbl.msg_head.status) {
+		dev_err(hwdev->dev, "Failed to get hash type, err: %d, status: 0x%x\n",
+			err, ctx_tbl.msg_head.status);
+		return -EINVAL;
+	}
+
+	rss_type->ipv4 = L2NIC_RSS_TYPE_GET(ctx_tbl.context, IPV4);
+	rss_type->ipv6 = L2NIC_RSS_TYPE_GET(ctx_tbl.context, IPV6);
+	rss_type->ipv6_ext = L2NIC_RSS_TYPE_GET(ctx_tbl.context, IPV6_EXT);
+	rss_type->tcp_ipv4 = L2NIC_RSS_TYPE_GET(ctx_tbl.context, TCP_IPV4);
+	rss_type->tcp_ipv6 = L2NIC_RSS_TYPE_GET(ctx_tbl.context, TCP_IPV6);
+	rss_type->tcp_ipv6_ext = L2NIC_RSS_TYPE_GET(ctx_tbl.context,
+						    TCP_IPV6_EXT);
+	rss_type->udp_ipv4 = L2NIC_RSS_TYPE_GET(ctx_tbl.context, UDP_IPV4);
+	rss_type->udp_ipv6 = L2NIC_RSS_TYPE_GET(ctx_tbl.context, UDP_IPV6);
+
+	return 0;
+}
+
 static int hinic3_rss_cfg_hash_type(struct hinic3_hwdev *hwdev, u8 opcode,
 				    enum hinic3_rss_hash_type *type)
 {
@@ -264,7 +297,8 @@ static int hinic3_set_hw_rss_parameters(struct net_device *netdev, u8 rss_en)
 	if (err)
 		return err;
 
-	hinic3_fillout_indir_tbl(netdev, nic_dev->rss_indir);
+	if (!netif_is_rxfh_configured(netdev))
+		hinic3_fillout_indir_tbl(netdev, nic_dev->rss_indir);
 
 	err = hinic3_config_rss_hw_resource(netdev, nic_dev->rss_indir);
 	if (err)
@@ -334,3 +368,452 @@ void hinic3_try_to_enable_rss(struct net_device *netdev)
 	clear_bit(HINIC3_RSS_ENABLE, &nic_dev->flags);
 	nic_dev->q_params.num_qps = nic_dev->max_qps;
 }
+
+static int hinic3_set_l4_rss_hash_ops(const struct ethtool_rxnfc *cmd,
+				      struct hinic3_rss_type *rss_type)
+{
+	u8 rss_l4_en;
+
+	switch (cmd->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
+	case 0:
+		rss_l4_en = 0;
+		break;
+	case (RXH_L4_B_0_1 | RXH_L4_B_2_3):
+		rss_l4_en = 1;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (cmd->flow_type) {
+	case TCP_V4_FLOW:
+		rss_type->tcp_ipv4 = rss_l4_en;
+		break;
+	case TCP_V6_FLOW:
+		rss_type->tcp_ipv6 = rss_l4_en;
+		break;
+	case UDP_V4_FLOW:
+		rss_type->udp_ipv4 = rss_l4_en;
+		break;
+	case UDP_V6_FLOW:
+		rss_type->udp_ipv6 = rss_l4_en;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int hinic3_update_rss_hash_opts(struct net_device *netdev,
+				       struct ethtool_rxnfc *cmd,
+				       struct hinic3_rss_type *rss_type)
+{
+	int err;
+
+	switch (cmd->flow_type) {
+	case TCP_V4_FLOW:
+	case TCP_V6_FLOW:
+	case UDP_V4_FLOW:
+	case UDP_V6_FLOW:
+		err = hinic3_set_l4_rss_hash_ops(cmd, rss_type);
+		if (err)
+			return err;
+
+		break;
+	case IPV4_FLOW:
+		rss_type->ipv4 = 1;
+		break;
+	case IPV6_FLOW:
+		rss_type->ipv6 = 1;
+		break;
+	default:
+		netdev_err(netdev, "Unsupported flow type\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int hinic3_set_rss_hash_opts(struct net_device *netdev,
+				    struct ethtool_rxnfc *cmd)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	struct hinic3_rss_type *rss_type;
+	int err;
+
+	rss_type = &nic_dev->rss_type;
+
+	if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+		cmd->data = 0;
+		netdev_err(netdev, "RSS is disable, not support to set flow-hash\n");
+		return -EOPNOTSUPP;
+	}
+
+	/* RSS only supports hashing of IP addresses and L4 ports */
+	if (cmd->data & ~(RXH_IP_SRC | RXH_IP_DST |
+			  RXH_L4_B_0_1 | RXH_L4_B_2_3))
+		return -EINVAL;
+
+	/* Both IP addresses must be part of the hash tuple */
+	if (!(cmd->data & RXH_IP_SRC) || !(cmd->data & RXH_IP_DST))
+		return -EINVAL;
+
+	err = hinic3_get_rss_type(nic_dev->hwdev, rss_type);
+	if (err) {
+		netdev_err(netdev, "Failed to get rss type\n");
+		return err;
+	}
+
+	err = hinic3_update_rss_hash_opts(netdev, cmd, rss_type);
+	if (err)
+		return err;
+
+	err = hinic3_set_rss_type(nic_dev->hwdev, *rss_type);
+	if (err) {
+		netdev_err(netdev, "Failed to set rss type\n");
+		return err;
+	}
+
+	return 0;
+}
+
+static void convert_rss_type(u8 rss_opt, struct ethtool_rxnfc *cmd)
+{
+	if (rss_opt)
+		cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+}
+
+static int hinic3_convert_rss_type(struct net_device *netdev,
+				   struct hinic3_rss_type *rss_type,
+				   struct ethtool_rxnfc *cmd)
+{
+	cmd->data = RXH_IP_SRC | RXH_IP_DST;
+	switch (cmd->flow_type) {
+	case TCP_V4_FLOW:
+		convert_rss_type(rss_type->tcp_ipv4, cmd);
+		break;
+	case TCP_V6_FLOW:
+		convert_rss_type(rss_type->tcp_ipv6, cmd);
+		break;
+	case UDP_V4_FLOW:
+		convert_rss_type(rss_type->udp_ipv4, cmd);
+		break;
+	case UDP_V6_FLOW:
+		convert_rss_type(rss_type->udp_ipv6, cmd);
+		break;
+	case IPV4_FLOW:
+	case IPV6_FLOW:
+		break;
+	default:
+		netdev_err(netdev, "Unsupported flow type\n");
+		cmd->data = 0;
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int hinic3_get_rss_hash_opts(struct net_device *netdev,
+				    struct ethtool_rxnfc *cmd)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	struct hinic3_rss_type rss_type;
+	int err;
+
+	cmd->data = 0;
+
+	if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags))
+		return 0;
+
+	err = hinic3_get_rss_type(nic_dev->hwdev, &rss_type);
+	if (err) {
+		netdev_err(netdev, "Failed to get rss type\n");
+		return err;
+	}
+
+	return hinic3_convert_rss_type(netdev, &rss_type, cmd);
+}
+
+int hinic3_get_rxnfc(struct net_device *netdev,
+		     struct ethtool_rxnfc *cmd, u32 *rule_locs)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	int err = 0;
+
+	switch (cmd->cmd) {
+	case ETHTOOL_GRXRINGS:
+		cmd->data = nic_dev->q_params.num_qps;
+		break;
+	case ETHTOOL_GRXFH:
+		err = hinic3_get_rss_hash_opts(netdev, cmd);
+		break;
+	default:
+		err = -EOPNOTSUPP;
+		break;
+	}
+
+	return err;
+}
+
+int hinic3_set_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd)
+{
+	int err;
+
+	switch (cmd->cmd) {
+	case ETHTOOL_SRXFH:
+		err = hinic3_set_rss_hash_opts(netdev, cmd);
+		break;
+	default:
+		err = -EOPNOTSUPP;
+		break;
+	}
+
+	return err;
+}
+
+static u16 hinic3_max_channels(struct net_device *netdev)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	u8 tcs = netdev_get_num_tc(netdev);
+
+	return tcs ? nic_dev->max_qps / tcs : nic_dev->max_qps;
+}
+
+static u16 hinic3_curr_channels(struct net_device *netdev)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+	if (netif_running(netdev))
+		return nic_dev->q_params.num_qps ?
+		       nic_dev->q_params.num_qps : 1;
+	else
+		return min_t(u16, hinic3_max_channels(netdev),
+			     nic_dev->q_params.num_qps);
+}
+
+void hinic3_get_channels(struct net_device *netdev,
+			 struct ethtool_channels *channels)
+{
+	channels->max_rx = 0;
+	channels->max_tx = 0;
+	channels->max_other = 0;
+	/* report maximum channels */
+	channels->max_combined = hinic3_max_channels(netdev);
+	channels->rx_count = 0;
+	channels->tx_count = 0;
+	channels->other_count = 0;
+	/* report flow director queues as maximum channels */
+	channels->combined_count = hinic3_curr_channels(netdev);
+}
+
+static int
+hinic3_validate_channel_parameter(struct net_device *netdev,
+				  const struct ethtool_channels *channels)
+{
+	u16 max_channel = hinic3_max_channels(netdev);
+	unsigned int count = channels->combined_count;
+
+	if (!count) {
+		netdev_err(netdev, "Unsupported combined_count=0\n");
+		return -EINVAL;
+	}
+
+	if (channels->tx_count || channels->rx_count || channels->other_count) {
+		netdev_err(netdev, "Setting rx/tx/other count not supported\n");
+		return -EINVAL;
+	}
+
+	if (count > max_channel) {
+		netdev_err(netdev, "Combined count %u exceed limit %u\n", count,
+			   max_channel);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int hinic3_set_channels(struct net_device *netdev,
+			struct ethtool_channels *channels)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	unsigned int count = channels->combined_count;
+	struct hinic3_dyna_txrxq_params q_params;
+	int err;
+
+	if (hinic3_validate_channel_parameter(netdev, channels))
+		return -EINVAL;
+
+	if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+		netdev_err(netdev, "This function doesn't support RSS, only support 1 queue pair\n");
+		return -EOPNOTSUPP;
+	}
+
+	netdev_dbg(netdev, "Set max combined queue number from %u to %u\n",
+		   nic_dev->q_params.num_qps, count);
+
+	if (netif_running(netdev)) {
+		q_params = nic_dev->q_params;
+		q_params.num_qps = (u16)count;
+		q_params.txqs_res = NULL;
+		q_params.rxqs_res = NULL;
+		q_params.irq_cfg = NULL;
+
+		err = hinic3_change_channel_settings(netdev, &q_params);
+		if (err) {
+			netdev_err(netdev, "Failed to change channel settings\n");
+			return err;
+		}
+	} else {
+		nic_dev->q_params.num_qps = (u16)count;
+	}
+
+	return 0;
+}
+
+u32 hinic3_get_rxfh_indir_size(struct net_device *netdev)
+{
+	return L2NIC_RSS_INDIR_SIZE;
+}
+
+static int hinic3_set_rss_rxfh(struct net_device *netdev,
+			       const u32 *indir, u8 *key)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	int err;
+	u32 i;
+
+	if (indir) {
+		for (i = 0; i < L2NIC_RSS_INDIR_SIZE; i++)
+			nic_dev->rss_indir[i] = (u16)indir[i];
+
+		err = hinic3_rss_set_indir_tbl(nic_dev->hwdev,
+					       nic_dev->rss_indir);
+		if (err) {
+			netdev_err(netdev, "Failed to set rss indir table\n");
+			return err;
+		}
+	}
+
+	if (key) {
+		err = hinic3_rss_set_hash_key(nic_dev->hwdev, key);
+		if (err) {
+			netdev_err(netdev, "Failed to set rss key\n");
+			return err;
+		}
+
+		memcpy(nic_dev->rss_hkey, key, L2NIC_RSS_KEY_SIZE);
+	}
+
+	return 0;
+}
+
+u32 hinic3_get_rxfh_key_size(struct net_device *netdev)
+{
+	return L2NIC_RSS_KEY_SIZE;
+}
+
+static int hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev,
+				    u32 *indir_table)
+{
+	struct hinic3_cmd_buf_pair pair;
+	__le16 *indir_tbl = NULL;
+	int err, i;
+
+	err = hinic3_cmd_buf_pair_init(hwdev, &pair);
+	if (err) {
+		dev_err(hwdev->dev, "Failed to allocate cmd_buf.\n");
+		return err;
+	}
+
+	err = hinic3_cmdq_detail_resp(hwdev, MGMT_MOD_L2NIC,
+				      L2NIC_UCODE_CMD_GET_RSS_INDIR_TBL,
+				      pair.in, pair.out, NULL);
+	if (err) {
+		dev_err(hwdev->dev, "Failed to get rss indir table\n");
+		goto err_get_indir_tbl;
+	}
+
+	indir_tbl = (__le16 *)pair.out->buf;
+	for (i = 0; i < L2NIC_RSS_INDIR_SIZE; i++)
+		indir_table[i] = le16_to_cpu(*(indir_tbl + i));
+
+err_get_indir_tbl:
+	hinic3_cmd_buf_pair_uninit(hwdev, &pair);
+
+	return err;
+}
+
+int hinic3_get_rxfh(struct net_device *netdev,
+		    struct ethtool_rxfh_param *rxfh)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	int err = 0;
+
+	if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+		netdev_err(netdev, "Rss is disabled\n");
+		return -EOPNOTSUPP;
+	}
+
+	rxfh->hfunc =
+		nic_dev->rss_hash_type == HINIC3_RSS_HASH_ENGINE_TYPE_XOR ?
+		ETH_RSS_HASH_XOR : ETH_RSS_HASH_TOP;
+
+	if (rxfh->indir) {
+		err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, rxfh->indir);
+		if (err)
+			return err;
+	}
+
+	if (rxfh->key)
+		memcpy(rxfh->key, nic_dev->rss_hkey, L2NIC_RSS_KEY_SIZE);
+
+	return err;
+}
+
+static int hinic3_update_hash_func_type(struct net_device *netdev, u8 hfunc)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	enum hinic3_rss_hash_type new_rss_hash_type;
+
+	switch (hfunc) {
+	case ETH_RSS_HASH_NO_CHANGE:
+		return 0;
+	case ETH_RSS_HASH_XOR:
+		new_rss_hash_type = HINIC3_RSS_HASH_ENGINE_TYPE_XOR;
+		break;
+	case ETH_RSS_HASH_TOP:
+		new_rss_hash_type = HINIC3_RSS_HASH_ENGINE_TYPE_TOEP;
+		break;
+	default:
+		netdev_err(netdev, "Unsupported hash func %u\n", hfunc);
+		return -EOPNOTSUPP;
+	}
+
+	if (new_rss_hash_type == nic_dev->rss_hash_type)
+		return 0;
+
+	nic_dev->rss_hash_type = new_rss_hash_type;
+	return hinic3_rss_set_hash_type(nic_dev->hwdev, nic_dev->rss_hash_type);
+}
+
+int hinic3_set_rxfh(struct net_device *netdev,
+		    struct ethtool_rxfh_param *rxfh,
+		    struct netlink_ext_ack *extack)
+{
+	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+	int err;
+
+	if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+		netdev_err(netdev, "Not support to set rss parameters when rss is disable\n");
+		return -EOPNOTSUPP;
+	}
+
+	err = hinic3_update_hash_func_type(netdev, rxfh->hfunc);
+	if (err)
+		return err;
+
+	err = hinic3_set_rss_rxfh(netdev, rxfh->indir, rxfh->key);
+
+	return err;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_rss.h b/drivers/net/ethernet/huawei/hinic3/hinic3_rss.h
index 78d82c2aca06..9f1b77780cd4 100644
--- a/drivers/net/ethernet/huawei/hinic3/hinic3_rss.h
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_rss.h
@@ -5,10 +5,29 @@
 #define _HINIC3_RSS_H_
 
 #include
+#include
 
 int hinic3_rss_init(struct net_device *netdev);
 void hinic3_rss_uninit(struct net_device *netdev);
 void hinic3_try_to_enable_rss(struct net_device *netdev);
 void hinic3_clear_rss_config(struct net_device *netdev);
 
+int hinic3_get_rxnfc(struct net_device *netdev,
+		     struct ethtool_rxnfc *cmd, u32 *rule_locs);
+int hinic3_set_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd);
+
+void hinic3_get_channels(struct net_device *netdev,
+			 struct ethtool_channels *channels);
+int hinic3_set_channels(struct net_device *netdev,
+			struct ethtool_channels *channels);
+
+u32 hinic3_get_rxfh_indir_size(struct net_device *netdev);
+u32 hinic3_get_rxfh_key_size(struct net_device *netdev);
+
+int hinic3_get_rxfh(struct net_device *netdev,
+		    struct ethtool_rxfh_param *rxfh);
+int hinic3_set_rxfh(struct net_device *netdev,
+		    struct ethtool_rxfh_param *rxfh,
+		    struct netlink_ext_ack *extack);
+
 #endif
-- 
2.43.0
From: Fan Gong
Subject: [PATCH net-next v03 5/6] hinic3: Configure netdev->watchdog_timeo to set nic tx timeout
Date: Tue, 31 Mar 2026 15:56:24 +0800

Configure the netdev watchdog timeout to improve transmission
reliability.

Co-developed-by: Zhu Yikai
Signed-off-by: Zhu Yikai
Signed-off-by: Fan Gong
---
 drivers/net/ethernet/huawei/hinic3/hinic3_main.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_main.c b/drivers/net/ethernet/huawei/hinic3/hinic3_main.c
index 60834f8dffcd..4742c881b7a6 100644
--- a/drivers/net/ethernet/huawei/hinic3/hinic3_main.c
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_main.c
@@ -33,6 +33,8 @@
 #define HINIC3_RX_PENDING_LIMIT_LOW 2
 #define HINIC3_RX_PENDING_LIMIT_HIGH 8
 
+#define HINIC3_WATCHDOG_TIMEOUT 5
+
 static void init_intr_coal_param(struct net_device *netdev)
 {
 	struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
@@ -247,6 +249,8 @@ static void hinic3_assign_netdev_ops(struct net_device *netdev)
 {
 	hinic3_set_netdev_ops(netdev);
 	hinic3_set_ethtool_ops(netdev);
+
+	netdev->watchdog_timeo = HINIC3_WATCHDOG_TIMEOUT * HZ;
 }
 
 static void netdev_feature_init(struct net_device *netdev)
-- 
2.43.0
From: Fan Gong
Subject: [PATCH net-next v03 6/6] hinic3: Remove unneeded coalesce parameters
Date: Tue, 31 Mar 2026 15:56:25 +0800

Remove unneeded coalesce parameters from IRQ handling.

Co-developed-by: Zhu Yikai
Signed-off-by: Zhu Yikai
Signed-off-by: Fan Gong
---
 drivers/net/ethernet/huawei/hinic3/hinic3_irq.c | 6 +-----
 drivers/net/ethernet/huawei/hinic3/hinic3_rx.h  | 3 ---
 2 files changed, 1 insertion(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_irq.c b/drivers/net/ethernet/huawei/hinic3/hinic3_irq.c
index d3b3927b5408..42464c007174 100644
--- a/drivers/net/ethernet/huawei/hinic3/hinic3_irq.c
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_irq.c
@@ -156,13 +156,9 @@ static int hinic3_set_interrupt_moder(struct net_device *netdev, u16 q_id,
 	spin_unlock_irqrestore(&nic_dev->channel_res_lock, flags);
 
 	err = hinic3_set_interrupt_cfg(nic_dev->hwdev, info);
-	if (err) {
+	if (err)
 		netdev_err(netdev, "Failed to modify moderation for Queue: %u\n", q_id);
-	} else {
-		nic_dev->rxqs[q_id].last_coalesc_timer_cfg = coalesc_timer_cfg;
-		nic_dev->rxqs[q_id].last_pending_limit = pending_limit;
-	}
 
 	return err;
 }
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_rx.h b/drivers/net/ethernet/huawei/hinic3/hinic3_rx.h
index cd2dcaab6cf7..a64a51d766c5 100644
--- a/drivers/net/ethernet/huawei/hinic3/hinic3_rx.h
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_rx.h
@@ -112,9 +112,6 @@ struct hinic3_rxq {
 	dma_addr_t cqe_start_paddr;
 
 	struct dim dim;
-
-	u8 last_coalesc_timer_cfg;
-	u8 last_pending_limit;
 } ____cacheline_aligned;
 
 struct hinic3_dyna_rxq_res {
-- 
2.43.0