From nobody Wed Apr 1 20:37:31 2026
From: Théo Lebrun
Date: Wed, 01 Apr 2026 18:39:04 +0200
Subject: [PATCH net-next 01/11] net: macb: unify device pointer naming convention
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Message-Id: <20260401-macb-context-v1-1-9590c5ab7272@bootlin.com>
References: <20260401-macb-context-v1-0-9590c5ab7272@bootlin.com>
In-Reply-To: <20260401-macb-context-v1-0-9590c5ab7272@bootlin.com>
To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, "David S. Miller",
 Eric Dumazet, Jakub Kicinski, Paolo Abeni, Richard Cochran, Russell King
Cc: Paolo Valerio, Conor Dooley, Nicolai Buchwitz, Vladimir Kondratiev,
 Gregory CLEMENT, Benoît Monin, Tawfik Bayouk, Thomas Petazzoni,
 Maxime Chevallier, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 Théo Lebrun
X-Mailer: b4 0.15.0

Here are all device pointer variable permutations inside MACB:

	struct device *dev;
	struct net_device *dev;
	struct net_device *ndev;
	struct net_device *netdev;
	struct pci_dev *pdev;             // inside macb_pci.c
	struct platform_device *pdev;
	struct platform_device *plat_dev; // inside macb_pci.c

Unify to this convention:

	struct device *dev;
	struct net_device *netdev;
	struct pci_dev *pci;
	struct platform_device *pdev;

Ensure nothing slipped through using ctags tooling:

	⟩ ctags -o - --kinds-c='{local}{member}{parameter}' \
		--fields='{typeref}' drivers/net/ethernet/cadence/* | \
		awk -F"\t" '$NF~/struct:.*(device|dev) / {print $NF, $1}' | \
		sort -u
	typeref:struct:device * dev
	typeref:struct:in_device * idev       // ignored
	typeref:struct:net_device * netdev
	typeref:struct:pci_dev * pci
	typeref:struct:phy_device * phy       // ignored
	typeref:struct:phy_device * phydev    // ignored
	typeref:struct:platform_device * pdev

Signed-off-by: Théo Lebrun
---
 drivers/net/ethernet/cadence/macb.h      |  14 +-
 drivers/net/ethernet/cadence/macb_main.c | 628 ++++++++++++++++---------------
 drivers/net/ethernet/cadence/macb_pci.c  |  46 +-
 drivers/net/ethernet/cadence/macb_ptp.c  |  18 +-
 4 files changed, 354 insertions(+), 352 deletions(-)

diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cadence/macb.h
index 16527dbab875..d6dd1d356e12 100644
--- a/drivers/net/ethernet/cadence/macb.h
+++ b/drivers/net/ethernet/cadence/macb.h
@@ -1207,8 +1207,8 @@ struct macb_or_gem_ops {
 
 /* MACB-PTP interface: adapt to platform needs.
  */
 struct macb_ptp_info {
-	void (*ptp_init)(struct net_device *ndev);
-	void (*ptp_remove)(struct net_device *ndev);
+	void (*ptp_init)(struct net_device *netdev);
+	void (*ptp_remove)(struct net_device *netdev);
 	s32 (*get_ptp_max_adj)(void);
 	unsigned int (*get_tsu_rate)(struct macb *bp);
 	int (*get_ts_info)(struct net_device *dev,
@@ -1326,7 +1326,7 @@ struct macb {
 	struct clk *tx_clk;
 	struct clk *rx_clk;
 	struct clk *tsu_clk;
-	struct net_device *dev;
+	struct net_device *netdev;
 	/* Protects hw_stats and ethtool_stats */
 	spinlock_t stats_lock;
 	union {
@@ -1406,8 +1406,8 @@ enum macb_bd_control {
 	TSTAMP_ALL_FRAMES,
 };
 
-void gem_ptp_init(struct net_device *ndev);
-void gem_ptp_remove(struct net_device *ndev);
+void gem_ptp_init(struct net_device *netdev);
+void gem_ptp_remove(struct net_device *netdev);
 void gem_ptp_txstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc);
 void gem_ptp_rxstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc);
 static inline void gem_ptp_do_txstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc)
@@ -1432,8 +1432,8 @@ int gem_set_hwtst(struct net_device *dev,
 		  struct kernel_hwtstamp_config *tstamp_config,
 		  struct netlink_ext_ack *extack);
 #else
-static inline void gem_ptp_init(struct net_device *ndev) { }
-static inline void gem_ptp_remove(struct net_device *ndev) { }
+static inline void gem_ptp_init(struct net_device *netdev) { }
+static inline void gem_ptp_remove(struct net_device *netdev) { }
 
 static inline void gem_ptp_do_txstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc) { }
 static inline void gem_ptp_do_rxstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc) { }
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 7a48ebe0741f..00bd662b5e46 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -248,9 +248,9 @@ static void macb_set_hwaddr(struct macb *bp)
 	u32 bottom;
 	u16 top;
 
-	bottom = get_unaligned_le32(bp->dev->dev_addr);
+	bottom = get_unaligned_le32(bp->netdev->dev_addr);
 	macb_or_gem_writel(bp, SA1B, bottom);
-	top = get_unaligned_le16(bp->dev->dev_addr + 4);
+	top = get_unaligned_le16(bp->netdev->dev_addr + 4);
 	macb_or_gem_writel(bp, SA1T, top);
 
 	if (gem_has_ptp(bp)) {
@@ -287,13 +287,13 @@ static void macb_get_hwaddr(struct macb *bp)
 		addr[5] = (top >> 8) & 0xff;
 
 		if (is_valid_ether_addr(addr)) {
-			eth_hw_addr_set(bp->dev, addr);
+			eth_hw_addr_set(bp->netdev, addr);
 			return;
 		}
 	}
 
 	dev_info(&bp->pdev->dev, "invalid hw address, using random\n");
-	eth_hw_addr_random(bp->dev);
+	eth_hw_addr_random(bp->netdev);
 }
 
 static int macb_mdio_wait_for_idle(struct macb *bp)
@@ -505,12 +505,12 @@ static void macb_set_tx_clk(struct macb *bp, int speed)
 	ferr = abs(rate_rounded - rate);
 	ferr = DIV_ROUND_UP(ferr, rate / 100000);
 	if (ferr > 5)
-		netdev_warn(bp->dev,
+		netdev_warn(bp->netdev,
 			    "unable to generate target frequency: %ld Hz\n", rate);
 
 	if (clk_set_rate(bp->tx_clk, rate_rounded))
-		netdev_err(bp->dev, "adjusting tx_clk failed.\n");
+		netdev_err(bp->netdev, "adjusting tx_clk failed.\n");
 }
 
 static void macb_usx_pcs_link_up(struct phylink_pcs *pcs, unsigned int neg_mode,
@@ -693,8 +693,8 @@ static void macb_tx_lpi_wake(struct macb *bp)
 
 static void macb_mac_disable_tx_lpi(struct phylink_config *config)
 {
-	struct net_device *ndev = to_net_dev(config->dev);
-	struct macb *bp = netdev_priv(ndev);
+	struct net_device *netdev = to_net_dev(config->dev);
+	struct macb *bp = netdev_priv(netdev);
 	unsigned long flags;
 
 	cancel_delayed_work_sync(&bp->tx_lpi_work);
@@ -708,8 +708,8 @@ static void macb_mac_disable_tx_lpi(struct phylink_config *config)
 static int macb_mac_enable_tx_lpi(struct phylink_config *config, u32 timer,
 				  bool tx_clk_stop)
 {
-	struct net_device *ndev = to_net_dev(config->dev);
-	struct macb *bp = netdev_priv(ndev);
+	struct net_device *netdev = to_net_dev(config->dev);
+	struct macb *bp = netdev_priv(netdev);
 	unsigned long flags;
 
 	spin_lock_irqsave(&bp->lock, flags);
@@ -728,8 +728,8 @@ static int macb_mac_enable_tx_lpi(struct phylink_config *config, u32 timer,
 static void macb_mac_config(struct phylink_config *config, unsigned int mode,
 			    const struct phylink_link_state *state)
 {
-	struct net_device *ndev = to_net_dev(config->dev);
-	struct macb *bp = netdev_priv(ndev);
+	struct net_device *netdev = to_net_dev(config->dev);
+	struct macb *bp = netdev_priv(netdev);
 	unsigned long flags;
 	u32 old_ctrl, ctrl;
 	u32 old_ncr, ncr;
@@ -770,8 +770,8 @@ static void macb_mac_config(struct phylink_config *config, unsigned int mode,
 static void macb_mac_link_down(struct phylink_config *config, unsigned int mode,
 			       phy_interface_t interface)
 {
-	struct net_device *ndev = to_net_dev(config->dev);
-	struct macb *bp = netdev_priv(ndev);
+	struct net_device *netdev = to_net_dev(config->dev);
+	struct macb *bp = netdev_priv(netdev);
 	struct macb_queue *queue;
 	unsigned int q;
 	u32 ctrl;
@@ -785,7 +785,7 @@ static void macb_mac_link_down(struct phylink_config *config, unsigned int mode,
 	ctrl = macb_readl(bp, NCR) & ~(MACB_BIT(RE) | MACB_BIT(TE));
 	macb_writel(bp, NCR, ctrl);
 
-	netif_tx_stop_all_queues(ndev);
+	netif_tx_stop_all_queues(netdev);
 }
 
 /* Use juggling algorithm to left rotate tx ring and tx skb array */
@@ -885,8 +885,8 @@ static void macb_mac_link_up(struct phylink_config *config,
 			     int speed, int duplex,
 			     bool tx_pause, bool rx_pause)
 {
-	struct net_device *ndev = to_net_dev(config->dev);
-	struct macb *bp = netdev_priv(ndev);
+	struct net_device *netdev = to_net_dev(config->dev);
+	struct macb *bp = netdev_priv(netdev);
 	struct macb_queue *queue;
 	unsigned long flags;
 	unsigned int q;
@@ -942,14 +942,14 @@ static void macb_mac_link_up(struct phylink_config *config,
 
 	macb_writel(bp, NCR, ctrl | MACB_BIT(RE) | MACB_BIT(TE));
 
-	netif_tx_wake_all_queues(ndev);
+	netif_tx_wake_all_queues(netdev);
 }
 
 static struct phylink_pcs *macb_mac_select_pcs(struct phylink_config *config,
 					       phy_interface_t interface)
 {
-	struct net_device *ndev = to_net_dev(config->dev);
-	struct macb *bp = netdev_priv(ndev);
+	struct net_device *netdev = to_net_dev(config->dev);
+	struct macb *bp = netdev_priv(netdev);
 
 	if (interface == PHY_INTERFACE_MODE_10GBASER)
 		return &bp->phylink_usx_pcs;
@@ -978,7 +978,7 @@ static bool macb_phy_handle_exists(struct device_node *dn)
 static int macb_phylink_connect(struct macb *bp)
 {
 	struct device_node *dn = bp->pdev->dev.of_node;
-	struct net_device *dev = bp->dev;
+	struct net_device *netdev = bp->netdev;
 	struct phy_device *phydev;
 	int ret;
 
@@ -988,7 +988,7 @@ static int macb_phylink_connect(struct macb *bp)
 	if (!dn || (ret && !macb_phy_handle_exists(dn))) {
 		phydev = phy_find_first(bp->mii_bus);
 		if (!phydev) {
-			netdev_err(dev, "no PHY found\n");
+			netdev_err(netdev, "no PHY found\n");
 			return -ENXIO;
 		}
 
@@ -997,7 +997,7 @@ static int macb_phylink_connect(struct macb *bp)
 	}
 
 	if (ret) {
-		netdev_err(dev, "Could not attach PHY (%d)\n", ret);
+		netdev_err(netdev, "Could not attach PHY (%d)\n", ret);
 		return ret;
 	}
 
@@ -1009,21 +1009,21 @@ static int macb_phylink_connect(struct macb *bp)
 static void macb_get_pcs_fixed_state(struct phylink_config *config,
 				     struct phylink_link_state *state)
 {
-	struct net_device *ndev = to_net_dev(config->dev);
-	struct macb *bp = netdev_priv(ndev);
+	struct net_device *netdev = to_net_dev(config->dev);
+	struct macb *bp = netdev_priv(netdev);
 
 	state->link = (macb_readl(bp, NSR) & MACB_BIT(NSR_LINK)) != 0;
 }
 
 /* based on au1000_eth.c*/
-static int macb_mii_probe(struct net_device *dev)
+static int macb_mii_probe(struct net_device *netdev)
 {
-	struct macb *bp = netdev_priv(dev);
+	struct macb *bp = netdev_priv(netdev);
 
 	bp->phylink_sgmii_pcs.ops = &macb_phylink_pcs_ops;
 	bp->phylink_usx_pcs.ops = &macb_phylink_usx_pcs_ops;
 
-	bp->phylink_config.dev = &dev->dev;
+	bp->phylink_config.dev = &netdev->dev;
 	bp->phylink_config.type = PHYLINK_NETDEV;
 	bp->phylink_config.mac_managed_pm = true;
 
@@ -1082,7 +1082,7 @@ static int macb_mii_probe(struct net_device *dev)
 	bp->phylink = phylink_create(&bp->phylink_config, bp->pdev->dev.fwnode,
 				     bp->phy_interface, &macb_phylink_ops);
 	if (IS_ERR(bp->phylink)) {
-		netdev_err(dev, "Could not create a phylink instance (%ld)\n",
+		netdev_err(netdev, "Could not create a phylink instance (%ld)\n",
 			   PTR_ERR(bp->phylink));
 		return PTR_ERR(bp->phylink);
 	}
@@ -1129,7 +1129,7 @@ static int macb_mii_init(struct macb *bp)
 	 */
 	mdio_np = of_get_child_by_name(np, "mdio");
 	if (!mdio_np && of_phy_is_fixed_link(np))
-		return macb_mii_probe(bp->dev);
+		return macb_mii_probe(bp->netdev);
 
 	/* Enable management port */
 	macb_writel(bp, NCR, MACB_BIT(MPE));
@@ -1150,13 +1150,13 @@ static int macb_mii_init(struct macb *bp)
 	bp->mii_bus->priv = bp;
 	bp->mii_bus->parent = &bp->pdev->dev;
 
-	dev_set_drvdata(&bp->dev->dev, bp->mii_bus);
+	dev_set_drvdata(&bp->netdev->dev, bp->mii_bus);
 
 	err = macb_mdiobus_register(bp, mdio_np);
 	if (err)
 		goto err_out_free_mdiobus;
 
-	err = macb_mii_probe(bp->dev);
+	err = macb_mii_probe(bp->netdev);
 	if (err)
 		goto err_out_unregister_bus;
 
@@ -1264,7 +1264,7 @@ static void macb_tx_error_task(struct work_struct *work)
 	unsigned long flags;
 
 	queue_index = queue - bp->queues;
-	netdev_vdbg(bp->dev, "macb_tx_error_task: q = %u, t = %u, h = %u\n",
+	netdev_vdbg(bp->netdev, "macb_tx_error_task: q = %u, t = %u, h = %u\n",
 		    queue_index, queue->tx_tail, queue->tx_head);
 
 	/* Prevent the queue NAPI TX poll from running, as it calls
@@ -1277,14 +1277,14 @@ static void macb_tx_error_task(struct work_struct *work)
 	spin_lock_irqsave(&bp->lock, flags);
 
 	/* Make sure nobody is trying to queue up new packets */
-	netif_tx_stop_all_queues(bp->dev);
+	netif_tx_stop_all_queues(bp->netdev);
 
 	/* Stop transmission now
 	 * (in case we have just queued new packets)
 	 * macb/gem must be halted to write TBQP register
 	 */
 	if (macb_halt_tx(bp)) {
-		netdev_err(bp->dev, "BUG: halt tx timed out\n");
+		netdev_err(bp->netdev, "BUG: halt tx timed out\n");
 		macb_writel(bp, NCR, macb_readl(bp, NCR) & (~MACB_BIT(TE)));
 		halt_timeout = true;
 	}
@@ -1313,13 +1313,13 @@ static void macb_tx_error_task(struct work_struct *work)
 		 * since it's the only one written back by the hardware
 		 */
 		if (!(ctrl & MACB_BIT(TX_BUF_EXHAUSTED))) {
-			netdev_vdbg(bp->dev, "txerr skb %u (data %p) TX complete\n",
+			netdev_vdbg(bp->netdev, "txerr skb %u (data %p) TX complete\n",
 				    macb_tx_ring_wrap(bp, tail), skb->data);
-			bp->dev->stats.tx_packets++;
+			bp->netdev->stats.tx_packets++;
 			queue->stats.tx_packets++;
 			packets++;
-			bp->dev->stats.tx_bytes += skb->len;
+			bp->netdev->stats.tx_bytes += skb->len;
 			queue->stats.tx_bytes += skb->len;
 			bytes += skb->len;
 		}
@@ -1329,7 +1329,7 @@ static void macb_tx_error_task(struct work_struct *work)
 			 * those. Statistics are updated by hardware.
 			 */
 			if (ctrl & MACB_BIT(TX_BUF_EXHAUSTED))
-				netdev_err(bp->dev,
+				netdev_err(bp->netdev,
 					   "BUG: TX buffers exhausted mid-frame\n");
 
 			desc->ctrl = ctrl | MACB_BIT(TX_USED);
@@ -1338,7 +1338,7 @@ static void macb_tx_error_task(struct work_struct *work)
 		macb_tx_unmap(bp, tx_skb, 0);
 	}
 
-	netdev_tx_completed_queue(netdev_get_tx_queue(bp->dev, queue_index),
+	netdev_tx_completed_queue(netdev_get_tx_queue(bp->netdev, queue_index),
 				  packets, bytes);
 
 	/* Set end of TX queue */
@@ -1363,7 +1363,7 @@ static void macb_tx_error_task(struct work_struct *work)
 	macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TE));
 
 	/* Now we are ready to start transmission again */
-	netif_tx_start_all_queues(bp->dev);
+	netif_tx_start_all_queues(bp->netdev);
 	macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART));
 
 	spin_unlock_irqrestore(&bp->lock, flags);
@@ -1442,12 +1442,12 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
 			    !ptp_one_step_sync(skb))
 				gem_ptp_do_txstamp(bp, skb, desc);
 
-			netdev_vdbg(bp->dev, "skb %u (data %p) TX complete\n",
+			netdev_vdbg(bp->netdev, "skb %u (data %p) TX complete\n",
 				    macb_tx_ring_wrap(bp, tail), skb->data);
-			bp->dev->stats.tx_packets++;
+			bp->netdev->stats.tx_packets++;
 			queue->stats.tx_packets++;
-			bp->dev->stats.tx_bytes += skb->len;
+			bp->netdev->stats.tx_bytes += skb->len;
 			queue->stats.tx_bytes += skb->len;
 			packets++;
 			bytes += skb->len;
@@ -1465,14 +1465,14 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
 		}
 	}
 
-	netdev_tx_completed_queue(netdev_get_tx_queue(bp->dev, queue_index),
+	netdev_tx_completed_queue(netdev_get_tx_queue(bp->netdev, queue_index),
 				  packets, bytes);
 
 	queue->tx_tail = tail;
-	if (__netif_subqueue_stopped(bp->dev, queue_index) &&
+	if (__netif_subqueue_stopped(bp->netdev, queue_index) &&
 	    CIRC_CNT(queue->tx_head, queue->tx_tail,
 		     bp->tx_ring_size) <= MACB_TX_WAKEUP_THRESH(bp))
-		netif_wake_subqueue(bp->dev, queue_index);
+		netif_wake_subqueue(bp->netdev, queue_index);
 	spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
 
 	if (packets)
@@ -1500,9 +1500,9 @@ static void gem_rx_refill(struct macb_queue *queue)
 
 		if (!queue->rx_skbuff[entry]) {
 			/* allocate sk_buff for this free entry in ring */
-			skb = netdev_alloc_skb(bp->dev, bp->rx_buffer_size);
+			skb = netdev_alloc_skb(bp->netdev, bp->rx_buffer_size);
 			if (unlikely(!skb)) {
-				netdev_err(bp->dev,
+				netdev_err(bp->netdev,
 					   "Unable to allocate sk_buff\n");
 				break;
 			}
@@ -1551,8 +1551,8 @@ static void gem_rx_refill(struct macb_queue *queue)
 	/* Make descriptor updates visible to hardware */
 	wmb();
 
-	netdev_vdbg(bp->dev, "rx ring: queue: %p, prepared head %d, tail %d\n",
-		    queue, queue->rx_prepared_head, queue->rx_tail);
+	netdev_vdbg(bp->netdev, "rx ring: queue: %p, prepared head %d, tail %d\n",
+		    queue, queue->rx_prepared_head, queue->rx_tail);
 }
 
 /* Mark DMA descriptors from begin up to and not including end as unused */
@@ -1612,17 +1612,17 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
 		count++;
 
 		if (!(ctrl & MACB_BIT(RX_SOF) && ctrl & MACB_BIT(RX_EOF))) {
-			netdev_err(bp->dev,
+			netdev_err(bp->netdev,
 				   "not whole frame pointed by descriptor\n");
-			bp->dev->stats.rx_dropped++;
+			bp->netdev->stats.rx_dropped++;
 			queue->stats.rx_dropped++;
 			break;
 		}
 		skb = queue->rx_skbuff[entry];
 		if (unlikely(!skb)) {
-			netdev_err(bp->dev,
+			netdev_err(bp->netdev,
 				   "inconsistent Rx descriptor chain\n");
-			bp->dev->stats.rx_dropped++;
+			bp->netdev->stats.rx_dropped++;
 			queue->stats.rx_dropped++;
 			break;
 		}
@@ -1630,28 +1630,28 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
 		queue->rx_skbuff[entry] = NULL;
 		len = ctrl & bp->rx_frm_len_mask;
 
-		netdev_vdbg(bp->dev, "gem_rx %u (len %u)\n", entry, len);
+		netdev_vdbg(bp->netdev, "gem_rx %u (len %u)\n", entry, len);
 
 		skb_put(skb, len);
 		dma_unmap_single(&bp->pdev->dev, addr,
 				 bp->rx_buffer_size, DMA_FROM_DEVICE);
 
-		skb->protocol = eth_type_trans(skb, bp->dev);
+		skb->protocol = eth_type_trans(skb, bp->netdev);
 		skb_checksum_none_assert(skb);
-		if (bp->dev->features & NETIF_F_RXCSUM &&
-		    !(bp->dev->flags & IFF_PROMISC) &&
+		if (bp->netdev->features & NETIF_F_RXCSUM &&
+		    !(bp->netdev->flags & IFF_PROMISC) &&
 		    GEM_BFEXT(RX_CSUM, ctrl) & GEM_RX_CSUM_CHECKED_MASK)
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 
-		bp->dev->stats.rx_packets++;
+		bp->netdev->stats.rx_packets++;
 		queue->stats.rx_packets++;
-		bp->dev->stats.rx_bytes += skb->len;
+		bp->netdev->stats.rx_bytes += skb->len;
 		queue->stats.rx_bytes += skb->len;
 
 		gem_ptp_do_rxstamp(bp, skb, desc);
 
 #if defined(DEBUG) && defined(VERBOSE_DEBUG)
-		netdev_vdbg(bp->dev, "received skb of length %u, csum: %08x\n",
+		netdev_vdbg(bp->netdev, "received skb of length %u, csum: %08x\n",
 			    skb->len, skb->csum);
 		print_hex_dump(KERN_DEBUG, " mac: ", DUMP_PREFIX_ADDRESS, 16, 1,
 			       skb_mac_header(skb), 16, true);
@@ -1680,9 +1680,9 @@ static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
 	desc = macb_rx_desc(queue, last_frag);
 	len = desc->ctrl & bp->rx_frm_len_mask;
 
-	netdev_vdbg(bp->dev, "macb_rx_frame frags %u - %u (len %u)\n",
-		    macb_rx_ring_wrap(bp, first_frag),
-		    macb_rx_ring_wrap(bp, last_frag), len);
+	netdev_vdbg(bp->netdev, "macb_rx_frame frags %u - %u (len %u)\n",
+		    macb_rx_ring_wrap(bp, first_frag),
+		    macb_rx_ring_wrap(bp, last_frag), len);
 
 	/* The ethernet header starts NET_IP_ALIGN bytes into the
 	 * first buffer. Since the header is 14 bytes, this makes the
@@ -1692,9 +1692,9 @@ static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
 	 * the two padding bytes into the skb so that we avoid hitting
 	 * the slowpath in memcpy(), and pull them off afterwards.
 	 */
-	skb = netdev_alloc_skb(bp->dev, len + NET_IP_ALIGN);
+	skb = netdev_alloc_skb(bp->netdev, len + NET_IP_ALIGN);
 	if (!skb) {
-		bp->dev->stats.rx_dropped++;
+		bp->netdev->stats.rx_dropped++;
 		for (frag = first_frag; ; frag++) {
 			desc = macb_rx_desc(queue, frag);
 			desc->addr &= ~MACB_BIT(RX_USED);
@@ -1738,11 +1738,11 @@ static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
 	wmb();
 
 	__skb_pull(skb, NET_IP_ALIGN);
-	skb->protocol = eth_type_trans(skb, bp->dev);
+	skb->protocol = eth_type_trans(skb, bp->netdev);
 
-	bp->dev->stats.rx_packets++;
-	bp->dev->stats.rx_bytes += skb->len;
-	netdev_vdbg(bp->dev, "received skb of length %u, csum: %08x\n",
+	bp->netdev->stats.rx_packets++;
+	bp->netdev->stats.rx_bytes += skb->len;
+	netdev_vdbg(bp->netdev, "received skb of length %u, csum: %08x\n",
 		    skb->len, skb->csum);
 	napi_gro_receive(napi, skb);
 
@@ -1822,7 +1822,7 @@ static int macb_rx(struct macb_queue *queue, struct napi_struct *napi,
 		unsigned long flags;
 		u32 ctrl;
 
-		netdev_err(bp->dev, "RX queue corruption: reset it\n");
+		netdev_err(bp->netdev, "RX queue corruption: reset it\n");
 
 		spin_lock_irqsave(&bp->lock, flags);
 
@@ -1869,7 +1869,7 @@ static int macb_rx_poll(struct napi_struct *napi, int budget)
 
 	work_done = bp->macbgem_ops.mog_rx(queue, napi, budget);
 
-	netdev_vdbg(bp->dev, "RX poll: queue = %u, work_done = %d, budget = %d\n",
+	netdev_vdbg(bp->netdev, "RX poll: queue = %u, work_done = %d, budget = %d\n",
 		    (unsigned int)(queue - bp->queues), work_done, budget);
 
 	if (work_done < budget && napi_complete_done(napi, work_done)) {
@@ -1889,7 +1889,7 @@ static int macb_rx_poll(struct napi_struct *napi, int budget)
 		queue_writel(queue, IDR, bp->rx_intr_mask);
 		if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
 			queue_writel(queue, ISR, MACB_BIT(RCOMP));
-		netdev_vdbg(bp->dev, "poll: packets pending, reschedule\n");
+		netdev_vdbg(bp->netdev, "poll: packets pending, reschedule\n");
 		napi_schedule(napi);
 	}
 }
@@ -1953,11 +1953,11 @@ static int macb_tx_poll(struct napi_struct *napi, int budget)
 	rmb(); // ensure txubr_pending is up to date
 	if (queue->txubr_pending) {
 		queue->txubr_pending = false;
-		netdev_vdbg(bp->dev, "poll: tx restart\n");
+		netdev_vdbg(bp->netdev, "poll: tx restart\n");
 		macb_tx_restart(queue);
 	}
 
-	netdev_vdbg(bp->dev, "TX poll: queue = %u, work_done = %d, budget = %d\n",
+	netdev_vdbg(bp->netdev, "TX poll: queue = %u, work_done = %d, budget = %d\n",
 		    (unsigned int)(queue - bp->queues), work_done, budget);
 
 	if (work_done < budget && napi_complete_done(napi, work_done)) {
@@ -1977,7 +1977,7 @@ static int macb_tx_poll(struct napi_struct *napi, int budget)
 		queue_writel(queue, IDR, MACB_BIT(TCOMP));
 		if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
 			queue_writel(queue, ISR, MACB_BIT(TCOMP));
-		netdev_vdbg(bp->dev, "TX poll: packets pending, reschedule\n");
+		netdev_vdbg(bp->netdev, "TX poll: packets pending, reschedule\n");
 		napi_schedule(napi);
 	}
 }
@@ -1988,7 +1988,7 @@ static int macb_tx_poll(struct napi_struct *napi, int budget)
 static void macb_hresp_error_task(struct work_struct *work)
 {
 	struct macb *bp = from_work(bp, work, hresp_err_bh_work);
-	struct net_device *dev = bp->dev;
+	struct net_device *netdev = bp->netdev;
 	struct macb_queue *queue;
 	unsigned int q;
 	u32 ctrl;
@@ -2002,8 +2002,8 @@ static void macb_hresp_error_task(struct work_struct *work)
 	ctrl &= ~(MACB_BIT(RE) | MACB_BIT(TE));
 	macb_writel(bp, NCR, ctrl);
 
-	netif_tx_stop_all_queues(dev);
-	netif_carrier_off(dev);
+	netif_tx_stop_all_queues(netdev);
+	netif_carrier_off(netdev);
 
 	bp->macbgem_ops.mog_init_rings(bp);
 
@@ -2020,8 +2020,8 @@ static void macb_hresp_error_task(struct work_struct *work)
 	ctrl |= MACB_BIT(RE) | MACB_BIT(TE);
 	macb_writel(bp, NCR, ctrl);
 
-	netif_carrier_on(dev);
-	netif_tx_start_all_queues(dev);
+	netif_carrier_on(netdev);
+	netif_tx_start_all_queues(netdev);
 }
 
 static irqreturn_t macb_wol_interrupt(int irq, void *dev_id)
@@ -2040,7 +2040,7 @@ static irqreturn_t macb_wol_interrupt(int irq, void *dev_id)
 	if (status & MACB_BIT(WOL)) {
 		queue_writel(queue, IDR, MACB_BIT(WOL));
 		macb_writel(bp, WOL, 0);
-		netdev_vdbg(bp->dev, "MACB WoL: queue = %u, isr = 0x%08lx\n",
+		netdev_vdbg(bp->netdev, "MACB WoL: queue = %u, isr = 0x%08lx\n",
 			    (unsigned int)(queue - bp->queues),
 			    (unsigned long)status);
 		if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
@@ -2069,7 +2069,7 @@ static irqreturn_t gem_wol_interrupt(int irq, void *dev_id)
 	if (status & GEM_BIT(WOL)) {
 		queue_writel(queue, IDR, GEM_BIT(WOL));
 		gem_writel(bp, WOL, 0);
-		netdev_vdbg(bp->dev, "GEM WoL: queue = %u, isr = 0x%08lx\n",
+		netdev_vdbg(bp->netdev, "GEM WoL: queue = %u, isr = 0x%08lx\n",
 			    (unsigned int)(queue - bp->queues),
 			    (unsigned long)status);
 		if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
@@ -2086,7 +2086,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
 {
 	struct macb_queue *queue = dev_id;
 	struct macb *bp = queue->bp;
-	struct net_device *dev = bp->dev;
+	struct net_device *netdev = bp->netdev;
 	u32 status, ctrl;
 
 	status = queue_readl(queue, ISR);
@@ -2098,14 +2098,14 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
 
 	while (status) {
 		/* close possible race with dev_close */
-		if (unlikely(!netif_running(dev))) {
+		if (unlikely(!netif_running(netdev))) {
 			queue_writel(queue, IDR, -1);
 			if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
 				queue_writel(queue, ISR, -1);
 			break;
 		}
 
-		netdev_vdbg(bp->dev, "queue = %u, isr = 0x%08lx\n",
+		netdev_vdbg(bp->netdev, "queue = %u, isr = 0x%08lx\n",
 			    (unsigned int)(queue - bp->queues),
 			    (unsigned long)status);
 
@@ -2121,7 +2121,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
 				queue_writel(queue, ISR, MACB_BIT(RCOMP));
 
 			if (napi_schedule_prep(&queue->napi_rx)) {
-				netdev_vdbg(bp->dev, "scheduling RX softirq\n");
+				netdev_vdbg(bp->netdev, "scheduling RX softirq\n");
 				__napi_schedule(&queue->napi_rx);
 			}
 		}
@@ -2139,7 +2139,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
 			}
 
 			if (napi_schedule_prep(&queue->napi_tx)) {
-				netdev_vdbg(bp->dev, "scheduling TX softirq\n");
+				netdev_vdbg(bp->netdev, "scheduling TX softirq\n");
 				__napi_schedule(&queue->napi_tx);
 			}
 		}
@@ -2190,7 +2190,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
 
 		if (status & MACB_BIT(HRESP)) {
 			queue_work(system_bh_wq, &bp->hresp_err_bh_work);
-			netdev_err(dev, "DMA bus error: HRESP not OK\n");
+			netdev_err(netdev, "DMA bus error: HRESP not OK\n");
 
 			if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
 				queue_writel(queue, ISR, MACB_BIT(HRESP));
@@ -2207,9 +2207,9 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
 /* Polling receive - used by netconsole and other diagnostic tools
  * to allow network i/o with interrupts disabled.
  */
-static void macb_poll_controller(struct net_device *dev)
+static void macb_poll_controller(struct net_device *netdev)
 {
-	struct macb *bp = netdev_priv(dev);
+	struct macb *bp = netdev_priv(netdev);
 	struct macb_queue *queue;
 	unsigned long flags;
 	unsigned int q;
@@ -2303,7 +2303,7 @@ static unsigned int macb_tx_map(struct macb *bp,
 
 	/* Should never happen */
 	if (unlikely(!tx_skb)) {
-		netdev_err(bp->dev, "BUG! empty skb!\n");
+		netdev_err(bp->netdev, "BUG! empty skb!\n");
 		return 0;
 	}
 
@@ -2354,7 +2354,7 @@ static unsigned int macb_tx_map(struct macb *bp,
 		if (i == queue->tx_head) {
 			ctrl |= MACB_BF(TX_LSO, lso_ctrl);
 			ctrl |= MACB_BF(TX_TCP_SEQ_SRC, seq_ctrl);
-			if ((bp->dev->features & NETIF_F_HW_CSUM) &&
+			if ((bp->netdev->features & NETIF_F_HW_CSUM) &&
 			    skb->ip_summed != CHECKSUM_PARTIAL && !lso_ctrl &&
 			    !ptp_one_step_sync(skb))
 				ctrl |= MACB_BIT(TX_NOCRC);
@@ -2378,7 +2378,7 @@ static unsigned int macb_tx_map(struct macb *bp,
 	return 0;
 
 dma_error:
-	netdev_err(bp->dev, "TX DMA map failed\n");
+	netdev_err(bp->netdev, "TX DMA map failed\n");
 
 	for (i = queue->tx_head; i != tx_head; i++) {
 		tx_skb = macb_tx_skb(queue, i);
@@ -2390,7 +2390,7 @@ static unsigned int macb_tx_map(struct macb *bp,
 }
 
 static netdev_features_t macb_features_check(struct sk_buff *skb,
-					     struct net_device *dev,
+					     struct net_device *netdev,
 					     netdev_features_t features)
 {
 	unsigned int nr_frags, f;
@@ -2442,7 +2442,7 @@ static inline int macb_clear_csum(struct sk_buff *skb)
 	return 0;
 }
 
-static int macb_pad_and_fcs(struct sk_buff **skb, struct net_device *ndev)
+static int macb_pad_and_fcs(struct sk_buff **skb, struct net_device *netdev)
 {
 	bool cloned = skb_cloned(*skb) || skb_header_cloned(*skb) ||
 		      skb_is_nonlinear(*skb);
@@ -2451,7 +2451,7 @@ static int macb_pad_and_fcs(struct sk_buff **skb, struct net_device *ndev)
 	struct sk_buff *nskb;
 	u32 fcs;
 
-	if (!(ndev->features & NETIF_F_HW_CSUM) ||
+	if (!(netdev->features & NETIF_F_HW_CSUM) ||
 	    !((*skb)->ip_summed != CHECKSUM_PARTIAL) ||
 	    skb_shinfo(*skb)->gso_size || ptp_one_step_sync(*skb))
 		return 0;
@@ -2493,10 +2493,11 @@ static int macb_pad_and_fcs(struct sk_buff **skb, struct net_device *ndev)
 	return 0;
 }
 
-static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
+static netdev_tx_t macb_start_xmit(struct sk_buff *skb,
+				   struct net_device *netdev)
 {
 	u16 queue_index = skb_get_queue_mapping(skb);
-	struct macb *bp = netdev_priv(dev);
+	struct macb *bp = netdev_priv(netdev);
 	struct macb_queue *queue = &bp->queues[queue_index];
 	unsigned int desc_cnt, nr_frags, frag_size, f;
 	unsigned int hdrlen;
@@ -2509,7 +2510,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		return ret;
 	}
 
-	if (macb_pad_and_fcs(&skb, dev)) {
+	if (macb_pad_and_fcs(&skb, netdev)) {
 		dev_kfree_skb_any(skb);
 		return ret;
 	}
@@ -2528,7 +2529,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		else
 			hdrlen = skb_tcp_all_headers(skb);
 		if (skb_headlen(skb) < hdrlen) {
-			netdev_err(bp->dev, "Error - LSO headers fragmented!!!\n");
+			netdev_err(bp->netdev, "Error - LSO headers fragmented!!!\n");
 			/* if this is required, would need to copy to single buffer */
 			return NETDEV_TX_BUSY;
 		}
@@ -2536,7 +2537,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		hdrlen = umin(skb_headlen(skb), bp->max_tx_length);
 
 #if defined(DEBUG) && defined(VERBOSE_DEBUG)
	netdev_vdbg(bp->netdev,
 		    "start_xmit: queue %hu len %u head %p data %p tail %p end %p\n",
 		    queue_index, skb->len, skb->head, skb->data,
 		    skb_tail_pointer(skb), skb_end_pointer(skb));
@@ -2564,8 +2565,8 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	/* This is a hard error, log it.
*/ if (CIRC_SPACE(queue->tx_head, queue->tx_tail, bp->tx_ring_size) < desc_cnt) { - netif_stop_subqueue(dev, queue_index); - netdev_dbg(bp->dev, "tx_head =3D %u, tx_tail =3D %u\n", + netif_stop_subqueue(netdev, queue_index); + netdev_dbg(netdev, "tx_head =3D %u, tx_tail =3D %u\n", queue->tx_head, queue->tx_tail); ret =3D NETDEV_TX_BUSY; goto unlock; @@ -2580,7 +2581,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *sk= b, struct net_device *dev) /* Make newly initialized descriptor visible to hardware */ wmb(); skb_tx_timestamp(skb); - netdev_tx_sent_queue(netdev_get_tx_queue(bp->dev, queue_index), + netdev_tx_sent_queue(netdev_get_tx_queue(bp->netdev, queue_index), skb->len); =20 spin_lock(&bp->lock); @@ -2589,7 +2590,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *sk= b, struct net_device *dev) spin_unlock(&bp->lock); =20 if (CIRC_SPACE(queue->tx_head, queue->tx_tail, bp->tx_ring_size) < 1) - netif_stop_subqueue(dev, queue_index); + netif_stop_subqueue(netdev, queue_index); =20 unlock: spin_unlock_irqrestore(&queue->tx_ptr_lock, flags); @@ -2605,7 +2606,7 @@ static void macb_init_rx_buffer_size(struct macb *bp,= size_t size) bp->rx_buffer_size =3D MIN(size, RX_BUFFER_MAX); =20 if (bp->rx_buffer_size % RX_BUFFER_MULTIPLE) { - netdev_dbg(bp->dev, + netdev_dbg(bp->netdev, "RX buffer must be multiple of %d bytes, expanding\n", RX_BUFFER_MULTIPLE); bp->rx_buffer_size =3D @@ -2613,8 +2614,8 @@ static void macb_init_rx_buffer_size(struct macb *bp,= size_t size) } } =20 - netdev_dbg(bp->dev, "mtu [%u] rx_buffer_size [%zu]\n", - bp->dev->mtu, bp->rx_buffer_size); + netdev_dbg(bp->netdev, "mtu [%u] rx_buffer_size [%zu]\n", + bp->netdev->mtu, bp->rx_buffer_size); } =20 static void gem_free_rx_buffers(struct macb *bp) @@ -2713,7 +2714,7 @@ static int gem_alloc_rx_buffers(struct macb *bp) if (!queue->rx_skbuff) return -ENOMEM; else - netdev_dbg(bp->dev, + netdev_dbg(bp->netdev, "Allocated %d RX struct sk_buff entries at %p\n", bp->rx_ring_size, 
queue->rx_skbuff); } @@ -2731,7 +2732,7 @@ static int macb_alloc_rx_buffers(struct macb *bp) if (!queue->rx_buffers) return -ENOMEM; =20 - netdev_dbg(bp->dev, + netdev_dbg(bp->netdev, "Allocated RX buffers of %d bytes at %08lx (mapped %p)\n", size, (unsigned long)queue->rx_buffers_dma, queue->rx_buffers); return 0; @@ -2757,14 +2758,14 @@ static int macb_alloc_consistent(struct macb *bp) tx =3D dma_alloc_coherent(dev, size, &tx_dma, GFP_KERNEL); if (!tx || upper_32_bits(tx_dma) !=3D upper_32_bits(tx_dma + size - 1)) goto out_err; - netdev_dbg(bp->dev, "Allocated %zu bytes for %u TX rings at %08lx (mapped= %p)\n", + netdev_dbg(bp->netdev, "Allocated %zu bytes for %u TX rings at %08lx (map= ped %p)\n", size, bp->num_queues, (unsigned long)tx_dma, tx); =20 size =3D bp->num_queues * macb_rx_ring_size_per_queue(bp); rx =3D dma_alloc_coherent(dev, size, &rx_dma, GFP_KERNEL); if (!rx || upper_32_bits(rx_dma) !=3D upper_32_bits(rx_dma + size - 1)) goto out_err; - netdev_dbg(bp->dev, "Allocated %zu bytes for %u RX rings at %08lx (mapped= %p)\n", + netdev_dbg(bp->netdev, "Allocated %zu bytes for %u RX rings at %08lx (map= ped %p)\n", size, bp->num_queues, (unsigned long)rx_dma, rx); =20 for (q =3D 0, queue =3D bp->queues; q < bp->num_queues; ++q, ++queue) { @@ -2993,7 +2994,7 @@ static void macb_configure_dma(struct macb *bp) else dmacfg |=3D GEM_BIT(ENDIA_DESC); /* CPU in big endian */ =20 - if (bp->dev->features & NETIF_F_HW_CSUM) + if (bp->netdev->features & NETIF_F_HW_CSUM) dmacfg |=3D GEM_BIT(TXCOEN); else dmacfg &=3D ~GEM_BIT(TXCOEN); @@ -3003,7 +3004,7 @@ static void macb_configure_dma(struct macb *bp) dmacfg |=3D GEM_BIT(ADDR64); if (macb_dma_ptp(bp)) dmacfg |=3D GEM_BIT(RXEXT) | GEM_BIT(TXEXT); - netdev_dbg(bp->dev, "Cadence configure DMA with 0x%08x\n", + netdev_dbg(bp->netdev, "Cadence configure DMA with 0x%08x\n", dmacfg); gem_writel(bp, DMACFG, dmacfg); } @@ -3027,11 +3028,11 @@ static void macb_init_hw(struct macb *bp) config |=3D MACB_BIT(JFRAME); /* Enable 
jumbo frames */ else config |=3D MACB_BIT(BIG); /* Receive oversized frames */ - if (bp->dev->flags & IFF_PROMISC) + if (bp->netdev->flags & IFF_PROMISC) config |=3D MACB_BIT(CAF); /* Copy All Frames */ - else if (macb_is_gem(bp) && bp->dev->features & NETIF_F_RXCSUM) + else if (macb_is_gem(bp) && bp->netdev->features & NETIF_F_RXCSUM) config |=3D GEM_BIT(RXCOEN); - if (!(bp->dev->flags & IFF_BROADCAST)) + if (!(bp->netdev->flags & IFF_BROADCAST)) config |=3D MACB_BIT(NBC); /* No BroadCast */ config |=3D macb_dbw(bp); macb_writel(bp, NCFGR, config); @@ -3105,17 +3106,17 @@ static int hash_get_index(__u8 *addr) } =20 /* Add multicast addresses to the internal multicast-hash table. */ -static void macb_sethashtable(struct net_device *dev) +static void macb_sethashtable(struct net_device *netdev) { struct netdev_hw_addr *ha; unsigned long mc_filter[2]; unsigned int bitnr; - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); =20 mc_filter[0] =3D 0; mc_filter[1] =3D 0; =20 - netdev_for_each_mc_addr(ha, dev) { + netdev_for_each_mc_addr(ha, netdev) { bitnr =3D hash_get_index(ha->addr); mc_filter[bitnr >> 5] |=3D 1 << (bitnr & 31); } @@ -3125,14 +3126,14 @@ static void macb_sethashtable(struct net_device *de= v) } =20 /* Enable/Disable promiscuous and multicast modes. 
*/ -static void macb_set_rx_mode(struct net_device *dev) +static void macb_set_rx_mode(struct net_device *netdev) { unsigned long cfg; - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); =20 cfg =3D macb_readl(bp, NCFGR); =20 - if (dev->flags & IFF_PROMISC) { + if (netdev->flags & IFF_PROMISC) { /* Enable promiscuous mode */ cfg |=3D MACB_BIT(CAF); =20 @@ -3144,20 +3145,20 @@ static void macb_set_rx_mode(struct net_device *dev) cfg &=3D ~MACB_BIT(CAF); =20 /* Enable RX checksum offload only if requested */ - if (macb_is_gem(bp) && dev->features & NETIF_F_RXCSUM) + if (macb_is_gem(bp) && netdev->features & NETIF_F_RXCSUM) cfg |=3D GEM_BIT(RXCOEN); } =20 - if (dev->flags & IFF_ALLMULTI) { + if (netdev->flags & IFF_ALLMULTI) { /* Enable all multicast mode */ macb_or_gem_writel(bp, HRB, -1); macb_or_gem_writel(bp, HRT, -1); cfg |=3D MACB_BIT(NCFGR_MTI); - } else if (!netdev_mc_empty(dev)) { + } else if (!netdev_mc_empty(netdev)) { /* Enable specific multicasts */ - macb_sethashtable(dev); + macb_sethashtable(netdev); cfg |=3D MACB_BIT(NCFGR_MTI); - } else if (dev->flags & (~IFF_ALLMULTI)) { + } else if (netdev->flags & (~IFF_ALLMULTI)) { /* Disable all multicast mode */ macb_or_gem_writel(bp, HRB, 0); macb_or_gem_writel(bp, HRT, 0); @@ -3167,15 +3168,15 @@ static void macb_set_rx_mode(struct net_device *dev) macb_writel(bp, NCFGR, cfg); } =20 -static int macb_open(struct net_device *dev) +static int macb_open(struct net_device *netdev) { - size_t bufsz =3D dev->mtu + ETH_HLEN + ETH_FCS_LEN + NET_IP_ALIGN; - struct macb *bp =3D netdev_priv(dev); + size_t bufsz =3D netdev->mtu + ETH_HLEN + ETH_FCS_LEN + NET_IP_ALIGN; + struct macb *bp =3D netdev_priv(netdev); struct macb_queue *queue; unsigned int q; int err; =20 - netdev_dbg(bp->dev, "open\n"); + netdev_dbg(bp->netdev, "open\n"); =20 err =3D pm_runtime_resume_and_get(&bp->pdev->dev); if (err < 0) @@ -3186,7 +3187,7 @@ static int macb_open(struct net_device *dev) =20 err =3D 
macb_alloc_consistent(bp); if (err) { - netdev_err(dev, "Unable to allocate DMA memory (error %d)\n", + netdev_err(netdev, "Unable to allocate DMA memory (error %d)\n", err); goto pm_exit; } @@ -3213,10 +3214,10 @@ static int macb_open(struct net_device *dev) if (err) goto phy_off; =20 - netif_tx_start_all_queues(dev); + netif_tx_start_all_queues(netdev); =20 if (bp->ptp_info) - bp->ptp_info->ptp_init(dev); + bp->ptp_info->ptp_init(netdev); =20 return 0; =20 @@ -3235,19 +3236,19 @@ static int macb_open(struct net_device *dev) return err; } =20 -static int macb_close(struct net_device *dev) +static int macb_close(struct net_device *netdev) { - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); struct macb_queue *queue; unsigned long flags; unsigned int q; =20 - netif_tx_stop_all_queues(dev); + netif_tx_stop_all_queues(netdev); =20 for (q =3D 0, queue =3D bp->queues; q < bp->num_queues; ++q, ++queue) { napi_disable(&queue->napi_rx); napi_disable(&queue->napi_tx); - netdev_tx_reset_queue(netdev_get_tx_queue(dev, q)); + netdev_tx_reset_queue(netdev_get_tx_queue(netdev, q)); } =20 cancel_delayed_work_sync(&bp->tx_lpi_work); @@ -3259,38 +3260,38 @@ static int macb_close(struct net_device *dev) =20 spin_lock_irqsave(&bp->lock, flags); macb_reset_hw(bp); - netif_carrier_off(dev); + netif_carrier_off(netdev); spin_unlock_irqrestore(&bp->lock, flags); =20 macb_free_consistent(bp); =20 if (bp->ptp_info) - bp->ptp_info->ptp_remove(dev); + bp->ptp_info->ptp_remove(netdev); =20 pm_runtime_put(&bp->pdev->dev); =20 return 0; } =20 -static int macb_change_mtu(struct net_device *dev, int new_mtu) +static int macb_change_mtu(struct net_device *netdev, int new_mtu) { - if (netif_running(dev)) + if (netif_running(netdev)) return -EBUSY; =20 - WRITE_ONCE(dev->mtu, new_mtu); + WRITE_ONCE(netdev->mtu, new_mtu); =20 return 0; } =20 -static int macb_set_mac_addr(struct net_device *dev, void *addr) +static int macb_set_mac_addr(struct net_device *netdev, void 
*addr) { int err; =20 - err =3D eth_mac_addr(dev, addr); + err =3D eth_mac_addr(netdev, addr); if (err < 0) return err; =20 - macb_set_hwaddr(netdev_priv(dev)); + macb_set_hwaddr(netdev_priv(netdev)); return 0; } =20 @@ -3328,7 +3329,7 @@ static void gem_get_stats(struct macb *bp, struct rtn= l_link_stats64 *nstat) struct gem_stats *hwstat =3D &bp->hw_stats.gem; =20 spin_lock_irq(&bp->stats_lock); - if (netif_running(bp->dev)) + if (netif_running(bp->netdev)) gem_update_stats(bp); =20 nstat->rx_errors =3D (hwstat->rx_frame_check_sequence_errors + @@ -3361,10 +3362,10 @@ static void gem_get_stats(struct macb *bp, struct r= tnl_link_stats64 *nstat) spin_unlock_irq(&bp->stats_lock); } =20 -static void gem_get_ethtool_stats(struct net_device *dev, +static void gem_get_ethtool_stats(struct net_device *netdev, struct ethtool_stats *stats, u64 *data) { - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); =20 spin_lock_irq(&bp->stats_lock); gem_update_stats(bp); @@ -3373,9 +3374,9 @@ static void gem_get_ethtool_stats(struct net_device *= dev, spin_unlock_irq(&bp->stats_lock); } =20 -static int gem_get_sset_count(struct net_device *dev, int sset) +static int gem_get_sset_count(struct net_device *netdev, int sset) { - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); =20 switch (sset) { case ETH_SS_STATS: @@ -3385,9 +3386,9 @@ static int gem_get_sset_count(struct net_device *dev,= int sset) } } =20 -static void gem_get_ethtool_strings(struct net_device *dev, u32 sset, u8 *= p) +static void gem_get_ethtool_strings(struct net_device *netdev, u32 sset, u= 8 *p) { - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); struct macb_queue *queue; unsigned int i; unsigned int q; @@ -3406,13 +3407,13 @@ static void gem_get_ethtool_strings(struct net_devi= ce *dev, u32 sset, u8 *p) } } =20 -static void macb_get_stats(struct net_device *dev, +static void macb_get_stats(struct net_device *netdev, 
struct rtnl_link_stats64 *nstat) { - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); struct macb_stats *hwstat =3D &bp->hw_stats.macb; =20 - netdev_stats_to_stats64(nstat, &bp->dev->stats); + netdev_stats_to_stats64(nstat, &bp->netdev->stats); if (macb_is_gem(bp)) { gem_get_stats(bp, nstat); return; @@ -3456,10 +3457,10 @@ static void macb_get_stats(struct net_device *dev, spin_unlock_irq(&bp->stats_lock); } =20 -static void macb_get_pause_stats(struct net_device *dev, +static void macb_get_pause_stats(struct net_device *netdev, struct ethtool_pause_stats *pause_stats) { - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); struct macb_stats *hwstat =3D &bp->hw_stats.macb; =20 spin_lock_irq(&bp->stats_lock); @@ -3469,10 +3470,10 @@ static void macb_get_pause_stats(struct net_device = *dev, spin_unlock_irq(&bp->stats_lock); } =20 -static void gem_get_pause_stats(struct net_device *dev, +static void gem_get_pause_stats(struct net_device *netdev, struct ethtool_pause_stats *pause_stats) { - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); struct gem_stats *hwstat =3D &bp->hw_stats.gem; =20 spin_lock_irq(&bp->stats_lock); @@ -3482,10 +3483,10 @@ static void gem_get_pause_stats(struct net_device *= dev, spin_unlock_irq(&bp->stats_lock); } =20 -static void macb_get_eth_mac_stats(struct net_device *dev, +static void macb_get_eth_mac_stats(struct net_device *netdev, struct ethtool_eth_mac_stats *mac_stats) { - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); struct macb_stats *hwstat =3D &bp->hw_stats.macb; =20 spin_lock_irq(&bp->stats_lock); @@ -3507,10 +3508,10 @@ static void macb_get_eth_mac_stats(struct net_devic= e *dev, spin_unlock_irq(&bp->stats_lock); } =20 -static void gem_get_eth_mac_stats(struct net_device *dev, +static void gem_get_eth_mac_stats(struct net_device *netdev, struct ethtool_eth_mac_stats *mac_stats) { - struct macb *bp =3D 
netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); struct gem_stats *hwstat =3D &bp->hw_stats.gem; =20 spin_lock_irq(&bp->stats_lock); @@ -3540,10 +3541,10 @@ static void gem_get_eth_mac_stats(struct net_device= *dev, } =20 /* TODO: Report SQE test errors when added to phy_stats */ -static void macb_get_eth_phy_stats(struct net_device *dev, +static void macb_get_eth_phy_stats(struct net_device *netdev, struct ethtool_eth_phy_stats *phy_stats) { - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); struct macb_stats *hwstat =3D &bp->hw_stats.macb; =20 spin_lock_irq(&bp->stats_lock); @@ -3552,10 +3553,10 @@ static void macb_get_eth_phy_stats(struct net_devic= e *dev, spin_unlock_irq(&bp->stats_lock); } =20 -static void gem_get_eth_phy_stats(struct net_device *dev, +static void gem_get_eth_phy_stats(struct net_device *netdev, struct ethtool_eth_phy_stats *phy_stats) { - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); struct gem_stats *hwstat =3D &bp->hw_stats.gem; =20 spin_lock_irq(&bp->stats_lock); @@ -3564,11 +3565,11 @@ static void gem_get_eth_phy_stats(struct net_device= *dev, spin_unlock_irq(&bp->stats_lock); } =20 -static void macb_get_rmon_stats(struct net_device *dev, +static void macb_get_rmon_stats(struct net_device *netdev, struct ethtool_rmon_stats *rmon_stats, const struct ethtool_rmon_hist_range **ranges) { - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); struct macb_stats *hwstat =3D &bp->hw_stats.macb; =20 spin_lock_irq(&bp->stats_lock); @@ -3590,11 +3591,11 @@ static const struct ethtool_rmon_hist_range gem_rmo= n_ranges[] =3D { { }, }; =20 -static void gem_get_rmon_stats(struct net_device *dev, +static void gem_get_rmon_stats(struct net_device *netdev, struct ethtool_rmon_stats *rmon_stats, const struct ethtool_rmon_hist_range **ranges) { - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); struct gem_stats *hwstat 
=3D &bp->hw_stats.gem; =20 spin_lock_irq(&bp->stats_lock); @@ -3625,10 +3626,10 @@ static int macb_get_regs_len(struct net_device *net= dev) return MACB_GREGS_NBR * sizeof(u32); } =20 -static void macb_get_regs(struct net_device *dev, struct ethtool_regs *reg= s, +static void macb_get_regs(struct net_device *netdev, struct ethtool_regs *= regs, void *p) { - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); unsigned int tail, head; u32 *regs_buff =3D p; =20 @@ -3745,16 +3746,16 @@ static int macb_set_ringparam(struct net_device *ne= tdev, return 0; } =20 - if (netif_running(bp->dev)) { + if (netif_running(bp->netdev)) { reset =3D 1; - macb_close(bp->dev); + macb_close(bp->netdev); } =20 bp->rx_ring_size =3D new_rx_size; bp->tx_ring_size =3D new_tx_size; =20 if (reset) - macb_open(bp->dev); + macb_open(bp->netdev); =20 return 0; } @@ -3781,13 +3782,13 @@ static s32 gem_get_ptp_max_adj(void) return 64000000; } =20 -static int gem_get_ts_info(struct net_device *dev, +static int gem_get_ts_info(struct net_device *netdev, struct kernel_ethtool_ts_info *info) { - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); =20 if (!macb_dma_ptp(bp)) { - ethtool_op_get_ts_info(dev, info); + ethtool_op_get_ts_info(netdev, info); return 0; } =20 @@ -3834,7 +3835,7 @@ static int macb_get_ts_info(struct net_device *netdev, =20 static void gem_enable_flow_filters(struct macb *bp, bool enable) { - struct net_device *netdev =3D bp->dev; + struct net_device *netdev =3D bp->netdev; struct ethtool_rx_fs_item *item; u32 t2_scr; int num_t2_scr; @@ -4164,16 +4165,16 @@ static const struct ethtool_ops macb_ethtool_ops = =3D { .set_ringparam =3D macb_set_ringparam, }; =20 -static int macb_get_eee(struct net_device *dev, struct ethtool_keee *eee) +static int macb_get_eee(struct net_device *netdev, struct ethtool_keee *ee= e) { - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); =20 return 
phylink_ethtool_get_eee(bp->phylink, eee); } =20 -static int macb_set_eee(struct net_device *dev, struct ethtool_keee *eee) +static int macb_set_eee(struct net_device *netdev, struct ethtool_keee *ee= e) { - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); =20 return phylink_ethtool_set_eee(bp->phylink, eee); } @@ -4204,43 +4205,43 @@ static const struct ethtool_ops gem_ethtool_ops =3D= { .set_eee =3D macb_set_eee, }; =20 -static int macb_ioctl(struct net_device *dev, struct ifreq *rq, int cmd) +static int macb_ioctl(struct net_device *netdev, struct ifreq *rq, int cmd) { - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); =20 - if (!netif_running(dev)) + if (!netif_running(netdev)) return -EINVAL; =20 return phylink_mii_ioctl(bp->phylink, rq, cmd); } =20 -static int macb_hwtstamp_get(struct net_device *dev, +static int macb_hwtstamp_get(struct net_device *netdev, struct kernel_hwtstamp_config *cfg) { - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); =20 - if (!netif_running(dev)) + if (!netif_running(netdev)) return -EINVAL; =20 if (!bp->ptp_info) return -EOPNOTSUPP; =20 - return bp->ptp_info->get_hwtst(dev, cfg); + return bp->ptp_info->get_hwtst(netdev, cfg); } =20 -static int macb_hwtstamp_set(struct net_device *dev, +static int macb_hwtstamp_set(struct net_device *netdev, struct kernel_hwtstamp_config *cfg, struct netlink_ext_ack *extack) { - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); =20 - if (!netif_running(dev)) + if (!netif_running(netdev)) return -EINVAL; =20 if (!bp->ptp_info) return -EOPNOTSUPP; =20 - return bp->ptp_info->set_hwtst(dev, cfg, extack); + return bp->ptp_info->set_hwtst(netdev, cfg, extack); } =20 static inline void macb_set_txcsum_feature(struct macb *bp, @@ -4263,7 +4264,7 @@ static inline void macb_set_txcsum_feature(struct mac= b *bp, static inline void macb_set_rxcsum_feature(struct macb *bp, 
netdev_features_t features) { - struct net_device *netdev =3D bp->dev; + struct net_device *netdev =3D bp->netdev; u32 val; =20 if (!macb_is_gem(bp)) @@ -4310,7 +4311,7 @@ static int macb_set_features(struct net_device *netde= v, =20 static void macb_restore_features(struct macb *bp) { - struct net_device *netdev =3D bp->dev; + struct net_device *netdev =3D bp->netdev; netdev_features_t features =3D netdev->features; struct ethtool_rx_fs_item *item; =20 @@ -4327,14 +4328,14 @@ static void macb_restore_features(struct macb *bp) macb_set_rxflow_feature(bp, features); } =20 -static int macb_taprio_setup_replace(struct net_device *ndev, +static int macb_taprio_setup_replace(struct net_device *netdev, struct tc_taprio_qopt_offload *conf) { u64 total_on_time =3D 0, start_time_sec =3D 0, start_time =3D conf->base_= time; u32 configured_queues =3D 0, speed =3D 0, start_time_nsec; struct macb_queue_enst_config *enst_queue; struct tc_taprio_sched_entry *entry; - struct macb *bp =3D netdev_priv(ndev); + struct macb *bp =3D netdev_priv(netdev); struct ethtool_link_ksettings kset; struct macb_queue *queue; u32 queue_mask; @@ -4343,13 +4344,13 @@ static int macb_taprio_setup_replace(struct net_dev= ice *ndev, int err; =20 if (conf->num_entries > bp->num_queues) { - netdev_err(ndev, "Too many TAPRIO entries: %zu > %d queues\n", + netdev_err(netdev, "Too many TAPRIO entries: %zu > %d queues\n", conf->num_entries, bp->num_queues); return -EINVAL; } =20 if (conf->base_time < 0) { - netdev_err(ndev, "Invalid base_time: must be 0 or positive, got %lld\n", + netdev_err(netdev, "Invalid base_time: must be 0 or positive, got %lld\n= ", conf->base_time); return -ERANGE; } @@ -4357,13 +4358,13 @@ static int macb_taprio_setup_replace(struct net_dev= ice *ndev, /* Get the current link speed */ err =3D phylink_ethtool_ksettings_get(bp->phylink, &kset); if (unlikely(err)) { - netdev_err(ndev, "Failed to get link settings: %d\n", err); + netdev_err(netdev, "Failed to get link settings: %d\n", 
err); return err; } =20 speed =3D kset.base.speed; if (unlikely(speed <=3D 0)) { - netdev_err(ndev, "Invalid speed: %d\n", speed); + netdev_err(netdev, "Invalid speed: %d\n", speed); return -EINVAL; } =20 @@ -4376,7 +4377,7 @@ static int macb_taprio_setup_replace(struct net_devic= e *ndev, entry =3D &conf->entries[i]; =20 if (entry->command !=3D TC_TAPRIO_CMD_SET_GATES) { - netdev_err(ndev, "Entry %zu: unsupported command %d\n", + netdev_err(netdev, "Entry %zu: unsupported command %d\n", i, entry->command); err =3D -EOPNOTSUPP; goto cleanup; @@ -4384,7 +4385,7 @@ static int macb_taprio_setup_replace(struct net_devic= e *ndev, =20 /* Validate gate_mask: must be nonzero, single queue, and within range */ if (!is_power_of_2(entry->gate_mask)) { - netdev_err(ndev, "Entry %zu: gate_mask 0x%x is not a power of 2 (only o= ne queue per entry allowed)\n", + netdev_err(netdev, "Entry %zu: gate_mask 0x%x is not a power of 2 (only= one queue per entry allowed)\n", i, entry->gate_mask); err =3D -EINVAL; goto cleanup; @@ -4393,7 +4394,7 @@ static int macb_taprio_setup_replace(struct net_devic= e *ndev, /* gate_mask must not select queues outside the valid queues */ queue_id =3D order_base_2(entry->gate_mask); if (queue_id >=3D bp->num_queues) { - netdev_err(ndev, "Entry %zu: gate_mask 0x%x exceeds queue range (max_qu= eues=3D%d)\n", + netdev_err(netdev, "Entry %zu: gate_mask 0x%x exceeds queue range (max_= queues=3D%d)\n", i, entry->gate_mask, bp->num_queues); err =3D -EINVAL; goto cleanup; @@ -4403,7 +4404,7 @@ static int macb_taprio_setup_replace(struct net_devic= e *ndev, start_time_sec =3D start_time; start_time_nsec =3D do_div(start_time_sec, NSEC_PER_SEC); if (start_time_sec > GENMASK(GEM_START_TIME_SEC_SIZE - 1, 0)) { - netdev_err(ndev, "Entry %zu: Start time %llu s exceeds hardware limit\n= ", + netdev_err(netdev, "Entry %zu: Start time %llu s exceeds hardware limit= \n", i, start_time_sec); err =3D -ERANGE; goto cleanup; @@ -4411,7 +4412,7 @@ static int 
macb_taprio_setup_replace(struct net_devic= e *ndev, =20 /* Check for on time limit */ if (entry->interval > enst_max_hw_interval(speed)) { - netdev_err(ndev, "Entry %zu: interval %u ns exceeds hardware limit %llu= ns\n", + netdev_err(netdev, "Entry %zu: interval %u ns exceeds hardware limit %l= lu ns\n", i, entry->interval, enst_max_hw_interval(speed)); err =3D -ERANGE; goto cleanup; @@ -4419,7 +4420,7 @@ static int macb_taprio_setup_replace(struct net_devic= e *ndev, =20 /* Check for off time limit*/ if ((conf->cycle_time - entry->interval) > enst_max_hw_interval(speed)) { - netdev_err(ndev, "Entry %zu: off_time %llu ns exceeds hardware limit %l= lu ns\n", + netdev_err(netdev, "Entry %zu: off_time %llu ns exceeds hardware limit = %llu ns\n", i, conf->cycle_time - entry->interval, enst_max_hw_interval(speed)); err =3D -ERANGE; @@ -4442,13 +4443,13 @@ static int macb_taprio_setup_replace(struct net_dev= ice *ndev, =20 /* Check total interval doesn't exceed cycle time */ if (total_on_time > conf->cycle_time) { - netdev_err(ndev, "Total ON %llu ns exceeds cycle time %llu ns\n", + netdev_err(netdev, "Total ON %llu ns exceeds cycle time %llu ns\n", total_on_time, conf->cycle_time); err =3D -EINVAL; goto cleanup; } =20 - netdev_dbg(ndev, "TAPRIO setup: %zu entries, base_time=3D%lld ns, cycle_t= ime=3D%llu ns\n", + netdev_dbg(netdev, "TAPRIO setup: %zu entries, base_time=3D%lld ns, cycle= _time=3D%llu ns\n", conf->num_entries, conf->base_time, conf->cycle_time); =20 /* All validations passed - proceed with hardware configuration */ @@ -4473,7 +4474,7 @@ static int macb_taprio_setup_replace(struct net_devic= e *ndev, gem_writel(bp, ENST_CONTROL, configured_queues); } =20 - netdev_info(ndev, "TAPRIO configuration completed successfully: %zu entri= es, %d queues configured\n", + netdev_info(netdev, "TAPRIO configuration completed successfully: %zu ent= ries, %d queues configured\n", conf->num_entries, hweight32(configured_queues)); =20 cleanup: @@ -4481,14 +4482,14 @@ 
static int macb_taprio_setup_replace(struct net_dev= ice *ndev, return err; } =20 -static void macb_taprio_destroy(struct net_device *ndev) +static void macb_taprio_destroy(struct net_device *netdev) { - struct macb *bp =3D netdev_priv(ndev); + struct macb *bp =3D netdev_priv(netdev); struct macb_queue *queue; u32 queue_mask; unsigned int q; =20 - netdev_reset_tc(ndev); + netdev_reset_tc(netdev); queue_mask =3D BIT_U32(bp->num_queues) - 1; =20 scoped_guard(spinlock_irqsave, &bp->lock) { @@ -4503,30 +4504,30 @@ static void macb_taprio_destroy(struct net_device *= ndev) queue_writel(queue, ENST_OFF_TIME, 0); } } - netdev_info(ndev, "TAPRIO destroy: All gates disabled\n"); + netdev_info(netdev, "TAPRIO destroy: All gates disabled\n"); } =20 -static int macb_setup_taprio(struct net_device *ndev, +static int macb_setup_taprio(struct net_device *netdev, struct tc_taprio_qopt_offload *taprio) { - struct macb *bp =3D netdev_priv(ndev); + struct macb *bp =3D netdev_priv(netdev); int err =3D 0; =20 - if (unlikely(!(ndev->hw_features & NETIF_F_HW_TC))) + if (unlikely(!(netdev->hw_features & NETIF_F_HW_TC))) return -EOPNOTSUPP; =20 /* Check if Device is in runtime suspend */ if (unlikely(pm_runtime_suspended(&bp->pdev->dev))) { - netdev_err(ndev, "Device is in runtime suspend\n"); + netdev_err(netdev, "Device is in runtime suspend\n"); return -EOPNOTSUPP; } =20 switch (taprio->cmd) { case TAPRIO_CMD_REPLACE: - err =3D macb_taprio_setup_replace(ndev, taprio); + err =3D macb_taprio_setup_replace(netdev, taprio); break; case TAPRIO_CMD_DESTROY: - macb_taprio_destroy(ndev); + macb_taprio_destroy(netdev); break; default: err =3D -EOPNOTSUPP; @@ -4535,15 +4536,15 @@ static int macb_setup_taprio(struct net_device *nde= v, return err; } =20 -static int macb_setup_tc(struct net_device *dev, enum tc_setup_type type, +static int macb_setup_tc(struct net_device *netdev, enum tc_setup_type typ= e, void *type_data) { - if (!dev || !type_data) + if (!netdev || !type_data) return -EINVAL; =20 
switch (type) { case TC_SETUP_QDISC_TAPRIO: - return macb_setup_taprio(dev, type_data); + return macb_setup_taprio(netdev, type_data); default: return -EOPNOTSUPP; } @@ -4751,9 +4752,9 @@ static int macb_clk_init(struct platform_device *pdev= , struct clk **pclk, =20 static int macb_init_dflt(struct platform_device *pdev) { - struct net_device *dev =3D platform_get_drvdata(pdev); + struct net_device *netdev =3D platform_get_drvdata(pdev); unsigned int hw_q, q; - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); struct macb_queue *queue; int err; u32 val, reg; @@ -4769,8 +4770,8 @@ static int macb_init_dflt(struct platform_device *pde= v) queue =3D &bp->queues[q]; queue->bp =3D bp; spin_lock_init(&queue->tx_ptr_lock); - netif_napi_add(dev, &queue->napi_rx, macb_rx_poll); - netif_napi_add(dev, &queue->napi_tx, macb_tx_poll); + netif_napi_add(netdev, &queue->napi_rx, macb_rx_poll); + netif_napi_add(netdev, &queue->napi_tx, macb_tx_poll); if (hw_q) { queue->ISR =3D GEM_ISR(hw_q - 1); queue->IER =3D GEM_IER(hw_q - 1); @@ -4800,7 +4801,7 @@ static int macb_init_dflt(struct platform_device *pde= v) */ queue->irq =3D platform_get_irq(pdev, q); err =3D devm_request_irq(&pdev->dev, queue->irq, macb_interrupt, - IRQF_SHARED, dev->name, queue); + IRQF_SHARED, netdev->name, queue); if (err) { dev_err(&pdev->dev, "Unable to request IRQ %d (error %d)\n", @@ -4812,7 +4813,7 @@ static int macb_init_dflt(struct platform_device *pde= v) q++; } =20 - dev->netdev_ops =3D &macb_netdev_ops; + netdev->netdev_ops =3D &macb_netdev_ops; =20 /* setup appropriated routines according to adapter type */ if (macb_is_gem(bp)) { @@ -4820,39 +4821,39 @@ static int macb_init_dflt(struct platform_device *p= dev) bp->macbgem_ops.mog_free_rx_buffers =3D gem_free_rx_buffers; bp->macbgem_ops.mog_init_rings =3D gem_init_rings; bp->macbgem_ops.mog_rx =3D gem_rx; - dev->ethtool_ops =3D &gem_ethtool_ops; + netdev->ethtool_ops =3D &gem_ethtool_ops; } else { 
bp->macbgem_ops.mog_alloc_rx_buffers =3D macb_alloc_rx_buffers; bp->macbgem_ops.mog_free_rx_buffers =3D macb_free_rx_buffers; bp->macbgem_ops.mog_init_rings =3D macb_init_rings; bp->macbgem_ops.mog_rx =3D macb_rx; - dev->ethtool_ops =3D &macb_ethtool_ops; + netdev->ethtool_ops =3D &macb_ethtool_ops; } =20 - netdev_sw_irq_coalesce_default_on(dev); + netdev_sw_irq_coalesce_default_on(netdev); =20 - dev->priv_flags |=3D IFF_LIVE_ADDR_CHANGE; + netdev->priv_flags |=3D IFF_LIVE_ADDR_CHANGE; =20 /* Set features */ - dev->hw_features =3D NETIF_F_SG; + netdev->hw_features =3D NETIF_F_SG; =20 /* Check LSO capability; runtime detection can be overridden by a cap * flag if the hardware is known to be buggy */ if (!(bp->caps & MACB_CAPS_NO_LSO) && GEM_BFEXT(PBUF_LSO, gem_readl(bp, DCFG6))) - dev->hw_features |=3D MACB_NETIF_LSO; + netdev->hw_features |=3D MACB_NETIF_LSO; =20 /* Checksum offload is only available on gem with packet buffer */ if (macb_is_gem(bp) && !(bp->caps & MACB_CAPS_FIFO_MODE)) - dev->hw_features |=3D NETIF_F_HW_CSUM | NETIF_F_RXCSUM; + netdev->hw_features |=3D NETIF_F_HW_CSUM | NETIF_F_RXCSUM; if (bp->caps & MACB_CAPS_SG_DISABLED) - dev->hw_features &=3D ~NETIF_F_SG; + netdev->hw_features &=3D ~NETIF_F_SG; /* Enable HW_TC if hardware supports QBV */ if (bp->caps & MACB_CAPS_QBV) - dev->hw_features |=3D NETIF_F_HW_TC; + netdev->hw_features |=3D NETIF_F_HW_TC; =20 - dev->features =3D dev->hw_features; + netdev->features =3D netdev->hw_features; =20 /* Check RX Flow Filters support. 
* Max Rx flows set by availability of screeners & compare regs: @@ -4870,7 +4871,7 @@ static int macb_init_dflt(struct platform_device *pde= v) reg =3D GEM_BFINS(ETHTCMP, (uint16_t)ETH_P_IP, reg); gem_writel_n(bp, ETHT, SCRT2_ETHT, reg); /* Filtering is supported in hw but don't enable it in kernel now */ - dev->hw_features |=3D NETIF_F_NTUPLE; + netdev->hw_features |=3D NETIF_F_NTUPLE; /* init Rx flow definitions */ bp->rx_fs_list.count =3D 0; spin_lock_init(&bp->rx_fs_lock); @@ -5073,9 +5074,9 @@ static void at91ether_stop(struct macb *lp) } =20 /* Open the ethernet interface */ -static int at91ether_open(struct net_device *dev) +static int at91ether_open(struct net_device *netdev) { - struct macb *lp =3D netdev_priv(dev); + struct macb *lp =3D netdev_priv(netdev); u32 ctl; int ret; =20 @@ -5097,7 +5098,7 @@ static int at91ether_open(struct net_device *dev) if (ret) goto stop; =20 - netif_start_queue(dev); + netif_start_queue(netdev); =20 return 0; =20 @@ -5109,11 +5110,11 @@ static int at91ether_open(struct net_device *dev) } =20 /* Close the interface */ -static int at91ether_close(struct net_device *dev) +static int at91ether_close(struct net_device *netdev) { - struct macb *lp =3D netdev_priv(dev); + struct macb *lp =3D netdev_priv(netdev); =20 - netif_stop_queue(dev); + netif_stop_queue(netdev); =20 phylink_stop(lp->phylink); phylink_disconnect_phy(lp->phylink); @@ -5127,14 +5128,14 @@ static int at91ether_close(struct net_device *dev) =20 /* Transmit packet */ static netdev_tx_t at91ether_start_xmit(struct sk_buff *skb, - struct net_device *dev) + struct net_device *netdev) { - struct macb *lp =3D netdev_priv(dev); + struct macb *lp =3D netdev_priv(netdev); =20 if (macb_readl(lp, TSR) & MACB_BIT(RM9200_BNQ)) { int desc =3D 0; =20 - netif_stop_queue(dev); + netif_stop_queue(netdev); =20 /* Store packet information (to free when Tx completed) */ lp->rm9200_txq[desc].skb =3D skb; @@ -5143,8 +5144,8 @@ static netdev_tx_t at91ether_start_xmit(struct sk_buf= f 
*skb, skb->len, DMA_TO_DEVICE); if (dma_mapping_error(&lp->pdev->dev, lp->rm9200_txq[desc].mapping)) { dev_kfree_skb_any(skb); - dev->stats.tx_dropped++; - netdev_err(dev, "%s: DMA mapping error\n", __func__); + netdev->stats.tx_dropped++; + netdev_err(netdev, "%s: DMA mapping error\n", __func__); return NETDEV_TX_OK; } =20 @@ -5154,7 +5155,8 @@ static netdev_tx_t at91ether_start_xmit(struct sk_buf= f *skb, macb_writel(lp, TCR, skb->len); =20 } else { - netdev_err(dev, "%s called, but device is busy!\n", __func__); + netdev_err(netdev, "%s called, but device is busy!\n", + __func__); return NETDEV_TX_BUSY; } =20 @@ -5164,9 +5166,9 @@ static netdev_tx_t at91ether_start_xmit(struct sk_buf= f *skb, /* Extract received frame from buffer descriptors and sent to upper layers. * (Called from interrupt context) */ -static void at91ether_rx(struct net_device *dev) +static void at91ether_rx(struct net_device *netdev) { - struct macb *lp =3D netdev_priv(dev); + struct macb *lp =3D netdev_priv(netdev); struct macb_queue *q =3D &lp->queues[0]; struct macb_dma_desc *desc; unsigned char *p_recv; @@ -5177,21 +5179,21 @@ static void at91ether_rx(struct net_device *dev) while (desc->addr & MACB_BIT(RX_USED)) { p_recv =3D q->rx_buffers + q->rx_tail * AT91ETHER_MAX_RBUFF_SZ; pktlen =3D MACB_BF(RX_FRMLEN, desc->ctrl); - skb =3D netdev_alloc_skb(dev, pktlen + 2); + skb =3D netdev_alloc_skb(netdev, pktlen + 2); if (skb) { skb_reserve(skb, 2); skb_put_data(skb, p_recv, pktlen); =20 - skb->protocol =3D eth_type_trans(skb, dev); - dev->stats.rx_packets++; - dev->stats.rx_bytes +=3D pktlen; + skb->protocol =3D eth_type_trans(skb, netdev); + netdev->stats.rx_packets++; + netdev->stats.rx_bytes +=3D pktlen; netif_rx(skb); } else { - dev->stats.rx_dropped++; + netdev->stats.rx_dropped++; } =20 if (desc->ctrl & MACB_BIT(RX_MHASH_MATCH)) - dev->stats.multicast++; + netdev->stats.multicast++; =20 /* reset ownership bit */ desc->addr &=3D ~MACB_BIT(RX_USED); @@ -5209,8 +5211,8 @@ static void 
at91ether_rx(struct net_device *dev) /* MAC interrupt handler */ static irqreturn_t at91ether_interrupt(int irq, void *dev_id) { - struct net_device *dev =3D dev_id; - struct macb *lp =3D netdev_priv(dev); + struct net_device *netdev =3D dev_id; + struct macb *lp =3D netdev_priv(netdev); u32 intstatus, ctl; unsigned int desc; =20 @@ -5221,13 +5223,13 @@ static irqreturn_t at91ether_interrupt(int irq, voi= d *dev_id) =20 /* Receive complete */ if (intstatus & MACB_BIT(RCOMP)) - at91ether_rx(dev); + at91ether_rx(netdev); =20 /* Transmit complete */ if (intstatus & MACB_BIT(TCOMP)) { /* The TCOM bit is set even if the transmission failed */ if (intstatus & (MACB_BIT(ISR_TUND) | MACB_BIT(ISR_RLE))) - dev->stats.tx_errors++; + netdev->stats.tx_errors++; =20 desc =3D 0; if (lp->rm9200_txq[desc].skb) { @@ -5235,10 +5237,10 @@ static irqreturn_t at91ether_interrupt(int irq, voi= d *dev_id) lp->rm9200_txq[desc].skb =3D NULL; dma_unmap_single(&lp->pdev->dev, lp->rm9200_txq[desc].mapping, lp->rm9200_txq[desc].size, DMA_TO_DEVICE); - dev->stats.tx_packets++; - dev->stats.tx_bytes +=3D lp->rm9200_txq[desc].size; + netdev->stats.tx_packets++; + netdev->stats.tx_bytes +=3D lp->rm9200_txq[desc].size; } - netif_wake_queue(dev); + netif_wake_queue(netdev); } =20 /* Work-around for EMAC Errata section 41.3.1 */ @@ -5250,18 +5252,18 @@ static irqreturn_t at91ether_interrupt(int irq, voi= d *dev_id) } =20 if (intstatus & MACB_BIT(ISR_ROVR)) - netdev_err(dev, "ROVR error\n"); + netdev_err(netdev, "ROVR error\n"); =20 return IRQ_HANDLED; } =20 #ifdef CONFIG_NET_POLL_CONTROLLER -static void at91ether_poll_controller(struct net_device *dev) +static void at91ether_poll_controller(struct net_device *netdev) { unsigned long flags; =20 local_irq_save(flags); - at91ether_interrupt(dev->irq, dev); + at91ether_interrupt(netdev->irq, netdev); local_irq_restore(flags); } #endif @@ -5308,17 +5310,17 @@ static int at91ether_clk_init(struct platform_devic= e *pdev, struct clk **pclk, =20 static int 
at91ether_init(struct platform_device *pdev) { - struct net_device *dev =3D platform_get_drvdata(pdev); - struct macb *bp =3D netdev_priv(dev); + struct net_device *netdev =3D platform_get_drvdata(pdev); + struct macb *bp =3D netdev_priv(netdev); int err; =20 bp->queues[0].bp =3D bp; =20 - dev->netdev_ops =3D &at91ether_netdev_ops; - dev->ethtool_ops =3D &macb_ethtool_ops; + netdev->netdev_ops =3D &at91ether_netdev_ops; + netdev->ethtool_ops =3D &macb_ethtool_ops; =20 - err =3D devm_request_irq(&pdev->dev, dev->irq, at91ether_interrupt, - 0, dev->name, dev); + err =3D devm_request_irq(&pdev->dev, netdev->irq, at91ether_interrupt, + 0, netdev->name, netdev); if (err) return err; =20 @@ -5447,8 +5449,8 @@ static int fu540_c000_init(struct platform_device *pd= ev) =20 static int init_reset_optional(struct platform_device *pdev) { - struct net_device *dev =3D platform_get_drvdata(pdev); - struct macb *bp =3D netdev_priv(dev); + struct net_device *netdev =3D platform_get_drvdata(pdev); + struct macb *bp =3D netdev_priv(netdev); int ret; =20 if (bp->phy_interface =3D=3D PHY_INTERFACE_MODE_SGMII) { @@ -5763,7 +5765,7 @@ static int macb_probe(struct platform_device *pdev) const struct macb_config *macb_config; struct clk *tsu_clk =3D NULL; phy_interface_t interface; - struct net_device *dev; + struct net_device *netdev; struct resource *regs; u32 wtrmrk_rst_val; void __iomem *mem; @@ -5798,19 +5800,19 @@ static int macb_probe(struct platform_device *pdev) goto err_disable_clocks; } =20 - dev =3D alloc_etherdev_mq(sizeof(*bp), num_queues); - if (!dev) { + netdev =3D alloc_etherdev_mq(sizeof(*bp), num_queues); + if (!netdev) { err =3D -ENOMEM; goto err_disable_clocks; } =20 - dev->base_addr =3D regs->start; + netdev->base_addr =3D regs->start; =20 - SET_NETDEV_DEV(dev, &pdev->dev); + SET_NETDEV_DEV(netdev, &pdev->dev); =20 - bp =3D netdev_priv(dev); + bp =3D netdev_priv(netdev); bp->pdev =3D pdev; - bp->dev =3D dev; + bp->netdev =3D netdev; bp->regs =3D mem; bp->native_io 
=3D native_io; if (native_io) { @@ -5883,21 +5885,21 @@ static int macb_probe(struct platform_device *pdev) bp->caps |=3D MACB_CAPS_DMA_64B; } #endif - platform_set_drvdata(pdev, dev); + platform_set_drvdata(pdev, netdev); =20 - dev->irq =3D platform_get_irq(pdev, 0); - if (dev->irq < 0) { - err =3D dev->irq; + netdev->irq =3D platform_get_irq(pdev, 0); + if (netdev->irq < 0) { + err =3D netdev->irq; goto err_out_free_netdev; } =20 /* MTU range: 68 - 1518 or 10240 */ - dev->min_mtu =3D GEM_MTU_MIN_SIZE; + netdev->min_mtu =3D GEM_MTU_MIN_SIZE; if ((bp->caps & MACB_CAPS_JUMBO) && bp->jumbo_max_len) - dev->max_mtu =3D MIN(bp->jumbo_max_len, RX_BUFFER_MAX) - + netdev->max_mtu =3D MIN(bp->jumbo_max_len, RX_BUFFER_MAX) - ETH_HLEN - ETH_FCS_LEN; else - dev->max_mtu =3D 1536 - ETH_HLEN - ETH_FCS_LEN; + netdev->max_mtu =3D 1536 - ETH_HLEN - ETH_FCS_LEN; =20 if (bp->caps & MACB_CAPS_BD_RD_PREFETCH) { val =3D GEM_BFEXT(RXBD_RDBUFF, gem_readl(bp, DCFG10)); @@ -5915,7 +5917,7 @@ static int macb_probe(struct platform_device *pdev) if (bp->caps & MACB_CAPS_NEEDS_RSTONUBR) bp->rx_intr_mask |=3D MACB_BIT(RXUBR); =20 - err =3D of_get_ethdev_address(np, bp->dev); + err =3D of_get_ethdev_address(np, bp->netdev); if (err =3D=3D -EPROBE_DEFER) goto err_out_free_netdev; else if (err) @@ -5937,9 +5939,9 @@ static int macb_probe(struct platform_device *pdev) if (err) goto err_out_phy_exit; =20 - netif_carrier_off(dev); + netif_carrier_off(netdev); =20 - err =3D register_netdev(dev); + err =3D register_netdev(netdev); if (err) { dev_err(&pdev->dev, "Cannot register net device, aborting.\n"); goto err_out_unregister_mdio; @@ -5948,9 +5950,9 @@ static int macb_probe(struct platform_device *pdev) INIT_WORK(&bp->hresp_err_bh_work, macb_hresp_error_task); INIT_DELAYED_WORK(&bp->tx_lpi_work, macb_tx_lpi_work_fn); =20 - netdev_info(dev, "Cadence %s rev 0x%08x at 0x%08lx irq %d (%pM)\n", + netdev_info(netdev, "Cadence %s rev 0x%08x at 0x%08lx irq %d (%pM)\n", macb_is_gem(bp) ? 
"GEM" : "MACB", macb_readl(bp, MID), - dev->base_addr, dev->irq, dev->dev_addr); + netdev->base_addr, netdev->irq, netdev->dev_addr); =20 pm_runtime_put_autosuspend(&bp->pdev->dev); =20 @@ -5964,7 +5966,7 @@ static int macb_probe(struct platform_device *pdev) phy_exit(bp->phy); =20 err_out_free_netdev: - free_netdev(dev); + free_netdev(netdev); =20 err_disable_clocks: macb_clks_disable(pclk, hclk, tx_clk, rx_clk, tsu_clk); @@ -5977,14 +5979,14 @@ static int macb_probe(struct platform_device *pdev) =20 static void macb_remove(struct platform_device *pdev) { - struct net_device *dev; + struct net_device *netdev; struct macb *bp; =20 - dev =3D platform_get_drvdata(pdev); + netdev =3D platform_get_drvdata(pdev); =20 - if (dev) { - bp =3D netdev_priv(dev); - unregister_netdev(dev); + if (netdev) { + bp =3D netdev_priv(netdev); + unregister_netdev(netdev); phy_exit(bp->phy); mdiobus_unregister(bp->mii_bus); mdiobus_free(bp->mii_bus); @@ -5996,7 +5998,7 @@ static void macb_remove(struct platform_device *pdev) pm_runtime_dont_use_autosuspend(&pdev->dev); pm_runtime_set_suspended(&pdev->dev); phylink_destroy(bp->phylink); - free_netdev(dev); + free_netdev(netdev); } } =20 @@ -6012,7 +6014,7 @@ static int __maybe_unused macb_suspend(struct device = *dev) unsigned int q; int err; =20 - if (!device_may_wakeup(&bp->dev->dev)) + if (!device_may_wakeup(&bp->netdev->dev)) phy_exit(bp->phy); =20 if (!netif_running(netdev)) @@ -6022,7 +6024,7 @@ static int __maybe_unused macb_suspend(struct device = *dev) if (bp->wolopts & WAKE_ARP) { /* Check for IP address in WOL ARP mode */ rcu_read_lock(); - idev =3D __in_dev_get_rcu(bp->dev); + idev =3D __in_dev_get_rcu(bp->netdev); if (idev) ifa =3D rcu_dereference(idev->ifa_list); if (!ifa) { @@ -6150,7 +6152,7 @@ static int __maybe_unused macb_resume(struct device *= dev) unsigned int q; int err; =20 - if (!device_may_wakeup(&bp->dev->dev)) + if (!device_may_wakeup(&bp->netdev->dev)) phy_init(bp->phy); =20 if (!netif_running(netdev)) diff 
--git a/drivers/net/ethernet/cadence/macb_pci.c b/drivers/net/ethernet= /cadence/macb_pci.c index fc4f5aee6ab3..91108d4366f6 100644 --- a/drivers/net/ethernet/cadence/macb_pci.c +++ b/drivers/net/ethernet/cadence/macb_pci.c @@ -24,48 +24,48 @@ #define GEM_PCLK_RATE 50000000 #define GEM_HCLK_RATE 50000000 =20 -static int macb_probe(struct pci_dev *pdev, const struct pci_device_id *id) +static int macb_probe(struct pci_dev *pci, const struct pci_device_id *id) { int err; - struct platform_device *plat_dev; + struct platform_device *pdev; struct platform_device_info plat_info; struct macb_platform_data plat_data; struct resource res[2]; =20 /* enable pci device */ - err =3D pcim_enable_device(pdev); + err =3D pcim_enable_device(pci); if (err < 0) { - dev_err(&pdev->dev, "Enabling PCI device has failed: %d", err); + dev_err(&pci->dev, "Enabling PCI device has failed: %d", err); return err; } =20 - pci_set_master(pdev); + pci_set_master(pci); =20 /* set up resources */ memset(res, 0x00, sizeof(struct resource) * ARRAY_SIZE(res)); - res[0].start =3D pci_resource_start(pdev, 0); - res[0].end =3D pci_resource_end(pdev, 0); + res[0].start =3D pci_resource_start(pci, 0); + res[0].end =3D pci_resource_end(pci, 0); res[0].name =3D PCI_DRIVER_NAME; res[0].flags =3D IORESOURCE_MEM; - res[1].start =3D pci_irq_vector(pdev, 0); + res[1].start =3D pci_irq_vector(pci, 0); res[1].name =3D PCI_DRIVER_NAME; res[1].flags =3D IORESOURCE_IRQ; =20 - dev_info(&pdev->dev, "EMAC physical base addr: %pa\n", + dev_info(&pci->dev, "EMAC physical base addr: %pa\n", &res[0].start); =20 /* set up macb platform data */ memset(&plat_data, 0, sizeof(plat_data)); =20 /* initialize clocks */ - plat_data.pclk =3D clk_register_fixed_rate(&pdev->dev, "pclk", NULL, 0, + plat_data.pclk =3D clk_register_fixed_rate(&pci->dev, "pclk", NULL, 0, GEM_PCLK_RATE); if (IS_ERR(plat_data.pclk)) { err =3D PTR_ERR(plat_data.pclk); goto err_pclk_register; } =20 - plat_data.hclk =3D clk_register_fixed_rate(&pdev->dev, 
"hclk", NULL, 0, + plat_data.hclk =3D clk_register_fixed_rate(&pci->dev, "hclk", NULL, 0, GEM_HCLK_RATE); if (IS_ERR(plat_data.hclk)) { err =3D PTR_ERR(plat_data.hclk); @@ -74,24 +74,24 @@ static int macb_probe(struct pci_dev *pdev, const struc= t pci_device_id *id) =20 /* set up platform device info */ memset(&plat_info, 0, sizeof(plat_info)); - plat_info.parent =3D &pdev->dev; - plat_info.fwnode =3D pdev->dev.fwnode; + plat_info.parent =3D &pci->dev; + plat_info.fwnode =3D pci->dev.fwnode; plat_info.name =3D PLAT_DRIVER_NAME; - plat_info.id =3D pdev->devfn; + plat_info.id =3D pci->devfn; plat_info.res =3D res; plat_info.num_res =3D ARRAY_SIZE(res); plat_info.data =3D &plat_data; plat_info.size_data =3D sizeof(plat_data); - plat_info.dma_mask =3D pdev->dma_mask; + plat_info.dma_mask =3D pci->dma_mask; =20 /* register platform device */ - plat_dev =3D platform_device_register_full(&plat_info); - if (IS_ERR(plat_dev)) { - err =3D PTR_ERR(plat_dev); + pdev =3D platform_device_register_full(&plat_info); + if (IS_ERR(pdev)) { + err =3D PTR_ERR(pdev); goto err_plat_dev_register; } =20 - pci_set_drvdata(pdev, plat_dev); + pci_set_drvdata(pci, pdev); =20 return 0; =20 @@ -105,14 +105,14 @@ static int macb_probe(struct pci_dev *pdev, const str= uct pci_device_id *id) return err; } =20 -static void macb_remove(struct pci_dev *pdev) +static void macb_remove(struct pci_dev *pci) { - struct platform_device *plat_dev =3D pci_get_drvdata(pdev); - struct macb_platform_data *plat_data =3D dev_get_platdata(&plat_dev->dev); + struct platform_device *pdev =3D pci_get_drvdata(pci); + struct macb_platform_data *plat_data =3D dev_get_platdata(&pdev->dev); =20 clk_unregister(plat_data->pclk); clk_unregister(plat_data->hclk); - platform_device_unregister(plat_dev); + platform_device_unregister(pdev); } =20 static const struct pci_device_id dev_id_table[] =3D { diff --git a/drivers/net/ethernet/cadence/macb_ptp.c b/drivers/net/ethernet= /cadence/macb_ptp.c index d91f7b1aa39c..e5195d7dac1d 
100644 --- a/drivers/net/ethernet/cadence/macb_ptp.c +++ b/drivers/net/ethernet/cadence/macb_ptp.c @@ -324,9 +324,9 @@ void gem_ptp_txstamp(struct macb *bp, struct sk_buff *s= kb, skb_tstamp_tx(skb, &shhwtstamps); } =20 -void gem_ptp_init(struct net_device *dev) +void gem_ptp_init(struct net_device *netdev) { - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); =20 bp->ptp_clock_info =3D gem_ptp_caps_template; =20 @@ -334,7 +334,7 @@ void gem_ptp_init(struct net_device *dev) bp->tsu_rate =3D bp->ptp_info->get_tsu_rate(bp); bp->ptp_clock_info.max_adj =3D bp->ptp_info->get_ptp_max_adj(); gem_ptp_init_timer(bp); - bp->ptp_clock =3D ptp_clock_register(&bp->ptp_clock_info, &dev->dev); + bp->ptp_clock =3D ptp_clock_register(&bp->ptp_clock_info, &netdev->dev); if (IS_ERR(bp->ptp_clock)) { pr_err("ptp clock register failed: %ld\n", PTR_ERR(bp->ptp_clock)); @@ -353,9 +353,9 @@ void gem_ptp_init(struct net_device *dev) GEM_PTP_TIMER_NAME); } =20 -void gem_ptp_remove(struct net_device *ndev) +void gem_ptp_remove(struct net_device *netdev) { - struct macb *bp =3D netdev_priv(ndev); + struct macb *bp =3D netdev_priv(netdev); =20 if (bp->ptp_clock) { ptp_clock_unregister(bp->ptp_clock); @@ -378,10 +378,10 @@ static int gem_ptp_set_ts_mode(struct macb *bp, return 0; } =20 -int gem_get_hwtst(struct net_device *dev, +int gem_get_hwtst(struct net_device *netdev, struct kernel_hwtstamp_config *tstamp_config) { - struct macb *bp =3D netdev_priv(dev); + struct macb *bp =3D netdev_priv(netdev); =20 *tstamp_config =3D bp->tstamp_config; if (!macb_dma_ptp(bp)) @@ -402,13 +402,13 @@ static void gem_ptp_set_one_step_sync(struct macb *bp= , u8 enable) macb_writel(bp, NCR, reg_val & ~MACB_BIT(OSSMODE)); } =20 -int gem_set_hwtst(struct net_device *dev, +int gem_set_hwtst(struct net_device *netdev, struct kernel_hwtstamp_config *tstamp_config, struct netlink_ext_ack *extack) { enum macb_bd_control tx_bd_control =3D TSTAMP_DISABLED; enum macb_bd_control 
 		rx_bd_control = TSTAMP_DISABLED;
-	struct macb *bp = netdev_priv(dev);
+	struct macb *bp = netdev_priv(netdev);
 	u32 regval;
 
 	if (!macb_dma_ptp(bp))
-- 
2.53.0

From nobody Wed Apr 1 20:37:31 2026
From: Théo Lebrun
Date: Wed, 01 Apr 2026 18:39:05 +0200
Subject: [PATCH net-next 02/11] net: macb: unify `struct macb *` naming convention
Message-Id: <20260401-macb-context-v1-2-9590c5ab7272@bootlin.com>
References: <20260401-macb-context-v1-0-9590c5ab7272@bootlin.com>
In-Reply-To: <20260401-macb-context-v1-0-9590c5ab7272@bootlin.com>
To: Nicolas Ferre , Claudiu Beznea , Andrew Lunn , "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Richard Cochran , Russell King Cc: Paolo Valerio , Conor Dooley , Nicolai Buchwitz , Vladimir Kondratiev , Gregory CLEMENT , =?utf-8?q?Beno=C3=AEt_Monin?= , Tawfik Bayouk , Thomas Petazzoni , Maxime Chevallier , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, =?utf-8?q?Th=C3=A9o_Lebrun?= X-Mailer: b4 0.15.0 X-Last-TLS-Session-Version: TLSv1.3 For historical reason, MACB has both: struct macb *bp; struct macb *lp; // used in at91ether functions Use only the former. Signed-off-by: Th=C3=A9o Lebrun --- drivers/net/ethernet/cadence/macb_main.c | 176 ++++++++++++++++-----------= ---- 1 file changed, 91 insertions(+), 85 deletions(-) diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/etherne= t/cadence/macb_main.c index 00bd662b5e46..05ccb6f186f7 100644 --- a/drivers/net/ethernet/cadence/macb_main.c +++ b/drivers/net/ethernet/cadence/macb_main.c @@ -4958,71 +4958,72 @@ static int macb_init(struct platform_device *pdev, =20 static struct sifive_fu540_macb_mgmt *mgmt; =20 -static int at91ether_alloc_coherent(struct macb *lp) +static int at91ether_alloc_coherent(struct macb *bp) { - struct macb_queue *q =3D &lp->queues[0]; + struct macb_queue *queue =3D &bp->queues[0]; =20 - q->rx_ring =3D dma_alloc_coherent(&lp->pdev->dev, - (AT91ETHER_MAX_RX_DESCR * - macb_dma_desc_get_size(lp)), - &q->rx_ring_dma, GFP_KERNEL); - if (!q->rx_ring) + queue->rx_ring =3D dma_alloc_coherent(&bp->pdev->dev, + (AT91ETHER_MAX_RX_DESCR * + macb_dma_desc_get_size(bp)), + &queue->rx_ring_dma, GFP_KERNEL); + if (!queue->rx_ring) return -ENOMEM; =20 - q->rx_buffers =3D dma_alloc_coherent(&lp->pdev->dev, - AT91ETHER_MAX_RX_DESCR * - AT91ETHER_MAX_RBUFF_SZ, - &q->rx_buffers_dma, GFP_KERNEL); - if (!q->rx_buffers) { - dma_free_coherent(&lp->pdev->dev, + queue->rx_buffers =3D dma_alloc_coherent(&bp->pdev->dev, + AT91ETHER_MAX_RX_DESCR * + AT91ETHER_MAX_RBUFF_SZ, + &queue->rx_buffers_dma, + GFP_KERNEL); + if (!queue->rx_buffers) 
{ + dma_free_coherent(&bp->pdev->dev, AT91ETHER_MAX_RX_DESCR * - macb_dma_desc_get_size(lp), - q->rx_ring, q->rx_ring_dma); - q->rx_ring =3D NULL; + macb_dma_desc_get_size(bp), + queue->rx_ring, queue->rx_ring_dma); + queue->rx_ring =3D NULL; return -ENOMEM; } =20 return 0; } =20 -static void at91ether_free_coherent(struct macb *lp) +static void at91ether_free_coherent(struct macb *bp) { - struct macb_queue *q =3D &lp->queues[0]; + struct macb_queue *queue =3D &bp->queues[0]; =20 - if (q->rx_ring) { - dma_free_coherent(&lp->pdev->dev, + if (queue->rx_ring) { + dma_free_coherent(&bp->pdev->dev, AT91ETHER_MAX_RX_DESCR * - macb_dma_desc_get_size(lp), - q->rx_ring, q->rx_ring_dma); - q->rx_ring =3D NULL; + macb_dma_desc_get_size(bp), + queue->rx_ring, queue->rx_ring_dma); + queue->rx_ring =3D NULL; } =20 - if (q->rx_buffers) { - dma_free_coherent(&lp->pdev->dev, + if (queue->rx_buffers) { + dma_free_coherent(&bp->pdev->dev, AT91ETHER_MAX_RX_DESCR * AT91ETHER_MAX_RBUFF_SZ, - q->rx_buffers, q->rx_buffers_dma); - q->rx_buffers =3D NULL; + queue->rx_buffers, queue->rx_buffers_dma); + queue->rx_buffers =3D NULL; } } =20 /* Initialize and start the Receiver and Transmit subsystems */ -static int at91ether_start(struct macb *lp) +static int at91ether_start(struct macb *bp) { - struct macb_queue *q =3D &lp->queues[0]; + struct macb_queue *queue =3D &bp->queues[0]; struct macb_dma_desc *desc; dma_addr_t addr; u32 ctl; int i, ret; =20 - ret =3D at91ether_alloc_coherent(lp); + ret =3D at91ether_alloc_coherent(bp); if (ret) return ret; =20 - addr =3D q->rx_buffers_dma; + addr =3D queue->rx_buffers_dma; for (i =3D 0; i < AT91ETHER_MAX_RX_DESCR; i++) { - desc =3D macb_rx_desc(q, i); - macb_set_addr(lp, desc, addr); + desc =3D macb_rx_desc(queue, i); + macb_set_addr(bp, desc, addr); desc->ctrl =3D 0; addr +=3D AT91ETHER_MAX_RBUFF_SZ; } @@ -5031,17 +5032,17 @@ static int at91ether_start(struct macb *lp) desc->addr |=3D MACB_BIT(RX_WRAP); =20 /* Reset buffer index */ - q->rx_tail =3D 
0; + queue->rx_tail =3D 0; =20 /* Program address of descriptor list in Rx Buffer Queue register */ - macb_writel(lp, RBQP, q->rx_ring_dma); + macb_writel(bp, RBQP, queue->rx_ring_dma); =20 /* Enable Receive and Transmit */ - ctl =3D macb_readl(lp, NCR); - macb_writel(lp, NCR, ctl | MACB_BIT(RE) | MACB_BIT(TE)); + ctl =3D macb_readl(bp, NCR); + macb_writel(bp, NCR, ctl | MACB_BIT(RE) | MACB_BIT(TE)); =20 /* Enable MAC interrupts */ - macb_writel(lp, IER, MACB_BIT(RCOMP) | + macb_writel(bp, IER, MACB_BIT(RCOMP) | MACB_BIT(RXUBR) | MACB_BIT(ISR_TUND) | MACB_BIT(ISR_RLE) | @@ -5052,12 +5053,12 @@ static int at91ether_start(struct macb *lp) return 0; } =20 -static void at91ether_stop(struct macb *lp) +static void at91ether_stop(struct macb *bp) { u32 ctl; =20 /* Disable MAC interrupts */ - macb_writel(lp, IDR, MACB_BIT(RCOMP) | + macb_writel(bp, IDR, MACB_BIT(RCOMP) | MACB_BIT(RXUBR) | MACB_BIT(ISR_TUND) | MACB_BIT(ISR_RLE) | @@ -5066,35 +5067,35 @@ static void at91ether_stop(struct macb *lp) MACB_BIT(HRESP)); =20 /* Disable Receiver and Transmitter */ - ctl =3D macb_readl(lp, NCR); - macb_writel(lp, NCR, ctl & ~(MACB_BIT(TE) | MACB_BIT(RE))); + ctl =3D macb_readl(bp, NCR); + macb_writel(bp, NCR, ctl & ~(MACB_BIT(TE) | MACB_BIT(RE))); =20 /* Free resources. 
*/ - at91ether_free_coherent(lp); + at91ether_free_coherent(bp); } =20 /* Open the ethernet interface */ static int at91ether_open(struct net_device *netdev) { - struct macb *lp =3D netdev_priv(netdev); + struct macb *bp =3D netdev_priv(netdev); u32 ctl; int ret; =20 - ret =3D pm_runtime_resume_and_get(&lp->pdev->dev); + ret =3D pm_runtime_resume_and_get(&bp->pdev->dev); if (ret < 0) return ret; =20 /* Clear internal statistics */ - ctl =3D macb_readl(lp, NCR); - macb_writel(lp, NCR, ctl | MACB_BIT(CLRSTAT)); + ctl =3D macb_readl(bp, NCR); + macb_writel(bp, NCR, ctl | MACB_BIT(CLRSTAT)); =20 - macb_set_hwaddr(lp); + macb_set_hwaddr(bp); =20 - ret =3D at91ether_start(lp); + ret =3D at91ether_start(bp); if (ret) goto pm_exit; =20 - ret =3D macb_phylink_connect(lp); + ret =3D macb_phylink_connect(bp); if (ret) goto stop; =20 @@ -5103,25 +5104,25 @@ static int at91ether_open(struct net_device *netdev) return 0; =20 stop: - at91ether_stop(lp); + at91ether_stop(bp); pm_exit: - pm_runtime_put_sync(&lp->pdev->dev); + pm_runtime_put_sync(&bp->pdev->dev); return ret; } =20 /* Close the interface */ static int at91ether_close(struct net_device *netdev) { - struct macb *lp =3D netdev_priv(netdev); + struct macb *bp =3D netdev_priv(netdev); =20 netif_stop_queue(netdev); =20 - phylink_stop(lp->phylink); - phylink_disconnect_phy(lp->phylink); + phylink_stop(bp->phylink); + phylink_disconnect_phy(bp->phylink); =20 - at91ether_stop(lp); + at91ether_stop(bp); =20 - pm_runtime_put(&lp->pdev->dev); + pm_runtime_put(&bp->pdev->dev); =20 return 0; } @@ -5130,19 +5131,21 @@ static int at91ether_close(struct net_device *netde= v) static netdev_tx_t at91ether_start_xmit(struct sk_buff *skb, struct net_device *netdev) { - struct macb *lp =3D netdev_priv(netdev); + struct macb *bp =3D netdev_priv(netdev); + struct device *dev =3D &bp->pdev->dev; =20 - if (macb_readl(lp, TSR) & MACB_BIT(RM9200_BNQ)) { + if (macb_readl(bp, TSR) & MACB_BIT(RM9200_BNQ)) { int desc =3D 0; =20 
netif_stop_queue(netdev); =20 /* Store packet information (to free when Tx completed) */ - lp->rm9200_txq[desc].skb =3D skb; - lp->rm9200_txq[desc].size =3D skb->len; - lp->rm9200_txq[desc].mapping =3D dma_map_single(&lp->pdev->dev, skb->dat= a, - skb->len, DMA_TO_DEVICE); - if (dma_mapping_error(&lp->pdev->dev, lp->rm9200_txq[desc].mapping)) { + bp->rm9200_txq[desc].skb =3D skb; + bp->rm9200_txq[desc].size =3D skb->len; + bp->rm9200_txq[desc].mapping =3D dma_map_single(dev, skb->data, + skb->len, + DMA_TO_DEVICE); + if (dma_mapping_error(dev, bp->rm9200_txq[desc].mapping)) { dev_kfree_skb_any(skb); netdev->stats.tx_dropped++; netdev_err(netdev, "%s: DMA mapping error\n", __func__); @@ -5150,9 +5153,9 @@ static netdev_tx_t at91ether_start_xmit(struct sk_buf= f *skb, } =20 /* Set address of the data in the Transmit Address register */ - macb_writel(lp, TAR, lp->rm9200_txq[desc].mapping); + macb_writel(bp, TAR, bp->rm9200_txq[desc].mapping); /* Set length of the packet in the Transmit Control register */ - macb_writel(lp, TCR, skb->len); + macb_writel(bp, TCR, skb->len); =20 } else { netdev_err(netdev, "%s called, but device is busy!\n", @@ -5168,16 +5171,17 @@ static netdev_tx_t at91ether_start_xmit(struct sk_b= uff *skb, */ static void at91ether_rx(struct net_device *netdev) { - struct macb *lp =3D netdev_priv(netdev); - struct macb_queue *q =3D &lp->queues[0]; + struct macb *bp =3D netdev_priv(netdev); + struct macb_queue *queue =3D &bp->queues[0]; struct macb_dma_desc *desc; unsigned char *p_recv; struct sk_buff *skb; unsigned int pktlen; =20 - desc =3D macb_rx_desc(q, q->rx_tail); + desc =3D macb_rx_desc(queue, queue->rx_tail); while (desc->addr & MACB_BIT(RX_USED)) { - p_recv =3D q->rx_buffers + q->rx_tail * AT91ETHER_MAX_RBUFF_SZ; + p_recv =3D queue->rx_buffers + + queue->rx_tail * AT91ETHER_MAX_RBUFF_SZ; pktlen =3D MACB_BF(RX_FRMLEN, desc->ctrl); skb =3D netdev_alloc_skb(netdev, pktlen + 2); if (skb) { @@ -5199,12 +5203,12 @@ static void at91ether_rx(struct 
net_device *netdev) desc->addr &=3D ~MACB_BIT(RX_USED); =20 /* wrap after last buffer */ - if (q->rx_tail =3D=3D AT91ETHER_MAX_RX_DESCR - 1) - q->rx_tail =3D 0; + if (queue->rx_tail =3D=3D AT91ETHER_MAX_RX_DESCR - 1) + queue->rx_tail =3D 0; else - q->rx_tail++; + queue->rx_tail++; =20 - desc =3D macb_rx_desc(q, q->rx_tail); + desc =3D macb_rx_desc(queue, queue->rx_tail); } } =20 @@ -5212,14 +5216,14 @@ static void at91ether_rx(struct net_device *netdev) static irqreturn_t at91ether_interrupt(int irq, void *dev_id) { struct net_device *netdev =3D dev_id; - struct macb *lp =3D netdev_priv(netdev); + struct macb *bp =3D netdev_priv(netdev); u32 intstatus, ctl; unsigned int desc; =20 /* MAC Interrupt Status register indicates what interrupts are pending. * It is automatically cleared once read. */ - intstatus =3D macb_readl(lp, ISR); + intstatus =3D macb_readl(bp, ISR); =20 /* Receive complete */ if (intstatus & MACB_BIT(RCOMP)) @@ -5232,23 +5236,25 @@ static irqreturn_t at91ether_interrupt(int irq, voi= d *dev_id) netdev->stats.tx_errors++; =20 desc =3D 0; - if (lp->rm9200_txq[desc].skb) { - dev_consume_skb_irq(lp->rm9200_txq[desc].skb); - lp->rm9200_txq[desc].skb =3D NULL; - dma_unmap_single(&lp->pdev->dev, lp->rm9200_txq[desc].mapping, - lp->rm9200_txq[desc].size, DMA_TO_DEVICE); + if (bp->rm9200_txq[desc].skb) { + dev_consume_skb_irq(bp->rm9200_txq[desc].skb); + bp->rm9200_txq[desc].skb =3D NULL; + dma_unmap_single(&bp->pdev->dev, + bp->rm9200_txq[desc].mapping, + bp->rm9200_txq[desc].size, + DMA_TO_DEVICE); netdev->stats.tx_packets++; - netdev->stats.tx_bytes +=3D lp->rm9200_txq[desc].size; + netdev->stats.tx_bytes +=3D bp->rm9200_txq[desc].size; } netif_wake_queue(netdev); } =20 /* Work-around for EMAC Errata section 41.3.1 */ if (intstatus & MACB_BIT(RXUBR)) { - ctl =3D macb_readl(lp, NCR); - macb_writel(lp, NCR, ctl & ~MACB_BIT(RE)); + ctl =3D macb_readl(bp, NCR); + macb_writel(bp, NCR, ctl & ~MACB_BIT(RE)); wmb(); - macb_writel(lp, NCR, ctl | MACB_BIT(RE)); + 
 		macb_writel(bp, NCR, ctl | MACB_BIT(RE));
 	}
 
 	if (intstatus & MACB_BIT(ISR_ROVR))
-- 
2.53.0

From nobody Wed Apr 1 20:37:31 2026
From: Théo Lebrun
Date: Wed, 01 Apr 2026 18:39:06 +0200
Subject: [PATCH net-next 03/11] net: macb: unify queue index variable naming convention and types
Message-Id: <20260401-macb-context-v1-3-9590c5ab7272@bootlin.com>
References: <20260401-macb-context-v1-0-9590c5ab7272@bootlin.com>
In-Reply-To: <20260401-macb-context-v1-0-9590c5ab7272@bootlin.com>
To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Richard Cochran, Russell King
Cc: Paolo Valerio, Conor Dooley, Nicolai Buchwitz, Vladimir Kondratiev, Gregory CLEMENT, Benoît Monin, Tawfik Bayouk, Thomas Petazzoni, Maxime Chevallier, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Théo Lebrun
X-Mailer: b4 0.15.0

Variables are named q or queue_index.
Their types are int, unsigned int, u32 and u16.

Use `unsigned int q` everywhere. Skip over the taprio functions: they use
`u8 queue_id`, which fits the `struct macb_queue_enst_config` field, and
using `queue_id` everywhere else would be too verbose.

Signed-off-by: Théo Lebrun
---
 drivers/net/ethernet/cadence/macb_main.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 05ccb6f186f7..087401163771 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -873,7 +873,7 @@ static void gem_shuffle_tx_one_ring(struct macb_queue *queue)
 static void gem_shuffle_tx_rings(struct macb *bp)
 {
 	struct macb_queue *queue;
-	int q;
+	unsigned int q;
 
 	for (q = 0, queue = bp->queues; q < bp->num_queues; q++, queue++)
 		gem_shuffle_tx_one_ring(queue);
@@ -1254,7 +1254,7 @@ static void macb_tx_error_task(struct work_struct *work)
 					      tx_error_task);
 	bool halt_timeout = false;
 	struct macb *bp = queue->bp;
-	u32 queue_index;
+	unsigned int q;
 	u32 packets = 0;
 	u32 bytes = 0;
 	struct macb_tx_skb *tx_skb;
@@ -1263,9 +1263,9 @@ static void macb_tx_error_task(struct work_struct *work)
 	unsigned int tail;
 	unsigned long flags;
 
-	queue_index = queue - bp->queues;
+	q = queue - bp->queues;
 	netdev_vdbg(bp->netdev, "macb_tx_error_task: q = %u, t = %u, h = %u\n",
-		    queue_index, queue->tx_tail, queue->tx_head);
+		    q, queue->tx_tail, queue->tx_head);
 
 	/* Prevent the queue NAPI TX poll from running, as it calls
 	 * macb_tx_complete(), which in turn may call netif_wake_subqueue().
@@ -1338,7 +1338,7 @@ static void macb_tx_error_task(struct work_struct *work)
 		macb_tx_unmap(bp, tx_skb, 0);
 	}
 
-	netdev_tx_completed_queue(netdev_get_tx_queue(bp->netdev, queue_index),
+	netdev_tx_completed_queue(netdev_get_tx_queue(bp->netdev, q),
 				  packets, bytes);
 
 	/* Set end of TX queue */
@@ -1403,7 +1403,7 @@ static bool ptp_one_step_sync(struct sk_buff *skb)
 static int macb_tx_complete(struct macb_queue *queue, int budget)
 {
 	struct macb *bp = queue->bp;
-	u16 queue_index = queue - bp->queues;
+	unsigned int q = queue - bp->queues;
 	unsigned long flags;
 	unsigned int tail;
 	unsigned int head;
@@ -1465,14 +1465,14 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
 		}
 	}
 
-	netdev_tx_completed_queue(netdev_get_tx_queue(bp->netdev, queue_index),
+	netdev_tx_completed_queue(netdev_get_tx_queue(bp->netdev, q),
 				  packets, bytes);
 
 	queue->tx_tail = tail;
-	if (__netif_subqueue_stopped(bp->netdev, queue_index) &&
+	if (__netif_subqueue_stopped(bp->netdev, q) &&
 	    CIRC_CNT(queue->tx_head, queue->tx_tail,
 		     bp->tx_ring_size) <= MACB_TX_WAKEUP_THRESH(bp))
-		netif_wake_subqueue(bp->netdev, queue_index);
+		netif_wake_subqueue(bp->netdev, q);
 	spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
 
 	if (packets)
@@ -2496,10 +2496,10 @@ static int macb_pad_and_fcs(struct sk_buff **skb, struct net_device *netdev)
 static netdev_tx_t macb_start_xmit(struct sk_buff *skb,
 				   struct net_device *netdev)
 {
-	u16 queue_index = skb_get_queue_mapping(skb);
 	struct macb *bp = netdev_priv(netdev);
-	struct macb_queue *queue = &bp->queues[queue_index];
+	unsigned int q = skb_get_queue_mapping(skb);
 	unsigned int desc_cnt, nr_frags, frag_size, f;
+	struct macb_queue *queue = &bp->queues[q];
 	unsigned int hdrlen;
 	unsigned long flags;
 	bool is_lso;
@@ -2539,7 +2539,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb,
 #if defined(DEBUG) && defined(VERBOSE_DEBUG)
 	netdev_vdbg(bp->netdev,
 		    "start_xmit: queue %hu len %u head %p data %p tail %p end %p\n",
-		    queue_index, skb->len, skb->head, skb->data,
+		    q, skb->len, skb->head, skb->data,
 		    skb_tail_pointer(skb), skb_end_pointer(skb));
 	print_hex_dump(KERN_DEBUG, "data: ", DUMP_PREFIX_OFFSET, 16, 1,
 		       skb->data, 16, true);
@@ -2565,7 +2565,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb,
 	/* This is a hard error, log it. */
 	if (CIRC_SPACE(queue->tx_head, queue->tx_tail,
 		       bp->tx_ring_size) < desc_cnt) {
-		netif_stop_subqueue(netdev, queue_index);
+		netif_stop_subqueue(netdev, q);
 		netdev_dbg(netdev, "tx_head = %u, tx_tail = %u\n",
 			   queue->tx_head, queue->tx_tail);
 		ret = NETDEV_TX_BUSY;
@@ -2581,7 +2581,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb,
 	/* Make newly initialized descriptor visible to hardware */
 	wmb();
 	skb_tx_timestamp(skb);
-	netdev_tx_sent_queue(netdev_get_tx_queue(bp->netdev, queue_index),
+	netdev_tx_sent_queue(netdev_get_tx_queue(bp->netdev, q),
 			     skb->len);
 
 	spin_lock(&bp->lock);
@@ -2590,7 +2590,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb,
 	spin_unlock(&bp->lock);
 
 	if (CIRC_SPACE(queue->tx_head, queue->tx_tail, bp->tx_ring_size) < 1)
-		netif_stop_subqueue(netdev, queue_index);
+		netif_stop_subqueue(netdev, q);
 
 unlock:
 	spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
-- 
2.53.0

From: Théo Lebrun
Date: Wed, 01 Apr 2026 18:39:07 +0200
Subject: [PATCH net-next 04/11] net: macb: enforce reverse christmas tree (RCT) convention
Message-Id: <20260401-macb-context-v1-4-9590c5ab7272@bootlin.com>

Enforce the reverse Christmas tree convention (local variable declarations
ordered from longest line to shortest) in these functions:

  macb_tx_error_task()
  gem_rx_refill()
  gem_rx()
  macb_rx_frame()
  macb_init_rx_ring()
  macb_rx()
  macb_rx_pending()
  macb_start_xmit()

The goal is to minimise unrelated diff in future patches.
Signed-off-by: Théo Lebrun
---
 drivers/net/ethernet/cadence/macb_main.c | 61 ++++++++++++++++-----------------
 1 file changed, 30 insertions(+), 31 deletions(-)

diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 087401163771..081f220f6756 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -1250,20 +1250,19 @@ static dma_addr_t macb_get_addr(struct macb *bp, struct macb_dma_desc *desc)
 
 static void macb_tx_error_task(struct work_struct *work)
 {
-	struct macb_queue *queue = container_of(work, struct macb_queue,
-						tx_error_task);
-	bool halt_timeout = false;
-	struct macb *bp = queue->bp;
-	unsigned int q;
-	u32 packets = 0;
-	u32 bytes = 0;
-	struct macb_tx_skb *tx_skb;
-	struct macb_dma_desc *desc;
-	struct sk_buff *skb;
-	unsigned int tail;
-	unsigned long flags;
+	struct macb_queue *queue = container_of(work, struct macb_queue,
+						tx_error_task);
+	unsigned int q = queue - queue->bp->queues;
+	struct macb *bp = queue->bp;
+	struct macb_tx_skb *tx_skb;
+	struct macb_dma_desc *desc;
+	bool halt_timeout = false;
+	struct sk_buff *skb;
+	unsigned long flags;
+	unsigned int tail;
+	u32 packets = 0;
+	u32 bytes = 0;
 
-	q = queue - bp->queues;
 	netdev_vdbg(bp->netdev, "macb_tx_error_task: q = %u, t = %u, h = %u\n",
 		    q, queue->tx_tail, queue->tx_head);
 
@@ -1483,11 +1482,11 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
 
 static void gem_rx_refill(struct macb_queue *queue)
 {
-	unsigned int entry;
-	struct sk_buff *skb;
-	dma_addr_t paddr;
 	struct macb *bp = queue->bp;
 	struct macb_dma_desc *desc;
+	struct sk_buff *skb;
+	unsigned int entry;
+	dma_addr_t paddr;
 
 	while (CIRC_SPACE(queue->rx_prepared_head, queue->rx_tail,
 			  bp->rx_ring_size) > 0) {
@@ -1580,11 +1579,11 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
 		  int budget)
 {
 	struct macb *bp = queue->bp;
-	unsigned int len;
-	unsigned int entry;
-	struct sk_buff *skb;
-	struct macb_dma_desc *desc;
-	int count = 0;
+	struct macb_dma_desc *desc;
+	struct sk_buff *skb;
+	unsigned int entry;
+	unsigned int len;
+	int count = 0;
 
 	while (count < budget) {
 		u32 ctrl;
@@ -1670,12 +1669,12 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
 static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
 			 unsigned int first_frag, unsigned int last_frag)
 {
-	unsigned int len;
-	unsigned int frag;
+	struct macb *bp = queue->bp;
+	struct macb_dma_desc *desc;
 	unsigned int offset;
 	struct sk_buff *skb;
-	struct macb_dma_desc *desc;
-	struct macb *bp = queue->bp;
+	unsigned int frag;
+	unsigned int len;
 
 	desc = macb_rx_desc(queue, last_frag);
 	len = desc->ctrl & bp->rx_frm_len_mask;
@@ -1751,9 +1750,9 @@ static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
 
 static inline void macb_init_rx_ring(struct macb_queue *queue)
 {
+	struct macb_dma_desc *desc = NULL;
 	struct macb *bp = queue->bp;
 	dma_addr_t addr;
-	struct macb_dma_desc *desc = NULL;
 	int i;
 
 	addr = queue->rx_buffers_dma;
@@ -1772,9 +1771,9 @@ static int macb_rx(struct macb_queue *queue, struct napi_struct *napi,
 {
 	struct macb *bp = queue->bp;
 	bool reset_rx_queue = false;
-	int received = 0;
-	unsigned int tail;
 	int first_frag = -1;
+	unsigned int tail;
+	int received = 0;
 
 	for (tail = queue->rx_tail; budget > 0; tail++) {
 		struct macb_dma_desc *desc = macb_rx_desc(queue, tail);
@@ -1849,8 +1848,8 @@ static int macb_rx(struct macb_queue *queue, struct napi_struct *napi,
 static bool macb_rx_pending(struct macb_queue *queue)
 {
 	struct macb *bp = queue->bp;
-	unsigned int entry;
-	struct macb_dma_desc *desc;
+	struct macb_dma_desc *desc;
+	unsigned int entry;
 
 	entry = macb_rx_ring_wrap(bp, queue->rx_tail);
 	desc = macb_rx_desc(queue, entry);
@@ -2500,10 +2499,10 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb,
 	unsigned int q = skb_get_queue_mapping(skb);
 	unsigned int desc_cnt, nr_frags, frag_size, f;
 	struct macb_queue *queue = &bp->queues[q];
+	netdev_tx_t ret = NETDEV_TX_OK;
 	unsigned int hdrlen;
 	unsigned long flags;
 	bool is_lso;
-	netdev_tx_t ret = NETDEV_TX_OK;
 
 	if (macb_clear_csum(skb)) {
 		dev_kfree_skb_any(skb);
-- 
2.53.0

From: Théo Lebrun
Date: Wed, 01 Apr 2026 18:39:08 +0200
Subject: [PATCH net-next 05/11] net: macb: allocate tieoff descriptor once across device lifetime
Message-Id: <20260401-macb-context-v1-5-9590c5ab7272@bootlin.com>

The tieoff descriptor is an RX DMA descriptor ring of size one. It gets
configured onto queues for Wake-on-LAN during system-wide suspend when the
hardware does not support disabling individual queues
(MACB_CAPS_QUEUE_DISABLE).

The MACB/GEM driver allocates it alongside the main RX ring inside
macb_alloc_consistent() at open; it is freed by macb_free_consistent() at
close.

Change this to allocate it once at probe and free it on probe failure or
device removal. This extends the tieoff descriptor's lifetime considerably,
avoiding a repeated coherent buffer allocation on each open/close cycle.

The main benefit: its lifetime is dissociated from the main ring's lifetime,
so there is less work to do when (re)allocating resources. This currently
happens on close/open, but will soon also happen on context swap operations
(set_ringparam, change_mtu, set_channels, etc).
Signed-off-by: Théo Lebrun
---
 drivers/net/ethernet/cadence/macb_main.c | 70 ++++++++++++++++++--------------
 1 file changed, 36 insertions(+), 34 deletions(-)

diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 081f220f6756..d5023fdc0756 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -2679,12 +2679,6 @@ static void macb_free_consistent(struct macb *bp)
 	unsigned int q;
 	size_t size;
 
-	if (bp->rx_ring_tieoff) {
-		dma_free_coherent(dev, macb_dma_desc_get_size(bp),
-				  bp->rx_ring_tieoff, bp->rx_ring_tieoff_dma);
-		bp->rx_ring_tieoff = NULL;
-	}
-
 	bp->macbgem_ops.mog_free_rx_buffers(bp);
 
 	size = bp->num_queues * macb_tx_ring_size_per_queue(bp);
@@ -2782,16 +2776,6 @@ static int macb_alloc_consistent(struct macb *bp)
 	if (bp->macbgem_ops.mog_alloc_rx_buffers(bp))
 		goto out_err;
 
-	/* Required for tie off descriptor for PM cases */
-	if (!(bp->caps & MACB_CAPS_QUEUE_DISABLE)) {
-		bp->rx_ring_tieoff = dma_alloc_coherent(&bp->pdev->dev,
-							macb_dma_desc_get_size(bp),
-							&bp->rx_ring_tieoff_dma,
-							GFP_KERNEL);
-		if (!bp->rx_ring_tieoff)
-			goto out_err;
-	}
-
 	return 0;
 
 out_err:
@@ -2799,19 +2783,6 @@ static int macb_alloc_consistent(struct macb *bp)
 	return -ENOMEM;
 }
 
-static void macb_init_tieoff(struct macb *bp)
-{
-	struct macb_dma_desc *desc = bp->rx_ring_tieoff;
-
-	if (bp->caps & MACB_CAPS_QUEUE_DISABLE)
-		return;
-	/* Setup a wrapping descriptor with no free slots
-	 * (WRAP and USED) to tie off/disable unused RX queues.
-	 */
-	macb_set_addr(bp, desc, MACB_BIT(RX_WRAP) | MACB_BIT(RX_USED));
-	desc->ctrl = 0;
-}
-
 static void gem_init_rx_ring(struct macb_queue *queue)
 {
 	queue->rx_tail = 0;
@@ -2839,8 +2810,6 @@ static void gem_init_rings(struct macb *bp)
 
 		gem_init_rx_ring(queue);
 	}
-
-	macb_init_tieoff(bp);
 }
 
 static void macb_init_rings(struct macb *bp)
@@ -2858,8 +2827,6 @@ static void macb_init_rings(struct macb *bp)
 	bp->queues[0].tx_head = 0;
 	bp->queues[0].tx_tail = 0;
 	desc->ctrl |= MACB_BIT(TX_WRAP);
-
-	macb_init_tieoff(bp);
 }
 
 static void macb_reset_hw(struct macb *bp)
@@ -5530,6 +5497,33 @@ static int eyeq5_init(struct platform_device *pdev)
 	return ret;
 }
 
+static int macb_alloc_tieoff(struct macb *bp)
+{
+	/* Tieoff is a workaround in case HW cannot disable queues, for PM. */
+	if (bp->caps & MACB_CAPS_QUEUE_DISABLE)
+		return 0;
+
+	bp->rx_ring_tieoff = dma_alloc_coherent(&bp->pdev->dev,
+						macb_dma_desc_get_size(bp),
+						&bp->rx_ring_tieoff_dma,
+						GFP_KERNEL);
+	if (!bp->rx_ring_tieoff)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void macb_free_tieoff(struct macb *bp)
+{
+	if (!bp->rx_ring_tieoff)
+		return;
+
+	dma_free_coherent(&bp->pdev->dev, macb_dma_desc_get_size(bp),
+			  bp->rx_ring_tieoff,
+			  bp->rx_ring_tieoff_dma);
+	bp->rx_ring_tieoff = NULL;
+}
+
 static const struct macb_usrio_config at91_default_usrio = {
 	.mii = MACB_BIT(MII),
 	.rmii = MACB_BIT(RMII),
@@ -5946,10 +5940,14 @@ static int macb_probe(struct platform_device *pdev)
 
 	netif_carrier_off(netdev);
 
+	err = macb_alloc_tieoff(bp);
+	if (err)
+		goto err_out_unregister_mdio;
+
 	err = register_netdev(netdev);
 	if (err) {
 		dev_err(&pdev->dev, "Cannot register net device, aborting.\n");
-		goto err_out_unregister_mdio;
+		goto err_out_free_tieoff;
 	}
 
 	INIT_WORK(&bp->hresp_err_bh_work, macb_hresp_error_task);
@@ -5963,6 +5961,9 @@ static int macb_probe(struct platform_device *pdev)
 
 	return 0;
 
+err_out_free_tieoff:
+	macb_free_tieoff(bp);
+
 err_out_unregister_mdio:
 	mdiobus_unregister(bp->mii_bus);
 	mdiobus_free(bp->mii_bus);
@@ -5992,6 +5993,7 @@ static void macb_remove(struct platform_device *pdev)
 	if (netdev) {
 		bp = netdev_priv(netdev);
 		unregister_netdev(netdev);
+		macb_free_tieoff(bp);
 		phy_exit(bp->phy);
 		mdiobus_unregister(bp->mii_bus);
 		mdiobus_free(bp->mii_bus);
-- 
2.53.0

From: Théo Lebrun
Date: Wed, 01 Apr 2026 18:39:09 +0200
Subject: [PATCH net-next 06/11] net: macb: introduce macb_context struct for buffer management
Message-Id: <20260401-macb-context-v1-6-9590c5ab7272@bootlin.com>

Whenever an operation requires a buffer reallocation, we currently close the
interface, update parameters and reopen it. To improve reliability under
memory pressure, we should instead allocate the new buffers, reconfigure the
hardware, and only then free the old buffers. This requires MACB to support
having multiple "contexts" in parallel.

Introduce this concept by adding the macb_context struct, which owns all
queue buffers and the parameters associated with them. We do not yet support
multiple contexts in parallel, because all functions access bp->ctx (the
currently active context) directly.

Steps:

 - Introduce `struct macb_context` and its children `struct macb_rxq` and
   `struct macb_txq`. Context fields are taken from `struct macb`, and
   rxq/txq fields from `struct macb_queue`. Using two separate structs per
   queue simplifies accesses: we grab a txq/rxq local variable and access
   fields like txq->head instead of queue->tx_head. It also anecdotally
   improves data locality.

 - macb_init_dflt() does not set bp->ctx->{rx,tx}_ring_size to default
   values, as ctx is not allocated yet. Instead, introduce
   bp->configured_{rx,tx}_ring_size, which get updated on user requests.

 - macb_open() starts by allocating bp->ctx. It gets freed in the open error
   codepath or by macb_close().

 - Guided by compile errors, update all codepaths. Most of the diff is
   changing `queue->tx_*` to `txq->*` and `queue->rx_*` to `rxq->*`, with a
   new local variable. Also rx_buffer_size / rx_ring_size / tx_ring_size
   move from bp to bp->ctx.

Introduce two helpers, macb_txq() and macb_rxq(), to convert macb_queue
pointers into their context counterparts.
Signed-off-by: Théo Lebrun
---
 drivers/net/ethernet/cadence/macb.h      |  49 ++--
 drivers/net/ethernet/cadence/macb_main.c | 442 ++++++++++++++++++-----------
 2 files changed, 296 insertions(+), 195 deletions(-)

diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cadence/macb.h
index d6dd1d356e12..8821205e8875 100644
--- a/drivers/net/ethernet/cadence/macb.h
+++ b/drivers/net/ethernet/cadence/macb.h
@@ -1272,21 +1272,10 @@ struct macb_queue {
 
 	/* Lock to protect tx_head and tx_tail */
 	spinlock_t tx_ptr_lock;
-	unsigned int tx_head, tx_tail;
-	struct macb_dma_desc *tx_ring;
-	struct macb_tx_skb *tx_skb;
-	dma_addr_t tx_ring_dma;
 	struct work_struct tx_error_task;
 	bool txubr_pending;
 	struct napi_struct napi_tx;
 
-	dma_addr_t rx_ring_dma;
-	dma_addr_t rx_buffers_dma;
-	unsigned int rx_tail;
-	unsigned int rx_prepared_head;
-	struct macb_dma_desc *rx_ring;
-	struct sk_buff **rx_skbuff;
-	void *rx_buffers;
 	struct napi_struct napi_rx;
 	struct queue_stats stats;
 };
@@ -1301,6 +1290,32 @@ struct ethtool_rx_fs_list {
 	unsigned int count;
 };
 
+struct macb_rxq {
+	struct macb_dma_desc *ring;	/* MACB & GEM */
+	dma_addr_t ring_dma;		/* MACB & GEM */
+	unsigned int tail;		/* MACB & GEM */
+	unsigned int prepared_head;	/* GEM */
+	struct sk_buff **skbuff;	/* GEM */
+	dma_addr_t buffers_dma;		/* MACB */
+	void *buffers;			/* MACB */
+};
+
+struct macb_txq {
+	unsigned int head;
+	unsigned int tail;
+	struct macb_dma_desc *ring;
+	dma_addr_t ring_dma;
+	struct macb_tx_skb *skb;
+};
+
+struct macb_context {
+	unsigned int rx_buffer_size;
+	unsigned int rx_ring_size;
+	unsigned int tx_ring_size;
+	struct macb_rxq rxq[MACB_MAX_QUEUES];
+	struct macb_txq txq[MACB_MAX_QUEUES];
+};
+
 struct macb {
 	void __iomem *regs;
 	bool native_io;
@@ -1309,12 +1324,16 @@ struct macb {
 	u32 (*macb_reg_readl)(struct macb *bp, int offset);
 	void (*macb_reg_writel)(struct macb *bp, int offset, u32 value);
 
+	/*
+	 * Context stores all its parameters.
+	 * But we must remember them across closure.
+	 */
+	unsigned int configured_rx_ring_size;
+	unsigned int configured_tx_ring_size;
+	struct macb_context *ctx;
+
 	struct macb_dma_desc *rx_ring_tieoff;
 	dma_addr_t rx_ring_tieoff_dma;
-	size_t rx_buffer_size;
-
-	unsigned int rx_ring_size;
-	unsigned int tx_ring_size;
 
 	unsigned int num_queues;
 	struct macb_queue queues[MACB_MAX_QUEUES];
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index d5023fdc0756..0f63d9b89c11 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -61,7 +61,7 @@ struct sifive_fu540_macb_mgmt {
 #define MAX_TX_RING_SIZE	4096
 
 /* level of occupied TX descriptors under which we wake up TX process */
-#define MACB_TX_WAKEUP_THRESH(bp)	(3 * (bp)->tx_ring_size / 4)
+#define MACB_TX_WAKEUP_THRESH(bp)	(3 * (bp)->ctx->tx_ring_size / 4)
 
 #define MACB_RX_INT_FLAGS	(MACB_BIT(RCOMP) | MACB_BIT(ISR_ROVR))
 #define MACB_TX_ERR_FLAGS	(MACB_BIT(ISR_TUND)			\
@@ -148,48 +148,73 @@ static struct macb_dma_desc_64 *macb_64b_desc(struct macb *bp, struct macb_dma_d
 /* Ring buffer accessors */
 static unsigned int macb_tx_ring_wrap(struct macb *bp, unsigned int index)
 {
-	return index & (bp->tx_ring_size - 1);
+	return index & (bp->ctx->tx_ring_size - 1);
+}
+
+static struct macb_txq *macb_txq(struct macb_queue *queue)
+{
+	struct macb *bp = queue->bp;
+	unsigned int q = queue - bp->queues;
+
+	return &bp->ctx->txq[q];
+}
+
+static struct macb_rxq *macb_rxq(struct macb_queue *queue)
+{
+	struct macb *bp = queue->bp;
+	unsigned int q = queue - bp->queues;
+
+	return &bp->ctx->rxq[q];
 }
 
 static struct macb_dma_desc *macb_tx_desc(struct macb_queue *queue, unsigned int index)
 {
+	struct macb_txq *txq = macb_txq(queue);
+
 	index = macb_tx_ring_wrap(queue->bp, index);
 	index = macb_adj_dma_desc_idx(queue->bp, index);
-	return &queue->tx_ring[index];
+	return &txq->ring[index];
 }
 
 static struct macb_tx_skb *macb_tx_skb(struct macb_queue *queue, unsigned int index)
 {
-	return &queue->tx_skb[macb_tx_ring_wrap(queue->bp, index)];
+	struct macb_txq *txq = macb_txq(queue);
+
+	return &txq->skb[macb_tx_ring_wrap(queue->bp, index)];
 }
 
 static dma_addr_t macb_tx_dma(struct macb_queue *queue, unsigned int index)
 {
+	struct macb_txq *txq = macb_txq(queue);
 	dma_addr_t offset;
 
 	offset = macb_tx_ring_wrap(queue->bp, index) *
		 macb_dma_desc_get_size(queue->bp);
 
-	return queue->tx_ring_dma + offset;
+	return txq->ring_dma + offset;
 }
 
 static unsigned int macb_rx_ring_wrap(struct macb *bp, unsigned int index)
 {
-	return index & (bp->rx_ring_size - 1);
+	return index & (bp->ctx->rx_ring_size - 1);
 }
 
 static struct macb_dma_desc *macb_rx_desc(struct macb_queue *queue, unsigned int index)
 {
+	struct macb_rxq *rxq = macb_rxq(queue);
+
 	index = macb_rx_ring_wrap(queue->bp, index);
 	index = macb_adj_dma_desc_idx(queue->bp, index);
-	return &queue->rx_ring[index];
+	return &rxq->ring[index];
 }
 
 static void *macb_rx_buffer(struct macb_queue *queue, unsigned int index)
 {
-	return queue->rx_buffers + queue->bp->rx_buffer_size *
+	struct macb_rxq *rxq = macb_rxq(queue);
+
+	return rxq->buffers + queue->bp->ctx->rx_buffer_size *
	       macb_rx_ring_wrap(queue->bp, index);
 }
 
@@ -459,19 +484,23 @@ static int macb_mdio_write_c45(struct mii_bus *bus, int mii_id,
 static void macb_init_buffers(struct macb *bp)
 {
 	struct macb_queue *queue;
+	struct macb_rxq *rxq;
+	struct macb_txq *txq;
 	unsigned int q;
 
 	/* Single register for all queues' high 32 bits. */
 	if (macb_dma64(bp)) {
-		macb_writel(bp, RBQPH,
-			    upper_32_bits(bp->queues[0].rx_ring_dma));
-		macb_writel(bp, TBQPH,
-			    upper_32_bits(bp->queues[0].tx_ring_dma));
+		rxq = &bp->ctx->rxq[0];
+		txq = &bp->ctx->txq[0];
+		macb_writel(bp, RBQPH, upper_32_bits(rxq->ring_dma));
+		macb_writel(bp, TBQPH, upper_32_bits(txq->ring_dma));
 	}
 
 	for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
-		queue_writel(queue, RBQP, lower_32_bits(queue->rx_ring_dma));
-		queue_writel(queue, TBQP, lower_32_bits(queue->tx_ring_dma));
+		rxq = &bp->ctx->rxq[q];
+		txq = &bp->ctx->txq[q];
+		queue_writel(queue, RBQP, lower_32_bits(rxq->ring_dma));
+		queue_writel(queue, TBQP, lower_32_bits(txq->ring_dma));
 	}
 }
 
@@ -644,11 +673,12 @@ static bool macb_tx_lpi_set(struct macb *bp, bool enable)
 
 static bool macb_tx_all_queues_idle(struct macb *bp)
 {
-	struct macb_queue *queue;
+	struct macb_txq *txq;
 	unsigned int q;
 
-	for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
-		if (READ_ONCE(queue->tx_head) != READ_ONCE(queue->tx_tail))
+	for (q = 0; q < bp->num_queues; ++q) {
+		txq = &bp->ctx->txq[q];
+		if (READ_ONCE(txq->head) != READ_ONCE(txq->tail))
			return false;
 	}
 	return true;
@@ -795,6 +825,7 @@ static void gem_shuffle_tx_one_ring(struct macb_queue *queue)
 	struct macb_tx_skb tx_skb, *skb_curr, *skb_next;
 	struct macb_dma_desc *desc_curr, *desc_next;
 	unsigned int i, cycles, shift, curr, next;
+	struct macb_txq *txq = macb_txq(queue);
 	struct macb *bp = queue->bp;
 	unsigned char desc[24];
 	unsigned long flags;
@@ -805,17 +836,17 @@ static void gem_shuffle_tx_one_ring(struct macb_queue *queue)
 		return;
 
 	spin_lock_irqsave(&queue->tx_ptr_lock, flags);
-	head = queue->tx_head;
-	tail = queue->tx_tail;
-	ring_size = bp->tx_ring_size;
+	head = txq->head;
+	tail = txq->tail;
+	ring_size = bp->ctx->tx_ring_size;
 	count = CIRC_CNT(head, tail, ring_size);
 
 	if (!(tail % ring_size))
		goto unlock;
 
 	if (!count) {
-
queue->tx_head =3D 0; - queue->tx_tail =3D 0; + txq->head =3D 0; + txq->tail =3D 0; goto unlock; } =20 @@ -859,8 +890,8 @@ static void gem_shuffle_tx_one_ring(struct macb_queue *= queue) sizeof(struct macb_tx_skb)); } =20 - queue->tx_head =3D count; - queue->tx_tail =3D 0; + txq->head =3D count; + txq->tail =3D 0; =20 /* Make descriptor updates visible to hardware */ wmb(); @@ -1253,6 +1284,7 @@ static void macb_tx_error_task(struct work_struct *wo= rk) struct macb_queue *queue =3D container_of(work, struct macb_queue, tx_error_task); unsigned int q =3D queue - queue->bp->queues; + struct macb_txq *txq =3D macb_txq(queue); struct macb *bp =3D queue->bp; struct macb_tx_skb *tx_skb; struct macb_dma_desc *desc; @@ -1264,7 +1296,7 @@ static void macb_tx_error_task(struct work_struct *wo= rk) u32 bytes =3D 0; =20 netdev_vdbg(bp->netdev, "macb_tx_error_task: q =3D %u, t =3D %u, h =3D %u= \n", - q, queue->tx_tail, queue->tx_head); + q, txq->tail, txq->head); =20 /* Prevent the queue NAPI TX poll from running, as it calls * macb_tx_complete(), which in turn may call netif_wake_subqueue(). @@ -1291,7 +1323,7 @@ static void macb_tx_error_task(struct work_struct *wo= rk) /* Treat frames in TX queue including the ones that caused the error. * Free transmit buffers in upper layer. 
*/ - for (tail =3D queue->tx_tail; tail !=3D queue->tx_head; tail++) { + for (tail =3D txq->tail; tail !=3D txq->head; tail++) { u32 ctrl; =20 desc =3D macb_tx_desc(queue, tail); @@ -1349,10 +1381,10 @@ static void macb_tx_error_task(struct work_struct *= work) wmb(); =20 /* Reinitialize the TX desc queue */ - queue_writel(queue, TBQP, lower_32_bits(queue->tx_ring_dma)); + queue_writel(queue, TBQP, lower_32_bits(txq->ring_dma)); /* Make TX ring reflect state of hardware */ - queue->tx_head =3D 0; - queue->tx_tail =3D 0; + txq->head =3D 0; + txq->tail =3D 0; =20 /* Housework before enabling TX IRQ */ macb_writel(bp, TSR, macb_readl(bp, TSR)); @@ -1402,6 +1434,7 @@ static bool ptp_one_step_sync(struct sk_buff *skb) static int macb_tx_complete(struct macb_queue *queue, int budget) { struct macb *bp =3D queue->bp; + struct macb_txq *txq =3D macb_txq(queue); unsigned int q =3D queue - bp->queues; unsigned long flags; unsigned int tail; @@ -1410,8 +1443,8 @@ static int macb_tx_complete(struct macb_queue *queue,= int budget) u32 bytes =3D 0; =20 spin_lock_irqsave(&queue->tx_ptr_lock, flags); - head =3D queue->tx_head; - for (tail =3D queue->tx_tail; tail !=3D head && packets < budget; tail++)= { + head =3D txq->head; + for (tail =3D txq->tail; tail !=3D head && packets < budget; tail++) { struct macb_tx_skb *tx_skb; struct sk_buff *skb; struct macb_dma_desc *desc; @@ -1467,10 +1500,10 @@ static int macb_tx_complete(struct macb_queue *queu= e, int budget) netdev_tx_completed_queue(netdev_get_tx_queue(bp->netdev, q), packets, bytes); =20 - queue->tx_tail =3D tail; + txq->tail =3D tail; if (__netif_subqueue_stopped(bp->netdev, q) && - CIRC_CNT(queue->tx_head, queue->tx_tail, - bp->tx_ring_size) <=3D MACB_TX_WAKEUP_THRESH(bp)) + CIRC_CNT(txq->head, txq->tail, + bp->ctx->tx_ring_size) <=3D MACB_TX_WAKEUP_THRESH(bp)) netif_wake_subqueue(bp->netdev, q); spin_unlock_irqrestore(&queue->tx_ptr_lock, flags); =20 @@ -1482,24 +1515,26 @@ static int macb_tx_complete(struct macb_queue 
*queu= e, int budget) =20 static void gem_rx_refill(struct macb_queue *queue) { + struct macb_rxq *rxq =3D macb_rxq(queue); struct macb *bp =3D queue->bp; struct macb_dma_desc *desc; struct sk_buff *skb; unsigned int entry; dma_addr_t paddr; =20 - while (CIRC_SPACE(queue->rx_prepared_head, queue->rx_tail, - bp->rx_ring_size) > 0) { - entry =3D macb_rx_ring_wrap(bp, queue->rx_prepared_head); + while (CIRC_SPACE(rxq->prepared_head, rxq->tail, + bp->ctx->rx_ring_size) > 0) { + entry =3D macb_rx_ring_wrap(bp, rxq->prepared_head); =20 /* Make hw descriptor updates visible to CPU */ rmb(); =20 desc =3D macb_rx_desc(queue, entry); =20 - if (!queue->rx_skbuff[entry]) { + if (!rxq->skbuff[entry]) { /* allocate sk_buff for this free entry in ring */ - skb =3D netdev_alloc_skb(bp->netdev, bp->rx_buffer_size); + skb =3D netdev_alloc_skb(bp->netdev, + bp->ctx->rx_buffer_size); if (unlikely(!skb)) { netdev_err(bp->netdev, "Unable to allocate sk_buff\n"); @@ -1508,16 +1543,16 @@ static void gem_rx_refill(struct macb_queue *queue) =20 /* now fill corresponding descriptor entry */ paddr =3D dma_map_single(&bp->pdev->dev, skb->data, - bp->rx_buffer_size, + bp->ctx->rx_buffer_size, DMA_FROM_DEVICE); if (dma_mapping_error(&bp->pdev->dev, paddr)) { dev_kfree_skb(skb); break; } =20 - queue->rx_skbuff[entry] =3D skb; + rxq->skbuff[entry] =3D skb; =20 - if (entry =3D=3D bp->rx_ring_size - 1) + if (entry =3D=3D bp->ctx->rx_ring_size - 1) paddr |=3D MACB_BIT(RX_WRAP); desc->ctrl =3D 0; /* Setting addr clears RX_USED and allows reception, @@ -1544,14 +1579,14 @@ static void gem_rx_refill(struct macb_queue *queue) dma_wmb(); desc->addr &=3D ~MACB_BIT(RX_USED); } - queue->rx_prepared_head++; + rxq->prepared_head++; } =20 /* Make descriptor updates visible to hardware */ wmb(); =20 netdev_vdbg(bp->netdev, "rx ring: queue: %p, prepared head %d, tail %d\n", - queue, queue->rx_prepared_head, queue->rx_tail); + queue, rxq->prepared_head, rxq->tail); } =20 /* Mark DMA descriptors from begin up to 
and not including end as unused */ @@ -1578,6 +1613,7 @@ static void discard_partial_frame(struct macb_queue *= queue, unsigned int begin, static int gem_rx(struct macb_queue *queue, struct napi_struct *napi, int budget) { + struct macb_rxq *rxq =3D macb_rxq(queue); struct macb *bp =3D queue->bp; struct macb_dma_desc *desc; struct sk_buff *skb; @@ -1590,7 +1626,7 @@ static int gem_rx(struct macb_queue *queue, struct na= pi_struct *napi, dma_addr_t addr; bool rxused; =20 - entry =3D macb_rx_ring_wrap(bp, queue->rx_tail); + entry =3D macb_rx_ring_wrap(bp, rxq->tail); desc =3D macb_rx_desc(queue, entry); =20 /* Make hw descriptor updates visible to CPU */ @@ -1607,7 +1643,7 @@ static int gem_rx(struct macb_queue *queue, struct na= pi_struct *napi, =20 ctrl =3D desc->ctrl; =20 - queue->rx_tail++; + rxq->tail++; count++; =20 if (!(ctrl & MACB_BIT(RX_SOF) && ctrl & MACB_BIT(RX_EOF))) { @@ -1617,7 +1653,7 @@ static int gem_rx(struct macb_queue *queue, struct na= pi_struct *napi, queue->stats.rx_dropped++; break; } - skb =3D queue->rx_skbuff[entry]; + skb =3D rxq->skbuff[entry]; if (unlikely(!skb)) { netdev_err(bp->netdev, "inconsistent Rx descriptor chain\n"); @@ -1626,14 +1662,14 @@ static int gem_rx(struct macb_queue *queue, struct = napi_struct *napi, break; } /* now everything is ready for receiving packet */ - queue->rx_skbuff[entry] =3D NULL; + rxq->skbuff[entry] =3D NULL; len =3D ctrl & bp->rx_frm_len_mask; =20 netdev_vdbg(bp->netdev, "gem_rx %u (len %u)\n", entry, len); =20 skb_put(skb, len); dma_unmap_single(&bp->pdev->dev, addr, - bp->rx_buffer_size, DMA_FROM_DEVICE); + bp->ctx->rx_buffer_size, DMA_FROM_DEVICE); =20 skb->protocol =3D eth_type_trans(skb, bp->netdev); skb_checksum_none_assert(skb); @@ -1713,7 +1749,7 @@ static int macb_rx_frame(struct macb_queue *queue, st= ruct napi_struct *napi, skb_put(skb, len); =20 for (frag =3D first_frag; ; frag++) { - unsigned int frag_len =3D bp->rx_buffer_size; + unsigned int frag_len =3D bp->ctx->rx_buffer_size; =20 if 
(offset + frag_len > len) { if (unlikely(frag !=3D last_frag)) { @@ -1725,7 +1761,7 @@ static int macb_rx_frame(struct macb_queue *queue, st= ruct napi_struct *napi, skb_copy_to_linear_data_offset(skb, offset, macb_rx_buffer(queue, frag), frag_len); - offset +=3D bp->rx_buffer_size; + offset +=3D bp->ctx->rx_buffer_size; desc =3D macb_rx_desc(queue, frag); desc->addr &=3D ~MACB_BIT(RX_USED); =20 @@ -1750,32 +1786,34 @@ static int macb_rx_frame(struct macb_queue *queue, = struct napi_struct *napi, =20 static inline void macb_init_rx_ring(struct macb_queue *queue) { + struct macb_rxq *rxq =3D macb_rxq(queue); struct macb_dma_desc *desc =3D NULL; struct macb *bp =3D queue->bp; dma_addr_t addr; int i; =20 - addr =3D queue->rx_buffers_dma; - for (i =3D 0; i < bp->rx_ring_size; i++) { + addr =3D rxq->buffers_dma; + for (i =3D 0; i < bp->ctx->rx_ring_size; i++) { desc =3D macb_rx_desc(queue, i); macb_set_addr(bp, desc, addr); desc->ctrl =3D 0; - addr +=3D bp->rx_buffer_size; + addr +=3D bp->ctx->rx_buffer_size; } desc->addr |=3D MACB_BIT(RX_WRAP); - queue->rx_tail =3D 0; + rxq->tail =3D 0; } =20 static int macb_rx(struct macb_queue *queue, struct napi_struct *napi, int budget) { + struct macb_rxq *rxq =3D macb_rxq(queue); struct macb *bp =3D queue->bp; bool reset_rx_queue =3D false; int first_frag =3D -1; unsigned int tail; int received =3D 0; =20 - for (tail =3D queue->rx_tail; budget > 0; tail++) { + for (tail =3D rxq->tail; budget > 0; tail++) { struct macb_dma_desc *desc =3D macb_rx_desc(queue, tail); u32 ctrl; =20 @@ -1829,7 +1867,7 @@ static int macb_rx(struct macb_queue *queue, struct n= api_struct *napi, macb_writel(bp, NCR, ctrl & ~MACB_BIT(RE)); =20 macb_init_rx_ring(queue); - queue_writel(queue, RBQP, queue->rx_ring_dma); + queue_writel(queue, RBQP, rxq->ring_dma); =20 macb_writel(bp, NCR, ctrl | MACB_BIT(RE)); =20 @@ -1838,20 +1876,21 @@ static int macb_rx(struct macb_queue *queue, struct= napi_struct *napi, } =20 if (first_frag !=3D -1) - queue->rx_tail =3D 
first_frag; + rxq->tail =3D first_frag; else - queue->rx_tail =3D tail; + rxq->tail =3D tail; =20 return received; } =20 static bool macb_rx_pending(struct macb_queue *queue) { + struct macb_rxq *rxq =3D macb_rxq(queue); struct macb *bp =3D queue->bp; struct macb_dma_desc *desc; unsigned int entry; =20 - entry =3D macb_rx_ring_wrap(bp, queue->rx_tail); + entry =3D macb_rx_ring_wrap(bp, rxq->tail); desc =3D macb_rx_desc(queue, entry); =20 /* Make hw descriptor updates visible to CPU */ @@ -1900,18 +1939,19 @@ static int macb_rx_poll(struct napi_struct *napi, i= nt budget) =20 static void macb_tx_restart(struct macb_queue *queue) { + struct macb_txq *txq =3D macb_txq(queue); struct macb *bp =3D queue->bp; unsigned int head_idx, tbqp; unsigned long flags; =20 spin_lock_irqsave(&queue->tx_ptr_lock, flags); =20 - if (queue->tx_head =3D=3D queue->tx_tail) + if (txq->head =3D=3D txq->tail) goto out_tx_ptr_unlock; =20 tbqp =3D queue_readl(queue, TBQP) / macb_dma_desc_get_size(bp); tbqp =3D macb_adj_dma_desc_idx(bp, macb_tx_ring_wrap(bp, tbqp)); - head_idx =3D macb_adj_dma_desc_idx(bp, macb_tx_ring_wrap(bp, queue->tx_he= ad)); + head_idx =3D macb_adj_dma_desc_idx(bp, macb_tx_ring_wrap(bp, txq->head)); =20 if (tbqp =3D=3D head_idx) goto out_tx_ptr_unlock; @@ -1926,15 +1966,16 @@ static void macb_tx_restart(struct macb_queue *queu= e) =20 static bool macb_tx_complete_pending(struct macb_queue *queue) { + struct macb_txq *txq =3D macb_txq(queue); bool retval =3D false; unsigned long flags; =20 spin_lock_irqsave(&queue->tx_ptr_lock, flags); - if (queue->tx_head !=3D queue->tx_tail) { + if (txq->head !=3D txq->tail) { /* Make hw descriptor updates visible to CPU */ rmb(); =20 - if (macb_tx_desc(queue, queue->tx_tail)->ctrl & MACB_BIT(TX_USED)) + if (macb_tx_desc(queue, txq->tail)->ctrl & MACB_BIT(TX_USED)) retval =3D true; } spin_unlock_irqrestore(&queue->tx_ptr_lock, flags); @@ -2225,8 +2266,9 @@ static unsigned int macb_tx_map(struct macb *bp, struct sk_buff *skb, unsigned int 
hdrlen) { + struct macb_txq *txq =3D macb_txq(queue); unsigned int f, nr_frags =3D skb_shinfo(skb)->nr_frags; - unsigned int len, i, tx_head =3D queue->tx_head; + unsigned int len, i, tx_head =3D txq->head; u32 ctrl, lso_ctrl =3D 0, seq_ctrl =3D 0; unsigned int eof =3D 1, mss_mfs =3D 0; struct macb_tx_skb *tx_skb =3D NULL; @@ -2346,11 +2388,12 @@ static unsigned int macb_tx_map(struct macb *bp, ctrl |=3D MACB_BIT(TX_LAST); eof =3D 0; } - if (unlikely(macb_tx_ring_wrap(bp, i) =3D=3D bp->tx_ring_size - 1)) + if (unlikely(macb_tx_ring_wrap(bp, i) =3D=3D + bp->ctx->tx_ring_size - 1)) ctrl |=3D MACB_BIT(TX_WRAP); =20 /* First descriptor is header descriptor */ - if (i =3D=3D queue->tx_head) { + if (i =3D=3D txq->head) { ctrl |=3D MACB_BF(TX_LSO, lso_ctrl); ctrl |=3D MACB_BF(TX_TCP_SEQ_SRC, seq_ctrl); if ((bp->netdev->features & NETIF_F_HW_CSUM) && @@ -2370,16 +2413,16 @@ static unsigned int macb_tx_map(struct macb *bp, */ wmb(); desc->ctrl =3D ctrl; - } while (i !=3D queue->tx_head); + } while (i !=3D txq->head); =20 - queue->tx_head =3D tx_head; + txq->head =3D tx_head; =20 return 0; =20 dma_error: netdev_err(bp->netdev, "TX DMA map failed\n"); =20 - for (i =3D queue->tx_head; i !=3D tx_head; i++) { + for (i =3D txq->head; i !=3D tx_head; i++) { tx_skb =3D macb_tx_skb(queue, i); =20 macb_tx_unmap(bp, tx_skb, 0); @@ -2499,6 +2542,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *sk= b, unsigned int q =3D skb_get_queue_mapping(skb); unsigned int desc_cnt, nr_frags, frag_size, f; struct macb_queue *queue =3D &bp->queues[q]; + struct macb_txq *txq =3D macb_txq(queue); netdev_tx_t ret =3D NETDEV_TX_OK; unsigned int hdrlen; unsigned long flags; @@ -2562,11 +2606,11 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *= skb, spin_lock_irqsave(&queue->tx_ptr_lock, flags); =20 /* This is a hard error, log it. 
*/ - if (CIRC_SPACE(queue->tx_head, queue->tx_tail, - bp->tx_ring_size) < desc_cnt) { + if (CIRC_SPACE(txq->head, txq->tail, + bp->ctx->tx_ring_size) < desc_cnt) { netif_stop_subqueue(netdev, q); netdev_dbg(netdev, "tx_head =3D %u, tx_tail =3D %u\n", - queue->tx_head, queue->tx_tail); + txq->head, txq->tail); ret =3D NETDEV_TX_BUSY; goto unlock; } @@ -2588,7 +2632,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *sk= b, macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART)); spin_unlock(&bp->lock); =20 - if (CIRC_SPACE(queue->tx_head, queue->tx_tail, bp->tx_ring_size) < 1) + if (CIRC_SPACE(txq->head, txq->tail, bp->ctx->tx_ring_size) < 1) netif_stop_subqueue(netdev, q); =20 unlock: @@ -2600,38 +2644,42 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *= skb, static void macb_init_rx_buffer_size(struct macb *bp, size_t size) { if (!macb_is_gem(bp)) { - bp->rx_buffer_size =3D MACB_RX_BUFFER_SIZE; + bp->ctx->rx_buffer_size =3D MACB_RX_BUFFER_SIZE; } else { - bp->rx_buffer_size =3D MIN(size, RX_BUFFER_MAX); + bp->ctx->rx_buffer_size =3D MIN(size, RX_BUFFER_MAX); =20 - if (bp->rx_buffer_size % RX_BUFFER_MULTIPLE) { + if (bp->ctx->rx_buffer_size % RX_BUFFER_MULTIPLE) { netdev_dbg(bp->netdev, "RX buffer must be multiple of %d bytes, expanding\n", RX_BUFFER_MULTIPLE); - bp->rx_buffer_size =3D - roundup(bp->rx_buffer_size, RX_BUFFER_MULTIPLE); + bp->ctx->rx_buffer_size =3D + roundup(bp->ctx->rx_buffer_size, + RX_BUFFER_MULTIPLE); } } =20 - netdev_dbg(bp->netdev, "mtu [%u] rx_buffer_size [%zu]\n", - bp->netdev->mtu, bp->rx_buffer_size); + netdev_dbg(bp->netdev, "mtu [%u] rx_buffer_size [%u]\n", + bp->netdev->mtu, bp->ctx->rx_buffer_size); } =20 static void gem_free_rx_buffers(struct macb *bp) { - struct sk_buff *skb; - struct macb_dma_desc *desc; + struct macb_dma_desc *desc; struct macb_queue *queue; - dma_addr_t addr; + struct macb_rxq *rxq; + struct sk_buff *skb; + dma_addr_t addr; unsigned int q; int i; =20 for (q =3D 0, queue =3D bp->queues; q < 
bp->num_queues; ++q, ++queue) { - if (!queue->rx_skbuff) + rxq =3D &bp->ctx->rxq[q]; + + if (!rxq->skbuff) continue; =20 - for (i =3D 0; i < bp->rx_ring_size; i++) { - skb =3D queue->rx_skbuff[i]; + for (i =3D 0; i < bp->ctx->rx_ring_size; i++) { + skb =3D rxq->skbuff[i]; =20 if (!skb) continue; @@ -2639,95 +2687,106 @@ static void gem_free_rx_buffers(struct macb *bp) desc =3D macb_rx_desc(queue, i); addr =3D macb_get_addr(bp, desc); =20 - dma_unmap_single(&bp->pdev->dev, addr, bp->rx_buffer_size, - DMA_FROM_DEVICE); + dma_unmap_single(&bp->pdev->dev, addr, + bp->ctx->rx_buffer_size, + DMA_FROM_DEVICE); dev_kfree_skb_any(skb); skb =3D NULL; } =20 - kfree(queue->rx_skbuff); - queue->rx_skbuff =3D NULL; + kfree(rxq->skbuff); + rxq->skbuff =3D NULL; } } =20 static void macb_free_rx_buffers(struct macb *bp) { - struct macb_queue *queue =3D &bp->queues[0]; + struct macb_rxq *rxq =3D &bp->ctx->rxq[0]; =20 - if (queue->rx_buffers) { + if (rxq->buffers) { dma_free_coherent(&bp->pdev->dev, - bp->rx_ring_size * bp->rx_buffer_size, - queue->rx_buffers, queue->rx_buffers_dma); - queue->rx_buffers =3D NULL; + bp->ctx->rx_ring_size * + bp->ctx->rx_buffer_size, + rxq->buffers, rxq->buffers_dma); + rxq->buffers =3D NULL; } } =20 static unsigned int macb_tx_ring_size_per_queue(struct macb *bp) { - return macb_dma_desc_get_size(bp) * bp->tx_ring_size + bp->tx_bd_rd_prefe= tch; + return macb_dma_desc_get_size(bp) * bp->ctx->tx_ring_size + + bp->tx_bd_rd_prefetch; } =20 static unsigned int macb_rx_ring_size_per_queue(struct macb *bp) { - return macb_dma_desc_get_size(bp) * bp->rx_ring_size + bp->rx_bd_rd_prefe= tch; + return macb_dma_desc_get_size(bp) * bp->ctx->rx_ring_size + + bp->rx_bd_rd_prefetch; } =20 static void macb_free_consistent(struct macb *bp) { struct device *dev =3D &bp->pdev->dev; - struct macb_queue *queue; + struct macb_txq *txq; + struct macb_rxq *rxq; unsigned int q; size_t size; =20 bp->macbgem_ops.mog_free_rx_buffers(bp); =20 + txq =3D &bp->ctx->txq[0]; size =3D 
bp->num_queues * macb_tx_ring_size_per_queue(bp); - dma_free_coherent(dev, size, bp->queues[0].tx_ring, bp->queues[0].tx_ring= _dma); + dma_free_coherent(dev, size, txq->ring, txq->ring_dma); =20 + rxq =3D &bp->ctx->rxq[0]; size =3D bp->num_queues * macb_rx_ring_size_per_queue(bp); - dma_free_coherent(dev, size, bp->queues[0].rx_ring, bp->queues[0].rx_ring= _dma); + dma_free_coherent(dev, size, rxq->ring, rxq->ring_dma); =20 - for (q =3D 0, queue =3D bp->queues; q < bp->num_queues; ++q, ++queue) { - kfree(queue->tx_skb); - queue->tx_skb =3D NULL; - queue->tx_ring =3D NULL; - queue->rx_ring =3D NULL; + for (q =3D 0; q < bp->num_queues; ++q) { + txq =3D &bp->ctx->txq[q]; + rxq =3D &bp->ctx->rxq[q]; + + kfree(txq->skb); + txq->skb =3D NULL; + txq->ring =3D NULL; + rxq->ring =3D NULL; } } =20 static int gem_alloc_rx_buffers(struct macb *bp) { - struct macb_queue *queue; + struct macb_rxq *rxq; unsigned int q; int size; =20 - for (q =3D 0, queue =3D bp->queues; q < bp->num_queues; ++q, ++queue) { - size =3D bp->rx_ring_size * sizeof(struct sk_buff *); - queue->rx_skbuff =3D kzalloc(size, GFP_KERNEL); - if (!queue->rx_skbuff) + for (q =3D 0; q < bp->num_queues; ++q) { + rxq =3D &bp->ctx->rxq[q]; + size =3D bp->ctx->rx_ring_size * sizeof(struct sk_buff *); + rxq->skbuff =3D kzalloc(size, GFP_KERNEL); + if (!rxq->skbuff) return -ENOMEM; else netdev_dbg(bp->netdev, "Allocated %d RX struct sk_buff entries at %p\n", - bp->rx_ring_size, queue->rx_skbuff); + bp->ctx->rx_ring_size, rxq->skbuff); } return 0; } =20 static int macb_alloc_rx_buffers(struct macb *bp) { - struct macb_queue *queue =3D &bp->queues[0]; + struct macb_rxq *rxq =3D &bp->ctx->rxq[0]; int size; =20 - size =3D bp->rx_ring_size * bp->rx_buffer_size; - queue->rx_buffers =3D dma_alloc_coherent(&bp->pdev->dev, size, - &queue->rx_buffers_dma, GFP_KERNEL); - if (!queue->rx_buffers) + size =3D bp->ctx->rx_ring_size * bp->ctx->rx_buffer_size; + rxq->buffers =3D dma_alloc_coherent(&bp->pdev->dev, size, + 
&rxq->buffers_dma, GFP_KERNEL); + if (!rxq->buffers) return -ENOMEM; =20 netdev_dbg(bp->netdev, "Allocated RX buffers of %d bytes at %08lx (mapped %p)\n", - size, (unsigned long)queue->rx_buffers_dma, queue->rx_buffers); + size, (unsigned long)rxq->buffers_dma, rxq->buffers); return 0; } =20 @@ -2735,7 +2794,8 @@ static int macb_alloc_consistent(struct macb *bp) { struct device *dev =3D &bp->pdev->dev; dma_addr_t tx_dma, rx_dma; - struct macb_queue *queue; + struct macb_txq *txq; + struct macb_rxq *rxq; unsigned int q; void *tx, *rx; size_t size; @@ -2761,16 +2821,19 @@ static int macb_alloc_consistent(struct macb *bp) netdev_dbg(bp->netdev, "Allocated %zu bytes for %u RX rings at %08lx (map= ped %p)\n", size, bp->num_queues, (unsigned long)rx_dma, rx); =20 - for (q =3D 0, queue =3D bp->queues; q < bp->num_queues; ++q, ++queue) { - queue->tx_ring =3D tx + macb_tx_ring_size_per_queue(bp) * q; - queue->tx_ring_dma =3D tx_dma + macb_tx_ring_size_per_queue(bp) * q; + for (q =3D 0; q < bp->num_queues; ++q) { + txq =3D &bp->ctx->txq[q]; + rxq =3D &bp->ctx->rxq[q]; =20 - queue->rx_ring =3D rx + macb_rx_ring_size_per_queue(bp) * q; - queue->rx_ring_dma =3D rx_dma + macb_rx_ring_size_per_queue(bp) * q; + txq->ring =3D tx + macb_tx_ring_size_per_queue(bp) * q; + txq->ring_dma =3D tx_dma + macb_tx_ring_size_per_queue(bp) * q; =20 - size =3D bp->tx_ring_size * sizeof(struct macb_tx_skb); - queue->tx_skb =3D kmalloc(size, GFP_KERNEL); - if (!queue->tx_skb) + rxq->ring =3D rx + macb_rx_ring_size_per_queue(bp) * q; + rxq->ring_dma =3D rx_dma + macb_rx_ring_size_per_queue(bp) * q; + + size =3D bp->ctx->tx_ring_size * sizeof(struct macb_tx_skb); + txq->skb =3D kmalloc(size, GFP_KERNEL); + if (!txq->skb) goto out_err; } if (bp->macbgem_ops.mog_alloc_rx_buffers(bp)) @@ -2785,8 +2848,10 @@ static int macb_alloc_consistent(struct macb *bp) =20 static void gem_init_rx_ring(struct macb_queue *queue) { - queue->rx_tail =3D 0; - queue->rx_prepared_head =3D 0; + struct macb_rxq *rxq =3D 
macb_rxq(queue); + + rxq->tail =3D 0; + rxq->prepared_head =3D 0; =20 gem_rx_refill(queue); } @@ -2795,18 +2860,20 @@ static void gem_init_rings(struct macb *bp) { struct macb_queue *queue; struct macb_dma_desc *desc =3D NULL; + struct macb_txq *txq; unsigned int q; int i; =20 for (q =3D 0, queue =3D bp->queues; q < bp->num_queues; ++q, ++queue) { - for (i =3D 0; i < bp->tx_ring_size; i++) { + txq =3D &bp->ctx->txq[q]; + for (i =3D 0; i < bp->ctx->tx_ring_size; i++) { desc =3D macb_tx_desc(queue, i); macb_set_addr(bp, desc, 0); desc->ctrl =3D MACB_BIT(TX_USED); } desc->ctrl |=3D MACB_BIT(TX_WRAP); - queue->tx_head =3D 0; - queue->tx_tail =3D 0; + txq->head =3D 0; + txq->tail =3D 0; =20 gem_init_rx_ring(queue); } @@ -2814,18 +2881,19 @@ static void gem_init_rings(struct macb *bp) =20 static void macb_init_rings(struct macb *bp) { - int i; + struct macb_txq *txq =3D &bp->ctx->txq[0]; struct macb_dma_desc *desc =3D NULL; + int i; =20 macb_init_rx_ring(&bp->queues[0]); =20 - for (i =3D 0; i < bp->tx_ring_size; i++) { + for (i =3D 0; i < bp->ctx->tx_ring_size; i++) { desc =3D macb_tx_desc(&bp->queues[0], i); macb_set_addr(bp, desc, 0); desc->ctrl =3D MACB_BIT(TX_USED); } - bp->queues[0].tx_head =3D 0; - bp->queues[0].tx_tail =3D 0; + txq->head =3D 0; + txq->tail =3D 0; desc->ctrl |=3D MACB_BIT(TX_WRAP); } =20 @@ -2941,7 +3009,7 @@ static void macb_configure_dma(struct macb *bp) unsigned int q; u32 dmacfg; =20 - buffer_size =3D bp->rx_buffer_size / RX_BUFFER_MULTIPLE; + buffer_size =3D bp->ctx->rx_buffer_size / RX_BUFFER_MULTIPLE; if (macb_is_gem(bp)) { dmacfg =3D gem_readl(bp, DMACFG) & ~GEM_BF(RXBS, -1L); for (q =3D 0, queue =3D bp->queues; q < bp->num_queues; ++q, ++queue) { @@ -3148,14 +3216,22 @@ static int macb_open(struct net_device *netdev) if (err < 0) return err; =20 + bp->ctx =3D kzalloc_obj(*bp->ctx); + if (!bp->ctx) { + err =3D -ENOMEM; + goto pm_exit; + } + /* RX buffers initialization */ macb_init_rx_buffer_size(bp, bufsz); + bp->ctx->rx_ring_size =3D 
bp->configured_rx_ring_size; + bp->ctx->tx_ring_size =3D bp->configured_tx_ring_size; =20 err =3D macb_alloc_consistent(bp); if (err) { netdev_err(netdev, "Unable to allocate DMA memory (error %d)\n", err); - goto pm_exit; + goto free_ctx; } =20 bp->macbgem_ops.mog_init_rings(bp); @@ -3197,6 +3273,9 @@ static int macb_open(struct net_device *netdev) napi_disable(&queue->napi_tx); } macb_free_consistent(bp); +free_ctx: + kfree(bp->ctx); + bp->ctx =3D NULL; pm_exit: pm_runtime_put_sync(&bp->pdev->dev); return err; @@ -3230,6 +3309,8 @@ static int macb_close(struct net_device *netdev) spin_unlock_irqrestore(&bp->lock, flags); =20 macb_free_consistent(bp); + kfree(bp->ctx); + bp->ctx =3D NULL; =20 if (bp->ptp_info) bp->ptp_info->ptp_remove(netdev); @@ -3596,14 +3677,15 @@ static void macb_get_regs(struct net_device *netdev= , struct ethtool_regs *regs, void *p) { struct macb *bp =3D netdev_priv(netdev); + struct macb_txq *txq =3D &bp->ctx->txq[0]; unsigned int tail, head; u32 *regs_buff =3D p; =20 regs->version =3D (macb_readl(bp, MID) & ((1 << MACB_REV_SIZE) - 1)) | MACB_GREGS_VERSION; =20 - tail =3D macb_tx_ring_wrap(bp, bp->queues[0].tx_tail); - head =3D macb_tx_ring_wrap(bp, bp->queues[0].tx_head); + tail =3D macb_tx_ring_wrap(bp, txq->tail); + head =3D macb_tx_ring_wrap(bp, txq->head); =20 regs_buff[0] =3D macb_readl(bp, NCR); regs_buff[1] =3D macb_or_gem_readl(bp, NCFGR); @@ -3682,8 +3764,8 @@ static void macb_get_ringparam(struct net_device *net= dev, ring->rx_max_pending =3D MAX_RX_RING_SIZE; ring->tx_max_pending =3D MAX_TX_RING_SIZE; =20 - ring->rx_pending =3D bp->rx_ring_size; - ring->tx_pending =3D bp->tx_ring_size; + ring->rx_pending =3D bp->ctx->rx_ring_size; + ring->tx_pending =3D bp->ctx->tx_ring_size; } =20 static int macb_set_ringparam(struct net_device *netdev, @@ -3706,8 +3788,8 @@ static int macb_set_ringparam(struct net_device *netd= ev, MIN_TX_RING_SIZE, MAX_TX_RING_SIZE); new_tx_size =3D roundup_pow_of_two(new_tx_size); =20 - if ((new_tx_size 
=3D=3D bp->tx_ring_size) && - (new_rx_size =3D=3D bp->rx_ring_size)) { + if (new_tx_size =3D=3D bp->configured_tx_ring_size && + new_rx_size =3D=3D bp->configured_rx_ring_size) { /* nothing to do */ return 0; } @@ -3717,8 +3799,8 @@ static int macb_set_ringparam(struct net_device *netd= ev, macb_close(bp->netdev); } =20 - bp->rx_ring_size =3D new_rx_size; - bp->tx_ring_size =3D new_tx_size; + bp->configured_rx_ring_size =3D new_rx_size; + bp->configured_tx_ring_size =3D new_tx_size; =20 if (reset) macb_open(bp->netdev); @@ -4725,9 +4807,6 @@ static int macb_init_dflt(struct platform_device *pde= v) int err; u32 val, reg; =20 - bp->tx_ring_size =3D DEFAULT_TX_RING_SIZE; - bp->rx_ring_size =3D DEFAULT_RX_RING_SIZE; - /* set the queue register mapping once for all: queue0 has a special * register mapping but we don't want to test the queue index then * compute the corresponding register offset at run time. @@ -4926,26 +5005,26 @@ static struct sifive_fu540_macb_mgmt *mgmt; =20 static int at91ether_alloc_coherent(struct macb *bp) { - struct macb_queue *queue =3D &bp->queues[0]; + struct macb_rxq *rxq =3D &bp->ctx->rxq[0]; =20 - queue->rx_ring =3D dma_alloc_coherent(&bp->pdev->dev, - (AT91ETHER_MAX_RX_DESCR * - macb_dma_desc_get_size(bp)), - &queue->rx_ring_dma, GFP_KERNEL); - if (!queue->rx_ring) + rxq->ring =3D dma_alloc_coherent(&bp->pdev->dev, + (AT91ETHER_MAX_RX_DESCR * + macb_dma_desc_get_size(bp)), + &rxq->ring_dma, GFP_KERNEL); + if (!rxq->ring) return -ENOMEM; =20 - queue->rx_buffers =3D dma_alloc_coherent(&bp->pdev->dev, - AT91ETHER_MAX_RX_DESCR * - AT91ETHER_MAX_RBUFF_SZ, - &queue->rx_buffers_dma, - GFP_KERNEL); - if (!queue->rx_buffers) { + rxq->buffers =3D dma_alloc_coherent(&bp->pdev->dev, + AT91ETHER_MAX_RX_DESCR * + AT91ETHER_MAX_RBUFF_SZ, + &rxq->buffers_dma, + GFP_KERNEL); + if (!rxq->buffers) { dma_free_coherent(&bp->pdev->dev, AT91ETHER_MAX_RX_DESCR * macb_dma_desc_get_size(bp), - queue->rx_ring, queue->rx_ring_dma); - queue->rx_ring =3D NULL; + 
rxq->ring, rxq->ring_dma); + rxq->ring =3D NULL; return -ENOMEM; } =20 @@ -4954,22 +5033,22 @@ static int at91ether_alloc_coherent(struct macb *bp) =20 static void at91ether_free_coherent(struct macb *bp) { - struct macb_queue *queue =3D &bp->queues[0]; + struct macb_rxq *rxq =3D &bp->ctx->rxq[0]; =20 - if (queue->rx_ring) { + if (rxq->ring) { dma_free_coherent(&bp->pdev->dev, AT91ETHER_MAX_RX_DESCR * macb_dma_desc_get_size(bp), - queue->rx_ring, queue->rx_ring_dma); - queue->rx_ring =3D NULL; + rxq->ring, rxq->ring_dma); + rxq->ring =3D NULL; } =20 - if (queue->rx_buffers) { + if (rxq->buffers) { dma_free_coherent(&bp->pdev->dev, AT91ETHER_MAX_RX_DESCR * AT91ETHER_MAX_RBUFF_SZ, - queue->rx_buffers, queue->rx_buffers_dma); - queue->rx_buffers =3D NULL; + rxq->buffers, rxq->buffers_dma); + rxq->buffers =3D NULL; } } =20 @@ -4977,6 +5056,7 @@ static void at91ether_free_coherent(struct macb *bp) static int at91ether_start(struct macb *bp) { struct macb_queue *queue =3D &bp->queues[0]; + struct macb_rxq *rxq =3D &bp->ctx->rxq[0]; struct macb_dma_desc *desc; dma_addr_t addr; u32 ctl; @@ -4986,7 +5066,7 @@ static int at91ether_start(struct macb *bp) if (ret) return ret; =20 - addr =3D queue->rx_buffers_dma; + addr =3D rxq->buffers_dma; for (i =3D 0; i < AT91ETHER_MAX_RX_DESCR; i++) { desc =3D macb_rx_desc(queue, i); macb_set_addr(bp, desc, addr); @@ -4998,10 +5078,10 @@ static int at91ether_start(struct macb *bp) desc->addr |=3D MACB_BIT(RX_WRAP); =20 /* Reset buffer index */ - queue->rx_tail =3D 0; + rxq->tail =3D 0; =20 /* Program address of descriptor list in Rx Buffer Queue register */ - macb_writel(bp, RBQP, queue->rx_ring_dma); + macb_writel(bp, RBQP, rxq->ring_dma); =20 /* Enable Receive and Transmit */ ctl =3D macb_readl(bp, NCR); @@ -5139,15 +5219,15 @@ static void at91ether_rx(struct net_device *netdev) { struct macb *bp =3D netdev_priv(netdev); struct macb_queue *queue =3D &bp->queues[0]; + struct macb_rxq *rxq =3D &bp->ctx->rxq[0]; struct macb_dma_desc *desc; 
 	unsigned char *p_recv;
 	struct sk_buff *skb;
 	unsigned int pktlen;
 
-	desc = macb_rx_desc(queue, queue->rx_tail);
+	desc = macb_rx_desc(queue, rxq->tail);
 	while (desc->addr & MACB_BIT(RX_USED)) {
-		p_recv = queue->rx_buffers +
-			 queue->rx_tail * AT91ETHER_MAX_RBUFF_SZ;
+		p_recv = rxq->buffers + rxq->tail * AT91ETHER_MAX_RBUFF_SZ;
 		pktlen = MACB_BF(RX_FRMLEN, desc->ctrl);
 		skb = netdev_alloc_skb(netdev, pktlen + 2);
 		if (skb) {
@@ -5169,12 +5249,12 @@ static void at91ether_rx(struct net_device *netdev)
 		desc->addr &= ~MACB_BIT(RX_USED);
 
 		/* wrap after last buffer */
-		if (queue->rx_tail == AT91ETHER_MAX_RX_DESCR - 1)
-			queue->rx_tail = 0;
+		if (rxq->tail == AT91ETHER_MAX_RX_DESCR - 1)
+			rxq->tail = 0;
 		else
-			queue->rx_tail++;
+			rxq->tail++;
 
-		desc = macb_rx_desc(queue, queue->rx_tail);
+		desc = macb_rx_desc(queue, rxq->tail);
 	}
 }
 
@@ -5829,6 +5909,8 @@ static int macb_probe(struct platform_device *pdev)
 	bp->rx_clk = rx_clk;
 	bp->tsu_clk = tsu_clk;
 	bp->jumbo_max_len = macb_config->jumbo_max_len;
+	bp->configured_rx_ring_size = DEFAULT_RX_RING_SIZE;
+	bp->configured_tx_ring_size = DEFAULT_TX_RING_SIZE;
 
 	if (!hw_is_gem(bp->regs, bp->native_io))
 		bp->max_tx_length = MACB_MAX_TX_LEN;
-- 
2.53.0

From nobody Wed Apr 1 20:37:31 2026
From: Théo Lebrun
Date: Wed, 01 Apr 2026 18:39:10 +0200
Subject: [PATCH net-next 07/11] net: macb: avoid macb_init_rx_buffer_size() modifying state
Message-Id: <20260401-macb-context-v1-7-9590c5ab7272@bootlin.com>

macb_init_rx_buffer_size() takes the macb private data struct and
overrides its bp->ctx->rx_buffer_size. To make it usable with multiple
contexts, make it return its value instead.

Also, move the `bufsz` computation into it. The value is only used on
GEM, and for historical reasons the computation currently lives in
macb_open().
Signed-off-by: Théo Lebrun
---
 drivers/net/ethernet/cadence/macb_main.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 0f63d9b89c11..033c36d8a3d4 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -2641,25 +2641,26 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb,
 	return ret;
 }
 
-static void macb_init_rx_buffer_size(struct macb *bp, size_t size)
+static unsigned int macb_rx_buffer_size(struct macb *bp, unsigned int mtu)
 {
-	if (!macb_is_gem(bp)) {
-		bp->ctx->rx_buffer_size = MACB_RX_BUFFER_SIZE;
-	} else {
-		bp->ctx->rx_buffer_size = MIN(size, RX_BUFFER_MAX);
+	unsigned int size;
 
-		if (bp->ctx->rx_buffer_size % RX_BUFFER_MULTIPLE) {
+	if (!macb_is_gem(bp)) {
+		size = MACB_RX_BUFFER_SIZE;
+	} else {
+		size = mtu + ETH_HLEN + ETH_FCS_LEN + NET_IP_ALIGN;
+		size = MIN(size, RX_BUFFER_MAX);
+
+		if (size % RX_BUFFER_MULTIPLE) {
 			netdev_dbg(bp->netdev,
 				   "RX buffer must be multiple of %d bytes, expanding\n",
 				   RX_BUFFER_MULTIPLE);
-			bp->ctx->rx_buffer_size =
-				roundup(bp->ctx->rx_buffer_size,
-					RX_BUFFER_MULTIPLE);
+			size = roundup(size, RX_BUFFER_MULTIPLE);
 		}
 	}
 
-	netdev_dbg(bp->netdev, "mtu [%u] rx_buffer_size [%u]\n",
-		   bp->netdev->mtu, bp->ctx->rx_buffer_size);
+	netdev_dbg(bp->netdev, "mtu [%u] rx_buffer_size [%u]\n", mtu, size);
+	return size;
 }
 
 static void gem_free_rx_buffers(struct macb *bp)
@@ -3204,7 +3205,6 @@ static void macb_set_rx_mode(struct net_device *netdev)
 
 static int macb_open(struct net_device *netdev)
 {
-	size_t bufsz = netdev->mtu + ETH_HLEN + ETH_FCS_LEN + NET_IP_ALIGN;
 	struct macb *bp = netdev_priv(netdev);
 	struct macb_queue *queue;
 	unsigned int q;
@@ -3223,7 +3223,7 @@ static int macb_open(struct net_device *netdev)
 	}
 
 	/* RX buffers initialization */
-	macb_init_rx_buffer_size(bp, bufsz);
+	bp->ctx->rx_buffer_size = macb_rx_buffer_size(bp, netdev->mtu);
 	bp->ctx->rx_ring_size = bp->configured_rx_ring_size;
 	bp->ctx->tx_ring_size = bp->configured_tx_ring_size;
 
-- 
2.53.0

From nobody Wed Apr 1 20:37:31 2026
From: Théo Lebrun
Date: Wed, 01 Apr 2026 18:39:11 +0200
Subject: [PATCH net-next 08/11] net: macb: make `struct macb` subset reachable from macb_context struct
Message-Id: <20260401-macb-context-v1-8-9590c5ab7272@bootlin.com>

For parallel MACB contexts to start becoming a reality, many functions
need to stop operating on bp->ctx (the currently active context) and
instead work on a context they get passed. That context might be (1)
the new one that is getting allocated and initialised, or (2) the old
one to be freed.

To reduce bug surface area, we will constrain those functions to take
*only* a context and no `struct macb *bp`. That way, no bug of using
`bp->ctx` instead of `ctx` can ever occur.

For that, we need to embed a subset of `struct macb` information into
each context so that all helpers can still do their jobs. That subset
must be constant once probe is completed.

Do this by having each context take a pointer to a subset of macb
called `struct macb_info`. That subset is accessible from the context
(ctx->info->caps) or from bp (bp->caps) using the `-fms-extensions`
option, thanks to commit c4781dc3d1cf ("Kbuild: enable
-fms-extensions").

https://gcc.gnu.org/onlinedocs/gcc/Unnamed-Fields.html

Add the structure and assign ctx->info at alloc, but nothing uses it
yet.
Signed-off-by: Th=C3=A9o Lebrun --- drivers/net/ethernet/cadence/macb.h | 58 ++-- drivers/net/ethernet/cadence/macb_main.c | 474 ++++++++++++++++-----------= ---- drivers/net/ethernet/cadence/macb_ptp.c | 8 +- 3 files changed, 291 insertions(+), 249 deletions(-) diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cad= ence/macb.h index 8821205e8875..66e3638b84c0 100644 --- a/drivers/net/ethernet/cadence/macb.h +++ b/drivers/net/ethernet/cadence/macb.h @@ -840,7 +840,7 @@ */ #define macb_or_gem_writel(__bp, __reg, __value) \ ({ \ - if (macb_is_gem((__bp))) \ + if (macb_is_gem((__bp)->caps)) \ gem_writel((__bp), __reg, __value); \ else \ macb_writel((__bp), __reg, __value); \ @@ -849,7 +849,7 @@ #define macb_or_gem_readl(__bp, __reg) \ ({ \ u32 __v; \ - if (macb_is_gem((__bp))) \ + if (macb_is_gem((__bp)->caps)) \ __v =3D gem_readl((__bp), __reg); \ else \ __v =3D macb_readl((__bp), __reg); \ @@ -1196,11 +1196,12 @@ static const struct gem_statistic queue_statistics[= ] =3D { =20 struct macb; struct macb_queue; +struct macb_context; =20 struct macb_or_gem_ops { - int (*mog_alloc_rx_buffers)(struct macb *bp); - void (*mog_free_rx_buffers)(struct macb *bp); - void (*mog_init_rings)(struct macb *bp); + int (*mog_alloc_rx_buffers)(struct macb_context *ctx); + void (*mog_free_rx_buffers)(struct macb_context *ctx); + void (*mog_init_rings)(struct macb_context *ctx); int (*mog_rx)(struct macb_queue *queue, struct napi_struct *napi, int budget); }; @@ -1290,6 +1291,16 @@ struct ethtool_rx_fs_list { unsigned int count; }; =20 +struct macb_info { + struct platform_device *pdev; + struct net_device *netdev; + struct macb_or_gem_ops macbgem_ops; + unsigned int num_queues; + u32 caps; + int rx_bd_rd_prefetch; + int tx_bd_rd_prefetch; +}; + struct macb_rxq { struct macb_dma_desc *ring; /* MACB & GEM */ dma_addr_t ring_dma; /* MACB & GEM */ @@ -1309,6 +1320,8 @@ struct macb_txq { }; =20 struct macb_context { + const struct macb_info *info; + unsigned int 
rx_buffer_size; unsigned int rx_ring_size; unsigned int tx_ring_size; @@ -1324,6 +1337,15 @@ struct macb { u32 (*macb_reg_readl)(struct macb *bp, int offset); void (*macb_reg_writel)(struct macb *bp, int offset, u32 value); =20 + /* + * Give direct access (bp->caps) and + * allow taking a pointer to it (&bp->info) for contexts. + */ + union { + struct macb_info; + struct macb_info info; + }; + /* * Context stores all its parameters. * But we must remember them across closure. @@ -1335,17 +1357,14 @@ struct macb { struct macb_dma_desc *rx_ring_tieoff; dma_addr_t rx_ring_tieoff_dma; =20 - unsigned int num_queues; struct macb_queue queues[MACB_MAX_QUEUES]; =20 spinlock_t lock; - struct platform_device *pdev; struct clk *pclk; struct clk *hclk; struct clk *tx_clk; struct clk *rx_clk; struct clk *tsu_clk; - struct net_device *netdev; /* Protects hw_stats and ethtool_stats */ spinlock_t stats_lock; union { @@ -1353,15 +1372,12 @@ struct macb { struct gem_stats gem; } hw_stats; =20 - struct macb_or_gem_ops macbgem_ops; - struct mii_bus *mii_bus; struct phylink *phylink; struct phylink_config phylink_config; struct phylink_pcs phylink_usx_pcs; struct phylink_pcs phylink_sgmii_pcs; =20 - u32 caps; unsigned int dma_burst_length; =20 phy_interface_t phy_interface; @@ -1404,9 +1420,6 @@ struct macb { struct delayed_work tx_lpi_work; u32 tx_lpi_timer; =20 - int rx_bd_rd_prefetch; - int tx_bd_rd_prefetch; - u32 rx_intr_mask; =20 struct macb_pm_data pm_data; @@ -1458,14 +1471,15 @@ static inline void gem_ptp_do_txstamp(struct macb *= bp, struct sk_buff *skb, stru static inline void gem_ptp_do_rxstamp(struct macb *bp, struct sk_buff *skb= , struct macb_dma_desc *desc) { } #endif =20 -static inline bool macb_is_gem(struct macb *bp) +static inline bool macb_is_gem(u32 caps) { - return !!(bp->caps & MACB_CAPS_MACB_IS_GEM); + return !!(caps & MACB_CAPS_MACB_IS_GEM); } =20 -static inline bool gem_has_ptp(struct macb *bp) +static inline bool gem_has_ptp(u32 caps) { - return 
IS_ENABLED(CONFIG_MACB_USE_HWSTAMP) && (bp->caps & MACB_CAPS_GEM_H= AS_PTP); + return IS_ENABLED(CONFIG_MACB_USE_HWSTAMP) && + (caps & MACB_CAPS_GEM_HAS_PTP); } =20 /* ENST Helper functions */ @@ -1481,16 +1495,16 @@ static inline u64 enst_max_hw_interval(u32 speed_mb= ps) ENST_TIME_GRANULARITY_NS * 1000, (speed_mbps)); } =20 -static inline bool macb_dma64(struct macb *bp) +static inline bool macb_dma64(u32 caps) { return IS_ENABLED(CONFIG_ARCH_DMA_ADDR_T_64BIT) && - bp->caps & MACB_CAPS_DMA_64B; + caps & MACB_CAPS_DMA_64B; } =20 -static inline bool macb_dma_ptp(struct macb *bp) +static inline bool macb_dma_ptp(u32 caps) { return IS_ENABLED(CONFIG_MACB_USE_HWSTAMP) && - bp->caps & MACB_CAPS_DMA_PTP; + caps & MACB_CAPS_DMA_PTP; } =20 /** diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/etherne= t/cadence/macb_main.c index 033c36d8a3d4..47f0d27cd979 100644 --- a/drivers/net/ethernet/cadence/macb_main.c +++ b/drivers/net/ethernet/cadence/macb_main.c @@ -122,33 +122,36 @@ struct sifive_fu540_macb_mgmt { * word 5: timestamp word 1 * word 6: timestamp word 2 */ -static unsigned int macb_dma_desc_get_size(struct macb *bp) +static unsigned int macb_dma_desc_get_size(u32 caps) { unsigned int desc_size =3D sizeof(struct macb_dma_desc); =20 - if (macb_dma64(bp)) + if (macb_dma64(caps)) desc_size +=3D sizeof(struct macb_dma_desc_64); - if (macb_dma_ptp(bp)) + if (macb_dma_ptp(caps)) desc_size +=3D sizeof(struct macb_dma_desc_ptp); =20 return desc_size; } =20 -static unsigned int macb_adj_dma_desc_idx(struct macb *bp, unsigned int de= sc_idx) +static unsigned int macb_adj_dma_desc_idx(struct macb_context *ctx, + unsigned int desc_idx) { - return desc_idx * (1 + macb_dma64(bp) + macb_dma_ptp(bp)); + return desc_idx * (1 + macb_dma64(ctx->info->caps) + + macb_dma_ptp(ctx->info->caps)); } =20 -static struct macb_dma_desc_64 *macb_64b_desc(struct macb *bp, struct macb= _dma_desc *desc) +static struct macb_dma_desc_64 *macb_64b_desc(struct macb_dma_desc *desc) { 
return (struct macb_dma_desc_64 *)((void *)desc + sizeof(struct macb_dma_desc)); } =20 /* Ring buffer accessors */ -static unsigned int macb_tx_ring_wrap(struct macb *bp, unsigned int index) +static unsigned int macb_tx_ring_wrap(struct macb_context *ctx, + unsigned int index) { - return index & (bp->ctx->tx_ring_size - 1); + return index & (ctx->tx_ring_size - 1); } =20 static struct macb_txq *macb_txq(struct macb_queue *queue) @@ -167,14 +170,13 @@ static struct macb_rxq *macb_rxq(struct macb_queue *q= ueue) return &bp->ctx->rxq[q]; } =20 -static struct macb_dma_desc *macb_tx_desc(struct macb_queue *queue, +static struct macb_dma_desc *macb_tx_desc(struct macb_context *ctx, + unsigned int q, unsigned int index) { - struct macb_txq *txq =3D macb_txq(queue); - - index =3D macb_tx_ring_wrap(queue->bp, index); - index =3D macb_adj_dma_desc_idx(queue->bp, index); - return &txq->ring[index]; + index =3D macb_tx_ring_wrap(ctx, index); + index =3D macb_adj_dma_desc_idx(ctx, index); + return &ctx->txq[q].ring[index]; } =20 static struct macb_tx_skb *macb_tx_skb(struct macb_queue *queue, @@ -182,40 +184,42 @@ static struct macb_tx_skb *macb_tx_skb(struct macb_qu= eue *queue, { struct macb_txq *txq =3D macb_txq(queue); =20 - return &txq->skb[macb_tx_ring_wrap(queue->bp, index)]; + return &txq->skb[macb_tx_ring_wrap(queue->bp->ctx, index)]; } =20 static dma_addr_t macb_tx_dma(struct macb_queue *queue, unsigned int index) { + struct macb_context *ctx =3D queue->bp->ctx; struct macb_txq *txq =3D macb_txq(queue); dma_addr_t offset; =20 - offset =3D macb_tx_ring_wrap(queue->bp, index) * - macb_dma_desc_get_size(queue->bp); + offset =3D macb_tx_ring_wrap(ctx, index) * + macb_dma_desc_get_size(queue->bp->caps); =20 return txq->ring_dma + offset; } =20 -static unsigned int macb_rx_ring_wrap(struct macb *bp, unsigned int index) +static unsigned int macb_rx_ring_wrap(struct macb_context *ctx, + unsigned int index) { - return index & (bp->ctx->rx_ring_size - 1); + return index & 
(ctx->rx_ring_size - 1); } =20 -static struct macb_dma_desc *macb_rx_desc(struct macb_queue *queue, unsign= ed int index) +static struct macb_dma_desc *macb_rx_desc(struct macb_context *ctx, + unsigned int q, unsigned int index) { - struct macb_rxq *rxq =3D macb_rxq(queue); - - index =3D macb_rx_ring_wrap(queue->bp, index); - index =3D macb_adj_dma_desc_idx(queue->bp, index); - return &rxq->ring[index]; + index =3D macb_rx_ring_wrap(ctx, index); + index =3D macb_adj_dma_desc_idx(ctx, index); + return &ctx->rxq[q].ring[index]; } =20 static void *macb_rx_buffer(struct macb_queue *queue, unsigned int index) { + struct macb_context *ctx =3D queue->bp->ctx; struct macb_rxq *rxq =3D macb_rxq(queue); =20 - return rxq->buffers + queue->bp->ctx->rx_buffer_size * - macb_rx_ring_wrap(queue->bp, index); + return rxq->buffers + ctx->rx_buffer_size * + macb_rx_ring_wrap(ctx, index); } =20 /* I/O accessors */ @@ -278,7 +282,7 @@ static void macb_set_hwaddr(struct macb *bp) top =3D get_unaligned_le16(bp->netdev->dev_addr + 4); macb_or_gem_writel(bp, SA1T, top); =20 - if (gem_has_ptp(bp)) { + if (gem_has_ptp(bp->caps)) { gem_writel(bp, RXPTPUNI, bottom); gem_writel(bp, TXPTPUNI, bottom); } @@ -489,7 +493,7 @@ static void macb_init_buffers(struct macb *bp) unsigned int q; =20 /* Single register for all queues' high 32 bits. 
*/ - if (macb_dma64(bp)) { + if (macb_dma64(bp->caps)) { rxq =3D &bp->ctx->rxq[0]; txq =3D &bp->ctx->txq[0]; macb_writel(bp, RBQPH, upper_32_bits(rxq->ring_dma)); @@ -772,7 +776,7 @@ static void macb_mac_config(struct phylink_config *conf= ig, unsigned int mode, if (bp->caps & MACB_CAPS_MACB_IS_EMAC) { if (state->interface =3D=3D PHY_INTERFACE_MODE_RMII) ctrl |=3D MACB_BIT(RM9200_RMII); - } else if (macb_is_gem(bp)) { + } else if (macb_is_gem(bp->caps)) { ctrl &=3D ~(GEM_BIT(SGMIIEN) | GEM_BIT(PCSSEL)); ncr &=3D ~GEM_BIT(ENABLE_HS_MAC); =20 @@ -824,13 +828,14 @@ static void gem_shuffle_tx_one_ring(struct macb_queue= *queue) unsigned int head, tail, count, ring_size, desc_size; struct macb_tx_skb tx_skb, *skb_curr, *skb_next; struct macb_dma_desc *desc_curr, *desc_next; + unsigned int q =3D queue - queue->bp->queues; unsigned int i, cycles, shift, curr, next; + struct macb_context *ctx =3D queue->bp->ctx; struct macb_txq *txq =3D macb_txq(queue); - struct macb *bp =3D queue->bp; unsigned char desc[24]; unsigned long flags; =20 - desc_size =3D macb_dma_desc_get_size(bp); + desc_size =3D macb_dma_desc_get_size(queue->bp->caps); =20 if (WARN_ON_ONCE(desc_size > ARRAY_SIZE(desc))) return; @@ -838,7 +843,7 @@ static void gem_shuffle_tx_one_ring(struct macb_queue *= queue) spin_lock_irqsave(&queue->tx_ptr_lock, flags); head =3D txq->head; tail =3D txq->tail; - ring_size =3D bp->ctx->tx_ring_size; + ring_size =3D ctx->tx_ring_size; count =3D CIRC_CNT(head, tail, ring_size); =20 if (!(tail % ring_size)) @@ -854,7 +859,7 @@ static void gem_shuffle_tx_one_ring(struct macb_queue *= queue) cycles =3D gcd(ring_size, shift); =20 for (i =3D 0; i < cycles; i++) { - memcpy(&desc, macb_tx_desc(queue, i), desc_size); + memcpy(&desc, macb_tx_desc(ctx, q, i), desc_size); memcpy(&tx_skb, macb_tx_skb(queue, i), sizeof(struct macb_tx_skb)); =20 @@ -862,8 +867,8 @@ static void gem_shuffle_tx_one_ring(struct macb_queue *= queue) next =3D (curr + shift) % ring_size; =20 while (next !=3D i) { 
- desc_curr =3D macb_tx_desc(queue, curr); - desc_next =3D macb_tx_desc(queue, next); + desc_curr =3D macb_tx_desc(ctx, q, curr); + desc_next =3D macb_tx_desc(ctx, q, next); =20 memcpy(desc_curr, desc_next, desc_size); =20 @@ -880,7 +885,7 @@ static void gem_shuffle_tx_one_ring(struct macb_queue *= queue) next =3D (curr + shift) % ring_size; } =20 - desc_curr =3D macb_tx_desc(queue, curr); + desc_curr =3D macb_tx_desc(ctx, q, curr); memcpy(desc_curr, &desc, desc_size); if (i =3D=3D ring_size - 1) desc_curr->ctrl &=3D ~MACB_BIT(TX_WRAP); @@ -937,7 +942,7 @@ static void macb_mac_link_up(struct phylink_config *con= fig, =20 if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC)) { ctrl &=3D ~MACB_BIT(PAE); - if (macb_is_gem(bp)) { + if (macb_is_gem(bp->caps)) { ctrl &=3D ~GEM_BIT(GBE); =20 if (speed =3D=3D SPEED_1000) @@ -968,7 +973,7 @@ static void macb_mac_link_up(struct phylink_config *con= fig, =20 /* Enable Rx and Tx; Enable PTP unicast */ ctrl =3D macb_readl(bp, NCR); - if (gem_has_ptp(bp)) + if (gem_has_ptp(bp->caps)) ctrl |=3D MACB_BIT(PTPUNI); =20 macb_writel(bp, NCR, ctrl | MACB_BIT(RE) | MACB_BIT(TE)); @@ -1078,7 +1083,8 @@ static int macb_mii_probe(struct net_device *netdev) bp->phylink_config.supported_interfaces); =20 /* Determine what modes are supported */ - if (macb_is_gem(bp) && (bp->caps & MACB_CAPS_GIGABIT_MODE_AVAILABLE)) { + if (macb_is_gem(bp->caps) && + (bp->caps & MACB_CAPS_GIGABIT_MODE_AVAILABLE)) { bp->phylink_config.mac_capabilities |=3D MAC_1000FD; if (!(bp->caps & MACB_CAPS_NO_GIGABIT_HALF)) bp->phylink_config.mac_capabilities |=3D MAC_1000HD; @@ -1246,12 +1252,13 @@ static void macb_tx_unmap(struct macb *bp, struct m= acb_tx_skb *tx_skb, int budge } } =20 -static void macb_set_addr(struct macb *bp, struct macb_dma_desc *desc, dma= _addr_t addr) +static void macb_set_addr(struct macb_context *ctx, struct macb_dma_desc *= desc, + dma_addr_t addr) { - if (macb_dma64(bp)) { + if (macb_dma64(ctx->info->caps)) { struct macb_dma_desc_64 *desc_64; =20 - 
desc_64 =3D macb_64b_desc(bp, desc); + desc_64 =3D macb_64b_desc(desc); desc_64->addrh =3D upper_32_bits(addr); /* The low bits of RX address contain the RX_USED bit, clearing * of which allows packet RX. Make sure the high bits are also @@ -1263,18 +1270,19 @@ static void macb_set_addr(struct macb *bp, struct m= acb_dma_desc *desc, dma_addr_ desc->addr =3D lower_32_bits(addr); } =20 -static dma_addr_t macb_get_addr(struct macb *bp, struct macb_dma_desc *des= c) +static dma_addr_t macb_get_addr(struct macb_context *ctx, + struct macb_dma_desc *desc) { dma_addr_t addr =3D 0; =20 - if (macb_dma64(bp)) { + if (macb_dma64(ctx->info->caps)) { struct macb_dma_desc_64 *desc_64; =20 - desc_64 =3D macb_64b_desc(bp, desc); + desc_64 =3D macb_64b_desc(desc); addr =3D ((u64)(desc_64->addrh) << 32); } addr |=3D MACB_BF(RX_WADDR, MACB_BFEXT(RX_WADDR, desc->addr)); - if (macb_dma_ptp(bp)) + if (macb_dma_ptp(ctx->info->caps)) addr &=3D ~GEM_BIT(DMA_RXVALID); return addr; } @@ -1284,6 +1292,7 @@ static void macb_tx_error_task(struct work_struct *wo= rk) struct macb_queue *queue =3D container_of(work, struct macb_queue, tx_error_task); unsigned int q =3D queue - queue->bp->queues; + struct macb_context *ctx =3D queue->bp->ctx; struct macb_txq *txq =3D macb_txq(queue); struct macb *bp =3D queue->bp; struct macb_tx_skb *tx_skb; @@ -1326,7 +1335,7 @@ static void macb_tx_error_task(struct work_struct *wo= rk) for (tail =3D txq->tail; tail !=3D txq->head; tail++) { u32 ctrl; =20 - desc =3D macb_tx_desc(queue, tail); + desc =3D macb_tx_desc(ctx, q, tail); ctrl =3D desc->ctrl; tx_skb =3D macb_tx_skb(queue, tail); skb =3D tx_skb->skb; @@ -1345,7 +1354,7 @@ static void macb_tx_error_task(struct work_struct *wo= rk) */ if (!(ctrl & MACB_BIT(TX_BUF_EXHAUSTED))) { netdev_vdbg(bp->netdev, "txerr skb %u (data %p) TX complete\n", - macb_tx_ring_wrap(bp, tail), + macb_tx_ring_wrap(ctx, tail), skb->data); bp->netdev->stats.tx_packets++; queue->stats.tx_packets++; @@ -1373,8 +1382,8 @@ static void 
macb_tx_error_task(struct work_struct *wo= rk) packets, bytes); =20 /* Set end of TX queue */ - desc =3D macb_tx_desc(queue, 0); - macb_set_addr(bp, desc, 0); + desc =3D macb_tx_desc(ctx, q, 0); + macb_set_addr(ctx, desc, 0); desc->ctrl =3D MACB_BIT(TX_USED); =20 /* Make descriptor updates visible to hardware */ @@ -1436,6 +1445,7 @@ static int macb_tx_complete(struct macb_queue *queue,= int budget) struct macb *bp =3D queue->bp; struct macb_txq *txq =3D macb_txq(queue); unsigned int q =3D queue - bp->queues; + struct macb_context *ctx =3D bp->ctx; unsigned long flags; unsigned int tail; unsigned int head; @@ -1450,7 +1460,7 @@ static int macb_tx_complete(struct macb_queue *queue,= int budget) struct macb_dma_desc *desc; u32 ctrl; =20 - desc =3D macb_tx_desc(queue, tail); + desc =3D macb_tx_desc(ctx, q, tail); =20 /* Make hw descriptor updates visible to CPU */ rmb(); @@ -1475,7 +1485,7 @@ static int macb_tx_complete(struct macb_queue *queue,= int budget) gem_ptp_do_txstamp(bp, skb, desc); =20 netdev_vdbg(bp->netdev, "skb %u (data %p) TX complete\n", - macb_tx_ring_wrap(bp, tail), + macb_tx_ring_wrap(ctx, tail), skb->data); bp->netdev->stats.tx_packets++; queue->stats.tx_packets++; @@ -1513,53 +1523,53 @@ static int macb_tx_complete(struct macb_queue *queu= e, int budget) return packets; } =20 -static void gem_rx_refill(struct macb_queue *queue) +static void gem_rx_refill(struct macb_context *ctx, unsigned int q) { - struct macb_rxq *rxq =3D macb_rxq(queue); - struct macb *bp =3D queue->bp; + struct device *dev =3D &ctx->info->pdev->dev; + struct macb_rxq *rxq =3D &ctx->rxq[q]; struct macb_dma_desc *desc; struct sk_buff *skb; unsigned int entry; dma_addr_t paddr; =20 while (CIRC_SPACE(rxq->prepared_head, rxq->tail, - bp->ctx->rx_ring_size) > 0) { - entry =3D macb_rx_ring_wrap(bp, rxq->prepared_head); + ctx->rx_ring_size) > 0) { + entry =3D macb_rx_ring_wrap(ctx, rxq->prepared_head); =20 /* Make hw descriptor updates visible to CPU */ rmb(); =20 - desc =3D 
macb_rx_desc(queue, entry); + desc =3D macb_rx_desc(ctx, q, entry); =20 if (!rxq->skbuff[entry]) { /* allocate sk_buff for this free entry in ring */ - skb =3D netdev_alloc_skb(bp->netdev, - bp->ctx->rx_buffer_size); + skb =3D netdev_alloc_skb(ctx->info->netdev, + ctx->rx_buffer_size); if (unlikely(!skb)) { - netdev_err(bp->netdev, + netdev_err(ctx->info->netdev, "Unable to allocate sk_buff\n"); break; } =20 /* now fill corresponding descriptor entry */ - paddr =3D dma_map_single(&bp->pdev->dev, skb->data, - bp->ctx->rx_buffer_size, + paddr =3D dma_map_single(dev, skb->data, + ctx->rx_buffer_size, DMA_FROM_DEVICE); - if (dma_mapping_error(&bp->pdev->dev, paddr)) { + if (dma_mapping_error(dev, paddr)) { dev_kfree_skb(skb); break; } =20 rxq->skbuff[entry] =3D skb; =20 - if (entry =3D=3D bp->ctx->rx_ring_size - 1) + if (entry =3D=3D ctx->rx_ring_size - 1) paddr |=3D MACB_BIT(RX_WRAP); desc->ctrl =3D 0; /* Setting addr clears RX_USED and allows reception, * make sure ctrl is cleared first to avoid a race. */ dma_wmb(); - macb_set_addr(bp, desc, paddr); + macb_set_addr(ctx, desc, paddr); =20 /* Properly align Ethernet header. * @@ -1572,7 +1582,7 @@ static void gem_rx_refill(struct macb_queue *queue) * setting the low 2/3 bits. * It is 3 bits if HW_DMA_CAP_PTP, else 2 bits. 
*/ - if (!(bp->caps & MACB_CAPS_RSC)) + if (!(ctx->info->caps & MACB_CAPS_RSC)) skb_reserve(skb, NET_IP_ALIGN); } else { desc->ctrl =3D 0; @@ -1585,18 +1595,21 @@ static void gem_rx_refill(struct macb_queue *queue) /* Make descriptor updates visible to hardware */ wmb(); =20 - netdev_vdbg(bp->netdev, "rx ring: queue: %p, prepared head %d, tail %d\n", - queue, rxq->prepared_head, rxq->tail); + netdev_vdbg(ctx->info->netdev, + "rx ring: queue: %u, prepared head %d, tail %d\n", + q, rxq->prepared_head, rxq->tail); } =20 /* Mark DMA descriptors from begin up to and not including end as unused */ static void discard_partial_frame(struct macb_queue *queue, unsigned int b= egin, unsigned int end) { + unsigned int q =3D queue - queue->bp->queues; + struct macb_context *ctx =3D queue->bp->ctx; unsigned int frag; =20 for (frag =3D begin; frag !=3D end; frag++) { - struct macb_dma_desc *desc =3D macb_rx_desc(queue, frag); + struct macb_dma_desc *desc =3D macb_rx_desc(ctx, q, frag); =20 desc->addr &=3D ~MACB_BIT(RX_USED); } @@ -1613,6 +1626,8 @@ static void discard_partial_frame(struct macb_queue *= queue, unsigned int begin, static int gem_rx(struct macb_queue *queue, struct napi_struct *napi, int budget) { + unsigned int q =3D queue - queue->bp->queues; + struct macb_context *ctx =3D queue->bp->ctx; struct macb_rxq *rxq =3D macb_rxq(queue); struct macb *bp =3D queue->bp; struct macb_dma_desc *desc; @@ -1626,14 +1641,14 @@ static int gem_rx(struct macb_queue *queue, struct = napi_struct *napi, dma_addr_t addr; bool rxused; =20 - entry =3D macb_rx_ring_wrap(bp, rxq->tail); - desc =3D macb_rx_desc(queue, entry); + entry =3D macb_rx_ring_wrap(ctx, rxq->tail); + desc =3D macb_rx_desc(ctx, q, entry); =20 /* Make hw descriptor updates visible to CPU */ rmb(); =20 rxused =3D (desc->addr & MACB_BIT(RX_USED)) ? 
			 true : false;
-		addr = macb_get_addr(bp, desc);
+		addr = macb_get_addr(ctx, desc);
 
 		if (!rxused)
 			break;
@@ -1697,7 +1712,7 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
 		napi_gro_receive(napi, skb);
 	}
 
-	gem_rx_refill(queue);
+	gem_rx_refill(ctx, q);
 
 	return count;
 }
@@ -1705,6 +1720,8 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
 static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
 			 unsigned int first_frag, unsigned int last_frag)
 {
+	unsigned int q = queue - queue->bp->queues;
+	struct macb_context *ctx = queue->bp->ctx;
 	struct macb *bp = queue->bp;
 	struct macb_dma_desc *desc;
 	unsigned int offset;
@@ -1712,12 +1729,12 @@ static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
 	unsigned int frag;
 	unsigned int len;
 
-	desc = macb_rx_desc(queue, last_frag);
+	desc = macb_rx_desc(ctx, q, last_frag);
 	len = desc->ctrl & bp->rx_frm_len_mask;
 
 	netdev_vdbg(bp->netdev, "macb_rx_frame frags %u - %u (len %u)\n",
-		    macb_rx_ring_wrap(bp, first_frag),
-		    macb_rx_ring_wrap(bp, last_frag), len);
+		    macb_rx_ring_wrap(ctx, first_frag),
+		    macb_rx_ring_wrap(ctx, last_frag), len);
 
 	/* The ethernet header starts NET_IP_ALIGN bytes into the
 	 * first buffer.
 Since the header is 14 bytes, this makes the
@@ -1731,7 +1748,7 @@ static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
 	if (!skb) {
 		bp->netdev->stats.rx_dropped++;
 		for (frag = first_frag; ; frag++) {
-			desc = macb_rx_desc(queue, frag);
+			desc = macb_rx_desc(ctx, q, frag);
 			desc->addr &= ~MACB_BIT(RX_USED);
 			if (frag == last_frag)
 				break;
@@ -1762,7 +1779,7 @@ static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
 			       macb_rx_buffer(queue, frag), frag_len);
 		offset += bp->ctx->rx_buffer_size;
-		desc = macb_rx_desc(queue, frag);
+		desc = macb_rx_desc(ctx, q, frag);
 		desc->addr &= ~MACB_BIT(RX_USED);
 
 		if (frag == last_frag)
@@ -1784,20 +1801,19 @@ static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
 	return 0;
 }
 
-static inline void macb_init_rx_ring(struct macb_queue *queue)
+static inline void macb_init_rx_ring(struct macb_context *ctx, unsigned int q)
 {
-	struct macb_rxq *rxq = macb_rxq(queue);
+	struct macb_rxq *rxq = &ctx->rxq[q];
 	struct macb_dma_desc *desc = NULL;
-	struct macb *bp = queue->bp;
 	dma_addr_t addr;
 	int i;
 
 	addr = rxq->buffers_dma;
-	for (i = 0; i < bp->ctx->rx_ring_size; i++) {
-		desc = macb_rx_desc(queue, i);
-		macb_set_addr(bp, desc, addr);
+	for (i = 0; i < ctx->rx_ring_size; i++) {
+		desc = macb_rx_desc(ctx, q, i);
+		macb_set_addr(ctx, desc, addr);
 		desc->ctrl = 0;
-		addr += bp->ctx->rx_buffer_size;
+		addr += ctx->rx_buffer_size;
 	}
 	desc->addr |= MACB_BIT(RX_WRAP);
 	rxq->tail = 0;
@@ -1806,6 +1822,8 @@ static inline void macb_init_rx_ring(struct macb_queue *queue)
 static int macb_rx(struct macb_queue *queue, struct napi_struct *napi,
 		   int budget)
 {
+	unsigned int q = queue - queue->bp->queues;
+	struct macb_context *ctx = queue->bp->ctx;
 	struct macb_rxq *rxq = macb_rxq(queue);
 	struct macb *bp = queue->bp;
 	bool reset_rx_queue = false;
@@ -1814,7 +1832,7 @@ static int macb_rx(struct macb_queue *queue, struct napi_struct *napi,
 	int received = 0;
 
 	for (tail = rxq->tail; budget > 0; tail++) {
-		struct macb_dma_desc *desc = macb_rx_desc(queue, tail);
+		struct macb_dma_desc *desc = macb_rx_desc(ctx, q, tail);
 		u32 ctrl;
 
 		/* Make hw descriptor updates visible to CPU */
@@ -1866,7 +1884,7 @@ static int macb_rx(struct macb_queue *queue, struct napi_struct *napi,
 		ctrl = macb_readl(bp, NCR);
 		macb_writel(bp, NCR, ctrl & ~MACB_BIT(RE));
 
-		macb_init_rx_ring(queue);
+		macb_init_rx_ring(ctx, q);
 		queue_writel(queue, RBQP, rxq->ring_dma);
 
 		macb_writel(bp, NCR, ctrl | MACB_BIT(RE));
@@ -1885,13 +1903,14 @@ static int macb_rx(struct macb_queue *queue, struct napi_struct *napi,
 
 static bool macb_rx_pending(struct macb_queue *queue)
 {
+	unsigned int q = queue - queue->bp->queues;
+	struct macb_context *ctx = queue->bp->ctx;
 	struct macb_rxq *rxq = macb_rxq(queue);
-	struct macb *bp = queue->bp;
 	struct macb_dma_desc *desc;
 	unsigned int entry;
 
-	entry = macb_rx_ring_wrap(bp, rxq->tail);
-	desc = macb_rx_desc(queue, entry);
+	entry = macb_rx_ring_wrap(ctx, rxq->tail);
+	desc = macb_rx_desc(ctx, q, entry);
 
 	/* Make hw descriptor updates visible to CPU */
 	rmb();
@@ -1939,6 +1958,7 @@ static int macb_rx_poll(struct napi_struct *napi, int budget)
 
 static void macb_tx_restart(struct macb_queue *queue)
 {
+	struct macb_context *ctx = queue->bp->ctx;
 	struct macb_txq *txq = macb_txq(queue);
 	struct macb *bp = queue->bp;
 	unsigned int head_idx, tbqp;
@@ -1949,9 +1969,9 @@ static void macb_tx_restart(struct macb_queue *queue)
 	if (txq->head == txq->tail)
 		goto out_tx_ptr_unlock;
 
-	tbqp = queue_readl(queue, TBQP) / macb_dma_desc_get_size(bp);
-	tbqp = macb_adj_dma_desc_idx(bp, macb_tx_ring_wrap(bp, tbqp));
-	head_idx = macb_adj_dma_desc_idx(bp, macb_tx_ring_wrap(bp, txq->head));
+	tbqp = queue_readl(queue, TBQP) / macb_dma_desc_get_size(ctx->info->caps);
+	tbqp = macb_adj_dma_desc_idx(ctx, macb_tx_ring_wrap(ctx, tbqp));
+	head_idx = macb_adj_dma_desc_idx(ctx, macb_tx_ring_wrap(ctx, txq->head));
 
 	if (tbqp == head_idx)
 		goto out_tx_ptr_unlock;
@@ -1966,6 +1986,8 @@ static void macb_tx_restart(struct macb_queue *queue)
 
 static bool macb_tx_complete_pending(struct macb_queue *queue)
 {
+	unsigned int q = queue - queue->bp->queues;
+	struct macb_context *ctx = queue->bp->ctx;
 	struct macb_txq *txq = macb_txq(queue);
 	bool retval = false;
 	unsigned long flags;
@@ -1975,7 +1997,7 @@ static bool macb_tx_complete_pending(struct macb_queue *queue)
 		/* Make hw descriptor updates visible to CPU */
 		rmb();
 
-		if (macb_tx_desc(queue, txq->tail)->ctrl & MACB_BIT(TX_USED))
+		if (macb_tx_desc(ctx, q, txq->tail)->ctrl & MACB_BIT(TX_USED))
 			retval = true;
 	}
 	spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
@@ -2029,6 +2051,7 @@ static void macb_hresp_error_task(struct work_struct *work)
 {
 	struct macb *bp = from_work(bp, work, hresp_err_bh_work);
 	struct net_device *netdev = bp->netdev;
+	struct macb_context *ctx = bp->ctx;
 	struct macb_queue *queue;
 	unsigned int q;
 	u32 ctrl;
@@ -2045,7 +2068,7 @@ static void macb_hresp_error_task(struct work_struct *work)
 	netif_tx_stop_all_queues(netdev);
 	netif_carrier_off(netdev);
 
-	bp->macbgem_ops.mog_init_rings(bp);
+	bp->macbgem_ops.mog_init_rings(ctx);
 
 	/* Initialize TX and RX buffers */
 	macb_init_buffers(bp);
@@ -2218,7 +2241,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
 		if (status & MACB_BIT(ISR_ROVR)) {
 			/* We missed at least one packet */
 			spin_lock(&bp->stats_lock);
-			if (macb_is_gem(bp))
+			if (macb_is_gem(bp->caps))
 				bp->hw_stats.gem.rx_overruns++;
 			else
 				bp->hw_stats.macb.rx_overruns++;
@@ -2270,6 +2293,8 @@ static unsigned int macb_tx_map(struct macb *bp,
 	unsigned int f, nr_frags = skb_shinfo(skb)->nr_frags;
 	unsigned int len, i, tx_head = txq->head;
 	u32 ctrl, lso_ctrl = 0, seq_ctrl = 0;
+	unsigned int q = queue - bp->queues;
+	struct macb_context *ctx = bp->ctx;
 	unsigned int eof = 1, mss_mfs = 0;
 	struct macb_tx_skb *tx_skb = NULL;
 	struct macb_dma_desc *desc;
@@ -2360,7 +2385,7 @@ static unsigned int macb_tx_map(struct macb *bp,
 	 */
 	i = tx_head;
 	ctrl = MACB_BIT(TX_USED);
-	desc = macb_tx_desc(queue, i);
+	desc = macb_tx_desc(ctx, q, i);
 	desc->ctrl = ctrl;
 
 	if (lso_ctrl) {
@@ -2381,14 +2406,14 @@ static unsigned int macb_tx_map(struct macb *bp,
 	do {
 		i--;
 		tx_skb = macb_tx_skb(queue, i);
-		desc = macb_tx_desc(queue, i);
+		desc = macb_tx_desc(ctx, q, i);
 
 		ctrl = (u32)tx_skb->size;
 		if (eof) {
 			ctrl |= MACB_BIT(TX_LAST);
 			eof = 0;
 		}
-		if (unlikely(macb_tx_ring_wrap(bp, i) ==
+		if (unlikely(macb_tx_ring_wrap(ctx, i) ==
 			     bp->ctx->tx_ring_size - 1))
 			ctrl |= MACB_BIT(TX_WRAP);
 
@@ -2407,7 +2432,7 @@ static unsigned int macb_tx_map(struct macb *bp,
 		ctrl |= MACB_BF(MSS_MFS, mss_mfs);
 
 	/* Set TX buffer descriptor */
-	macb_set_addr(bp, desc, tx_skb->mapping);
+	macb_set_addr(ctx, desc, tx_skb->mapping);
 	/* desc->addr must be visible to hardware before clearing
 	 * 'TX_USED' bit in desc->ctrl.
 	 */
@@ -2558,7 +2583,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb,
 		return ret;
 	}
 
-	if (macb_dma_ptp(bp) &&
+	if (macb_dma_ptp(bp->caps) &&
 	    (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))
 		skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
 
@@ -2645,7 +2670,7 @@ static unsigned int macb_rx_buffer_size(struct macb *bp, unsigned int mtu)
 {
 	unsigned int size;
 
-	if (!macb_is_gem(bp)) {
+	if (!macb_is_gem(bp->caps)) {
 		size = MACB_RX_BUFFER_SIZE;
 	} else {
 		size = mtu + ETH_HLEN + ETH_FCS_LEN + NET_IP_ALIGN;
@@ -2663,33 +2688,32 @@ static unsigned int macb_rx_buffer_size(struct macb *bp, unsigned int mtu)
 	return size;
 }
 
-static void gem_free_rx_buffers(struct macb *bp)
+static void gem_free_rx_buffers(struct macb_context *ctx)
 {
+	struct device *dev = &ctx->info->pdev->dev;
 	struct macb_dma_desc *desc;
-	struct macb_queue *queue;
 	struct macb_rxq *rxq;
 	struct sk_buff *skb;
 	dma_addr_t addr;
 	unsigned int q;
 	int i;
 
-	for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
-		rxq = &bp->ctx->rxq[q];
+	for (q = 0; q < ctx->info->num_queues; ++q) {
+		rxq = &ctx->rxq[q];
 
 		if (!rxq->skbuff)
 			continue;
 
-		for (i = 0; i < bp->ctx->rx_ring_size; i++) {
+		for (i = 0; i < ctx->rx_ring_size; i++) {
 			skb = rxq->skbuff[i];
 
 			if (!skb)
 				continue;
 
-			desc = macb_rx_desc(queue, i);
-			addr = macb_get_addr(bp, desc);
+			desc = macb_rx_desc(ctx, q, i);
+			addr = macb_get_addr(ctx, desc);
 
-			dma_unmap_single(&bp->pdev->dev, addr,
-					 bp->ctx->rx_buffer_size,
+			dma_unmap_single(dev, addr, ctx->rx_buffer_size,
 					 DMA_FROM_DEVICE);
 			dev_kfree_skb_any(skb);
 			skb = NULL;
@@ -2700,52 +2724,52 @@ static void gem_free_rx_buffers(struct macb *bp)
 	}
 }
 
-static void macb_free_rx_buffers(struct macb *bp)
+static void macb_free_rx_buffers(struct macb_context *ctx)
 {
-	struct macb_rxq *rxq = &bp->ctx->rxq[0];
+	struct device *dev = &ctx->info->pdev->dev;
+	struct macb_rxq *rxq = &ctx->rxq[0];
 
 	if (rxq->buffers) {
-		dma_free_coherent(&bp->pdev->dev,
-				  bp->ctx->rx_ring_size *
-				  bp->ctx->rx_buffer_size,
+		dma_free_coherent(dev,
				  ctx->rx_ring_size * ctx->rx_buffer_size,
 				  rxq->buffers, rxq->buffers_dma);
 		rxq->buffers = NULL;
 	}
 }
 
-static unsigned int macb_tx_ring_size_per_queue(struct macb *bp)
+static unsigned int macb_tx_ring_size_per_queue(struct macb_context *ctx)
 {
-	return macb_dma_desc_get_size(bp) * bp->ctx->tx_ring_size +
-	       bp->tx_bd_rd_prefetch;
+	return macb_dma_desc_get_size(ctx->info->caps) * ctx->tx_ring_size +
+	       ctx->info->tx_bd_rd_prefetch;
 }
 
-static unsigned int macb_rx_ring_size_per_queue(struct macb *bp)
+static unsigned int macb_rx_ring_size_per_queue(struct macb_context *ctx)
 {
-	return macb_dma_desc_get_size(bp) * bp->ctx->rx_ring_size +
-	       bp->rx_bd_rd_prefetch;
+	return macb_dma_desc_get_size(ctx->info->caps) * ctx->rx_ring_size +
+	       ctx->info->rx_bd_rd_prefetch;
 }
 
-static void macb_free_consistent(struct macb *bp)
+static void macb_free_consistent(struct macb_context *ctx)
 {
-	struct device *dev = &bp->pdev->dev;
+	struct device *dev = &ctx->info->pdev->dev;
 	struct macb_txq *txq;
 	struct macb_rxq *rxq;
 	unsigned int q;
 	size_t size;
 
-	bp->macbgem_ops.mog_free_rx_buffers(bp);
+	ctx->info->macbgem_ops.mog_free_rx_buffers(ctx);
 
-	txq = &bp->ctx->txq[0];
-	size = bp->num_queues * macb_tx_ring_size_per_queue(bp);
+	txq = &ctx->txq[0];
+	size = ctx->info->num_queues * macb_tx_ring_size_per_queue(ctx);
 	dma_free_coherent(dev, size, txq->ring, txq->ring_dma);
 
-	rxq = &bp->ctx->rxq[0];
-	size = bp->num_queues * macb_rx_ring_size_per_queue(bp);
+	rxq = &ctx->rxq[0];
+	size = ctx->info->num_queues * macb_rx_ring_size_per_queue(ctx);
 	dma_free_coherent(dev, size, rxq->ring, rxq->ring_dma);
 
-	for (q = 0; q < bp->num_queues; ++q) {
-		txq = &bp->ctx->txq[q];
-		rxq = &bp->ctx->rxq[q];
+	for (q = 0; q < ctx->info->num_queues; ++q) {
+		txq = &ctx->txq[q];
+		rxq = &ctx->rxq[q];
 
 		kfree(txq->skb);
 		txq->skb = NULL;
@@ -2754,46 +2778,48 @@ static void macb_free_consistent(struct macb *bp)
 	}
 }
 
-static int gem_alloc_rx_buffers(struct macb *bp)
+static int gem_alloc_rx_buffers(struct macb_context *ctx)
 {
 	struct macb_rxq *rxq;
 	unsigned int q;
 	int size;
 
-	for (q = 0; q < bp->num_queues; ++q) {
-		rxq = &bp->ctx->rxq[q];
-		size = bp->ctx->rx_ring_size * sizeof(struct sk_buff *);
+	for (q = 0; q < ctx->info->num_queues; ++q) {
+		rxq = &ctx->rxq[q];
+		size = ctx->rx_ring_size * sizeof(struct sk_buff *);
 		rxq->skbuff = kzalloc(size, GFP_KERNEL);
 		if (!rxq->skbuff)
 			return -ENOMEM;
 		else
-			netdev_dbg(bp->netdev,
+			netdev_dbg(ctx->info->netdev,
 				   "Allocated %d RX struct sk_buff entries at %p\n",
-				   bp->ctx->rx_ring_size, rxq->skbuff);
+				   ctx->rx_ring_size, rxq->skbuff);
 	}
 	return 0;
 }
 
-static int macb_alloc_rx_buffers(struct macb *bp)
+static int macb_alloc_rx_buffers(struct macb_context *ctx)
 {
-	struct macb_rxq *rxq = &bp->ctx->rxq[0];
+	struct device *dev = &ctx->info->pdev->dev;
+	struct macb_rxq *rxq = &ctx->rxq[0];
 	int size;
 
-	size = bp->ctx->rx_ring_size * bp->ctx->rx_buffer_size;
-	rxq->buffers = dma_alloc_coherent(&bp->pdev->dev, size,
+	size = ctx->rx_ring_size * ctx->rx_buffer_size;
+	rxq->buffers = dma_alloc_coherent(dev, size,
 					  &rxq->buffers_dma, GFP_KERNEL);
 	if (!rxq->buffers)
 		return -ENOMEM;
 
-	netdev_dbg(bp->netdev,
+	netdev_dbg(ctx->info->netdev,
 		   "Allocated RX buffers of %d bytes at %08lx (mapped %p)\n",
 		   size, (unsigned long)rxq->buffers_dma, rxq->buffers);
 	return 0;
 }
 
-static int macb_alloc_consistent(struct macb *bp)
+static int macb_alloc_consistent(struct macb_context *ctx)
 {
-	struct device *dev = &bp->pdev->dev;
+	unsigned int num_queues = ctx->info->num_queues;
+	struct device *dev = &ctx->info->pdev->dev;
 	dma_addr_t tx_dma, rx_dma;
 	struct macb_txq *txq;
 	struct macb_rxq *rxq;
@@ -2808,89 +2834,90 @@ static int macb_alloc_consistent(struct macb *bp)
 	 * natural alignment of physical addresses.
 	 */
 
-	size = bp->num_queues * macb_tx_ring_size_per_queue(bp);
+	size = num_queues * macb_tx_ring_size_per_queue(ctx);
 	tx = dma_alloc_coherent(dev, size, &tx_dma, GFP_KERNEL);
 	if (!tx || upper_32_bits(tx_dma) != upper_32_bits(tx_dma + size - 1))
 		goto out_err;
-	netdev_dbg(bp->netdev, "Allocated %zu bytes for %u TX rings at %08lx (mapped %p)\n",
-		   size, bp->num_queues, (unsigned long)tx_dma, tx);
+	netdev_dbg(ctx->info->netdev,
+		   "Allocated %zu bytes for %u TX rings at %08lx (mapped %p)\n",
+		   size, num_queues, (unsigned long)tx_dma, tx);
 
-	size = bp->num_queues * macb_rx_ring_size_per_queue(bp);
+	size = num_queues * macb_rx_ring_size_per_queue(ctx);
 	rx = dma_alloc_coherent(dev, size, &rx_dma, GFP_KERNEL);
 	if (!rx || upper_32_bits(rx_dma) != upper_32_bits(rx_dma + size - 1))
 		goto out_err;
-	netdev_dbg(bp->netdev, "Allocated %zu bytes for %u RX rings at %08lx (mapped %p)\n",
-		   size, bp->num_queues, (unsigned long)rx_dma, rx);
+	netdev_dbg(ctx->info->netdev,
+		   "Allocated %zu bytes for %u RX rings at %08lx (mapped %p)\n",
+		   size, num_queues, (unsigned long)rx_dma, rx);
 
-	for (q = 0; q < bp->num_queues; ++q) {
-		txq = &bp->ctx->txq[q];
-		rxq = &bp->ctx->rxq[q];
+	for (q = 0; q < num_queues; ++q) {
+		txq = &ctx->txq[q];
+		rxq = &ctx->rxq[q];
 
-		txq->ring = tx + macb_tx_ring_size_per_queue(bp) * q;
-		txq->ring_dma = tx_dma + macb_tx_ring_size_per_queue(bp) * q;
+		txq->ring = tx + macb_tx_ring_size_per_queue(ctx) * q;
+		txq->ring_dma = tx_dma + macb_tx_ring_size_per_queue(ctx) * q;
 
-		rxq->ring = rx + macb_rx_ring_size_per_queue(bp) * q;
-		rxq->ring_dma = rx_dma + macb_rx_ring_size_per_queue(bp) * q;
+		rxq->ring = rx + macb_rx_ring_size_per_queue(ctx) * q;
+		rxq->ring_dma = rx_dma + macb_rx_ring_size_per_queue(ctx) * q;
 
-		size = bp->ctx->tx_ring_size * sizeof(struct macb_tx_skb);
+		size = ctx->tx_ring_size * sizeof(struct macb_tx_skb);
 		txq->skb = kmalloc(size, GFP_KERNEL);
 		if (!txq->skb)
 			goto out_err;
 	}
-	if (bp->macbgem_ops.mog_alloc_rx_buffers(bp))
+	if (ctx->info->macbgem_ops.mog_alloc_rx_buffers(ctx))
 		goto out_err;
 
 	return 0;
 
 out_err:
-	macb_free_consistent(bp);
+	macb_free_consistent(ctx);
 	return -ENOMEM;
 }
 
-static void gem_init_rx_ring(struct macb_queue *queue)
+static void gem_init_rx_ring(struct macb_context *ctx, unsigned int q)
 {
-	struct macb_rxq *rxq = macb_rxq(queue);
+	struct macb_rxq *rxq = &ctx->rxq[q];
 
 	rxq->tail = 0;
 	rxq->prepared_head = 0;
 
-	gem_rx_refill(queue);
+	gem_rx_refill(ctx, q);
 }
 
-static void gem_init_rings(struct macb *bp)
+static void gem_init_rings(struct macb_context *ctx)
 {
-	struct macb_queue *queue;
 	struct macb_dma_desc *desc = NULL;
 	struct macb_txq *txq;
 	unsigned int q;
 	int i;
 
-	for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
-		txq = &bp->ctx->txq[q];
-		for (i = 0; i < bp->ctx->tx_ring_size; i++) {
-			desc = macb_tx_desc(queue, i);
-			macb_set_addr(bp, desc, 0);
+	for (q = 0; q < ctx->info->num_queues; ++q) {
+		txq = &ctx->txq[q];
+		for (i = 0; i < ctx->tx_ring_size; i++) {
+			desc = macb_tx_desc(ctx, q, i);
+			macb_set_addr(ctx, desc, 0);
 			desc->ctrl = MACB_BIT(TX_USED);
 		}
 		desc->ctrl |= MACB_BIT(TX_WRAP);
 		txq->head = 0;
 		txq->tail = 0;
 
-		gem_init_rx_ring(queue);
+		gem_init_rx_ring(ctx, q);
 	}
 }
 
-static void macb_init_rings(struct macb *bp)
+static void macb_init_rings(struct macb_context *ctx)
 {
-	struct macb_txq *txq = &bp->ctx->txq[0];
+	struct macb_txq *txq = &ctx->txq[0];
 	struct macb_dma_desc *desc = NULL;
 	int i;
 
-	macb_init_rx_ring(&bp->queues[0]);
+	macb_init_rx_ring(ctx, 0);
 
-	for (i = 0; i < bp->ctx->tx_ring_size; i++) {
-		desc = macb_tx_desc(&bp->queues[0], i);
-		macb_set_addr(bp, desc, 0);
+	for (i = 0; i < ctx->tx_ring_size; i++) {
+		desc = macb_tx_desc(ctx, 0, i);
+		macb_set_addr(ctx, desc, 0);
 		desc->ctrl = MACB_BIT(TX_USED);
 	}
 	txq->head = 0;
@@ -2960,7 +2987,7 @@ static u32 macb_mdc_clk_div(struct macb *bp)
 	u32 config;
 	unsigned long pclk_hz;
 
-	if (macb_is_gem(bp))
+	if (macb_is_gem(bp->caps))
 		return gem_mdc_clk_div(bp);
 
 	pclk_hz = clk_get_rate(bp->pclk);
@@ -2982,7 +3009,7 @@ static u32 macb_mdc_clk_div(struct macb *bp)
  */
 static u32 macb_dbw(struct macb *bp)
 {
-	if (!macb_is_gem(bp))
+	if (!macb_is_gem(bp->caps))
 		return 0;
 
 	switch (GEM_BFEXT(DBWDEF, gem_readl(bp, DCFG1))) {
@@ -3011,7 +3038,7 @@ static void macb_configure_dma(struct macb *bp)
 	u32 dmacfg;
 
 	buffer_size = bp->ctx->rx_buffer_size / RX_BUFFER_MULTIPLE;
-	if (macb_is_gem(bp)) {
+	if (macb_is_gem(bp->caps)) {
 		dmacfg = gem_readl(bp, DMACFG) & ~GEM_BF(RXBS, -1L);
 		for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
 			if (q)
@@ -3035,9 +3062,9 @@ static void macb_configure_dma(struct macb *bp)
 			dmacfg &= ~GEM_BIT(TXCOEN);
 
 		dmacfg &= ~GEM_BIT(ADDR64);
-		if (macb_dma64(bp))
+		if (macb_dma64(bp->caps))
 			dmacfg |= GEM_BIT(ADDR64);
-		if (macb_dma_ptp(bp))
+		if (macb_dma_ptp(bp->caps))
 			dmacfg |= GEM_BIT(RXEXT) | GEM_BIT(TXEXT);
 		netdev_dbg(bp->netdev, "Cadence configure DMA with 0x%08x\n",
 			   dmacfg);
@@ -3065,7 +3092,7 @@ static void macb_init_hw(struct macb *bp)
 		config |= MACB_BIT(BIG);	/* Receive oversized frames */
 	if (bp->netdev->flags & IFF_PROMISC)
 		config |= MACB_BIT(CAF);	/* Copy All Frames */
-	else if (macb_is_gem(bp) && bp->netdev->features & NETIF_F_RXCSUM)
+	else if (macb_is_gem(bp->caps) && bp->netdev->features & NETIF_F_RXCSUM)
 		config |= GEM_BIT(RXCOEN);
 	if (!(bp->netdev->flags & IFF_BROADCAST))
 		config |= MACB_BIT(NBC);	/* No BroadCast */
@@ -3173,14 +3200,14 @@ static void macb_set_rx_mode(struct net_device *netdev)
 		cfg |= MACB_BIT(CAF);
 
 		/* Disable RX checksum offload */
-		if (macb_is_gem(bp))
+		if (macb_is_gem(bp->caps))
 			cfg &= ~GEM_BIT(RXCOEN);
 	} else {
 		/* Disable promiscuous mode */
 		cfg &= ~MACB_BIT(CAF);
 
 		/* Enable RX checksum offload only if requested */
-		if (macb_is_gem(bp) && netdev->features & NETIF_F_RXCSUM)
+		if (macb_is_gem(bp->caps) && netdev->features & NETIF_F_RXCSUM)
			cfg |= GEM_BIT(RXCOEN);
 	}
 
@@ -3222,19 +3249,21 @@ static int macb_open(struct net_device *netdev)
 		goto pm_exit;
 	}
 
+	bp->ctx->info = &bp->info;
+
 	/* RX buffers initialization */
 	bp->ctx->rx_buffer_size = macb_rx_buffer_size(bp, netdev->mtu);
 	bp->ctx->rx_ring_size = bp->configured_rx_ring_size;
 	bp->ctx->tx_ring_size = bp->configured_tx_ring_size;
 
-	err = macb_alloc_consistent(bp);
+	err = macb_alloc_consistent(bp->ctx);
 	if (err) {
 		netdev_err(netdev, "Unable to allocate DMA memory (error %d)\n",
 			   err);
 		goto free_ctx;
 	}
 
-	bp->macbgem_ops.mog_init_rings(bp);
+	bp->macbgem_ops.mog_init_rings(bp->ctx);
 	macb_init_buffers(bp);
 
 	for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
@@ -3272,7 +3301,7 @@ static int macb_open(struct net_device *netdev)
 		napi_disable(&queue->napi_rx);
 		napi_disable(&queue->napi_tx);
 	}
-	macb_free_consistent(bp);
+	macb_free_consistent(bp->ctx);
 free_ctx:
 	kfree(bp->ctx);
 	bp->ctx = NULL;
@@ -3308,7 +3337,7 @@ static int macb_close(struct net_device *netdev)
 	netif_carrier_off(netdev);
 	spin_unlock_irqrestore(&bp->lock, flags);
 
-	macb_free_consistent(bp);
+	macb_free_consistent(bp->ctx);
 	kfree(bp->ctx);
 	bp->ctx = NULL;
 
@@ -3461,7 +3490,7 @@ static void macb_get_stats(struct net_device *netdev,
 	struct macb_stats *hwstat = &bp->hw_stats.macb;
 
 	netdev_stats_to_stats64(nstat, &bp->netdev->stats);
-	if (macb_is_gem(bp)) {
+	if (macb_is_gem(bp->caps)) {
 		gem_get_stats(bp, nstat);
 		return;
 	}
@@ -3684,8 +3713,8 @@ static void macb_get_regs(struct net_device *netdev, struct ethtool_regs *regs,
 	regs->version = (macb_readl(bp, MID) & ((1 << MACB_REV_SIZE) - 1))
 			| MACB_GREGS_VERSION;
 
-	tail = macb_tx_ring_wrap(bp, txq->tail);
-	head = macb_tx_ring_wrap(bp, txq->head);
+	tail = macb_tx_ring_wrap(bp->ctx, txq->tail);
+	head = macb_tx_ring_wrap(bp->ctx, txq->head);
 
 	regs_buff[0] = macb_readl(bp, NCR);
 	regs_buff[1] = macb_or_gem_readl(bp, NCFGR);
@@ -3703,7 +3732,7 @@ static void macb_get_regs(struct net_device *netdev, struct ethtool_regs *regs,
 
 	if (!(bp->caps & MACB_CAPS_USRIO_DISABLED))
 		regs_buff[12] = macb_or_gem_readl(bp, USRIO);
-	if (macb_is_gem(bp))
+	if (macb_is_gem(bp->caps))
 		regs_buff[13] = gem_readl(bp, DMACFG);
 }
 
@@ -3835,7 +3864,7 @@ static int gem_get_ts_info(struct net_device *netdev,
 {
 	struct macb *bp = netdev_priv(netdev);
 
-	if (!macb_dma_ptp(bp)) {
+	if (!macb_dma_ptp(bp->caps)) {
 		ethtool_op_get_ts_info(netdev, info);
 		return 0;
 	}
@@ -3936,7 +3965,7 @@ static void gem_prog_cmp_regs(struct macb *bp, struct ethtool_rx_flow_spec *fs)
 	bool cmp_b = false;
 	bool cmp_c = false;
 
-	if (!macb_is_gem(bp))
+	if (!macb_is_gem(bp->caps))
 		return;
 
 	tp4sp_v = &(fs->h_u.tcp_ip4_spec);
@@ -4297,7 +4326,7 @@ static inline void macb_set_txcsum_feature(struct macb *bp,
 {
 	u32 val;
 
-	if (!macb_is_gem(bp))
+	if (!macb_is_gem(bp->caps))
 		return;
 
 	val = gem_readl(bp, DMACFG);
@@ -4315,7 +4344,7 @@ static inline void macb_set_rxcsum_feature(struct macb *bp,
 	struct net_device *netdev = bp->netdev;
 	u32 val;
 
-	if (!macb_is_gem(bp))
+	if (!macb_is_gem(bp->caps))
 		return;
 
 	val = gem_readl(bp, NCFGR);
@@ -4330,7 +4359,7 @@ static inline void macb_set_rxcsum_feature(struct macb *bp,
 static inline void macb_set_rxflow_feature(struct macb *bp,
 					   netdev_features_t features)
 {
-	if (!macb_is_gem(bp))
+	if (!macb_is_gem(bp->caps))
 		return;
 
 	gem_enable_flow_filters(bp, !!(features & NETIF_F_NTUPLE));
@@ -4649,7 +4678,7 @@ static void macb_configure_caps(struct macb *bp,
 			bp->caps |= MACB_CAPS_FIFO_MODE;
 		if (GEM_BFEXT(PBUF_RSC, gem_readl(bp, DCFG6)))
 			bp->caps |= MACB_CAPS_RSC;
-		if (gem_has_ptp(bp)) {
+		if (gem_has_ptp(bp->caps)) {
 			if (!GEM_BFEXT(TSU, gem_readl(bp, DCFG5)))
 				dev_err(&bp->pdev->dev,
					"GEM doesn't support hardware ptp.\n");
@@ -4861,7 +4890,7 @@ static int macb_init_dflt(struct platform_device *pdev)
 	netdev->netdev_ops = &macb_netdev_ops;
 
 	/* setup appropriated routines according to adapter type */
-	if (macb_is_gem(bp)) {
+	if (macb_is_gem(bp->caps)) {
 		bp->macbgem_ops.mog_alloc_rx_buffers = gem_alloc_rx_buffers;
 		bp->macbgem_ops.mog_free_rx_buffers = gem_free_rx_buffers;
 		bp->macbgem_ops.mog_init_rings = gem_init_rings;
@@ -4890,7 +4919,7 @@ static int macb_init_dflt(struct platform_device *pdev)
 		netdev->hw_features |= MACB_NETIF_LSO;
 
 	/* Checksum offload is only available on gem with packet buffer */
-	if (macb_is_gem(bp) && !(bp->caps & MACB_CAPS_FIFO_MODE))
+	if (macb_is_gem(bp->caps) && !(bp->caps & MACB_CAPS_FIFO_MODE))
 		netdev->hw_features |= NETIF_F_HW_CSUM | NETIF_F_RXCSUM;
 	if (bp->caps & MACB_CAPS_SG_DISABLED)
 		netdev->hw_features &= ~NETIF_F_SG;
@@ -5009,7 +5038,7 @@ static int at91ether_alloc_coherent(struct macb *bp)
 
 	rxq->ring = dma_alloc_coherent(&bp->pdev->dev,
 				       (AT91ETHER_MAX_RX_DESCR *
-					macb_dma_desc_get_size(bp)),
+					macb_dma_desc_get_size(bp->caps)),
 				       &rxq->ring_dma, GFP_KERNEL);
 	if (!rxq->ring)
 		return -ENOMEM;
@@ -5022,7 +5051,7 @@ static int at91ether_alloc_coherent(struct macb *bp)
 	if (!rxq->buffers) {
 		dma_free_coherent(&bp->pdev->dev,
 				  AT91ETHER_MAX_RX_DESCR *
-				  macb_dma_desc_get_size(bp),
+				  macb_dma_desc_get_size(bp->caps),
 				  rxq->ring, rxq->ring_dma);
 		rxq->ring = NULL;
 		return -ENOMEM;
@@ -5038,7 +5067,7 @@ static void at91ether_free_coherent(struct macb *bp)
 	if (rxq->ring) {
 		dma_free_coherent(&bp->pdev->dev,
 				  AT91ETHER_MAX_RX_DESCR *
-				  macb_dma_desc_get_size(bp),
+				  macb_dma_desc_get_size(bp->caps),
 				  rxq->ring, rxq->ring_dma);
 		rxq->ring = NULL;
 	}
@@ -5055,7 +5084,6 @@ static void at91ether_free_coherent(struct macb *bp)
 /* Initialize and start the Receiver and Transmit subsystems */
 static int at91ether_start(struct macb *bp)
 {
-	struct macb_queue *queue = &bp->queues[0];
 	struct macb_rxq *rxq = &bp->ctx->rxq[0];
 	struct macb_dma_desc *desc;
 	dma_addr_t addr;
@@ -5068,8 +5096,8 @@ static int at91ether_start(struct macb *bp)
 
 	addr = rxq->buffers_dma;
 	for (i = 0; i < AT91ETHER_MAX_RX_DESCR; i++) {
-		desc = macb_rx_desc(queue, i);
-		macb_set_addr(bp, desc, addr);
+		desc = macb_rx_desc(bp->ctx, 0, i);
+		macb_set_addr(bp->ctx, desc, addr);
 		desc->ctrl = 0;
 		addr += AT91ETHER_MAX_RBUFF_SZ;
 	}
@@ -5218,14 +5246,13 @@ static netdev_tx_t at91ether_start_xmit(struct sk_buff *skb,
 static void at91ether_rx(struct net_device *netdev)
 {
 	struct macb *bp = netdev_priv(netdev);
-	struct macb_queue *queue = &bp->queues[0];
 	struct macb_rxq *rxq = &bp->ctx->rxq[0];
 	struct macb_dma_desc *desc;
 	unsigned char *p_recv;
 	struct sk_buff *skb;
 	unsigned int pktlen;
 
-	desc = macb_rx_desc(queue, rxq->tail);
+	desc = macb_rx_desc(bp->ctx, 0, rxq->tail);
 	while (desc->addr & MACB_BIT(RX_USED)) {
 		p_recv = rxq->buffers + rxq->tail * AT91ETHER_MAX_RBUFF_SZ;
 		pktlen = MACB_BF(RX_FRMLEN, desc->ctrl);
@@ -5254,7 +5281,7 @@ static void at91ether_rx(struct net_device *netdev)
 		else
 			rxq->tail++;
 
-		desc = macb_rx_desc(queue, rxq->tail);
+		desc = macb_rx_desc(bp->ctx, 0, rxq->tail);
 	}
 }
 
@@ -5584,7 +5611,7 @@ static int macb_alloc_tieoff(struct macb *bp)
 		return 0;
 
 	bp->rx_ring_tieoff = dma_alloc_coherent(&bp->pdev->dev,
-						macb_dma_desc_get_size(bp),
+						macb_dma_desc_get_size(bp->caps),
 						&bp->rx_ring_tieoff_dma,
 						GFP_KERNEL);
 	if (!bp->rx_ring_tieoff)
@@ -5598,7 +5625,7 @@ static void macb_free_tieoff(struct macb *bp)
 	if (!bp->rx_ring_tieoff)
 		return;
 
-	dma_free_coherent(&bp->pdev->dev, macb_dma_desc_get_size(bp),
+	dma_free_coherent(&bp->pdev->dev, macb_dma_desc_get_size(bp->caps),
 			  bp->rx_ring_tieoff,
 			  bp->rx_ring_tieoff_dma);
 	bp->rx_ring_tieoff = NULL;
@@ -5986,12 +6013,12 @@ static int macb_probe(struct platform_device *pdev)
 		val = GEM_BFEXT(RXBD_RDBUFF, gem_readl(bp, DCFG10));
 		if (val)
 			bp->rx_bd_rd_prefetch = (2 << (val - 1)) *
-						macb_dma_desc_get_size(bp);
+						macb_dma_desc_get_size(bp->caps);
 
 		val = GEM_BFEXT(TXBD_RDBUFF, gem_readl(bp, DCFG10));
 		if (val)
 			bp->tx_bd_rd_prefetch = (2 << (val - 1)) *
-						macb_dma_desc_get_size(bp);
+						macb_dma_desc_get_size(bp->caps);
 	}
 
 	bp->rx_intr_mask = MACB_RX_INT_FLAGS;
@@ -6036,7 +6063,7 @@ static int macb_probe(struct platform_device *pdev)
 	INIT_DELAYED_WORK(&bp->tx_lpi_work, macb_tx_lpi_work_fn);
 
 	netdev_info(netdev, "Cadence %s rev 0x%08x at 0x%08lx irq %d (%pM)\n",
-		    macb_is_gem(bp) ? "GEM" : "MACB", macb_readl(bp, MID),
+		    macb_is_gem(bp->caps) ? "GEM" : "MACB", macb_readl(bp, MID),
 		    netdev->base_addr, netdev->irq, netdev->dev_addr);
 
 	pm_runtime_put_autosuspend(&bp->pdev->dev);
@@ -6171,7 +6198,7 @@ static int __maybe_unused macb_suspend(struct device *dev)
 		 * Enable WoL IRQ on queue 0
 		 */
 		devm_free_irq(dev, bp->queues[0].irq, bp->queues);
-		if (macb_is_gem(bp)) {
+		if (macb_is_gem(bp->caps)) {
 			err = devm_request_irq(dev, bp->queues[0].irq, gem_wol_interrupt,
 					       IRQF_SHARED, netdev->name, bp->queues);
 			if (err) {
@@ -6236,6 +6263,7 @@ static int __maybe_unused macb_resume(struct device *dev)
 {
 	struct net_device *netdev = dev_get_drvdata(dev);
 	struct macb *bp = netdev_priv(netdev);
+	struct macb_context *ctx = bp->ctx;
 	struct macb_queue *queue;
 	unsigned long flags;
 	unsigned int q;
@@ -6253,7 +6281,7 @@ static int __maybe_unused macb_resume(struct device *dev)
 	if (bp->wol & MACB_WOL_ENABLED) {
 		spin_lock_irqsave(&bp->lock, flags);
 		/* Disable WoL */
-		if (macb_is_gem(bp)) {
+		if (macb_is_gem(bp->caps)) {
 			queue_writel(bp->queues, IDR, GEM_BIT(WOL));
 			gem_writel(bp, WOL, 0);
 		} else {
@@ -6293,10 +6321,10 @@ static int __maybe_unused macb_resume(struct device *dev)
 	for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
 		if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC)) {
-			if (macb_is_gem(bp))
-				gem_init_rx_ring(queue);
+			if (macb_is_gem(bp->caps))
+				gem_init_rx_ring(ctx, q);
 			else
-				macb_init_rx_ring(queue);
+				macb_init_rx_ring(ctx, q);
 		}
 
 		napi_enable(&queue->napi_rx);
diff --git a/drivers/net/ethernet/cadence/macb_ptp.c b/drivers/net/ethernet/cadence/macb_ptp.c
index e5195d7dac1d..2070508fd2e0 100644
--- a/drivers/net/ethernet/cadence/macb_ptp.c
+++ b/drivers/net/ethernet/cadence/macb_ptp.c
@@ -28,10 +28,10 @@ static struct macb_dma_desc_ptp *macb_ptp_desc(struct macb *bp,
 					       struct macb_dma_desc *desc)
 {
-	if (!macb_dma_ptp(bp))
+	if (!macb_dma_ptp(bp->caps))
 		return NULL;
 
-	if (macb_dma64(bp))
+	if (macb_dma64(bp->caps))
 		return (struct macb_dma_desc_ptp *)
 		       ((u8 *)desc + sizeof(struct macb_dma_desc)
 			+ sizeof(struct macb_dma_desc_64));
@@ -384,7 +384,7 @@ int gem_get_hwtst(struct net_device *netdev,
 	struct macb *bp = netdev_priv(netdev);
 
 	*tstamp_config = bp->tstamp_config;
-	if (!macb_dma_ptp(bp))
+	if (!macb_dma_ptp(bp->caps))
 		return -EOPNOTSUPP;
 
 	return 0;
@@ -411,7 +411,7 @@ int gem_set_hwtst(struct net_device *netdev,
 	struct macb *bp = netdev_priv(netdev);
 	u32 regval;
 
-	if (!macb_dma_ptp(bp))
+	if (!macb_dma_ptp(bp->caps))
 		return -EOPNOTSUPP;
 
 	switch (tstamp_config->tx_type) {
-- 
2.53.0

From nobody Wed Apr 1 20:37:31 2026
b=LB1Rjg3p/flCYGoTOCfTghpieEqWxIJ7Z3k/Vk7LjBUxbJsbUVMfj6Bsg+dRi2e/ai54gNVdEhF+V4E7M+FPbuWDwDOLR7O9wo1tW4ovCj55FQXAglxy1d2lemYIKDc7wxV2YXD7lEMampjj5Sg9HnE6lYKxbJwCIXKv1aSFspg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=bootlin.com; spf=pass smtp.mailfrom=bootlin.com; dkim=pass (2048-bit key) header.d=bootlin.com header.i=@bootlin.com header.b=s7gwxAj2; arc=none smtp.client-ip=185.246.85.4 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=bootlin.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=bootlin.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=bootlin.com header.i=@bootlin.com header.b="s7gwxAj2" Received: from smtpout-01.galae.net (smtpout-01.galae.net [212.83.139.233]) by smtpout-03.galae.net (Postfix) with ESMTPS id 9491F4E42897; Wed, 1 Apr 2026 16:39:45 +0000 (UTC) Received: from mail.galae.net (mail.galae.net [212.83.136.155]) by smtpout-01.galae.net (Postfix) with ESMTPS id 69C5C602BF; Wed, 1 Apr 2026 16:39:45 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by localhost (Mailerdaemon) with ESMTPSA id 123AD104509F8; Wed, 1 Apr 2026 18:39:41 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bootlin.com; s=dkim; t=1775061584; h=from:subject:date:message-id:to:cc:mime-version:content-type: content-transfer-encoding:in-reply-to:references; bh=6h2z4t90Q2l0iiaHrS3n38ggV2dCoYzcBQFCJpIsf5E=; b=s7gwxAj2fn9qFawAK9XolF70B3r6TiZPy9am6rEgShK+JvjpAFWAw5sTRgyDyBF24tqhAB PJKgPkaoTyqZubK02mlE/fK3pU3+eEGi5T+m8uiCiMkvNNatde/IfDASHBSIB07IXoGDZw DaMwJvoaML5cSkPHdNZ2DTTf0NlHzLYIoC+qotC333V32k04PNir73hPyw5CoWIebPKi4n cMkStG3DuV3LTlNSspoOtzW729ni/bhObqeJQ79BrzOObdpwRqi3VtSuTFOuYglNGwB8Vy kQPl+XFchImOc2V3S/FF/Yb52SQ5MN8+whozPgvZxfFqG8LDFeXGpG/fauObVA== From: =?utf-8?q?Th=C3=A9o_Lebrun?= Date: Wed, 01 Apr 2026 18:39:12 +0200 Subject: [PATCH net-next 09/11] net: macb: introduce 
macb_context_alloc() helper Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20260401-macb-context-v1-9-9590c5ab7272@bootlin.com> References: <20260401-macb-context-v1-0-9590c5ab7272@bootlin.com> In-Reply-To: <20260401-macb-context-v1-0-9590c5ab7272@bootlin.com> To: Nicolas Ferre , Claudiu Beznea , Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Richard Cochran , Russell King Cc: Paolo Valerio , Conor Dooley , Nicolai Buchwitz , Vladimir Kondratiev , Gregory CLEMENT , =?utf-8?q?Beno=C3=AEt_Monin?= , Tawfik Bayouk , Thomas Petazzoni , Maxime Chevallier , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, =?utf-8?q?Th=C3=A9o_Lebrun?= X-Mailer: b4 0.15.0 X-Last-TLS-Session-Version: TLSv1.3 Move the context allocation sequence from inline macb_open() to its own helper function called macb_context_alloc(). All ops doing context swapping (set_ringparam, change_mtu, etc) will use this helper. 
Signed-off-by: Théo Lebrun
---
 drivers/net/ethernet/cadence/macb_main.c | 55 +++++++++++++++++++++-----------
 1 file changed, 36 insertions(+), 19 deletions(-)

diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 47f0d27cd979..42b19b969f3e 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -2875,6 +2875,36 @@ static int macb_alloc_consistent(struct macb_context *ctx)
 	return -ENOMEM;
 }
 
+static struct macb_context *macb_context_alloc(struct macb *bp,
+					       unsigned int mtu,
+					       unsigned int rx_ring_size,
+					       unsigned int tx_ring_size)
+{
+	struct macb_context *ctx;
+	int err;
+
+	ctx = kzalloc_obj(*ctx);
+	if (!ctx)
+		return ERR_PTR(-ENOMEM);
+
+	ctx->info = &bp->info;
+	ctx->rx_buffer_size = macb_rx_buffer_size(bp, mtu);
+	ctx->rx_ring_size = rx_ring_size;
+	ctx->tx_ring_size = tx_ring_size;
+
+	err = macb_alloc_consistent(ctx);
+	if (err) {
+		netdev_err(bp->netdev,
+			   "Unable to allocate DMA memory (error %d)\n", err);
+		kfree(ctx);
+		return ERR_PTR(err);
+	}
+
+	bp->macbgem_ops.mog_init_rings(ctx);
+
+	return ctx;
+}
+
 static void gem_init_rx_ring(struct macb_context *ctx, unsigned int q)
 {
 	struct macb_rxq *rxq = &ctx->rxq[q];
@@ -3243,27 +3273,15 @@ static int macb_open(struct net_device *netdev)
 	if (err < 0)
 		return err;
 
-	bp->ctx = kzalloc_obj(*bp->ctx);
-	if (!bp->ctx) {
-		err = -ENOMEM;
+	bp->ctx = macb_context_alloc(bp, netdev->mtu,
+				     bp->configured_rx_ring_size,
+				     bp->configured_tx_ring_size);
+	if (IS_ERR(bp->ctx)) {
+		err = PTR_ERR(bp->ctx);
+		bp->ctx = NULL;
 		goto pm_exit;
 	}
 
-	bp->ctx->info = &bp->info;
-
-	/* RX buffers initialization */
-	bp->ctx->rx_buffer_size = macb_rx_buffer_size(bp, netdev->mtu);
-	bp->ctx->rx_ring_size = bp->configured_rx_ring_size;
-	bp->ctx->tx_ring_size = bp->configured_tx_ring_size;
-
-	err = macb_alloc_consistent(bp->ctx);
-	if (err) {
-		netdev_err(netdev, "Unable to allocate DMA memory (error %d)\n",
-			   err);
-		goto free_ctx;
-	}
-
-	bp->macbgem_ops.mog_init_rings(bp->ctx);
 	macb_init_buffers(bp);
 
 	for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
@@ -3302,7 +3320,6 @@ static int macb_open(struct net_device *netdev)
 		napi_disable(&queue->napi_tx);
 	}
 	macb_free_consistent(bp->ctx);
-free_ctx:
 	kfree(bp->ctx);
 	bp->ctx = NULL;
 pm_exit:
-- 
2.53.0

From nobody Wed Apr 1 20:37:31 2026
From: Théo Lebrun
Date: Wed, 01 Apr 2026 18:39:13 +0200
Subject: [PATCH net-next 10/11] net: macb: use context swapping in .set_ringparam()
Message-Id: <20260401-macb-context-v1-10-9590c5ab7272@bootlin.com>
References: <20260401-macb-context-v1-0-9590c5ab7272@bootlin.com>
In-Reply-To: <20260401-macb-context-v1-0-9590c5ab7272@bootlin.com>

ethtool_ops.set_ringparam() is implemented using the primitive
close / update ring sizes / reopen sequence. Under memory pressure this
does not fly: we free our buffers at close and may be unable to allocate
new ones at open. It also triggers a slow PHY reinit.

Instead, exploit the new context mechanism and improve the sequence to:
 - allocate a new context (including buffers) first,
 - if that fails, return early without any impact on the interface,
 - stop the interface,
 - update global state (bp, netdev, etc.),
 - pass the new buffer pointers to the hardware,
 - start the interface,
 - free the old context.

The HW disable sequence is inspired by macb_reset_hw(), but avoids
(1) setting the CLRSTAT bit in NCR and (2) clearing the PBUFRXCUT
register. The HW re-enable sequence is inspired by macb_mac_link_up(),
skipping register writes that would be redundant (their values have not
changed).

The generic context swapping parts are isolated into the helper
functions macb_context_swap_start() and macb_context_swap_end(),
reusable by other operations (change_mtu, set_channels, etc.).
Signed-off-by: Théo Lebrun
---
 drivers/net/ethernet/cadence/macb_main.c | 89 +++++++++++++++++++++++++++++---
 1 file changed, 82 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 42b19b969f3e..543356554c11 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -2905,6 +2905,76 @@ static struct macb_context *macb_context_alloc(struct macb *bp,
 	return ctx;
 }
 
+static void macb_context_swap_start(struct macb *bp)
+{
+	struct macb_queue *queue;
+	unsigned int q;
+	u32 ctrl;
+
+	/* Disable software Tx, disable HW Tx/Rx and disable NAPI. */
+
+	netif_tx_disable(bp->netdev);
+
+	ctrl = macb_readl(bp, NCR);
+	macb_writel(bp, NCR, ctrl & ~(MACB_BIT(RE) | MACB_BIT(TE)));
+
+	macb_writel(bp, TSR, -1);
+	macb_writel(bp, RSR, -1);
+
+	for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
+		queue_writel(queue, IDR, -1);
+		queue_readl(queue, ISR);
+		if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+			queue_writel(queue, ISR, -1);
+	}
+
+	for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
+		napi_disable(&queue->napi_rx);
+		napi_disable(&queue->napi_tx);
+	}
+}
+
+static void macb_context_swap_end(struct macb *bp,
+				 struct macb_context *new_ctx)
+{
+	struct macb_context *old_ctx;
+	struct macb_queue *queue;
+	unsigned int q;
+	u32 ctrl;
+
+	/* Swap contexts & give buffer pointers to HW. */
+
+	old_ctx = bp->ctx;
+	bp->ctx = new_ctx;
+	macb_init_buffers(bp);
+
+	/* Start NAPI, HW Tx/Rx and software Tx. */
+
+	for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
+		napi_enable(&queue->napi_rx);
+		napi_enable(&queue->napi_tx);
+	}
+
+	if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC)) {
+		for (q = 0, queue = bp->queues; q < bp->num_queues;
+		     ++q, ++queue) {
+			queue_writel(queue, IER,
+				     bp->rx_intr_mask |
+				     MACB_TX_INT_FLAGS |
+				     MACB_BIT(HRESP));
+		}
+	}
+
+	ctrl = macb_readl(bp, NCR);
+	macb_writel(bp, NCR, ctrl | MACB_BIT(RE) | MACB_BIT(TE));
+
+	netif_tx_start_all_queues(bp->netdev);
+
+	/* Free old context. */
+
+	macb_free_consistent(old_ctx);
+}
+
 static void gem_init_rx_ring(struct macb_context *ctx, unsigned int q)
 {
 	struct macb_rxq *rxq = &ctx->rxq[q];
@@ -3819,9 +3889,10 @@ static int macb_set_ringparam(struct net_device *netdev,
 			      struct kernel_ethtool_ringparam *kernel_ring,
 			      struct netlink_ext_ack *extack)
 {
+	unsigned int new_rx_size, new_tx_size;
 	struct macb *bp = netdev_priv(netdev);
-	u32 new_rx_size, new_tx_size;
-	unsigned int reset = 0;
+	bool running = netif_running(netdev);
+	struct macb_context *new_ctx;
 
 	if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending))
 		return -EINVAL;
@@ -3840,16 +3911,20 @@ static int macb_set_ringparam(struct net_device *netdev,
 		return 0;
 	}
 
-	if (netif_running(bp->netdev)) {
-		reset = 1;
-		macb_close(bp->netdev);
+	if (running) {
+		new_ctx = macb_context_alloc(bp, netdev->mtu,
+					     new_rx_size, new_tx_size);
+		if (IS_ERR(new_ctx))
+			return PTR_ERR(new_ctx);
+
+		macb_context_swap_start(bp);
 	}
 
 	bp->configured_rx_ring_size = new_rx_size;
 	bp->configured_tx_ring_size = new_tx_size;
 
-	if (reset)
-		macb_open(bp->netdev);
+	if (running)
+		macb_context_swap_end(bp, new_ctx);
 
 	return 0;
 }
-- 
2.53.0

From nobody Wed Apr 1 20:37:31 2026
From: Théo Lebrun
Date: Wed, 01 Apr 2026 18:39:14 +0200
Subject: [PATCH net-next 11/11] net: macb: use context swapping in .ndo_change_mtu()
Message-Id: <20260401-macb-context-v1-11-9590c5ab7272@bootlin.com>
References: <20260401-macb-context-v1-0-9590c5ab7272@bootlin.com>
In-Reply-To: <20260401-macb-context-v1-0-9590c5ab7272@bootlin.com>

Use the newly introduced context buffer management to implement
.ndo_change_mtu() as a context swap: allocate the new context ->
reconfigure the HW -> free the old context. This resists memory pressure
well, since it fails without closing the interface, and it is much
faster since it avoids a PHY reinit.
Signed-off-by: Théo Lebrun
---
 drivers/net/ethernet/cadence/macb_main.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 543356554c11..e10791bf1f4d 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -3438,11 +3438,25 @@ static int macb_close(struct net_device *netdev)
 
 static int macb_change_mtu(struct net_device *netdev, int new_mtu)
 {
-	if (netif_running(netdev))
-		return -EBUSY;
+	struct macb *bp = netdev_priv(netdev);
+	bool running = netif_running(netdev);
+	struct macb_context *new_ctx;
+
+	if (running) {
+		new_ctx = macb_context_alloc(bp, new_mtu,
+					     bp->configured_rx_ring_size,
+					     bp->configured_tx_ring_size);
+		if (IS_ERR(new_ctx))
+			return PTR_ERR(new_ctx);
+
+		macb_context_swap_start(bp);
+	}
 
 	WRITE_ONCE(netdev->mtu, new_mtu);
 
+	if (running)
+		macb_context_swap_end(bp, new_ctx);
+
 	return 0;
 }
-- 
2.53.0