Link queues to NAPIs using the netdev-genl API so this information is
queryable.
First, test with the default setting on my tg3 NIC at boot with 1 TX
queue:
$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
--dump queue-get --json='{"ifindex": 2}'
[{'id': 0, 'ifindex': 2, 'napi-id': 8194, 'type': 'rx'},
{'id': 1, 'ifindex': 2, 'napi-id': 8195, 'type': 'rx'},
{'id': 2, 'ifindex': 2, 'napi-id': 8196, 'type': 'rx'},
{'id': 3, 'ifindex': 2, 'napi-id': 8197, 'type': 'rx'},
{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'tx'}]
Now, adjust the number of TX queues to be 4 via ethtool:
$ sudo ethtool -L eth0 tx 4
$ sudo ethtool -l eth0 | tail -5
Current hardware settings:
RX: 4
TX: 4
Other: n/a
Combined: n/a
Despite "Combined: n/a" in the ethtool output, /proc/interrupts shows
that tg3 has renamed the IRQs to reflect combined TX/RX vectors:
343: [...] eth0-0
344: [...] eth0-txrx-1
345: [...] eth0-txrx-2
346: [...] eth0-txrx-3
347: [...] eth0-txrx-4
Now query this via netlink to ensure the queues are linked properly to
their NAPIs:
$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
--dump queue-get --json='{"ifindex": 2}'
[{'id': 0, 'ifindex': 2, 'napi-id': 8960, 'type': 'rx'},
{'id': 1, 'ifindex': 2, 'napi-id': 8961, 'type': 'rx'},
{'id': 2, 'ifindex': 2, 'napi-id': 8962, 'type': 'rx'},
{'id': 3, 'ifindex': 2, 'napi-id': 8963, 'type': 'rx'},
{'id': 0, 'ifindex': 2, 'napi-id': 8960, 'type': 'tx'},
{'id': 1, 'ifindex': 2, 'napi-id': 8961, 'type': 'tx'},
{'id': 2, 'ifindex': 2, 'napi-id': 8962, 'type': 'tx'},
{'id': 3, 'ifindex': 2, 'napi-id': 8963, 'type': 'tx'}]
As shown above, queue id 0 for both TX and RX shares a single NAPI,
NAPI ID 8960, and likewise for each queue index up to 3.
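That pairing can be verified directly from the queue-get dump. A small
sketch using the queue list shown above (on a live system the list
would come from the ynl cli.py JSON output):

```python
# Check that each TX queue shares its NAPI with the RX queue of the
# same id, using the combined-queue dump shown above.
queues = [
    {'id': 0, 'ifindex': 2, 'napi-id': 8960, 'type': 'rx'},
    {'id': 1, 'ifindex': 2, 'napi-id': 8961, 'type': 'rx'},
    {'id': 2, 'ifindex': 2, 'napi-id': 8962, 'type': 'rx'},
    {'id': 3, 'ifindex': 2, 'napi-id': 8963, 'type': 'rx'},
    {'id': 0, 'ifindex': 2, 'napi-id': 8960, 'type': 'tx'},
    {'id': 1, 'ifindex': 2, 'napi-id': 8961, 'type': 'tx'},
    {'id': 2, 'ifindex': 2, 'napi-id': 8962, 'type': 'tx'},
    {'id': 3, 'ifindex': 2, 'napi-id': 8963, 'type': 'tx'},
]

# Index NAPI IDs by (queue type, queue id).
napi_by_queue = {(q['type'], q['id']): q['napi-id'] for q in queues}

# Every TX queue id should map to the same NAPI as the RX queue with that id.
shared = all(napi_by_queue[('tx', i)] == napi_by_queue[('rx', i)]
             for i in range(4))
print(shared)  # True
```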
Signed-off-by: Joe Damato <jdamato@fastly.com>
---
v4:
- Switch the if ... else if to two ifs as suggested by Michael Chan
to handle the case where tg3 might use combined queues
- Updated the commit message to test both the default and combined
queue cases to ensure correctness
rfcv3:
- Added running counters for numbering the rx and tx queue IDs to
tg3_napi_enable and tg3_napi_disable
drivers/net/ethernet/broadcom/tg3.c | 39 ++++++++++++++++++++++++++---
1 file changed, 35 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
index 6564072b47ba..c4bce6493afa 100644
--- a/drivers/net/ethernet/broadcom/tg3.c
+++ b/drivers/net/ethernet/broadcom/tg3.c
@@ -7395,18 +7395,49 @@ static int tg3_poll(struct napi_struct *napi, int budget)
static void tg3_napi_disable(struct tg3 *tp)
{
+ int txq_idx = tp->txq_cnt - 1;
+ int rxq_idx = tp->rxq_cnt - 1;
+ struct tg3_napi *tnapi;
int i;
- for (i = tp->irq_cnt - 1; i >= 0; i--)
- napi_disable(&tp->napi[i].napi);
+ for (i = tp->irq_cnt - 1; i >= 0; i--) {
+ tnapi = &tp->napi[i];
+ if (tnapi->tx_buffers) {
+ netif_queue_set_napi(tp->dev, txq_idx,
+ NETDEV_QUEUE_TYPE_TX, NULL);
+ txq_idx--;
+ }
+ if (tnapi->rx_rcb) {
+ netif_queue_set_napi(tp->dev, rxq_idx,
+ NETDEV_QUEUE_TYPE_RX, NULL);
+ rxq_idx--;
+ }
+ napi_disable(&tnapi->napi);
+ }
}
static void tg3_napi_enable(struct tg3 *tp)
{
+ int txq_idx = 0, rxq_idx = 0;
+ struct tg3_napi *tnapi;
int i;
- for (i = 0; i < tp->irq_cnt; i++)
- napi_enable(&tp->napi[i].napi);
+ for (i = 0; i < tp->irq_cnt; i++) {
+ tnapi = &tp->napi[i];
+ napi_enable(&tnapi->napi);
+ if (tnapi->tx_buffers) {
+ netif_queue_set_napi(tp->dev, txq_idx,
+ NETDEV_QUEUE_TYPE_TX,
+ &tnapi->napi);
+ txq_idx++;
+ }
+ if (tnapi->rx_rcb) {
+ netif_queue_set_napi(tp->dev, rxq_idx,
+ NETDEV_QUEUE_TYPE_RX,
+ &tnapi->napi);
+ rxq_idx++;
+ }
+ }
}
static void tg3_napi_init(struct tg3 *tp)
--
2.25.1
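The queue-id bookkeeping the patch adds to tg3_napi_enable can be
illustrated outside the kernel. A minimal Python model under stated
assumptions: the per-vector 'tx'/'rx' booleans stand in for the
driver's tx_buffers and rx_rcb fields, and the vector layout mimics
the combined case above (vector 0 carries only link/error interrupts):

```python
# Model of the running counters added to tg3_napi_enable: walk the IRQ
# vectors in order, handing the next TX queue id to each vector that
# has a TX ring and the next RX queue id to each vector with an RX ring.
vectors = [
    {'tx': False, 'rx': False},  # NAPI 0: link/error interrupts only
    {'tx': True,  'rx': True},   # eth0-txrx-1
    {'tx': True,  'rx': True},   # eth0-txrx-2
    {'tx': True,  'rx': True},   # eth0-txrx-3
    {'tx': True,  'rx': True},   # eth0-txrx-4
]

def map_queues(vectors):
    """Return (txq id -> vector, rxq id -> vector) maps, mirroring the patch."""
    tx_map, rx_map = {}, {}
    txq_idx = rxq_idx = 0
    for i, v in enumerate(vectors):
        if v['tx']:
            tx_map[txq_idx] = i
            txq_idx += 1
        if v['rx']:
            rx_map[rxq_idx] = i
            rxq_idx += 1
    return tx_map, rx_map

tx_map, rx_map = map_queues(vectors)
print(tx_map == rx_map)  # True: queue id N lands on vector N+1 for both types
```

This matches the netlink output above: with combined vectors, TX queue
N and RX queue N resolve to the same NAPI, while NAPI 0 carries no
queues at all.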
On Wed, Oct 9, 2024 at 10:55 AM Joe Damato <jdamato@fastly.com> wrote:
>
> Link queues to NAPIs using the netdev-genl API so this information is
> queryable.
>
> First, test with the default setting on my tg3 NIC at boot with 1 TX
> queue:
>
> $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
> --dump queue-get --json='{"ifindex": 2}'
>
> [{'id': 0, 'ifindex': 2, 'napi-id': 8194, 'type': 'rx'},
> {'id': 1, 'ifindex': 2, 'napi-id': 8195, 'type': 'rx'},
> {'id': 2, 'ifindex': 2, 'napi-id': 8196, 'type': 'rx'},
> {'id': 3, 'ifindex': 2, 'napi-id': 8197, 'type': 'rx'},
> {'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'tx'}]

This is correct. When TSS is not enabled (1 TX ring only), the TX ring
uses NAPI 0 and the RSS RX rings use NAPI 1, 2, 3, 4.

[...]

> {'id': 0, 'ifindex': 2, 'napi-id': 8960, 'type': 'tx'},
> {'id': 1, 'ifindex': 2, 'napi-id': 8961, 'type': 'tx'},
> {'id': 2, 'ifindex': 2, 'napi-id': 8962, 'type': 'tx'},
> {'id': 3, 'ifindex': 2, 'napi-id': 8963, 'type': 'tx'}]

This is also correct after reviewing the driver code.

When TSS is enabled, NAPI 0 is no longer used for any TX ring. All TSS
and RSS rings start from NAPI 1. NAPI 0 is only used for link change
and other error interrupts.

> As you can see above, id 0 for both TX and RX share a NAPI, NAPI ID
> 8960, and so on for each queue index up to 3.
>
> Signed-off-by: Joe Damato <jdamato@fastly.com>

Thanks.

Reviewed-by: Michael Chan <michael.chan@broadcom.com>