From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin, Michal Kubiak, Maciej Fijalkowski, Tony Nguyen,
 Przemek Kitszel, Andrew Lunn, "David S. Miller", Eric Dumazet,
 Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
 Simon Horman, nxne.cnse.osdt.itp.upstreaming@intel.com,
 bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH iwl-next v4 04/13] idpf: link NAPIs to queues
Date: Mon, 11 Aug 2025 18:10:35 +0200
Message-ID: <20250811161044.32329-5-aleksander.lobakin@intel.com>
In-Reply-To: <20250811161044.32329-1-aleksander.lobakin@intel.com>
References: <20250811161044.32329-1-aleksander.lobakin@intel.com>

Add the missing linking of NAPIs to netdev queues when enabling
interrupt vectors, in order to support NAPI configuration and the
interfaces that require get_rx_queue()->napi to be set (such as XSk
busy polling).

Currently, idpf_vport_{open,stop}() are called from several flows with
inconsistent RTNL locking, so they need to be synchronized to avoid
runtime assertions. Notably:

* idpf_{open,stop}() -- regular NDOs, RTNL is always taken;
* idpf_initiate_soft_reset() -- usually called under RTNL;
* idpf_init_task -- called from the init work, needs RTNL;
* idpf_vport_dealloc -- called without RTNL taken, needs it.

Expand the common idpf_vport_{open,stop}() to take an additional bool
telling whether the function must take the RTNL lock itself.
Suggested-by: Maciej Fijalkowski # helper
Signed-off-by: Alexander Lobakin
---
 drivers/net/ethernet/intel/idpf/idpf_lib.c  | 38 +++++++++++++++------
 drivers/net/ethernet/intel/idpf/idpf_txrx.c | 17 +++++++++
 2 files changed, 45 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
index 228991ac4271..8c4114473f29 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
@@ -883,14 +883,18 @@ static void idpf_remove_features(struct idpf_vport *vport)
 /**
  * idpf_vport_stop - Disable a vport
  * @vport: vport to disable
+ * @rtnl: whether to take RTNL lock
  */
-static void idpf_vport_stop(struct idpf_vport *vport)
+static void idpf_vport_stop(struct idpf_vport *vport, bool rtnl)
 {
 	struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
 
 	if (np->state <= __IDPF_VPORT_DOWN)
 		return;
 
+	if (rtnl)
+		rtnl_lock();
+
 	netif_carrier_off(vport->netdev);
 	netif_tx_disable(vport->netdev);
 
@@ -912,6 +916,9 @@ static void idpf_vport_stop(struct idpf_vport *vport)
 	idpf_vport_queues_rel(vport);
 	idpf_vport_intr_rel(vport);
 	np->state = __IDPF_VPORT_DOWN;
+
+	if (rtnl)
+		rtnl_unlock();
 }
 
 /**
@@ -935,7 +942,7 @@ static int idpf_stop(struct net_device *netdev)
 	idpf_vport_ctrl_lock(netdev);
 	vport = idpf_netdev_to_vport(netdev);
 
-	idpf_vport_stop(vport);
+	idpf_vport_stop(vport, false);
 
 	idpf_vport_ctrl_unlock(netdev);
 
@@ -1028,7 +1035,7 @@ static void idpf_vport_dealloc(struct idpf_vport *vport)
 	idpf_idc_deinit_vport_aux_device(vport->vdev_info);
 
 	idpf_deinit_mac_addr(vport);
-	idpf_vport_stop(vport);
+	idpf_vport_stop(vport, true);
 
 	if (!test_bit(IDPF_HR_RESET_IN_PROG, adapter->flags))
 		idpf_decfg_netdev(vport);
@@ -1369,8 +1376,9 @@ static void idpf_rx_init_buf_tail(struct idpf_vport *vport)
 /**
  * idpf_vport_open - Bring up a vport
  * @vport: vport to bring up
+ * @rtnl: whether to take RTNL lock
  */
-static int idpf_vport_open(struct idpf_vport *vport)
+static int idpf_vport_open(struct idpf_vport *vport, bool rtnl)
 {
 	struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
 	struct idpf_adapter *adapter = vport->adapter;
@@ -1380,6 +1388,9 @@ static int idpf_vport_open(struct idpf_vport *vport)
 	if (np->state != __IDPF_VPORT_DOWN)
 		return -EBUSY;
 
+	if (rtnl)
+		rtnl_lock();
+
 	/* we do not allow interface up just yet */
 	netif_carrier_off(vport->netdev);
 
@@ -1387,7 +1398,7 @@ static int idpf_vport_open(struct idpf_vport *vport)
 	if (err) {
 		dev_err(&adapter->pdev->dev, "Failed to allocate interrupts for vport %u: %d\n",
 			vport->vport_id, err);
-		return err;
+		goto err_rtnl_unlock;
 	}
 
 	err = idpf_vport_queues_alloc(vport);
@@ -1474,6 +1485,9 @@ static int idpf_vport_open(struct idpf_vport *vport)
 		goto deinit_rss;
 	}
 
+	if (rtnl)
+		rtnl_unlock();
+
 	return 0;
 
 deinit_rss:
@@ -1491,6 +1505,10 @@ static int idpf_vport_open(struct idpf_vport *vport)
 intr_rel:
 	idpf_vport_intr_rel(vport);
 
+err_rtnl_unlock:
+	if (rtnl)
+		rtnl_unlock();
+
 	return err;
 }
 
@@ -1571,7 +1589,7 @@ void idpf_init_task(struct work_struct *work)
 	np = netdev_priv(vport->netdev);
 	np->state = __IDPF_VPORT_DOWN;
 	if (test_and_clear_bit(IDPF_VPORT_UP_REQUESTED, vport_config->flags))
-		idpf_vport_open(vport);
+		idpf_vport_open(vport, true);
 
 	/* Spawn and return 'idpf_init_task' work queue until all the
 	 * default vports are created
@@ -1961,7 +1979,7 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
 		idpf_send_delete_queues_msg(vport);
 	} else {
 		set_bit(IDPF_VPORT_DEL_QUEUES, vport->flags);
-		idpf_vport_stop(vport);
+		idpf_vport_stop(vport, false);
 	}
 
 	idpf_deinit_rss(vport);
@@ -1991,7 +2009,7 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
 		goto err_open;
 
 	if (current_state == __IDPF_VPORT_UP)
-		err = idpf_vport_open(vport);
+		err = idpf_vport_open(vport, false);
 
 	goto free_vport;
 
@@ -2001,7 +2019,7 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
 
 err_open:
 	if (current_state == __IDPF_VPORT_UP)
-		idpf_vport_open(vport);
+		idpf_vport_open(vport, false);
 
 free_vport:
 	kfree(new_vport);
@@ -2239,7 +2257,7 @@ static int idpf_open(struct net_device *netdev)
 	if (err)
 		goto unlock;
 
-	err = idpf_vport_open(vport);
+	err = idpf_vport_open(vport, false);
 
 unlock:
 	idpf_vport_ctrl_unlock(netdev);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 26dde653587d..834ff436d852 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -3508,6 +3508,20 @@ void idpf_vport_intr_rel(struct idpf_vport *vport)
 	vport->q_vectors = NULL;
 }
 
+static void idpf_q_vector_set_napi(struct idpf_q_vector *q_vector, bool link)
+{
+	struct napi_struct *napi = link ? &q_vector->napi : NULL;
+	struct net_device *dev = q_vector->vport->netdev;
+
+	for (u32 i = 0; i < q_vector->num_rxq; i++)
+		netif_queue_set_napi(dev, q_vector->rx[i]->idx,
+				     NETDEV_QUEUE_TYPE_RX, napi);
+
+	for (u32 i = 0; i < q_vector->num_txq; i++)
+		netif_queue_set_napi(dev, q_vector->tx[i]->idx,
+				     NETDEV_QUEUE_TYPE_TX, napi);
+}
+
 /**
  * idpf_vport_intr_rel_irq - Free the IRQ association with the OS
  * @vport: main vport structure
@@ -3528,6 +3542,7 @@ static void idpf_vport_intr_rel_irq(struct idpf_vport *vport)
 		vidx = vport->q_vector_idxs[vector];
 		irq_num = adapter->msix_entries[vidx].vector;
 
+		idpf_q_vector_set_napi(q_vector, false);
 		kfree(free_irq(irq_num, q_vector));
 	}
 }
@@ -3715,6 +3730,8 @@ static int idpf_vport_intr_req_irq(struct idpf_vport *vport)
 				   "Request_irq failed, error: %d\n", err);
 			goto free_q_irqs;
 		}
+
+		idpf_q_vector_set_napi(q_vector, true);
 	}
 
 	return 0;
-- 
2.50.1