From: Alexander Lobakin 
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin , Michal Kubiak , Maciej Fijalkowski , Tony Nguyen , Przemek Kitszel , Andrew Lunn , "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Simon Horman , nxne.cnse.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH iwl-next v3 09/18] idpf: link NAPIs to queues
Date: Wed, 30 Jul 2025 18:07:08 +0200
Message-ID: <20250730160717.28976-10-aleksander.lobakin@intel.com>
In-Reply-To: <20250730160717.28976-1-aleksander.lobakin@intel.com>
References: <20250730160717.28976-1-aleksander.lobakin@intel.com>

Add the missing linking of NAPIs to netdev queues when enabling
interrupt vectors, in order to support NAPI configuration and
interfaces that require get_rx_queue()->napi to be set (such as XSk
busy polling).

idpf_vport_{open,stop}() are currently called from several flows with
inconsistent RTNL locking, so synchronize them to avoid runtime
assertions. Notably:

* idpf_{open,stop}() -- regular NDOs, RTNL is always taken;
* idpf_initiate_soft_reset() -- usually called under RTNL;
* idpf_init_task() -- called from the init work, needs RTNL;
* idpf_vport_dealloc() -- called without RTNL taken, needs it.

Expand the common idpf_vport_{open,stop}() to take an additional bool
telling whether we need to take the RTNL lock manually.
Suggested-by: Maciej Fijalkowski # helper
Signed-off-by: Alexander Lobakin 
---
 drivers/net/ethernet/intel/idpf/idpf_lib.c  | 38 +++++++++++++++------
 drivers/net/ethernet/intel/idpf/idpf_txrx.c | 17 +++++++++
 2 files changed, 45 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
index 2c2a3e85d693..da588d78846e 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
@@ -883,14 +883,18 @@ static void idpf_remove_features(struct idpf_vport *vport)
 /**
  * idpf_vport_stop - Disable a vport
  * @vport: vport to disable
+ * @rtnl: whether to take RTNL lock
  */
-static void idpf_vport_stop(struct idpf_vport *vport)
+static void idpf_vport_stop(struct idpf_vport *vport, bool rtnl)
 {
 	struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
 
 	if (np->state <= __IDPF_VPORT_DOWN)
 		return;
 
+	if (rtnl)
+		rtnl_lock();
+
 	netif_carrier_off(vport->netdev);
 	netif_tx_disable(vport->netdev);
 
@@ -912,6 +916,9 @@ static void idpf_vport_stop(struct idpf_vport *vport)
 	idpf_vport_queues_rel(vport);
 	idpf_vport_intr_rel(vport);
 	np->state = __IDPF_VPORT_DOWN;
+
+	if (rtnl)
+		rtnl_unlock();
 }
 
 /**
@@ -935,7 +942,7 @@ static int idpf_stop(struct net_device *netdev)
 	idpf_vport_ctrl_lock(netdev);
 	vport = idpf_netdev_to_vport(netdev);
 
-	idpf_vport_stop(vport);
+	idpf_vport_stop(vport, false);
 
 	idpf_vport_ctrl_unlock(netdev);
 
@@ -1028,7 +1035,7 @@ static void idpf_vport_dealloc(struct idpf_vport *vport)
 	idpf_idc_deinit_vport_aux_device(vport->vdev_info);
 
 	idpf_deinit_mac_addr(vport);
-	idpf_vport_stop(vport);
+	idpf_vport_stop(vport, true);
 
 	if (!test_bit(IDPF_HR_RESET_IN_PROG, adapter->flags))
 		idpf_decfg_netdev(vport);
@@ -1369,8 +1376,9 @@ static void idpf_rx_init_buf_tail(struct idpf_vport *vport)
 /**
  * idpf_vport_open - Bring up a vport
  * @vport: vport to bring up
+ * @rtnl: whether to take RTNL lock
  */
-static int idpf_vport_open(struct idpf_vport *vport)
+static int idpf_vport_open(struct idpf_vport *vport, bool rtnl)
 {
 	struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
 	struct idpf_adapter *adapter = vport->adapter;
@@ -1380,6 +1388,9 @@ static int idpf_vport_open(struct idpf_vport *vport)
 	if (np->state != __IDPF_VPORT_DOWN)
 		return -EBUSY;
 
+	if (rtnl)
+		rtnl_lock();
+
 	/* we do not allow interface up just yet */
 	netif_carrier_off(vport->netdev);
 
@@ -1387,7 +1398,7 @@ static int idpf_vport_open(struct idpf_vport *vport)
 	if (err) {
 		dev_err(&adapter->pdev->dev, "Failed to allocate interrupts for vport %u: %d\n",
 			vport->vport_id, err);
-		return err;
+		goto err_rtnl_unlock;
 	}
 
 	err = idpf_vport_queues_alloc(vport);
@@ -1474,6 +1485,9 @@ static int idpf_vport_open(struct idpf_vport *vport)
 		goto deinit_rss;
 	}
 
+	if (rtnl)
+		rtnl_unlock();
+
 	return 0;
 
 deinit_rss:
@@ -1491,6 +1505,10 @@ static int idpf_vport_open(struct idpf_vport *vport)
 intr_rel:
 	idpf_vport_intr_rel(vport);
 
+err_rtnl_unlock:
+	if (rtnl)
+		rtnl_unlock();
+
 	return err;
 }
 
@@ -1571,7 +1589,7 @@ void idpf_init_task(struct work_struct *work)
 	np = netdev_priv(vport->netdev);
 	np->state = __IDPF_VPORT_DOWN;
 	if (test_and_clear_bit(IDPF_VPORT_UP_REQUESTED, vport_config->flags))
-		idpf_vport_open(vport);
+		idpf_vport_open(vport, true);
 
 	/* Spawn and return 'idpf_init_task' work queue until all the
 	 * default vports are created
@@ -1961,7 +1979,7 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
 		idpf_send_delete_queues_msg(vport);
 	} else {
 		set_bit(IDPF_VPORT_DEL_QUEUES, vport->flags);
-		idpf_vport_stop(vport);
+		idpf_vport_stop(vport, false);
 	}
 
 	idpf_deinit_rss(vport);
@@ -1991,7 +2009,7 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
 		goto err_open;
 
 	if (current_state == __IDPF_VPORT_UP)
-		err = idpf_vport_open(vport);
+		err = idpf_vport_open(vport, false);
 
 	goto free_vport;
 
@@ -2001,7 +2019,7 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
 
 err_open:
	if (current_state == __IDPF_VPORT_UP)
-		idpf_vport_open(vport);
+		idpf_vport_open(vport, false);
 
 free_vport:
 	kfree(new_vport);
@@ -2239,7 +2257,7 @@ static int idpf_open(struct net_device *netdev)
 	if (err)
 		goto unlock;
 
-	err = idpf_vport_open(vport);
+	err = idpf_vport_open(vport, false);
 
 unlock:
 	idpf_vport_ctrl_unlock(netdev);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index e0b0a05c998f..34dc12cf5b21 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -3508,6 +3508,20 @@ void idpf_vport_intr_rel(struct idpf_vport *vport)
 	vport->q_vectors = NULL;
 }
 
+static void idpf_q_vector_set_napi(struct idpf_q_vector *q_vector, bool link)
+{
+	struct napi_struct *napi = link ? &q_vector->napi : NULL;
+	struct net_device *dev = q_vector->vport->netdev;
+
+	for (u32 i = 0; i < q_vector->num_rxq; i++)
+		netif_queue_set_napi(dev, q_vector->rx[i]->idx,
+				     NETDEV_QUEUE_TYPE_RX, napi);
+
+	for (u32 i = 0; i < q_vector->num_txq; i++)
+		netif_queue_set_napi(dev, q_vector->tx[i]->idx,
+				     NETDEV_QUEUE_TYPE_TX, napi);
+}
+
 /**
  * idpf_vport_intr_rel_irq - Free the IRQ association with the OS
  * @vport: main vport structure
@@ -3528,6 +3542,7 @@ static void idpf_vport_intr_rel_irq(struct idpf_vport *vport)
 		vidx = vport->q_vector_idxs[vector];
 		irq_num = adapter->msix_entries[vidx].vector;
 
+		idpf_q_vector_set_napi(q_vector, false);
 		kfree(free_irq(irq_num, q_vector));
 	}
 }
@@ -3715,6 +3730,8 @@ static int idpf_vport_intr_req_irq(struct idpf_vport *vport)
 				   "Request_irq failed, error: %d\n", err);
 			goto free_q_irqs;
 		}
+
+		idpf_q_vector_set_napi(q_vector, true);
 	}
 
 	return 0;
-- 
2.50.1