From: Konrad Dybcio
Date: Mon, 09 Mar 2026 11:32:59 +0100
Subject: [PATCH RFC/RFT 1/3] thunderbolt: Move pci_device out of tb_nhi
Message-Id: <20260309-topic-usb4_nonpcie_prepwork-v1-1-d901d85fc794@oss.qualcomm.com>
References: <20260309-topic-usb4_nonpcie_prepwork-v1-0-d901d85fc794@oss.qualcomm.com>
In-Reply-To: <20260309-topic-usb4_nonpcie_prepwork-v1-0-d901d85fc794@oss.qualcomm.com>
To: Andreas Noever, Mika Westerberg, Yehezkel Bernat
Cc: linux-kernel@vger.kernel.org, linux-usb@vger.kernel.org, usb4-upstream@oss.qualcomm.com, Raghavendra Thoorpu, Konrad Dybcio

Not all USB4/TB implementations are based on a PCIe-attached controller. To make way for these, start by moving the pci_device reference out of the main tb_nhi structure. Encapsulate the existing struct in a new tb_nhi_pci, which will also house all properties that relate to the parent bus. Similarly, any other type of controller will be expected to embed tb_nhi as a member.
Signed-off-by: Konrad Dybcio
---
 drivers/thunderbolt/acpi.c      |  14 +--
 drivers/thunderbolt/ctl.c       |  14 +--
 drivers/thunderbolt/domain.c    |   2 +-
 drivers/thunderbolt/eeprom.c    |   2 +-
 drivers/thunderbolt/icm.c       |  25 ++--
 drivers/thunderbolt/nhi.c       | 247 ++++++++++++++++++++++++++++++----------
 drivers/thunderbolt/nhi.h       |  11 ++
 drivers/thunderbolt/nhi_ops.c   |  29 +++--
 drivers/thunderbolt/nhi_pci.h   |  20 ++++
 drivers/thunderbolt/switch.c    |  41 ++-----
 drivers/thunderbolt/tb.c        |  69 -----------
 drivers/thunderbolt/tb.h        |  10 +-
 drivers/thunderbolt/usb4_port.c |   2 +-
 include/linux/thunderbolt.h     |   5 +-
 14 files changed, 283 insertions(+), 208 deletions(-)

diff --git a/drivers/thunderbolt/acpi.c b/drivers/thunderbolt/acpi.c
index 45d1415871b4..53546bc477a5 100644
--- a/drivers/thunderbolt/acpi.c
+++ b/drivers/thunderbolt/acpi.c
@@ -28,7 +28,7 @@ static acpi_status tb_acpi_add_link(acpi_handle handle, u32 level, void *data,
 		return AE_OK;
 
 	/* It needs to reference this NHI */
-	if (dev_fwnode(&nhi->pdev->dev) != fwnode)
+	if (dev_fwnode(nhi->dev) != fwnode)
 		goto out_put;
 
 	/*
@@ -57,16 +57,16 @@ static acpi_status tb_acpi_add_link(acpi_handle handle, u32 level, void *data,
 	 */
 	pm_runtime_get_sync(&pdev->dev);
 
-	link = device_link_add(&pdev->dev, &nhi->pdev->dev,
+	link = device_link_add(&pdev->dev, nhi->dev,
 			       DL_FLAG_AUTOREMOVE_SUPPLIER |
 			       DL_FLAG_RPM_ACTIVE |
 			       DL_FLAG_PM_RUNTIME);
 	if (link) {
-		dev_dbg(&nhi->pdev->dev, "created link from %s\n",
+		dev_dbg(nhi->dev, "created link from %s\n",
 			dev_name(&pdev->dev));
 		*(bool *)ret = true;
 	} else {
-		dev_warn(&nhi->pdev->dev, "device link creation from %s failed\n",
+		dev_warn(nhi->dev, "device link creation from %s failed\n",
 			 dev_name(&pdev->dev));
 	}
 
@@ -93,7 +93,7 @@ bool tb_acpi_add_links(struct tb_nhi *nhi)
 	acpi_status status;
 	bool ret = false;
 
-	if (!has_acpi_companion(&nhi->pdev->dev))
+	if (!has_acpi_companion(nhi->dev))
 		return false;
 
 	/*
@@ -103,7 +103,7 @@ bool tb_acpi_add_links(struct tb_nhi *nhi)
status =3D acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT, 32, tb_acpi_add_link, NULL, nhi, (void **)&ret); if (ACPI_FAILURE(status)) { - dev_warn(&nhi->pdev->dev, "failed to enumerate tunneled ports\n"); + dev_warn(nhi->dev, "failed to enumerate tunneled ports\n"); return false; } =20 @@ -305,7 +305,7 @@ static struct acpi_device *tb_acpi_switch_find_companio= n(struct tb_switch *sw) struct tb_nhi *nhi =3D sw->tb->nhi; struct acpi_device *parent_adev; =20 - parent_adev =3D ACPI_COMPANION(&nhi->pdev->dev); + parent_adev =3D ACPI_COMPANION(nhi->dev); if (parent_adev) adev =3D acpi_find_child_device(parent_adev, 0, false); } diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c index b2fd60fc7bcc..a22c8c2a301d 100644 --- a/drivers/thunderbolt/ctl.c +++ b/drivers/thunderbolt/ctl.c @@ -56,22 +56,22 @@ struct tb_ctl { =20 =20 #define tb_ctl_WARN(ctl, format, arg...) \ - dev_WARN(&(ctl)->nhi->pdev->dev, format, ## arg) + dev_WARN((ctl)->nhi->dev, format, ## arg) =20 #define tb_ctl_err(ctl, format, arg...) \ - dev_err(&(ctl)->nhi->pdev->dev, format, ## arg) + dev_err((ctl)->nhi->dev, format, ## arg) =20 #define tb_ctl_warn(ctl, format, arg...) \ - dev_warn(&(ctl)->nhi->pdev->dev, format, ## arg) + dev_warn((ctl)->nhi->dev, format, ## arg) =20 #define tb_ctl_info(ctl, format, arg...) \ - dev_info(&(ctl)->nhi->pdev->dev, format, ## arg) + dev_info((ctl)->nhi->dev, format, ## arg) =20 #define tb_ctl_dbg(ctl, format, arg...) \ - dev_dbg(&(ctl)->nhi->pdev->dev, format, ## arg) + dev_dbg((ctl)->nhi->dev, format, ## arg) =20 #define tb_ctl_dbg_once(ctl, format, arg...) 
\ - dev_dbg_once(&(ctl)->nhi->pdev->dev, format, ## arg) + dev_dbg_once((ctl)->nhi->dev, format, ## arg) =20 static DECLARE_WAIT_QUEUE_HEAD(tb_cfg_request_cancel_queue); /* Serializes access to request kref_get/put */ @@ -666,7 +666,7 @@ struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, int ind= ex, int timeout_msec, =20 mutex_init(&ctl->request_queue_lock); INIT_LIST_HEAD(&ctl->request_queue); - ctl->frame_pool =3D dma_pool_create("thunderbolt_ctl", &nhi->pdev->dev, + ctl->frame_pool =3D dma_pool_create("thunderbolt_ctl", nhi->dev, TB_FRAME_SIZE, 4, 0); if (!ctl->frame_pool) goto err; diff --git a/drivers/thunderbolt/domain.c b/drivers/thunderbolt/domain.c index 317780b99992..8e332a9ad625 100644 --- a/drivers/thunderbolt/domain.c +++ b/drivers/thunderbolt/domain.c @@ -402,7 +402,7 @@ struct tb *tb_domain_alloc(struct tb_nhi *nhi, int time= out_msec, size_t privsize if (!tb->ctl) goto err_destroy_wq; =20 - tb->dev.parent =3D &nhi->pdev->dev; + tb->dev.parent =3D nhi->dev; tb->dev.bus =3D &tb_bus_type; tb->dev.type =3D &tb_domain_type; tb->dev.groups =3D domain_attr_groups; diff --git a/drivers/thunderbolt/eeprom.c b/drivers/thunderbolt/eeprom.c index 5477b9437048..5681c17f82ec 100644 --- a/drivers/thunderbolt/eeprom.c +++ b/drivers/thunderbolt/eeprom.c @@ -465,7 +465,7 @@ static void tb_switch_drom_free(struct tb_switch *sw) */ static int tb_drom_copy_efi(struct tb_switch *sw, u16 *size) { - struct device *dev =3D &sw->tb->nhi->pdev->dev; + struct device *dev =3D sw->tb->nhi->dev; int len, res; =20 len =3D device_property_count_u8(dev, "ThunderboltDROM"); diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c index 9d95bf3ab44c..ba79907247c3 100644 --- a/drivers/thunderbolt/icm.c +++ b/drivers/thunderbolt/icm.c @@ -20,6 +20,7 @@ #include =20 #include "ctl.h" +#include "nhi_pci.h" #include "nhi_regs.h" #include "tb.h" #include "tunnel.h" @@ -1455,6 +1456,7 @@ static struct pci_dev *get_upstream_port(struct pci_d= ev *pdev) =20 static bool 
icm_ar_is_supported(struct tb *tb) { + struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(tb->nhi); struct pci_dev *upstream_port; struct icm *icm =3D tb_priv(tb); =20 @@ -1472,7 +1474,7 @@ static bool icm_ar_is_supported(struct tb *tb) * Find the upstream PCIe port in case we need to do reset * through its vendor specific registers. */ - upstream_port =3D get_upstream_port(tb->nhi->pdev); + upstream_port =3D get_upstream_port(nhi_pci->pdev); if (upstream_port) { int cap; =20 @@ -1508,7 +1510,7 @@ static int icm_ar_get_mode(struct tb *tb) } while (--retries); =20 if (!retries) { - dev_err(&nhi->pdev->dev, "ICM firmware not authenticated\n"); + dev_err(nhi->dev, "ICM firmware not authenticated\n"); return -ENODEV; } =20 @@ -1674,11 +1676,11 @@ icm_icl_driver_ready(struct tb *tb, enum tb_securit= y_level *security_level, =20 static void icm_icl_set_uuid(struct tb *tb) { - struct tb_nhi *nhi =3D tb->nhi; + struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(tb->nhi); u32 uuid[4]; =20 - pci_read_config_dword(nhi->pdev, VS_CAP_10, &uuid[0]); - pci_read_config_dword(nhi->pdev, VS_CAP_11, &uuid[1]); + pci_read_config_dword(nhi_pci->pdev, VS_CAP_10, &uuid[0]); + pci_read_config_dword(nhi_pci->pdev, VS_CAP_11, &uuid[1]); uuid[2] =3D 0xffffffff; uuid[3] =3D 0xffffffff; =20 @@ -1853,7 +1855,7 @@ static int icm_firmware_start(struct tb *tb, struct t= b_nhi *nhi) if (icm_firmware_running(nhi)) return 0; =20 - dev_dbg(&nhi->pdev->dev, "starting ICM firmware\n"); + dev_dbg(nhi->dev, "starting ICM firmware\n"); =20 ret =3D icm_firmware_reset(tb, nhi); if (ret) @@ -1948,7 +1950,7 @@ static int icm_firmware_init(struct tb *tb) =20 ret =3D icm_firmware_start(tb, nhi); if (ret) { - dev_err(&nhi->pdev->dev, "could not start ICM firmware\n"); + dev_err(nhi->dev, "could not start ICM firmware\n"); return ret; } =20 @@ -1980,10 +1982,10 @@ static int icm_firmware_init(struct tb *tb) */ ret =3D icm_reset_phy_port(tb, 0); if (ret) - dev_warn(&nhi->pdev->dev, "failed to reset links on port0\n"); + 
dev_warn(nhi->dev, "failed to reset links on port0\n"); ret =3D icm_reset_phy_port(tb, 1); if (ret) - dev_warn(&nhi->pdev->dev, "failed to reset links on port1\n"); + dev_warn(nhi->dev, "failed to reset links on port1\n"); =20 return 0; } @@ -2462,6 +2464,7 @@ static const struct tb_cm_ops icm_icl_ops =3D { =20 struct tb *icm_probe(struct tb_nhi *nhi) { + struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(nhi); struct icm *icm; struct tb *tb; =20 @@ -2473,7 +2476,7 @@ struct tb *icm_probe(struct tb_nhi *nhi) INIT_DELAYED_WORK(&icm->rescan_work, icm_rescan_work); mutex_init(&icm->request_lock); =20 - switch (nhi->pdev->device) { + switch (nhi_pci->pdev->device) { case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI: case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI: icm->can_upgrade_nvm =3D true; @@ -2579,7 +2582,7 @@ struct tb *icm_probe(struct tb_nhi *nhi) } =20 if (!icm->is_supported || !icm->is_supported(tb)) { - dev_dbg(&nhi->pdev->dev, "ICM not supported on this controller\n"); + dev_dbg(nhi->dev, "ICM not supported on this controller\n"); tb_domain_put(tb); return NULL; } diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c index ccce020a2432..18710bafef20 100644 --- a/drivers/thunderbolt/nhi.c +++ b/drivers/thunderbolt/nhi.c @@ -18,11 +18,13 @@ #include #include #include +#include #include #include #include =20 #include "nhi.h" +#include "nhi_pci.h" #include "nhi_regs.h" #include "tb.h" =20 @@ -139,12 +141,12 @@ static void ring_interrupt_active(struct tb_ring *rin= g, bool active) else new =3D old & ~mask; =20 - dev_dbg(&ring->nhi->pdev->dev, + dev_dbg(ring->nhi->dev, "%s interrupt at register %#x bit %d (%#x -> %#x)\n", active ? 
"enabling" : "disabling", reg, interrupt_bit, old, new); =20 if (new =3D=3D old) - dev_WARN(&ring->nhi->pdev->dev, + dev_WARN(ring->nhi->dev, "interrupt for %s %d is already %s\n", RING_TYPE(ring), ring->hop, str_enabled_disabled(active)); @@ -462,19 +464,20 @@ static irqreturn_t ring_msix(int irq, void *data) static int ring_request_msix(struct tb_ring *ring, bool no_suspend) { struct tb_nhi *nhi =3D ring->nhi; + struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(nhi); unsigned long irqflags; int ret; =20 - if (!nhi->pdev->msix_enabled) + if (!nhi_pci->pdev->msix_enabled) return 0; =20 - ret =3D ida_alloc_max(&nhi->msix_ida, MSIX_MAX_VECS - 1, GFP_KERNEL); + ret =3D ida_alloc_max(&nhi_pci->msix_ida, MSIX_MAX_VECS - 1, GFP_KERNEL); if (ret < 0) return ret; =20 ring->vector =3D ret; =20 - ret =3D pci_irq_vector(ring->nhi->pdev, ring->vector); + ret =3D pci_irq_vector(nhi_pci->pdev, ring->vector); if (ret < 0) goto err_ida_remove; =20 @@ -488,18 +491,20 @@ static int ring_request_msix(struct tb_ring *ring, bo= ol no_suspend) return 0; =20 err_ida_remove: - ida_free(&nhi->msix_ida, ring->vector); + ida_free(&nhi_pci->msix_ida, ring->vector); =20 return ret; } =20 static void ring_release_msix(struct tb_ring *ring) { + struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(ring->nhi); + if (ring->irq <=3D 0) return; =20 free_irq(ring->irq, ring); - ida_free(&ring->nhi->msix_ida, ring->vector); + ida_free(&nhi_pci->msix_ida, ring->vector); ring->vector =3D 0; ring->irq =3D 0; } @@ -512,7 +517,7 @@ static int nhi_alloc_hop(struct tb_nhi *nhi, struct tb_= ring *ring) if (nhi->quirks & QUIRK_E2E) { start_hop =3D RING_FIRST_USABLE_HOPID + 1; if (ring->flags & RING_FLAG_E2E && !ring->is_tx) { - dev_dbg(&nhi->pdev->dev, "quirking E2E TX HopID %u -> %u\n", + dev_dbg(nhi->dev, "quirking E2E TX HopID %u -> %u\n", ring->e2e_tx_hop, RING_E2E_RESERVED_HOPID); ring->e2e_tx_hop =3D RING_E2E_RESERVED_HOPID; } @@ -543,23 +548,23 @@ static int nhi_alloc_hop(struct tb_nhi *nhi, struct t= b_ring *ring) } =20 
if (ring->hop > 0 && ring->hop < start_hop) { - dev_warn(&nhi->pdev->dev, "invalid hop: %d\n", ring->hop); + dev_warn(nhi->dev, "invalid hop: %d\n", ring->hop); ret =3D -EINVAL; goto err_unlock; } if (ring->hop < 0 || ring->hop >=3D nhi->hop_count) { - dev_warn(&nhi->pdev->dev, "invalid hop: %d\n", ring->hop); + dev_warn(nhi->dev, "invalid hop: %d\n", ring->hop); ret =3D -EINVAL; goto err_unlock; } if (ring->is_tx && nhi->tx_rings[ring->hop]) { - dev_warn(&nhi->pdev->dev, "TX hop %d already allocated\n", + dev_warn(nhi->dev, "TX hop %d already allocated\n", ring->hop); ret =3D -EBUSY; goto err_unlock; } if (!ring->is_tx && nhi->rx_rings[ring->hop]) { - dev_warn(&nhi->pdev->dev, "RX hop %d already allocated\n", + dev_warn(nhi->dev, "RX hop %d already allocated\n", ring->hop); ret =3D -EBUSY; goto err_unlock; @@ -584,7 +589,7 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi= , u32 hop, int size, { struct tb_ring *ring =3D NULL; =20 - dev_dbg(&nhi->pdev->dev, "allocating %s ring %d of size %d\n", + dev_dbg(nhi->dev, "allocating %s ring %d of size %d\n", transmit ? 
"TX" : "RX", hop, size); =20 ring =3D kzalloc_obj(*ring); @@ -610,14 +615,16 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *n= hi, u32 hop, int size, ring->start_poll =3D start_poll; ring->poll_data =3D poll_data; =20 - ring->descriptors =3D dma_alloc_coherent(&ring->nhi->pdev->dev, + ring->descriptors =3D dma_alloc_coherent(ring->nhi->dev, size * sizeof(*ring->descriptors), &ring->descriptors_dma, GFP_KERNEL | __GFP_ZERO); if (!ring->descriptors) goto err_free_ring; =20 - if (ring_request_msix(ring, flags & RING_FLAG_NO_SUSPEND)) - goto err_free_descs; + if (nhi->ops && nhi->ops->request_ring_irq) { + if (nhi->ops->request_ring_irq(ring, flags & RING_FLAG_NO_SUSPEND)) + goto err_free_descs; + } =20 if (nhi_alloc_hop(nhi, ring)) goto err_release_msix; @@ -625,9 +632,10 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nh= i, u32 hop, int size, return ring; =20 err_release_msix: - ring_release_msix(ring); + if (nhi->ops && nhi->ops->release_ring_irq) + nhi->ops->release_ring_irq(ring); err_free_descs: - dma_free_coherent(&ring->nhi->pdev->dev, + dma_free_coherent(ring->nhi->dev, ring->size * sizeof(*ring->descriptors), ring->descriptors, ring->descriptors_dma); err_free_ring: @@ -694,10 +702,10 @@ void tb_ring_start(struct tb_ring *ring) if (ring->nhi->going_away) goto err; if (ring->running) { - dev_WARN(&ring->nhi->pdev->dev, "ring already started\n"); + dev_WARN(ring->nhi->dev, "ring already started\n"); goto err; } - dev_dbg(&ring->nhi->pdev->dev, "starting %s %d\n", + dev_dbg(ring->nhi->dev, "starting %s %d\n", RING_TYPE(ring), ring->hop); =20 if (ring->flags & RING_FLAG_FRAME) { @@ -734,11 +742,11 @@ void tb_ring_start(struct tb_ring *ring) hop &=3D REG_RX_OPTIONS_E2E_HOP_MASK; flags |=3D hop; =20 - dev_dbg(&ring->nhi->pdev->dev, + dev_dbg(ring->nhi->dev, "enabling E2E for %s %d with TX HopID %d\n", RING_TYPE(ring), ring->hop, ring->e2e_tx_hop); } else { - dev_dbg(&ring->nhi->pdev->dev, "enabling E2E for %s %d\n", + dev_dbg(ring->nhi->dev, 
"enabling E2E for %s %d\n", RING_TYPE(ring), ring->hop); } =20 @@ -772,12 +780,12 @@ void tb_ring_stop(struct tb_ring *ring) { spin_lock_irq(&ring->nhi->lock); spin_lock(&ring->lock); - dev_dbg(&ring->nhi->pdev->dev, "stopping %s %d\n", + dev_dbg(ring->nhi->dev, "stopping %s %d\n", RING_TYPE(ring), ring->hop); if (ring->nhi->going_away) goto err; if (!ring->running) { - dev_WARN(&ring->nhi->pdev->dev, "%s %d already stopped\n", + dev_WARN(ring->nhi->dev, "%s %d already stopped\n", RING_TYPE(ring), ring->hop); goto err; } @@ -815,6 +823,8 @@ EXPORT_SYMBOL_GPL(tb_ring_stop); */ void tb_ring_free(struct tb_ring *ring) { + struct tb_nhi *nhi =3D ring->nhi; + spin_lock_irq(&ring->nhi->lock); /* * Dissociate the ring from the NHI. This also ensures that @@ -826,14 +836,15 @@ void tb_ring_free(struct tb_ring *ring) ring->nhi->rx_rings[ring->hop] =3D NULL; =20 if (ring->running) { - dev_WARN(&ring->nhi->pdev->dev, "%s %d still running\n", + dev_WARN(ring->nhi->dev, "%s %d still running\n", RING_TYPE(ring), ring->hop); } spin_unlock_irq(&ring->nhi->lock); =20 - ring_release_msix(ring); + if (nhi->ops && nhi->ops->release_ring_irq) + nhi->ops->release_ring_irq(ring); =20 - dma_free_coherent(&ring->nhi->pdev->dev, + dma_free_coherent(ring->nhi->dev, ring->size * sizeof(*ring->descriptors), ring->descriptors, ring->descriptors_dma); =20 @@ -841,7 +852,7 @@ void tb_ring_free(struct tb_ring *ring) ring->descriptors_dma =3D 0; =20 =20 - dev_dbg(&ring->nhi->pdev->dev, "freeing %s %d\n", RING_TYPE(ring), + dev_dbg(ring->nhi->dev, "freeing %s %d\n", RING_TYPE(ring), ring->hop); =20 /* @@ -940,7 +951,7 @@ static void nhi_interrupt_work(struct work_struct *work) if ((value & (1 << (bit % 32))) =3D=3D 0) continue; if (type =3D=3D 2) { - dev_warn(&nhi->pdev->dev, + dev_warn(nhi->dev, "RX overflow for ring %d\n", hop); continue; @@ -950,7 +961,7 @@ static void nhi_interrupt_work(struct work_struct *work) else ring =3D nhi->rx_rings[hop]; if (ring =3D=3D NULL) { - 
dev_warn(&nhi->pdev->dev, + dev_warn(nhi->dev, "got interrupt for inactive %s ring %d\n", type ? "RX" : "TX", hop); @@ -1139,16 +1150,17 @@ static int nhi_runtime_resume(struct device *dev) =20 static void nhi_shutdown(struct tb_nhi *nhi) { + struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(nhi); int i; =20 - dev_dbg(&nhi->pdev->dev, "shutdown\n"); + dev_dbg(nhi->dev, "shutdown\n"); =20 for (i =3D 0; i < nhi->hop_count; i++) { if (nhi->tx_rings[i]) - dev_WARN(&nhi->pdev->dev, + dev_WARN(nhi->dev, "TX ring %d is still active\n", i); if (nhi->rx_rings[i]) - dev_WARN(&nhi->pdev->dev, + dev_WARN(nhi->dev, "RX ring %d is still active\n", i); } nhi_disable_interrupts(nhi); @@ -1156,19 +1168,21 @@ static void nhi_shutdown(struct tb_nhi *nhi) * We have to release the irq before calling flush_work. Otherwise an * already executing IRQ handler could call schedule_work again. */ - if (!nhi->pdev->msix_enabled) { - devm_free_irq(&nhi->pdev->dev, nhi->pdev->irq, nhi); + if (!nhi_pci->pdev->msix_enabled) { + devm_free_irq(nhi->dev, nhi_pci->pdev->irq, nhi); flush_work(&nhi->interrupt_work); } - ida_destroy(&nhi->msix_ida); + ida_destroy(&nhi_pci->msix_ida); =20 if (nhi->ops && nhi->ops->shutdown) nhi->ops->shutdown(nhi); } =20 -static void nhi_check_quirks(struct tb_nhi *nhi) +static void nhi_check_quirks(struct tb_nhi_pci *nhi_pci) { - if (nhi->pdev->vendor =3D=3D PCI_VENDOR_ID_INTEL) { + struct tb_nhi *nhi =3D &nhi_pci->nhi; + + if (nhi_pci->pdev->vendor =3D=3D PCI_VENDOR_ID_INTEL) { /* * Intel hardware supports auto clear of the interrupt * status register right after interrupt is being @@ -1176,7 +1190,7 @@ static void nhi_check_quirks(struct tb_nhi *nhi) */ nhi->quirks |=3D QUIRK_AUTO_CLEAR_INT; =20 - switch (nhi->pdev->device) { + switch (nhi_pci->pdev->device) { case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI: case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI: /* @@ -1190,7 +1204,7 @@ static void nhi_check_quirks(struct tb_nhi *nhi) } } =20 -static int nhi_check_iommu_pdev(struct 
pci_dev *pdev, void *data) +static int nhi_check_iommu_pci_dev(struct pci_dev *pdev, void *data) { if (!pdev->external_facing || !device_iommu_capable(&pdev->dev, IOMMU_CAP_PRE_BOOT_PROTECTION)) @@ -1199,9 +1213,10 @@ static int nhi_check_iommu_pdev(struct pci_dev *pdev= , void *data) return 1; /* Stop walking */ } =20 -static void nhi_check_iommu(struct tb_nhi *nhi) +static void nhi_check_iommu(struct tb_nhi_pci *nhi_pci) { - struct pci_bus *bus =3D nhi->pdev->bus; + struct pci_bus *bus =3D nhi_pci->pdev->bus; + struct tb_nhi *nhi =3D &nhi_pci->nhi; bool port_ok =3D false; =20 /* @@ -1224,10 +1239,10 @@ static void nhi_check_iommu(struct tb_nhi *nhi) while (bus->parent) bus =3D bus->parent; =20 - pci_walk_bus(bus, nhi_check_iommu_pdev, &port_ok); + pci_walk_bus(bus, nhi_check_iommu_pci_dev, &port_ok); =20 nhi->iommu_dma_protection =3D port_ok; - dev_dbg(&nhi->pdev->dev, "IOMMU DMA protection is %s\n", + dev_dbg(nhi->dev, "IOMMU DMA protection is %s\n", str_enabled_disabled(port_ok)); } =20 @@ -1242,7 +1257,7 @@ static void nhi_reset(struct tb_nhi *nhi) return; =20 if (!host_reset) { - dev_dbg(&nhi->pdev->dev, "skipping host router reset\n"); + dev_dbg(nhi->dev, "skipping host router reset\n"); return; } =20 @@ -1253,27 +1268,23 @@ static void nhi_reset(struct tb_nhi *nhi) do { val =3D ioread32(nhi->iobase + REG_RESET); if (!(val & REG_RESET_HRR)) { - dev_warn(&nhi->pdev->dev, "host router reset successful\n"); + dev_warn(nhi->dev, "host router reset successful\n"); return; } usleep_range(10, 20); } while (ktime_before(ktime_get(), timeout)); =20 - dev_warn(&nhi->pdev->dev, "timeout resetting host router\n"); + dev_warn(nhi->dev, "timeout resetting host router\n"); } =20 -static int nhi_init_msi(struct tb_nhi *nhi) +static int nhi_init_msi(struct tb_nhi_pci *nhi_pci) { - struct pci_dev *pdev =3D nhi->pdev; + struct pci_dev *pdev =3D nhi_pci->pdev; + struct tb_nhi *nhi =3D &nhi_pci->nhi; struct device *dev =3D &pdev->dev; int res, irq, nvec; =20 - /* In case someone 
left them on. */ - nhi_disable_interrupts(nhi); - - nhi_enable_int_throttling(nhi); - - ida_init(&nhi->msix_ida); + ida_init(&nhi_pci->msix_ida); =20 /* * The NHI has 16 MSI-X vectors or a single MSI. We first try to @@ -1290,7 +1301,7 @@ static int nhi_init_msi(struct tb_nhi *nhi) =20 INIT_WORK(&nhi->interrupt_work, nhi_interrupt_work); =20 - irq =3D pci_irq_vector(nhi->pdev, 0); + irq =3D pci_irq_vector(nhi_pci->pdev, 0); if (irq < 0) return irq; =20 @@ -1313,6 +1324,39 @@ static bool nhi_imr_valid(struct pci_dev *pdev) return true; } =20 +void nhi_pci_start_dma_port(struct tb_nhi *nhi) +{ + struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(nhi); + struct pci_dev *root_port; + + /* + * During host router NVM upgrade we should not allow root port to + * go into D3cold because some root ports cannot trigger PME + * itself. To be on the safe side keep the root port in D0 during + * the whole upgrade process. + */ + root_port =3D pcie_find_root_port(nhi_pci->pdev); + if (root_port) + pm_runtime_get_noresume(&root_port->dev); +} + +void nhi_pci_complete_dma_port(struct tb_nhi *nhi) +{ + struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(nhi); + struct pci_dev *root_port; + + root_port =3D pcie_find_root_port(nhi_pci->pdev); + if (root_port) + pm_runtime_put(&root_port->dev); +} + +static const struct tb_nhi_ops pci_nhi_default_ops =3D { + .pre_nvm_auth =3D nhi_pci_start_dma_port, + .post_nvm_auth =3D nhi_pci_complete_dma_port, + .request_ring_irq =3D ring_request_msix, + .release_ring_irq =3D ring_release_msix, +}; + static struct tb *nhi_select_cm(struct tb_nhi *nhi) { struct tb *tb; @@ -1339,6 +1383,7 @@ static struct tb *nhi_select_cm(struct tb_nhi *nhi) static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id) { struct device *dev =3D &pdev->dev; + struct tb_nhi_pci *nhi_pci; struct tb_nhi *nhi; struct tb *tb; int res; @@ -1350,12 +1395,15 @@ static int nhi_probe(struct pci_dev *pdev, const st= ruct pci_device_id *id) if (res) return dev_err_probe(dev, res, 
"cannot enable PCI device, aborting\n"); =20 - nhi =3D devm_kzalloc(&pdev->dev, sizeof(*nhi), GFP_KERNEL); - if (!nhi) + nhi_pci =3D devm_kzalloc(dev, sizeof(*nhi_pci), GFP_KERNEL); + if (!nhi_pci) return -ENOMEM; =20 - nhi->pdev =3D pdev; - nhi->ops =3D (const struct tb_nhi_ops *)id->driver_data; + nhi_pci->pdev =3D pdev; + + nhi =3D &nhi_pci->nhi; + nhi->dev =3D dev; + nhi->ops =3D (const struct tb_nhi_ops *)id->driver_data ?: &pci_nhi_defau= lt_ops; =20 nhi->iobase =3D pcim_iomap_region(pdev, 0, "thunderbolt"); res =3D PTR_ERR_OR_ZERO(nhi->iobase); @@ -1372,11 +1420,15 @@ static int nhi_probe(struct pci_dev *pdev, const st= ruct pci_device_id *id) if (!nhi->tx_rings || !nhi->rx_rings) return -ENOMEM; =20 - nhi_check_quirks(nhi); - nhi_check_iommu(nhi); + nhi_check_quirks(nhi_pci); + nhi_check_iommu(nhi_pci); nhi_reset(nhi); =20 - res =3D nhi_init_msi(nhi); + /* In case someone left them on. */ + nhi_disable_interrupts(nhi); + nhi_enable_int_throttling(nhi); + + res =3D nhi_init_msi(nhi_pci); if (res) return dev_err_probe(dev, res, "cannot enable MSI, aborting\n"); =20 @@ -1458,6 +1510,75 @@ static const struct dev_pm_ops nhi_pm_ops =3D { .runtime_resume =3D nhi_runtime_resume, }; =20 +/* + * During suspend the Thunderbolt controller is reset and all PCIe + * tunnels are lost. The NHI driver will try to reestablish all tunnels + * during resume. This adds device links between the tunneled PCIe + * downstream ports and the NHI so that the device core will make sure + * NHI is resumed first before the rest. 
+ */ +bool tb_apple_add_links(struct tb_nhi *nhi) +{ + struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(nhi); + struct pci_dev *upstream, *pdev; + bool ret; + + if (!x86_apple_machine) + return false; + + switch (nhi_pci->pdev->device) { + case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE: + case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C: + case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI: + case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI: + break; + default: + return false; + } + + upstream =3D pci_upstream_bridge(nhi_pci->pdev); + while (upstream) { + if (!pci_is_pcie(upstream)) + return false; + if (pci_pcie_type(upstream) =3D=3D PCI_EXP_TYPE_UPSTREAM) + break; + upstream =3D pci_upstream_bridge(upstream); + } + + if (!upstream) + return false; + + /* + * For each hotplug downstream port, create add device link + * back to NHI so that PCIe tunnels can be re-established after + * sleep. + */ + ret =3D false; + for_each_pci_bridge(pdev, upstream->subordinate) { + const struct device_link *link; + + if (!pci_is_pcie(pdev)) + continue; + if (pci_pcie_type(pdev) !=3D PCI_EXP_TYPE_DOWNSTREAM || + !pdev->is_pciehp) + continue; + + link =3D device_link_add(&pdev->dev, nhi->dev, + DL_FLAG_AUTOREMOVE_SUPPLIER | + DL_FLAG_PM_RUNTIME); + if (link) { + dev_dbg(nhi->dev, "created link from %s\n", + dev_name(&pdev->dev)); + ret =3D true; + } else { + dev_warn(nhi->dev, "device link creation from %s failed\n", + dev_name(&pdev->dev)); + } + } + + return ret; +} + static struct pci_device_id nhi_ids[] =3D { /* * We have to specify class, the TB bridges use the same device and diff --git a/drivers/thunderbolt/nhi.h b/drivers/thunderbolt/nhi.h index 24ac4246d0ca..5534a3f0800a 100644 --- a/drivers/thunderbolt/nhi.h +++ b/drivers/thunderbolt/nhi.h @@ -29,6 +29,9 @@ enum nhi_mailbox_cmd { =20 int nhi_mailbox_cmd(struct tb_nhi *nhi, enum nhi_mailbox_cmd cmd, u32 data= ); enum nhi_fw_mode nhi_mailbox_mode(struct tb_nhi *nhi); +bool tb_apple_add_links(struct tb_nhi *nhi); +void nhi_pci_start_dma_port(struct tb_nhi 
*nhi); +void nhi_pci_complete_dma_port(struct tb_nhi *nhi); =20 /** * struct tb_nhi_ops - NHI specific optional operations @@ -38,6 +41,10 @@ enum nhi_fw_mode nhi_mailbox_mode(struct tb_nhi *nhi); * @runtime_suspend: NHI specific runtime_suspend hook * @runtime_resume: NHI specific runtime_resume hook * @shutdown: NHI specific shutdown + * @pre_nvm_auth: hook to run before TBT3 NVM authentication + * @post_nvm_auth: hook to run after TBT3 NVM authentication + * @request_ring_irq: NHI specific interrupt retrieval function pointer + * @release_ring_irq: NHI specific interrupt release function pointer */ struct tb_nhi_ops { int (*init)(struct tb_nhi *nhi); @@ -46,6 +53,10 @@ struct tb_nhi_ops { int (*runtime_suspend)(struct tb_nhi *nhi); int (*runtime_resume)(struct tb_nhi *nhi); void (*shutdown)(struct tb_nhi *nhi); + void (*pre_nvm_auth)(struct tb_nhi *nhi); + void (*post_nvm_auth)(struct tb_nhi *nhi); + int (*request_ring_irq)(struct tb_ring *ring, bool no_suspend); + void (*release_ring_irq)(struct tb_ring *ring); }; =20 extern const struct tb_nhi_ops icl_nhi_ops; diff --git a/drivers/thunderbolt/nhi_ops.c b/drivers/thunderbolt/nhi_ops.c index 96da07e88c52..da6083f45fad 100644 --- a/drivers/thunderbolt/nhi_ops.c +++ b/drivers/thunderbolt/nhi_ops.c @@ -10,6 +10,7 @@ #include =20 #include "nhi.h" +#include "nhi_pci.h" #include "nhi_regs.h" #include "tb.h" =20 @@ -24,7 +25,7 @@ static int check_for_device(struct device *dev, void *dat= a) =20 static bool icl_nhi_is_device_connected(struct tb_nhi *nhi) { - struct tb *tb =3D pci_get_drvdata(nhi->pdev); + struct tb *tb =3D dev_get_drvdata(nhi->dev); int ret; =20 ret =3D device_for_each_child(&tb->root_switch->dev, NULL, @@ -34,6 +35,7 @@ static bool icl_nhi_is_device_connected(struct tb_nhi *nh= i) =20 static int icl_nhi_force_power(struct tb_nhi *nhi, bool power) { + struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(nhi); u32 vs_cap; =20 /* @@ -48,7 +50,7 @@ static int icl_nhi_force_power(struct tb_nhi *nhi, bool p= ower) * 
The actual power management happens inside shared ACPI power * resources using standard ACPI methods. */ - pci_read_config_dword(nhi->pdev, VS_CAP_22, &vs_cap); + pci_read_config_dword(nhi_pci->pdev, VS_CAP_22, &vs_cap); if (power) { vs_cap &=3D ~VS_CAP_22_DMA_DELAY_MASK; vs_cap |=3D 0x22 << VS_CAP_22_DMA_DELAY_SHIFT; @@ -56,7 +58,7 @@ static int icl_nhi_force_power(struct tb_nhi *nhi, bool p= ower) } else { vs_cap &=3D ~VS_CAP_22_FORCE_POWER; } - pci_write_config_dword(nhi->pdev, VS_CAP_22, vs_cap); + pci_write_config_dword(nhi_pci->pdev, VS_CAP_22, vs_cap); =20 if (power) { unsigned int retries =3D 350; @@ -64,7 +66,7 @@ static int icl_nhi_force_power(struct tb_nhi *nhi, bool p= ower) =20 /* Wait until the firmware tells it is up and running */ do { - pci_read_config_dword(nhi->pdev, VS_CAP_9, &val); + pci_read_config_dword(nhi_pci->pdev, VS_CAP_9, &val); if (val & VS_CAP_9_FW_READY) return 0; usleep_range(3000, 3100); @@ -78,14 +80,16 @@ static int icl_nhi_force_power(struct tb_nhi *nhi, bool= power) =20 static void icl_nhi_lc_mailbox_cmd(struct tb_nhi *nhi, enum icl_lc_mailbox= _cmd cmd) { + struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(nhi); u32 data; =20 data =3D (cmd << VS_CAP_19_CMD_SHIFT) & VS_CAP_19_CMD_MASK; - pci_write_config_dword(nhi->pdev, VS_CAP_19, data | VS_CAP_19_VALID); + pci_write_config_dword(nhi_pci->pdev, VS_CAP_19, data | VS_CAP_19_VALID); } =20 static int icl_nhi_lc_mailbox_cmd_complete(struct tb_nhi *nhi, int timeout) { + struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(nhi); unsigned long end; u32 data; =20 @@ -94,7 +98,7 @@ static int icl_nhi_lc_mailbox_cmd_complete(struct tb_nhi = *nhi, int timeout) =20 end =3D jiffies + msecs_to_jiffies(timeout); do { - pci_read_config_dword(nhi->pdev, VS_CAP_18, &data); + pci_read_config_dword(nhi_pci->pdev, VS_CAP_18, &data); if (data & VS_CAP_18_DONE) goto clear; usleep_range(1000, 1100); @@ -104,24 +108,25 @@ static int icl_nhi_lc_mailbox_cmd_complete(struct tb_= nhi *nhi, int timeout) =20 clear: /* Clear 
the valid bit */ - pci_write_config_dword(nhi->pdev, VS_CAP_19, 0); + pci_write_config_dword(nhi_pci->pdev, VS_CAP_19, 0); return 0; } =20 static void icl_nhi_set_ltr(struct tb_nhi *nhi) { + struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(nhi); u32 max_ltr, ltr; =20 - pci_read_config_dword(nhi->pdev, VS_CAP_16, &max_ltr); + pci_read_config_dword(nhi_pci->pdev, VS_CAP_16, &max_ltr); max_ltr &=3D 0xffff; /* Program the same value for both snoop and no-snoop */ ltr =3D max_ltr << 16 | max_ltr; - pci_write_config_dword(nhi->pdev, VS_CAP_15, ltr); + pci_write_config_dword(nhi_pci->pdev, VS_CAP_15, ltr); } =20 static int icl_nhi_suspend(struct tb_nhi *nhi) { - struct tb *tb =3D pci_get_drvdata(nhi->pdev); + struct tb *tb =3D dev_get_drvdata(nhi->dev); int ret; =20 if (icl_nhi_is_device_connected(nhi)) @@ -144,7 +149,7 @@ static int icl_nhi_suspend(struct tb_nhi *nhi) =20 static int icl_nhi_suspend_noirq(struct tb_nhi *nhi, bool wakeup) { - struct tb *tb =3D pci_get_drvdata(nhi->pdev); + struct tb *tb =3D dev_get_drvdata(nhi->dev); enum icl_lc_mailbox_cmd cmd; =20 if (!pm_suspend_via_firmware()) @@ -182,4 +187,6 @@ const struct tb_nhi_ops icl_nhi_ops =3D { .runtime_suspend =3D icl_nhi_suspend, .runtime_resume =3D icl_nhi_resume, .shutdown =3D icl_nhi_shutdown, + .pre_nvm_auth =3D nhi_pci_start_dma_port, + .post_nvm_auth =3D nhi_pci_complete_dma_port, }; diff --git a/drivers/thunderbolt/nhi_pci.h b/drivers/thunderbolt/nhi_pci.h new file mode 100644 index 000000000000..9f686e0512e9 --- /dev/null +++ b/drivers/thunderbolt/nhi_pci.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries. 
+ */ + +#ifndef __TBT_NHI_PCI_H +#define __TBT_NHI_PCI_H + +struct tb_nhi_pci { + struct pci_dev *pdev; + struct ida msix_ida; + struct tb_nhi nhi; +}; + +static inline struct tb_nhi_pci *nhi_to_pci(struct tb_nhi *nhi) +{ + return container_of(nhi, struct tb_nhi_pci, nhi); +} + +#endif diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c index c2ad58b19e7b..9647650ee02d 100644 --- a/drivers/thunderbolt/switch.c +++ b/drivers/thunderbolt/switch.c @@ -209,30 +209,6 @@ static int nvm_authenticate_device_dma_port(struct tb_= switch *sw) return -ETIMEDOUT; } =20 -static void nvm_authenticate_start_dma_port(struct tb_switch *sw) -{ - struct pci_dev *root_port; - - /* - * During host router NVM upgrade we should not allow root port to - * go into D3cold because some root ports cannot trigger PME - * itself. To be on the safe side keep the root port in D0 during - * the whole upgrade process. - */ - root_port =3D pcie_find_root_port(sw->tb->nhi->pdev); - if (root_port) - pm_runtime_get_noresume(&root_port->dev); -} - -static void nvm_authenticate_complete_dma_port(struct tb_switch *sw) -{ - struct pci_dev *root_port; - - root_port =3D pcie_find_root_port(sw->tb->nhi->pdev); - if (root_port) - pm_runtime_put(&root_port->dev); -} - static inline bool nvm_readable(struct tb_switch *sw) { if (tb_switch_is_usb4(sw)) { @@ -258,6 +234,7 @@ static inline bool nvm_upgradeable(struct tb_switch *sw) =20 static int nvm_authenticate(struct tb_switch *sw, bool auth_only) { + struct tb_nhi *nhi =3D sw->tb->nhi; int ret; =20 if (tb_switch_is_usb4(sw)) { @@ -274,7 +251,8 @@ static int nvm_authenticate(struct tb_switch *sw, bool = auth_only) =20 sw->nvm->authenticating =3D true; if (!tb_route(sw)) { - nvm_authenticate_start_dma_port(sw); + if (nhi->ops && nhi->ops->pre_nvm_auth) + nhi->ops->pre_nvm_auth(nhi); ret =3D nvm_authenticate_host_dma_port(sw); } else { ret =3D nvm_authenticate_device_dma_port(sw); @@ -2743,6 +2721,7 @@ static int tb_switch_set_uuid(struct 
tb_switch *sw) =20 static int tb_switch_add_dma_port(struct tb_switch *sw) { + struct tb_nhi *nhi =3D sw->tb->nhi; u32 status; int ret; =20 @@ -2802,8 +2781,10 @@ static int tb_switch_add_dma_port(struct tb_switch *= sw) */ nvm_get_auth_status(sw, &status); if (status) { - if (!tb_route(sw)) - nvm_authenticate_complete_dma_port(sw); + if (!tb_route(sw)) { + if (nhi->ops && nhi->ops->post_nvm_auth) + nhi->ops->post_nvm_auth(nhi); + } return 0; } =20 @@ -2817,8 +2798,10 @@ static int tb_switch_add_dma_port(struct tb_switch *= sw) return ret; =20 /* Now we can allow root port to suspend again */ - if (!tb_route(sw)) - nvm_authenticate_complete_dma_port(sw); + if (!tb_route(sw)) { + if (nhi->ops && nhi->ops->post_nvm_auth) + nhi->ops->post_nvm_auth(nhi); + } =20 if (status) { tb_sw_info(sw, "switch flash authentication failed\n"); diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index c69c323e6952..0126e38d9396 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -10,7 +10,6 @@ #include #include #include -#include =20 #include "tb.h" #include "tb_regs.h" @@ -3295,74 +3294,6 @@ static const struct tb_cm_ops tb_cm_ops =3D { .disconnect_xdomain_paths =3D tb_disconnect_xdomain_paths, }; =20 -/* - * During suspend the Thunderbolt controller is reset and all PCIe - * tunnels are lost. The NHI driver will try to reestablish all tunnels - * during resume. This adds device links between the tunneled PCIe - * downstream ports and the NHI so that the device core will make sure - * NHI is resumed first before the rest. 
- */ -static bool tb_apple_add_links(struct tb_nhi *nhi) -{ - struct pci_dev *upstream, *pdev; - bool ret; - - if (!x86_apple_machine) - return false; - - switch (nhi->pdev->device) { - case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE: - case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C: - case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI: - case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI: - break; - default: - return false; - } - - upstream =3D pci_upstream_bridge(nhi->pdev); - while (upstream) { - if (!pci_is_pcie(upstream)) - return false; - if (pci_pcie_type(upstream) =3D=3D PCI_EXP_TYPE_UPSTREAM) - break; - upstream =3D pci_upstream_bridge(upstream); - } - - if (!upstream) - return false; - - /* - * For each hotplug downstream port, create add device link - * back to NHI so that PCIe tunnels can be re-established after - * sleep. - */ - ret =3D false; - for_each_pci_bridge(pdev, upstream->subordinate) { - const struct device_link *link; - - if (!pci_is_pcie(pdev)) - continue; - if (pci_pcie_type(pdev) !=3D PCI_EXP_TYPE_DOWNSTREAM || - !pdev->is_pciehp) - continue; - - link =3D device_link_add(&pdev->dev, &nhi->pdev->dev, - DL_FLAG_AUTOREMOVE_SUPPLIER | - DL_FLAG_PM_RUNTIME); - if (link) { - dev_dbg(&nhi->pdev->dev, "created link from %s\n", - dev_name(&pdev->dev)); - ret =3D true; - } else { - dev_warn(&nhi->pdev->dev, "device link creation from %s failed\n", - dev_name(&pdev->dev)); - } - } - - return ret; -} - struct tb *tb_probe(struct tb_nhi *nhi) { struct tb_cm *tcm; diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index e96474f17067..ee689df8f1d8 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -724,11 +724,11 @@ static inline int tb_port_write(struct tb_port *port,= const void *buffer, length); } =20 -#define tb_err(tb, fmt, arg...) dev_err(&(tb)->nhi->pdev->dev, fmt, ## arg) -#define tb_WARN(tb, fmt, arg...) dev_WARN(&(tb)->nhi->pdev->dev, fmt, ## a= rg) -#define tb_warn(tb, fmt, arg...) 
dev_warn(&(tb)->nhi->pdev->dev, fmt, ## a= rg) -#define tb_info(tb, fmt, arg...) dev_info(&(tb)->nhi->pdev->dev, fmt, ## a= rg) -#define tb_dbg(tb, fmt, arg...) dev_dbg(&(tb)->nhi->pdev->dev, fmt, ## arg) +#define tb_err(tb, fmt, arg...) dev_err((tb)->nhi->dev, fmt, ## arg) +#define tb_WARN(tb, fmt, arg...) dev_WARN((tb)->nhi->dev, fmt, ## arg) +#define tb_warn(tb, fmt, arg...) dev_warn((tb)->nhi->dev, fmt, ## arg) +#define tb_info(tb, fmt, arg...) dev_info((tb)->nhi->dev, fmt, ## arg) +#define tb_dbg(tb, fmt, arg...) dev_dbg((tb)->nhi->dev, fmt, ## arg) =20 #define __TB_SW_PRINT(level, sw, fmt, arg...) \ do { \ diff --git a/drivers/thunderbolt/usb4_port.c b/drivers/thunderbolt/usb4_por= t.c index c32d3516e780..890de530debc 100644 --- a/drivers/thunderbolt/usb4_port.c +++ b/drivers/thunderbolt/usb4_port.c @@ -138,7 +138,7 @@ bool usb4_usb3_port_match(struct device *usb4_port_dev, return false; =20 /* Check if USB3 fwnode references same NHI where USB4 port resides */ - if (!device_match_fwnode(&nhi->pdev->dev, nhi_fwnode)) + if (!device_match_fwnode(nhi->dev, nhi_fwnode)) return false; =20 if (fwnode_property_read_u8(usb3_port_fwnode, "usb4-port-number", &usb4_p= ort_num)) diff --git a/include/linux/thunderbolt.h b/include/linux/thunderbolt.h index 0ba112175bb3..789cd7f364e1 100644 --- a/include/linux/thunderbolt.h +++ b/include/linux/thunderbolt.h @@ -496,12 +496,11 @@ static inline struct tb_xdomain *tb_service_parent(st= ruct tb_service *svc) */ struct tb_nhi { spinlock_t lock; - struct pci_dev *pdev; + struct device *dev; const struct tb_nhi_ops *ops; void __iomem *iobase; struct tb_ring **tx_rings; struct tb_ring **rx_rings; - struct ida msix_ida; bool going_away; bool iommu_dma_protection; struct work_struct interrupt_work; @@ -681,7 +680,7 @@ void tb_ring_poll_complete(struct tb_ring *ring); */ static inline struct device *tb_ring_dma_device(struct tb_ring *ring) { - return &ring->nhi->pdev->dev; + return ring->nhi->dev; } =20 bool 
usb4_usb3_port_match(struct device *usb4_port_dev, --=20 2.53.0
From: Konrad Dybcio
Date: Mon, 09 Mar 2026 11:33:00 +0100
Subject: [PATCH RFC/RFT 2/3] thunderbolt: Separate out common NHI bits
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
Message-Id: <20260309-topic-usb4_nonpcie_prepwork-v1-2-d901d85fc794@oss.qualcomm.com>
References: <20260309-topic-usb4_nonpcie_prepwork-v1-0-d901d85fc794@oss.qualcomm.com>
In-Reply-To: <20260309-topic-usb4_nonpcie_prepwork-v1-0-d901d85fc794@oss.qualcomm.com>
To: Andreas Noever , Mika Westerberg , Yehezkel Bernat
Cc: linux-kernel@vger.kernel.org, linux-usb@vger.kernel.org, usb4-upstream@oss.qualcomm.com, Raghavendra Thoorpu , Konrad Dybcio
From: Konrad Dybcio

Add a new file encapsulating most of the PCI NHI specifics (intentionally leaving some odd cookies behind to make the layering simpler).

Most notably, separate out nhi_probe_common() to make it easier to register other types of NHIs.
Signed-off-by: Konrad Dybcio --- drivers/thunderbolt/Makefile | 2 +- drivers/thunderbolt/nhi.c | 531 +++-----------------------------------= ---- drivers/thunderbolt/nhi.h | 21 ++ drivers/thunderbolt/nhi_ops.c | 2 + drivers/thunderbolt/nhi_pci.c | 496 +++++++++++++++++++++++++++++++++++++++ drivers/thunderbolt/nhi_pci.h | 2 + 6 files changed, 554 insertions(+), 500 deletions(-) diff --git a/drivers/thunderbolt/Makefile b/drivers/thunderbolt/Makefile index b44b32dcb832..58505c7c9719 100644 --- a/drivers/thunderbolt/Makefile +++ b/drivers/thunderbolt/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0-only ccflags-y :=3D -I$(src) obj-${CONFIG_USB4} :=3D thunderbolt.o -thunderbolt-objs :=3D nhi.o nhi_ops.o ctl.o tb.o switch.o cap.o path.o tun= nel.o eeprom.o +thunderbolt-objs :=3D nhi.o nhi_pci.o nhi_ops.o ctl.o tb.o switch.o cap.o = path.o tunnel.o eeprom.o thunderbolt-objs +=3D domain.o dma_port.o icm.o property.o xdomain.o lc.o = tmu.o usb4.o thunderbolt-objs +=3D usb4_port.o nvm.o retimer.o quirks.o clx.o =20 diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c index 18710bafef20..ca832f802ee7 100644 --- a/drivers/thunderbolt/nhi.c +++ b/drivers/thunderbolt/nhi.c @@ -18,13 +18,11 @@ #include #include #include -#include #include #include #include =20 #include "nhi.h" -#include "nhi_pci.h" #include "nhi_regs.h" #include "tb.h" =20 @@ -36,19 +34,9 @@ * transferred. */ #define RING_E2E_RESERVED_HOPID RING_FIRST_USABLE_HOPID -/* - * Minimal number of vectors when we use MSI-X. Two for control channel - * Rx/Tx and the rest four are for cross domain DMA paths. 
- */ -#define MSIX_MIN_VECS 6 -#define MSIX_MAX_VECS 16 =20 #define NHI_MAILBOX_TIMEOUT 500 /* ms */ =20 -/* Host interface quirks */ -#define QUIRK_AUTO_CLEAR_INT BIT(0) -#define QUIRK_E2E BIT(1) - static bool host_reset =3D true; module_param(host_reset, bool, 0444); MODULE_PARM_DESC(host_reset, "reset USB4 host router (default: true)"); @@ -162,7 +150,7 @@ static void ring_interrupt_active(struct tb_ring *ring,= bool active) * * Use only during init and shutdown. */ -static void nhi_disable_interrupts(struct tb_nhi *nhi) +void nhi_disable_interrupts(struct tb_nhi *nhi) { int i =3D 0; /* disable interrupts */ @@ -447,7 +435,7 @@ static void ring_clear_msix(const struct tb_ring *ring) 4 * (ring->nhi->hop_count / 32)); } =20 -static irqreturn_t ring_msix(int irq, void *data) +irqreturn_t ring_msix(int irq, void *data) { struct tb_ring *ring =3D data; =20 @@ -461,54 +449,6 @@ static irqreturn_t ring_msix(int irq, void *data) return IRQ_HANDLED; } =20 -static int ring_request_msix(struct tb_ring *ring, bool no_suspend) -{ - struct tb_nhi *nhi =3D ring->nhi; - struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(nhi); - unsigned long irqflags; - int ret; - - if (!nhi_pci->pdev->msix_enabled) - return 0; - - ret =3D ida_alloc_max(&nhi_pci->msix_ida, MSIX_MAX_VECS - 1, GFP_KERNEL); - if (ret < 0) - return ret; - - ring->vector =3D ret; - - ret =3D pci_irq_vector(nhi_pci->pdev, ring->vector); - if (ret < 0) - goto err_ida_remove; - - ring->irq =3D ret; - - irqflags =3D no_suspend ? 
IRQF_NO_SUSPEND : 0; - ret =3D request_irq(ring->irq, ring_msix, irqflags, "thunderbolt", ring); - if (ret) - goto err_ida_remove; - - return 0; - -err_ida_remove: - ida_free(&nhi_pci->msix_ida, ring->vector); - - return ret; -} - -static void ring_release_msix(struct tb_ring *ring) -{ - struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(ring->nhi); - - if (ring->irq <=3D 0) - return; - - free_irq(ring->irq, ring); - ida_free(&nhi_pci->msix_ida, ring->vector); - ring->vector =3D 0; - ring->irq =3D 0; -} - static int nhi_alloc_hop(struct tb_nhi *nhi, struct tb_ring *ring) { unsigned int start_hop =3D RING_FIRST_USABLE_HOPID; @@ -923,7 +863,7 @@ enum nhi_fw_mode nhi_mailbox_mode(struct tb_nhi *nhi) return (enum nhi_fw_mode)val; } =20 -static void nhi_interrupt_work(struct work_struct *work) +void nhi_interrupt_work(struct work_struct *work) { struct tb_nhi *nhi =3D container_of(work, typeof(*nhi), interrupt_work); int value =3D 0; /* Suppress uninitialized usage warning. */ @@ -975,7 +915,7 @@ static void nhi_interrupt_work(struct work_struct *work) spin_unlock_irq(&nhi->lock); } =20 -static irqreturn_t nhi_msi(int irq, void *data) +irqreturn_t nhi_msi(int irq, void *data) { struct tb_nhi *nhi =3D data; schedule_work(&nhi->interrupt_work); @@ -984,8 +924,7 @@ static irqreturn_t nhi_msi(int irq, void *data) =20 static int __nhi_suspend_noirq(struct device *dev, bool wakeup) { - struct pci_dev *pdev =3D to_pci_dev(dev); - struct tb *tb =3D pci_get_drvdata(pdev); + struct tb *tb =3D dev_get_drvdata(dev); struct tb_nhi *nhi =3D tb->nhi; int ret; =20 @@ -1009,21 +948,19 @@ static int nhi_suspend_noirq(struct device *dev) =20 static int nhi_freeze_noirq(struct device *dev) { - struct pci_dev *pdev =3D to_pci_dev(dev); - struct tb *tb =3D pci_get_drvdata(pdev); + struct tb *tb =3D dev_get_drvdata(dev); =20 return tb_domain_freeze_noirq(tb); } =20 static int nhi_thaw_noirq(struct device *dev) { - struct pci_dev *pdev =3D to_pci_dev(dev); - struct tb *tb =3D pci_get_drvdata(pdev); + 
struct tb *tb =3D dev_get_drvdata(dev); =20 return tb_domain_thaw_noirq(tb); } =20 -static bool nhi_wake_supported(struct pci_dev *pdev) +static bool nhi_wake_supported(struct device *dev) { u8 val; =20 @@ -1031,7 +968,7 @@ static bool nhi_wake_supported(struct pci_dev *pdev) * If power rails are sustainable for wakeup from S4 this * property is set by the BIOS. */ - if (device_property_read_u8(&pdev->dev, "WAKE_SUPPORTED", &val)) + if (device_property_read_u8(dev, "WAKE_SUPPORTED", &val)) return !!val; =20 return true; @@ -1039,14 +976,13 @@ static bool nhi_wake_supported(struct pci_dev *pdev) =20 static int nhi_poweroff_noirq(struct device *dev) { - struct pci_dev *pdev =3D to_pci_dev(dev); bool wakeup; =20 - wakeup =3D device_may_wakeup(dev) && nhi_wake_supported(pdev); + wakeup =3D device_may_wakeup(dev) && nhi_wake_supported(dev); return __nhi_suspend_noirq(dev, wakeup); } =20 -static void nhi_enable_int_throttling(struct tb_nhi *nhi) +void nhi_enable_int_throttling(struct tb_nhi *nhi) { /* Throttling is specified in 256ns increments */ u32 throttle =3D DIV_ROUND_UP(128 * NSEC_PER_USEC, 256); @@ -1064,8 +1000,7 @@ static void nhi_enable_int_throttling(struct tb_nhi *= nhi) =20 static int nhi_resume_noirq(struct device *dev) { - struct pci_dev *pdev =3D to_pci_dev(dev); - struct tb *tb =3D pci_get_drvdata(pdev); + struct tb *tb =3D dev_get_drvdata(dev); struct tb_nhi *nhi =3D tb->nhi; int ret; =20 @@ -1074,7 +1009,7 @@ static int nhi_resume_noirq(struct device *dev) * unplugged last device which causes the host controller to go * away on PCs. 
*/ - if (!pci_device_is_present(pdev)) { + if (nhi->ops->is_present && !nhi->ops->is_present(nhi)) { nhi->going_away =3D true; } else { if (nhi->ops && nhi->ops->resume_noirq) { @@ -1090,32 +1025,29 @@ =20 static int nhi_suspend(struct device *dev) { - struct pci_dev *pdev =3D to_pci_dev(dev); - struct tb *tb =3D pci_get_drvdata(pdev); + struct tb *tb =3D dev_get_drvdata(dev); =20 return tb_domain_suspend(tb); } =20 static void nhi_complete(struct device *dev) { - struct pci_dev *pdev =3D to_pci_dev(dev); - struct tb *tb =3D pci_get_drvdata(pdev); + struct tb *tb =3D dev_get_drvdata(dev); =20 /* * If we were runtime suspended when system suspend started, * schedule runtime resume now. It should bring the domain back * to functional state. */ - if (pm_runtime_suspended(&pdev->dev)) - pm_runtime_resume(&pdev->dev); + if (pm_runtime_suspended(dev)) + pm_runtime_resume(dev); else tb_domain_complete(tb); } =20 static int nhi_runtime_suspend(struct device *dev) { - struct pci_dev *pdev =3D to_pci_dev(dev); - struct tb *tb =3D pci_get_drvdata(pdev); + struct tb *tb =3D dev_get_drvdata(dev); struct tb_nhi *nhi =3D tb->nhi; int ret; =20 @@ -1133,8 +1065,7 @@ static int nhi_runtime_suspend(struct device *dev) =20 static int nhi_runtime_resume(struct device *dev) { - struct pci_dev *pdev =3D to_pci_dev(dev); - struct tb *tb =3D pci_get_drvdata(pdev); + struct tb *tb =3D dev_get_drvdata(dev); struct tb_nhi *nhi =3D tb->nhi; int ret; =20 @@ -1148,9 +1079,8 @@ static int nhi_runtime_resume(struct device *dev) return tb_domain_runtime_resume(tb); } =20 -static void nhi_shutdown(struct tb_nhi *nhi) +void nhi_shutdown(struct tb_nhi *nhi) { - struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(nhi); int i; =20 dev_dbg(nhi->dev, "shutdown\n"); @@ -1164,88 +1094,11 @@ static void nhi_shutdown(struct tb_nhi *nhi) "RX ring %d is still active\n", i); } nhi_disable_interrupts(nhi); - /* - * We have to release the irq before calling flush_work.
Otherwise an - * already executing IRQ handler could call schedule_work again. - */ - if (!nhi_pci->pdev->msix_enabled) { - devm_free_irq(nhi->dev, nhi_pci->pdev->irq, nhi); - flush_work(&nhi->interrupt_work); - } - ida_destroy(&nhi_pci->msix_ida); =20 if (nhi->ops && nhi->ops->shutdown) nhi->ops->shutdown(nhi); } =20 -static void nhi_check_quirks(struct tb_nhi_pci *nhi_pci) -{ - struct tb_nhi *nhi =3D &nhi_pci->nhi; - - if (nhi_pci->pdev->vendor =3D=3D PCI_VENDOR_ID_INTEL) { - /* - * Intel hardware supports auto clear of the interrupt - * status register right after interrupt is being - * issued. - */ - nhi->quirks |=3D QUIRK_AUTO_CLEAR_INT; - - switch (nhi_pci->pdev->device) { - case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI: - case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI: - /* - * Falcon Ridge controller needs the end-to-end - * flow control workaround to avoid losing Rx - * packets when RING_FLAG_E2E is set. - */ - nhi->quirks |=3D QUIRK_E2E; - break; - } - } -} - -static int nhi_check_iommu_pci_dev(struct pci_dev *pdev, void *data) -{ - if (!pdev->external_facing || - !device_iommu_capable(&pdev->dev, IOMMU_CAP_PRE_BOOT_PROTECTION)) - return 0; - *(bool *)data =3D true; - return 1; /* Stop walking */ -} - -static void nhi_check_iommu(struct tb_nhi_pci *nhi_pci) -{ - struct pci_bus *bus =3D nhi_pci->pdev->bus; - struct tb_nhi *nhi =3D &nhi_pci->nhi; - bool port_ok =3D false; - - /* - * Ideally what we'd do here is grab every PCI device that - * represents a tunnelling adapter for this NHI and check their - * status directly, but unfortunately USB4 seems to make it - * obnoxiously difficult to reliably make any correlation. - * - * So for now we'll have to bodge it... 
Hoping that the system - * is at least sane enough that an adapter is in the same PCI - * segment as its NHI, if we can find *something* on that segment - * which meets the requirements for Kernel DMA Protection, we'll - * take that to imply that firmware is aware and has (hopefully) - * done the right thing in general. We need to know that the PCI - * layer has seen the ExternalFacingPort property which will then - * inform the IOMMU layer to enforce the complete "untrusted DMA" - * flow, but also that the IOMMU driver itself can be trusted not - * to have been subverted by a pre-boot DMA attack. - */ - while (bus->parent) - bus =3D bus->parent; - - pci_walk_bus(bus, nhi_check_iommu_pci_dev, &port_ok); - - nhi->iommu_dma_protection =3D port_ok; - dev_dbg(nhi->dev, "IOMMU DMA protection is %s\n", - str_enabled_disabled(port_ok)); -} - static void nhi_reset(struct tb_nhi *nhi) { ktime_t timeout; @@ -1277,86 +1130,6 @@ static void nhi_reset(struct tb_nhi *nhi) dev_warn(nhi->dev, "timeout resetting host router\n"); } =20 -static int nhi_init_msi(struct tb_nhi_pci *nhi_pci) -{ - struct pci_dev *pdev =3D nhi_pci->pdev; - struct tb_nhi *nhi =3D &nhi_pci->nhi; - struct device *dev =3D &pdev->dev; - int res, irq, nvec; - - ida_init(&nhi_pci->msix_ida); - - /* - * The NHI has 16 MSI-X vectors or a single MSI. We first try to - * get all MSI-X vectors and if we succeed, each ring will have - * one MSI-X. If for some reason that does not work out, we - * fallback to a single MSI. 
- */ - nvec =3D pci_alloc_irq_vectors(pdev, MSIX_MIN_VECS, MSIX_MAX_VECS, - PCI_IRQ_MSIX); - if (nvec < 0) { - nvec =3D pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI); - if (nvec < 0) - return nvec; - - INIT_WORK(&nhi->interrupt_work, nhi_interrupt_work); - - irq =3D pci_irq_vector(nhi_pci->pdev, 0); - if (irq < 0) - return irq; - - res =3D devm_request_irq(&pdev->dev, irq, nhi_msi, - IRQF_NO_SUSPEND, "thunderbolt", nhi); - if (res) - return dev_err_probe(dev, res, "request_irq failed, aborting\n"); - } - - return 0; -} - -static bool nhi_imr_valid(struct pci_dev *pdev) -{ - u8 val; - - if (!device_property_read_u8(&pdev->dev, "IMR_VALID", &val)) - return !!val; - - return true; -} - -void nhi_pci_start_dma_port(struct tb_nhi *nhi) -{ - struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(nhi); - struct pci_dev *root_port; - - /* - * During host router NVM upgrade we should not allow root port to - * go into D3cold because some root ports cannot trigger PME - * itself. To be on the safe side keep the root port in D0 during - * the whole upgrade process. 
- */ - root_port =3D pcie_find_root_port(nhi_pci->pdev); - if (root_port) - pm_runtime_get_noresume(&root_port->dev); -} - -void nhi_pci_complete_dma_port(struct tb_nhi *nhi) -{ - struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(nhi); - struct pci_dev *root_port; - - root_port =3D pcie_find_root_port(nhi_pci->pdev); - if (root_port) - pm_runtime_put(&root_port->dev); -} - -static const struct tb_nhi_ops pci_nhi_default_ops =3D { - .pre_nvm_auth =3D nhi_pci_start_dma_port, - .post_nvm_auth =3D nhi_pci_complete_dma_port, - .request_ring_irq =3D ring_request_msix, - .release_ring_irq =3D ring_release_msix, -}; - static struct tb *nhi_select_cm(struct tb_nhi *nhi) { struct tb *tb; @@ -1380,66 +1153,34 @@ static struct tb *nhi_select_cm(struct tb_nhi *nhi) return tb; } =20 -static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id) +int nhi_probe_common(struct tb_nhi *nhi) { - struct device *dev =3D &pdev->dev; - struct tb_nhi_pci *nhi_pci; - struct tb_nhi *nhi; + struct device *dev =3D nhi->dev; struct tb *tb; int res; =20 - if (!nhi_imr_valid(pdev)) - return dev_err_probe(dev, -ENODEV, "firmware image not valid, aborting\n= "); - - res =3D pcim_enable_device(pdev); - if (res) - return dev_err_probe(dev, res, "cannot enable PCI device, aborting\n"); - - nhi_pci =3D devm_kzalloc(dev, sizeof(*nhi_pci), GFP_KERNEL); - if (!nhi_pci) - return -ENOMEM; - - nhi_pci->pdev =3D pdev; - - nhi =3D &nhi_pci->nhi; - nhi->dev =3D dev; - nhi->ops =3D (const struct tb_nhi_ops *)id->driver_data ?: &pci_nhi_defau= lt_ops; - - nhi->iobase =3D pcim_iomap_region(pdev, 0, "thunderbolt"); - res =3D PTR_ERR_OR_ZERO(nhi->iobase); - if (res) - return dev_err_probe(dev, res, "cannot obtain PCI resources, aborting\n"= ); - nhi->hop_count =3D ioread32(nhi->iobase + REG_CAPS) & 0x3ff; dev_dbg(dev, "total paths: %d\n", nhi->hop_count); =20 - nhi->tx_rings =3D devm_kcalloc(&pdev->dev, nhi->hop_count, + nhi->tx_rings =3D devm_kcalloc(dev, nhi->hop_count, sizeof(*nhi->tx_rings), GFP_KERNEL); - 
nhi->rx_rings =3D devm_kcalloc(&pdev->dev, nhi->hop_count, + nhi->rx_rings =3D devm_kcalloc(dev, nhi->hop_count, sizeof(*nhi->rx_rings), GFP_KERNEL); if (!nhi->tx_rings || !nhi->rx_rings) return -ENOMEM; =20 - nhi_check_quirks(nhi_pci); - nhi_check_iommu(nhi_pci); nhi_reset(nhi); =20 /* In case someone left them on. */ nhi_disable_interrupts(nhi); nhi_enable_int_throttling(nhi); =20 - res =3D nhi_init_msi(nhi_pci); - if (res) - return dev_err_probe(dev, res, "cannot enable MSI, aborting\n"); - spin_lock_init(&nhi->lock); =20 - res =3D dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); + res =3D dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)); if (res) return dev_err_probe(dev, res, "failed to set DMA mask\n"); =20 - pci_set_master(pdev); - if (nhi->ops && nhi->ops->init) { res =3D nhi->ops->init(nhi); if (res) @@ -1463,37 +1204,24 @@ static int nhi_probe(struct pci_dev *pdev, const st= ruct pci_device_id *id) nhi_shutdown(nhi); return res; } - pci_set_drvdata(pdev, tb); + dev_set_drvdata(dev, tb); =20 - device_wakeup_enable(&pdev->dev); + device_wakeup_enable(dev); =20 - pm_runtime_allow(&pdev->dev); - pm_runtime_set_autosuspend_delay(&pdev->dev, TB_AUTOSUSPEND_DELAY); - pm_runtime_use_autosuspend(&pdev->dev); - pm_runtime_put_autosuspend(&pdev->dev); + pm_runtime_allow(dev); + pm_runtime_set_autosuspend_delay(dev, TB_AUTOSUSPEND_DELAY); + pm_runtime_use_autosuspend(dev); + pm_runtime_put_autosuspend(dev); =20 return 0; } =20 -static void nhi_remove(struct pci_dev *pdev) -{ - struct tb *tb =3D pci_get_drvdata(pdev); - struct tb_nhi *nhi =3D tb->nhi; - - pm_runtime_get_sync(&pdev->dev); - pm_runtime_dont_use_autosuspend(&pdev->dev); - pm_runtime_forbid(&pdev->dev); - - tb_domain_remove(tb); - nhi_shutdown(nhi); -} - /* * The tunneled pci bridges are siblings of us. Use resume_noirq to reenab= le * the tunnels asap. A corresponding pci quirk blocks the downstream bridg= es * resume_noirq until we are done. 
*/ -static const struct dev_pm_ops nhi_pm_ops =3D { +const struct dev_pm_ops nhi_pm_ops =3D { .suspend_noirq =3D nhi_suspend_noirq, .resume_noirq =3D nhi_resume_noirq, .freeze_noirq =3D nhi_freeze_noirq, /* @@ -1509,198 +1237,3 @@ static const struct dev_pm_ops nhi_pm_ops =3D { .runtime_suspend =3D nhi_runtime_suspend, .runtime_resume =3D nhi_runtime_resume, }; - -/* - * During suspend the Thunderbolt controller is reset and all PCIe - * tunnels are lost. The NHI driver will try to reestablish all tunnels - * during resume. This adds device links between the tunneled PCIe - * downstream ports and the NHI so that the device core will make sure - * NHI is resumed first before the rest. - */ -bool tb_apple_add_links(struct tb_nhi *nhi) -{ - struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(nhi); - struct pci_dev *upstream, *pdev; - bool ret; - - if (!x86_apple_machine) - return false; - - switch (nhi_pci->pdev->device) { - case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE: - case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C: - case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI: - case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI: - break; - default: - return false; - } - - upstream =3D pci_upstream_bridge(nhi_pci->pdev); - while (upstream) { - if (!pci_is_pcie(upstream)) - return false; - if (pci_pcie_type(upstream) =3D=3D PCI_EXP_TYPE_UPSTREAM) - break; - upstream =3D pci_upstream_bridge(upstream); - } - - if (!upstream) - return false; - - /* - * For each hotplug downstream port, create add device link - * back to NHI so that PCIe tunnels can be re-established after - * sleep. 
- */ - ret =3D false; - for_each_pci_bridge(pdev, upstream->subordinate) { - const struct device_link *link; - - if (!pci_is_pcie(pdev)) - continue; - if (pci_pcie_type(pdev) !=3D PCI_EXP_TYPE_DOWNSTREAM || - !pdev->is_pciehp) - continue; - - link =3D device_link_add(&pdev->dev, nhi->dev, - DL_FLAG_AUTOREMOVE_SUPPLIER | - DL_FLAG_PM_RUNTIME); - if (link) { - dev_dbg(nhi->dev, "created link from %s\n", - dev_name(&pdev->dev)); - ret =3D true; - } else { - dev_warn(nhi->dev, "device link creation from %s failed\n", - dev_name(&pdev->dev)); - } - } - - return ret; -} - -static struct pci_device_id nhi_ids[] =3D { - /* - * We have to specify class, the TB bridges use the same device and - * vendor (sub)id on gen 1 and gen 2 controllers. - */ - { - .class =3D PCI_CLASS_SYSTEM_OTHER << 8, .class_mask =3D ~0, - .vendor =3D PCI_VENDOR_ID_INTEL, - .device =3D PCI_DEVICE_ID_INTEL_LIGHT_RIDGE, - .subvendor =3D 0x2222, .subdevice =3D 0x1111, - }, - { - .class =3D PCI_CLASS_SYSTEM_OTHER << 8, .class_mask =3D ~0, - .vendor =3D PCI_VENDOR_ID_INTEL, - .device =3D PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C, - .subvendor =3D 0x2222, .subdevice =3D 0x1111, - }, - { - .class =3D PCI_CLASS_SYSTEM_OTHER << 8, .class_mask =3D ~0, - .vendor =3D PCI_VENDOR_ID_INTEL, - .device =3D PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI, - .subvendor =3D PCI_ANY_ID, .subdevice =3D PCI_ANY_ID, - }, - { - .class =3D PCI_CLASS_SYSTEM_OTHER << 8, .class_mask =3D ~0, - .vendor =3D PCI_VENDOR_ID_INTEL, - .device =3D PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI, - .subvendor =3D PCI_ANY_ID, .subdevice =3D PCI_ANY_ID, - }, - - /* Thunderbolt 3 */ - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_NHI) }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_NHI) }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_USBONLY_NHI) }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_NHI) }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_USBONLY_NHI) }, - { PCI_VDEVICE(INTEL, 
PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_NHI) }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_NHI) }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_USBONLY_NHI) }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_NHI) }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_NHI) }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ICL_NHI0), - .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ICL_NHI1), - .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, - /* Thunderbolt 4 */ - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_NHI0), - .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_NHI1), - .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_H_NHI0), - .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_H_NHI1), - .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADL_NHI0), - .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADL_NHI1), - .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_RPL_NHI0), - .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_RPL_NHI1), - .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_M_NHI0), - .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_P_NHI0), - .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_P_NHI1), - .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_LNL_NHI0), - .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_LNL_NHI1), - .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_M_NHI0), - .driver_data 
=3D (kernel_ulong_t)&icl_nhi_ops }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_M_NHI1), - .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_P_NHI0), - .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_P_NHI1), - .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_WCL_NHI0), - .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_80G_NHI) }, - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_40G_NHI) }, - - /* Any USB4 compliant host */ - { PCI_DEVICE_CLASS(PCI_CLASS_SERIAL_USB_USB4, ~0) }, - - { 0,} -}; - -MODULE_DEVICE_TABLE(pci, nhi_ids); -MODULE_DESCRIPTION("Thunderbolt/USB4 core driver"); -MODULE_LICENSE("GPL"); - -static struct pci_driver nhi_driver =3D { - .name =3D "thunderbolt", - .id_table =3D nhi_ids, - .probe =3D nhi_probe, - .remove =3D nhi_remove, - .shutdown =3D nhi_remove, - .driver.pm =3D &nhi_pm_ops, -}; - -static int __init nhi_init(void) -{ - int ret; - - ret =3D tb_domain_init(); - if (ret) - return ret; - ret =3D pci_register_driver(&nhi_driver); - if (ret) - tb_domain_exit(); - return ret; -} - -static void __exit nhi_unload(void) -{ - pci_unregister_driver(&nhi_driver); - tb_domain_exit(); -} - -rootfs_initcall(nhi_init); -module_exit(nhi_unload); diff --git a/drivers/thunderbolt/nhi.h b/drivers/thunderbolt/nhi.h index 5534a3f0800a..b0490a1cd463 100644 --- a/drivers/thunderbolt/nhi.h +++ b/drivers/thunderbolt/nhi.h @@ -32,6 +32,14 @@ enum nhi_fw_mode nhi_mailbox_mode(struct tb_nhi *nhi); bool tb_apple_add_links(struct tb_nhi *nhi); void nhi_pci_start_dma_port(struct tb_nhi *nhi); void nhi_pci_complete_dma_port(struct tb_nhi *nhi); +void nhi_enable_int_throttling(struct tb_nhi *nhi); +void nhi_disable_interrupts(struct tb_nhi *nhi); +void nhi_interrupt_work(struct work_struct *work); +irqreturn_t nhi_msi(int irq, void *data); +irqreturn_t 
ring_msix(int irq, void *data); +int nhi_probe_common(struct tb_nhi *nhi); +void nhi_shutdown(struct tb_nhi *nhi); +extern const struct dev_pm_ops nhi_pm_ops; =20 /** * struct tb_nhi_ops - NHI specific optional operations @@ -45,6 +53,7 @@ void nhi_pci_complete_dma_port(struct tb_nhi *nhi); * @post_nvm_auth: hook to run after TBT3 NVM authentication * @request_ring_irq: NHI specific interrupt retrieval function pointer * @release_ring_irq: NHI specific interrupt release function pointer + * @is_present: Whether the device is currently present on the parent bus */ struct tb_nhi_ops { int (*init)(struct tb_nhi *nhi); @@ -57,6 +66,7 @@ struct tb_nhi_ops { void (*post_nvm_auth)(struct tb_nhi *nhi); int (*request_ring_irq)(struct tb_ring *ring, bool no_suspend); void (*release_ring_irq)(struct tb_ring *ring); + bool (*is_present)(struct tb_nhi *nhi); }; =20 extern const struct tb_nhi_ops icl_nhi_ops; @@ -111,4 +121,15 @@ extern const struct tb_nhi_ops icl_nhi_ops; =20 #define PCI_CLASS_SERIAL_USB_USB4 0x0c0340 =20 +/* Host interface quirks */ +#define QUIRK_AUTO_CLEAR_INT BIT(0) +#define QUIRK_E2E BIT(1) + +/* + * Minimal number of vectors when we use MSI-X. Two for control channel + * Rx/Tx and the rest four are for cross domain DMA paths. 
+ */ +#define MSIX_MIN_VECS 6 +#define MSIX_MAX_VECS 16 + #endif diff --git a/drivers/thunderbolt/nhi_ops.c b/drivers/thunderbolt/nhi_ops.c index da6083f45fad..a2e79aab20a4 100644 --- a/drivers/thunderbolt/nhi_ops.c +++ b/drivers/thunderbolt/nhi_ops.c @@ -177,6 +177,8 @@ static int icl_nhi_resume(struct tb_nhi *nhi) =20 static void icl_nhi_shutdown(struct tb_nhi *nhi) { + nhi_pci_shutdown(nhi); + icl_nhi_force_power(nhi, false); } =20 diff --git a/drivers/thunderbolt/nhi_pci.c b/drivers/thunderbolt/nhi_pci.c new file mode 100644 index 000000000000..c63f37580128 --- /dev/null +++ b/drivers/thunderbolt/nhi_pci.c @@ -0,0 +1,496 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Thunderbolt driver - PCI NHI driver + * + * Copyright (c) 2014 Andreas Noever + * Copyright (C) 2018, Intel Corporation + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "nhi.h" +#include "nhi_pci.h" +#include "nhi_regs.h" +#include "tb.h" + +static void nhi_pci_check_quirks(struct tb_nhi_pci *nhi_pci) +{ + struct tb_nhi *nhi =3D &nhi_pci->nhi; + + if (nhi_pci->pdev->vendor =3D=3D PCI_VENDOR_ID_INTEL) { + /* + * Intel hardware supports auto clear of the interrupt + * status register right after interrupt is being + * issued. + */ + nhi->quirks |=3D QUIRK_AUTO_CLEAR_INT; + + switch (nhi_pci->pdev->device) { + case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI: + case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI: + /* + * Falcon Ridge controller needs the end-to-end + * flow control workaround to avoid losing Rx + * packets when RING_FLAG_E2E is set. 
+ */ + nhi->quirks |=3D QUIRK_E2E; + break; + } + } +} + +static int nhi_pci_check_iommu_pdev(struct pci_dev *pdev, void *data) +{ + if (!pdev->external_facing || + !device_iommu_capable(&pdev->dev, IOMMU_CAP_PRE_BOOT_PROTECTION)) + return 0; + *(bool *)data =3D true; + return 1; /* Stop walking */ +} + +static void nhi_pci_check_iommu(struct tb_nhi_pci *nhi_pci) +{ + struct pci_bus *bus =3D nhi_pci->pdev->bus; + struct tb_nhi *nhi =3D &nhi_pci->nhi; + bool port_ok =3D false; + + /* + * Ideally what we'd do here is grab every PCI device that + * represents a tunnelling adapter for this NHI and check their + * status directly, but unfortunately USB4 seems to make it + * obnoxiously difficult to reliably make any correlation. + * + * So for now we'll have to bodge it... Hoping that the system + * is at least sane enough that an adapter is in the same PCI + * segment as its NHI, if we can find *something* on that segment + * which meets the requirements for Kernel DMA Protection, we'll + * take that to imply that firmware is aware and has (hopefully) + * done the right thing in general. We need to know that the PCI + * layer has seen the ExternalFacingPort property which will then + * inform the IOMMU layer to enforce the complete "untrusted DMA" + * flow, but also that the IOMMU driver itself can be trusted not + * to have been subverted by a pre-boot DMA attack. + */ + while (bus->parent) + bus =3D bus->parent; + + pci_walk_bus(bus, nhi_pci_check_iommu_pdev, &port_ok); + + nhi->iommu_dma_protection =3D port_ok; + dev_dbg(nhi->dev, "IOMMU DMA protection is %s\n", + str_enabled_disabled(port_ok)); +} + +static int nhi_pci_init_msi(struct tb_nhi *nhi) +{ + struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(nhi); + struct pci_dev *pdev =3D nhi_pci->pdev; + struct device *dev =3D &pdev->dev; + int res, irq, nvec; + + ida_init(&nhi_pci->msix_ida); + + /* + * The NHI has 16 MSI-X vectors or a single MSI. 
We first try to + * get all MSI-X vectors and if we succeed, each ring will have + * one MSI-X. If for some reason that does not work out, we + * fallback to a single MSI. + */ + nvec =3D pci_alloc_irq_vectors(pdev, MSIX_MIN_VECS, MSIX_MAX_VECS, + PCI_IRQ_MSIX); + if (nvec < 0) { + nvec =3D pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI); + if (nvec < 0) + return nvec; + + INIT_WORK(&nhi->interrupt_work, nhi_interrupt_work); + + irq =3D pci_irq_vector(nhi_pci->pdev, 0); + if (irq < 0) + return irq; + + res =3D devm_request_irq(&pdev->dev, irq, nhi_msi, + IRQF_NO_SUSPEND, "thunderbolt", nhi); + if (res) + return dev_err_probe(dev, res, "request_irq failed, aborting\n"); + } + + return 0; +} + +static bool nhi_pci_imr_valid(struct pci_dev *pdev) +{ + u8 val; + + if (!device_property_read_u8(&pdev->dev, "IMR_VALID", &val)) + return !!val; + + return true; +} + +void nhi_pci_start_dma_port(struct tb_nhi *nhi) +{ + struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(nhi); + struct pci_dev *root_port; + + /* + * During host router NVM upgrade we should not allow root port to + * go into D3cold because some root ports cannot trigger PME + * itself. To be on the safe side keep the root port in D0 during + * the whole upgrade process. 
+ */ + root_port =3D pcie_find_root_port(nhi_pci->pdev); + if (root_port) + pm_runtime_get_noresume(&root_port->dev); +} + +void nhi_pci_complete_dma_port(struct tb_nhi *nhi) +{ + struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(nhi); + struct pci_dev *root_port; + + root_port =3D pcie_find_root_port(nhi_pci->pdev); + if (root_port) + pm_runtime_put(&root_port->dev); +} + +static int ring_request_msix(struct tb_ring *ring, bool no_suspend) +{ + struct tb_nhi *nhi =3D ring->nhi; + struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(nhi); + unsigned long irqflags; + int ret; + + if (!nhi_pci->pdev->msix_enabled) + return 0; + + ret =3D ida_alloc_max(&nhi_pci->msix_ida, MSIX_MAX_VECS - 1, GFP_KERNEL); + if (ret < 0) + return ret; + + ring->vector =3D ret; + + ret =3D pci_irq_vector(nhi_pci->pdev, ring->vector); + if (ret < 0) + goto err_ida_remove; + + ring->irq =3D ret; + + irqflags =3D no_suspend ? IRQF_NO_SUSPEND : 0; + ret =3D request_irq(ring->irq, ring_msix, irqflags, "thunderbolt", ring); + if (ret) + goto err_ida_remove; + + return 0; + +err_ida_remove: + ida_free(&nhi_pci->msix_ida, ring->vector); + + return ret; +} + +static void ring_release_msix(struct tb_ring *ring) +{ + struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(ring->nhi); + + if (ring->irq <=3D 0) + return; + + free_irq(ring->irq, ring); + ida_free(&nhi_pci->msix_ida, ring->vector); + ring->vector =3D 0; + ring->irq =3D 0; +} + +void nhi_pci_shutdown(struct tb_nhi *nhi) +{ + struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(nhi); + + /* + * We have to release the irq before calling flush_work. Otherwise an + * already executing IRQ handler could call schedule_work again. 
+ */ + if (!nhi_pci->pdev->msix_enabled) { + devm_free_irq(nhi->dev, nhi_pci->pdev->irq, nhi); + flush_work(&nhi->interrupt_work); + } + ida_destroy(&nhi_pci->msix_ida); +} + +static bool nhi_pci_is_present(struct tb_nhi *nhi) +{ + return pci_device_is_present(nhi_to_pci(nhi)->pdev); +} + +static const struct tb_nhi_ops pci_nhi_default_ops =3D { + .pre_nvm_auth =3D nhi_pci_start_dma_port, + .post_nvm_auth =3D nhi_pci_complete_dma_port, + .request_ring_irq =3D ring_request_msix, + .release_ring_irq =3D ring_release_msix, + .shutdown =3D nhi_pci_shutdown, + .is_present =3D nhi_pci_is_present, +}; + +static int nhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id = *id) +{ + struct device *dev =3D &pdev->dev; + struct tb_nhi_pci *nhi_pci; + struct tb_nhi *nhi; + int res; + + if (!nhi_pci_imr_valid(pdev)) + return dev_err_probe(dev, -ENODEV, "firmware image not valid, aborting\n= "); + + res =3D pcim_enable_device(pdev); + if (res) + return dev_err_probe(dev, res, "cannot enable PCI device, aborting\n"); + + nhi_pci =3D devm_kzalloc(dev, sizeof(*nhi_pci), GFP_KERNEL); + if (!nhi_pci) + return -ENOMEM; + + nhi_pci->pdev =3D pdev; + + nhi =3D &nhi_pci->nhi; + nhi->dev =3D dev; + nhi->ops =3D (const struct tb_nhi_ops *)id->driver_data ?: &pci_nhi_defau= lt_ops; + + nhi->iobase =3D pcim_iomap_region(pdev, 0, "thunderbolt"); + res =3D PTR_ERR_OR_ZERO(nhi->iobase); + if (res) + return dev_err_probe(dev, res, "cannot obtain PCI resources, aborting\n"= ); + + nhi_pci_check_quirks(nhi_pci); + nhi_pci_check_iommu(nhi_pci); + + res =3D nhi_pci_init_msi(nhi); + if (res) + return dev_err_probe(dev, res, "cannot enable MSI, aborting\n"); + + res =3D nhi_probe_common(&nhi_pci->nhi); + if (res) + return dev_err_probe(dev, res, "NHI common probe failed\n"); + + pci_set_master(pdev); + + return 0; +} + +static void nhi_pci_remove(struct pci_dev *pdev) +{ + struct tb *tb =3D pci_get_drvdata(pdev); + struct tb_nhi *nhi =3D tb->nhi; + + pm_runtime_get_sync(&pdev->dev); + 
pm_runtime_dont_use_autosuspend(&pdev->dev); + pm_runtime_forbid(&pdev->dev); + + tb_domain_remove(tb); + nhi_shutdown(nhi); +} + +/* + * During suspend the Thunderbolt controller is reset and all PCIe + * tunnels are lost. The NHI driver will try to reestablish all tunnels + * during resume. This adds device links between the tunneled PCIe + * downstream ports and the NHI so that the device core will make sure + * NHI is resumed first before the rest. + */ +bool tb_apple_add_links(struct tb_nhi *nhi) +{ + struct tb_nhi_pci *nhi_pci =3D nhi_to_pci(nhi); + struct pci_dev *upstream, *pdev; + bool ret; + + if (!x86_apple_machine) + return false; + + switch (nhi_pci->pdev->device) { + case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE: + case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C: + case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI: + case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI: + break; + default: + return false; + } + + upstream =3D pci_upstream_bridge(nhi_pci->pdev); + while (upstream) { + if (!pci_is_pcie(upstream)) + return false; + if (pci_pcie_type(upstream) =3D=3D PCI_EXP_TYPE_UPSTREAM) + break; + upstream =3D pci_upstream_bridge(upstream); + } + + if (!upstream) + return false; + + /* + * For each hotplug downstream port, create a device link + * back to NHI so that PCIe tunnels can be re-established after + * sleep. 
+ */ + ret =3D false; + for_each_pci_bridge(pdev, upstream->subordinate) { + const struct device_link *link; + + if (!pci_is_pcie(pdev)) + continue; + if (pci_pcie_type(pdev) !=3D PCI_EXP_TYPE_DOWNSTREAM || + !pdev->is_pciehp) + continue; + + link =3D device_link_add(&pdev->dev, nhi->dev, + DL_FLAG_AUTOREMOVE_SUPPLIER | + DL_FLAG_PM_RUNTIME); + if (link) { + dev_dbg(nhi->dev, "created link from %s\n", + dev_name(&pdev->dev)); + ret =3D true; + } else { + dev_warn(nhi->dev, "device link creation from %s failed\n", + dev_name(&pdev->dev)); + } + } + + return ret; +} + +static struct pci_device_id nhi_ids[] =3D { + /* + * We have to specify class, the TB bridges use the same device and + * vendor (sub)id on gen 1 and gen 2 controllers. + */ + { + .class =3D PCI_CLASS_SYSTEM_OTHER << 8, .class_mask =3D ~0, + .vendor =3D PCI_VENDOR_ID_INTEL, + .device =3D PCI_DEVICE_ID_INTEL_LIGHT_RIDGE, + .subvendor =3D 0x2222, .subdevice =3D 0x1111, + }, + { + .class =3D PCI_CLASS_SYSTEM_OTHER << 8, .class_mask =3D ~0, + .vendor =3D PCI_VENDOR_ID_INTEL, + .device =3D PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C, + .subvendor =3D 0x2222, .subdevice =3D 0x1111, + }, + { + .class =3D PCI_CLASS_SYSTEM_OTHER << 8, .class_mask =3D ~0, + .vendor =3D PCI_VENDOR_ID_INTEL, + .device =3D PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI, + .subvendor =3D PCI_ANY_ID, .subdevice =3D PCI_ANY_ID, + }, + { + .class =3D PCI_CLASS_SYSTEM_OTHER << 8, .class_mask =3D ~0, + .vendor =3D PCI_VENDOR_ID_INTEL, + .device =3D PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI, + .subvendor =3D PCI_ANY_ID, .subdevice =3D PCI_ANY_ID, + }, + + /* Thunderbolt 3 */ + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_NHI) }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_NHI) }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_USBONLY_NHI) }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_NHI) }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_USBONLY_NHI) }, + { PCI_VDEVICE(INTEL, 
PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_NHI) }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_NHI) }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_USBONLY_NHI) }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_NHI) }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_NHI) }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ICL_NHI0), + .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ICL_NHI1), + .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, + /* Thunderbolt 4 */ + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_NHI0), + .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_NHI1), + .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_H_NHI0), + .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_H_NHI1), + .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADL_NHI0), + .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADL_NHI1), + .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_RPL_NHI0), + .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_RPL_NHI1), + .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_M_NHI0), + .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_P_NHI0), + .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_P_NHI1), + .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_LNL_NHI0), + .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_LNL_NHI1), + .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_M_NHI0), + .driver_data 
=3D (kernel_ulong_t)&icl_nhi_ops }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_M_NHI1), + .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_P_NHI0), + .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_P_NHI1), + .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_WCL_NHI0), + .driver_data =3D (kernel_ulong_t)&icl_nhi_ops }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_80G_NHI) }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_40G_NHI) }, + + /* Any USB4 compliant host */ + { PCI_DEVICE_CLASS(PCI_CLASS_SERIAL_USB_USB4, ~0) }, + + { 0,} +}; + +MODULE_DEVICE_TABLE(pci, nhi_ids); +MODULE_DESCRIPTION("Thunderbolt/USB4 core driver"); +MODULE_LICENSE("GPL"); + +static struct pci_driver nhi_driver =3D { + .name =3D "thunderbolt", + .id_table =3D nhi_ids, + .probe =3D nhi_pci_probe, + .remove =3D nhi_pci_remove, + .shutdown =3D nhi_pci_remove, + .driver.pm =3D &nhi_pm_ops, +}; + +static int __init nhi_init(void) +{ + int ret; + + ret =3D tb_domain_init(); + if (ret) + return ret; + ret =3D pci_register_driver(&nhi_driver); + if (ret) + tb_domain_exit(); + return ret; +} + +static void __exit nhi_unload(void) +{ + pci_unregister_driver(&nhi_driver); + tb_domain_exit(); +} + +rootfs_initcall(nhi_init); +module_exit(nhi_unload); diff --git a/drivers/thunderbolt/nhi_pci.h b/drivers/thunderbolt/nhi_pci.h index 9f686e0512e9..6e930a13400e 100644 --- a/drivers/thunderbolt/nhi_pci.h +++ b/drivers/thunderbolt/nhi_pci.h @@ -12,6 +12,8 @@ struct tb_nhi_pci { struct tb_nhi nhi; }; =20 +void nhi_pci_shutdown(struct tb_nhi *nhi); + static inline struct tb_nhi_pci *nhi_to_pci(struct tb_nhi *nhi) { return container_of(nhi, struct tb_nhi_pci, nhi); --=20 2.53.0

From: Konrad Dybcio Date: Mon, 09 Mar 2026 11:33:01 +0100 Subject:
[PATCH RFC/RFT 3/3] thunderbolt: Add some more descriptive probe error messages Message-Id: <20260309-topic-usb4_nonpcie_prepwork-v1-3-d901d85fc794@oss.qualcomm.com> References: <20260309-topic-usb4_nonpcie_prepwork-v1-0-d901d85fc794@oss.qualcomm.com> In-Reply-To: <20260309-topic-usb4_nonpcie_prepwork-v1-0-d901d85fc794@oss.qualcomm.com> To: Andreas Noever , Mika Westerberg , Yehezkel Bernat Cc: linux-kernel@vger.kernel.org, linux-usb@vger.kernel.org, usb4-upstream@oss.qualcomm.com, Raghavendra Thoorpu , Konrad Dybcio From: Konrad Dybcio Currently there are a lot of silent error-return paths in the various places where nhi_probe() can fail. Sprinkle some prints to make it clearer where the problem is. 
Signed-off-by: Konrad Dybcio --- drivers/thunderbolt/nhi.c | 4 ++-- drivers/thunderbolt/tb.c | 7 ++++--- 2 files changed, 6 insertions(+), 5 deletions(-) diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c index ca832f802ee7..9f39a837c731 100644 --- a/drivers/thunderbolt/nhi.c +++ b/drivers/thunderbolt/nhi.c @@ -1184,7 +1184,7 @@ int nhi_probe_common(struct tb_nhi *nhi) if (nhi->ops && nhi->ops->init) { res =3D nhi->ops->init(nhi); if (res) - return res; + return dev_err_probe(dev, res, "NHI specific init failed\n"); } =20 tb =3D nhi_select_cm(nhi); @@ -1202,7 +1202,7 @@ int nhi_probe_common(struct tb_nhi *nhi) */ tb_domain_put(tb); nhi_shutdown(nhi); - return res; + return dev_err_probe(dev, res, "tb_domain_add() failed\n"); } dev_set_drvdata(dev, tb); =20 diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index 0126e38d9396..e743fb698b30 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -2990,7 +2990,8 @@ static int tb_start(struct tb *tb, bool reset) =20 tb->root_switch =3D tb_switch_alloc(tb, &tb->dev, 0); if (IS_ERR(tb->root_switch)) - return PTR_ERR(tb->root_switch); + return dev_err_probe(tb->nhi->dev, PTR_ERR(tb->root_switch), + "tb_switch_alloc() failed\n"); =20 /* * ICM firmware upgrade needs running firmware and in native @@ -3007,14 +3008,14 @@ static int tb_start(struct tb *tb, bool reset) ret =3D tb_switch_configure(tb->root_switch); if (ret) { tb_switch_put(tb->root_switch); - return ret; + return dev_err_probe(tb->nhi->dev, ret, "Couldn't configure switch\n"); } =20 /* Announce the switch to the world */ ret =3D tb_switch_add(tb->root_switch); if (ret) { tb_switch_put(tb->root_switch); - return ret; + return dev_err_probe(tb->nhi->dev, ret, "Couldn't add switch\n"); } =20 /* --=20 2.53.0
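Editor's note: the `nhi_to_pci()` helper introduced in nhi_pci.h relies on the classic `container_of()` embedding pattern that the cover text describes — the generic `tb_nhi` is embedded as a member of the bus-specific `tb_nhi_pci`, and the wrapper is recovered from a pointer to that member. A minimal userspace sketch of the idea (the struct fields here are illustrative stand-ins, not the driver's real layout):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace equivalent of the kernel's container_of(): step back from a
 * pointer to an embedded member to the enclosing structure. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct tb_nhi {
	unsigned int quirks;	/* illustrative field only */
};

struct tb_nhi_pci {
	void *pdev;		/* stand-in for struct pci_dev * */
	struct tb_nhi nhi;	/* generic part, embedded by value */
};

/* Same shape as the helper in nhi_pci.h: recover the PCI wrapper from
 * the embedded generic NHI. */
static struct tb_nhi_pci *nhi_to_pci(struct tb_nhi *nhi)
{
	return container_of(nhi, struct tb_nhi_pci, nhi);
}
```

This is why any non-PCIe controller can simply embed `tb_nhi` in its own wrapper struct and provide its own `tb_nhi_ops`, as the cover letter suggests.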
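Editor's note: patch 3/3 leans on the `dev_err_probe()` idiom, which logs the failure and returns the error code in a single statement (the kernel helper also stays quiet for `-EPROBE_DEFER`, since deferral is not a real error). A hedged userspace mock of that control flow — `mock_dev_err_probe`, `mock_probe_step`, and the errno macros are stand-ins for illustration, not the kernel API:

```c
#include <assert.h>
#include <stdio.h>

#define MOCK_ENODEV		19
#define MOCK_EPROBE_DEFER	517	/* deferral sentinel, as in the kernel */

/* Mock of the dev_err_probe() pattern: log the failure (unless it is a
 * probe deferral) and pass the error code through unchanged, so a probe
 * path can collapse "print + return err" into one statement. */
static int mock_dev_err_probe(int err, const char *msg)
{
	if (err != -MOCK_EPROBE_DEFER)
		fprintf(stderr, "probe error %d: %s\n", err, msg);
	return err;
}

/* A probe step written in the style of the patch: the error path both
 * logs and returns in one expression. */
static int mock_probe_step(int hw_ok)
{
	if (!hw_ok)
		return mock_dev_err_probe(-MOCK_ENODEV,
					  "NHI specific init failed");
	return 0;
}
```

The win in the patch is exactly this collapse: each formerly silent `return res;` becomes a one-liner that also says which step of `nhi_probe()` / `tb_start()` went wrong.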