From: huaqian.li@siemens.com
To: christophe.jaillet@wanadoo.fr
Cc: baocheng.su@siemens.com, bhelgaas@google.com, conor+dt@kernel.org,
	devicetree@vger.kernel.org, diogo.ivo@siemens.com, helgaas@kernel.org,
	huaqian.li@siemens.com, jan.kiszka@siemens.com, kristo@kernel.org,
	krzk+dt@kernel.org, kw@linux.com, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org,
	lpieralisi@kernel.org, nm@ti.com, robh@kernel.org, s-vadapalli@ti.com,
	ssantosh@kernel.org, vigneshr@ti.com
Subject: [PATCH v11 4/7] PCI: keystone: Add support for PVU-based DMA isolation on AM654
Date: Wed, 23 Jul 2025 11:45:18 +0800
Message-Id: <20250723034521.138695-5-huaqian.li@siemens.com>
In-Reply-To: <20250723034521.138695-1-huaqian.li@siemens.com>
References: <20250723034521.138695-1-huaqian.li@siemens.com>

From: Jan Kiszka

The AM654 lacks an IOMMU and thus cannot use one to isolate DMA
requests from untrusted PCI devices to selected memory regions. Use
static PVU-based protection instead.

The PVU, when enabled, only accepts DMA requests that address
previously configured regions. Use the availability of a
restricted-dma-pool memory region as the trigger and register that
region as a valid DMA target with the PVU. In addition, enable the
mapping of requester IDs to VirtIDs in the PCI RC. Use only a single
VirtID so far, catching all devices.

Signed-off-by: Jan Kiszka
Acked-by: Bjorn Helgaas
Signed-off-by: Li Hua Qian
Reviewed-by: Siddharth Vadapalli
---
 drivers/pci/controller/dwc/pci-keystone.c | 118 +++++++++++++++++++++-
 1 file changed, 115 insertions(+), 3 deletions(-)

diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c
index 2b2632e513b5..c4a0947b8a00 100644
--- a/drivers/pci/controller/dwc/pci-keystone.c
+++ b/drivers/pci/controller/dwc/pci-keystone.c
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -26,6 +27,7 @@
 #include
 #include
 #include
+#include
 
 #include "../../pci.h"
 #include "pcie-designware.h"
@@ -111,6 +113,16 @@
 
 #define PCI_DEVICE_ID_TI_AM654X		0xb00c
 
+#define KS_PCI_VIRTID			0
+
+#define PCIE_VMAP_xP_CTRL		0x0
+#define PCIE_VMAP_xP_REQID		0x4
+#define PCIE_VMAP_xP_VIRTID		0x8
+
+#define PCIE_VMAP_xP_CTRL_EN		BIT(0)
+
+#define PCIE_VMAP_xP_VIRTID_VID_MASK	0xfff
+
 struct ks_pcie_of_data {
 	enum dw_pcie_device_mode mode;
 	const struct dw_pcie_host_ops *host_ops;
@@ -1136,6 +1148,94 @@ static const struct of_device_id ks_pcie_of_match[] = {
 	{ },
 };
 
+static int ks_init_vmap(struct platform_device *pdev, const char *vmap_name)
+{
+	struct resource *res;
+	void __iomem *base;
+	u32 val;
+
+	if (!IS_ENABLED(CONFIG_TI_PVU))
+		return 0;
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, vmap_name);
+	base = devm_pci_remap_cfg_resource(&pdev->dev, res);
+	if (IS_ERR(base))
+		return PTR_ERR(base);
+
+	writel(0, base + PCIE_VMAP_xP_REQID);
+
+	val = readl(base + PCIE_VMAP_xP_VIRTID);
+	val &= ~PCIE_VMAP_xP_VIRTID_VID_MASK;
+	val |= KS_PCI_VIRTID;
+	writel(val, base + PCIE_VMAP_xP_VIRTID);
+
+	val = readl(base + PCIE_VMAP_xP_CTRL);
+	val |= PCIE_VMAP_xP_CTRL_EN;
+	writel(val, base + PCIE_VMAP_xP_CTRL);
+
+	return 0;
+}
+
+static int ks_init_restricted_dma(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct of_phandle_iterator it;
+	struct resource phys;
+	int err;
+
+	if (!IS_ENABLED(CONFIG_TI_PVU))
+		return 0;
+
+	/* Only process the first restricted DMA pool, more are not allowed */
+	of_for_each_phandle(&it, err, dev->of_node, "memory-region",
+			    NULL, 0) {
+		if (of_device_is_compatible(it.node, "restricted-dma-pool"))
+			break;
+	}
+	if (err)
+		return err == -ENOENT ? 0 : err;
+
+	err = of_address_to_resource(it.node, 0, &phys);
+	if (err < 0) {
+		dev_err(dev, "failed to parse memory region %pOF: %d\n",
+			it.node, err);
+		return 0;
+	}
+
+	/* Map all incoming requests on low and high prio port to virtID 0 */
+	err = ks_init_vmap(pdev, "vmap_lp");
+	if (err)
+		return err;
+	err = ks_init_vmap(pdev, "vmap_hp");
+	if (err)
+		return err;
+
+	/*
+	 * Enforce DMA pool usage with the help of the PVU.
+	 * Any request outside will be dropped and raise an error at the PVU.
+	 */
+	return ti_pvu_create_region(KS_PCI_VIRTID, &phys);
+}
+
+static void ks_release_restricted_dma(struct platform_device *pdev)
+{
+	struct of_phandle_iterator it;
+	struct resource phys;
+	int err;
+
+	if (!IS_ENABLED(CONFIG_TI_PVU))
+		return;
+
+	of_for_each_phandle(&it, err, pdev->dev.of_node, "memory-region",
+			    NULL, 0) {
+		if (of_device_is_compatible(it.node, "restricted-dma-pool") &&
+		    of_address_to_resource(it.node, 0, &phys) == 0) {
+			ti_pvu_remove_region(KS_PCI_VIRTID, &phys);
+			break;
+		}
+	}
+}
+
 static int ks_pcie_probe(struct platform_device *pdev)
 {
 	const struct dw_pcie_host_ops *host_ops;
@@ -1286,15 +1386,19 @@ static int ks_pcie_probe(struct platform_device *pdev)
 
 	switch (mode) {
 	case DW_PCIE_RC_TYPE:
+		ret = ks_init_restricted_dma(pdev);
+		if (ret < 0)
+			goto err_get_sync;
+
 		if (!IS_ENABLED(CONFIG_PCI_KEYSTONE_HOST)) {
 			ret = -ENODEV;
-			goto err_get_sync;
+			goto err_dma_cleanup;
 		}
 
 		ret = of_property_read_u32(np, "num-viewport", &num_viewport);
 		if (ret < 0) {
 			dev_err(dev, "unable to read *num-viewport* property\n");
-			goto err_get_sync;
+			goto err_dma_cleanup;
 		}
 
 		/*
@@ -1314,7 +1418,7 @@ static int ks_pcie_probe(struct platform_device *pdev)
 		pci->pp.ops = host_ops;
 		ret = dw_pcie_host_init(&pci->pp);
 		if (ret < 0)
-			goto err_get_sync;
+			goto err_dma_cleanup;
 		break;
 	case DW_PCIE_EP_TYPE:
 		if (!IS_ENABLED(CONFIG_PCI_KEYSTONE_EP)) {
@@ -1346,6 +1450,9 @@ static int ks_pcie_probe(struct platform_device *pdev)
 
 err_ep_init:
 	dw_pcie_ep_deinit(&pci->ep);
+err_dma_cleanup:
+	if (mode == DW_PCIE_RC_TYPE)
+		ks_release_restricted_dma(pdev);
 err_get_sync:
 	pm_runtime_put(dev);
 	pm_runtime_disable(dev);
@@ -1362,9 +1469,14 @@ static void ks_pcie_remove(struct platform_device *pdev)
 {
 	struct keystone_pcie *ks_pcie = platform_get_drvdata(pdev);
 	struct device_link **link = ks_pcie->link;
+	const struct ks_pcie_of_data *data;
 	int num_lanes = ks_pcie->num_lanes;
 	struct device *dev = &pdev->dev;
 
+	data = of_device_get_match_data(dev);
+	if (data && data->mode == DW_PCIE_RC_TYPE)
+		ks_release_restricted_dma(pdev);
+
 	pm_runtime_put(dev);
 	pm_runtime_disable(dev);
 	ks_pcie_disable_phy(ks_pcie);
-- 
2.34.1
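
For reference, the code above only takes effect when the devicetree supplies
the two things it looks for: a reserved-memory node with compatible
"restricted-dma-pool" referenced from the RC's "memory-region" property, and
register ranges named "vmap_lp" and "vmap_hp" for ks_init_vmap() to program.
The fragment below is only an illustrative sketch of that wiring; the node
label, the pre-existing reg-names entries, and all addresses/sizes are
placeholder assumptions, not taken from this patch or the real AM654
devicetree.

/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		/* Placeholder address/size; any suitable DDR carve-out works */
		pcie_ddr: restricted-dma@9c000000 {
			compatible = "restricted-dma-pool";
			reg = <0x00 0x9c000000 0x00 0x04000000>;
		};
	};
};

&pcie0_rc {
	/*
	 * Hypothetical: the RC's existing reg/reg-names entries, extended by
	 * the two VMAP windows that ks_init_vmap() looks up by name.
	 */
	reg-names = "app", "dbics", "config", "atu", "vmap_lp", "vmap_hp";

	/* Presence of this region is what triggers ks_init_restricted_dma() */
	memory-region = <&pcie_ddr>;
};

With such a node in place, DMA from PCI devices is bounced through the
restricted pool by the core restricted-DMA/swiotlb support, while the PVU
region created via ti_pvu_create_region() drops any request that targets
memory outside that pool.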