From: Jacob Pan
To: LKML, iommu@lists.linux.dev, Jason Gunthorpe, Lu Baolu, Joerg Roedel, dmaengine@vger.kernel.org, vkoul@kernel.org
Cc: Robin Murphy, Will Deacon, David Woodhouse, Raj Ashok, "Tian, Kevin", Yi Liu, "Yu, Fenghua", Dave Jiang, Tony Luck, "Zanussi, Tom", Jacob Pan
Subject: [PATCH v2 8/8] dmaengine/idxd: Re-enable kernel workqueue under DMA API
Date: Mon, 27 Mar 2023 16:21:38 -0700
Message-Id: <20230327232138.1490712-9-jacob.jun.pan@linux.intel.com>
In-Reply-To: <20230327232138.1490712-1-jacob.jun.pan@linux.intel.com>
References: <20230327232138.1490712-1-jacob.jun.pan@linux.intel.com>

Kernel workqueues were disabled due to flawed use of kernel VA and SVA
API. Now that we have support for attaching a PASID to the device's
default domain and the ability to reserve global PASIDs from the SVA
APIs, we can re-enable kernel workqueues and use them under the DMA API.

We also use non-privileged access for in-kernel DMA to be consistent
with the IOMMU settings. Consequently, the user-privilege interrupt is
enabled so that work-completion IRQs are delivered.
Link: https://lore.kernel.org/linux-iommu/20210511194726.GP1002214@nvidia.com/
Reviewed-by: Dave Jiang
Signed-off-by: Jacob Pan
---
 drivers/dma/idxd/device.c | 30 ++++-------------------
 drivers/dma/idxd/init.c   | 51 ++++++++++++++++++++++++++++++++++++---
 drivers/dma/idxd/sysfs.c  |  7 -------
 3 files changed, 52 insertions(+), 36 deletions(-)

diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
index 6fca8fa8d3a8..f6b133d61a04 100644
--- a/drivers/dma/idxd/device.c
+++ b/drivers/dma/idxd/device.c
@@ -299,21 +299,6 @@ void idxd_wqs_unmap_portal(struct idxd_device *idxd)
 	}
 }
 
-static void __idxd_wq_set_priv_locked(struct idxd_wq *wq, int priv)
-{
-	struct idxd_device *idxd = wq->idxd;
-	union wqcfg wqcfg;
-	unsigned int offset;
-
-	offset = WQCFG_OFFSET(idxd, wq->id, WQCFG_PRIVL_IDX);
-	spin_lock(&idxd->dev_lock);
-	wqcfg.bits[WQCFG_PRIVL_IDX] = ioread32(idxd->reg_base + offset);
-	wqcfg.priv = priv;
-	wq->wqcfg->bits[WQCFG_PRIVL_IDX] = wqcfg.bits[WQCFG_PRIVL_IDX];
-	iowrite32(wqcfg.bits[WQCFG_PRIVL_IDX], idxd->reg_base + offset);
-	spin_unlock(&idxd->dev_lock);
-}
-
 static void __idxd_wq_set_pasid_locked(struct idxd_wq *wq, int pasid)
 {
 	struct idxd_device *idxd = wq->idxd;
@@ -1324,15 +1309,14 @@ int drv_enable_wq(struct idxd_wq *wq)
 	}
 
 	/*
-	 * In the event that the WQ is configurable for pasid and priv bits.
-	 * For kernel wq, the driver should setup the pasid, pasid_en, and priv bit.
-	 * However, for non-kernel wq, the driver should only set the pasid_en bit for
-	 * shared wq. A dedicated wq that is not 'kernel' type will configure pasid and
+	 * In the event that the WQ is configurable for pasid, the driver
+	 * should set up the pasid and pasid_en bits. This is true for both
+	 * kernel and user shared workqueues. There is no need to set the priv
+	 * bit, since in-kernel DMA also issues user-privileged requests.
+	 * A dedicated wq that is not 'kernel' type will configure pasid and
 	 * pasid_en later on so there is no need to setup.
 	 */
 	if (test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags)) {
-		int priv = 0;
-
 		if (wq_pasid_enabled(wq)) {
 			if (is_idxd_wq_kernel(wq) || wq_shared(wq)) {
 				u32 pasid = wq_dedicated(wq) ? idxd->pasid : 0;
@@ -1340,10 +1324,6 @@ int drv_enable_wq(struct idxd_wq *wq)
 				__idxd_wq_set_pasid_locked(wq, pasid);
 			}
 		}
-
-		if (is_idxd_wq_kernel(wq))
-			priv = 1;
-		__idxd_wq_set_priv_locked(wq, priv);
 	}
 
 	rc = 0;
diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
index e6ee267da0ff..a3396e1b38f1 100644
--- a/drivers/dma/idxd/init.c
+++ b/drivers/dma/idxd/init.c
@@ -506,14 +506,56 @@ static struct idxd_device *idxd_alloc(struct pci_dev *pdev, struct idxd_driver_d
 
 static int idxd_enable_system_pasid(struct idxd_device *idxd)
 {
-	return -EOPNOTSUPP;
+	struct pci_dev *pdev = idxd->pdev;
+	struct device *dev = &pdev->dev;
+	struct iommu_domain *domain;
+	union gencfg_reg gencfg;
+	ioasid_t pasid;
+	int ret;
+
+	/*
+	 * Attach a global PASID to the DMA domain so that we can use ENQCMDS
+	 * to submit work on buffers mapped by DMA API.
+	 */
+	domain = iommu_get_dma_domain(dev);
+	if (!domain)
+		return -EPERM;
+
+	pasid = iommu_sva_reserve_pasid(1, dev->iommu->max_pasids);
+	if (pasid == IOMMU_PASID_INVALID)
+		return -ENOSPC;
+
+	ret = iommu_attach_device_pasid(domain, dev, pasid);
+	if (ret) {
+		dev_err(dev, "failed to attach device pasid %d, domain type %d\n",
+			pasid, domain->type);
+		iommu_sva_release_pasid(pasid);
+		return ret;
+	}
+
+	/* Since we set user privilege for kernel DMA, enable completion IRQ */
+	gencfg.bits = ioread32(idxd->reg_base + IDXD_GENCFG_OFFSET);
+	gencfg.user_int_en = 1;
+	iowrite32(gencfg.bits, idxd->reg_base + IDXD_GENCFG_OFFSET);
+	idxd->pasid = pasid;
+
+	return ret;
 }
 
 static void idxd_disable_system_pasid(struct idxd_device *idxd)
 {
+	struct pci_dev *pdev = idxd->pdev;
+	struct device *dev = &pdev->dev;
+	struct iommu_domain *domain;
+
+	domain = iommu_get_domain_for_dev(dev);
+	if (!domain || domain->type == IOMMU_DOMAIN_BLOCKED)
+		return;
 
-	iommu_sva_unbind_device(idxd->sva);
+	iommu_detach_device_pasid(domain, dev, idxd->pasid);
+	iommu_sva_release_pasid(idxd->pasid);
 	idxd->sva = NULL;
+	idxd->pasid = IOMMU_PASID_INVALID;
 }
 
 static int idxd_probe(struct idxd_device *idxd)
@@ -535,8 +577,9 @@ static int idxd_probe(struct idxd_device *idxd)
 	} else {
 		set_bit(IDXD_FLAG_USER_PASID_ENABLED, &idxd->flags);
 
-		if (idxd_enable_system_pasid(idxd))
-			dev_warn(dev, "No in-kernel DMA with PASID.\n");
+		rc = idxd_enable_system_pasid(idxd);
+		if (rc)
+			dev_warn(dev, "No in-kernel DMA with PASID. %d\n", rc);
 		else
 			set_bit(IDXD_FLAG_PASID_ENABLED, &idxd->flags);
 	}
diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c
index 18cd8151dee0..c5561c00a503 100644
--- a/drivers/dma/idxd/sysfs.c
+++ b/drivers/dma/idxd/sysfs.c
@@ -944,13 +944,6 @@ static ssize_t wq_name_store(struct device *dev,
 	if (strlen(buf) > WQ_NAME_SIZE || strlen(buf) == 0)
 		return -EINVAL;
 
-	/*
-	 * This is temporarily placed here until we have SVM support for
-	 * dmaengine.
-	 */
-	if (wq->type == IDXD_WQT_KERNEL && device_pasid_enabled(wq->idxd))
-		return -EOPNOTSUPP;
-
 	input = kstrndup(buf, count, GFP_KERNEL);
 	if (!input)
 		return -ENOMEM;
-- 
2.25.1