From nobody Fri Nov 29 23:43:56 2024
From: Joel Granados via B4 Relay
Date: Fri, 13 Sep 2024 13:44:47 +0200
Subject: [PATCH v2 1/5] iommu/vt-d: Separate page request queue from SVM
Message-Id: <20240913-jag-iopfv8-v2-1-dea01c2343bc@samsung.com>
References: <20240913-jag-iopfv8-v2-0-dea01c2343bc@samsung.com>
In-Reply-To: <20240913-jag-iopfv8-v2-0-dea01c2343bc@samsung.com>
To: David Woodhouse , Lu Baolu , Joerg Roedel , Will Deacon , Robin Murphy , Jason Gunthorpe , Kevin Tian , Klaus Jensen
Cc: linux-kernel@vger.kernel.org, iommu@lists.linux.dev, Joel Granados
Reply-To: j.granados@samsung.com
From: Joel Granados

IO page faults are no longer dependent on CONFIG_INTEL_IOMMU_SVM. Move all Page Request Queue (PRQ) functions that handle PRQ events to a new file in drivers/iommu/intel/prq.c. The page_req_dsc struct is now declared in drivers/iommu/intel/prq.c. No functional changes are intended. This is a preparation patch to enable the use of IO page faults outside the SVM/PASID use cases.

Signed-off-by: Joel Granados
---
 drivers/iommu/intel/Makefile | 2 +- drivers/iommu/intel/iommu.c | 18 +- drivers/iommu/intel/iommu.h | 14 +- drivers/iommu/intel/prq.c | 410 +++++++++++++++++++++++++++++++++++++++= ++++ drivers/iommu/intel/svm.c | 397 ---------------------------------------= -- 5 files changed, 423 insertions(+), 418 deletions(-) diff --git a/drivers/iommu/intel/Makefile b/drivers/iommu/intel/Makefile index c8beb0281559..d3bb0798092d 100644 --- a/drivers/iommu/intel/Makefile +++ b/drivers/iommu/intel/Makefile @@ -1,6 +1,6 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_DMAR_TABLE) +=3D dmar.o -obj-$(CONFIG_INTEL_IOMMU) +=3D iommu.o pasid.o nested.o cache.o +obj-$(CONFIG_INTEL_IOMMU) +=3D iommu.o pasid.o nested.o cache.o prq.o obj-$(CONFIG_DMAR_TABLE) +=3D trace.o cap_audit.o obj-$(CONFIG_DMAR_PERF) +=3D perf.o obj-$(CONFIG_INTEL_IOMMU_DEBUGFS) +=3D debugfs.o diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c index 4aa070cf56e7..5acc52c62e8c 100644 --- a/drivers/iommu/intel/iommu.c +++ b/drivers/iommu/intel/iommu.c @@ -1487,12 +1487,10 @@ static void free_dmar_iommu(struct intel_iommu *iom= mu) /* free context mapping */ free_context_table(iommu); =20 -#ifdef CONFIG_INTEL_IOMMU_SVM if (pasid_supported(iommu)) { if (ecap_prs(iommu->ecap)) - intel_svm_finish_prq(iommu); + intel_finish_prq(iommu); } -#endif } =20 /* @@ -2482,19 +2480,18 @@ static int __init init_dmars(void) =20 iommu_flush_write_buffer(iommu); =20 -#ifdef CONFIG_INTEL_IOMMU_SVM if (pasid_supported(iommu) && ecap_prs(iommu->ecap)) { /* * Call dmar_alloc_hwirq() with dmar_global_lock held, * could cause possible lock race condition.
*/ up_write(&dmar_global_lock); - ret =3D intel_svm_enable_prq(iommu); + ret =3D intel_enable_prq(iommu); down_write(&dmar_global_lock); if (ret) goto free_iommu; } -#endif + ret =3D dmar_set_interrupt(iommu); if (ret) goto free_iommu; @@ -2924,13 +2921,12 @@ static int intel_iommu_add(struct dmar_drhd_unit *d= maru) intel_iommu_init_qi(iommu); iommu_flush_write_buffer(iommu); =20 -#ifdef CONFIG_INTEL_IOMMU_SVM if (pasid_supported(iommu) && ecap_prs(iommu->ecap)) { - ret =3D intel_svm_enable_prq(iommu); + ret =3D intel_enable_prq(iommu); if (ret) goto disable_iommu; } -#endif + ret =3D dmar_set_interrupt(iommu); if (ret) goto disable_iommu; @@ -4673,9 +4669,7 @@ const struct iommu_ops intel_iommu_ops =3D { .def_domain_type =3D device_def_domain_type, .remove_dev_pasid =3D intel_iommu_remove_dev_pasid, .pgsize_bitmap =3D SZ_4K, -#ifdef CONFIG_INTEL_IOMMU_SVM - .page_response =3D intel_svm_page_response, -#endif + .page_response =3D intel_page_response, .default_domain_ops =3D &(const struct iommu_domain_ops) { .attach_dev =3D intel_iommu_attach_device, .set_dev_pasid =3D intel_iommu_set_dev_pasid, diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h index a969be2258b1..3bce514e1d88 100644 --- a/drivers/iommu/intel/iommu.h +++ b/drivers/iommu/intel/iommu.h @@ -719,12 +719,10 @@ struct intel_iommu { =20 struct iommu_flush flush; #endif -#ifdef CONFIG_INTEL_IOMMU_SVM struct page_req_dsc *prq; unsigned char prq_name[16]; /* Name for PRQ interrupt */ unsigned long prq_seq_number; struct completion prq_complete; -#endif struct iopf_queue *iopf_queue; unsigned char iopfq_name[16]; /* Synchronization between fault report and iommu device release. */ @@ -1156,18 +1154,18 @@ void intel_context_flush_present(struct device_doma= in_info *info, struct context_entry *context, u16 did, bool affect_domains); =20 +int intel_enable_prq(struct intel_iommu *iommu); +int intel_finish_prq(struct intel_iommu *iommu); +void intel_page_response(struct device *dev, struct iopf_fault *evt, + struct iommu_page_response *msg); +void intel_drain_pasid_prq(struct device *dev, u32 pasid); + #ifdef CONFIG_INTEL_IOMMU_SVM void intel_svm_check(struct intel_iommu *iommu); -int intel_svm_enable_prq(struct intel_iommu *iommu); -int intel_svm_finish_prq(struct intel_iommu *iommu); -void intel_svm_page_response(struct device *dev, struct iopf_fault *evt, - struct iommu_page_response *msg); struct iommu_domain *intel_svm_domain_alloc(struct device *dev, struct mm_struct *mm); -void intel_drain_pasid_prq(struct device *dev, u32 pasid); #else static inline void intel_svm_check(struct intel_iommu *iommu) {} -static inline void intel_drain_pasid_prq(struct device *dev, u32 pasid) {} static inline struct iommu_domain *intel_svm_domain_alloc(struct device *d= ev, struct mm_struct *mm) { diff --git a/drivers/iommu/intel/prq.c b/drivers/iommu/intel/prq.c new file mode 100644 index 000000000000..3376f60082b5 --- /dev/null +++ b/drivers/iommu/intel/prq.c @@ -0,0 +1,410 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright =C2=A9 2015 Intel Corporation. 
+ * + * Originally split from drivers/iommu/intel/svm.c + */ + +#include +#include + +#include "iommu.h" +#include "../iommu-pages.h" +#include "trace.h" + +/* Page request queue descriptor */ +struct page_req_dsc { + union { + struct { + u64 type:8; + u64 pasid_present:1; + u64 rsvd:7; + u64 rid:16; + u64 pasid:20; + u64 exe_req:1; + u64 pm_req:1; + u64 rsvd2:10; + }; + u64 qw_0; + }; + union { + struct { + u64 rd_req:1; + u64 wr_req:1; + u64 lpig:1; + u64 prg_index:9; + u64 addr:52; + }; + u64 qw_1; + }; + u64 qw_2; + u64 qw_3; +}; + +/** + * intel_drain_pasid_prq - Drain page requests and responses for a pasid + * @dev: target device + * @pasid: pasid for draining + * + * Drain all pending page requests and responses related to @pasid in both + * software and hardware. This is supposed to be called after the device + * driver has stopped DMA, the pasid entry has been cleared, and both IOTLB + * and DevTLB have been invalidated. + * + * It waits until all pending page requests for @pasid in the page fault + * queue are completed by the prq handling thread. Then follow the steps + * described in VT-d spec CH7.10 to drain all page requests and page + * responses pending in the hardware. + */ +void intel_drain_pasid_prq(struct device *dev, u32 pasid) +{ + struct device_domain_info *info; + struct dmar_domain *domain; + struct intel_iommu *iommu; + struct qi_desc desc[3]; + struct pci_dev *pdev; + int head, tail; + u16 sid, did; + int qdep; + + info =3D dev_iommu_priv_get(dev); + if (WARN_ON(!info || !dev_is_pci(dev))) + return; + + if (!info->pri_enabled) + return; + + iommu =3D info->iommu; + domain =3D info->domain; + pdev =3D to_pci_dev(dev); + sid =3D PCI_DEVID(info->bus, info->devfn); + did =3D domain_id_iommu(domain, iommu); + qdep =3D pci_ats_queue_depth(pdev); + + /* + * Check and wait until all pending page requests in the queue are + * handled by the prq handling thread. + */ +prq_retry: + reinit_completion(&iommu->prq_complete); + tail =3D dmar_readq(iommu->reg + DMAR_PQT_REG) & PRQ_RING_MASK; + head =3D dmar_readq(iommu->reg + DMAR_PQH_REG) & PRQ_RING_MASK; + while (head !=3D tail) { + struct page_req_dsc *req; + + req =3D &iommu->prq[head / sizeof(*req)]; + if (!req->pasid_present || req->pasid !=3D pasid) { + head =3D (head + sizeof(*req)) & PRQ_RING_MASK; + continue; + } + + wait_for_completion(&iommu->prq_complete); + goto prq_retry; + } + + iopf_queue_flush_dev(dev); + + /* + * Perform steps described in VT-d spec CH7.10 to drain page + * requests and responses in hardware. 
+ */ + memset(desc, 0, sizeof(desc)); + desc[0].qw0 =3D QI_IWD_STATUS_DATA(QI_DONE) | + QI_IWD_FENCE | + QI_IWD_TYPE; + desc[1].qw0 =3D QI_EIOTLB_PASID(pasid) | + QI_EIOTLB_DID(did) | + QI_EIOTLB_GRAN(QI_GRAN_NONG_PASID) | + QI_EIOTLB_TYPE; + desc[2].qw0 =3D QI_DEV_EIOTLB_PASID(pasid) | + QI_DEV_EIOTLB_SID(sid) | + QI_DEV_EIOTLB_QDEP(qdep) | + QI_DEIOTLB_TYPE | + QI_DEV_IOTLB_PFSID(info->pfsid); +qi_retry: + reinit_completion(&iommu->prq_complete); + qi_submit_sync(iommu, desc, 3, QI_OPT_WAIT_DRAIN); + if (readl(iommu->reg + DMAR_PRS_REG) & DMA_PRS_PRO) { + wait_for_completion(&iommu->prq_complete); + goto qi_retry; + } +} + + +static bool is_canonical_address(u64 addr) +{ + int shift =3D 64 - (__VIRTUAL_MASK_SHIFT + 1); + long saddr =3D (long) addr; + + return (((saddr << shift) >> shift) =3D=3D saddr); +} + +static void handle_bad_prq_event(struct intel_iommu *iommu, + struct page_req_dsc *req, int result) +{ + struct qi_desc desc =3D { }; + + pr_err("%s: Invalid page request: %08llx %08llx\n", + iommu->name, ((unsigned long long *)req)[0], + ((unsigned long long *)req)[1]); + + if (!req->lpig) + return; + + desc.qw0 =3D QI_PGRP_PASID(req->pasid) | + QI_PGRP_DID(req->rid) | + QI_PGRP_PASID_P(req->pasid_present) | + QI_PGRP_RESP_CODE(result) | + QI_PGRP_RESP_TYPE; + desc.qw1 =3D QI_PGRP_IDX(req->prg_index) | + QI_PGRP_LPIG(req->lpig); + + qi_submit_sync(iommu, &desc, 1, 0); +} + +static int prq_to_iommu_prot(struct page_req_dsc *req) +{ + int prot =3D 0; + + if (req->rd_req) + prot |=3D IOMMU_FAULT_PERM_READ; + if (req->wr_req) + prot |=3D IOMMU_FAULT_PERM_WRITE; + if (req->exe_req) + prot |=3D IOMMU_FAULT_PERM_EXEC; + if (req->pm_req) + prot |=3D IOMMU_FAULT_PERM_PRIV; + + return prot; +} + +static void intel_prq_report(struct intel_iommu *iommu, struct device *dev, + struct page_req_dsc *desc) +{ + struct iopf_fault event =3D { }; + + /* Fill in event data for device specific processing */ + event.fault.type =3D IOMMU_FAULT_PAGE_REQ; + event.fault.prm.addr =3D (u64)desc->addr << VTD_PAGE_SHIFT; + event.fault.prm.pasid =3D desc->pasid; + event.fault.prm.grpid =3D desc->prg_index; + event.fault.prm.perm =3D prq_to_iommu_prot(desc); + + if (desc->lpig) + event.fault.prm.flags |=3D IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE; + if (desc->pasid_present) { + event.fault.prm.flags |=3D IOMMU_FAULT_PAGE_REQUEST_PASID_VALID; + event.fault.prm.flags |=3D IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID; + } + + iommu_report_device_fault(dev, &event); +} + +static irqreturn_t prq_event_thread(int irq, void *d) +{ + struct intel_iommu *iommu =3D d; + struct page_req_dsc *req; + int head, tail, handled; + struct device *dev; + u64 address; + + /* + * Clear PPR bit before reading head/tail registers, to ensure that + * we get a new interrupt if needed. 
+ */ + writel(DMA_PRS_PPR, iommu->reg + DMAR_PRS_REG); + + tail =3D dmar_readq(iommu->reg + DMAR_PQT_REG) & PRQ_RING_MASK; + head =3D dmar_readq(iommu->reg + DMAR_PQH_REG) & PRQ_RING_MASK; + handled =3D (head !=3D tail); + while (head !=3D tail) { + req =3D &iommu->prq[head / sizeof(*req)]; + address =3D (u64)req->addr << VTD_PAGE_SHIFT; + + if (unlikely(!req->pasid_present)) { + pr_err("IOMMU: %s: Page request without PASID\n", + iommu->name); +bad_req: + handle_bad_prq_event(iommu, req, QI_RESP_INVALID); + goto prq_advance; + } + + if (unlikely(!is_canonical_address(address))) { + pr_err("IOMMU: %s: Address is not canonical\n", + iommu->name); + goto bad_req; + } + + if (unlikely(req->pm_req && (req->rd_req | req->wr_req))) { + pr_err("IOMMU: %s: Page request in Privilege Mode\n", + iommu->name); + goto bad_req; + } + + if (unlikely(req->exe_req && req->rd_req)) { + pr_err("IOMMU: %s: Execution request not supported\n", + iommu->name); + goto bad_req; + } + + /* Drop Stop Marker message. No need for a response. */ + if (unlikely(req->lpig && !req->rd_req && !req->wr_req)) + goto prq_advance; + + /* + * If prq is to be handled outside iommu driver via receiver of + * the fault notifiers, we skip the page response here. + */ + mutex_lock(&iommu->iopf_lock); + dev =3D device_rbtree_find(iommu, req->rid); + if (!dev) { + mutex_unlock(&iommu->iopf_lock); + goto bad_req; + } + + intel_prq_report(iommu, dev, req); + trace_prq_report(iommu, dev, req->qw_0, req->qw_1, + req->qw_2, req->qw_3, + iommu->prq_seq_number++); + mutex_unlock(&iommu->iopf_lock); +prq_advance: + head =3D (head + sizeof(*req)) & PRQ_RING_MASK; + } + + dmar_writeq(iommu->reg + DMAR_PQH_REG, tail); + + /* + * Clear the page request overflow bit and wake up all threads that + * are waiting for the completion of this handling. 
+ */ + if (readl(iommu->reg + DMAR_PRS_REG) & DMA_PRS_PRO) { + pr_info_ratelimited("IOMMU: %s: PRQ overflow detected\n", + iommu->name); + head =3D dmar_readq(iommu->reg + DMAR_PQH_REG) & PRQ_RING_MASK; + tail =3D dmar_readq(iommu->reg + DMAR_PQT_REG) & PRQ_RING_MASK; + if (head =3D=3D tail) { + iopf_queue_discard_partial(iommu->iopf_queue); + writel(DMA_PRS_PRO, iommu->reg + DMAR_PRS_REG); + pr_info_ratelimited("IOMMU: %s: PRQ overflow cleared", + iommu->name); + } + } + + if (!completion_done(&iommu->prq_complete)) + complete(&iommu->prq_complete); + + return IRQ_RETVAL(handled); +} + +int intel_enable_prq(struct intel_iommu *iommu) +{ + struct iopf_queue *iopfq; + int irq, ret; + + iommu->prq =3D iommu_alloc_pages_node(iommu->node, GFP_KERNEL, PRQ_ORDER); + if (!iommu->prq) { + pr_warn("IOMMU: %s: Failed to allocate page request queue\n", + iommu->name); + return -ENOMEM; + } + + irq =3D dmar_alloc_hwirq(IOMMU_IRQ_ID_OFFSET_PRQ + iommu->seq_id, iommu->= node, iommu); + if (irq <=3D 0) { + pr_err("IOMMU: %s: Failed to create IRQ vector for page request queue\n", + iommu->name); + ret =3D -EINVAL; + goto free_prq; + } + iommu->pr_irq =3D irq; + + snprintf(iommu->iopfq_name, sizeof(iommu->iopfq_name), + "dmar%d-iopfq", iommu->seq_id); + iopfq =3D iopf_queue_alloc(iommu->iopfq_name); + if (!iopfq) { + pr_err("IOMMU: %s: Failed to allocate iopf queue\n", iommu->name); + ret =3D -ENOMEM; + goto free_hwirq; + } + iommu->iopf_queue =3D iopfq; + + snprintf(iommu->prq_name, sizeof(iommu->prq_name), "dmar%d-prq", iommu->s= eq_id); + + ret =3D request_threaded_irq(irq, NULL, prq_event_thread, IRQF_ONESHOT, + iommu->prq_name, iommu); + if (ret) { + pr_err("IOMMU: %s: Failed to request IRQ for page request queue\n", + iommu->name); + goto free_iopfq; + } + dmar_writeq(iommu->reg + DMAR_PQH_REG, 0ULL); + dmar_writeq(iommu->reg + DMAR_PQT_REG, 0ULL); + dmar_writeq(iommu->reg + DMAR_PQA_REG, virt_to_phys(iommu->prq) | PRQ_ORD= ER); + + init_completion(&iommu->prq_complete); + + return 0; + +free_iopfq: + iopf_queue_free(iommu->iopf_queue); + iommu->iopf_queue =3D NULL; +free_hwirq: + dmar_free_hwirq(irq); + iommu->pr_irq =3D 0; +free_prq: + iommu_free_pages(iommu->prq, PRQ_ORDER); + iommu->prq =3D NULL; + + return ret; +} + +int intel_finish_prq(struct intel_iommu *iommu) +{ + dmar_writeq(iommu->reg + DMAR_PQH_REG, 0ULL); + dmar_writeq(iommu->reg + DMAR_PQT_REG, 0ULL); + dmar_writeq(iommu->reg + DMAR_PQA_REG, 0ULL); + + if (iommu->pr_irq) { + free_irq(iommu->pr_irq, iommu); + dmar_free_hwirq(iommu->pr_irq); + iommu->pr_irq =3D 0; + } + + if (iommu->iopf_queue) { + iopf_queue_free(iommu->iopf_queue); + iommu->iopf_queue =3D NULL; + } + + iommu_free_pages(iommu->prq, PRQ_ORDER); + iommu->prq =3D NULL; + + return 0; +} + +void intel_page_response(struct device *dev, struct iopf_fault *evt, + struct iommu_page_response *msg) +{ + struct device_domain_info *info =3D dev_iommu_priv_get(dev); + struct intel_iommu *iommu =3D info->iommu; + u8 bus =3D info->bus, devfn =3D info->devfn; + struct iommu_fault_page_request *prm; + struct qi_desc desc; + bool pasid_present; + bool last_page; + u16 sid; + + prm =3D &evt->fault.prm; + sid =3D PCI_DEVID(bus, devfn); + pasid_present =3D prm->flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID; + last_page =3D prm->flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE; + + desc.qw0 =3D QI_PGRP_PASID(prm->pasid) | QI_PGRP_DID(sid) | + QI_PGRP_PASID_P(pasid_present) | + QI_PGRP_RESP_CODE(msg->code) | + QI_PGRP_RESP_TYPE; + desc.qw1 =3D QI_PGRP_IDX(prm->grpid) | QI_PGRP_LPIG(last_page); + 
desc.qw2 =3D 0; + desc.qw3 =3D 0; + + qi_submit_sync(iommu, &desc, 1, 0); +} + diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c index 0e3a9b38bef2..6ab7d9d03d3d 100644 --- a/drivers/iommu/intel/svm.c +++ b/drivers/iommu/intel/svm.c @@ -25,92 +25,6 @@ #include "../iommu-pages.h" #include "trace.h" =20 -static irqreturn_t prq_event_thread(int irq, void *d); - -int intel_svm_enable_prq(struct intel_iommu *iommu) -{ - struct iopf_queue *iopfq; - int irq, ret; - - iommu->prq =3D iommu_alloc_pages_node(iommu->node, GFP_KERNEL, PRQ_ORDER); - if (!iommu->prq) { - pr_warn("IOMMU: %s: Failed to allocate page request queue\n", - iommu->name); - return -ENOMEM; - } - - irq =3D dmar_alloc_hwirq(IOMMU_IRQ_ID_OFFSET_PRQ + iommu->seq_id, iommu->= node, iommu); - if (irq <=3D 0) { - pr_err("IOMMU: %s: Failed to create IRQ vector for page request queue\n", - iommu->name); - ret =3D -EINVAL; - goto free_prq; - } - iommu->pr_irq =3D irq; - - snprintf(iommu->iopfq_name, sizeof(iommu->iopfq_name), - "dmar%d-iopfq", iommu->seq_id); - iopfq =3D iopf_queue_alloc(iommu->iopfq_name); - if (!iopfq) { - pr_err("IOMMU: %s: Failed to allocate iopf queue\n", iommu->name); - ret =3D -ENOMEM; - goto free_hwirq; - } - iommu->iopf_queue =3D iopfq; - - snprintf(iommu->prq_name, sizeof(iommu->prq_name), "dmar%d-prq", iommu->s= eq_id); - - ret =3D request_threaded_irq(irq, NULL, prq_event_thread, IRQF_ONESHOT, - iommu->prq_name, iommu); - if (ret) { - pr_err("IOMMU: %s: Failed to request IRQ for page request queue\n", - iommu->name); - goto free_iopfq; - } - dmar_writeq(iommu->reg + DMAR_PQH_REG, 0ULL); - dmar_writeq(iommu->reg + DMAR_PQT_REG, 0ULL); - dmar_writeq(iommu->reg + DMAR_PQA_REG, virt_to_phys(iommu->prq) | PRQ_ORD= ER); - - init_completion(&iommu->prq_complete); - - return 0; - -free_iopfq: - iopf_queue_free(iommu->iopf_queue); - iommu->iopf_queue =3D NULL; -free_hwirq: - dmar_free_hwirq(irq); - iommu->pr_irq =3D 0; -free_prq: - iommu_free_pages(iommu->prq, PRQ_ORDER); - iommu->prq =3D NULL; - - return ret; -} - -int intel_svm_finish_prq(struct intel_iommu *iommu) -{ - dmar_writeq(iommu->reg + DMAR_PQH_REG, 0ULL); - dmar_writeq(iommu->reg + DMAR_PQT_REG, 0ULL); - dmar_writeq(iommu->reg + DMAR_PQA_REG, 0ULL); - - if (iommu->pr_irq) { - free_irq(iommu->pr_irq, iommu); - dmar_free_hwirq(iommu->pr_irq); - iommu->pr_irq =3D 0; - } - - if (iommu->iopf_queue) { - iopf_queue_free(iommu->iopf_queue); - iommu->iopf_queue =3D NULL; - } - - iommu_free_pages(iommu->prq, PRQ_ORDER); - iommu->prq =3D NULL; - - return 0; -} - void intel_svm_check(struct intel_iommu *iommu) { if (!pasid_supported(iommu)) @@ -237,317 +151,6 @@ static int intel_svm_set_dev_pasid(struct iommu_domai= n *domain, return ret; } =20 -/* Page request queue descriptor */ -struct page_req_dsc { - union { - struct { - u64 type:8; - u64 pasid_present:1; - u64 rsvd:7; - u64 rid:16; - u64 pasid:20; - u64 exe_req:1; - u64 pm_req:1; - u64 rsvd2:10; - }; - u64 qw_0; - }; - union { - struct { - u64 rd_req:1; - u64 wr_req:1; - u64 lpig:1; - u64 prg_index:9; - u64 addr:52; - }; - u64 qw_1; - }; - u64 qw_2; - u64 qw_3; -}; - -static bool is_canonical_address(u64 addr) -{ - int shift =3D 64 - (__VIRTUAL_MASK_SHIFT + 1); - long saddr =3D (long) addr; - - return (((saddr << shift) >> shift) =3D=3D saddr); -} - -/** - * intel_drain_pasid_prq - Drain page requests and responses for a pasid - * @dev: target device - * @pasid: pasid for draining - * - * Drain all pending page requests and responses related to @pasid in both - * software and hardware. 
This is supposed to be called after the device - * driver has stopped DMA, the pasid entry has been cleared, and both IOTLB - * and DevTLB have been invalidated. - * - * It waits until all pending page requests for @pasid in the page fault - * queue are completed by the prq handling thread. Then follow the steps - * described in VT-d spec CH7.10 to drain all page requests and page - * responses pending in the hardware. - */ -void intel_drain_pasid_prq(struct device *dev, u32 pasid) -{ - struct device_domain_info *info; - struct dmar_domain *domain; - struct intel_iommu *iommu; - struct qi_desc desc[3]; - struct pci_dev *pdev; - int head, tail; - u16 sid, did; - int qdep; - - info =3D dev_iommu_priv_get(dev); - if (WARN_ON(!info || !dev_is_pci(dev))) - return; - - if (!info->pri_enabled) - return; - - iommu =3D info->iommu; - domain =3D info->domain; - pdev =3D to_pci_dev(dev); - sid =3D PCI_DEVID(info->bus, info->devfn); - did =3D domain_id_iommu(domain, iommu); - qdep =3D pci_ats_queue_depth(pdev); - - /* - * Check and wait until all pending page requests in the queue are - * handled by the prq handling thread. - */ -prq_retry: - reinit_completion(&iommu->prq_complete); - tail =3D dmar_readq(iommu->reg + DMAR_PQT_REG) & PRQ_RING_MASK; - head =3D dmar_readq(iommu->reg + DMAR_PQH_REG) & PRQ_RING_MASK; - while (head !=3D tail) { - struct page_req_dsc *req; - - req =3D &iommu->prq[head / sizeof(*req)]; - if (!req->pasid_present || req->pasid !=3D pasid) { - head =3D (head + sizeof(*req)) & PRQ_RING_MASK; - continue; - } - - wait_for_completion(&iommu->prq_complete); - goto prq_retry; - } - - iopf_queue_flush_dev(dev); - - /* - * Perform steps described in VT-d spec CH7.10 to drain page - * requests and responses in hardware. - */ - memset(desc, 0, sizeof(desc)); - desc[0].qw0 =3D QI_IWD_STATUS_DATA(QI_DONE) | - QI_IWD_FENCE | - QI_IWD_TYPE; - desc[1].qw0 =3D QI_EIOTLB_PASID(pasid) | - QI_EIOTLB_DID(did) | - QI_EIOTLB_GRAN(QI_GRAN_NONG_PASID) | - QI_EIOTLB_TYPE; - desc[2].qw0 =3D QI_DEV_EIOTLB_PASID(pasid) | - QI_DEV_EIOTLB_SID(sid) | - QI_DEV_EIOTLB_QDEP(qdep) | - QI_DEIOTLB_TYPE | - QI_DEV_IOTLB_PFSID(info->pfsid); -qi_retry: - reinit_completion(&iommu->prq_complete); - qi_submit_sync(iommu, desc, 3, QI_OPT_WAIT_DRAIN); - if (readl(iommu->reg + DMAR_PRS_REG) & DMA_PRS_PRO) { - wait_for_completion(&iommu->prq_complete); - goto qi_retry; - } -} - -static int prq_to_iommu_prot(struct page_req_dsc *req) -{ - int prot =3D 0; - - if (req->rd_req) - prot |=3D IOMMU_FAULT_PERM_READ; - if (req->wr_req) - prot |=3D IOMMU_FAULT_PERM_WRITE; - if (req->exe_req) - prot |=3D IOMMU_FAULT_PERM_EXEC; - if (req->pm_req) - prot |=3D IOMMU_FAULT_PERM_PRIV; - - return prot; -} - -static void intel_svm_prq_report(struct intel_iommu *iommu, struct device = *dev, - struct page_req_dsc *desc) -{ - struct iopf_fault event =3D { }; - - /* Fill in event data for device specific processing */ - event.fault.type =3D IOMMU_FAULT_PAGE_REQ; - event.fault.prm.addr =3D (u64)desc->addr << VTD_PAGE_SHIFT; - event.fault.prm.pasid =3D desc->pasid; - event.fault.prm.grpid =3D desc->prg_index; - event.fault.prm.perm =3D prq_to_iommu_prot(desc); - - if (desc->lpig) - event.fault.prm.flags |=3D IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE; - if (desc->pasid_present) { - event.fault.prm.flags |=3D IOMMU_FAULT_PAGE_REQUEST_PASID_VALID; - event.fault.prm.flags |=3D IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID; - } - - iommu_report_device_fault(dev, &event); -} - -static void handle_bad_prq_event(struct intel_iommu *iommu, - struct page_req_dsc *req, int 
result) -{ - struct qi_desc desc =3D { }; - - pr_err("%s: Invalid page request: %08llx %08llx\n", - iommu->name, ((unsigned long long *)req)[0], - ((unsigned long long *)req)[1]); - - if (!req->lpig) - return; - - desc.qw0 =3D QI_PGRP_PASID(req->pasid) | - QI_PGRP_DID(req->rid) | - QI_PGRP_PASID_P(req->pasid_present) | - QI_PGRP_RESP_CODE(result) | - QI_PGRP_RESP_TYPE; - desc.qw1 =3D QI_PGRP_IDX(req->prg_index) | - QI_PGRP_LPIG(req->lpig); - - qi_submit_sync(iommu, &desc, 1, 0); -} - -static irqreturn_t prq_event_thread(int irq, void *d) -{ - struct intel_iommu *iommu =3D d; - struct page_req_dsc *req; - int head, tail, handled; - struct device *dev; - u64 address; - - /* - * Clear PPR bit before reading head/tail registers, to ensure that - * we get a new interrupt if needed. - */ - writel(DMA_PRS_PPR, iommu->reg + DMAR_PRS_REG); - - tail =3D dmar_readq(iommu->reg + DMAR_PQT_REG) & PRQ_RING_MASK; - head =3D dmar_readq(iommu->reg + DMAR_PQH_REG) & PRQ_RING_MASK; - handled =3D (head !=3D tail); - while (head !=3D tail) { - req =3D &iommu->prq[head / sizeof(*req)]; - address =3D (u64)req->addr << VTD_PAGE_SHIFT; - - if (unlikely(!req->pasid_present)) { - pr_err("IOMMU: %s: Page request without PASID\n", - iommu->name); -bad_req: - handle_bad_prq_event(iommu, req, QI_RESP_INVALID); - goto prq_advance; - } - - if (unlikely(!is_canonical_address(address))) { - pr_err("IOMMU: %s: Address is not canonical\n", - iommu->name); - goto bad_req; - } - - if (unlikely(req->pm_req && (req->rd_req | req->wr_req))) { - pr_err("IOMMU: %s: Page request in Privilege Mode\n", - iommu->name); - goto bad_req; - } - - if (unlikely(req->exe_req && req->rd_req)) { - pr_err("IOMMU: %s: Execution request not supported\n", - iommu->name); - goto bad_req; - } - - /* Drop Stop Marker message. No need for a response. */ - if (unlikely(req->lpig && !req->rd_req && !req->wr_req)) - goto prq_advance; - - /* - * If prq is to be handled outside iommu driver via receiver of - * the fault notifiers, we skip the page response here. - */ - mutex_lock(&iommu->iopf_lock); - dev =3D device_rbtree_find(iommu, req->rid); - if (!dev) { - mutex_unlock(&iommu->iopf_lock); - goto bad_req; - } - - intel_svm_prq_report(iommu, dev, req); - trace_prq_report(iommu, dev, req->qw_0, req->qw_1, - req->qw_2, req->qw_3, - iommu->prq_seq_number++); - mutex_unlock(&iommu->iopf_lock); -prq_advance: - head =3D (head + sizeof(*req)) & PRQ_RING_MASK; - } - - dmar_writeq(iommu->reg + DMAR_PQH_REG, tail); - - /* - * Clear the page request overflow bit and wake up all threads that - * are waiting for the completion of this handling. 
- */ - if (readl(iommu->reg + DMAR_PRS_REG) & DMA_PRS_PRO) { - pr_info_ratelimited("IOMMU: %s: PRQ overflow detected\n", - iommu->name); - head =3D dmar_readq(iommu->reg + DMAR_PQH_REG) & PRQ_RING_MASK; - tail =3D dmar_readq(iommu->reg + DMAR_PQT_REG) & PRQ_RING_MASK; - if (head =3D=3D tail) { - iopf_queue_discard_partial(iommu->iopf_queue); - writel(DMA_PRS_PRO, iommu->reg + DMAR_PRS_REG); - pr_info_ratelimited("IOMMU: %s: PRQ overflow cleared", - iommu->name); - } - } - - if (!completion_done(&iommu->prq_complete)) - complete(&iommu->prq_complete); - - return IRQ_RETVAL(handled); -} - -void intel_svm_page_response(struct device *dev, struct iopf_fault *evt, - struct iommu_page_response *msg) -{ - struct device_domain_info *info =3D dev_iommu_priv_get(dev); - struct intel_iommu *iommu =3D info->iommu; - u8 bus =3D info->bus, devfn =3D info->devfn; - struct iommu_fault_page_request *prm; - struct qi_desc desc; - bool pasid_present; - bool last_page; - u16 sid; - - prm =3D &evt->fault.prm; - sid =3D PCI_DEVID(bus, devfn); - pasid_present =3D prm->flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID; - last_page =3D prm->flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE; - - desc.qw0 =3D QI_PGRP_PASID(prm->pasid) | QI_PGRP_DID(sid) | - QI_PGRP_PASID_P(pasid_present) | - QI_PGRP_RESP_CODE(msg->code) | - QI_PGRP_RESP_TYPE; - desc.qw1 =3D QI_PGRP_IDX(prm->grpid) | QI_PGRP_LPIG(last_page); - desc.qw2 =3D 0; - desc.qw3 =3D 0; - - qi_submit_sync(iommu, &desc, 1, 0); -} - static void intel_svm_domain_free(struct iommu_domain *domain) { struct dmar_domain *dmar_domain =3D to_dmar_domain(domain); --=20 2.43.0
From nobody Fri Nov 29 23:43:56 2024
From: Joel Granados via B4 Relay
Date: Fri, 13 Sep 2024 13:44:48 +0200
Subject: [PATCH v2 2/5] iommu/vt-d: Remove the pasid present check in prq_event_thread
Message-Id: <20240913-jag-iopfv8-v2-2-dea01c2343bc@samsung.com>
References: <20240913-jag-iopfv8-v2-0-dea01c2343bc@samsung.com>
In-Reply-To: <20240913-jag-iopfv8-v2-0-dea01c2343bc@samsung.com>
To: David Woodhouse , Lu Baolu , Joerg Roedel , Will Deacon , Robin Murphy , Jason Gunthorpe , Kevin Tian , Klaus Jensen
Cc: linux-kernel@vger.kernel.org, iommu@lists.linux.dev, Joel Granados , Klaus Jensen
Reply-To: j.granados@samsung.com

From: Klaus Jensen

PASID is not strictly needed when handling a PRQ event; remove the check for the pasid present bit in the request. This change is kept separate from the prq.c code move to emphasize the change in capability checks when handling PRQ events.
Signed-off-by: Klaus Jensen
Signed-off-by: Joel Granados
Reviewed-by: Kevin Tian
---
 drivers/iommu/intel/prq.c | 12 +++--------- 1 file changed, 3 insertions(+), 9 deletions(-) diff --git a/drivers/iommu/intel/prq.c b/drivers/iommu/intel/prq.c index 3376f60082b5..5f2d01a9aa11 100644 --- a/drivers/iommu/intel/prq.c +++ b/drivers/iommu/intel/prq.c @@ -221,18 +221,12 @@ static irqreturn_t prq_event_thread(int irq, void *d) req =3D &iommu->prq[head / sizeof(*req)]; address =3D (u64)req->addr << VTD_PAGE_SHIFT; =20 - if (unlikely(!req->pasid_present)) { - pr_err("IOMMU: %s: Page request without PASID\n", - iommu->name); -bad_req: - handle_bad_prq_event(iommu, req, QI_RESP_INVALID); - goto prq_advance; - } - if (unlikely(!is_canonical_address(address))) { pr_err("IOMMU: %s: Address is not canonical\n", iommu->name); - goto bad_req; +bad_req: + handle_bad_prq_event(iommu, req, QI_RESP_INVALID); + goto prq_advance; } =20 if (unlikely(req->pm_req && (req->rd_req | req->wr_req))) { --=20 2.43.0
From nobody Fri Nov 29 23:43:56 2024
From: Joel Granados via B4 Relay
Date: Fri, 13 Sep 2024 13:44:49 +0200
Subject: [PATCH v2 3/5] iommu: kconfig: Move IOMMU_IOPF into INTEL_IOMMU
Message-Id: <20240913-jag-iopfv8-v2-3-dea01c2343bc@samsung.com>
References: <20240913-jag-iopfv8-v2-0-dea01c2343bc@samsung.com>
In-Reply-To: <20240913-jag-iopfv8-v2-0-dea01c2343bc@samsung.com>
To: David Woodhouse , Lu Baolu , Joerg Roedel , Will Deacon , Robin Murphy , Jason Gunthorpe , Kevin Tian , Klaus Jensen
Cc: linux-kernel@vger.kernel.org, iommu@lists.linux.dev, Joel Granados
Reply-To: j.granados@samsung.com

From: Joel Granados

Move the IOMMU_IOPF select from under INTEL_IOMMU_SVM into INTEL_IOMMU. This ensures that the core Intel IOMMU driver can use the IOPF library functions independently of the INTEL_IOMMU_SVM config.
Signed-off-by: Joel Granados
---
 drivers/iommu/intel/Kconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/iommu/intel/Kconfig b/drivers/iommu/intel/Kconfig index f52fb39c968e..2888671c9278 100644 --- a/drivers/iommu/intel/Kconfig +++ b/drivers/iommu/intel/Kconfig @@ -15,6 +15,7 @@ config INTEL_IOMMU select DMA_OPS select IOMMU_API select IOMMU_IOVA + select IOMMU_IOPF select IOMMUFD_DRIVER if IOMMUFD select NEED_DMA_MAP_STATE select DMAR_TABLE @@ -51,7 +52,6 @@ config INTEL_IOMMU_SVM depends on X86_64 select MMU_NOTIFIER select IOMMU_SVA - select IOMMU_IOPF help Shared Virtual Memory (SVM) provides a facility for devices to access DMA resources through process address space by --=20 2.43.0
From nobody Fri Nov 29 23:43:56 2024
From: Joel Granados via B4 Relay
Date: Fri, 13 Sep 2024 13:44:50 +0200
Subject: [PATCH v2 4/5] iommufd: Enable PRI when doing the iommufd_hwpt_alloc
Message-Id: <20240913-jag-iopfv8-v2-4-dea01c2343bc@samsung.com>
References: <20240913-jag-iopfv8-v2-0-dea01c2343bc@samsung.com>
In-Reply-To: <20240913-jag-iopfv8-v2-0-dea01c2343bc@samsung.com>
To: David Woodhouse , Lu Baolu , Joerg Roedel , Will Deacon , Robin Murphy , Jason Gunthorpe , Kevin Tian , Klaus Jensen
Cc: linux-kernel@vger.kernel.org, iommu@lists.linux.dev, Joel Granados
Reply-To: j.granados@samsung.com

From: Joel Granados

Add IOMMU_HWPT_FAULT_ID_VALID to the valid flags accepted by iommufd_hwpt_alloc, allowing an iommu fault allocation (iommu_fault_alloc) to be used with the IOMMU_HWPT_ALLOC ioctl.

Signed-off-by: Joel Granados
Reviewed-by: Kevin Tian
---
 drivers/iommu/intel/iommu.c | 3 ++- drivers/iommu/iommufd/hw_pagetable.c | 3 ++- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c index 5acc52c62e8c..3d1c971eb9e5 100644 --- a/drivers/iommu/intel/iommu.c +++ b/drivers/iommu/intel/iommu.c @@ -3718,7 +3718,8 @@ intel_iommu_domain_alloc_user(struct device *dev, u32= flags, } =20 if (flags & - (~(IOMMU_HWPT_ALLOC_NEST_PARENT | IOMMU_HWPT_ALLOC_DIRTY_TRACKING))) + (~(IOMMU_HWPT_ALLOC_NEST_PARENT | IOMMU_HWPT_ALLOC_DIRTY_TRACKING + | IOMMU_HWPT_FAULT_ID_VALID))) return ERR_PTR(-EOPNOTSUPP); if (nested_parent && !nested_supported(iommu)) return ERR_PTR(-EOPNOTSUPP); diff --git a/drivers/iommu/iommufd/hw_pagetable.c b/drivers/iommu/iommufd/h= w_pagetable.c index aefde4443671..88074e459995 100644 --- a/drivers/iommu/iommufd/hw_pagetable.c +++ b/drivers/iommu/iommufd/hw_pagetable.c @@ -107,7 +107,8 @@ iommufd_hwpt_paging_alloc(struct iommufd_ctx *ictx, str= uct iommufd_ioas *ioas, const struct iommu_user_data *user_data) { const u32 valid_flags =3D IOMMU_HWPT_ALLOC_NEST_PARENT | - IOMMU_HWPT_ALLOC_DIRTY_TRACKING; + IOMMU_HWPT_ALLOC_DIRTY_TRACKING | + IOMMU_HWPT_FAULT_ID_VALID; const struct iommu_ops *ops =3D dev_iommu_ops(idev->dev); struct iommufd_hwpt_paging *hwpt_paging; struct iommufd_hw_pagetable *hwpt; --=20 2.43.0
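For reference, below is a minimal user-space sketch of how the newly accepted flag is expected to be consumed. It assumes the iommufd fault-object uAPI from the same development cycle (the IOMMU_FAULT_QUEUE_ALLOC ioctl, struct iommu_fault_alloc, and the fault_id field of struct iommu_hwpt_alloc in <linux/iommufd.h>); device binding and IOAS setup (dev_id, ioas_id) are omitted, so treat it as a sketch rather than a complete consumer.

/*
 * Sketch: allocate an iommufd fault queue, then allocate a HWPT that
 * references it via IOMMU_HWPT_FAULT_ID_VALID. Assumes the iommufd
 * fault-object uAPI in <linux/iommufd.h>; dev_id/ioas_id come from the
 * usual iommufd device-bind and IOAS setup, which is not shown here.
 */
#include <sys/ioctl.h>
#include <linux/iommufd.h>

static int alloc_faultable_hwpt(int iommufd, __u32 dev_id, __u32 ioas_id,
				__u32 *out_hwpt_id, int *out_fault_fd)
{
	struct iommu_fault_alloc fault = {
		.size = sizeof(fault),
	};
	struct iommu_hwpt_alloc hwpt = {
		.size = sizeof(hwpt),
		.flags = IOMMU_HWPT_FAULT_ID_VALID,
		.dev_id = dev_id,
		.pt_id = ioas_id,
	};

	/* Create the fault object; page faults are later read from out_fault_fd. */
	if (ioctl(iommufd, IOMMU_FAULT_QUEUE_ALLOC, &fault))
		return -1;

	/* Tie the new HWPT to the fault object so IO page faults can be delivered. */
	hwpt.fault_id = fault.out_fault_id;
	if (ioctl(iommufd, IOMMU_HWPT_ALLOC, &hwpt))
		return -1;

	*out_hwpt_id = hwpt.out_hwpt_id;
	*out_fault_fd = fault.out_fault_fd;
	return 0;
}

Faults delivered on the returned fault fd would then be read as struct iommu_hwpt_pgfault records and acknowledged by writing struct iommu_hwpt_page_response back, though the exact read/response flow depends on the fault-object series this patch builds on.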
From nobody Fri Nov 29 23:43:56 2024
From: Joel Granados via B4 Relay
Date: Fri, 13 Sep 2024 13:44:51 +0200
Subject: [PATCH v2 5/5] iommu/vt-d: drop pasid requirement for prq initialization
Message-Id: <20240913-jag-iopfv8-v2-5-dea01c2343bc@samsung.com>
References: <20240913-jag-iopfv8-v2-0-dea01c2343bc@samsung.com>
In-Reply-To: <20240913-jag-iopfv8-v2-0-dea01c2343bc@samsung.com>
To: David Woodhouse , Lu Baolu , Joerg Roedel , Will Deacon , Robin Murphy , Jason Gunthorpe , Kevin Tian , Klaus Jensen
Cc: linux-kernel@vger.kernel.org, iommu@lists.linux.dev, Joel Granados , Klaus Jensen
Reply-To: j.granados@samsung.com
From: Klaus Jensen

PASID support within the IOMMU is not required to enable the Page Request Queue; only the PRS capability is needed.

Signed-off-by: Klaus Jensen
Signed-off-by: Joel Granados
Reviewed-by: Kevin Tian
---
 drivers/iommu/intel/iommu.c | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c index 3d1c971eb9e5..9f3bbdbd6372 100644 --- a/drivers/iommu/intel/iommu.c +++ b/drivers/iommu/intel/iommu.c @@ -1487,10 +1487,8 @@ static void free_dmar_iommu(struct intel_iommu *iomm= u) /* free context mapping */ free_context_table(iommu); =20 - if (pasid_supported(iommu)) { - if (ecap_prs(iommu->ecap)) - intel_finish_prq(iommu); - } + if (ecap_prs(iommu->ecap)) + intel_finish_prq(iommu); } =20 /* @@ -2480,7 +2478,7 @@ static int __init init_dmars(void) =20 iommu_flush_write_buffer(iommu); =20 - if (pasid_supported(iommu) && ecap_prs(iommu->ecap)) { + if (ecap_prs(iommu->ecap)) { /* * Call dmar_alloc_hwirq() with dmar_global_lock held, * could cause possible lock race condition. @@ -2921,7 +2919,7 @@ static int intel_iommu_add(struct dmar_drhd_unit *dma= ru) intel_iommu_init_qi(iommu); iommu_flush_write_buffer(iommu); =20 - if (pasid_supported(iommu) && ecap_prs(iommu->ecap)) { + if (ecap_prs(iommu->ecap)) { ret =3D intel_enable_prq(iommu); if (ret) goto disable_iommu; --=20 2.43.0