From: Lu Baolu <baolu.lu@linux.intel.com>
To: Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe, Kevin Tian,
	Jean-Philippe Brucker, Nicolin Chen
Cc: Yi Liu, Jacob Pan, iommu@lists.linux.dev, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, Lu Baolu
Subject: [PATCH v3 07/11] iommu: Prepare for separating SVA and IOPF
Date: Fri, 18 Aug 2023 07:40:43 +0800
Message-Id: <20230817234047.195194-8-baolu.lu@linux.intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230817234047.195194-1-baolu.lu@linux.intel.com>
References: <20230817234047.195194-1-baolu.lu@linux.intel.com>

Move the iopf_group data structure to iommu.h so that it represents the
minimal set of faults that a domain's page fault handler has to handle.

Add two new helpers for the domain's page fault handler:

- iopf_free_group: free a fault group after all faults in the group
  have been handled.
- iopf_queue_work: queue a given work item for a fault group.

This will simplify the subsequent patches.
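For context, the pattern that later patches in this series build on looks
roughly like the sketch below. my_fault_work and my_dispatch are
hypothetical names used only for illustration; they are not symbols added
by this patch.

/*
 * Illustrative sketch only: a consumer of the two new helpers. A real
 * handler would also respond to the device via iopf_complete_group()
 * before releasing the group, as iopf_handler() does.
 */
static void my_fault_work(struct work_struct *work)
{
	struct iopf_group *group = container_of(work, struct iopf_group, work);

	/* ... resolve each iopf_fault on group->faults here ... */

	/* Frees the partial faults in the group, then the group itself. */
	iopf_free_group(group);
}

static int my_dispatch(struct iopf_group *group)
{
	int ret;

	ret = iopf_queue_work(group, my_fault_work);
	if (ret)
		/* -EBUSY: the work was never queued, so free the group here. */
		iopf_free_group(group);

	return ret;
}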
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jason Gunthorpe
---
 include/linux/iommu.h      | 12 ++++++++++
 drivers/iommu/io-pgfault.c | 49 ++++++++++++++++++++++----------------
 2 files changed, 41 insertions(+), 20 deletions(-)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 8243d72098ea..ff292eea9d31 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -516,6 +516,18 @@ struct dev_iommu {
 	u32			require_direct:1;
 };
 
+struct iopf_fault {
+	struct iommu_fault fault;
+	struct list_head list;
+};
+
+struct iopf_group {
+	struct iopf_fault last_fault;
+	struct list_head faults;
+	struct work_struct work;
+	struct device *dev;
+};
+
 int iommu_device_register(struct iommu_device *iommu,
 			  const struct iommu_ops *ops,
 			  struct device *hwdev);
diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
index 31832aeacdba..d07586cd37fd 100644
--- a/drivers/iommu/io-pgfault.c
+++ b/drivers/iommu/io-pgfault.c
@@ -25,17 +25,17 @@ struct iopf_queue {
 	struct mutex		lock;
 };
 
-struct iopf_fault {
-	struct iommu_fault fault;
-	struct list_head list;
-};
+static void iopf_free_group(struct iopf_group *group)
+{
+	struct iopf_fault *iopf, *next;
 
-struct iopf_group {
-	struct iopf_fault last_fault;
-	struct list_head faults;
-	struct work_struct work;
-	struct device *dev;
-};
+	list_for_each_entry_safe(iopf, next, &group->faults, list) {
+		if (!(iopf->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE))
+			kfree(iopf);
+	}
+
+	kfree(group);
+}
 
 static int iopf_complete_group(struct device *dev, struct iopf_fault *iopf,
 			       enum iommu_page_response_code status)
@@ -55,9 +55,9 @@ static int iopf_complete_group(struct device *dev, struct iopf_fault *iopf,
 
 static void iopf_handler(struct work_struct *work)
 {
+	struct iopf_fault *iopf;
 	struct iopf_group *group;
 	struct iommu_domain *domain;
-	struct iopf_fault *iopf, *next;
 	enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS;
 
 	group = container_of(work, struct iopf_group, work);
@@ -66,7 +66,7 @@ static void iopf_handler(struct work_struct *work)
 	if (!domain || !domain->iopf_handler)
 		status = IOMMU_PAGE_RESP_INVALID;
 
-	list_for_each_entry_safe(iopf, next, &group->faults, list) {
+	list_for_each_entry(iopf, &group->faults, list) {
 		/*
 		 * For the moment, errors are sticky: don't handle subsequent
 		 * faults in the group if there is an error.
@@ -74,14 +74,21 @@ static void iopf_handler(struct work_struct *work)
 		if (status == IOMMU_PAGE_RESP_SUCCESS)
 			status = domain->iopf_handler(&iopf->fault,
 						      domain->fault_data);
-
-		if (!(iopf->fault.prm.flags &
-		      IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE))
-			kfree(iopf);
 	}
 
 	iopf_complete_group(group->dev, &group->last_fault, status);
-	kfree(group);
+	iopf_free_group(group);
+}
+
+static int iopf_queue_work(struct iopf_group *group, work_func_t func)
+{
+	struct iommu_fault_param *fault_param = group->dev->iommu->fault_param;
+
+	INIT_WORK(&group->work, func);
+	if (!queue_work(fault_param->queue->wq, &group->work))
+		return -EBUSY;
+
+	return 0;
 }
 
 /**
@@ -174,7 +181,6 @@ int iommu_queue_iopf(struct iommu_fault *fault, struct device *dev)
 	group->last_fault.fault = *fault;
 	INIT_LIST_HEAD(&group->faults);
 	list_add(&group->last_fault.list, &group->faults);
-	INIT_WORK(&group->work, iopf_handler);
 
 	/* See if we have partial faults for this group */
 	list_for_each_entry_safe(iopf, next, &iopf_param->partial, list) {
@@ -183,8 +189,11 @@ int iommu_queue_iopf(struct iommu_fault *fault, struct device *dev)
 		list_move(&iopf->list, &group->faults);
 	}
 
-	queue_work(iopf_param->queue->wq, &group->work);
-	return 0;
+	ret = iopf_queue_work(group, iopf_handler);
+	if (ret)
+		iopf_free_group(group);
+
+	return ret;
 
 cleanup_partial:
 	list_for_each_entry_safe(iopf, next, &iopf_param->partial, list) {
-- 
2.34.1