From: Jacob Pan
To: LKML, iommu@lists.linux.dev, Lu Baolu, Joerg Roedel, Jean-Philippe Brucker, Robin Murphy
Cc: Jason Gunthorpe, Will Deacon, "Tian, Kevin", Yi Liu, "Yu, Fenghua", Tony Luck, Jacob Pan
Subject: [PATCH v10 4/7] iommu/vt-d: Remove pasid_mutex
Date: Wed, 12 Jul 2023 09:33:52 -0700
Message-Id: <20230712163355.3177511-5-jacob.jun.pan@linux.intel.com>
In-Reply-To: <20230712163355.3177511-1-jacob.jun.pan@linux.intel.com>
References: <20230712163355.3177511-1-jacob.jun.pan@linux.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Lu Baolu

The pasid_mutex was used to protect the paths of set/remove_dev_pasid().
It duplicates iommu_sva_lock. Remove it to avoid duplicate code.

Reviewed-by: Jacob Pan
Signed-off-by: Lu Baolu
---
 drivers/iommu/intel/svm.c | 45 +++++----------------------
 1 file changed, 5 insertions(+), 40 deletions(-)

diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index e95b339e9cdc..2a82864e9d57 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -259,8 +259,6 @@ static const struct mmu_notifier_ops intel_mmuops = {
 	.invalidate_range = intel_invalidate_range,
 };
 
-static DEFINE_MUTEX(pasid_mutex);
-
 static int pasid_to_svm_sdev(struct device *dev, unsigned int pasid,
 			     struct intel_svm **rsvm,
 			     struct intel_svm_dev **rsdev)
@@ -268,10 +266,6 @@ static int pasid_to_svm_sdev(struct device *dev, unsigned int pasid,
 	struct intel_svm_dev *sdev = NULL;
 	struct intel_svm *svm;
 
-	/* The caller should hold the pasid_mutex lock */
-	if (WARN_ON(!mutex_is_locked(&pasid_mutex)))
-		return -EINVAL;
-
 	if (pasid == IOMMU_PASID_INVALID || pasid >= PASID_MAX)
 		return -EINVAL;
 
@@ -371,22 +365,19 @@ static int intel_svm_bind_mm(struct intel_iommu *iommu, struct device *dev,
 	return ret;
 }
 
-/* Caller must hold pasid_mutex */
-static int intel_svm_unbind_mm(struct device *dev, u32 pasid)
+void intel_svm_remove_dev_pasid(struct device *dev, u32 pasid)
 {
 	struct intel_svm_dev *sdev;
 	struct intel_iommu *iommu;
 	struct intel_svm *svm;
 	struct mm_struct *mm;
-	int ret = -EINVAL;
 
 	iommu = device_to_iommu(dev, NULL, NULL);
 	if (!iommu)
-		goto out;
+		return;
 
-	ret = pasid_to_svm_sdev(dev, pasid, &svm, &sdev);
-	if (ret)
-		goto out;
+	if (pasid_to_svm_sdev(dev, pasid, &svm, &sdev))
+		return;
 	mm = svm->mm;
 
 	if (sdev) {
@@ -418,8 +409,6 @@ static int intel_svm_unbind_mm(struct device *dev, u32 pasid)
 			kfree(svm);
 		}
 	}
-out:
-	return ret;
 }
 
 /* Page request queue descriptor */
@@ -520,19 +509,7 @@ static void intel_svm_drain_prq(struct device *dev, u32 pasid)
 		goto prq_retry;
 	}
 
-	/*
-	 * A work in IO page fault workqueue may try to lock pasid_mutex now.
-	 * Holding pasid_mutex while waiting in iopf_queue_flush_dev() for
-	 * all works in the workqueue to finish may cause deadlock.
-	 *
-	 * It's unnecessary to hold pasid_mutex in iopf_queue_flush_dev().
-	 * Unlock it to allow the works to be handled while waiting for
-	 * them to finish.
-	 */
-	lockdep_assert_held(&pasid_mutex);
-	mutex_unlock(&pasid_mutex);
 	iopf_queue_flush_dev(dev);
-	mutex_lock(&pasid_mutex);
 
 	/*
 	 * Perform steps described in VT-d spec CH7.10 to drain page
@@ -827,26 +804,14 @@ int intel_svm_page_response(struct device *dev,
 	return ret;
 }
 
-void intel_svm_remove_dev_pasid(struct device *dev, ioasid_t pasid)
-{
-	mutex_lock(&pasid_mutex);
-	intel_svm_unbind_mm(dev, pasid);
-	mutex_unlock(&pasid_mutex);
-}
-
 static int intel_svm_set_dev_pasid(struct iommu_domain *domain,
 				   struct device *dev, ioasid_t pasid)
 {
 	struct device_domain_info *info = dev_iommu_priv_get(dev);
 	struct intel_iommu *iommu = info->iommu;
 	struct mm_struct *mm = domain->mm;
-	int ret;
 
-	mutex_lock(&pasid_mutex);
-	ret = intel_svm_bind_mm(iommu, dev, mm);
-	mutex_unlock(&pasid_mutex);
-
-	return ret;
+	return intel_svm_bind_mm(iommu, dev, mm);
 }
 
 static void intel_svm_domain_free(struct iommu_domain *domain)
-- 
2.25.1