From: Jacob Pan
To: LKML, iommu@lists.linux.dev, Lu Baolu, Joerg Roedel, Jean-Philippe Brucker, Robin Murphy
Cc: David Woodhouse, Raj Ashok, "Tian, Kevin", Yi Liu, Jason Gunthorpe, Jacob Pan
Subject: [PATCH 1/2] iommu/vt-d: Remove virtual command interface
Date: Fri, 10 Feb 2023 15:02:05 -0800
Message-Id: <20230210230206.3160144-2-jacob.jun.pan@linux.intel.com>
In-Reply-To: <20230210230206.3160144-1-jacob.jun.pan@linux.intel.com>
References: <20230210230206.3160144-1-jacob.jun.pan@linux.intel.com>

The virtual command interface was introduced to allow host PASIDs to be
allocated and used inside VMs. It is unused and has been abandoned due to
an architectural change, so the feature and its related helpers can be
removed safely.
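For context, the guest-side hook being removed boils down to the custom
allocation callback sketched here (condensed from the hunks below, for
illustration only): in a guest, every PASID allocation was forwarded to
the host through the virtual command MMIO registers.

  /* Condensed from the code removed below; not a new API. */
  static ioasid_t intel_vcmd_ioasid_alloc(ioasid_t min, ioasid_t max,
                                          void *data)
  {
          struct intel_iommu *iommu = data;
          ioasid_t ioasid;

          /* The virtual command interface spans the full 20-bit PASID range. */
          if (!iommu || min < PASID_MIN || max > intel_pasid_max_id)
                  return INVALID_IOASID;
          /* MMIO handshake: the host side services the allocation. */
          if (vcmd_alloc_pasid(iommu, &ioasid))
                  return INVALID_IOASID;
          return ioasid;
  }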
Signed-off-by: Jacob Pan
Reviewed-by: Jason Gunthorpe
Reviewed-by: Kevin Tian
---
 drivers/iommu/intel/cap_audit.c |  2 -
 drivers/iommu/intel/dmar.c      |  2 -
 drivers/iommu/intel/iommu.c     | 85 ---------------------------------
 drivers/iommu/intel/iommu.h     |  8 ----
 4 files changed, 97 deletions(-)

diff --git a/drivers/iommu/intel/cap_audit.c b/drivers/iommu/intel/cap_audit.c
index 806986696841..9862dc20b35e 100644
--- a/drivers/iommu/intel/cap_audit.c
+++ b/drivers/iommu/intel/cap_audit.c
@@ -54,7 +54,6 @@ static inline void check_dmar_capabilities(struct intel_iommu *a,
         CHECK_FEATURE_MISMATCH(a, b, ecap, slts, ECAP_SLTS_MASK);
         CHECK_FEATURE_MISMATCH(a, b, ecap, nwfs, ECAP_NWFS_MASK);
         CHECK_FEATURE_MISMATCH(a, b, ecap, slads, ECAP_SLADS_MASK);
-        CHECK_FEATURE_MISMATCH(a, b, ecap, vcs, ECAP_VCS_MASK);
         CHECK_FEATURE_MISMATCH(a, b, ecap, smts, ECAP_SMTS_MASK);
         CHECK_FEATURE_MISMATCH(a, b, ecap, pds, ECAP_PDS_MASK);
         CHECK_FEATURE_MISMATCH(a, b, ecap, dit, ECAP_DIT_MASK);
@@ -101,7 +100,6 @@ static int cap_audit_hotplug(struct intel_iommu *iommu, enum cap_audit_type type
         CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, slts, ECAP_SLTS_MASK);
         CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, nwfs, ECAP_NWFS_MASK);
         CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, slads, ECAP_SLADS_MASK);
-        CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, vcs, ECAP_VCS_MASK);
         CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, smts, ECAP_SMTS_MASK);
         CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, pds, ECAP_PDS_MASK);
         CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, dit, ECAP_DIT_MASK);
diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
index b00a0ceb2d13..bf0bfe5ba7a7 100644
--- a/drivers/iommu/intel/dmar.c
+++ b/drivers/iommu/intel/dmar.c
@@ -989,8 +989,6 @@ static int map_iommu(struct intel_iommu *iommu, u64 phys_addr)
                 warn_invalid_dmar(phys_addr, " returns all ones");
                 goto unmap;
         }
-        if (ecap_vcs(iommu->ecap))
-                iommu->vccap = dmar_readq(iommu->reg + DMAR_VCCAP_REG);
 
         /* the registers might be more than one page */
         map_size = max_t(int, ecap_max_iotlb_offset(iommu->ecap),
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 2ee270e4d484..ca989d997a9a 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -1721,9 +1721,6 @@ static void free_dmar_iommu(struct intel_iommu *iommu)
                 if (ecap_prs(iommu->ecap))
                         intel_svm_finish_prq(iommu);
         }
-        if (vccap_pasid(iommu->vccap))
-                ioasid_unregister_allocator(&iommu->pasid_allocator);
-
 #endif
 }
 
@@ -2793,85 +2790,6 @@ static int copy_translation_tables(struct intel_iommu *iommu)
         return ret;
 }
 
-#ifdef CONFIG_INTEL_IOMMU_SVM
-static ioasid_t intel_vcmd_ioasid_alloc(ioasid_t min, ioasid_t max, void *data)
-{
-        struct intel_iommu *iommu = data;
-        ioasid_t ioasid;
-
-        if (!iommu)
-                return INVALID_IOASID;
-        /*
-         * VT-d virtual command interface always uses the full 20 bit
-         * PASID range. Host can partition guest PASID range based on
-         * policies but it is out of guest's control.
-         */
-        if (min < PASID_MIN || max > intel_pasid_max_id)
-                return INVALID_IOASID;
-
-        if (vcmd_alloc_pasid(iommu, &ioasid))
-                return INVALID_IOASID;
-
-        return ioasid;
-}
-
-static void intel_vcmd_ioasid_free(ioasid_t ioasid, void *data)
-{
-        struct intel_iommu *iommu = data;
-
-        if (!iommu)
-                return;
-        /*
-         * Sanity check the ioasid owner is done at upper layer, e.g. VFIO
-         * We can only free the PASID when all the devices are unbound.
-         */
-        if (ioasid_find(NULL, ioasid, NULL)) {
-                pr_alert("Cannot free active IOASID %d\n", ioasid);
-                return;
-        }
-        vcmd_free_pasid(iommu, ioasid);
-}
-
-static void register_pasid_allocator(struct intel_iommu *iommu)
-{
-        /*
-         * If we are running in the host, no need for custom allocator
-         * in that PASIDs are allocated from the host system-wide.
-         */
-        if (!cap_caching_mode(iommu->cap))
-                return;
-
-        if (!sm_supported(iommu)) {
-                pr_warn("VT-d Scalable Mode not enabled, no PASID allocation\n");
-                return;
-        }
-
-        /*
-         * Register a custom PASID allocator if we are running in a guest,
-         * guest PASID must be obtained via virtual command interface.
-         * There can be multiple vIOMMUs in each guest but only one allocator
-         * is active. All vIOMMU allocators will eventually be calling the same
-         * host allocator.
-         */
-        if (!vccap_pasid(iommu->vccap))
-                return;
-
-        pr_info("Register custom PASID allocator\n");
-        iommu->pasid_allocator.alloc = intel_vcmd_ioasid_alloc;
-        iommu->pasid_allocator.free = intel_vcmd_ioasid_free;
-        iommu->pasid_allocator.pdata = (void *)iommu;
-        if (ioasid_register_allocator(&iommu->pasid_allocator)) {
-                pr_warn("Custom PASID allocator failed, scalable mode disabled\n");
-                /*
-                 * Disable scalable mode on this IOMMU if there
-                 * is no custom allocator. Mixing SM capable vIOMMU
-                 * and non-SM vIOMMU are not supported.
-                 */
-                intel_iommu_sm = 0;
-        }
-}
-#endif
-
 static int __init init_dmars(void)
 {
         struct dmar_drhd_unit *drhd;
@@ -2960,9 +2878,6 @@ static int __init init_dmars(void)
          */
         for_each_active_iommu(iommu, drhd) {
                 iommu_flush_write_buffer(iommu);
-#ifdef CONFIG_INTEL_IOMMU_SVM
-                register_pasid_allocator(iommu);
-#endif
                 iommu_set_root_entry(iommu);
         }
 
diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
index 06e61e474856..6bdfbead82c4 100644
--- a/drivers/iommu/intel/iommu.h
+++ b/drivers/iommu/intel/iommu.h
@@ -184,7 +184,6 @@
 #define ecap_flts(e) (((e) >> 47) & 0x1)
 #define ecap_slts(e) (((e) >> 46) & 0x1)
 #define ecap_slads(e) (((e) >> 45) & 0x1)
-#define ecap_vcs(e) (((e) >> 44) & 0x1)
 #define ecap_smts(e) (((e) >> 43) & 0x1)
 #define ecap_dit(e) (((e) >> 41) & 0x1)
 #define ecap_pds(e) (((e) >> 42) & 0x1)
@@ -210,9 +209,6 @@
 #define ecap_max_handle_mask(e) (((e) >> 20) & 0xf)
 #define ecap_sc_support(e) (((e) >> 7) & 0x1) /* Snooping Control */
 
-/* Virtual command interface capability */
-#define vccap_pasid(v) (((v) & DMA_VCS_PAS)) /* PASID allocation */
-
 /* IOTLB_REG */
 #define DMA_TLB_FLUSH_GRANU_OFFSET 60
 #define DMA_TLB_GLOBAL_FLUSH (((u64)1) << 60)
@@ -307,8 +303,6 @@
 #define DMA_PRS_PPR ((u32)1)
 #define DMA_PRS_PRO ((u32)2)
 
-#define DMA_VCS_PAS ((u64)1)
-
 #define IOMMU_WAIT_OP(iommu, offset, op, cond, sts) \
 do { \
         cycles_t start_time = get_cycles(); \
@@ -560,7 +554,6 @@ struct intel_iommu {
         u64 reg_size; /* size of hw register set */
         u64 cap;
         u64 ecap;
-        u64 vccap;
         u32 gcmd; /* Holds TE, EAFL. Don't need SRTP, SFL, WBF */
         raw_spinlock_t register_lock; /* protect register handling */
         int seq_id; /* sequence id of the iommu */
@@ -583,7 +576,6 @@ struct intel_iommu {
         unsigned char prq_name[16]; /* Name for PRQ interrupt */
         unsigned long prq_seq_number;
         struct completion prq_complete;
-        struct ioasid_allocator_ops pasid_allocator; /* Custom allocator for PASIDs */
 #endif
         struct iopf_queue *iopf_queue;
         unsigned char iopfq_name[16];
-- 
2.25.1

From: Jacob Pan
To: LKML, iommu@lists.linux.dev, Lu Baolu, Joerg Roedel, Jean-Philippe Brucker, Robin Murphy
Cc: David Woodhouse, Raj Ashok, "Tian, Kevin", Yi Liu, Jason Gunthorpe, Jacob Pan
Subject: [PATCH 2/2] iommu/ioasid: Remove custom IOASID allocator
Date: Fri, 10 Feb 2023 15:02:06 -0800
Message-Id: <20230210230206.3160144-3-jacob.jun.pan@linux.intel.com>
In-Reply-To: <20230210230206.3160144-1-jacob.jun.pan@linux.intel.com>
References: <20230210230206.3160144-1-jacob.jun.pan@linux.intel.com>

The custom allocator feature was introduced to support VT-d's virtual
command, an enlightened interface designed for VMs to allocate PASIDs
from the host. Now that the VT-d virtual command feature, the sole user
of the custom allocator, has been removed, the custom allocator can be
safely removed as well.
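For reference, the indirection being deleted is sketched here (condensed
from the code removed in this patch; the wrapper function name is
hypothetical, used only to frame the dispatch that ioasid_alloc()
performed):

  /* Condensed from the removed code, for illustration only. */
  struct ioasid_allocator_data {
          struct ioasid_allocator_ops *ops;  /* alloc()/free() helpers + pdata */
          struct list_head list;             /* all registered allocators */
          struct list_head slist;            /* allocators sharing the same ops */
          unsigned long flags;               /* IOASID_ALLOCATOR_CUSTOM */
          struct xarray xa;                  /* shadow tracking of custom IDs */
          struct rcu_head rcu;
  };

  static struct ioasid_allocator_data *active_allocator = &default_allocator;

  /* Hypothetical wrapper showing how ioasid_alloc() dispatched. */
  static ioasid_t alloc_via_active(ioasid_t min, ioasid_t max,
                                   struct ioasid_data *data)
  {
          void *adata;

          /* Custom allocators take their own pdata, e.g. an intel_iommu. */
          adata = active_allocator->flags & IOASID_ALLOCATOR_CUSTOM ?
                  active_allocator->ops->pdata : data;
          return active_allocator->ops->alloc(min, max, adata);
  }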
Effectively, this returns the IOASID core to the original simple global
namespace allocator.

Signed-off-by: Jacob Pan
Reviewed-by: Jean-Philippe Brucker
Reviewed-by: Kevin Tian
---
 drivers/iommu/ioasid.c | 293 ++---------------------------------------
 include/linux/ioasid.h |  28 ----
 2 files changed, 9 insertions(+), 312 deletions(-)

diff --git a/drivers/iommu/ioasid.c b/drivers/iommu/ioasid.c
index a786c034907c..85715e171db2 100644
--- a/drivers/iommu/ioasid.c
+++ b/drivers/iommu/ioasid.c
@@ -16,246 +16,7 @@ struct ioasid_data {
         void *private;
         struct rcu_head rcu;
 };
-
-/*
- * struct ioasid_allocator_data - Internal data structure to hold information
- * about an allocator. There are two types of allocators:
- *
- * - Default allocator always has its own XArray to track the IOASIDs allocated.
- * - Custom allocators may share allocation helpers with different private data.
- *   Custom allocators that share the same helper functions also share the same
- *   XArray.
- * Rules:
- * 1. Default allocator is always available, not dynamically registered. This is
- *    to prevent race conditions with early boot code that want to register
- *    custom allocators or allocate IOASIDs.
- * 2. Custom allocators take precedence over the default allocator.
- * 3. When all custom allocators sharing the same helper functions are
- *    unregistered (e.g. due to hotplug), all outstanding IOASIDs must be
- *    freed. Otherwise, outstanding IOASIDs will be lost and orphaned.
- * 4. When switching between custom allocators sharing the same helper
- *    functions, outstanding IOASIDs are preserved.
- * 5. When switching between custom allocator and default allocator, all IOASIDs
- *    must be freed to ensure unadulterated space for the new allocator.
- *
- * @ops: allocator helper functions and its data
- * @list: registered custom allocators
- * @slist: allocators share the same ops but different data
- * @flags: attributes of the allocator
- * @xa: xarray holds the IOASID space
- * @rcu: used for kfree_rcu when unregistering allocator
- */
-struct ioasid_allocator_data {
-        struct ioasid_allocator_ops *ops;
-        struct list_head list;
-        struct list_head slist;
-#define IOASID_ALLOCATOR_CUSTOM BIT(0) /* Needs framework to track results */
-        unsigned long flags;
-        struct xarray xa;
-        struct rcu_head rcu;
-};
-
-static DEFINE_SPINLOCK(ioasid_allocator_lock);
-static LIST_HEAD(allocators_list);
-
-static ioasid_t default_alloc(ioasid_t min, ioasid_t max, void *opaque);
-static void default_free(ioasid_t ioasid, void *opaque);
-
-static struct ioasid_allocator_ops default_ops = {
-        .alloc = default_alloc,
-        .free = default_free,
-};
-
-static struct ioasid_allocator_data default_allocator = {
-        .ops = &default_ops,
-        .flags = 0,
-        .xa = XARRAY_INIT(ioasid_xa, XA_FLAGS_ALLOC),
-};
-
-static struct ioasid_allocator_data *active_allocator = &default_allocator;
-
-static ioasid_t default_alloc(ioasid_t min, ioasid_t max, void *opaque)
-{
-        ioasid_t id;
-
-        if (xa_alloc(&default_allocator.xa, &id, opaque, XA_LIMIT(min, max), GFP_ATOMIC)) {
-                pr_err("Failed to alloc ioasid from %d to %d\n", min, max);
-                return INVALID_IOASID;
-        }
-
-        return id;
-}
-
-static void default_free(ioasid_t ioasid, void *opaque)
-{
-        struct ioasid_data *ioasid_data;
-
-        ioasid_data = xa_erase(&default_allocator.xa, ioasid);
-        kfree_rcu(ioasid_data, rcu);
-}
-
-/* Allocate and initialize a new custom allocator with its helper functions */
-static struct ioasid_allocator_data *ioasid_alloc_allocator(struct ioasid_allocator_ops *ops)
-{
-        struct ioasid_allocator_data *ia_data;
-
-        ia_data = kzalloc(sizeof(*ia_data), GFP_ATOMIC);
-        if (!ia_data)
-                return NULL;
-
-        xa_init_flags(&ia_data->xa, XA_FLAGS_ALLOC);
-        INIT_LIST_HEAD(&ia_data->slist);
-        ia_data->flags |= IOASID_ALLOCATOR_CUSTOM;
-        ia_data->ops = ops;
-
-        /* For tracking custom allocators that share the same ops */
-        list_add_tail(&ops->list, &ia_data->slist);
-
-        return ia_data;
-}
-
-static bool use_same_ops(struct ioasid_allocator_ops *a, struct ioasid_allocator_ops *b)
-{
-        return (a->free == b->free) && (a->alloc == b->alloc);
-}
-
-/**
- * ioasid_register_allocator - register a custom allocator
- * @ops: the custom allocator ops to be registered
- *
- * Custom allocators take precedence over the default xarray based allocator.
- * Private data associated with the IOASID allocated by the custom allocators
- * are managed by IOASID framework similar to data stored in xa by default
- * allocator.
- *
- * There can be multiple allocators registered but only one is active. In case
- * of runtime removal of a custom allocator, the next one is activated based
- * on the registration ordering.
- *
- * Multiple allocators can share the same alloc() function, in this case the
- * IOASID space is shared.
- */
-int ioasid_register_allocator(struct ioasid_allocator_ops *ops)
-{
-        struct ioasid_allocator_data *ia_data;
-        struct ioasid_allocator_data *pallocator;
-        int ret = 0;
-
-        spin_lock(&ioasid_allocator_lock);
-
-        ia_data = ioasid_alloc_allocator(ops);
-        if (!ia_data) {
-                ret = -ENOMEM;
-                goto out_unlock;
-        }
-
-        /*
-         * No particular preference, we activate the first one and keep
-         * the later registered allocators in a list in case the first one gets
-         * removed due to hotplug.
-         */
-        if (list_empty(&allocators_list)) {
-                WARN_ON(active_allocator != &default_allocator);
-                /* Use this new allocator if default is not active */
-                if (xa_empty(&active_allocator->xa)) {
-                        rcu_assign_pointer(active_allocator, ia_data);
-                        list_add_tail(&ia_data->list, &allocators_list);
-                        goto out_unlock;
-                }
-                pr_warn("Default allocator active with outstanding IOASID\n");
-                ret = -EAGAIN;
-                goto out_free;
-        }
-
-        /* Check if the allocator is already registered */
-        list_for_each_entry(pallocator, &allocators_list, list) {
-                if (pallocator->ops == ops) {
-                        pr_err("IOASID allocator already registered\n");
-                        ret = -EEXIST;
-                        goto out_free;
-                } else if (use_same_ops(pallocator->ops, ops)) {
-                        /*
-                         * If the new allocator shares the same ops,
-                         * then they will share the same IOASID space.
-                         * We should put them under the same xarray.
-                         */
-                        list_add_tail(&ops->list, &pallocator->slist);
-                        goto out_free;
-                }
-        }
-        list_add_tail(&ia_data->list, &allocators_list);
-
-        spin_unlock(&ioasid_allocator_lock);
-        return 0;
-out_free:
-        kfree(ia_data);
-out_unlock:
-        spin_unlock(&ioasid_allocator_lock);
-        return ret;
-}
-EXPORT_SYMBOL_GPL(ioasid_register_allocator);
-
-/**
- * ioasid_unregister_allocator - Remove a custom IOASID allocator ops
- * @ops: the custom allocator to be removed
- *
- * Remove an allocator from the list, activate the next allocator in
- * the order it was registered. Or revert to default allocator if all
- * custom allocators are unregistered without outstanding IOASIDs.
- */
-void ioasid_unregister_allocator(struct ioasid_allocator_ops *ops)
-{
-        struct ioasid_allocator_data *pallocator;
-        struct ioasid_allocator_ops *sops;
-
-        spin_lock(&ioasid_allocator_lock);
-        if (list_empty(&allocators_list)) {
-                pr_warn("No custom IOASID allocators active!\n");
-                goto exit_unlock;
-        }
-
-        list_for_each_entry(pallocator, &allocators_list, list) {
-                if (!use_same_ops(pallocator->ops, ops))
-                        continue;
-
-                if (list_is_singular(&pallocator->slist)) {
-                        /* No shared helper functions */
-                        list_del(&pallocator->list);
-                        /*
-                         * All IOASIDs should have been freed before
-                         * the last allocator that shares the same ops
-                         * is unregistered.
-                         */
-                        WARN_ON(!xa_empty(&pallocator->xa));
-                        if (list_empty(&allocators_list)) {
-                                pr_info("No custom IOASID allocators, switch to default.\n");
-                                rcu_assign_pointer(active_allocator, &default_allocator);
-                        } else if (pallocator == active_allocator) {
-                                rcu_assign_pointer(active_allocator,
-                                                list_first_entry(&allocators_list,
-                                                                struct ioasid_allocator_data, list));
-                                pr_info("IOASID allocator changed");
-                        }
-                        kfree_rcu(pallocator, rcu);
-                        break;
-                }
-                /*
-                 * Find the matching shared ops to delete,
-                 * but keep outstanding IOASIDs
-                 */
-                list_for_each_entry(sops, &pallocator->slist, list) {
-                        if (sops == ops) {
-                                list_del(&ops->list);
-                                break;
-                        }
-                }
-                break;
-        }
-
-exit_unlock:
-        spin_unlock(&ioasid_allocator_lock);
-}
-EXPORT_SYMBOL_GPL(ioasid_unregister_allocator);
+static DEFINE_XARRAY_ALLOC(ioasid_xa);
 
 /**
  * ioasid_set_data - Set private data for an allocated ioasid
@@ -270,13 +31,13 @@ int ioasid_set_data(ioasid_t ioasid, void *data)
         struct ioasid_data *ioasid_data;
         int ret = 0;
 
-        spin_lock(&ioasid_allocator_lock);
-        ioasid_data = xa_load(&active_allocator->xa, ioasid);
+        xa_lock(&ioasid_xa);
+        ioasid_data = xa_load(&ioasid_xa, ioasid);
         if (ioasid_data)
                 rcu_assign_pointer(ioasid_data->private, data);
         else
                 ret = -ENOENT;
-        spin_unlock(&ioasid_allocator_lock);
+        xa_unlock(&ioasid_xa);
 
         /*
          * Wait for readers to stop accessing the old private data, so the
@@ -305,7 +66,6 @@ ioasid_t ioasid_alloc(struct ioasid_set *set, ioasid_t min, ioasid_t max,
                       void *private)
 {
         struct ioasid_data *data;
-        void *adata;
         ioasid_t id;
 
         data = kzalloc(sizeof(*data), GFP_ATOMIC);
@@ -314,32 +74,13 @@ ioasid_t ioasid_alloc(struct ioasid_set *set, ioasid_t min, ioasid_t max,
 
         data->set = set;
         data->private = private;
-
-        /*
-         * Custom allocator needs allocator data to perform platform specific
-         * operations.
-         */
-        spin_lock(&ioasid_allocator_lock);
-        adata = active_allocator->flags & IOASID_ALLOCATOR_CUSTOM ? active_allocator->ops->pdata : data;
-        id = active_allocator->ops->alloc(min, max, adata);
-        if (id == INVALID_IOASID) {
-                pr_err("Failed ASID allocation %lu\n", active_allocator->flags);
-                goto exit_free;
-        }
-
-        if ((active_allocator->flags & IOASID_ALLOCATOR_CUSTOM) &&
-             xa_alloc(&active_allocator->xa, &id, data, XA_LIMIT(id, id), GFP_ATOMIC)) {
-                /* Custom allocator needs framework to store and track allocation results */
-                pr_err("Failed to alloc ioasid from %d\n", id);
-                active_allocator->ops->free(id, active_allocator->ops->pdata);
+        if (xa_alloc(&ioasid_xa, &id, data, XA_LIMIT(min, max), GFP_ATOMIC)) {
+                pr_err("Failed to alloc ioasid from %d to %d\n", min, max);
                 goto exit_free;
         }
         data->id = id;
-
-        spin_unlock(&ioasid_allocator_lock);
         return id;
 exit_free:
-        spin_unlock(&ioasid_allocator_lock);
         kfree(data);
         return INVALID_IOASID;
 }
@@ -353,22 +94,8 @@ void ioasid_free(ioasid_t ioasid)
 {
         struct ioasid_data *ioasid_data;
 
-        spin_lock(&ioasid_allocator_lock);
-        ioasid_data = xa_load(&active_allocator->xa, ioasid);
-        if (!ioasid_data) {
-                pr_err("Trying to free unknown IOASID %u\n", ioasid);
-                goto exit_unlock;
-        }
-
-        active_allocator->ops->free(ioasid, active_allocator->ops->pdata);
-        /* Custom allocator needs additional steps to free the xa element */
-        if (active_allocator->flags & IOASID_ALLOCATOR_CUSTOM) {
-                ioasid_data = xa_erase(&active_allocator->xa, ioasid);
-                kfree_rcu(ioasid_data, rcu);
-        }
-
-exit_unlock:
-        spin_unlock(&ioasid_allocator_lock);
+        ioasid_data = xa_erase(&ioasid_xa, ioasid);
+        kfree_rcu(ioasid_data, rcu);
 }
 EXPORT_SYMBOL_GPL(ioasid_free);
 
@@ -391,11 +118,9 @@ void *ioasid_find(struct ioasid_set *set, ioasid_t ioasid,
 {
         void *priv;
         struct ioasid_data *ioasid_data;
-        struct ioasid_allocator_data *idata;
 
         rcu_read_lock();
-        idata = rcu_dereference(active_allocator);
-        ioasid_data = xa_load(&idata->xa, ioasid);
+        ioasid_data = xa_load(&ioasid_xa, ioasid);
         if (!ioasid_data) {
                 priv = ERR_PTR(-ENOENT);
                 goto unlock;
diff --git a/include/linux/ioasid.h b/include/linux/ioasid.h
index af1c9d62e642..fdfa70857227 100644
--- a/include/linux/ioasid.h
+++ b/include/linux/ioasid.h
@@ -7,28 +7,11 @@
 
 #define INVALID_IOASID ((ioasid_t)-1)
 typedef unsigned int ioasid_t;
-typedef ioasid_t (*ioasid_alloc_fn_t)(ioasid_t min, ioasid_t max, void *data);
-typedef void (*ioasid_free_fn_t)(ioasid_t ioasid, void *data);
 
 struct ioasid_set {
         int dummy;
 };
 
-/**
- * struct ioasid_allocator_ops - IOASID allocator helper functions and data
- *
- * @alloc: helper function to allocate IOASID
- * @free: helper function to free IOASID
- * @list: for tracking ops that share helper functions but not data
- * @pdata: data belong to the allocator, provided when calling alloc()
- */
-struct ioasid_allocator_ops {
-        ioasid_alloc_fn_t alloc;
-        ioasid_free_fn_t free;
-        struct list_head list;
-        void *pdata;
-};
-
 #define DECLARE_IOASID_SET(name) struct ioasid_set name = { 0 }
 
 #if IS_ENABLED(CONFIG_IOASID)
@@ -37,8 +20,6 @@ ioasid_t ioasid_alloc(struct ioasid_set *set, ioasid_t min, ioasid_t max,
 void ioasid_free(ioasid_t ioasid);
 void *ioasid_find(struct ioasid_set *set, ioasid_t ioasid,
                   bool (*getter)(void *));
-int ioasid_register_allocator(struct ioasid_allocator_ops *allocator);
-void ioasid_unregister_allocator(struct ioasid_allocator_ops *allocator);
 int ioasid_set_data(ioasid_t ioasid, void *data);
 static inline bool pasid_valid(ioasid_t ioasid)
 {
@@ -60,15 +41,6 @@ static inline void *ioasid_find(struct ioasid_set *set, ioasid_t ioasid,
         return NULL;
 }
 
-static inline int ioasid_register_allocator(struct ioasid_allocator_ops *allocator)
-{
-        return -ENOTSUPP;
-}
-
-static inline void ioasid_unregister_allocator(struct ioasid_allocator_ops *allocator)
-{
-}
-
 static inline int ioasid_set_data(ioasid_t ioasid, void *data)
 {
         return -ENOTSUPP;
-- 
2.25.1
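To close out the series, here is a minimal usage sketch of the API that
remains, assuming a tree with both patches applied; the set, range, and
getter names below are illustrative, not taken from the patches:

  #include <linux/ioasid.h>

  DECLARE_IOASID_SET(example_set);

  static int example_token = 42; /* caller-owned private data */

  static bool example_getter(void *priv)
  {
          /* Returning true tells ioasid_find() to hand back priv. */
          return true;
  }

  static int example_use(void)
  {
          void *found;
          ioasid_t id;

          /*
           * Allocate from the single global namespace; the range here
           * mimics a 20-bit PASID space and is purely illustrative.
           */
          id = ioasid_alloc(&example_set, 1, (1U << 20) - 1, &example_token);
          if (id == INVALID_IOASID)
                  return -ENOSPC;

          found = ioasid_find(&example_set, id, example_getter);
          /* found now points at example_token. */

          ioasid_free(id);
          return 0;
  }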