From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:03:25 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Mime-Version: 1.0
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-2-smostafa@google.com>
Subject: [RFC PATCH v2 01/58] iommu/io-pgtable-arm: Split the page table driver
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh
Content-Type: text/plain; charset="utf-8"

From: Jean-Philippe Brucker

To allow the KVM IOMMU driver to populate page tables using the
io-pgtable-arm code, move the shared bits into io-pgtable-arm-common.c.
Here we move the bulk of the common code, and a subsequent patch handles
the bits that require more care.

phys_to_virt() and virt_to_phys() need special handling here because the
hypervisor will have its own versions. It will also implement its own
versions of __arm_lpae_alloc_pages(), __arm_lpae_free_pages() and
__arm_lpae_sync_pte(), since the hypervisor needs some assistance for
allocating pages.

There are also some minor changes around mapping existing or unmapping
empty PTEs, as WARN_ON() is fatal in the hypervisor.
Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Mostafa Saleh
---
 drivers/iommu/Makefile                |   2 +-
 drivers/iommu/io-pgtable-arm-common.c | 625 ++++++++++++++++++++
 drivers/iommu/io-pgtable-arm.c        | 795 +-------------------------
 drivers/iommu/io-pgtable-arm.h        |  30 -
 include/linux/io-pgtable-arm.h        | 223 ++++++++
 5 files changed, 866 insertions(+), 809 deletions(-)
 create mode 100644 drivers/iommu/io-pgtable-arm-common.c
 delete mode 100644 drivers/iommu/io-pgtable-arm.h
 create mode 100644 include/linux/io-pgtable-arm.h

diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index 542760d963ec..70c5386ce298 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -7,7 +7,7 @@ obj-$(CONFIG_IOMMU_DEBUGFS) += iommu-debugfs.o
 obj-$(CONFIG_IOMMU_DMA) += dma-iommu.o
 obj-$(CONFIG_IOMMU_IO_PGTABLE) += io-pgtable.o
 obj-$(CONFIG_IOMMU_IO_PGTABLE_ARMV7S) += io-pgtable-arm-v7s.o
-obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o
+obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o io-pgtable-arm-common.o
 obj-$(CONFIG_IOMMU_IO_PGTABLE_DART) += io-pgtable-dart.o
 obj-$(CONFIG_IOMMU_IOVA) += iova.o
 obj-$(CONFIG_OF_IOMMU) += of_iommu.o
diff --git a/drivers/iommu/io-pgtable-arm-common.c b/drivers/iommu/io-pgtable-arm-common.c
new file mode 100644
index 000000000000..ef14a1b50d32
--- /dev/null
+++ b/drivers/iommu/io-pgtable-arm-common.c
@@ -0,0 +1,625 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * CPU-agnostic ARM page table allocator.
+ * A copy of this library is embedded in the KVM nVHE image.
+ *
+ * Copyright (C) 2022 Arm Limited
+ *
+ * Author: Will Deacon
+ */
+
+#include
+
+#include
+#include
+
+#define iopte_deref(pte, d) __arm_lpae_phys_to_virt(iopte_to_paddr(pte, d))
+
+static arm_lpae_iopte paddr_to_iopte(phys_addr_t paddr,
+				     struct arm_lpae_io_pgtable *data)
+{
+	arm_lpae_iopte pte = paddr;
+
+	/* Of the bits which overlap, either 51:48 or 15:12 are always RES0 */
+	return (pte | (pte >> (48 - 12))) & ARM_LPAE_PTE_ADDR_MASK;
+}
+
+static phys_addr_t iopte_to_paddr(arm_lpae_iopte pte,
+				  struct arm_lpae_io_pgtable *data)
+{
+	u64 paddr = pte & ARM_LPAE_PTE_ADDR_MASK;
+
+	if (ARM_LPAE_GRANULE(data) < SZ_64K)
+		return paddr;
+
+	/* Rotate the packed high-order bits back to the top */
+	return (paddr | (paddr << (48 - 12))) & (ARM_LPAE_PTE_ADDR_MASK << 4);
+}
+
+static void __arm_lpae_clear_pte(arm_lpae_iopte *ptep, struct io_pgtable_cfg *cfg, int num_entries)
+{
+	for (int i = 0; i < num_entries; i++)
+		ptep[i] = 0;
+
+	if (!cfg->coherent_walk && num_entries)
+		__arm_lpae_sync_pte(ptep, num_entries, cfg);
+}
+
+static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
+			       struct iommu_iotlb_gather *gather,
+			       unsigned long iova, size_t size, size_t pgcount,
+			       int lvl, arm_lpae_iopte *ptep);
+
+static void __arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
+				phys_addr_t paddr, arm_lpae_iopte prot,
+				int lvl, int num_entries, arm_lpae_iopte *ptep)
+{
+	arm_lpae_iopte pte = prot;
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	size_t sz = ARM_LPAE_BLOCK_SIZE(lvl, data);
+	int i;
+
+	if (data->iop.fmt != ARM_MALI_LPAE && lvl == ARM_LPAE_MAX_LEVELS - 1)
+		pte |= ARM_LPAE_PTE_TYPE_PAGE;
+	else
+		pte |= ARM_LPAE_PTE_TYPE_BLOCK;
+
+	for (i = 0; i < num_entries; i++)
+		ptep[i] = pte | paddr_to_iopte(paddr + i * sz, data);
+
+	if (!cfg->coherent_walk)
+		__arm_lpae_sync_pte(ptep, num_entries, cfg);
+}
+
+static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
+			     unsigned long iova, phys_addr_t paddr,
+			     arm_lpae_iopte prot, int lvl, int num_entries,
+			     arm_lpae_iopte *ptep)
+{
+	int i;
+
+	for (i = 0; i < num_entries; i++)
+		if (iopte_leaf(ptep[i], lvl, data->iop.fmt)) {
+			/* We require an unmap first */
+			return arm_lpae_map_exists();
+		} else if (iopte_type(ptep[i]) == ARM_LPAE_PTE_TYPE_TABLE) {
+			/*
+			 * We need to unmap and free the old table before
+			 * overwriting it with a block entry.
+			 */
+			arm_lpae_iopte *tblp;
+			size_t sz = ARM_LPAE_BLOCK_SIZE(lvl, data);
+
+			tblp = ptep - ARM_LPAE_LVL_IDX(iova, lvl, data);
+			if (__arm_lpae_unmap(data, NULL, iova + i * sz, sz, 1,
+					     lvl, tblp) != sz) {
+				WARN_ON(1);
+				return -EINVAL;
+			}
+		}
+
+	__arm_lpae_init_pte(data, paddr, prot, lvl, num_entries, ptep);
+	return 0;
+}
+
+static arm_lpae_iopte arm_lpae_install_table(arm_lpae_iopte *table,
+					     arm_lpae_iopte *ptep,
+					     arm_lpae_iopte curr,
+					     struct arm_lpae_io_pgtable *data)
+{
+	arm_lpae_iopte old, new;
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+
+	new = paddr_to_iopte(__arm_lpae_virt_to_phys(table), data) |
+		ARM_LPAE_PTE_TYPE_TABLE;
+	if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_NS)
+		new |= ARM_LPAE_PTE_NSTABLE;
+
+	/*
+	 * Ensure the table itself is visible before its PTE can be.
+	 * Whilst we could get away with cmpxchg64_release below, this
+	 * doesn't have any ordering semantics when !CONFIG_SMP.
+	 */
+	dma_wmb();
+
+	old = cmpxchg64_relaxed(ptep, curr, new);
+
+	if (cfg->coherent_walk || (old & ARM_LPAE_PTE_SW_SYNC))
+		return old;
+
+	/* Even if it's not ours, there's no point waiting; just kick it */
+	__arm_lpae_sync_pte(ptep, 1, cfg);
+	if (old == curr)
+		WRITE_ONCE(*ptep, new | ARM_LPAE_PTE_SW_SYNC);
+
+	return old;
+}
+
+static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
+			  phys_addr_t paddr, size_t size, size_t pgcount,
+			  arm_lpae_iopte prot, int lvl, arm_lpae_iopte *ptep,
+			  gfp_t gfp, size_t *mapped)
+{
+	arm_lpae_iopte *cptep, pte;
+	size_t block_size = ARM_LPAE_BLOCK_SIZE(lvl, data);
+	size_t tblsz = ARM_LPAE_GRANULE(data);
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	int ret = 0, num_entries, max_entries, map_idx_start;
+
+	/* Find our entry at the current level */
+	map_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data);
+	ptep += map_idx_start;
+
+	/* If we can install a leaf entry at this level, then do so */
+	if (size == block_size) {
+		max_entries = ARM_LPAE_PTES_PER_TABLE(data) - map_idx_start;
+		num_entries = min_t(int, pgcount, max_entries);
+		ret = arm_lpae_init_pte(data, iova, paddr, prot, lvl, num_entries, ptep);
+		if (!ret)
+			*mapped += num_entries * size;
+
+		return ret;
+	}
+
+	/* We can't allocate tables at the final level */
+	if (WARN_ON(lvl >= ARM_LPAE_MAX_LEVELS - 1))
+		return -EINVAL;
+
+	/* Grab a pointer to the next level */
+	pte = READ_ONCE(*ptep);
+	if (!pte) {
+		cptep = __arm_lpae_alloc_pages(tblsz, gfp, cfg, data->iop.cookie);
+		if (!cptep)
+			return -ENOMEM;
+
+		pte = arm_lpae_install_table(cptep, ptep, 0, data);
+		if (pte)
+			__arm_lpae_free_pages(cptep, tblsz, cfg, data->iop.cookie);
+	} else if (!cfg->coherent_walk && !(pte & ARM_LPAE_PTE_SW_SYNC)) {
+		__arm_lpae_sync_pte(ptep, 1, cfg);
+	}
+
+	if (pte && !iopte_leaf(pte, lvl, data->iop.fmt)) {
+		cptep = iopte_deref(pte, data);
+	} else if (pte) {
+		/* We require an unmap first */
+		return arm_lpae_unmap_empty();
+	}
+
+	/* Rinse, repeat */
+	return __arm_lpae_map(data, iova, paddr, size, pgcount, prot, lvl + 1,
+			      cptep, gfp, mapped);
+}
+
+static arm_lpae_iopte arm_lpae_prot_to_pte(struct arm_lpae_io_pgtable *data,
+					   int prot)
+{
+	arm_lpae_iopte pte;
+
+	if (data->iop.fmt == ARM_64_LPAE_S1 ||
+	    data->iop.fmt == ARM_32_LPAE_S1) {
+		pte = ARM_LPAE_PTE_nG;
+		if (!(prot & IOMMU_WRITE) && (prot & IOMMU_READ))
+			pte |= ARM_LPAE_PTE_AP_RDONLY;
+		else if (data->iop.cfg.quirks & IO_PGTABLE_QUIRK_ARM_HD)
+			pte |= ARM_LPAE_PTE_DBM;
+		if (!(prot & IOMMU_PRIV))
+			pte |= ARM_LPAE_PTE_AP_UNPRIV;
+	} else {
+		pte = ARM_LPAE_PTE_HAP_FAULT;
+		if (prot & IOMMU_READ)
+			pte |= ARM_LPAE_PTE_HAP_READ;
+		if (prot & IOMMU_WRITE)
+			pte |= ARM_LPAE_PTE_HAP_WRITE;
+	}
+
+	/*
+	 * Note that this logic is structured to accommodate Mali LPAE
+	 * having stage-1-like attributes but stage-2-like permissions.
+	 */
+	if (data->iop.fmt == ARM_64_LPAE_S2 ||
+	    data->iop.fmt == ARM_32_LPAE_S2) {
+		if (prot & IOMMU_MMIO)
+			pte |= ARM_LPAE_PTE_MEMATTR_DEV;
+		else if (prot & IOMMU_CACHE)
+			pte |= ARM_LPAE_PTE_MEMATTR_OIWB;
+		else
+			pte |= ARM_LPAE_PTE_MEMATTR_NC;
+	} else {
+		if (prot & IOMMU_MMIO)
+			pte |= (ARM_LPAE_MAIR_ATTR_IDX_DEV
+				<< ARM_LPAE_PTE_ATTRINDX_SHIFT);
+		else if (prot & IOMMU_CACHE)
+			pte |= (ARM_LPAE_MAIR_ATTR_IDX_CACHE
+				<< ARM_LPAE_PTE_ATTRINDX_SHIFT);
+	}
+
+	/*
+	 * Also Mali has its own notions of shareability wherein its Inner
+	 * domain covers the cores within the GPU, and its Outer domain is
+	 * "outside the GPU" (i.e. either the Inner or System domain in CPU
+	 * terms, depending on coherency).
+	 */
+	if (prot & IOMMU_CACHE && data->iop.fmt != ARM_MALI_LPAE)
+		pte |= ARM_LPAE_PTE_SH_IS;
+	else
+		pte |= ARM_LPAE_PTE_SH_OS;
+
+	if (prot & IOMMU_NOEXEC)
+		pte |= ARM_LPAE_PTE_XN;
+
+	if (data->iop.cfg.quirks & IO_PGTABLE_QUIRK_ARM_NS)
+		pte |= ARM_LPAE_PTE_NS;
+
+	if (data->iop.fmt != ARM_MALI_LPAE)
+		pte |= ARM_LPAE_PTE_AF;
+
+	return pte;
+}
+
+int arm_lpae_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+		       phys_addr_t paddr, size_t pgsize, size_t pgcount,
+		       int iommu_prot, gfp_t gfp, size_t *mapped)
+{
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	arm_lpae_iopte *ptep = data->pgd;
+	int ret, lvl = data->start_level;
+	arm_lpae_iopte prot;
+	long iaext = (s64)iova >> cfg->ias;
+
+	if (WARN_ON(!pgsize || (pgsize & cfg->pgsize_bitmap) != pgsize))
+		return -EINVAL;
+
+	if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_TTBR1)
+		iaext = ~iaext;
+	if (WARN_ON(iaext || paddr >> cfg->oas))
+		return -ERANGE;
+
+	if (!(iommu_prot & (IOMMU_READ | IOMMU_WRITE)))
+		return -EINVAL;
+
+	prot = arm_lpae_prot_to_pte(data, iommu_prot);
+	ret = __arm_lpae_map(data, iova, paddr, pgsize, pgcount, prot, lvl,
+			     ptep, gfp, mapped);
+	/*
+	 * Synchronise all PTE updates for the new mapping before there's
+	 * a chance for anything to kick off a table walk for the new iova.
+	 */
+	wmb();
+
+	return ret;
+}
+
+void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl,
+			     arm_lpae_iopte *ptep)
+{
+	arm_lpae_iopte *start, *end;
+	unsigned long table_size;
+
+	if (lvl == data->start_level)
+		table_size = ARM_LPAE_PGD_SIZE(data);
+	else
+		table_size = ARM_LPAE_GRANULE(data);
+
+	start = ptep;
+
+	/* Only leaf entries at the last level */
+	if (lvl == ARM_LPAE_MAX_LEVELS - 1)
+		end = ptep;
+	else
+		end = (void *)ptep + table_size;
+
+	while (ptep != end) {
+		arm_lpae_iopte pte = *ptep++;
+
+		if (!pte || iopte_leaf(pte, lvl, data->iop.fmt))
+			continue;
+
+		__arm_lpae_free_pgtable(data, lvl + 1, iopte_deref(pte, data));
+	}
+
+	__arm_lpae_free_pages(start, table_size, &data->iop.cfg, data->iop.cookie);
+}
+
+static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
+				       struct iommu_iotlb_gather *gather,
+				       unsigned long iova, size_t size,
+				       arm_lpae_iopte blk_pte, int lvl,
+				       arm_lpae_iopte *ptep, size_t pgcount)
+{
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	arm_lpae_iopte pte, *tablep;
+	phys_addr_t blk_paddr;
+	size_t tablesz = ARM_LPAE_GRANULE(data);
+	size_t split_sz = ARM_LPAE_BLOCK_SIZE(lvl, data);
+	int ptes_per_table = ARM_LPAE_PTES_PER_TABLE(data);
+	int i, unmap_idx_start = -1, num_entries = 0, max_entries;
+
+	if (WARN_ON(lvl == ARM_LPAE_MAX_LEVELS))
+		return 0;
+
+	tablep = __arm_lpae_alloc_pages(tablesz, GFP_ATOMIC, cfg, data->iop.cookie);
+	if (!tablep)
+		return 0; /* Bytes unmapped */
+
+	if (size == split_sz) {
+		unmap_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data);
+		max_entries = ptes_per_table - unmap_idx_start;
+		num_entries = min_t(int, pgcount, max_entries);
+	}
+
+	blk_paddr = iopte_to_paddr(blk_pte, data);
+	pte = iopte_prot(blk_pte);
+
+	for (i = 0; i < ptes_per_table; i++, blk_paddr += split_sz) {
+		/* Unmap! */
+		if (i >= unmap_idx_start && i < (unmap_idx_start + num_entries))
+			continue;
+
+		__arm_lpae_init_pte(data, blk_paddr, pte, lvl, 1, &tablep[i]);
+	}
+
+	pte = arm_lpae_install_table(tablep, ptep, blk_pte, data);
+	if (pte != blk_pte) {
+		__arm_lpae_free_pages(tablep, tablesz, cfg, data->iop.cookie);
+		/*
+		 * We may race against someone unmapping another part of this
+		 * block, but anything else is invalid. We can't misinterpret
+		 * a page entry here since we're never at the last level.
+		 */
+		if (iopte_type(pte) != ARM_LPAE_PTE_TYPE_TABLE)
+			return 0;
+
+		tablep = iopte_deref(pte, data);
+	} else if (unmap_idx_start >= 0) {
+		for (i = 0; i < num_entries; i++)
+			io_pgtable_tlb_add_page(&data->iop, gather, iova + i * size, size);
+
+		return num_entries * size;
+	}
+
+	return __arm_lpae_unmap(data, gather, iova, size, pgcount, lvl, tablep);
+}
+
+static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
+			       struct iommu_iotlb_gather *gather,
+			       unsigned long iova, size_t size, size_t pgcount,
+			       int lvl, arm_lpae_iopte *ptep)
+{
+	arm_lpae_iopte pte;
+	struct io_pgtable *iop = &data->iop;
+	int i = 0, num_entries, max_entries, unmap_idx_start;
+
+	/* Something went horribly wrong and we ran out of page table */
+	if (WARN_ON(lvl == ARM_LPAE_MAX_LEVELS))
+		return 0;
+
+	unmap_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data);
+	ptep += unmap_idx_start;
+	pte = READ_ONCE(*ptep);
+	if (WARN_ON(!pte))
+		return 0;
+
+	/* If the size matches this level, we're in the right place */
+	if (size == ARM_LPAE_BLOCK_SIZE(lvl, data)) {
+		max_entries = ARM_LPAE_PTES_PER_TABLE(data) - unmap_idx_start;
+		num_entries = min_t(int, pgcount, max_entries);
+
+		/* Find and handle non-leaf entries */
+		for (i = 0; i < num_entries; i++) {
+			pte = READ_ONCE(ptep[i]);
+			if (WARN_ON(!pte))
+				break;
+
+			if (!iopte_leaf(pte, lvl, iop->fmt)) {
+				__arm_lpae_clear_pte(&ptep[i], &iop->cfg, 1);
+
+				/* Also flush any partial walks */
+				io_pgtable_tlb_flush_walk(iop, iova + i * size, size,
+							  ARM_LPAE_GRANULE(data));
+				__arm_lpae_free_pgtable(data, lvl + 1, iopte_deref(pte, data));
+			}
+		}
+
+		/* Clear the remaining entries */
+		__arm_lpae_clear_pte(ptep, &iop->cfg, i);
+
+		if (gather && !iommu_iotlb_gather_queued(gather))
+			for (int j = 0; j < i; j++)
+				io_pgtable_tlb_add_page(iop, gather, iova + j * size, size);
+
+		return i * size;
+	} else if (iopte_leaf(pte, lvl, iop->fmt)) {
+		/*
+		 * Insert a table at the next level to map the old region,
+		 * minus the part we want to unmap
+		 */
+		return arm_lpae_split_blk_unmap(data, gather, iova, size, pte,
+						lvl + 1, ptep, pgcount);
+	}
+
+	/* Keep on walkin' */
+	ptep = iopte_deref(pte, data);
+	return __arm_lpae_unmap(data, gather, iova, size, pgcount, lvl + 1, ptep);
+}
+
+size_t arm_lpae_unmap_pages(struct io_pgtable_ops *ops, unsigned long iova,
+			    size_t pgsize, size_t pgcount,
+			    struct iommu_iotlb_gather *gather)
+{
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	arm_lpae_iopte *ptep = data->pgd;
+	long iaext = (s64)iova >> cfg->ias;
+
+	if (WARN_ON(!pgsize || (pgsize & cfg->pgsize_bitmap) != pgsize || !pgcount))
+		return 0;
+
+	if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_TTBR1)
+		iaext = ~iaext;
+	if (WARN_ON(iaext))
+		return 0;
+
+	return __arm_lpae_unmap(data, gather, iova, pgsize, pgcount,
+				data->start_level, ptep);
+}
+
+static int __arm_lpae_iopte_walk(struct arm_lpae_io_pgtable *data,
+				 struct io_pgtable_walk_data *walk_data,
+				 arm_lpae_iopte *ptep,
+				 int lvl);
+
+struct iova_to_phys_data {
+	arm_lpae_iopte pte;
+	int lvl;
+};
+
+static int visit_iova_to_phys(struct io_pgtable_walk_data *walk_data, int lvl,
+			      arm_lpae_iopte *ptep, size_t size)
+{
+	struct iova_to_phys_data *data = walk_data->data;
+	data->pte = *ptep;
+	data->lvl = lvl;
+	return 0;
+}
+
+phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops,
+				  unsigned long iova)
+{
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	struct iova_to_phys_data d;
+	struct io_pgtable_walk_data walk_data = {
+		.data = &d,
+		.visit = visit_iova_to_phys,
+		.addr = iova,
+		.end = iova + 1,
+	};
+	int ret;
+
+	ret = __arm_lpae_iopte_walk(data, &walk_data, data->pgd, data->start_level);
+	if (ret)
+		return 0;
+
+	iova &= (ARM_LPAE_BLOCK_SIZE(d.lvl, data) - 1);
+	return iopte_to_paddr(d.pte, data) | iova;
+}
+
+static int visit_pgtable_walk(struct io_pgtable_walk_data *walk_data, int lvl,
+			      arm_lpae_iopte *ptep, size_t size)
+{
+	struct arm_lpae_io_pgtable_walk_data *data = walk_data->data;
+	data->ptes[data->level++] = *ptep;
+	return 0;
+}
+
+int arm_lpae_pgtable_walk(struct io_pgtable_ops *ops, unsigned long iova, void *wd)
+{
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	struct io_pgtable_walk_data walk_data = {
+		.data = wd,
+		.visit = visit_pgtable_walk,
+		.addr = iova,
+		.end = iova + 1,
+	};
+
+	((struct arm_lpae_io_pgtable_walk_data *)wd)->level = 0;
+
+	return __arm_lpae_iopte_walk(data, &walk_data, data->pgd, data->start_level);
+}
+
+static int io_pgtable_visit(struct arm_lpae_io_pgtable *data,
+			    struct io_pgtable_walk_data *walk_data,
+			    arm_lpae_iopte *ptep, int lvl)
+{
+	struct io_pgtable *iop = &data->iop;
+	arm_lpae_iopte pte = READ_ONCE(*ptep);
+
+	size_t size = ARM_LPAE_BLOCK_SIZE(lvl, data);
+	int ret = walk_data->visit(walk_data, lvl, ptep, size);
+	if (ret)
+		return ret;
+
+	if (iopte_leaf(pte, lvl, iop->fmt)) {
+		walk_data->addr += size;
+		return 0;
+	}
+
+	if (!iopte_table(pte, lvl)) {
+		return -EINVAL;
+	}
+
+	ptep = iopte_deref(pte, data);
+	return __arm_lpae_iopte_walk(data, walk_data, ptep, lvl + 1);
+}
+
+static int __arm_lpae_iopte_walk(struct arm_lpae_io_pgtable *data,
+				 struct io_pgtable_walk_data *walk_data,
+				 arm_lpae_iopte *ptep,
+				 int lvl)
+{
+	u32 idx;
+	int max_entries, ret;
+
+	if (WARN_ON(lvl == ARM_LPAE_MAX_LEVELS))
+		return -EINVAL;
+
+	if (lvl == data->start_level)
+		max_entries = ARM_LPAE_PGD_SIZE(data) / sizeof(arm_lpae_iopte);
+	else
+		max_entries = ARM_LPAE_PTES_PER_TABLE(data);
+
+	for (idx = ARM_LPAE_LVL_IDX(walk_data->addr, lvl, data);
+	     (idx < max_entries) && (walk_data->addr < walk_data->end); ++idx) {
+		ret = io_pgtable_visit(data, walk_data, ptep + idx, lvl);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int visit_dirty(struct io_pgtable_walk_data *walk_data, int lvl,
+		       arm_lpae_iopte *ptep, size_t size)
+{
+	struct iommu_dirty_bitmap *dirty = walk_data->data;
+
+	if (!iopte_leaf(*ptep, lvl, walk_data->iop->fmt))
+		return 0;
+
+	if (iopte_writeable_dirty(*ptep)) {
+		iommu_dirty_bitmap_record(dirty, walk_data->addr, size);
+		if (!(walk_data->flags & IOMMU_DIRTY_NO_CLEAR))
+			iopte_set_writeable_clean(ptep);
+	}
+
+	return 0;
+}
+
+int arm_lpae_read_and_clear_dirty(struct io_pgtable_ops *ops,
+				  unsigned long iova, size_t size,
+				  unsigned long flags,
+				  struct iommu_dirty_bitmap *dirty)
+{
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	struct io_pgtable_walk_data walk_data = {
+		.iop = &data->iop,
+		.data = dirty,
+		.visit = visit_dirty,
+		.flags = flags,
+		.addr = iova,
+		.end = iova + size,
+	};
+	arm_lpae_iopte *ptep = data->pgd;
+	int lvl = data->start_level;
+
+	if (WARN_ON(!size))
+		return -EINVAL;
+	if (WARN_ON((iova + size - 1) & ~(BIT(cfg->ias) - 1)))
+		return -EINVAL;
+	if (data->iop.fmt != ARM_64_LPAE_S1)
+		return -EINVAL;
+
+	return __arm_lpae_iopte_walk(data, &walk_data, ptep, lvl);
+}
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 6739e1fa54ec..cb4eb513adbf 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
  * CPU-agnostic ARM page table allocator.
+ * Host-specific functions. The rest is in io-pgtable-arm-common.c.
  *
  * Copyright (C) 2014 ARM Limited
  *
@@ -11,7 +12,7 @@
 
 #include
 #include
-#include
+#include
 #include
 #include
 #include
@@ -20,195 +21,33 @@
 
 #include
 
-#include "io-pgtable-arm.h"
 #include "iommu-pages.h"
 
 #define ARM_LPAE_MAX_ADDR_BITS		52
 #define ARM_LPAE_S2_MAX_CONCAT_PAGES	16
-#define ARM_LPAE_MAX_LEVELS		4
 
-/* Struct accessors */
-#define io_pgtable_to_data(x)						\
-	container_of((x), struct arm_lpae_io_pgtable, iop)
-
-#define io_pgtable_ops_to_data(x)					\
-	io_pgtable_to_data(io_pgtable_ops_to_pgtable(x))
-
-/*
- * Calculate the right shift amount to get to the portion describing level l
- * in a virtual address mapped by the pagetable in d.
- */
-#define ARM_LPAE_LVL_SHIFT(l,d)						\
-	(((ARM_LPAE_MAX_LEVELS - (l)) * (d)->bits_per_level) +		\
-	ilog2(sizeof(arm_lpae_iopte)))
-
-#define ARM_LPAE_GRANULE(d)						\
-	(sizeof(arm_lpae_iopte) << (d)->bits_per_level)
-#define ARM_LPAE_PGD_SIZE(d)						\
-	(sizeof(arm_lpae_iopte) << (d)->pgd_bits)
-
-#define ARM_LPAE_PTES_PER_TABLE(d)					\
-	(ARM_LPAE_GRANULE(d) >> ilog2(sizeof(arm_lpae_iopte)))
-
-/*
- * Calculate the index at level l used to map virtual address a using the
- * pagetable in d.
- */
-#define ARM_LPAE_PGD_IDX(l,d)						\
-	((l) == (d)->start_level ? (d)->pgd_bits - (d)->bits_per_level : 0)
-
-#define ARM_LPAE_LVL_IDX(a,l,d)						\
-	(((u64)(a) >> ARM_LPAE_LVL_SHIFT(l,d)) &			\
-	 ((1 << ((d)->bits_per_level + ARM_LPAE_PGD_IDX(l,d))) - 1))
-
-/* Calculate the block/page mapping size at level l for pagetable in d. */
-#define ARM_LPAE_BLOCK_SIZE(l,d)	(1ULL << ARM_LPAE_LVL_SHIFT(l,d))
-
-/* Page table bits */
-#define ARM_LPAE_PTE_TYPE_SHIFT		0
-#define ARM_LPAE_PTE_TYPE_MASK		0x3
-
-#define ARM_LPAE_PTE_TYPE_BLOCK		1
-#define ARM_LPAE_PTE_TYPE_TABLE		3
-#define ARM_LPAE_PTE_TYPE_PAGE		3
-
-#define ARM_LPAE_PTE_ADDR_MASK		GENMASK_ULL(47,12)
-
-#define ARM_LPAE_PTE_NSTABLE		(((arm_lpae_iopte)1) << 63)
-#define ARM_LPAE_PTE_XN			(((arm_lpae_iopte)3) << 53)
-#define ARM_LPAE_PTE_DBM		(((arm_lpae_iopte)1) << 51)
-#define ARM_LPAE_PTE_AF			(((arm_lpae_iopte)1) << 10)
-#define ARM_LPAE_PTE_SH_NS		(((arm_lpae_iopte)0) << 8)
-#define ARM_LPAE_PTE_SH_OS		(((arm_lpae_iopte)2) << 8)
-#define ARM_LPAE_PTE_SH_IS		(((arm_lpae_iopte)3) << 8)
-#define ARM_LPAE_PTE_NS			(((arm_lpae_iopte)1) << 5)
-#define ARM_LPAE_PTE_VALID		(((arm_lpae_iopte)1) << 0)
-
-#define ARM_LPAE_PTE_ATTR_LO_MASK	(((arm_lpae_iopte)0x3ff) << 2)
-/* Ignore the contiguous bit for block splitting */
-#define ARM_LPAE_PTE_ATTR_HI_MASK	(ARM_LPAE_PTE_XN | ARM_LPAE_PTE_DBM)
-#define ARM_LPAE_PTE_ATTR_MASK		(ARM_LPAE_PTE_ATTR_LO_MASK |	\
-					 ARM_LPAE_PTE_ATTR_HI_MASK)
-/* Software bit for solving coherency races */
-#define ARM_LPAE_PTE_SW_SYNC		(((arm_lpae_iopte)1) << 55)
-
-/* Stage-1 PTE */
-#define ARM_LPAE_PTE_AP_UNPRIV		(((arm_lpae_iopte)1) << 6)
-#define ARM_LPAE_PTE_AP_RDONLY_BIT	7
-#define ARM_LPAE_PTE_AP_RDONLY		(((arm_lpae_iopte)1) <<		\
-					 ARM_LPAE_PTE_AP_RDONLY_BIT)
-#define ARM_LPAE_PTE_AP_WR_CLEAN_MASK	(ARM_LPAE_PTE_AP_RDONLY | \
-					 ARM_LPAE_PTE_DBM)
-#define ARM_LPAE_PTE_ATTRINDX_SHIFT	2
-#define ARM_LPAE_PTE_nG			(((arm_lpae_iopte)1) << 11)
-
-/* Stage-2 PTE */
-#define ARM_LPAE_PTE_HAP_FAULT		(((arm_lpae_iopte)0) << 6)
-#define ARM_LPAE_PTE_HAP_READ		(((arm_lpae_iopte)1) << 6)
-#define ARM_LPAE_PTE_HAP_WRITE		(((arm_lpae_iopte)2) << 6)
-#define ARM_LPAE_PTE_MEMATTR_OIWB	(((arm_lpae_iopte)0xf) << 2)
-#define ARM_LPAE_PTE_MEMATTR_NC		(((arm_lpae_iopte)0x5) << 2)
-#define ARM_LPAE_PTE_MEMATTR_DEV	(((arm_lpae_iopte)0x1) << 2)
-
-/* Register bits */
-#define ARM_LPAE_VTCR_SL0_MASK		0x3
-
-#define ARM_LPAE_TCR_T0SZ_SHIFT		0
-
-#define ARM_LPAE_VTCR_PS_SHIFT		16
-#define ARM_LPAE_VTCR_PS_MASK		0x7
-
-#define ARM_LPAE_MAIR_ATTR_SHIFT(n)	((n) << 3)
-#define ARM_LPAE_MAIR_ATTR_MASK		0xff
-#define ARM_LPAE_MAIR_ATTR_DEVICE	0x04
-#define ARM_LPAE_MAIR_ATTR_NC		0x44
-#define ARM_LPAE_MAIR_ATTR_INC_OWBRWA	0xf4
-#define ARM_LPAE_MAIR_ATTR_WBRWA	0xff
-#define ARM_LPAE_MAIR_ATTR_IDX_NC	0
-#define ARM_LPAE_MAIR_ATTR_IDX_CACHE	1
-#define ARM_LPAE_MAIR_ATTR_IDX_DEV	2
-#define ARM_LPAE_MAIR_ATTR_IDX_INC_OCACHE	3
-
-#define ARM_MALI_LPAE_TTBR_ADRMODE_TABLE (3u << 0)
-#define ARM_MALI_LPAE_TTBR_READ_INNER	BIT(2)
-#define ARM_MALI_LPAE_TTBR_SHARE_OUTER	BIT(4)
-
-#define ARM_MALI_LPAE_MEMATTR_IMP_DEF	0x88ULL
-#define ARM_MALI_LPAE_MEMATTR_WRITE_ALLOC 0x8DULL
-
-/* IOPTE accessors */
-#define iopte_deref(pte,d) __va(iopte_to_paddr(pte, d))
-
-#define iopte_type(pte)					\
-	(((pte) >> ARM_LPAE_PTE_TYPE_SHIFT) & ARM_LPAE_PTE_TYPE_MASK)
-
-#define iopte_prot(pte)	((pte) & ARM_LPAE_PTE_ATTR_MASK)
-
-#define iopte_writeable_dirty(pte)				\
-	(((pte) & ARM_LPAE_PTE_AP_WR_CLEAN_MASK) == ARM_LPAE_PTE_DBM)
-
-#define iopte_set_writeable_clean(ptep)				\
-	set_bit(ARM_LPAE_PTE_AP_RDONLY_BIT, (unsigned long *)(ptep))
-
-struct arm_lpae_io_pgtable {
-	struct io_pgtable	iop;
-
-	int			pgd_bits;
-	int			start_level;
-	int			bits_per_level;
-
-	void			*pgd;
-};
-
-typedef u64 arm_lpae_iopte;
-
-static inline bool iopte_leaf(arm_lpae_iopte pte, int lvl,
-			      enum io_pgtable_fmt fmt)
-{
-	if (lvl == (ARM_LPAE_MAX_LEVELS - 1) && fmt != ARM_MALI_LPAE)
-		return iopte_type(pte) == ARM_LPAE_PTE_TYPE_PAGE;
-
-	return iopte_type(pte) == ARM_LPAE_PTE_TYPE_BLOCK;
-}
+static bool selftest_running = false;
 
-static inline bool iopte_table(arm_lpae_iopte pte, int lvl)
+int arm_lpae_map_exists(void)
 {
-	if (lvl == (ARM_LPAE_MAX_LEVELS - 1))
-		return false;
-	return iopte_type(pte) == ARM_LPAE_PTE_TYPE_TABLE;
+	WARN_ON(!selftest_running);
+	return -EEXIST;
 }
 
-static arm_lpae_iopte paddr_to_iopte(phys_addr_t paddr,
-				     struct arm_lpae_io_pgtable *data)
+int arm_lpae_unmap_empty(void)
 {
-	arm_lpae_iopte pte = paddr;
-
-	/* Of the bits which overlap, either 51:48 or 15:12 are always RES0 */
-	return (pte | (pte >> (48 - 12))) & ARM_LPAE_PTE_ADDR_MASK;
+	WARN_ON(!selftest_running);
+	return -EEXIST;
 }
 
-static phys_addr_t iopte_to_paddr(arm_lpae_iopte pte,
-				  struct arm_lpae_io_pgtable *data)
-{
-	u64 paddr = pte & ARM_LPAE_PTE_ADDR_MASK;
-
-	if (ARM_LPAE_GRANULE(data) < SZ_64K)
-		return paddr;
-
-	/* Rotate the packed high-order bits back to the top */
-	return (paddr | (paddr << (48 - 12))) & (ARM_LPAE_PTE_ADDR_MASK << 4);
-}
-
-static bool selftest_running = false;
-
 static dma_addr_t __arm_lpae_dma_addr(void *pages)
 {
 	return (dma_addr_t)virt_to_phys(pages);
 }
 
-static void *__arm_lpae_alloc_pages(size_t size, gfp_t gfp,
-				    struct io_pgtable_cfg *cfg,
-				    void *cookie)
+void *__arm_lpae_alloc_pages(size_t size, gfp_t gfp,
+			     struct io_pgtable_cfg *cfg,
+			     void *cookie)
 {
 	struct device *dev = cfg->iommu_dev;
 	int order = get_order(size);
@@ -253,9 +92,9 @@ static void *__arm_lpae_alloc_pages(size_t size, gfp_t gfp,
 	return NULL;
 }
 
-static void __arm_lpae_free_pages(void *pages, size_t size,
-				  struct io_pgtable_cfg *cfg,
-				  void *cookie)
+void __arm_lpae_free_pages(void *pages, size_t size,
+			   struct io_pgtable_cfg *cfg,
+			   void *cookie)
 {
 	if (!cfg->coherent_walk)
 		dma_unmap_single(cfg->iommu_dev, __arm_lpae_dma_addr(pages),
@@ -267,300 +106,13 @@ static void __arm_lpae_free_pages(void *pages, size_t size,
 	iommu_free_pages(pages, get_order(size));
 }
 
-static void __arm_lpae_sync_pte(arm_lpae_iopte *ptep, int num_entries,
-				struct io_pgtable_cfg *cfg)
+void __arm_lpae_sync_pte(arm_lpae_iopte *ptep, int num_entries,
+			 struct io_pgtable_cfg *cfg)
 {
 	dma_sync_single_for_device(cfg->iommu_dev, __arm_lpae_dma_addr(ptep),
 				   sizeof(*ptep) * num_entries, DMA_TO_DEVICE);
 }
 
-static void __arm_lpae_clear_pte(arm_lpae_iopte *ptep, struct io_pgtable_cfg *cfg, int num_entries)
-{
-	for (int i = 0; i < num_entries; i++)
-		ptep[i] = 0;
-
-	if (!cfg->coherent_walk && num_entries)
-		__arm_lpae_sync_pte(ptep, num_entries, cfg);
-}
-
-static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
-			       struct iommu_iotlb_gather *gather,
-			       unsigned long iova, size_t size, size_t pgcount,
-			       int lvl, arm_lpae_iopte *ptep);
-
-static void __arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
-				phys_addr_t paddr, arm_lpae_iopte prot,
-				int lvl, int num_entries, arm_lpae_iopte *ptep)
-{
-	arm_lpae_iopte pte = prot;
-	struct io_pgtable_cfg *cfg = &data->iop.cfg;
-	size_t sz = ARM_LPAE_BLOCK_SIZE(lvl, data);
-	int i;
-
-	if (data->iop.fmt != ARM_MALI_LPAE && lvl == ARM_LPAE_MAX_LEVELS - 1)
-		pte |= ARM_LPAE_PTE_TYPE_PAGE;
-	else
-		pte |= ARM_LPAE_PTE_TYPE_BLOCK;
-
-	for (i = 0; i < num_entries; i++)
-		ptep[i] = pte | paddr_to_iopte(paddr + i * sz, data);
-
-	if (!cfg->coherent_walk)
-		__arm_lpae_sync_pte(ptep, num_entries, cfg);
-}
-
-static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
-			     unsigned long iova, phys_addr_t paddr,
-			     arm_lpae_iopte prot, int lvl, int num_entries,
-			     arm_lpae_iopte *ptep)
-{
-	int i;
-
-	for (i = 0; i < num_entries; i++)
-		if (iopte_leaf(ptep[i], lvl, data->iop.fmt)) {
-			/* We require an unmap first */
-			WARN_ON(!selftest_running);
-			return -EEXIST;
-		} else if (iopte_type(ptep[i]) == ARM_LPAE_PTE_TYPE_TABLE) {
-			/*
-			 * We need to unmap and free the old table before
-			 * overwriting it with a block entry.
-			 */
-			arm_lpae_iopte *tblp;
-			size_t sz = ARM_LPAE_BLOCK_SIZE(lvl, data);
-
-			tblp = ptep - ARM_LPAE_LVL_IDX(iova, lvl, data);
-			if (__arm_lpae_unmap(data, NULL, iova + i * sz, sz, 1,
-					     lvl, tblp) != sz) {
-				WARN_ON(1);
-				return -EINVAL;
-			}
-		}
-
-	__arm_lpae_init_pte(data, paddr, prot, lvl, num_entries, ptep);
-	return 0;
-}
-
-static arm_lpae_iopte arm_lpae_install_table(arm_lpae_iopte *table,
-					     arm_lpae_iopte *ptep,
-					     arm_lpae_iopte curr,
-					     struct arm_lpae_io_pgtable *data)
-{
-	arm_lpae_iopte old, new;
-	struct io_pgtable_cfg *cfg = &data->iop.cfg;
-
-	new = paddr_to_iopte(__pa(table), data) | ARM_LPAE_PTE_TYPE_TABLE;
-	if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_NS)
-		new |= ARM_LPAE_PTE_NSTABLE;
-
-	/*
-	 * Ensure the table itself is visible before its PTE can be.
-	 * Whilst we could get away with cmpxchg64_release below, this
-	 * doesn't have any ordering semantics when !CONFIG_SMP.
-	 */
-	dma_wmb();
-
-	old = cmpxchg64_relaxed(ptep, curr, new);
-
-	if (cfg->coherent_walk || (old & ARM_LPAE_PTE_SW_SYNC))
-		return old;
-
-	/* Even if it's not ours, there's no point waiting; just kick it */
-	__arm_lpae_sync_pte(ptep, 1, cfg);
-	if (old == curr)
-		WRITE_ONCE(*ptep, new | ARM_LPAE_PTE_SW_SYNC);
-
-	return old;
-}
-
-static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
-			  phys_addr_t paddr, size_t size, size_t pgcount,
-			  arm_lpae_iopte prot, int lvl, arm_lpae_iopte *ptep,
-			  gfp_t gfp, size_t *mapped)
-{
-	arm_lpae_iopte *cptep, pte;
-	size_t block_size = ARM_LPAE_BLOCK_SIZE(lvl, data);
-	size_t tblsz = ARM_LPAE_GRANULE(data);
-	struct io_pgtable_cfg *cfg = &data->iop.cfg;
-	int ret = 0, num_entries, max_entries, map_idx_start;
-
-	/* Find our entry at the current level */
-	map_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data);
-	ptep += map_idx_start;
-
-	/* If we can install a leaf entry at this level, then do so */
-	if (size == block_size) {
-		max_entries = ARM_LPAE_PTES_PER_TABLE(data) - map_idx_start;
- num_entries =3D min_t(int, pgcount, max_entries); - ret =3D arm_lpae_init_pte(data, iova, paddr, prot, lvl, num_entries, pte= p); - if (!ret) - *mapped +=3D num_entries * size; - - return ret; - } - - /* We can't allocate tables at the final level */ - if (WARN_ON(lvl >=3D ARM_LPAE_MAX_LEVELS - 1)) - return -EINVAL; - - /* Grab a pointer to the next level */ - pte =3D READ_ONCE(*ptep); - if (!pte) { - cptep =3D __arm_lpae_alloc_pages(tblsz, gfp, cfg, data->iop.cookie); - if (!cptep) - return -ENOMEM; - - pte =3D arm_lpae_install_table(cptep, ptep, 0, data); - if (pte) - __arm_lpae_free_pages(cptep, tblsz, cfg, data->iop.cookie); - } else if (!cfg->coherent_walk && !(pte & ARM_LPAE_PTE_SW_SYNC)) { - __arm_lpae_sync_pte(ptep, 1, cfg); - } - - if (pte && !iopte_leaf(pte, lvl, data->iop.fmt)) { - cptep =3D iopte_deref(pte, data); - } else if (pte) { - /* We require an unmap first */ - WARN_ON(!selftest_running); - return -EEXIST; - } - - /* Rinse, repeat */ - return __arm_lpae_map(data, iova, paddr, size, pgcount, prot, lvl + 1, - cptep, gfp, mapped); -} - -static arm_lpae_iopte arm_lpae_prot_to_pte(struct arm_lpae_io_pgtable *dat= a, - int prot) -{ - arm_lpae_iopte pte; - - if (data->iop.fmt =3D=3D ARM_64_LPAE_S1 || - data->iop.fmt =3D=3D ARM_32_LPAE_S1) { - pte =3D ARM_LPAE_PTE_nG; - if (!(prot & IOMMU_WRITE) && (prot & IOMMU_READ)) - pte |=3D ARM_LPAE_PTE_AP_RDONLY; - else if (data->iop.cfg.quirks & IO_PGTABLE_QUIRK_ARM_HD) - pte |=3D ARM_LPAE_PTE_DBM; - if (!(prot & IOMMU_PRIV)) - pte |=3D ARM_LPAE_PTE_AP_UNPRIV; - } else { - pte =3D ARM_LPAE_PTE_HAP_FAULT; - if (prot & IOMMU_READ) - pte |=3D ARM_LPAE_PTE_HAP_READ; - if (prot & IOMMU_WRITE) - pte |=3D ARM_LPAE_PTE_HAP_WRITE; - } - - /* - * Note that this logic is structured to accommodate Mali LPAE - * having stage-1-like attributes but stage-2-like permissions. 
- */ - if (data->iop.fmt =3D=3D ARM_64_LPAE_S2 || - data->iop.fmt =3D=3D ARM_32_LPAE_S2) { - if (prot & IOMMU_MMIO) - pte |=3D ARM_LPAE_PTE_MEMATTR_DEV; - else if (prot & IOMMU_CACHE) - pte |=3D ARM_LPAE_PTE_MEMATTR_OIWB; - else - pte |=3D ARM_LPAE_PTE_MEMATTR_NC; - } else { - if (prot & IOMMU_MMIO) - pte |=3D (ARM_LPAE_MAIR_ATTR_IDX_DEV - << ARM_LPAE_PTE_ATTRINDX_SHIFT); - else if (prot & IOMMU_CACHE) - pte |=3D (ARM_LPAE_MAIR_ATTR_IDX_CACHE - << ARM_LPAE_PTE_ATTRINDX_SHIFT); - } - - /* - * Also Mali has its own notions of shareability wherein its Inner - * domain covers the cores within the GPU, and its Outer domain is - * "outside the GPU" (i.e. either the Inner or System domain in CPU - * terms, depending on coherency). - */ - if (prot & IOMMU_CACHE && data->iop.fmt !=3D ARM_MALI_LPAE) - pte |=3D ARM_LPAE_PTE_SH_IS; - else - pte |=3D ARM_LPAE_PTE_SH_OS; - - if (prot & IOMMU_NOEXEC) - pte |=3D ARM_LPAE_PTE_XN; - - if (data->iop.cfg.quirks & IO_PGTABLE_QUIRK_ARM_NS) - pte |=3D ARM_LPAE_PTE_NS; - - if (data->iop.fmt !=3D ARM_MALI_LPAE) - pte |=3D ARM_LPAE_PTE_AF; - - return pte; -} - -static int arm_lpae_map_pages(struct io_pgtable_ops *ops, unsigned long io= va, - phys_addr_t paddr, size_t pgsize, size_t pgcount, - int iommu_prot, gfp_t gfp, size_t *mapped) -{ - struct arm_lpae_io_pgtable *data =3D io_pgtable_ops_to_data(ops); - struct io_pgtable_cfg *cfg =3D &data->iop.cfg; - arm_lpae_iopte *ptep =3D data->pgd; - int ret, lvl =3D data->start_level; - arm_lpae_iopte prot; - long iaext =3D (s64)iova >> cfg->ias; - - if (WARN_ON(!pgsize || (pgsize & cfg->pgsize_bitmap) !=3D pgsize)) - return -EINVAL; - - if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_TTBR1) - iaext =3D ~iaext; - if (WARN_ON(iaext || paddr >> cfg->oas)) - return -ERANGE; - - if (!(iommu_prot & (IOMMU_READ | IOMMU_WRITE))) - return -EINVAL; - - prot =3D arm_lpae_prot_to_pte(data, iommu_prot); - ret =3D __arm_lpae_map(data, iova, paddr, pgsize, pgcount, prot, lvl, - ptep, gfp, mapped); - /* - * Synchronise 
all PTE updates for the new mapping before there's - * a chance for anything to kick off a table walk for the new iova. - */ - wmb(); - - return ret; -} - -static void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int = lvl, - arm_lpae_iopte *ptep) -{ - arm_lpae_iopte *start, *end; - unsigned long table_size; - - if (lvl =3D=3D data->start_level) - table_size =3D ARM_LPAE_PGD_SIZE(data); - else - table_size =3D ARM_LPAE_GRANULE(data); - - start =3D ptep; - - /* Only leaf entries at the last level */ - if (lvl =3D=3D ARM_LPAE_MAX_LEVELS - 1) - end =3D ptep; - else - end =3D (void *)ptep + table_size; - - while (ptep !=3D end) { - arm_lpae_iopte pte =3D *ptep++; - - if (!pte || iopte_leaf(pte, lvl, data->iop.fmt)) - continue; - - __arm_lpae_free_pgtable(data, lvl + 1, iopte_deref(pte, data)); - } - - __arm_lpae_free_pages(start, table_size, &data->iop.cfg, data->iop.cookie= ); -} - static void arm_lpae_free_pgtable(struct io_pgtable *iop) { struct arm_lpae_io_pgtable *data =3D io_pgtable_to_data(iop); @@ -569,319 +121,6 @@ static void arm_lpae_free_pgtable(struct io_pgtable *= iop) kfree(data); } =20 -static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data, - struct iommu_iotlb_gather *gather, - unsigned long iova, size_t size, - arm_lpae_iopte blk_pte, int lvl, - arm_lpae_iopte *ptep, size_t pgcount) -{ - struct io_pgtable_cfg *cfg =3D &data->iop.cfg; - arm_lpae_iopte pte, *tablep; - phys_addr_t blk_paddr; - size_t tablesz =3D ARM_LPAE_GRANULE(data); - size_t split_sz =3D ARM_LPAE_BLOCK_SIZE(lvl, data); - int ptes_per_table =3D ARM_LPAE_PTES_PER_TABLE(data); - int i, unmap_idx_start =3D -1, num_entries =3D 0, max_entries; - - if (WARN_ON(lvl =3D=3D ARM_LPAE_MAX_LEVELS)) - return 0; - - tablep =3D __arm_lpae_alloc_pages(tablesz, GFP_ATOMIC, cfg, data->iop.coo= kie); - if (!tablep) - return 0; /* Bytes unmapped */ - - if (size =3D=3D split_sz) { - unmap_idx_start =3D ARM_LPAE_LVL_IDX(iova, lvl, data); - max_entries =3D ptes_per_table - 
unmap_idx_start; - num_entries =3D min_t(int, pgcount, max_entries); - } - - blk_paddr =3D iopte_to_paddr(blk_pte, data); - pte =3D iopte_prot(blk_pte); - - for (i =3D 0; i < ptes_per_table; i++, blk_paddr +=3D split_sz) { - /* Unmap! */ - if (i >=3D unmap_idx_start && i < (unmap_idx_start + num_entries)) - continue; - - __arm_lpae_init_pte(data, blk_paddr, pte, lvl, 1, &tablep[i]); - } - - pte =3D arm_lpae_install_table(tablep, ptep, blk_pte, data); - if (pte !=3D blk_pte) { - __arm_lpae_free_pages(tablep, tablesz, cfg, data->iop.cookie); - /* - * We may race against someone unmapping another part of this - * block, but anything else is invalid. We can't misinterpret - * a page entry here since we're never at the last level. - */ - if (iopte_type(pte) !=3D ARM_LPAE_PTE_TYPE_TABLE) - return 0; - - tablep =3D iopte_deref(pte, data); - } else if (unmap_idx_start >=3D 0) { - for (i =3D 0; i < num_entries; i++) - io_pgtable_tlb_add_page(&data->iop, gather, iova + i * size, size); - - return num_entries * size; - } - - return __arm_lpae_unmap(data, gather, iova, size, pgcount, lvl, tablep); -} - -static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data, - struct iommu_iotlb_gather *gather, - unsigned long iova, size_t size, size_t pgcount, - int lvl, arm_lpae_iopte *ptep) -{ - arm_lpae_iopte pte; - struct io_pgtable *iop =3D &data->iop; - int i =3D 0, num_entries, max_entries, unmap_idx_start; - - /* Something went horribly wrong and we ran out of page table */ - if (WARN_ON(lvl =3D=3D ARM_LPAE_MAX_LEVELS)) - return 0; - - unmap_idx_start =3D ARM_LPAE_LVL_IDX(iova, lvl, data); - ptep +=3D unmap_idx_start; - pte =3D READ_ONCE(*ptep); - if (WARN_ON(!pte)) - return 0; - - /* If the size matches this level, we're in the right place */ - if (size =3D=3D ARM_LPAE_BLOCK_SIZE(lvl, data)) { - max_entries =3D ARM_LPAE_PTES_PER_TABLE(data) - unmap_idx_start; - num_entries =3D min_t(int, pgcount, max_entries); - - /* Find and handle non-leaf entries */ - for (i =3D 0; i < 
num_entries; i++) { - pte =3D READ_ONCE(ptep[i]); - if (WARN_ON(!pte)) - break; - - if (!iopte_leaf(pte, lvl, iop->fmt)) { - __arm_lpae_clear_pte(&ptep[i], &iop->cfg, 1); - - /* Also flush any partial walks */ - io_pgtable_tlb_flush_walk(iop, iova + i * size, size, - ARM_LPAE_GRANULE(data)); - __arm_lpae_free_pgtable(data, lvl + 1, iopte_deref(pte, data)); - } - } - - /* Clear the remaining entries */ - __arm_lpae_clear_pte(ptep, &iop->cfg, i); - - if (gather && !iommu_iotlb_gather_queued(gather)) - for (int j =3D 0; j < i; j++) - io_pgtable_tlb_add_page(iop, gather, iova + j * size, size); - - return i * size; - } else if (iopte_leaf(pte, lvl, iop->fmt)) { - /* - * Insert a table at the next level to map the old region, - * minus the part we want to unmap - */ - return arm_lpae_split_blk_unmap(data, gather, iova, size, pte, - lvl + 1, ptep, pgcount); - } - - /* Keep on walkin' */ - ptep =3D iopte_deref(pte, data); - return __arm_lpae_unmap(data, gather, iova, size, pgcount, lvl + 1, ptep); -} - -static size_t arm_lpae_unmap_pages(struct io_pgtable_ops *ops, unsigned lo= ng iova, - size_t pgsize, size_t pgcount, - struct iommu_iotlb_gather *gather) -{ - struct arm_lpae_io_pgtable *data =3D io_pgtable_ops_to_data(ops); - struct io_pgtable_cfg *cfg =3D &data->iop.cfg; - arm_lpae_iopte *ptep =3D data->pgd; - long iaext =3D (s64)iova >> cfg->ias; - - if (WARN_ON(!pgsize || (pgsize & cfg->pgsize_bitmap) !=3D pgsize || !pgco= unt)) - return 0; - - if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_TTBR1) - iaext =3D ~iaext; - if (WARN_ON(iaext)) - return 0; - - return __arm_lpae_unmap(data, gather, iova, pgsize, pgcount, - data->start_level, ptep); -} - -struct io_pgtable_walk_data { - struct io_pgtable *iop; - void *data; - int (*visit)(struct io_pgtable_walk_data *walk_data, int lvl, - arm_lpae_iopte *ptep, size_t size); - unsigned long flags; - u64 addr; - const u64 end; -}; - -static int __arm_lpae_iopte_walk(struct arm_lpae_io_pgtable *data, - struct io_pgtable_walk_data 
*walk_data, - arm_lpae_iopte *ptep, - int lvl); - -struct iova_to_phys_data { - arm_lpae_iopte pte; - int lvl; -}; - -static int visit_iova_to_phys(struct io_pgtable_walk_data *walk_data, int = lvl, - arm_lpae_iopte *ptep, size_t size) -{ - struct iova_to_phys_data *data =3D walk_data->data; - data->pte =3D *ptep; - data->lvl =3D lvl; - return 0; -} - -static phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops, - unsigned long iova) -{ - struct arm_lpae_io_pgtable *data =3D io_pgtable_ops_to_data(ops); - struct iova_to_phys_data d; - struct io_pgtable_walk_data walk_data =3D { - .data =3D &d, - .visit =3D visit_iova_to_phys, - .addr =3D iova, - .end =3D iova + 1, - }; - int ret; - - ret =3D __arm_lpae_iopte_walk(data, &walk_data, data->pgd, data->start_le= vel); - if (ret) - return 0; - - iova &=3D (ARM_LPAE_BLOCK_SIZE(d.lvl, data) - 1); - return iopte_to_paddr(d.pte, data) | iova; -} - -static int visit_pgtable_walk(struct io_pgtable_walk_data *walk_data, int = lvl, - arm_lpae_iopte *ptep, size_t size) -{ - struct arm_lpae_io_pgtable_walk_data *data =3D walk_data->data; - data->ptes[data->level++] =3D *ptep; - return 0; -} - -static int arm_lpae_pgtable_walk(struct io_pgtable_ops *ops, unsigned long= iova, - void *wd) -{ - struct arm_lpae_io_pgtable *data =3D io_pgtable_ops_to_data(ops); - struct io_pgtable_walk_data walk_data =3D { - .data =3D wd, - .visit =3D visit_pgtable_walk, - .addr =3D iova, - .end =3D iova + 1, - }; - - ((struct arm_lpae_io_pgtable_walk_data *)wd)->level =3D 0; - - return __arm_lpae_iopte_walk(data, &walk_data, data->pgd, data->start_lev= el); -} - -static int io_pgtable_visit(struct arm_lpae_io_pgtable *data, - struct io_pgtable_walk_data *walk_data, - arm_lpae_iopte *ptep, int lvl) -{ - struct io_pgtable *iop =3D &data->iop; - arm_lpae_iopte pte =3D READ_ONCE(*ptep); - - size_t size =3D ARM_LPAE_BLOCK_SIZE(lvl, data); - int ret =3D walk_data->visit(walk_data, lvl, ptep, size); - if (ret) - return ret; - - if (iopte_leaf(pte, 
lvl, iop->fmt)) { - walk_data->addr +=3D size; - return 0; - } - - if (!iopte_table(pte, lvl)) { - return -EINVAL; - } - - ptep =3D iopte_deref(pte, data); - return __arm_lpae_iopte_walk(data, walk_data, ptep, lvl + 1); -} - -static int __arm_lpae_iopte_walk(struct arm_lpae_io_pgtable *data, - struct io_pgtable_walk_data *walk_data, - arm_lpae_iopte *ptep, - int lvl) -{ - u32 idx; - int max_entries, ret; - - if (WARN_ON(lvl =3D=3D ARM_LPAE_MAX_LEVELS)) - return -EINVAL; - - if (lvl =3D=3D data->start_level) - max_entries =3D ARM_LPAE_PGD_SIZE(data) / sizeof(arm_lpae_iopte); - else - max_entries =3D ARM_LPAE_PTES_PER_TABLE(data); - - for (idx =3D ARM_LPAE_LVL_IDX(walk_data->addr, lvl, data); - (idx < max_entries) && (walk_data->addr < walk_data->end); ++idx) { - ret =3D io_pgtable_visit(data, walk_data, ptep + idx, lvl); - if (ret) - return ret; - } - - return 0; -} - -static int visit_dirty(struct io_pgtable_walk_data *walk_data, int lvl, - arm_lpae_iopte *ptep, size_t size) -{ - struct iommu_dirty_bitmap *dirty =3D walk_data->data; - - if (!iopte_leaf(*ptep, lvl, walk_data->iop->fmt)) - return 0; - - if (iopte_writeable_dirty(*ptep)) { - iommu_dirty_bitmap_record(dirty, walk_data->addr, size); - if (!(walk_data->flags & IOMMU_DIRTY_NO_CLEAR)) - iopte_set_writeable_clean(ptep); - } - - return 0; -} - -static int arm_lpae_read_and_clear_dirty(struct io_pgtable_ops *ops, - unsigned long iova, size_t size, - unsigned long flags, - struct iommu_dirty_bitmap *dirty) -{ - struct arm_lpae_io_pgtable *data =3D io_pgtable_ops_to_data(ops); - struct io_pgtable_cfg *cfg =3D &data->iop.cfg; - struct io_pgtable_walk_data walk_data =3D { - .iop =3D &data->iop, - .data =3D dirty, - .visit =3D visit_dirty, - .flags =3D flags, - .addr =3D iova, - .end =3D iova + size, - }; - arm_lpae_iopte *ptep =3D data->pgd; - int lvl =3D data->start_level; - - if (WARN_ON(!size)) - return -EINVAL; - if (WARN_ON((iova + size - 1) & ~(BIT(cfg->ias) - 1))) - return -EINVAL; - if (data->iop.fmt !=3D 
ARM_64_LPAE_S1) - return -EINVAL; - - return __arm_lpae_iopte_walk(data, &walk_data, ptep, lvl); -} - static void arm_lpae_restrict_pgsizes(struct io_pgtable_cfg *cfg) { unsigned long granule, page_sizes; diff --git a/drivers/iommu/io-pgtable-arm.h b/drivers/iommu/io-pgtable-arm.h deleted file mode 100644 index ba7cfdf7afa0..000000000000 --- a/drivers/iommu/io-pgtable-arm.h +++ /dev/null @@ -1,30 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-only */ -#ifndef IO_PGTABLE_ARM_H_ -#define IO_PGTABLE_ARM_H_ - -#define ARM_LPAE_TCR_TG0_4K 0 -#define ARM_LPAE_TCR_TG0_64K 1 -#define ARM_LPAE_TCR_TG0_16K 2 - -#define ARM_LPAE_TCR_TG1_16K 1 -#define ARM_LPAE_TCR_TG1_4K 2 -#define ARM_LPAE_TCR_TG1_64K 3 - -#define ARM_LPAE_TCR_SH_NS 0 -#define ARM_LPAE_TCR_SH_OS 2 -#define ARM_LPAE_TCR_SH_IS 3 - -#define ARM_LPAE_TCR_RGN_NC 0 -#define ARM_LPAE_TCR_RGN_WBWA 1 -#define ARM_LPAE_TCR_RGN_WT 2 -#define ARM_LPAE_TCR_RGN_WB 3 - -#define ARM_LPAE_TCR_PS_32_BIT 0x0ULL -#define ARM_LPAE_TCR_PS_36_BIT 0x1ULL -#define ARM_LPAE_TCR_PS_40_BIT 0x2ULL -#define ARM_LPAE_TCR_PS_42_BIT 0x3ULL -#define ARM_LPAE_TCR_PS_44_BIT 0x4ULL -#define ARM_LPAE_TCR_PS_48_BIT 0x5ULL -#define ARM_LPAE_TCR_PS_52_BIT 0x6ULL - -#endif /* IO_PGTABLE_ARM_H_ */ diff --git a/include/linux/io-pgtable-arm.h b/include/linux/io-pgtable-arm.h new file mode 100644 index 000000000000..1f56dabca18c --- /dev/null +++ b/include/linux/io-pgtable-arm.h @@ -0,0 +1,223 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef IO_PGTABLE_H_ +#define IO_PGTABLE_H_ + +#include + +typedef u64 arm_lpae_iopte; + +struct arm_lpae_io_pgtable { + struct io_pgtable iop; + + int pgd_bits; + int start_level; + int bits_per_level; + + void *pgd; +}; + +struct io_pgtable_walk_data { + struct io_pgtable *iop; + void *data; + int (*visit)(struct io_pgtable_walk_data *walk_data, int lvl, + arm_lpae_iopte *ptep, size_t size); + unsigned long flags; + u64 addr; + const u64 end; +}; + +/* Struct accessors */ +#define io_pgtable_to_data(x) \ + 
container_of((x), struct arm_lpae_io_pgtable, iop) + +#define io_pgtable_ops_to_data(x) \ + io_pgtable_to_data(io_pgtable_ops_to_pgtable(x)) + +/* + * Calculate the right shift amount to get to the portion describing level= l + * in a virtual address mapped by the pagetable in d. + */ +#define ARM_LPAE_LVL_SHIFT(l,d) \ + (((ARM_LPAE_MAX_LEVELS - (l)) * (d)->bits_per_level) + \ + ilog2(sizeof(arm_lpae_iopte))) + +#define ARM_LPAE_GRANULE(d) \ + (sizeof(arm_lpae_iopte) << (d)->bits_per_level) +#define ARM_LPAE_PGD_SIZE(d) \ + (sizeof(arm_lpae_iopte) << (d)->pgd_bits) + +#define ARM_LPAE_PTES_PER_TABLE(d) \ + (ARM_LPAE_GRANULE(d) >> ilog2(sizeof(arm_lpae_iopte))) + +/* + * Calculate the index at level l used to map virtual address a using the + * pagetable in d. + */ +#define ARM_LPAE_PGD_IDX(l,d) \ + ((l) =3D=3D (d)->start_level ? (d)->pgd_bits - (d)->bits_per_level : 0) + +#define ARM_LPAE_LVL_IDX(a,l,d) \ + (((u64)(a) >> ARM_LPAE_LVL_SHIFT(l,d)) & \ + ((1 << ((d)->bits_per_level + ARM_LPAE_PGD_IDX(l,d))) - 1)) + +/* Calculate the block/page mapping size at level l for pagetable in d. 
*/ +#define ARM_LPAE_BLOCK_SIZE(l,d) (1ULL << ARM_LPAE_LVL_SHIFT(l,d)) + +/* Page table bits */ +#define ARM_LPAE_PTE_TYPE_SHIFT 0 +#define ARM_LPAE_PTE_TYPE_MASK 0x3 + +#define ARM_LPAE_PTE_TYPE_BLOCK 1 +#define ARM_LPAE_PTE_TYPE_TABLE 3 +#define ARM_LPAE_PTE_TYPE_PAGE 3 + +#define ARM_LPAE_PTE_ADDR_MASK GENMASK_ULL(47,12) + +#define ARM_LPAE_PTE_NSTABLE (((arm_lpae_iopte)1) << 63) +#define ARM_LPAE_PTE_XN (((arm_lpae_iopte)3) << 53) +#define ARM_LPAE_PTE_DBM (((arm_lpae_iopte)1) << 51) +#define ARM_LPAE_PTE_AF (((arm_lpae_iopte)1) << 10) +#define ARM_LPAE_PTE_SH_NS (((arm_lpae_iopte)0) << 8) +#define ARM_LPAE_PTE_SH_OS (((arm_lpae_iopte)2) << 8) +#define ARM_LPAE_PTE_SH_IS (((arm_lpae_iopte)3) << 8) +#define ARM_LPAE_PTE_NS (((arm_lpae_iopte)1) << 5) +#define ARM_LPAE_PTE_VALID (((arm_lpae_iopte)1) << 0) + +#define ARM_LPAE_PTE_ATTR_LO_MASK (((arm_lpae_iopte)0x3ff) << 2) +/* Ignore the contiguous bit for block splitting */ +#define ARM_LPAE_PTE_ATTR_HI_MASK (ARM_LPAE_PTE_XN | ARM_LPAE_PTE_DBM) +#define ARM_LPAE_PTE_ATTR_MASK (ARM_LPAE_PTE_ATTR_LO_MASK | \ + ARM_LPAE_PTE_ATTR_HI_MASK) +/* Software bit for solving coherency races */ +#define ARM_LPAE_PTE_SW_SYNC (((arm_lpae_iopte)1) << 55) + +/* Stage-1 PTE */ +#define ARM_LPAE_PTE_AP_UNPRIV (((arm_lpae_iopte)1) << 6) +#define ARM_LPAE_PTE_AP_RDONLY_BIT 7 +#define ARM_LPAE_PTE_AP_RDONLY (((arm_lpae_iopte)1) << \ + ARM_LPAE_PTE_AP_RDONLY_BIT) +#define ARM_LPAE_PTE_AP_WR_CLEAN_MASK (ARM_LPAE_PTE_AP_RDONLY | \ + ARM_LPAE_PTE_DBM) +#define ARM_LPAE_PTE_ATTRINDX_SHIFT 2 +#define ARM_LPAE_PTE_nG (((arm_lpae_iopte)1) << 11) + +/* Stage-2 PTE */ +#define ARM_LPAE_PTE_HAP_FAULT (((arm_lpae_iopte)0) << 6) +#define ARM_LPAE_PTE_HAP_READ (((arm_lpae_iopte)1) << 6) +#define ARM_LPAE_PTE_HAP_WRITE (((arm_lpae_iopte)2) << 6) +#define ARM_LPAE_PTE_MEMATTR_OIWB (((arm_lpae_iopte)0xf) << 2) +#define ARM_LPAE_PTE_MEMATTR_NC (((arm_lpae_iopte)0x5) << 2) +#define ARM_LPAE_PTE_MEMATTR_DEV (((arm_lpae_iopte)0x1) << 2) + +/* Register bits 
*/ +#define ARM_LPAE_VTCR_SL0_MASK 0x3 + +#define ARM_LPAE_TCR_T0SZ_SHIFT 0 + +#define ARM_LPAE_VTCR_PS_SHIFT 16 +#define ARM_LPAE_VTCR_PS_MASK 0x7 + +#define ARM_LPAE_MAIR_ATTR_SHIFT(n) ((n) << 3) +#define ARM_LPAE_MAIR_ATTR_MASK 0xff +#define ARM_LPAE_MAIR_ATTR_DEVICE 0x04 +#define ARM_LPAE_MAIR_ATTR_NC 0x44 +#define ARM_LPAE_MAIR_ATTR_INC_OWBRWA 0xf4 +#define ARM_LPAE_MAIR_ATTR_WBRWA 0xff +#define ARM_LPAE_MAIR_ATTR_IDX_NC 0 +#define ARM_LPAE_MAIR_ATTR_IDX_CACHE 1 +#define ARM_LPAE_MAIR_ATTR_IDX_DEV 2 +#define ARM_LPAE_MAIR_ATTR_IDX_INC_OCACHE 3 + +#define ARM_MALI_LPAE_TTBR_ADRMODE_TABLE (3u << 0) +#define ARM_MALI_LPAE_TTBR_READ_INNER BIT(2) +#define ARM_MALI_LPAE_TTBR_SHARE_OUTER BIT(4) + +#define ARM_MALI_LPAE_MEMATTR_IMP_DEF 0x88ULL +#define ARM_MALI_LPAE_MEMATTR_WRITE_ALLOC 0x8DULL + +#define ARM_LPAE_MAX_LEVELS 4 + +#define ARM_LPAE_TCR_TG0_4K 0 +#define ARM_LPAE_TCR_TG0_64K 1 +#define ARM_LPAE_TCR_TG0_16K 2 + +#define ARM_LPAE_TCR_TG1_16K 1 +#define ARM_LPAE_TCR_TG1_4K 2 +#define ARM_LPAE_TCR_TG1_64K 3 + +#define ARM_LPAE_TCR_SH_NS 0 +#define ARM_LPAE_TCR_SH_OS 2 +#define ARM_LPAE_TCR_SH_IS 3 + +#define ARM_LPAE_TCR_RGN_NC 0 +#define ARM_LPAE_TCR_RGN_WBWA 1 +#define ARM_LPAE_TCR_RGN_WT 2 +#define ARM_LPAE_TCR_RGN_WB 3 + +#define ARM_LPAE_TCR_PS_32_BIT 0x0ULL +#define ARM_LPAE_TCR_PS_36_BIT 0x1ULL +#define ARM_LPAE_TCR_PS_40_BIT 0x2ULL +#define ARM_LPAE_TCR_PS_42_BIT 0x3ULL +#define ARM_LPAE_TCR_PS_44_BIT 0x4ULL +#define ARM_LPAE_TCR_PS_48_BIT 0x5ULL +#define ARM_LPAE_TCR_PS_52_BIT 0x6ULL + +/* IOPTE accessors */ +#define iopte_type(pte) \ + (((pte) >> ARM_LPAE_PTE_TYPE_SHIFT) & ARM_LPAE_PTE_TYPE_MASK) + +#define iopte_prot(pte) ((pte) & ARM_LPAE_PTE_ATTR_MASK) + +#define iopte_writeable_dirty(pte) \ + (((pte) & ARM_LPAE_PTE_AP_WR_CLEAN_MASK) =3D=3D ARM_LPAE_PTE_DBM) + +#define iopte_set_writeable_clean(ptep) \ + set_bit(ARM_LPAE_PTE_AP_RDONLY_BIT, (unsigned long *)(ptep)) + + +static inline bool iopte_leaf(arm_lpae_iopte pte, int lvl, + enum 
io_pgtable_fmt fmt) +{ + if (lvl =3D=3D (ARM_LPAE_MAX_LEVELS - 1) && fmt !=3D ARM_MALI_LPAE) + return iopte_type(pte) =3D=3D ARM_LPAE_PTE_TYPE_PAGE; + + return iopte_type(pte) =3D=3D ARM_LPAE_PTE_TYPE_BLOCK; +} + +static inline bool iopte_table(arm_lpae_iopte pte, int lvl) +{ + if (lvl =3D=3D (ARM_LPAE_MAX_LEVELS - 1)) + return false; + return iopte_type(pte) =3D=3D ARM_LPAE_PTE_TYPE_TABLE; +} + +#define __arm_lpae_virt_to_phys __pa +#define __arm_lpae_phys_to_virt __va + +/* Generic functions */ +int arm_lpae_map_pages(struct io_pgtable_ops *ops, unsigned long iova, + phys_addr_t paddr, size_t pgsize, size_t pgcount, + int iommu_prot, gfp_t gfp, size_t *mapped); +size_t arm_lpae_unmap_pages(struct io_pgtable_ops *ops, unsigned long iova, + size_t pgsize, size_t pgcount, + struct iommu_iotlb_gather *gather); +phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops, + unsigned long iova); +void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl, + arm_lpae_iopte *ptep); + +int arm_lpae_read_and_clear_dirty(struct io_pgtable_ops *ops, + unsigned long iova, size_t size, + unsigned long flags, + struct iommu_dirty_bitmap *dirty); + +int arm_lpae_pgtable_walk(struct io_pgtable_ops *ops, unsigned long iova, = void *wd); + +/* Host/hyp-specific functions */ +void *__arm_lpae_alloc_pages(size_t size, gfp_t gfp, struct io_pgtable_cfg= *cfg, void *cookie); +void __arm_lpae_free_pages(void *pages, size_t size, struct io_pgtable_cfg= *cfg, void *cookie); +void __arm_lpae_sync_pte(arm_lpae_iopte *ptep, int num_entries, + struct io_pgtable_cfg *cfg); +int arm_lpae_map_exists(void); +int arm_lpae_unmap_empty(void); +#endif /* IO_PGTABLE_H_ */ --=20 2.47.0.338.g60cca15819-goog From nobody Sun Dec 14 19:14:26 2025 Received: from mail-wm1-f74.google.com (mail-wm1-f74.google.com [209.85.128.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C5E0C223C5D 
Date: Thu, 12 Dec 2024 18:03:26 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID:
<20241212180423.1578358-3-smostafa@google.com>
Subject: [RFC PATCH v2 02/58] iommu/io-pgtable-arm: Split initialization
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org,
 oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org,
 robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca,
 nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com,
 tabba@google.com, danielmentz@google.com, tzukui@google.com,
 Mostafa Saleh
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

From: Jean-Philippe Brucker

Extract the configuration part from io-pgtable-arm.c, move it to
io-pgtable-arm-common.c.

Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Mostafa Saleh
---
 drivers/iommu/io-pgtable-arm-common.c | 284 ++++++++++++++++++++++++--
 drivers/iommu/io-pgtable-arm.c        | 250 +----------------------
 include/linux/io-pgtable-arm.h        |  20 +-
 3 files changed, 286 insertions(+), 268 deletions(-)

diff --git a/drivers/iommu/io-pgtable-arm-common.c b/drivers/iommu/io-pgtable-arm-common.c
index ef14a1b50d32..21ee8ff7c881 100644
--- a/drivers/iommu/io-pgtable-arm-common.c
+++ b/drivers/iommu/io-pgtable-arm-common.c
@@ -15,6 +15,9 @@

 #define iopte_deref(pte, d) __arm_lpae_phys_to_virt(iopte_to_paddr(pte, d))

+#define ARM_LPAE_MAX_ADDR_BITS		52
+#define ARM_LPAE_S2_MAX_CONCAT_PAGES	16
+
 static arm_lpae_iopte paddr_to_iopte(phys_addr_t paddr,
				     struct arm_lpae_io_pgtable *data)
 {
@@ -257,9 +260,9 @@ static arm_lpae_iopte arm_lpae_prot_to_pte(struct arm_lpae_io_pgtable *data,
	return pte;
 }

-int arm_lpae_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
-		       phys_addr_t paddr, size_t pgsize, size_t pgcount,
-		       int iommu_prot, gfp_t gfp, size_t *mapped)
+static int arm_lpae_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+			      phys_addr_t paddr, size_t pgsize, size_t pgcount,
+			      int iommu_prot, gfp_t gfp, size_t *mapped)
 {
	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
	struct io_pgtable_cfg *cfg = &data->iop.cfg;
@@ -444,9 +447,9 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
	return __arm_lpae_unmap(data, gather, iova, size, pgcount, lvl + 1, ptep);
 }

-size_t arm_lpae_unmap_pages(struct io_pgtable_ops *ops, unsigned long iova,
-			    size_t pgsize, size_t pgcount,
-			    struct iommu_iotlb_gather *gather)
+static size_t arm_lpae_unmap_pages(struct io_pgtable_ops *ops, unsigned long iova,
+				   size_t pgsize, size_t pgcount,
+				   struct iommu_iotlb_gather *gather)
 {
	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
	struct io_pgtable_cfg *cfg = &data->iop.cfg;
@@ -484,8 +487,8 @@ static int visit_iova_to_phys(struct io_pgtable_walk_data *walk_data, int lvl,
	return 0;
 }

-phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops,
-				  unsigned long iova)
+static phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops,
+					 unsigned long iova)
 {
	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
	struct iova_to_phys_data d;
@@ -513,7 +516,7 @@ static int visit_pgtable_walk(struct io_pgtable_walk_data *walk_data, int lvl,
	return 0;
 }

-int arm_lpae_pgtable_walk(struct io_pgtable_ops *ops, unsigned long iova, void *wd)
+static int arm_lpae_pgtable_walk(struct io_pgtable_ops *ops, unsigned long iova, void *wd)
 {
	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
	struct io_pgtable_walk_data walk_data = {
@@ -596,10 +599,10 @@ static int visit_dirty(struct io_pgtable_walk_data *walk_data, int lvl,
	return 0;
 }

-int arm_lpae_read_and_clear_dirty(struct io_pgtable_ops *ops,
-				  unsigned long iova, size_t size,
-				  unsigned long flags,
-				  struct iommu_dirty_bitmap *dirty)
+static int arm_lpae_read_and_clear_dirty(struct io_pgtable_ops *ops,
+					 unsigned long iova, size_t size,
+					 unsigned long flags,
+					 struct iommu_dirty_bitmap *dirty)
 {
	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
	struct io_pgtable_cfg *cfg = &data->iop.cfg;
@@ -623,3 +626,258 @@ int arm_lpae_read_and_clear_dirty(struct io_pgtable_ops *ops,

	return __arm_lpae_iopte_walk(data, &walk_data, ptep, lvl);
 }
+
+static void arm_lpae_restrict_pgsizes(struct io_pgtable_cfg *cfg)
+{
+	unsigned long granule, page_sizes;
+	unsigned int max_addr_bits = 48;
+
+	/*
+	 * We need to restrict the supported page sizes to match the
+	 * translation regime for a particular granule. Aim to match
+	 * the CPU page size if possible, otherwise prefer smaller sizes.
+	 * While we're at it, restrict the block sizes to match the
+	 * chosen granule.
+	 */
+	if (cfg->pgsize_bitmap & PAGE_SIZE)
+		granule = PAGE_SIZE;
+	else if (cfg->pgsize_bitmap & ~PAGE_MASK)
+		granule = 1UL << __fls(cfg->pgsize_bitmap & ~PAGE_MASK);
+	else if (cfg->pgsize_bitmap & PAGE_MASK)
+		granule = 1UL << __ffs(cfg->pgsize_bitmap & PAGE_MASK);
+	else
+		granule = 0;
+
+	switch (granule) {
+	case SZ_4K:
+		page_sizes = (SZ_4K | SZ_2M | SZ_1G);
+		break;
+	case SZ_16K:
+		page_sizes = (SZ_16K | SZ_32M);
+		break;
+	case SZ_64K:
+		max_addr_bits = 52;
+		page_sizes = (SZ_64K | SZ_512M);
+		if (cfg->oas > 48)
+			page_sizes |= 1ULL << 42; /* 4TB */
+		break;
+	default:
+		page_sizes = 0;
+	}
+
+	cfg->pgsize_bitmap &= page_sizes;
+	cfg->ias = min(cfg->ias, max_addr_bits);
+	cfg->oas = min(cfg->oas, max_addr_bits);
+}
+
+int arm_lpae_init_pgtable(struct io_pgtable_cfg *cfg,
+			  struct arm_lpae_io_pgtable *data)
+{
+	int levels, va_bits, pg_shift;
+
+	arm_lpae_restrict_pgsizes(cfg);
+
+	if (!(cfg->pgsize_bitmap & (SZ_4K | SZ_16K | SZ_64K)))
+		return -EINVAL;
+
+	if (cfg->ias > ARM_LPAE_MAX_ADDR_BITS)
+		return -E2BIG;
+
+	if (cfg->oas > ARM_LPAE_MAX_ADDR_BITS)
+		return -E2BIG;
+
+	pg_shift = __ffs(cfg->pgsize_bitmap);
+	data->bits_per_level = pg_shift - ilog2(sizeof(arm_lpae_iopte));
+
+	va_bits = cfg->ias -
pg_shift; + levels =3D DIV_ROUND_UP(va_bits, data->bits_per_level); + data->start_level =3D ARM_LPAE_MAX_LEVELS - levels; + + /* Calculate the actual size of our pgd (without concatenation) */ + data->pgd_bits =3D va_bits - (data->bits_per_level * (levels - 1)); + + data->iop.ops =3D (struct io_pgtable_ops) { + .map_pages =3D arm_lpae_map_pages, + .unmap_pages =3D arm_lpae_unmap_pages, + .iova_to_phys =3D arm_lpae_iova_to_phys, + .read_and_clear_dirty =3D arm_lpae_read_and_clear_dirty, + .pgtable_walk =3D arm_lpae_pgtable_walk, + }; + + return 0; +} + +int arm_lpae_init_pgtable_s1(struct io_pgtable_cfg *cfg, + struct arm_lpae_io_pgtable *data) +{ + u64 reg; + int ret; + typeof(&cfg->arm_lpae_s1_cfg.tcr) tcr =3D &cfg->arm_lpae_s1_cfg.tcr; + bool tg1; + + if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS | + IO_PGTABLE_QUIRK_ARM_TTBR1 | + IO_PGTABLE_QUIRK_ARM_OUTER_WBWA | + IO_PGTABLE_QUIRK_ARM_HD)) + return -EINVAL; + + ret =3D arm_lpae_init_pgtable(cfg, data); + if (ret) + return ret; + + /* TCR */ + if (cfg->coherent_walk) { + tcr->sh =3D ARM_LPAE_TCR_SH_IS; + tcr->irgn =3D ARM_LPAE_TCR_RGN_WBWA; + tcr->orgn =3D ARM_LPAE_TCR_RGN_WBWA; + if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_OUTER_WBWA) + return -EINVAL; + } else { + tcr->sh =3D ARM_LPAE_TCR_SH_OS; + tcr->irgn =3D ARM_LPAE_TCR_RGN_NC; + if (!(cfg->quirks & IO_PGTABLE_QUIRK_ARM_OUTER_WBWA)) + tcr->orgn =3D ARM_LPAE_TCR_RGN_NC; + else + tcr->orgn =3D ARM_LPAE_TCR_RGN_WBWA; + } + + tg1 =3D cfg->quirks & IO_PGTABLE_QUIRK_ARM_TTBR1; + switch (ARM_LPAE_GRANULE(data)) { + case SZ_4K: + tcr->tg =3D tg1 ? ARM_LPAE_TCR_TG1_4K : ARM_LPAE_TCR_TG0_4K; + break; + case SZ_16K: + tcr->tg =3D tg1 ? ARM_LPAE_TCR_TG1_16K : ARM_LPAE_TCR_TG0_16K; + break; + case SZ_64K: + tcr->tg =3D tg1 ? 
ARM_LPAE_TCR_TG1_64K : ARM_LPAE_TCR_TG0_64K; + break; + } + + switch (cfg->oas) { + case 32: + tcr->ips =3D ARM_LPAE_TCR_PS_32_BIT; + break; + case 36: + tcr->ips =3D ARM_LPAE_TCR_PS_36_BIT; + break; + case 40: + tcr->ips =3D ARM_LPAE_TCR_PS_40_BIT; + break; + case 42: + tcr->ips =3D ARM_LPAE_TCR_PS_42_BIT; + break; + case 44: + tcr->ips =3D ARM_LPAE_TCR_PS_44_BIT; + break; + case 48: + tcr->ips =3D ARM_LPAE_TCR_PS_48_BIT; + break; + case 52: + tcr->ips =3D ARM_LPAE_TCR_PS_52_BIT; + break; + default: + return -EINVAL; + } + + tcr->tsz =3D 64ULL - cfg->ias; + + /* MAIRs */ + reg =3D (ARM_LPAE_MAIR_ATTR_NC + << ARM_LPAE_MAIR_ATTR_SHIFT(ARM_LPAE_MAIR_ATTR_IDX_NC)) | + (ARM_LPAE_MAIR_ATTR_WBRWA + << ARM_LPAE_MAIR_ATTR_SHIFT(ARM_LPAE_MAIR_ATTR_IDX_CACHE)) | + (ARM_LPAE_MAIR_ATTR_DEVICE + << ARM_LPAE_MAIR_ATTR_SHIFT(ARM_LPAE_MAIR_ATTR_IDX_DEV)) | + (ARM_LPAE_MAIR_ATTR_INC_OWBRWA + << ARM_LPAE_MAIR_ATTR_SHIFT(ARM_LPAE_MAIR_ATTR_IDX_INC_OCACHE)); + + cfg->arm_lpae_s1_cfg.mair =3D reg; + return 0; +} + +int arm_lpae_init_pgtable_s2(struct io_pgtable_cfg *cfg, + struct arm_lpae_io_pgtable *data) +{ + u64 sl; + int ret; + typeof(&cfg->arm_lpae_s2_cfg.vtcr) vtcr =3D &cfg->arm_lpae_s2_cfg.vtcr; + + /* The NS quirk doesn't apply at stage 2 */ + if (cfg->quirks) + return -EINVAL; + + ret =3D arm_lpae_init_pgtable(cfg, data); + if (ret) + return ret; + + /* + * Concatenate PGDs at level 1 if possible in order to reduce + * the depth of the stage-2 walk. 
+ */ + if (data->start_level =3D=3D 0) { + unsigned long pgd_pages; + + pgd_pages =3D ARM_LPAE_PGD_SIZE(data) / sizeof(arm_lpae_iopte); + if (pgd_pages <=3D ARM_LPAE_S2_MAX_CONCAT_PAGES) { + data->pgd_bits +=3D data->bits_per_level; + data->start_level++; + } + } + + /* VTCR */ + if (cfg->coherent_walk) { + vtcr->sh =3D ARM_LPAE_TCR_SH_IS; + vtcr->irgn =3D ARM_LPAE_TCR_RGN_WBWA; + vtcr->orgn =3D ARM_LPAE_TCR_RGN_WBWA; + } else { + vtcr->sh =3D ARM_LPAE_TCR_SH_OS; + vtcr->irgn =3D ARM_LPAE_TCR_RGN_NC; + vtcr->orgn =3D ARM_LPAE_TCR_RGN_NC; + } + + sl =3D data->start_level; + + switch (ARM_LPAE_GRANULE(data)) { + case SZ_4K: + vtcr->tg =3D ARM_LPAE_TCR_TG0_4K; + sl++; /* SL0 format is different for 4K granule size */ + break; + case SZ_16K: + vtcr->tg =3D ARM_LPAE_TCR_TG0_16K; + break; + case SZ_64K: + vtcr->tg =3D ARM_LPAE_TCR_TG0_64K; + break; + } + + switch (cfg->oas) { + case 32: + vtcr->ps =3D ARM_LPAE_TCR_PS_32_BIT; + break; + case 36: + vtcr->ps =3D ARM_LPAE_TCR_PS_36_BIT; + break; + case 40: + vtcr->ps =3D ARM_LPAE_TCR_PS_40_BIT; + break; + case 42: + vtcr->ps =3D ARM_LPAE_TCR_PS_42_BIT; + break; + case 44: + vtcr->ps =3D ARM_LPAE_TCR_PS_44_BIT; + break; + case 48: + vtcr->ps =3D ARM_LPAE_TCR_PS_48_BIT; + break; + case 52: + vtcr->ps =3D ARM_LPAE_TCR_PS_52_BIT; + break; + default: + return -EINVAL; + } + + vtcr->tsz =3D 64ULL - cfg->ias; + vtcr->sl =3D ~sl & ARM_LPAE_VTCR_SL0_MASK; + return 0; +} diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c index cb4eb513adbf..8d435a5bcd9a 100644 --- a/drivers/iommu/io-pgtable-arm.c +++ b/drivers/iommu/io-pgtable-arm.c @@ -19,12 +19,9 @@ #include #include =20 -#include - #include "iommu-pages.h" =20 -#define ARM_LPAE_MAX_ADDR_BITS 52 -#define ARM_LPAE_S2_MAX_CONCAT_PAGES 16 +#include =20 static bool selftest_running =3D false; =20 @@ -121,177 +118,17 @@ static void arm_lpae_free_pgtable(struct io_pgtable = *iop) kfree(data); } =20 -static void arm_lpae_restrict_pgsizes(struct io_pgtable_cfg 
*cfg) -{ - unsigned long granule, page_sizes; - unsigned int max_addr_bits =3D 48; - - /* - * We need to restrict the supported page sizes to match the - * translation regime for a particular granule. Aim to match - * the CPU page size if possible, otherwise prefer smaller sizes. - * While we're at it, restrict the block sizes to match the - * chosen granule. - */ - if (cfg->pgsize_bitmap & PAGE_SIZE) - granule =3D PAGE_SIZE; - else if (cfg->pgsize_bitmap & ~PAGE_MASK) - granule =3D 1UL << __fls(cfg->pgsize_bitmap & ~PAGE_MASK); - else if (cfg->pgsize_bitmap & PAGE_MASK) - granule =3D 1UL << __ffs(cfg->pgsize_bitmap & PAGE_MASK); - else - granule =3D 0; - - switch (granule) { - case SZ_4K: - page_sizes =3D (SZ_4K | SZ_2M | SZ_1G); - break; - case SZ_16K: - page_sizes =3D (SZ_16K | SZ_32M); - break; - case SZ_64K: - max_addr_bits =3D 52; - page_sizes =3D (SZ_64K | SZ_512M); - if (cfg->oas > 48) - page_sizes |=3D 1ULL << 42; /* 4TB */ - break; - default: - page_sizes =3D 0; - } - - cfg->pgsize_bitmap &=3D page_sizes; - cfg->ias =3D min(cfg->ias, max_addr_bits); - cfg->oas =3D min(cfg->oas, max_addr_bits); -} - -static struct arm_lpae_io_pgtable * -arm_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg) -{ - struct arm_lpae_io_pgtable *data; - int levels, va_bits, pg_shift; - - arm_lpae_restrict_pgsizes(cfg); - - if (!(cfg->pgsize_bitmap & (SZ_4K | SZ_16K | SZ_64K))) - return NULL; - - if (cfg->ias > ARM_LPAE_MAX_ADDR_BITS) - return NULL; - - if (cfg->oas > ARM_LPAE_MAX_ADDR_BITS) - return NULL; - - data =3D kmalloc(sizeof(*data), GFP_KERNEL); - if (!data) - return NULL; - - pg_shift =3D __ffs(cfg->pgsize_bitmap); - data->bits_per_level =3D pg_shift - ilog2(sizeof(arm_lpae_iopte)); - - va_bits =3D cfg->ias - pg_shift; - levels =3D DIV_ROUND_UP(va_bits, data->bits_per_level); - data->start_level =3D ARM_LPAE_MAX_LEVELS - levels; - - /* Calculate the actual size of our pgd (without concatenation) */ - data->pgd_bits =3D va_bits - (data->bits_per_level * (levels - 1)); - - 
data->iop.ops =3D (struct io_pgtable_ops) { - .map_pages =3D arm_lpae_map_pages, - .unmap_pages =3D arm_lpae_unmap_pages, - .iova_to_phys =3D arm_lpae_iova_to_phys, - .read_and_clear_dirty =3D arm_lpae_read_and_clear_dirty, - .pgtable_walk =3D arm_lpae_pgtable_walk, - }; - - return data; -} - static struct io_pgtable * arm_64_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie) { - u64 reg; struct arm_lpae_io_pgtable *data; - typeof(&cfg->arm_lpae_s1_cfg.tcr) tcr =3D &cfg->arm_lpae_s1_cfg.tcr; - bool tg1; - - if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS | - IO_PGTABLE_QUIRK_ARM_TTBR1 | - IO_PGTABLE_QUIRK_ARM_OUTER_WBWA | - IO_PGTABLE_QUIRK_ARM_HD)) - return NULL; =20 - data =3D arm_lpae_alloc_pgtable(cfg); + data =3D kzalloc(sizeof(*data), GFP_KERNEL); if (!data) return NULL; =20 - /* TCR */ - if (cfg->coherent_walk) { - tcr->sh =3D ARM_LPAE_TCR_SH_IS; - tcr->irgn =3D ARM_LPAE_TCR_RGN_WBWA; - tcr->orgn =3D ARM_LPAE_TCR_RGN_WBWA; - if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_OUTER_WBWA) - goto out_free_data; - } else { - tcr->sh =3D ARM_LPAE_TCR_SH_OS; - tcr->irgn =3D ARM_LPAE_TCR_RGN_NC; - if (!(cfg->quirks & IO_PGTABLE_QUIRK_ARM_OUTER_WBWA)) - tcr->orgn =3D ARM_LPAE_TCR_RGN_NC; - else - tcr->orgn =3D ARM_LPAE_TCR_RGN_WBWA; - } - - tg1 =3D cfg->quirks & IO_PGTABLE_QUIRK_ARM_TTBR1; - switch (ARM_LPAE_GRANULE(data)) { - case SZ_4K: - tcr->tg =3D tg1 ? ARM_LPAE_TCR_TG1_4K : ARM_LPAE_TCR_TG0_4K; - break; - case SZ_16K: - tcr->tg =3D tg1 ? ARM_LPAE_TCR_TG1_16K : ARM_LPAE_TCR_TG0_16K; - break; - case SZ_64K: - tcr->tg =3D tg1 ? 
ARM_LPAE_TCR_TG1_64K : ARM_LPAE_TCR_TG0_64K; - break; - } - - switch (cfg->oas) { - case 32: - tcr->ips =3D ARM_LPAE_TCR_PS_32_BIT; - break; - case 36: - tcr->ips =3D ARM_LPAE_TCR_PS_36_BIT; - break; - case 40: - tcr->ips =3D ARM_LPAE_TCR_PS_40_BIT; - break; - case 42: - tcr->ips =3D ARM_LPAE_TCR_PS_42_BIT; - break; - case 44: - tcr->ips =3D ARM_LPAE_TCR_PS_44_BIT; - break; - case 48: - tcr->ips =3D ARM_LPAE_TCR_PS_48_BIT; - break; - case 52: - tcr->ips =3D ARM_LPAE_TCR_PS_52_BIT; - break; - default: + if (arm_lpae_init_pgtable_s1(cfg, data)) goto out_free_data; - } - - tcr->tsz =3D 64ULL - cfg->ias; - - /* MAIRs */ - reg =3D (ARM_LPAE_MAIR_ATTR_NC - << ARM_LPAE_MAIR_ATTR_SHIFT(ARM_LPAE_MAIR_ATTR_IDX_NC)) | - (ARM_LPAE_MAIR_ATTR_WBRWA - << ARM_LPAE_MAIR_ATTR_SHIFT(ARM_LPAE_MAIR_ATTR_IDX_CACHE)) | - (ARM_LPAE_MAIR_ATTR_DEVICE - << ARM_LPAE_MAIR_ATTR_SHIFT(ARM_LPAE_MAIR_ATTR_IDX_DEV)) | - (ARM_LPAE_MAIR_ATTR_INC_OWBRWA - << ARM_LPAE_MAIR_ATTR_SHIFT(ARM_LPAE_MAIR_ATTR_IDX_INC_OCACHE)); - - cfg->arm_lpae_s1_cfg.mair =3D reg; =20 /* Looking good; allocate a pgd */ data->pgd =3D __arm_lpae_alloc_pages(ARM_LPAE_PGD_SIZE(data), @@ -314,86 +151,14 @@ arm_64_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *c= fg, void *cookie) static struct io_pgtable * arm_64_lpae_alloc_pgtable_s2(struct io_pgtable_cfg *cfg, void *cookie) { - u64 sl; struct arm_lpae_io_pgtable *data; - typeof(&cfg->arm_lpae_s2_cfg.vtcr) vtcr =3D &cfg->arm_lpae_s2_cfg.vtcr; - - /* The NS quirk doesn't apply at stage 2 */ - if (cfg->quirks) - return NULL; =20 - data =3D arm_lpae_alloc_pgtable(cfg); + data =3D kzalloc(sizeof(*data), GFP_KERNEL); if (!data) return NULL; =20 - /* - * Concatenate PGDs at level 1 if possible in order to reduce - * the depth of the stage-2 walk. 
- */ - if (data->start_level =3D=3D 0) { - unsigned long pgd_pages; - - pgd_pages =3D ARM_LPAE_PGD_SIZE(data) / sizeof(arm_lpae_iopte); - if (pgd_pages <=3D ARM_LPAE_S2_MAX_CONCAT_PAGES) { - data->pgd_bits +=3D data->bits_per_level; - data->start_level++; - } - } - - /* VTCR */ - if (cfg->coherent_walk) { - vtcr->sh =3D ARM_LPAE_TCR_SH_IS; - vtcr->irgn =3D ARM_LPAE_TCR_RGN_WBWA; - vtcr->orgn =3D ARM_LPAE_TCR_RGN_WBWA; - } else { - vtcr->sh =3D ARM_LPAE_TCR_SH_OS; - vtcr->irgn =3D ARM_LPAE_TCR_RGN_NC; - vtcr->orgn =3D ARM_LPAE_TCR_RGN_NC; - } - - sl =3D data->start_level; - - switch (ARM_LPAE_GRANULE(data)) { - case SZ_4K: - vtcr->tg =3D ARM_LPAE_TCR_TG0_4K; - sl++; /* SL0 format is different for 4K granule size */ - break; - case SZ_16K: - vtcr->tg =3D ARM_LPAE_TCR_TG0_16K; - break; - case SZ_64K: - vtcr->tg =3D ARM_LPAE_TCR_TG0_64K; - break; - } - - switch (cfg->oas) { - case 32: - vtcr->ps =3D ARM_LPAE_TCR_PS_32_BIT; - break; - case 36: - vtcr->ps =3D ARM_LPAE_TCR_PS_36_BIT; - break; - case 40: - vtcr->ps =3D ARM_LPAE_TCR_PS_40_BIT; - break; - case 42: - vtcr->ps =3D ARM_LPAE_TCR_PS_42_BIT; - break; - case 44: - vtcr->ps =3D ARM_LPAE_TCR_PS_44_BIT; - break; - case 48: - vtcr->ps =3D ARM_LPAE_TCR_PS_48_BIT; - break; - case 52: - vtcr->ps =3D ARM_LPAE_TCR_PS_52_BIT; - break; - default: + if (arm_lpae_init_pgtable_s2(cfg, data)) goto out_free_data; - } - - vtcr->tsz =3D 64ULL - cfg->ias; - vtcr->sl =3D ~sl & ARM_LPAE_VTCR_SL0_MASK; =20 /* Allocate pgd pages */ data->pgd =3D __arm_lpae_alloc_pages(ARM_LPAE_PGD_SIZE(data), @@ -447,10 +212,13 @@ arm_mali_lpae_alloc_pgtable(struct io_pgtable_cfg *cf= g, void *cookie) =20 cfg->pgsize_bitmap &=3D (SZ_4K | SZ_2M | SZ_1G); =20 - data =3D arm_lpae_alloc_pgtable(cfg); + data =3D kzalloc(sizeof(*data), GFP_KERNEL); if (!data) return NULL; =20 + if (arm_lpae_init_pgtable(cfg, data)) + return NULL; + /* Mali seems to need a full 4-level table regardless of IAS */ if (data->start_level > 0) { data->start_level =3D 0; diff --git 
a/include/linux/io-pgtable-arm.h b/include/linux/io-pgtable-arm.h index 1f56dabca18c..337e9254fdbd 100644 --- a/include/linux/io-pgtable-arm.h +++ b/include/linux/io-pgtable-arm.h @@ -195,23 +195,15 @@ static inline bool iopte_table(arm_lpae_iopte pte, in= t lvl) #define __arm_lpae_phys_to_virt __va =20 /* Generic functions */ -int arm_lpae_map_pages(struct io_pgtable_ops *ops, unsigned long iova, - phys_addr_t paddr, size_t pgsize, size_t pgcount, - int iommu_prot, gfp_t gfp, size_t *mapped); -size_t arm_lpae_unmap_pages(struct io_pgtable_ops *ops, unsigned long iova, - size_t pgsize, size_t pgcount, - struct iommu_iotlb_gather *gather); -phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops, - unsigned long iova); void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl, arm_lpae_iopte *ptep); =20 -int arm_lpae_read_and_clear_dirty(struct io_pgtable_ops *ops, - unsigned long iova, size_t size, - unsigned long flags, - struct iommu_dirty_bitmap *dirty); - -int arm_lpae_pgtable_walk(struct io_pgtable_ops *ops, unsigned long iova, = void *wd); +int arm_lpae_init_pgtable(struct io_pgtable_cfg *cfg, + struct arm_lpae_io_pgtable *data); +int arm_lpae_init_pgtable_s1(struct io_pgtable_cfg *cfg, + struct arm_lpae_io_pgtable *data); +int arm_lpae_init_pgtable_s2(struct io_pgtable_cfg *cfg, + struct arm_lpae_io_pgtable *data); =20 /* Host/hyp-specific functions */ void *__arm_lpae_alloc_pages(size_t size, gfp_t gfp, struct io_pgtable_cfg= *cfg, void *cookie); --=20 2.47.0.338.g60cca15819-goog From nobody Sun Dec 14 19:14:26 2025 Received: from mail-wm1-f73.google.com (mail-wm1-f73.google.com [209.85.128.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AC148223C7A for ; Thu, 12 Dec 2024 18:04:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.73 ARC-Seal: i=1; a=rsa-sha256; 
Date: Thu, 12 Dec 2024 18:03:27 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20241212180423.1578358-1-smostafa@google.com>
X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog
Message-ID: <20241212180423.1578358-4-smostafa@google.com>
Subject: [RFC PATCH v2 03/58] iommu/io-pgtable: Add configure() operation
From: Mostafa Saleh
To:
iommu@lists.linux.dev, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org,
 oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org,
 robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca,
 nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com,
 tabba@google.com, danielmentz@google.com, tzukui@google.com,
 Mostafa Saleh
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

From: Jean-Philippe Brucker

Allow IOMMU drivers to create the io-pgtable configuration without
allocating any tables. This will be used by the SMMUv3-KVM driver to
initialize a config and pass it to KVM.

Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Mostafa Saleh
---
 drivers/iommu/io-pgtable-arm.c | 16 ++++++++++++++++
 drivers/iommu/io-pgtable.c     | 15 +++++++++++++++
 include/linux/io-pgtable.h     | 15 +++++++++++++++
 3 files changed, 46 insertions(+)

diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 8d435a5bcd9a..e85866c90290 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -148,6 +148,13 @@ arm_64_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie)
 	return NULL;
 }

+static int arm_64_lpae_configure_s1(struct io_pgtable_cfg *cfg)
+{
+	struct arm_lpae_io_pgtable data = {};
+
+	return arm_lpae_init_pgtable_s1(cfg, &data);
+}
+
 static struct io_pgtable *
 arm_64_lpae_alloc_pgtable_s2(struct io_pgtable_cfg *cfg, void *cookie)
 {
@@ -178,6 +185,13 @@ arm_64_lpae_alloc_pgtable_s2(struct io_pgtable_cfg *cfg, void *cookie)
 	return NULL;
 }

+static int arm_64_lpae_configure_s2(struct io_pgtable_cfg *cfg)
+{
+	struct arm_lpae_io_pgtable data = {};
+
+	return arm_lpae_init_pgtable_s2(cfg, &data);
+}
+
 static struct io_pgtable *
 arm_32_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie)
 {
@@ -264,12 +278,14 @@ struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s1_init_fns = {
 	.caps	= IO_PGTABLE_CAP_CUSTOM_ALLOCATOR,
 	.alloc	= arm_64_lpae_alloc_pgtable_s1,
 	.free	= arm_lpae_free_pgtable,
+	.configure = arm_64_lpae_configure_s1,
 };

 struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s2_init_fns = {
 	.caps	= IO_PGTABLE_CAP_CUSTOM_ALLOCATOR,
 	.alloc	= arm_64_lpae_alloc_pgtable_s2,
 	.free	= arm_lpae_free_pgtable,
+	.configure = arm_64_lpae_configure_s2,
 };

 struct io_pgtable_init_fns io_pgtable_arm_32_lpae_s1_init_fns = {
diff --git a/drivers/iommu/io-pgtable.c b/drivers/iommu/io-pgtable.c
index 8841c1487f00..be65f70ec2a6 100644
--- a/drivers/iommu/io-pgtable.c
+++ b/drivers/iommu/io-pgtable.c
@@ -99,3 +99,18 @@ void free_io_pgtable_ops(struct io_pgtable_ops *ops)
 	io_pgtable_init_table[iop->fmt]->free(iop);
 }
 EXPORT_SYMBOL_GPL(free_io_pgtable_ops);
+
+int io_pgtable_configure(struct io_pgtable_cfg *cfg)
+{
+	const struct io_pgtable_init_fns *fns;
+
+	if (cfg->fmt >= IO_PGTABLE_NUM_FMTS)
+		return -EINVAL;
+
+	fns = io_pgtable_init_table[cfg->fmt];
+	if (!fns || !fns->configure)
+		return -EOPNOTSUPP;
+
+	return fns->configure(cfg);
+}
+EXPORT_SYMBOL_GPL(io_pgtable_configure);
diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index d7bfbf351975..f789234c703b 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -49,6 +49,7 @@ struct iommu_flush_ops {
 /**
  * struct io_pgtable_cfg - Configuration data for a set of page tables.
  *
+ * @fmt: Format used for these page tables
  * @quirks: A bitmap of hardware quirks that require some special
  *	action by the low-level page table allocator.
  * @pgsize_bitmap: A bitmap of page sizes supported by this set of page
@@ -62,6 +63,7 @@ struct iommu_flush_ops {
  *	page table walker.
  */
 struct io_pgtable_cfg {
+	enum io_pgtable_fmt fmt;
 	/*
 	 * IO_PGTABLE_QUIRK_ARM_NS: (ARM formats) Set NS and NSTABLE bits in
 	 * stage 1 PTEs, for hardware which insists on validating them
@@ -241,6 +243,17 @@ struct io_pgtable_ops *alloc_io_pgtable_ops(enum io_pgtable_fmt fmt,
  */
 void free_io_pgtable_ops(struct io_pgtable_ops *ops);

+/**
+ * io_pgtable_configure - Create page table config
+ *
+ * @cfg: The page table configuration.
+ *
+ * Initialize @cfg in the same way as alloc_io_pgtable_ops(), without allocating
+ * anything.
+ *
+ * Not all io_pgtable drivers implement this operation.
+ */
+int io_pgtable_configure(struct io_pgtable_cfg *cfg);

 /*
  * Internal structures for page table allocator implementations.
@@ -301,11 +314,13 @@ enum io_pgtable_caps {
  *
  * @alloc: Allocate a set of page tables described by cfg.
  * @free:  Free the page tables associated with iop.
+ * @configure: Create the configuration without allocating anything. Optional.
  * @caps:  Combination of @io_pgtable_caps flags encoding the backend capabilities.
  */
 struct io_pgtable_init_fns {
 	struct io_pgtable *(*alloc)(struct io_pgtable_cfg *cfg, void *cookie);
 	void (*free)(struct io_pgtable *iop);
+	int (*configure)(struct io_pgtable_cfg *cfg);
 	u32 caps;
 };

-- 
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
2002:a05:600c:3b09:b0:436:1b7a:c0b4 with SMTP id 5b1f17b1804b1-4361c3454dbmr64160055e9.1.1734026692209; Thu, 12 Dec 2024 10:04:52 -0800 (PST) Date: Thu, 12 Dec 2024 18:03:28 +0000 In-Reply-To: <20241212180423.1578358-1-smostafa@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20241212180423.1578358-1-smostafa@google.com> X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog Message-ID: <20241212180423.1578358-5-smostafa@google.com> Subject: [RFC PATCH v2 04/58] iommu/arm-smmu-v3: Move some definitions to arm64 include/ From: Mostafa Saleh To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Jean-Philippe Brucker So that the KVM SMMUv3 driver can re-use architectural definitions, command structures and feature bits, move them to the arm64 include/ Signed-off-by: Jean-Philippe Brucker Signed-off-by: Mostafa Saleh --- arch/arm64/include/asm/arm-smmu-v3-common.h | 547 ++++++++++++++++++++ drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 540 +------------------ 2 files changed, 549 insertions(+), 538 deletions(-) create mode 100644 arch/arm64/include/asm/arm-smmu-v3-common.h diff --git a/arch/arm64/include/asm/arm-smmu-v3-common.h b/arch/arm64/inclu= de/asm/arm-smmu-v3-common.h new file mode 100644 index 000000000000..e6e339248816 --- /dev/null +++ b/arch/arm64/include/asm/arm-smmu-v3-common.h @@ -0,0 +1,547 @@ +/* SPDX-License-Identifier: 
GPL-2.0-only */ +#ifndef _ARM_SMMU_V3_COMMON_H +#define _ARM_SMMU_V3_COMMON_H + +#include + +/* MMIO registers */ +#define ARM_SMMU_IDR0 0x0 +#define IDR0_ST_LVL GENMASK(28, 27) +#define IDR0_ST_LVL_2LVL 1 +#define IDR0_STALL_MODEL GENMASK(25, 24) +#define IDR0_STALL_MODEL_STALL 0 +#define IDR0_STALL_MODEL_FORCE 2 +#define IDR0_TTENDIAN GENMASK(22, 21) +#define IDR0_TTENDIAN_MIXED 0 +#define IDR0_TTENDIAN_LE 2 +#define IDR0_TTENDIAN_BE 3 +#define IDR0_CD2L (1 << 19) +#define IDR0_VMID16 (1 << 18) +#define IDR0_PRI (1 << 16) +#define IDR0_SEV (1 << 14) +#define IDR0_MSI (1 << 13) +#define IDR0_ASID16 (1 << 12) +#define IDR0_ATS (1 << 10) +#define IDR0_HYP (1 << 9) +#define IDR0_HTTU GENMASK(7, 6) +#define IDR0_HTTU_ACCESS 1 +#define IDR0_HTTU_ACCESS_DIRTY 2 +#define IDR0_COHACC (1 << 4) +#define IDR0_TTF GENMASK(3, 2) +#define IDR0_TTF_AARCH64 2 +#define IDR0_TTF_AARCH32_64 3 +#define IDR0_S1P (1 << 1) +#define IDR0_S2P (1 << 0) + +#define ARM_SMMU_IDR1 0x4 +#define IDR1_TABLES_PRESET (1 << 30) +#define IDR1_QUEUES_PRESET (1 << 29) +#define IDR1_REL (1 << 28) +#define IDR1_ATTR_TYPES_OVR (1 << 27) +#define IDR1_CMDQS GENMASK(25, 21) +#define IDR1_EVTQS GENMASK(20, 16) +#define IDR1_PRIQS GENMASK(15, 11) +#define IDR1_SSIDSIZE GENMASK(10, 6) +#define IDR1_SIDSIZE GENMASK(5, 0) + +#define ARM_SMMU_IDR3 0xc +#define IDR3_RIL (1 << 10) + +#define ARM_SMMU_IDR5 0x14 +#define IDR5_STALL_MAX GENMASK(31, 16) +#define IDR5_GRAN64K (1 << 6) +#define IDR5_GRAN16K (1 << 5) +#define IDR5_GRAN4K (1 << 4) +#define IDR5_OAS GENMASK(2, 0) +#define IDR5_OAS_32_BIT 0 +#define IDR5_OAS_36_BIT 1 +#define IDR5_OAS_40_BIT 2 +#define IDR5_OAS_42_BIT 3 +#define IDR5_OAS_44_BIT 4 +#define IDR5_OAS_48_BIT 5 +#define IDR5_OAS_52_BIT 6 +#define IDR5_VAX GENMASK(11, 10) +#define IDR5_VAX_52_BIT 1 + +#define ARM_SMMU_IIDR 0x18 +#define IIDR_PRODUCTID GENMASK(31, 20) +#define IIDR_VARIANT GENMASK(19, 16) +#define IIDR_REVISION GENMASK(15, 12) +#define IIDR_IMPLEMENTER GENMASK(11, 0) + +#define 
ARM_SMMU_CR0 0x20 +#define CR0_ATSCHK (1 << 4) +#define CR0_CMDQEN (1 << 3) +#define CR0_EVTQEN (1 << 2) +#define CR0_PRIQEN (1 << 1) +#define CR0_SMMUEN (1 << 0) + +#define ARM_SMMU_CR0ACK 0x24 + +#define ARM_SMMU_CR1 0x28 +#define CR1_TABLE_SH GENMASK(11, 10) +#define CR1_TABLE_OC GENMASK(9, 8) +#define CR1_TABLE_IC GENMASK(7, 6) +#define CR1_QUEUE_SH GENMASK(5, 4) +#define CR1_QUEUE_OC GENMASK(3, 2) +#define CR1_QUEUE_IC GENMASK(1, 0) +/* CR1 cacheability fields don't quite follow the usual TCR-style encoding */ +#define CR1_CACHE_NC 0 +#define CR1_CACHE_WB 1 +#define CR1_CACHE_WT 2 + +#define ARM_SMMU_CR2 0x2c +#define CR2_PTM (1 << 2) +#define CR2_RECINVSID (1 << 1) +#define CR2_E2H (1 << 0) + +#define ARM_SMMU_GBPA 0x44 +#define GBPA_UPDATE (1 << 31) +#define GBPA_ABORT (1 << 20) + +#define ARM_SMMU_IRQ_CTRL 0x50 +#define IRQ_CTRL_EVTQ_IRQEN (1 << 2) +#define IRQ_CTRL_PRIQ_IRQEN (1 << 1) +#define IRQ_CTRL_GERROR_IRQEN (1 << 0) + +#define ARM_SMMU_IRQ_CTRLACK 0x54 + +#define ARM_SMMU_GERROR 0x60 +#define GERROR_SFM_ERR (1 << 8) +#define GERROR_MSI_GERROR_ABT_ERR (1 << 7) +#define GERROR_MSI_PRIQ_ABT_ERR (1 << 6) +#define GERROR_MSI_EVTQ_ABT_ERR (1 << 5) +#define GERROR_MSI_CMDQ_ABT_ERR (1 << 4) +#define GERROR_PRIQ_ABT_ERR (1 << 3) +#define GERROR_EVTQ_ABT_ERR (1 << 2) +#define GERROR_CMDQ_ERR (1 << 0) +#define GERROR_ERR_MASK 0x1fd + +#define ARM_SMMU_GERRORN 0x64 + +#define ARM_SMMU_GERROR_IRQ_CFG0 0x68 +#define ARM_SMMU_GERROR_IRQ_CFG1 0x70 +#define ARM_SMMU_GERROR_IRQ_CFG2 0x74 + +#define ARM_SMMU_STRTAB_BASE 0x80 +#define STRTAB_BASE_RA (1UL << 62) +#define STRTAB_BASE_ADDR_MASK GENMASK_ULL(51, 6) + +#define ARM_SMMU_STRTAB_BASE_CFG 0x88 +#define STRTAB_BASE_CFG_FMT GENMASK(17, 16) +#define STRTAB_BASE_CFG_FMT_LINEAR 0 +#define STRTAB_BASE_CFG_FMT_2LVL 1 +#define STRTAB_BASE_CFG_SPLIT GENMASK(10, 6) +#define STRTAB_BASE_CFG_LOG2SIZE GENMASK(5, 0) + +#define ARM_SMMU_CMDQ_BASE 0x90 +#define ARM_SMMU_CMDQ_PROD 0x98 +#define ARM_SMMU_CMDQ_CONS 0x9c + 
+#define ARM_SMMU_EVTQ_BASE 0xa0 +#define ARM_SMMU_EVTQ_PROD 0xa8 +#define ARM_SMMU_EVTQ_CONS 0xac +#define ARM_SMMU_EVTQ_IRQ_CFG0 0xb0 +#define ARM_SMMU_EVTQ_IRQ_CFG1 0xb8 +#define ARM_SMMU_EVTQ_IRQ_CFG2 0xbc + +#define ARM_SMMU_PRIQ_BASE 0xc0 +#define ARM_SMMU_PRIQ_PROD 0xc8 +#define ARM_SMMU_PRIQ_CONS 0xcc +#define ARM_SMMU_PRIQ_IRQ_CFG0 0xd0 +#define ARM_SMMU_PRIQ_IRQ_CFG1 0xd8 +#define ARM_SMMU_PRIQ_IRQ_CFG2 0xdc + +#define ARM_SMMU_REG_SZ 0xe00 + +/* Common MSI config fields */ +#define MSI_CFG0_ADDR_MASK GENMASK_ULL(51, 2) +#define MSI_CFG2_SH GENMASK(5, 4) +#define MSI_CFG2_MEMATTR GENMASK(3, 0) + +/* Common memory attribute values */ +#define ARM_SMMU_SH_NSH 0 +#define ARM_SMMU_SH_OSH 2 +#define ARM_SMMU_SH_ISH 3 +#define ARM_SMMU_MEMATTR_DEVICE_nGnRE 0x1 +#define ARM_SMMU_MEMATTR_OIWB 0xf + +#define Q_BASE_RWA (1UL << 62) +#define Q_BASE_ADDR_MASK GENMASK_ULL(51, 5) +#define Q_BASE_LOG2SIZE GENMASK(4, 0) + +/* + * Stream table. + * + * Linear: Enough to cover 1 << IDR1.SIDSIZE entries + * 2lvl: 128k L1 entries, + * 256 lazy entries per table (each table covers a PCI bus) + */ +#define STRTAB_SPLIT 8 + +#define STRTAB_L1_DESC_SPAN GENMASK_ULL(4, 0) +#define STRTAB_L1_DESC_L2PTR_MASK GENMASK_ULL(51, 6) + +#define STRTAB_STE_DWORDS 8 + +struct arm_smmu_ste { + __le64 data[STRTAB_STE_DWORDS]; +}; + +#define STRTAB_NUM_L2_STES (1 << STRTAB_SPLIT) +struct arm_smmu_strtab_l2 { + struct arm_smmu_ste stes[STRTAB_NUM_L2_STES]; +}; + +struct arm_smmu_strtab_l1 { + __le64 l2ptr; +}; +#define STRTAB_MAX_L1_ENTRIES (1 << 17) + +static inline u32 arm_smmu_strtab_l1_idx(u32 sid) +{ + return sid / STRTAB_NUM_L2_STES; +} + +static inline u32 arm_smmu_strtab_l2_idx(u32 sid) +{ + return sid % STRTAB_NUM_L2_STES; +} + +#define STRTAB_STE_0_V (1UL << 0) +#define STRTAB_STE_0_CFG GENMASK_ULL(3, 1) +#define STRTAB_STE_0_CFG_ABORT 0 +#define STRTAB_STE_0_CFG_BYPASS 4 +#define STRTAB_STE_0_CFG_S1_TRANS 5 +#define STRTAB_STE_0_CFG_S2_TRANS 6 + +#define STRTAB_STE_0_S1FMT 
GENMASK_ULL(5, 4) +#define STRTAB_STE_0_S1FMT_LINEAR 0 +#define STRTAB_STE_0_S1FMT_64K_L2 2 +#define STRTAB_STE_0_S1CTXPTR_MASK GENMASK_ULL(51, 6) +#define STRTAB_STE_0_S1CDMAX GENMASK_ULL(63, 59) + +#define STRTAB_STE_1_S1DSS GENMASK_ULL(1, 0) +#define STRTAB_STE_1_S1DSS_TERMINATE 0x0 +#define STRTAB_STE_1_S1DSS_BYPASS 0x1 +#define STRTAB_STE_1_S1DSS_SSID0 0x2 + +#define STRTAB_STE_1_S1C_CACHE_NC 0UL +#define STRTAB_STE_1_S1C_CACHE_WBRA 1UL +#define STRTAB_STE_1_S1C_CACHE_WT 2UL +#define STRTAB_STE_1_S1C_CACHE_WB 3UL +#define STRTAB_STE_1_S1CIR GENMASK_ULL(3, 2) +#define STRTAB_STE_1_S1COR GENMASK_ULL(5, 4) +#define STRTAB_STE_1_S1CSH GENMASK_ULL(7, 6) + +#define STRTAB_STE_1_S1STALLD (1UL << 27) + +#define STRTAB_STE_1_EATS GENMASK_ULL(29, 28) +#define STRTAB_STE_1_EATS_ABT 0UL +#define STRTAB_STE_1_EATS_TRANS 1UL +#define STRTAB_STE_1_EATS_S1CHK 2UL + +#define STRTAB_STE_1_STRW GENMASK_ULL(31, 30) +#define STRTAB_STE_1_STRW_NSEL1 0UL +#define STRTAB_STE_1_STRW_EL2 2UL + +#define STRTAB_STE_1_SHCFG GENMASK_ULL(45, 44) +#define STRTAB_STE_1_SHCFG_INCOMING 1UL + +#define STRTAB_STE_2_S2VMID GENMASK_ULL(15, 0) +#define STRTAB_STE_2_VTCR GENMASK_ULL(50, 32) +#define STRTAB_STE_2_VTCR_S2T0SZ GENMASK_ULL(5, 0) +#define STRTAB_STE_2_VTCR_S2SL0 GENMASK_ULL(7, 6) +#define STRTAB_STE_2_VTCR_S2IR0 GENMASK_ULL(9, 8) +#define STRTAB_STE_2_VTCR_S2OR0 GENMASK_ULL(11, 10) +#define STRTAB_STE_2_VTCR_S2SH0 GENMASK_ULL(13, 12) +#define STRTAB_STE_2_VTCR_S2TG GENMASK_ULL(15, 14) +#define STRTAB_STE_2_VTCR_S2PS GENMASK_ULL(18, 16) +#define STRTAB_STE_2_S2AA64 (1UL << 51) +#define STRTAB_STE_2_S2ENDI (1UL << 52) +#define STRTAB_STE_2_S2PTW (1UL << 54) +#define STRTAB_STE_2_S2S (1UL << 57) +#define STRTAB_STE_2_S2R (1UL << 58) + +#define STRTAB_STE_3_S2TTB_MASK GENMASK_ULL(51, 4) + +/* + * Context descriptors. + * + * Linear: when less than 1024 SSIDs are supported + * 2lvl: at most 1024 L1 entries, + * 1024 lazy entries per table. 
+ */ +#define CTXDESC_L2_ENTRIES 1024 + +#define CTXDESC_L1_DESC_V (1UL << 0) +#define CTXDESC_L1_DESC_L2PTR_MASK GENMASK_ULL(51, 12) + +#define CTXDESC_CD_DWORDS 8 + +struct arm_smmu_cd { + __le64 data[CTXDESC_CD_DWORDS]; +}; + +struct arm_smmu_cdtab_l2 { + struct arm_smmu_cd cds[CTXDESC_L2_ENTRIES]; +}; + +struct arm_smmu_cdtab_l1 { + __le64 l2ptr; +}; + +static inline unsigned int arm_smmu_cdtab_l1_idx(unsigned int ssid) +{ + return ssid / CTXDESC_L2_ENTRIES; +} + +static inline unsigned int arm_smmu_cdtab_l2_idx(unsigned int ssid) +{ + return ssid % CTXDESC_L2_ENTRIES; +} + +#define CTXDESC_CD_0_TCR_T0SZ GENMASK_ULL(5, 0) +#define CTXDESC_CD_0_TCR_TG0 GENMASK_ULL(7, 6) +#define CTXDESC_CD_0_TCR_IRGN0 GENMASK_ULL(9, 8) +#define CTXDESC_CD_0_TCR_ORGN0 GENMASK_ULL(11, 10) +#define CTXDESC_CD_0_TCR_SH0 GENMASK_ULL(13, 12) +#define CTXDESC_CD_0_TCR_EPD0 (1ULL << 14) +#define CTXDESC_CD_0_TCR_EPD1 (1ULL << 30) + +#define CTXDESC_CD_0_ENDI (1UL << 15) +#define CTXDESC_CD_0_V (1UL << 31) + +#define CTXDESC_CD_0_TCR_IPS GENMASK_ULL(34, 32) +#define CTXDESC_CD_0_TCR_TBI0 (1ULL << 38) + +#define CTXDESC_CD_0_TCR_HA (1UL << 43) +#define CTXDESC_CD_0_TCR_HD (1UL << 42) + +#define CTXDESC_CD_0_AA64 (1UL << 41) +#define CTXDESC_CD_0_S (1UL << 44) +#define CTXDESC_CD_0_R (1UL << 45) +#define CTXDESC_CD_0_A (1UL << 46) +#define CTXDESC_CD_0_ASET (1UL << 47) +#define CTXDESC_CD_0_ASID GENMASK_ULL(63, 48) + +#define CTXDESC_CD_1_TTB0_MASK GENMASK_ULL(51, 4) + +/* + * When the SMMU only supports linear context descriptor tables, pick a + * reasonable size limit (64kB). 
+ */ +#define CTXDESC_LINEAR_CDMAX ilog2(SZ_64K / sizeof(struct arm_smmu_cd)) + +/* Command queue */ +#define CMDQ_ENT_SZ_SHIFT 4 +#define CMDQ_ENT_DWORDS ((1 << CMDQ_ENT_SZ_SHIFT) >> 3) +#define CMDQ_MAX_SZ_SHIFT (Q_MAX_SZ_SHIFT - CMDQ_ENT_SZ_SHIFT) + +#define CMDQ_CONS_ERR GENMASK(30, 24) +#define CMDQ_ERR_CERROR_NONE_IDX 0 +#define CMDQ_ERR_CERROR_ILL_IDX 1 +#define CMDQ_ERR_CERROR_ABT_IDX 2 +#define CMDQ_ERR_CERROR_ATC_INV_IDX 3 + +#define CMDQ_0_OP GENMASK_ULL(7, 0) +#define CMDQ_0_SSV (1UL << 11) + +#define CMDQ_PREFETCH_0_SID GENMASK_ULL(63, 32) +#define CMDQ_PREFETCH_1_SIZE GENMASK_ULL(4, 0) +#define CMDQ_PREFETCH_1_ADDR_MASK GENMASK_ULL(63, 12) + +#define CMDQ_CFGI_0_SSID GENMASK_ULL(31, 12) +#define CMDQ_CFGI_0_SID GENMASK_ULL(63, 32) +#define CMDQ_CFGI_1_LEAF (1UL << 0) +#define CMDQ_CFGI_1_RANGE GENMASK_ULL(4, 0) + +#define CMDQ_TLBI_0_NUM GENMASK_ULL(16, 12) +#define CMDQ_TLBI_RANGE_NUM_MAX 31 +#define CMDQ_TLBI_0_SCALE GENMASK_ULL(24, 20) +#define CMDQ_TLBI_0_VMID GENMASK_ULL(47, 32) +#define CMDQ_TLBI_0_ASID GENMASK_ULL(63, 48) +#define CMDQ_TLBI_1_LEAF (1UL << 0) +#define CMDQ_TLBI_1_TTL GENMASK_ULL(9, 8) +#define CMDQ_TLBI_1_TG GENMASK_ULL(11, 10) +#define CMDQ_TLBI_1_VA_MASK GENMASK_ULL(63, 12) +#define CMDQ_TLBI_1_IPA_MASK GENMASK_ULL(51, 12) + +#define CMDQ_ATC_0_SSID GENMASK_ULL(31, 12) +#define CMDQ_ATC_0_SID GENMASK_ULL(63, 32) +#define CMDQ_ATC_0_GLOBAL (1UL << 9) +#define CMDQ_ATC_1_SIZE GENMASK_ULL(5, 0) +#define CMDQ_ATC_1_ADDR_MASK GENMASK_ULL(63, 12) + +#define CMDQ_PRI_0_SSID GENMASK_ULL(31, 12) +#define CMDQ_PRI_0_SID GENMASK_ULL(63, 32) +#define CMDQ_PRI_1_GRPID GENMASK_ULL(8, 0) +#define CMDQ_PRI_1_RESP GENMASK_ULL(13, 12) + +#define CMDQ_RESUME_0_RESP_TERM 0UL +#define CMDQ_RESUME_0_RESP_RETRY 1UL +#define CMDQ_RESUME_0_RESP_ABORT 2UL +#define CMDQ_RESUME_0_RESP GENMASK_ULL(13, 12) +#define CMDQ_RESUME_0_SID GENMASK_ULL(63, 32) +#define CMDQ_RESUME_1_STAG GENMASK_ULL(15, 0) + +#define CMDQ_SYNC_0_CS GENMASK_ULL(13, 12) +#define 
CMDQ_SYNC_0_CS_NONE 0 +#define CMDQ_SYNC_0_CS_IRQ 1 +#define CMDQ_SYNC_0_CS_SEV 2 +#define CMDQ_SYNC_0_MSH GENMASK_ULL(23, 22) +#define CMDQ_SYNC_0_MSIATTR GENMASK_ULL(27, 24) +#define CMDQ_SYNC_0_MSIDATA GENMASK_ULL(63, 32) +#define CMDQ_SYNC_1_MSIADDR_MASK GENMASK_ULL(51, 2) + +/* Event queue */ +#define EVTQ_ENT_SZ_SHIFT 5 +#define EVTQ_ENT_DWORDS ((1 << EVTQ_ENT_SZ_SHIFT) >> 3) +#define EVTQ_MAX_SZ_SHIFT (Q_MAX_SZ_SHIFT - EVTQ_ENT_SZ_SHIFT) + +#define EVTQ_0_ID GENMASK_ULL(7, 0) + +#define EVT_ID_TRANSLATION_FAULT 0x10 +#define EVT_ID_ADDR_SIZE_FAULT 0x11 +#define EVT_ID_ACCESS_FAULT 0x12 +#define EVT_ID_PERMISSION_FAULT 0x13 + +#define EVTQ_0_SSV (1UL << 11) +#define EVTQ_0_SSID GENMASK_ULL(31, 12) +#define EVTQ_0_SID GENMASK_ULL(63, 32) +#define EVTQ_1_STAG GENMASK_ULL(15, 0) +#define EVTQ_1_STALL (1UL << 31) +#define EVTQ_1_PnU (1UL << 33) +#define EVTQ_1_InD (1UL << 34) +#define EVTQ_1_RnW (1UL << 35) +#define EVTQ_1_S2 (1UL << 39) +#define EVTQ_1_CLASS GENMASK_ULL(41, 40) +#define EVTQ_1_TT_READ (1UL << 44) +#define EVTQ_2_ADDR GENMASK_ULL(63, 0) +#define EVTQ_3_IPA GENMASK_ULL(51, 12) + +/* PRI queue */ +#define PRIQ_ENT_SZ_SHIFT 4 +#define PRIQ_ENT_DWORDS ((1 << PRIQ_ENT_SZ_SHIFT) >> 3) +#define PRIQ_MAX_SZ_SHIFT (Q_MAX_SZ_SHIFT - PRIQ_ENT_SZ_SHIFT) + +#define PRIQ_0_SID GENMASK_ULL(31, 0) +#define PRIQ_0_SSID GENMASK_ULL(51, 32) +#define PRIQ_0_PERM_PRIV (1UL << 58) +#define PRIQ_0_PERM_EXEC (1UL << 59) +#define PRIQ_0_PERM_READ (1UL << 60) +#define PRIQ_0_PERM_WRITE (1UL << 61) +#define PRIQ_0_PRG_LAST (1UL << 62) +#define PRIQ_0_SSID_V (1UL << 63) + +#define PRIQ_1_PRG_IDX GENMASK_ULL(8, 0) +#define PRIQ_1_ADDR_MASK GENMASK_ULL(63, 12) + +/* Synthesized features */ +#define ARM_SMMU_FEAT_2_LVL_STRTAB (1 << 0) +#define ARM_SMMU_FEAT_2_LVL_CDTAB (1 << 1) +#define ARM_SMMU_FEAT_TT_LE (1 << 2) +#define ARM_SMMU_FEAT_TT_BE (1 << 3) +#define ARM_SMMU_FEAT_PRI (1 << 4) +#define ARM_SMMU_FEAT_ATS (1 << 5) +#define ARM_SMMU_FEAT_SEV (1 << 6) +#define 
ARM_SMMU_FEAT_MSI (1 << 7) +#define ARM_SMMU_FEAT_COHERENCY (1 << 8) +#define ARM_SMMU_FEAT_TRANS_S1 (1 << 9) +#define ARM_SMMU_FEAT_TRANS_S2 (1 << 10) +#define ARM_SMMU_FEAT_STALLS (1 << 11) +#define ARM_SMMU_FEAT_HYP (1 << 12) +#define ARM_SMMU_FEAT_STALL_FORCE (1 << 13) +#define ARM_SMMU_FEAT_VAX (1 << 14) +#define ARM_SMMU_FEAT_RANGE_INV (1 << 15) +#define ARM_SMMU_FEAT_BTM (1 << 16) +#define ARM_SMMU_FEAT_SVA (1 << 17) +#define ARM_SMMU_FEAT_E2H (1 << 18) +#define ARM_SMMU_FEAT_NESTING (1 << 19) +#define ARM_SMMU_FEAT_ATTR_TYPES_OVR (1 << 20) +#define ARM_SMMU_FEAT_HA (1 << 21) +#define ARM_SMMU_FEAT_HD (1 << 22) + +enum pri_resp { + PRI_RESP_DENY = 0, + PRI_RESP_FAIL = 1, + PRI_RESP_SUCC = 2, +}; + +struct arm_smmu_cmdq_ent { + /* Common fields */ + u8 opcode; + bool substream_valid; + + /* Command-specific fields */ + union { + #define CMDQ_OP_PREFETCH_CFG 0x1 + struct { + u32 sid; + } prefetch; + + #define CMDQ_OP_CFGI_STE 0x3 + #define CMDQ_OP_CFGI_ALL 0x4 + #define CMDQ_OP_CFGI_CD 0x5 + #define CMDQ_OP_CFGI_CD_ALL 0x6 + struct { + u32 sid; + u32 ssid; + union { + bool leaf; + u8 span; + }; + } cfgi; + + #define CMDQ_OP_TLBI_NH_ASID 0x11 + #define CMDQ_OP_TLBI_NH_VA 0x12 + #define CMDQ_OP_TLBI_EL2_ALL 0x20 + #define CMDQ_OP_TLBI_EL2_ASID 0x21 + #define CMDQ_OP_TLBI_EL2_VA 0x22 + #define CMDQ_OP_TLBI_S12_VMALL 0x28 + #define CMDQ_OP_TLBI_S2_IPA 0x2a + #define CMDQ_OP_TLBI_NSNH_ALL 0x30 + struct { + u8 num; + u8 scale; + u16 asid; + u16 vmid; + bool leaf; + u8 ttl; + u8 tg; + u64 addr; + } tlbi; + + #define CMDQ_OP_ATC_INV 0x40 + #define ATC_INV_SIZE_ALL 52 + struct { + u32 sid; + u32 ssid; + u64 addr; + u8 size; + bool global; + } atc; + + #define CMDQ_OP_PRI_RESP 0x41 + struct { + u32 sid; + u32 ssid; + u16 grpid; + enum pri_resp resp; + } pri; + + #define CMDQ_OP_RESUME 0x44 + struct { + u32 sid; + u16 stag; + u8 resp; + } resume; + + #define CMDQ_OP_CMD_SYNC 0x46 + struct { + u64 msiaddr; + } sync; + }; +}; + +#endif /* _ARM_SMMU_V3_COMMON_H */ 
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h index 1e9952ca989f..fc1b8c2af2a2 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h @@ -8,7 +8,6 @@ #ifndef _ARM_SMMU_V3_H #define _ARM_SMMU_V3_H -#include #include #include #include @@ -16,167 +15,7 @@ struct arm_smmu_device; -/* MMIO registers */ -#define ARM_SMMU_IDR0 0x0 -#define IDR0_ST_LVL GENMASK(28, 27) -#define IDR0_ST_LVL_2LVL 1 -#define IDR0_STALL_MODEL GENMASK(25, 24) -#define IDR0_STALL_MODEL_STALL 0 -#define IDR0_STALL_MODEL_FORCE 2 -#define IDR0_TTENDIAN GENMASK(22, 21) -#define IDR0_TTENDIAN_MIXED 0 -#define IDR0_TTENDIAN_LE 2 -#define IDR0_TTENDIAN_BE 3 -#define IDR0_CD2L (1 << 19) -#define IDR0_VMID16 (1 << 18) -#define IDR0_PRI (1 << 16) -#define IDR0_SEV (1 << 14) -#define IDR0_MSI (1 << 13) -#define IDR0_ASID16 (1 << 12) -#define IDR0_ATS (1 << 10) -#define IDR0_HYP (1 << 9) -#define IDR0_HTTU GENMASK(7, 6) -#define IDR0_HTTU_ACCESS 1 -#define IDR0_HTTU_ACCESS_DIRTY 2 -#define IDR0_COHACC (1 << 4) -#define IDR0_TTF GENMASK(3, 2) -#define IDR0_TTF_AARCH64 2 -#define IDR0_TTF_AARCH32_64 3 -#define IDR0_S1P (1 << 1) -#define IDR0_S2P (1 << 0) - -#define ARM_SMMU_IDR1 0x4 -#define IDR1_TABLES_PRESET (1 << 30) -#define IDR1_QUEUES_PRESET (1 << 29) -#define IDR1_REL (1 << 28) -#define IDR1_ATTR_TYPES_OVR (1 << 27) -#define IDR1_CMDQS GENMASK(25, 21) -#define IDR1_EVTQS GENMASK(20, 16) -#define IDR1_PRIQS GENMASK(15, 11) -#define IDR1_SSIDSIZE GENMASK(10, 6) -#define IDR1_SIDSIZE GENMASK(5, 0) - -#define ARM_SMMU_IDR3 0xc -#define IDR3_RIL (1 << 10) - -#define ARM_SMMU_IDR5 0x14 -#define IDR5_STALL_MAX GENMASK(31, 16) -#define IDR5_GRAN64K (1 << 6) -#define IDR5_GRAN16K (1 << 5) -#define IDR5_GRAN4K (1 << 4) -#define IDR5_OAS GENMASK(2, 0) -#define IDR5_OAS_32_BIT 0 -#define IDR5_OAS_36_BIT 1 -#define IDR5_OAS_40_BIT 2 -#define IDR5_OAS_42_BIT 3 -#define IDR5_OAS_44_BIT 4 -#define 
IDR5_OAS_48_BIT 5 -#define IDR5_OAS_52_BIT 6 -#define IDR5_VAX GENMASK(11, 10) -#define IDR5_VAX_52_BIT 1 - -#define ARM_SMMU_IIDR 0x18 -#define IIDR_PRODUCTID GENMASK(31, 20) -#define IIDR_VARIANT GENMASK(19, 16) -#define IIDR_REVISION GENMASK(15, 12) -#define IIDR_IMPLEMENTER GENMASK(11, 0) - -#define ARM_SMMU_CR0 0x20 -#define CR0_ATSCHK (1 << 4) -#define CR0_CMDQEN (1 << 3) -#define CR0_EVTQEN (1 << 2) -#define CR0_PRIQEN (1 << 1) -#define CR0_SMMUEN (1 << 0) - -#define ARM_SMMU_CR0ACK 0x24 - -#define ARM_SMMU_CR1 0x28 -#define CR1_TABLE_SH GENMASK(11, 10) -#define CR1_TABLE_OC GENMASK(9, 8) -#define CR1_TABLE_IC GENMASK(7, 6) -#define CR1_QUEUE_SH GENMASK(5, 4) -#define CR1_QUEUE_OC GENMASK(3, 2) -#define CR1_QUEUE_IC GENMASK(1, 0) -/* CR1 cacheability fields don't quite follow the usual TCR-style encoding= */ -#define CR1_CACHE_NC 0 -#define CR1_CACHE_WB 1 -#define CR1_CACHE_WT 2 - -#define ARM_SMMU_CR2 0x2c -#define CR2_PTM (1 << 2) -#define CR2_RECINVSID (1 << 1) -#define CR2_E2H (1 << 0) - -#define ARM_SMMU_GBPA 0x44 -#define GBPA_UPDATE (1 << 31) -#define GBPA_ABORT (1 << 20) - -#define ARM_SMMU_IRQ_CTRL 0x50 -#define IRQ_CTRL_EVTQ_IRQEN (1 << 2) -#define IRQ_CTRL_PRIQ_IRQEN (1 << 1) -#define IRQ_CTRL_GERROR_IRQEN (1 << 0) - -#define ARM_SMMU_IRQ_CTRLACK 0x54 - -#define ARM_SMMU_GERROR 0x60 -#define GERROR_SFM_ERR (1 << 8) -#define GERROR_MSI_GERROR_ABT_ERR (1 << 7) -#define GERROR_MSI_PRIQ_ABT_ERR (1 << 6) -#define GERROR_MSI_EVTQ_ABT_ERR (1 << 5) -#define GERROR_MSI_CMDQ_ABT_ERR (1 << 4) -#define GERROR_PRIQ_ABT_ERR (1 << 3) -#define GERROR_EVTQ_ABT_ERR (1 << 2) -#define GERROR_CMDQ_ERR (1 << 0) -#define GERROR_ERR_MASK 0x1fd - -#define ARM_SMMU_GERRORN 0x64 - -#define ARM_SMMU_GERROR_IRQ_CFG0 0x68 -#define ARM_SMMU_GERROR_IRQ_CFG1 0x70 -#define ARM_SMMU_GERROR_IRQ_CFG2 0x74 - -#define ARM_SMMU_STRTAB_BASE 0x80 -#define STRTAB_BASE_RA (1UL << 62) -#define STRTAB_BASE_ADDR_MASK GENMASK_ULL(51, 6) - -#define ARM_SMMU_STRTAB_BASE_CFG 0x88 -#define 
STRTAB_BASE_CFG_FMT GENMASK(17, 16) -#define STRTAB_BASE_CFG_FMT_LINEAR 0 -#define STRTAB_BASE_CFG_FMT_2LVL 1 -#define STRTAB_BASE_CFG_SPLIT GENMASK(10, 6) -#define STRTAB_BASE_CFG_LOG2SIZE GENMASK(5, 0) - -#define ARM_SMMU_CMDQ_BASE 0x90 -#define ARM_SMMU_CMDQ_PROD 0x98 -#define ARM_SMMU_CMDQ_CONS 0x9c - -#define ARM_SMMU_EVTQ_BASE 0xa0 -#define ARM_SMMU_EVTQ_PROD 0xa8 -#define ARM_SMMU_EVTQ_CONS 0xac -#define ARM_SMMU_EVTQ_IRQ_CFG0 0xb0 -#define ARM_SMMU_EVTQ_IRQ_CFG1 0xb8 -#define ARM_SMMU_EVTQ_IRQ_CFG2 0xbc - -#define ARM_SMMU_PRIQ_BASE 0xc0 -#define ARM_SMMU_PRIQ_PROD 0xc8 -#define ARM_SMMU_PRIQ_CONS 0xcc -#define ARM_SMMU_PRIQ_IRQ_CFG0 0xd0 -#define ARM_SMMU_PRIQ_IRQ_CFG1 0xd8 -#define ARM_SMMU_PRIQ_IRQ_CFG2 0xdc - -#define ARM_SMMU_REG_SZ 0xe00 - -/* Common MSI config fields */ -#define MSI_CFG0_ADDR_MASK GENMASK_ULL(51, 2) -#define MSI_CFG2_SH GENMASK(5, 4) -#define MSI_CFG2_MEMATTR GENMASK(3, 0) - -/* Common memory attribute values */ -#define ARM_SMMU_SH_NSH 0 -#define ARM_SMMU_SH_OSH 2 -#define ARM_SMMU_SH_ISH 3 -#define ARM_SMMU_MEMATTR_DEVICE_nGnRE 0x1 -#define ARM_SMMU_MEMATTR_OIWB 0xf +#include #define Q_IDX(llq, p) ((p) & ((1 << (llq)->max_n_shift) - 1)) #define Q_WRP(llq, p) ((p) & (1 << (llq)->max_n_shift)) @@ -186,10 +25,6 @@ struct arm_smmu_device; Q_IDX(&((q)->llq), p) * \ (q)->ent_dwords) -#define Q_BASE_RWA (1UL << 62) -#define Q_BASE_ADDR_MASK GENMASK_ULL(51, 5) -#define Q_BASE_LOG2SIZE GENMASK(4, 0) - /* Ensure DMA allocations are naturally aligned */ #ifdef CONFIG_CMA_ALIGNMENT #define Q_MAX_SZ_SHIFT (PAGE_SHIFT + CONFIG_CMA_ALIGNMENT) @@ -197,180 +32,6 @@ struct arm_smmu_device; #define Q_MAX_SZ_SHIFT (PAGE_SHIFT + MAX_PAGE_ORDER) #endif -/* - * Stream table. 
- * - * Linear: Enough to cover 1 << IDR1.SIDSIZE entries - * 2lvl: 128k L1 entries, - * 256 lazy entries per table (each table covers a PCI bus) - */ -#define STRTAB_SPLIT 8 - -#define STRTAB_L1_DESC_SPAN GENMASK_ULL(4, 0) -#define STRTAB_L1_DESC_L2PTR_MASK GENMASK_ULL(51, 6) - -#define STRTAB_STE_DWORDS 8 - -struct arm_smmu_ste { - __le64 data[STRTAB_STE_DWORDS]; -}; - -#define STRTAB_NUM_L2_STES (1 << STRTAB_SPLIT) -struct arm_smmu_strtab_l2 { - struct arm_smmu_ste stes[STRTAB_NUM_L2_STES]; -}; - -struct arm_smmu_strtab_l1 { - __le64 l2ptr; -}; -#define STRTAB_MAX_L1_ENTRIES (1 << 17) - -static inline u32 arm_smmu_strtab_l1_idx(u32 sid) -{ - return sid / STRTAB_NUM_L2_STES; -} - -static inline u32 arm_smmu_strtab_l2_idx(u32 sid) -{ - return sid % STRTAB_NUM_L2_STES; -} - -#define STRTAB_STE_0_V (1UL << 0) -#define STRTAB_STE_0_CFG GENMASK_ULL(3, 1) -#define STRTAB_STE_0_CFG_ABORT 0 -#define STRTAB_STE_0_CFG_BYPASS 4 -#define STRTAB_STE_0_CFG_S1_TRANS 5 -#define STRTAB_STE_0_CFG_S2_TRANS 6 - -#define STRTAB_STE_0_S1FMT GENMASK_ULL(5, 4) -#define STRTAB_STE_0_S1FMT_LINEAR 0 -#define STRTAB_STE_0_S1FMT_64K_L2 2 -#define STRTAB_STE_0_S1CTXPTR_MASK GENMASK_ULL(51, 6) -#define STRTAB_STE_0_S1CDMAX GENMASK_ULL(63, 59) - -#define STRTAB_STE_1_S1DSS GENMASK_ULL(1, 0) -#define STRTAB_STE_1_S1DSS_TERMINATE 0x0 -#define STRTAB_STE_1_S1DSS_BYPASS 0x1 -#define STRTAB_STE_1_S1DSS_SSID0 0x2 - -#define STRTAB_STE_1_S1C_CACHE_NC 0UL -#define STRTAB_STE_1_S1C_CACHE_WBRA 1UL -#define STRTAB_STE_1_S1C_CACHE_WT 2UL -#define STRTAB_STE_1_S1C_CACHE_WB 3UL -#define STRTAB_STE_1_S1CIR GENMASK_ULL(3, 2) -#define STRTAB_STE_1_S1COR GENMASK_ULL(5, 4) -#define STRTAB_STE_1_S1CSH GENMASK_ULL(7, 6) - -#define STRTAB_STE_1_S1STALLD (1UL << 27) - -#define STRTAB_STE_1_EATS GENMASK_ULL(29, 28) -#define STRTAB_STE_1_EATS_ABT 0UL -#define STRTAB_STE_1_EATS_TRANS 1UL -#define STRTAB_STE_1_EATS_S1CHK 2UL - -#define STRTAB_STE_1_STRW GENMASK_ULL(31, 30) -#define STRTAB_STE_1_STRW_NSEL1 0UL -#define 
STRTAB_STE_1_STRW_EL2 2UL - -#define STRTAB_STE_1_SHCFG GENMASK_ULL(45, 44) -#define STRTAB_STE_1_SHCFG_INCOMING 1UL - -#define STRTAB_STE_2_S2VMID GENMASK_ULL(15, 0) -#define STRTAB_STE_2_VTCR GENMASK_ULL(50, 32) -#define STRTAB_STE_2_VTCR_S2T0SZ GENMASK_ULL(5, 0) -#define STRTAB_STE_2_VTCR_S2SL0 GENMASK_ULL(7, 6) -#define STRTAB_STE_2_VTCR_S2IR0 GENMASK_ULL(9, 8) -#define STRTAB_STE_2_VTCR_S2OR0 GENMASK_ULL(11, 10) -#define STRTAB_STE_2_VTCR_S2SH0 GENMASK_ULL(13, 12) -#define STRTAB_STE_2_VTCR_S2TG GENMASK_ULL(15, 14) -#define STRTAB_STE_2_VTCR_S2PS GENMASK_ULL(18, 16) -#define STRTAB_STE_2_S2AA64 (1UL << 51) -#define STRTAB_STE_2_S2ENDI (1UL << 52) -#define STRTAB_STE_2_S2PTW (1UL << 54) -#define STRTAB_STE_2_S2S (1UL << 57) -#define STRTAB_STE_2_S2R (1UL << 58) - -#define STRTAB_STE_3_S2TTB_MASK GENMASK_ULL(51, 4) - -/* - * Context descriptors. - * - * Linear: when less than 1024 SSIDs are supported - * 2lvl: at most 1024 L1 entries, - * 1024 lazy entries per table. - */ -#define CTXDESC_L2_ENTRIES 1024 - -#define CTXDESC_L1_DESC_V (1UL << 0) -#define CTXDESC_L1_DESC_L2PTR_MASK GENMASK_ULL(51, 12) - -#define CTXDESC_CD_DWORDS 8 - -struct arm_smmu_cd { - __le64 data[CTXDESC_CD_DWORDS]; -}; - -struct arm_smmu_cdtab_l2 { - struct arm_smmu_cd cds[CTXDESC_L2_ENTRIES]; -}; - -struct arm_smmu_cdtab_l1 { - __le64 l2ptr; -}; - -static inline unsigned int arm_smmu_cdtab_l1_idx(unsigned int ssid) -{ - return ssid / CTXDESC_L2_ENTRIES; -} - -static inline unsigned int arm_smmu_cdtab_l2_idx(unsigned int ssid) -{ - return ssid % CTXDESC_L2_ENTRIES; -} - -#define CTXDESC_CD_0_TCR_T0SZ GENMASK_ULL(5, 0) -#define CTXDESC_CD_0_TCR_TG0 GENMASK_ULL(7, 6) -#define CTXDESC_CD_0_TCR_IRGN0 GENMASK_ULL(9, 8) -#define CTXDESC_CD_0_TCR_ORGN0 GENMASK_ULL(11, 10) -#define CTXDESC_CD_0_TCR_SH0 GENMASK_ULL(13, 12) -#define CTXDESC_CD_0_TCR_EPD0 (1ULL << 14) -#define CTXDESC_CD_0_TCR_EPD1 (1ULL << 30) - -#define CTXDESC_CD_0_ENDI (1UL << 15) -#define CTXDESC_CD_0_V (1UL << 31) - -#define 
CTXDESC_CD_0_TCR_IPS GENMASK_ULL(34, 32) -#define CTXDESC_CD_0_TCR_TBI0 (1ULL << 38) - -#define CTXDESC_CD_0_TCR_HA (1UL << 43) -#define CTXDESC_CD_0_TCR_HD (1UL << 42) - -#define CTXDESC_CD_0_AA64 (1UL << 41) -#define CTXDESC_CD_0_S (1UL << 44) -#define CTXDESC_CD_0_R (1UL << 45) -#define CTXDESC_CD_0_A (1UL << 46) -#define CTXDESC_CD_0_ASET (1UL << 47) -#define CTXDESC_CD_0_ASID GENMASK_ULL(63, 48) - -#define CTXDESC_CD_1_TTB0_MASK GENMASK_ULL(51, 4) - -/* - * When the SMMU only supports linear context descriptor tables, pick a - * reasonable size limit (64kB). - */ -#define CTXDESC_LINEAR_CDMAX ilog2(SZ_64K / sizeof(struct arm_smmu_cd)) - -/* Command queue */ -#define CMDQ_ENT_SZ_SHIFT 4 -#define CMDQ_ENT_DWORDS ((1 << CMDQ_ENT_SZ_SHIFT) >> 3) -#define CMDQ_MAX_SZ_SHIFT (Q_MAX_SZ_SHIFT - CMDQ_ENT_SZ_SHIFT) - -#define CMDQ_CONS_ERR GENMASK(30, 24) -#define CMDQ_ERR_CERROR_NONE_IDX 0 -#define CMDQ_ERR_CERROR_ILL_IDX 1 -#define CMDQ_ERR_CERROR_ABT_IDX 2 -#define CMDQ_ERR_CERROR_ATC_INV_IDX 3 - #define CMDQ_PROD_OWNED_FLAG Q_OVERFLOW_FLAG /* @@ -380,99 +41,6 @@ static inline unsigned int arm_smmu_cdtab_l2_idx(unsigned int ssid) */ #define CMDQ_BATCH_ENTRIES BITS_PER_LONG -#define CMDQ_0_OP GENMASK_ULL(7, 0) -#define CMDQ_0_SSV (1UL << 11) - -#define CMDQ_PREFETCH_0_SID GENMASK_ULL(63, 32) -#define CMDQ_PREFETCH_1_SIZE GENMASK_ULL(4, 0) -#define CMDQ_PREFETCH_1_ADDR_MASK GENMASK_ULL(63, 12) - -#define CMDQ_CFGI_0_SSID GENMASK_ULL(31, 12) -#define CMDQ_CFGI_0_SID GENMASK_ULL(63, 32) -#define CMDQ_CFGI_1_LEAF (1UL << 0) -#define CMDQ_CFGI_1_RANGE GENMASK_ULL(4, 0) - -#define CMDQ_TLBI_0_NUM GENMASK_ULL(16, 12) -#define CMDQ_TLBI_RANGE_NUM_MAX 31 -#define CMDQ_TLBI_0_SCALE GENMASK_ULL(24, 20) -#define CMDQ_TLBI_0_VMID GENMASK_ULL(47, 32) -#define CMDQ_TLBI_0_ASID GENMASK_ULL(63, 48) -#define CMDQ_TLBI_1_LEAF (1UL << 0) -#define CMDQ_TLBI_1_TTL GENMASK_ULL(9, 8) -#define CMDQ_TLBI_1_TG GENMASK_ULL(11, 10) -#define CMDQ_TLBI_1_VA_MASK GENMASK_ULL(63, 12) 
-#define CMDQ_TLBI_1_IPA_MASK GENMASK_ULL(51, 12) - -#define CMDQ_ATC_0_SSID GENMASK_ULL(31, 12) -#define CMDQ_ATC_0_SID GENMASK_ULL(63, 32) -#define CMDQ_ATC_0_GLOBAL (1UL << 9) -#define CMDQ_ATC_1_SIZE GENMASK_ULL(5, 0) -#define CMDQ_ATC_1_ADDR_MASK GENMASK_ULL(63, 12) - -#define CMDQ_PRI_0_SSID GENMASK_ULL(31, 12) -#define CMDQ_PRI_0_SID GENMASK_ULL(63, 32) -#define CMDQ_PRI_1_GRPID GENMASK_ULL(8, 0) -#define CMDQ_PRI_1_RESP GENMASK_ULL(13, 12) - -#define CMDQ_RESUME_0_RESP_TERM 0UL -#define CMDQ_RESUME_0_RESP_RETRY 1UL -#define CMDQ_RESUME_0_RESP_ABORT 2UL -#define CMDQ_RESUME_0_RESP GENMASK_ULL(13, 12) -#define CMDQ_RESUME_0_SID GENMASK_ULL(63, 32) -#define CMDQ_RESUME_1_STAG GENMASK_ULL(15, 0) - -#define CMDQ_SYNC_0_CS GENMASK_ULL(13, 12) -#define CMDQ_SYNC_0_CS_NONE 0 -#define CMDQ_SYNC_0_CS_IRQ 1 -#define CMDQ_SYNC_0_CS_SEV 2 -#define CMDQ_SYNC_0_MSH GENMASK_ULL(23, 22) -#define CMDQ_SYNC_0_MSIATTR GENMASK_ULL(27, 24) -#define CMDQ_SYNC_0_MSIDATA GENMASK_ULL(63, 32) -#define CMDQ_SYNC_1_MSIADDR_MASK GENMASK_ULL(51, 2) - -/* Event queue */ -#define EVTQ_ENT_SZ_SHIFT 5 -#define EVTQ_ENT_DWORDS ((1 << EVTQ_ENT_SZ_SHIFT) >> 3) -#define EVTQ_MAX_SZ_SHIFT (Q_MAX_SZ_SHIFT - EVTQ_ENT_SZ_SHIFT) - -#define EVTQ_0_ID GENMASK_ULL(7, 0) - -#define EVT_ID_TRANSLATION_FAULT 0x10 -#define EVT_ID_ADDR_SIZE_FAULT 0x11 -#define EVT_ID_ACCESS_FAULT 0x12 -#define EVT_ID_PERMISSION_FAULT 0x13 - -#define EVTQ_0_SSV (1UL << 11) -#define EVTQ_0_SSID GENMASK_ULL(31, 12) -#define EVTQ_0_SID GENMASK_ULL(63, 32) -#define EVTQ_1_STAG GENMASK_ULL(15, 0) -#define EVTQ_1_STALL (1UL << 31) -#define EVTQ_1_PnU (1UL << 33) -#define EVTQ_1_InD (1UL << 34) -#define EVTQ_1_RnW (1UL << 35) -#define EVTQ_1_S2 (1UL << 39) -#define EVTQ_1_CLASS GENMASK_ULL(41, 40) -#define EVTQ_1_TT_READ (1UL << 44) -#define EVTQ_2_ADDR GENMASK_ULL(63, 0) -#define EVTQ_3_IPA GENMASK_ULL(51, 12) - -/* PRI queue */ -#define PRIQ_ENT_SZ_SHIFT 4 -#define PRIQ_ENT_DWORDS ((1 << PRIQ_ENT_SZ_SHIFT) >> 3) -#define 
PRIQ_MAX_SZ_SHIFT (Q_MAX_SZ_SHIFT - PRIQ_ENT_SZ_SHIFT) - -#define PRIQ_0_SID GENMASK_ULL(31, 0) -#define PRIQ_0_SSID GENMASK_ULL(51, 32) -#define PRIQ_0_PERM_PRIV (1UL << 58) -#define PRIQ_0_PERM_EXEC (1UL << 59) -#define PRIQ_0_PERM_READ (1UL << 60) -#define PRIQ_0_PERM_WRITE (1UL << 61) -#define PRIQ_0_PRG_LAST (1UL << 62) -#define PRIQ_0_SSID_V (1UL << 63) - -#define PRIQ_1_PRG_IDX GENMASK_ULL(8, 0) -#define PRIQ_1_ADDR_MASK GENMASK_ULL(63, 12) - /* High-level queue structures */ #define ARM_SMMU_POLL_TIMEOUT_US 1000000 /* 1s! */ #define ARM_SMMU_POLL_SPIN_COUNT 10 @@ -480,88 +48,6 @@ static inline unsigned int arm_smmu_cdtab_l2_idx(unsigned int ssid) #define MSI_IOVA_BASE 0x8000000 #define MSI_IOVA_LENGTH 0x100000 -enum pri_resp { - PRI_RESP_DENY = 0, - PRI_RESP_FAIL = 1, - PRI_RESP_SUCC = 2, -}; - -struct arm_smmu_cmdq_ent { - /* Common fields */ - u8 opcode; - bool substream_valid; - - /* Command-specific fields */ - union { - #define CMDQ_OP_PREFETCH_CFG 0x1 - struct { - u32 sid; - } prefetch; - - #define CMDQ_OP_CFGI_STE 0x3 - #define CMDQ_OP_CFGI_ALL 0x4 - #define CMDQ_OP_CFGI_CD 0x5 - #define CMDQ_OP_CFGI_CD_ALL 0x6 - struct { - u32 sid; - u32 ssid; - union { - bool leaf; - u8 span; - }; - } cfgi; - - #define CMDQ_OP_TLBI_NH_ASID 0x11 - #define CMDQ_OP_TLBI_NH_VA 0x12 - #define CMDQ_OP_TLBI_EL2_ALL 0x20 - #define CMDQ_OP_TLBI_EL2_ASID 0x21 - #define CMDQ_OP_TLBI_EL2_VA 0x22 - #define CMDQ_OP_TLBI_S12_VMALL 0x28 - #define CMDQ_OP_TLBI_S2_IPA 0x2a - #define CMDQ_OP_TLBI_NSNH_ALL 0x30 - struct { - u8 num; - u8 scale; - u16 asid; - u16 vmid; - bool leaf; - u8 ttl; - u8 tg; - u64 addr; - } tlbi; - - #define CMDQ_OP_ATC_INV 0x40 - #define ATC_INV_SIZE_ALL 52 - struct { - u32 sid; - u32 ssid; - u64 addr; - u8 size; - bool global; - } atc; - - #define CMDQ_OP_PRI_RESP 0x41 - struct { - u32 sid; - u32 ssid; - u16 grpid; - enum pri_resp resp; - } pri; - - #define CMDQ_OP_RESUME 0x44 - struct { - u32 sid; - u16 stag; - u8 resp; - } resume; - - #define 
CMDQ_OP_CMD_SYNC 0x46 - struct { - u64 msiaddr; - } sync; - }; -}; - struct arm_smmu_ll_queue { union { u64 val; @@ -703,29 +189,7 @@ struct arm_smmu_device { void __iomem *base; void __iomem *page1; =20 -#define ARM_SMMU_FEAT_2_LVL_STRTAB (1 << 0) -#define ARM_SMMU_FEAT_2_LVL_CDTAB (1 << 1) -#define ARM_SMMU_FEAT_TT_LE (1 << 2) -#define ARM_SMMU_FEAT_TT_BE (1 << 3) -#define ARM_SMMU_FEAT_PRI (1 << 4) -#define ARM_SMMU_FEAT_ATS (1 << 5) -#define ARM_SMMU_FEAT_SEV (1 << 6) -#define ARM_SMMU_FEAT_MSI (1 << 7) -#define ARM_SMMU_FEAT_COHERENCY (1 << 8) -#define ARM_SMMU_FEAT_TRANS_S1 (1 << 9) -#define ARM_SMMU_FEAT_TRANS_S2 (1 << 10) -#define ARM_SMMU_FEAT_STALLS (1 << 11) -#define ARM_SMMU_FEAT_HYP (1 << 12) -#define ARM_SMMU_FEAT_STALL_FORCE (1 << 13) -#define ARM_SMMU_FEAT_VAX (1 << 14) -#define ARM_SMMU_FEAT_RANGE_INV (1 << 15) -#define ARM_SMMU_FEAT_BTM (1 << 16) -#define ARM_SMMU_FEAT_SVA (1 << 17) -#define ARM_SMMU_FEAT_E2H (1 << 18) -#define ARM_SMMU_FEAT_NESTING (1 << 19) -#define ARM_SMMU_FEAT_ATTR_TYPES_OVR (1 << 20) -#define ARM_SMMU_FEAT_HA (1 << 21) -#define ARM_SMMU_FEAT_HD (1 << 22) + /* See arm-smmu-v3-common.h*/ u32 features; =20 #define ARM_SMMU_OPT_SKIP_PREFETCH (1 << 0) --=20 2.47.0.338.g60cca15819-goog From nobody Sun Dec 14 19:14:26 2025 Received: from mail-wm1-f74.google.com (mail-wm1-f74.google.com [209.85.128.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9257E223E69 for ; Thu, 12 Dec 2024 18:04:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734026697; cv=none; b=IVKZDkmZEpHYQevJPz4qN54dB2QOaJo0p6U2HUu81p+Kv1vdGgYXVNTMtu9TEKoNDd9pSkJ1E1PpXZo9g3o4HVlpRSJIrKjB0iG7Jga3z11F2dWjnFqXdB6NaKmPY1SMN91YdyX305vo4JK2OVvtHDhNSikhfmjf/lx0533GfjQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; 
Date: Thu, 12 Dec 2024 18:03:29 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-6-smostafa@google.com>
Subject: [RFC PATCH v2 05/58] iommu/arm-smmu-v3: Extract driver-specific bits from probe function
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org,
robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca,
	nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com,
	tabba@google.com, danielmentz@google.com, tzukui@google.com,
	Mostafa Saleh
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

From: Jean-Philippe Brucker

As we're about to share the arm_smmu_device_hw_probe() function with the
KVM driver, extract the bits that are specific to the normal driver.

Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Mostafa Saleh
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 21 ++++++++++++---------
 1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 737c5b882355..702863c94f91 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -4167,7 +4167,7 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 
 	if (reg & IDR0_MSI) {
 		smmu->features |= ARM_SMMU_FEAT_MSI;
-		if (coherent && !disable_msipolling)
+		if (coherent)
 			smmu->options |= ARM_SMMU_OPT_MSIPOLL;
 	}
 
@@ -4316,11 +4316,6 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 		smmu->oas = 48;
 	}
 
-	if (arm_smmu_ops.pgsize_bitmap == -1UL)
-		arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
-	else
-		arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
-
 	/* Set the DMA mask for our table walker */
 	if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
 		dev_warn(smmu->dev,
@@ -4334,9 +4329,6 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 
 	arm_smmu_device_iidr_probe(smmu);
 
-	if (arm_smmu_sva_supported(smmu))
-		smmu->features |= ARM_SMMU_FEAT_SVA;
-
 	dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
 		 smmu->ias, smmu->oas, smmu->features);
 	return 0;
@@ -4606,6 +4598,17 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;
 
+
if (arm_smmu_sva_supported(smmu))
+		smmu->features |= ARM_SMMU_FEAT_SVA;
+
+	if (disable_msipolling)
+		smmu->options &= ~ARM_SMMU_OPT_MSIPOLL;
+
+	if (arm_smmu_ops.pgsize_bitmap == -1UL)
+		arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
+	else
+		arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
+
 	/* Initialise in-memory data structures */
 	ret = arm_smmu_init_structures(smmu);
 	if (ret)
-- 
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:03:30 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-7-smostafa@google.com>
Subject: [RFC PATCH v2 06/58] iommu/arm-smmu-v3: Move some functions to arm-smmu-v3-common.c
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

From: Jean-Philippe Brucker

Move functions that can be shared between the normal and KVM drivers to
arm-smmu-v3-common.c.

Only straightforward moves here. More subtle factoring will be done in
the next patches.
Signed-off-by: Jean-Philippe Brucker Signed-off-by: Mostafa Saleh --- drivers/iommu/arm/arm-smmu-v3/Makefile | 1 + .../arm/arm-smmu-v3/arm-smmu-v3-common.c | 365 ++++++++++++++++++ drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 363 ----------------- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 11 + 4 files changed, 377 insertions(+), 363 deletions(-) create mode 100644 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c diff --git a/drivers/iommu/arm/arm-smmu-v3/Makefile b/drivers/iommu/arm/arm= -smmu-v3/Makefile index dc98c88b48c8..515a84f14783 100644 --- a/drivers/iommu/arm/arm-smmu-v3/Makefile +++ b/drivers/iommu/arm/arm-smmu-v3/Makefile @@ -1,6 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_ARM_SMMU_V3) +=3D arm_smmu_v3.o arm_smmu_v3-y :=3D arm-smmu-v3.o +arm_smmu_v3-y +=3D arm-smmu-v3-common.o arm_smmu_v3-$(CONFIG_ARM_SMMU_V3_SVA) +=3D arm-smmu-v3-sva.o arm_smmu_v3-$(CONFIG_TEGRA241_CMDQV) +=3D tegra241-cmdqv.o =20 diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c b/drivers/i= ommu/arm/arm-smmu-v3/arm-smmu-v3-common.c new file mode 100644 index 000000000000..cfd5ba69e67e --- /dev/null +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c @@ -0,0 +1,365 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include +#include + +#include "arm-smmu-v3.h" +#include "../../dma-iommu.h" + +#define IIDR_IMPLEMENTER_ARM 0x43b +#define IIDR_PRODUCTID_ARM_MMU_600 0x483 +#define IIDR_PRODUCTID_ARM_MMU_700 0x487 + +static void arm_smmu_device_iidr_probe(struct arm_smmu_device *smmu) +{ + u32 reg; + unsigned int implementer, productid, variant, revision; + + reg =3D readl_relaxed(smmu->base + ARM_SMMU_IIDR); + implementer =3D FIELD_GET(IIDR_IMPLEMENTER, reg); + productid =3D FIELD_GET(IIDR_PRODUCTID, reg); + variant =3D FIELD_GET(IIDR_VARIANT, reg); + revision =3D FIELD_GET(IIDR_REVISION, reg); + + switch (implementer) { + case IIDR_IMPLEMENTER_ARM: + switch (productid) { + case IIDR_PRODUCTID_ARM_MMU_600: + /* Arm erratum 1076982 */ + if (variant 
=3D=3D 0 && revision <=3D 2) + smmu->features &=3D ~ARM_SMMU_FEAT_SEV; + /* Arm erratum 1209401 */ + if (variant < 2) + smmu->features &=3D ~ARM_SMMU_FEAT_NESTING; + break; + case IIDR_PRODUCTID_ARM_MMU_700: + /* Arm erratum 2812531 */ + smmu->features &=3D ~ARM_SMMU_FEAT_BTM; + smmu->options |=3D ARM_SMMU_OPT_CMDQ_FORCE_SYNC; + /* Arm errata 2268618, 2812531 */ + smmu->features &=3D ~ARM_SMMU_FEAT_NESTING; + break; + } + break; + } +} + +static void arm_smmu_get_httu(struct arm_smmu_device *smmu, u32 reg) +{ + u32 fw_features =3D smmu->features & (ARM_SMMU_FEAT_HA | ARM_SMMU_FEAT_HD= ); + u32 hw_features =3D 0; + + switch (FIELD_GET(IDR0_HTTU, reg)) { + case IDR0_HTTU_ACCESS_DIRTY: + hw_features |=3D ARM_SMMU_FEAT_HD; + fallthrough; + case IDR0_HTTU_ACCESS: + hw_features |=3D ARM_SMMU_FEAT_HA; + } + + if (smmu->dev->of_node) + smmu->features |=3D hw_features; + else if (hw_features !=3D fw_features) + /* ACPI IORT sets the HTTU bits */ + dev_warn(smmu->dev, + "IDR0.HTTU features(0x%x) overridden by FW configuration (0x%x)\n", + hw_features, fw_features); +} + +int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu) +{ + u32 reg; + bool coherent =3D smmu->features & ARM_SMMU_FEAT_COHERENCY; + + /* IDR0 */ + reg =3D readl_relaxed(smmu->base + ARM_SMMU_IDR0); + + /* 2-level structures */ + if (FIELD_GET(IDR0_ST_LVL, reg) =3D=3D IDR0_ST_LVL_2LVL) + smmu->features |=3D ARM_SMMU_FEAT_2_LVL_STRTAB; + + if (reg & IDR0_CD2L) + smmu->features |=3D ARM_SMMU_FEAT_2_LVL_CDTAB; + + /* + * Translation table endianness. + * We currently require the same endianness as the CPU, but this + * could be changed later by adding a new IO_PGTABLE_QUIRK. 
+ */ + switch (FIELD_GET(IDR0_TTENDIAN, reg)) { + case IDR0_TTENDIAN_MIXED: + smmu->features |=3D ARM_SMMU_FEAT_TT_LE | ARM_SMMU_FEAT_TT_BE; + break; +#ifdef __BIG_ENDIAN + case IDR0_TTENDIAN_BE: + smmu->features |=3D ARM_SMMU_FEAT_TT_BE; + break; +#else + case IDR0_TTENDIAN_LE: + smmu->features |=3D ARM_SMMU_FEAT_TT_LE; + break; +#endif + default: + dev_err(smmu->dev, "unknown/unsupported TT endianness!\n"); + return -ENXIO; + } + + /* Boolean feature flags */ + if (IS_ENABLED(CONFIG_PCI_PRI) && reg & IDR0_PRI) + smmu->features |=3D ARM_SMMU_FEAT_PRI; + + if (IS_ENABLED(CONFIG_PCI_ATS) && reg & IDR0_ATS) + smmu->features |=3D ARM_SMMU_FEAT_ATS; + + if (reg & IDR0_SEV) + smmu->features |=3D ARM_SMMU_FEAT_SEV; + + if (reg & IDR0_MSI) { + smmu->features |=3D ARM_SMMU_FEAT_MSI; + if (coherent) + smmu->options |=3D ARM_SMMU_OPT_MSIPOLL; + } + + if (reg & IDR0_HYP) { + smmu->features |=3D ARM_SMMU_FEAT_HYP; + if (cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN)) + smmu->features |=3D ARM_SMMU_FEAT_E2H; + } + + arm_smmu_get_httu(smmu, reg); + + /* + * The coherency feature as set by FW is used in preference to the ID + * register, but warn on mismatch. + */ + if (!!(reg & IDR0_COHACC) !=3D coherent) + dev_warn(smmu->dev, "IDR0.COHACC overridden by FW configuration (%s)\n", + coherent ? 
"true" : "false"); + + switch (FIELD_GET(IDR0_STALL_MODEL, reg)) { + case IDR0_STALL_MODEL_FORCE: + smmu->features |=3D ARM_SMMU_FEAT_STALL_FORCE; + fallthrough; + case IDR0_STALL_MODEL_STALL: + smmu->features |=3D ARM_SMMU_FEAT_STALLS; + } + + if (reg & IDR0_S1P) + smmu->features |=3D ARM_SMMU_FEAT_TRANS_S1; + + if (reg & IDR0_S2P) + smmu->features |=3D ARM_SMMU_FEAT_TRANS_S2; + + if (!(reg & (IDR0_S1P | IDR0_S2P))) { + dev_err(smmu->dev, "no translation support!\n"); + return -ENXIO; + } + + /* We only support the AArch64 table format at present */ + switch (FIELD_GET(IDR0_TTF, reg)) { + case IDR0_TTF_AARCH32_64: + smmu->ias =3D 40; + fallthrough; + case IDR0_TTF_AARCH64: + break; + default: + dev_err(smmu->dev, "AArch64 table format not supported!\n"); + return -ENXIO; + } + + /* ASID/VMID sizes */ + smmu->asid_bits =3D reg & IDR0_ASID16 ? 16 : 8; + smmu->vmid_bits =3D reg & IDR0_VMID16 ? 16 : 8; + + /* IDR1 */ + reg =3D readl_relaxed(smmu->base + ARM_SMMU_IDR1); + if (reg & (IDR1_TABLES_PRESET | IDR1_QUEUES_PRESET | IDR1_REL)) { + dev_err(smmu->dev, "embedded implementation not supported\n"); + return -ENXIO; + } + + if (reg & IDR1_ATTR_TYPES_OVR) + smmu->features |=3D ARM_SMMU_FEAT_ATTR_TYPES_OVR; + + /* Queue sizes, capped to ensure natural alignment */ + smmu->cmdq.q.llq.max_n_shift =3D min_t(u32, CMDQ_MAX_SZ_SHIFT, + FIELD_GET(IDR1_CMDQS, reg)); + if (smmu->cmdq.q.llq.max_n_shift <=3D ilog2(CMDQ_BATCH_ENTRIES)) { + /* + * We don't support splitting up batches, so one batch of + * commands plus an extra sync needs to fit inside the command + * queue. There's also no way we can handle the weird alignment + * restrictions on the base pointer for a unit-length queue. 
+ */ + dev_err(smmu->dev, "command queue size <=3D %d entries not supported\n", + CMDQ_BATCH_ENTRIES); + return -ENXIO; + } + + smmu->evtq.q.llq.max_n_shift =3D min_t(u32, EVTQ_MAX_SZ_SHIFT, + FIELD_GET(IDR1_EVTQS, reg)); + smmu->priq.q.llq.max_n_shift =3D min_t(u32, PRIQ_MAX_SZ_SHIFT, + FIELD_GET(IDR1_PRIQS, reg)); + + /* SID/SSID sizes */ + smmu->ssid_bits =3D FIELD_GET(IDR1_SSIDSIZE, reg); + smmu->sid_bits =3D FIELD_GET(IDR1_SIDSIZE, reg); + smmu->iommu.max_pasids =3D 1UL << smmu->ssid_bits; + + /* + * If the SMMU supports fewer bits than would fill a single L2 stream + * table, use a linear table instead. + */ + if (smmu->sid_bits <=3D STRTAB_SPLIT) + smmu->features &=3D ~ARM_SMMU_FEAT_2_LVL_STRTAB; + + /* IDR3 */ + reg =3D readl_relaxed(smmu->base + ARM_SMMU_IDR3); + if (FIELD_GET(IDR3_RIL, reg)) + smmu->features |=3D ARM_SMMU_FEAT_RANGE_INV; + + /* IDR5 */ + reg =3D readl_relaxed(smmu->base + ARM_SMMU_IDR5); + + /* Maximum number of outstanding stalls */ + smmu->evtq.max_stalls =3D FIELD_GET(IDR5_STALL_MAX, reg); + + /* Page sizes */ + if (reg & IDR5_GRAN64K) + smmu->pgsize_bitmap |=3D SZ_64K | SZ_512M; + if (reg & IDR5_GRAN16K) + smmu->pgsize_bitmap |=3D SZ_16K | SZ_32M; + if (reg & IDR5_GRAN4K) + smmu->pgsize_bitmap |=3D SZ_4K | SZ_2M | SZ_1G; + + /* Input address size */ + if (FIELD_GET(IDR5_VAX, reg) =3D=3D IDR5_VAX_52_BIT) + smmu->features |=3D ARM_SMMU_FEAT_VAX; + + /* Output address size */ + switch (FIELD_GET(IDR5_OAS, reg)) { + case IDR5_OAS_32_BIT: + smmu->oas =3D 32; + break; + case IDR5_OAS_36_BIT: + smmu->oas =3D 36; + break; + case IDR5_OAS_40_BIT: + smmu->oas =3D 40; + break; + case IDR5_OAS_42_BIT: + smmu->oas =3D 42; + break; + case IDR5_OAS_44_BIT: + smmu->oas =3D 44; + break; + case IDR5_OAS_52_BIT: + smmu->oas =3D 52; + smmu->pgsize_bitmap |=3D 1ULL << 42; /* 4TB */ + break; + default: + dev_info(smmu->dev, + "unknown output address size. 
Truncating to 48-bit\n");
+		fallthrough;
+	case IDR5_OAS_48_BIT:
+		smmu->oas = 48;
+	}
+
+	/* Set the DMA mask for our table walker */
+	if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
+		dev_warn(smmu->dev,
+			 "failed to set DMA mask for table walker\n");
+
+	smmu->ias = max(smmu->ias, smmu->oas);
+
+	if ((smmu->features & ARM_SMMU_FEAT_TRANS_S1) &&
+	    (smmu->features & ARM_SMMU_FEAT_TRANS_S2))
+		smmu->features |= ARM_SMMU_FEAT_NESTING;
+
+	arm_smmu_device_iidr_probe(smmu);
+
+	dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
+		 smmu->ias, smmu->oas, smmu->features);
+	return 0;
+}
+
+int arm_smmu_write_reg_sync(struct arm_smmu_device *smmu, u32 val,
+			    unsigned int reg_off, unsigned int ack_off)
+{
+	u32 reg;
+
+	writel_relaxed(val, smmu->base + reg_off);
+	return readl_relaxed_poll_timeout(smmu->base + ack_off, reg, reg == val,
+					  1, ARM_SMMU_POLL_TIMEOUT_US);
+}
+
+/* GBPA is "special" */
+int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
+{
+	int ret;
+	u32 reg, __iomem *gbpa = smmu->base + ARM_SMMU_GBPA;
+
+	ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
+					 1, ARM_SMMU_POLL_TIMEOUT_US);
+	if (ret)
+		return ret;
+
+	reg &= ~clr;
+	reg |= set;
+	writel_relaxed(reg | GBPA_UPDATE, gbpa);
+	ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
+					 1, ARM_SMMU_POLL_TIMEOUT_US);
+
+	if (ret)
+		dev_err(smmu->dev, "GBPA not responding to update\n");
+	return ret;
+}
+
+int arm_smmu_device_disable(struct arm_smmu_device *smmu)
+{
+	int ret;
+
+	ret = arm_smmu_write_reg_sync(smmu, 0, ARM_SMMU_CR0, ARM_SMMU_CR0ACK);
+	if (ret)
+		dev_err(smmu->dev, "failed to clear cr0\n");
+
+	return ret;
+}
+
+struct iommu_group *arm_smmu_device_group(struct device *dev)
+{
+	struct iommu_group *group;
+
+	/*
+	 * We don't support devices sharing stream IDs other than PCI RID
+	 * aliases, since the necessary ID-to-device lookup becomes rather
+	 *
impractical given a potential sparse 32-bit stream ID space. + */ + if (dev_is_pci(dev)) + group =3D pci_device_group(dev); + else + group =3D generic_device_group(dev); + + return group; +} + +int arm_smmu_of_xlate(struct device *dev, const struct of_phandle_args *ar= gs) +{ + return iommu_fwspec_add_ids(dev, args->args, 1); +} + +void arm_smmu_get_resv_regions(struct device *dev, + struct list_head *head) +{ + struct iommu_resv_region *region; + int prot =3D IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO; + + region =3D iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, + prot, IOMMU_RESV_SW_MSI, GFP_KERNEL); + if (!region) + return; + + list_add_tail(®ion->list, head); + + iommu_dma_get_resv_regions(dev, head); +} diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/ar= m/arm-smmu-v3/arm-smmu-v3.c index 702863c94f91..8741b8f57a8d 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -17,13 +17,11 @@ #include #include #include -#include #include #include #include #include #include -#include #include #include #include @@ -1914,8 +1912,6 @@ static irqreturn_t arm_smmu_priq_thread(int irq, void= *dev) return IRQ_HANDLED; } =20 -static int arm_smmu_device_disable(struct arm_smmu_device *smmu); - static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev) { u32 gerror, gerrorn, active; @@ -3361,23 +3357,6 @@ static int arm_smmu_set_dirty_tracking(struct iommu_= domain *domain, return 0; } =20 -static struct iommu_group *arm_smmu_device_group(struct device *dev) -{ - struct iommu_group *group; - - /* - * We don't support devices sharing stream IDs other than PCI RID - * aliases, since the necessary ID-to-device lookup becomes rather - * impractical given a potential sparse 32-bit stream ID space. 
- */ - if (dev_is_pci(dev)) - group =3D pci_device_group(dev); - else - group =3D generic_device_group(dev); - - return group; -} - static int arm_smmu_enable_nesting(struct iommu_domain *domain) { struct arm_smmu_domain *smmu_domain =3D to_smmu_domain(domain); @@ -3393,28 +3372,6 @@ static int arm_smmu_enable_nesting(struct iommu_doma= in *domain) return ret; } =20 -static int arm_smmu_of_xlate(struct device *dev, - const struct of_phandle_args *args) -{ - return iommu_fwspec_add_ids(dev, args->args, 1); -} - -static void arm_smmu_get_resv_regions(struct device *dev, - struct list_head *head) -{ - struct iommu_resv_region *region; - int prot =3D IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO; - - region =3D iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, - prot, IOMMU_RESV_SW_MSI, GFP_KERNEL); - if (!region) - return; - - list_add_tail(®ion->list, head); - - iommu_dma_get_resv_regions(dev, head); -} - static int arm_smmu_dev_enable_feature(struct device *dev, enum iommu_dev_features feat) { @@ -3711,38 +3668,6 @@ static int arm_smmu_init_structures(struct arm_smmu_= device *smmu) return 0; } =20 -static int arm_smmu_write_reg_sync(struct arm_smmu_device *smmu, u32 val, - unsigned int reg_off, unsigned int ack_off) -{ - u32 reg; - - writel_relaxed(val, smmu->base + reg_off); - return readl_relaxed_poll_timeout(smmu->base + ack_off, reg, reg =3D=3D v= al, - 1, ARM_SMMU_POLL_TIMEOUT_US); -} - -/* GBPA is "special" */ -static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32= clr) -{ - int ret; - u32 reg, __iomem *gbpa =3D smmu->base + ARM_SMMU_GBPA; - - ret =3D readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE), - 1, ARM_SMMU_POLL_TIMEOUT_US); - if (ret) - return ret; - - reg &=3D ~clr; - reg |=3D set; - writel_relaxed(reg | GBPA_UPDATE, gbpa); - ret =3D readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE), - 1, ARM_SMMU_POLL_TIMEOUT_US); - - if (ret) - dev_err(smmu->dev, "GBPA not responding to update\n"); - return ret; -} - static 
void arm_smmu_free_msis(void *data) { struct device *dev =3D data; @@ -3889,17 +3814,6 @@ static int arm_smmu_setup_irqs(struct arm_smmu_devic= e *smmu) return 0; } =20 -static int arm_smmu_device_disable(struct arm_smmu_device *smmu) -{ - int ret; - - ret =3D arm_smmu_write_reg_sync(smmu, 0, ARM_SMMU_CR0, ARM_SMMU_CR0ACK); - if (ret) - dev_err(smmu->dev, "failed to clear cr0\n"); - - return ret; -} - static void arm_smmu_write_strtab(struct arm_smmu_device *smmu) { struct arm_smmu_strtab_cfg *cfg =3D &smmu->strtab_cfg; @@ -4057,283 +3971,6 @@ static int arm_smmu_device_reset(struct arm_smmu_de= vice *smmu) return 0; } =20 -#define IIDR_IMPLEMENTER_ARM 0x43b -#define IIDR_PRODUCTID_ARM_MMU_600 0x483 -#define IIDR_PRODUCTID_ARM_MMU_700 0x487 - -static void arm_smmu_device_iidr_probe(struct arm_smmu_device *smmu) -{ - u32 reg; - unsigned int implementer, productid, variant, revision; - - reg =3D readl_relaxed(smmu->base + ARM_SMMU_IIDR); - implementer =3D FIELD_GET(IIDR_IMPLEMENTER, reg); - productid =3D FIELD_GET(IIDR_PRODUCTID, reg); - variant =3D FIELD_GET(IIDR_VARIANT, reg); - revision =3D FIELD_GET(IIDR_REVISION, reg); - - switch (implementer) { - case IIDR_IMPLEMENTER_ARM: - switch (productid) { - case IIDR_PRODUCTID_ARM_MMU_600: - /* Arm erratum 1076982 */ - if (variant =3D=3D 0 && revision <=3D 2) - smmu->features &=3D ~ARM_SMMU_FEAT_SEV; - /* Arm erratum 1209401 */ - if (variant < 2) - smmu->features &=3D ~ARM_SMMU_FEAT_NESTING; - break; - case IIDR_PRODUCTID_ARM_MMU_700: - /* Arm erratum 2812531 */ - smmu->features &=3D ~ARM_SMMU_FEAT_BTM; - smmu->options |=3D ARM_SMMU_OPT_CMDQ_FORCE_SYNC; - /* Arm errata 2268618, 2812531 */ - smmu->features &=3D ~ARM_SMMU_FEAT_NESTING; - break; - } - break; - } -} - -static void arm_smmu_get_httu(struct arm_smmu_device *smmu, u32 reg) -{ - u32 fw_features =3D smmu->features & (ARM_SMMU_FEAT_HA | ARM_SMMU_FEAT_HD= ); - u32 hw_features =3D 0; - - switch (FIELD_GET(IDR0_HTTU, reg)) { - case IDR0_HTTU_ACCESS_DIRTY: - 
hw_features |=3D ARM_SMMU_FEAT_HD; - fallthrough; - case IDR0_HTTU_ACCESS: - hw_features |=3D ARM_SMMU_FEAT_HA; - } - - if (smmu->dev->of_node) - smmu->features |=3D hw_features; - else if (hw_features !=3D fw_features) - /* ACPI IORT sets the HTTU bits */ - dev_warn(smmu->dev, - "IDR0.HTTU features(0x%x) overridden by FW configuration (0x%x)\n", - hw_features, fw_features); -} - -static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu) -{ - u32 reg; - bool coherent =3D smmu->features & ARM_SMMU_FEAT_COHERENCY; - - /* IDR0 */ - reg =3D readl_relaxed(smmu->base + ARM_SMMU_IDR0); - - /* 2-level structures */ - if (FIELD_GET(IDR0_ST_LVL, reg) =3D=3D IDR0_ST_LVL_2LVL) - smmu->features |=3D ARM_SMMU_FEAT_2_LVL_STRTAB; - - if (reg & IDR0_CD2L) - smmu->features |=3D ARM_SMMU_FEAT_2_LVL_CDTAB; - - /* - * Translation table endianness. - * We currently require the same endianness as the CPU, but this - * could be changed later by adding a new IO_PGTABLE_QUIRK. - */ - switch (FIELD_GET(IDR0_TTENDIAN, reg)) { - case IDR0_TTENDIAN_MIXED: - smmu->features |=3D ARM_SMMU_FEAT_TT_LE | ARM_SMMU_FEAT_TT_BE; - break; -#ifdef __BIG_ENDIAN - case IDR0_TTENDIAN_BE: - smmu->features |=3D ARM_SMMU_FEAT_TT_BE; - break; -#else - case IDR0_TTENDIAN_LE: - smmu->features |=3D ARM_SMMU_FEAT_TT_LE; - break; -#endif - default: - dev_err(smmu->dev, "unknown/unsupported TT endianness!\n"); - return -ENXIO; - } - - /* Boolean feature flags */ - if (IS_ENABLED(CONFIG_PCI_PRI) && reg & IDR0_PRI) - smmu->features |=3D ARM_SMMU_FEAT_PRI; - - if (IS_ENABLED(CONFIG_PCI_ATS) && reg & IDR0_ATS) - smmu->features |=3D ARM_SMMU_FEAT_ATS; - - if (reg & IDR0_SEV) - smmu->features |=3D ARM_SMMU_FEAT_SEV; - - if (reg & IDR0_MSI) { - smmu->features |=3D ARM_SMMU_FEAT_MSI; - if (coherent) - smmu->options |=3D ARM_SMMU_OPT_MSIPOLL; - } - - if (reg & IDR0_HYP) { - smmu->features |=3D ARM_SMMU_FEAT_HYP; - if (cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN)) - smmu->features |=3D ARM_SMMU_FEAT_E2H; - } - - 
-	arm_smmu_get_httu(smmu, reg);
-
-	/*
-	 * The coherency feature as set by FW is used in preference to the ID
-	 * register, but warn on mismatch.
-	 */
-	if (!!(reg & IDR0_COHACC) != coherent)
-		dev_warn(smmu->dev, "IDR0.COHACC overridden by FW configuration (%s)\n",
-			 coherent ? "true" : "false");
-
-	switch (FIELD_GET(IDR0_STALL_MODEL, reg)) {
-	case IDR0_STALL_MODEL_FORCE:
-		smmu->features |= ARM_SMMU_FEAT_STALL_FORCE;
-		fallthrough;
-	case IDR0_STALL_MODEL_STALL:
-		smmu->features |= ARM_SMMU_FEAT_STALLS;
-	}
-
-	if (reg & IDR0_S1P)
-		smmu->features |= ARM_SMMU_FEAT_TRANS_S1;
-
-	if (reg & IDR0_S2P)
-		smmu->features |= ARM_SMMU_FEAT_TRANS_S2;
-
-	if (!(reg & (IDR0_S1P | IDR0_S2P))) {
-		dev_err(smmu->dev, "no translation support!\n");
-		return -ENXIO;
-	}
-
-	/* We only support the AArch64 table format at present */
-	switch (FIELD_GET(IDR0_TTF, reg)) {
-	case IDR0_TTF_AARCH32_64:
-		smmu->ias = 40;
-		fallthrough;
-	case IDR0_TTF_AARCH64:
-		break;
-	default:
-		dev_err(smmu->dev, "AArch64 table format not supported!\n");
-		return -ENXIO;
-	}
-
-	/* ASID/VMID sizes */
-	smmu->asid_bits = reg & IDR0_ASID16 ? 16 : 8;
-	smmu->vmid_bits = reg & IDR0_VMID16 ? 16 : 8;
-
-	/* IDR1 */
-	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR1);
-	if (reg & (IDR1_TABLES_PRESET | IDR1_QUEUES_PRESET | IDR1_REL)) {
-		dev_err(smmu->dev, "embedded implementation not supported\n");
-		return -ENXIO;
-	}
-
-	if (reg & IDR1_ATTR_TYPES_OVR)
-		smmu->features |= ARM_SMMU_FEAT_ATTR_TYPES_OVR;
-
-	/* Queue sizes, capped to ensure natural alignment */
-	smmu->cmdq.q.llq.max_n_shift = min_t(u32, CMDQ_MAX_SZ_SHIFT,
-					     FIELD_GET(IDR1_CMDQS, reg));
-	if (smmu->cmdq.q.llq.max_n_shift <= ilog2(CMDQ_BATCH_ENTRIES)) {
-		/*
-		 * We don't support splitting up batches, so one batch of
-		 * commands plus an extra sync needs to fit inside the command
-		 * queue. There's also no way we can handle the weird alignment
-		 * restrictions on the base pointer for a unit-length queue.
-		 */
-		dev_err(smmu->dev, "command queue size <= %d entries not supported\n",
-			CMDQ_BATCH_ENTRIES);
-		return -ENXIO;
-	}
-
-	smmu->evtq.q.llq.max_n_shift = min_t(u32, EVTQ_MAX_SZ_SHIFT,
-					     FIELD_GET(IDR1_EVTQS, reg));
-	smmu->priq.q.llq.max_n_shift = min_t(u32, PRIQ_MAX_SZ_SHIFT,
-					     FIELD_GET(IDR1_PRIQS, reg));
-
-	/* SID/SSID sizes */
-	smmu->ssid_bits = FIELD_GET(IDR1_SSIDSIZE, reg);
-	smmu->sid_bits = FIELD_GET(IDR1_SIDSIZE, reg);
-	smmu->iommu.max_pasids = 1UL << smmu->ssid_bits;
-
-	/*
-	 * If the SMMU supports fewer bits than would fill a single L2 stream
-	 * table, use a linear table instead.
-	 */
-	if (smmu->sid_bits <= STRTAB_SPLIT)
-		smmu->features &= ~ARM_SMMU_FEAT_2_LVL_STRTAB;
-
-	/* IDR3 */
-	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR3);
-	if (FIELD_GET(IDR3_RIL, reg))
-		smmu->features |= ARM_SMMU_FEAT_RANGE_INV;
-
-	/* IDR5 */
-	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR5);
-
-	/* Maximum number of outstanding stalls */
-	smmu->evtq.max_stalls = FIELD_GET(IDR5_STALL_MAX, reg);
-
-	/* Page sizes */
-	if (reg & IDR5_GRAN64K)
-		smmu->pgsize_bitmap |= SZ_64K | SZ_512M;
-	if (reg & IDR5_GRAN16K)
-		smmu->pgsize_bitmap |= SZ_16K | SZ_32M;
-	if (reg & IDR5_GRAN4K)
-		smmu->pgsize_bitmap |= SZ_4K | SZ_2M | SZ_1G;
-
-	/* Input address size */
-	if (FIELD_GET(IDR5_VAX, reg) == IDR5_VAX_52_BIT)
-		smmu->features |= ARM_SMMU_FEAT_VAX;
-
-	/* Output address size */
-	switch (FIELD_GET(IDR5_OAS, reg)) {
-	case IDR5_OAS_32_BIT:
-		smmu->oas = 32;
-		break;
-	case IDR5_OAS_36_BIT:
-		smmu->oas = 36;
-		break;
-	case IDR5_OAS_40_BIT:
-		smmu->oas = 40;
-		break;
-	case IDR5_OAS_42_BIT:
-		smmu->oas = 42;
-		break;
-	case IDR5_OAS_44_BIT:
-		smmu->oas = 44;
-		break;
-	case IDR5_OAS_52_BIT:
-		smmu->oas = 52;
-		smmu->pgsize_bitmap |= 1ULL << 42; /* 4TB */
-		break;
-	default:
-		dev_info(smmu->dev,
-			 "unknown output address size. Truncating to 48-bit\n");
-		fallthrough;
-	case IDR5_OAS_48_BIT:
-		smmu->oas = 48;
-	}
-
-	/* Set the DMA mask for our table walker */
-	if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
-		dev_warn(smmu->dev,
-			 "failed to set DMA mask for table walker\n");
-
-	smmu->ias = max(smmu->ias, smmu->oas);
-
-	if ((smmu->features & ARM_SMMU_FEAT_TRANS_S1) &&
-	    (smmu->features & ARM_SMMU_FEAT_TRANS_S2))
-		smmu->features |= ARM_SMMU_FEAT_NESTING;
-
-	arm_smmu_device_iidr_probe(smmu);
-
-	dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
-		 smmu->ias, smmu->oas, smmu->features);
-	return 0;
-}
-
 #ifdef CONFIG_ACPI
 #ifdef CONFIG_TEGRA241_CMDQV
 static void acpi_smmu_dsdt_probe_tegra241_cmdqv(struct acpi_iort_node *node,
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index fc1b8c2af2a2..1ffc8320b846 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -341,6 +341,17 @@ int arm_smmu_set_pasid(struct arm_smmu_master *master,
 		       struct arm_smmu_domain *smmu_domain, ioasid_t pasid,
 		       struct arm_smmu_cd *cd);
 
+int arm_smmu_write_reg_sync(struct arm_smmu_device *smmu, u32 val,
+			    unsigned int reg_off, unsigned int ack_off);
+int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr);
+int arm_smmu_device_disable(struct arm_smmu_device *smmu);
+struct iommu_group *arm_smmu_device_group(struct device *dev);
+int arm_smmu_of_xlate(struct device *dev, const struct of_phandle_args *args);
+void arm_smmu_get_resv_regions(struct device *dev,
+			       struct list_head *head);
+
+int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu);
+
 void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid);
 void arm_smmu_tlb_inv_range_asid(unsigned long iova, size_t size, int asid,
 				 size_t granule, bool leaf,
-- 
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:03:31 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Mime-Version: 1.0
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-8-smostafa@google.com>
Subject: [RFC PATCH v2 07/58] iommu/arm-smmu-v3: Move queue and table allocation to arm-smmu-v3-common.c
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh
Content-Type: text/plain; charset="utf-8"

From: Jean-Philippe Brucker

Move more code to arm-smmu-v3-common.c, so that the KVM driver can
reuse it. Also, make sure that the allocated memory is aligned, as it
is going to be protected by the hypervisor stage-2.
Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Mostafa Saleh
---
 arch/arm64/include/asm/arm-smmu-v3-common.h   |  29 ++++
 .../arm/arm-smmu-v3/arm-smmu-v3-common.c      | 136 ++++++++++++++++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   | 151 +-----------------
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   |  26 ++-
 4 files changed, 179 insertions(+), 163 deletions(-)

diff --git a/arch/arm64/include/asm/arm-smmu-v3-common.h b/arch/arm64/include/asm/arm-smmu-v3-common.h
index e6e339248816..f2fbd286f674 100644
--- a/arch/arm64/include/asm/arm-smmu-v3-common.h
+++ b/arch/arm64/include/asm/arm-smmu-v3-common.h
@@ -3,6 +3,7 @@
 #define _ARM_SMMU_V3_COMMON_H
 
 #include
+#include
 
 /* MMIO registers */
 #define ARM_SMMU_IDR0			0x0
@@ -198,6 +199,22 @@ struct arm_smmu_strtab_l1 {
 };
 #define STRTAB_MAX_L1_ENTRIES	(1 << 17)
 
+struct arm_smmu_strtab_cfg {
+	union {
+		struct {
+			struct arm_smmu_ste *table;
+			dma_addr_t ste_dma;
+			unsigned int num_ents;
+		} linear;
+		struct {
+			struct arm_smmu_strtab_l1 *l1tab;
+			struct arm_smmu_strtab_l2 **l2ptrs;
+			dma_addr_t l1_dma;
+			unsigned int num_l1_ents;
+		} l2;
+	};
+};
+
 static inline u32 arm_smmu_strtab_l1_idx(u32 sid)
 {
 	return sid / STRTAB_NUM_L2_STES;
@@ -208,6 +225,18 @@ static inline u32 arm_smmu_strtab_l2_idx(u32 sid)
 	return sid % STRTAB_NUM_L2_STES;
 }
 
+static inline void arm_smmu_write_strtab_l1_desc(struct arm_smmu_strtab_l1 *dst,
+						 dma_addr_t l2ptr_dma)
+{
+	u64 val = 0;
+
+	val |= FIELD_PREP(STRTAB_L1_DESC_SPAN, STRTAB_SPLIT + 1);
+	val |= l2ptr_dma & STRTAB_L1_DESC_L2PTR_MASK;
+
+	/* The HW has 64 bit atomicity with stores to the L2 STE table */
+	WRITE_ONCE(dst->l2ptr, cpu_to_le64(val));
+}
+
 #define STRTAB_STE_0_V			(1UL << 0)
 #define STRTAB_STE_0_CFG		GENMASK_ULL(3, 1)
 #define STRTAB_STE_0_CFG_ABORT		0
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c
index cfd5ba69e67e..80ac13b0dc06 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c
@@ -348,6 +348,7 @@ int arm_smmu_of_xlate(struct device *dev, const struct of_phandle_args *args)
 	return iommu_fwspec_add_ids(dev, args->args, 1);
 }
 
+
 void arm_smmu_get_resv_regions(struct device *dev,
 			       struct list_head *head)
 {
@@ -363,3 +364,138 @@ void arm_smmu_get_resv_regions(struct device *dev,
 
 	iommu_dma_get_resv_regions(dev, head);
 }
+
+int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
+			    struct arm_smmu_queue *q, void __iomem *page,
+			    unsigned long prod_off, unsigned long cons_off,
+			    size_t dwords, const char *name)
+{
+	size_t qsz;
+
+	do {
+		qsz = ((1 << q->llq.max_n_shift) * dwords) << 3;
+		q->base = dmam_alloc_coherent(smmu->dev, PAGE_ALIGN(qsz), &q->base_dma,
+					      GFP_KERNEL);
+		if (q->base || qsz < PAGE_SIZE)
+			break;
+
+		q->llq.max_n_shift--;
+	} while (1);
+
+	if (!q->base) {
+		dev_err(smmu->dev,
+			"failed to allocate queue (0x%zx bytes) for %s\n",
+			qsz, name);
+		return -ENOMEM;
+	}
+
+	if (!WARN_ON(q->base_dma & (qsz - 1))) {
+		dev_info(smmu->dev, "allocated %u entries for %s\n",
+			 1 << q->llq.max_n_shift, name);
+	}
+
+	q->prod_reg = page + prod_off;
+	q->cons_reg = page + cons_off;
+	q->ent_dwords = dwords;
+
+	q->q_base = Q_BASE_RWA;
+	q->q_base |= q->base_dma & Q_BASE_ADDR_MASK;
+	q->q_base |= FIELD_PREP(Q_BASE_LOG2SIZE, q->llq.max_n_shift);
+
+	q->llq.prod = q->llq.cons = 0;
+	return 0;
+}
+
+/* Stream table initialization functions */
+static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
+{
+	u32 l1size;
+	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+	unsigned int last_sid_idx =
+		arm_smmu_strtab_l1_idx((1 << smmu->sid_bits) - 1);
+
+	/* Calculate the L1 size, capped to the SIDSIZE. */
+	cfg->l2.num_l1_ents = min(last_sid_idx + 1, STRTAB_MAX_L1_ENTRIES);
+	if (cfg->l2.num_l1_ents <= last_sid_idx)
+		dev_warn(smmu->dev,
+			 "2-level strtab only covers %u/%u bits of SID\n",
+			 ilog2(cfg->l2.num_l1_ents * STRTAB_NUM_L2_STES),
+			 smmu->sid_bits);
+
+	l1size = cfg->l2.num_l1_ents * sizeof(struct arm_smmu_strtab_l1);
+	cfg->l2.l1tab = dmam_alloc_coherent(smmu->dev, PAGE_ALIGN(l1size), &cfg->l2.l1_dma,
+					    GFP_KERNEL);
+	if (!cfg->l2.l1tab) {
+		dev_err(smmu->dev,
+			"failed to allocate l1 stream table (%u bytes)\n",
+			l1size);
+		return -ENOMEM;
+	}
+
+	cfg->l2.l2ptrs = devm_kcalloc(smmu->dev, cfg->l2.num_l1_ents,
+				      sizeof(*cfg->l2.l2ptrs), GFP_KERNEL);
+	if (!cfg->l2.l2ptrs)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static int arm_smmu_init_strtab_linear(struct arm_smmu_device *smmu)
+{
+	u32 size;
+	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+
+	size = (1 << smmu->sid_bits) * sizeof(struct arm_smmu_ste);
+	cfg->linear.table = dmam_alloc_coherent(smmu->dev, PAGE_ALIGN(size),
+						&cfg->linear.ste_dma,
+						GFP_KERNEL);
+	if (!cfg->linear.table) {
+		dev_err(smmu->dev,
+			"failed to allocate linear stream table (%u bytes)\n",
+			size);
+		return -ENOMEM;
+	}
+	cfg->linear.num_ents = 1 << smmu->sid_bits;
+
+	return 0;
+}
+
+int arm_smmu_init_strtab(struct arm_smmu_device *smmu)
+{
+	int ret;
+
+	if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB)
+		ret = arm_smmu_init_strtab_2lvl(smmu);
+	else
+		ret = arm_smmu_init_strtab_linear(smmu);
+	if (ret)
+		return ret;
+
+	ida_init(&smmu->vmid_map);
+
+	return 0;
+}
+
+void arm_smmu_write_strtab(struct arm_smmu_device *smmu)
+{
+	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+	dma_addr_t dma;
+	u32 reg;
+
+	if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
+		reg = FIELD_PREP(STRTAB_BASE_CFG_FMT,
+				 STRTAB_BASE_CFG_FMT_2LVL) |
+		      FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE,
+				 ilog2(cfg->l2.num_l1_ents) + STRTAB_SPLIT) |
+		      FIELD_PREP(STRTAB_BASE_CFG_SPLIT, STRTAB_SPLIT);
+		dma = cfg->l2.l1_dma;
+	} else {
+		reg = FIELD_PREP(STRTAB_BASE_CFG_FMT,
+				 STRTAB_BASE_CFG_FMT_LINEAR) |
+		      FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE, smmu->sid_bits);
+		dma = cfg->linear.ste_dma;
+	}
+	writeq_relaxed((dma & STRTAB_BASE_ADDR_MASK) | STRTAB_BASE_RA,
+		       smmu->base + ARM_SMMU_STRTAB_BASE);
+	writel_relaxed(reg, smmu->base + ARM_SMMU_STRTAB_BASE_CFG);
+}
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 8741b8f57a8d..cfee7f9b5afc 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1483,18 +1483,6 @@ static void arm_smmu_free_cd_tables(struct arm_smmu_master *master)
 }
 
 /* Stream table manipulation functions */
-static void arm_smmu_write_strtab_l1_desc(struct arm_smmu_strtab_l1 *dst,
-					  dma_addr_t l2ptr_dma)
-{
-	u64 val = 0;
-
-	val |= FIELD_PREP(STRTAB_L1_DESC_SPAN, STRTAB_SPLIT + 1);
-	val |= l2ptr_dma & STRTAB_L1_DESC_L2PTR_MASK;
-
-	/* The HW has 64 bit atomicity with stores to the L2 STE table */
-	WRITE_ONCE(dst->l2ptr, cpu_to_le64(val));
-}
-
 struct arm_smmu_ste_writer {
 	struct arm_smmu_entry_writer writer;
 	u32 sid;
@@ -3482,47 +3470,6 @@ static struct iommu_dirty_ops arm_smmu_dirty_ops = {
 };
 
 /* Probing and initialisation functions */
-int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
-			    struct arm_smmu_queue *q, void __iomem *page,
-			    unsigned long prod_off, unsigned long cons_off,
-			    size_t dwords, const char *name)
-{
-	size_t qsz;
-
-	do {
-		qsz = ((1 << q->llq.max_n_shift) * dwords) << 3;
-		q->base = dmam_alloc_coherent(smmu->dev, qsz, &q->base_dma,
-					      GFP_KERNEL);
-		if (q->base || qsz < PAGE_SIZE)
-			break;
-
-		q->llq.max_n_shift--;
-	} while (1);
-
-	if (!q->base) {
-		dev_err(smmu->dev,
-			"failed to allocate queue (0x%zx bytes) for %s\n",
-			qsz, name);
-		return -ENOMEM;
-	}
-
-	if (!WARN_ON(q->base_dma & (qsz - 1))) {
-		dev_info(smmu->dev, "allocated %u entries for %s\n",
-			 1 << q->llq.max_n_shift, name);
-	}
-
-	q->prod_reg = page + prod_off;
-	q->cons_reg = page + cons_off;
-	q->ent_dwords = dwords;
-
-	q->q_base = Q_BASE_RWA;
-	q->q_base |= q->base_dma & Q_BASE_ADDR_MASK;
-	q->q_base |= FIELD_PREP(Q_BASE_LOG2SIZE, q->llq.max_n_shift);
-
-	q->llq.prod = q->llq.cons = 0;
-	return 0;
-}
-
 int arm_smmu_cmdq_init(struct arm_smmu_device *smmu,
 		       struct arm_smmu_cmdq *cmdq)
 {
@@ -3577,76 +3524,6 @@ static int arm_smmu_init_queues(struct arm_smmu_device *smmu)
 				       PRIQ_ENT_DWORDS, "priq");
 }
 
-static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
-{
-	u32 l1size;
-	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
-	unsigned int last_sid_idx =
-		arm_smmu_strtab_l1_idx((1 << smmu->sid_bits) - 1);
-
-	/* Calculate the L1 size, capped to the SIDSIZE. */
-	cfg->l2.num_l1_ents = min(last_sid_idx + 1, STRTAB_MAX_L1_ENTRIES);
-	if (cfg->l2.num_l1_ents <= last_sid_idx)
-		dev_warn(smmu->dev,
-			 "2-level strtab only covers %u/%u bits of SID\n",
-			 ilog2(cfg->l2.num_l1_ents * STRTAB_NUM_L2_STES),
-			 smmu->sid_bits);
-
-	l1size = cfg->l2.num_l1_ents * sizeof(struct arm_smmu_strtab_l1);
-	cfg->l2.l1tab = dmam_alloc_coherent(smmu->dev, l1size, &cfg->l2.l1_dma,
-					    GFP_KERNEL);
-	if (!cfg->l2.l1tab) {
-		dev_err(smmu->dev,
-			"failed to allocate l1 stream table (%u bytes)\n",
-			l1size);
-		return -ENOMEM;
-	}
-
-	cfg->l2.l2ptrs = devm_kcalloc(smmu->dev, cfg->l2.num_l1_ents,
-				      sizeof(*cfg->l2.l2ptrs), GFP_KERNEL);
-	if (!cfg->l2.l2ptrs)
-		return -ENOMEM;
-
-	return 0;
-}
-
-static int arm_smmu_init_strtab_linear(struct arm_smmu_device *smmu)
-{
-	u32 size;
-	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
-
-	size = (1 << smmu->sid_bits) * sizeof(struct arm_smmu_ste);
-	cfg->linear.table = dmam_alloc_coherent(smmu->dev, size,
-						&cfg->linear.ste_dma,
-						GFP_KERNEL);
-	if (!cfg->linear.table) {
-		dev_err(smmu->dev,
-			"failed to allocate linear stream table (%u bytes)\n",
-			size);
-		return -ENOMEM;
-	}
-	cfg->linear.num_ents = 1 << smmu->sid_bits;
-
-	arm_smmu_init_initial_stes(cfg->linear.table, cfg->linear.num_ents);
-	return 0;
-}
-
-static int arm_smmu_init_strtab(struct arm_smmu_device *smmu)
-{
-	int ret;
-
-	if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB)
-		ret = arm_smmu_init_strtab_2lvl(smmu);
-	else
-		ret = arm_smmu_init_strtab_linear(smmu);
-	if (ret)
-		return ret;
-
-	ida_init(&smmu->vmid_map);
-
-	return 0;
-}
-
 static int arm_smmu_init_structures(struct arm_smmu_device *smmu)
 {
 	int ret;
@@ -3662,6 +3539,10 @@ static int arm_smmu_init_structures(struct arm_smmu_device *smmu)
 	if (ret)
 		return ret;
 
+	if (!(smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB))
+		arm_smmu_init_initial_stes(smmu->strtab_cfg.linear.table,
+					   smmu->strtab_cfg.linear.num_ents);
+
 	if (smmu->impl_ops && smmu->impl_ops->init_structures)
 		return smmu->impl_ops->init_structures(smmu);
 
@@ -3814,30 +3695,6 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
 	return 0;
 }
 
-static void arm_smmu_write_strtab(struct arm_smmu_device *smmu)
-{
-	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
-	dma_addr_t dma;
-	u32 reg;
-
-	if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
-		reg = FIELD_PREP(STRTAB_BASE_CFG_FMT,
-				 STRTAB_BASE_CFG_FMT_2LVL) |
-		      FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE,
-				 ilog2(cfg->l2.num_l1_ents) + STRTAB_SPLIT) |
-		      FIELD_PREP(STRTAB_BASE_CFG_SPLIT, STRTAB_SPLIT);
-		dma = cfg->l2.l1_dma;
-	} else {
-		reg = FIELD_PREP(STRTAB_BASE_CFG_FMT,
-				 STRTAB_BASE_CFG_FMT_LINEAR) |
-		      FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE, smmu->sid_bits);
-		dma = cfg->linear.ste_dma;
-	}
-	writeq_relaxed((dma & STRTAB_BASE_ADDR_MASK) | STRTAB_BASE_RA,
-		       smmu->base + ARM_SMMU_STRTAB_BASE);
-	writel_relaxed(reg, smmu->base + ARM_SMMU_STRTAB_BASE_CFG);
-}
-
 static int arm_smmu_device_reset(struct arm_smmu_device *smmu)
 {
 	int ret;
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 1ffc8320b846..1a3452554ca8 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -156,22 +156,6 @@ struct arm_smmu_s2_cfg {
 	u16 vmid;
 };
 
-struct arm_smmu_strtab_cfg {
-	union {
-		struct {
-			struct arm_smmu_ste *table;
-			dma_addr_t ste_dma;
-			unsigned int num_ents;
-		} linear;
-		struct {
-			struct arm_smmu_strtab_l1 *l1tab;
-			struct arm_smmu_strtab_l2 **l2ptrs;
-			dma_addr_t l1_dma;
-			unsigned int num_l1_ents;
-		} l2;
-	};
-};
-
 struct arm_smmu_impl_ops {
 	int (*device_reset)(struct arm_smmu_device *smmu);
 	void (*device_remove)(struct arm_smmu_device *smmu);
@@ -351,6 +335,16 @@ void arm_smmu_get_resv_regions(struct device *dev,
 			       struct list_head *head);
 
 int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu);
+int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
+			    struct arm_smmu_queue *q,
+			    void __iomem *page,
+			    unsigned long prod_off,
+			    unsigned long cons_off,
+			    size_t dwords, const char *name);
+int arm_smmu_init_strtab(struct arm_smmu_device *smmu);
+void arm_smmu_write_strtab_l1_desc(struct arm_smmu_strtab_l1 *dst,
+				   dma_addr_t l2ptr_dma);
+void arm_smmu_write_strtab(struct arm_smmu_device *smmu);
 
 void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid);
 void arm_smmu_tlb_inv_range_asid(unsigned long iova, size_t size, int asid,
-- 
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:03:32 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Mime-Version: 1.0
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-9-smostafa@google.com>
Subject: [RFC PATCH v2 08/58] iommu/arm-smmu-v3: Move firmware probe to arm-smmu-v3-common
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh
Content-Type: text/plain; charset="utf-8"

From: Jean-Philippe Brucker

Move the FW probe functions to the common source.

Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Mostafa Saleh
---
 .../arm/arm-smmu-v3/arm-smmu-v3-common.c      | 146 ++++++++++++++++++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   | 142 +----------------
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   |   3 +
 3 files changed, 150 insertions(+), 141 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c
index 80ac13b0dc06..04f1e2f1c458 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c
@@ -1,11 +1,157 @@
 // SPDX-License-Identifier: GPL-2.0
+#include
 #include
 #include
+#include
+#include
+#include
 #include
+#include
 
 #include "arm-smmu-v3.h"
 #include "../../dma-iommu.h"
 
+struct arm_smmu_option_prop {
+	u32 opt;
+	const char *prop;
+};
+
+static struct arm_smmu_option_prop arm_smmu_options[] = {
+	{ ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" },
+	{ ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"},
+	{ 0, NULL},
+};
+
+static void parse_driver_options(struct arm_smmu_device *smmu)
+{
+	int i = 0;
+
+	do {
+		if (of_property_read_bool(smmu->dev->of_node,
+					  arm_smmu_options[i].prop)) {
+			smmu->options |= arm_smmu_options[i].opt;
+			dev_notice(smmu->dev, "option %s\n",
+				   arm_smmu_options[i].prop);
+		}
+	} while (arm_smmu_options[++i].opt);
+}
+
+#ifdef CONFIG_ACPI
+#ifdef CONFIG_TEGRA241_CMDQV
+static void acpi_smmu_dsdt_probe_tegra241_cmdqv(struct acpi_iort_node *node,
+						struct arm_smmu_device *smmu)
+{
+	const char *uid = kasprintf(GFP_KERNEL, "%u", node->identifier);
+	struct acpi_device *adev;
+
+	/* Look for an NVDA200C node whose _UID matches the SMMU node ID */
+	adev = acpi_dev_get_first_match_dev("NVDA200C", uid, -1);
+	if (adev) {
+		/* Tegra241 CMDQV driver is responsible for put_device() */
+		smmu->impl_dev = &adev->dev;
+		smmu->options |= ARM_SMMU_OPT_TEGRA241_CMDQV;
+		dev_info(smmu->dev, "found companion CMDQV device: %s\n",
+			 dev_name(smmu->impl_dev));
+	}
+	kfree(uid);
+}
+#else
+static void acpi_smmu_dsdt_probe_tegra241_cmdqv(struct acpi_iort_node *node,
+						struct arm_smmu_device *smmu)
+{
+}
+#endif
+
+static int acpi_smmu_iort_probe_model(struct acpi_iort_node *node,
+				      struct arm_smmu_device *smmu)
+{
+	struct acpi_iort_smmu_v3 *iort_smmu =
+		(struct acpi_iort_smmu_v3 *)node->node_data;
+
+	switch (iort_smmu->model) {
+	case ACPI_IORT_SMMU_V3_CAVIUM_CN99XX:
+		smmu->options |= ARM_SMMU_OPT_PAGE0_REGS_ONLY;
+		break;
+	case ACPI_IORT_SMMU_V3_HISILICON_HI161X:
+		smmu->options |= ARM_SMMU_OPT_SKIP_PREFETCH;
+		break;
+	case ACPI_IORT_SMMU_V3_GENERIC:
+		/*
+		 * Tegra241 implementation stores its SMMU options and impl_dev
+		 * in DSDT. Thus, go through the ACPI tables unconditionally.
+		 */
+		acpi_smmu_dsdt_probe_tegra241_cmdqv(node, smmu);
+		break;
+	}
+
+	dev_notice(smmu->dev, "option mask 0x%x\n", smmu->options);
+	return 0;
+}
+
+static int arm_smmu_device_acpi_probe(struct platform_device *pdev,
+				      struct arm_smmu_device *smmu)
+{
+	struct acpi_iort_smmu_v3 *iort_smmu;
+	struct device *dev = smmu->dev;
+	struct acpi_iort_node *node;
+
+	node = *(struct acpi_iort_node **)dev_get_platdata(dev);
+
+	/* Retrieve SMMUv3 specific data */
+	iort_smmu = (struct acpi_iort_smmu_v3 *)node->node_data;
+
+	if (iort_smmu->flags & ACPI_IORT_SMMU_V3_COHACC_OVERRIDE)
+		smmu->features |= ARM_SMMU_FEAT_COHERENCY;
+
+	switch (FIELD_GET(ACPI_IORT_SMMU_V3_HTTU_OVERRIDE, iort_smmu->flags)) {
+	case IDR0_HTTU_ACCESS_DIRTY:
+		smmu->features |= ARM_SMMU_FEAT_HD;
+		fallthrough;
+	case IDR0_HTTU_ACCESS:
+		smmu->features |= ARM_SMMU_FEAT_HA;
+	}
+
+	return acpi_smmu_iort_probe_model(node, smmu);
+}
+#else
+static inline int arm_smmu_device_acpi_probe(struct platform_device *pdev,
+					     struct arm_smmu_device *smmu)
+{
+	return -ENODEV;
+}
+#endif
+
+static int arm_smmu_device_dt_probe(struct platform_device *pdev,
+				    struct arm_smmu_device *smmu)
+{
+	struct device *dev = &pdev->dev;
+	u32 cells;
+	int ret = -EINVAL;
+
+	if (of_property_read_u32(dev->of_node, "#iommu-cells", &cells))
+		dev_err(dev, "missing #iommu-cells property\n");
+	else if (cells != 1)
+		dev_err(dev, "invalid #iommu-cells value (%d)\n", cells);
+	else
+		ret = 0;
+
+	parse_driver_options(smmu);
+
+	if (of_dma_is_coherent(dev->of_node))
+		smmu->features |= ARM_SMMU_FEAT_COHERENCY;
+
+	return ret;
+}
+
+int arm_smmu_fw_probe(struct platform_device *pdev,
+		      struct arm_smmu_device *smmu)
+{
+	if (smmu->dev->of_node)
+		return arm_smmu_device_dt_probe(pdev, smmu);
+	else
+		return arm_smmu_device_acpi_probe(pdev, smmu);
+}
+
 #define IIDR_IMPLEMENTER_ARM		0x43b
 #define IIDR_PRODUCTID_ARM_MMU_600	0x483
 #define IIDR_PRODUCTID_ARM_MMU_700	0x487
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index cfee7f9b5afc..91f64416900b 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -9,7 +9,6 @@
  * This driver is powered by bad coffee and bombay mix.
  */
 
-#include
 #include
 #include
 #include
@@ -19,9 +18,6 @@
 #include
 #include
 #include
-#include
-#include
-#include
 #include
 #include
 #include
@@ -67,38 +63,13 @@ static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = {
 	},
 };
 
-struct arm_smmu_option_prop {
-	u32 opt;
-	const char *prop;
-};
-
 DEFINE_XARRAY_ALLOC1(arm_smmu_asid_xa);
 DEFINE_MUTEX(arm_smmu_asid_lock);
 
-static struct arm_smmu_option_prop arm_smmu_options[] = {
-	{ ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" },
-	{ ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"},
-	{ 0, NULL},
-};
-
 static int arm_smmu_domain_finalise(struct arm_smmu_domain *smmu_domain,
 				    struct arm_smmu_device *smmu, u32 flags);
 static int arm_smmu_alloc_cd_tables(struct arm_smmu_master *master);
 
-static void parse_driver_options(struct arm_smmu_device *smmu)
-{
-	int i = 0;
-
-	do {
-		if (of_property_read_bool(smmu->dev->of_node,
-					  arm_smmu_options[i].prop)) {
-			smmu->options |= arm_smmu_options[i].opt;
-			dev_notice(smmu->dev, "option %s\n",
-				   arm_smmu_options[i].prop);
-		}
-	} while (arm_smmu_options[++i].opt);
-}
-
 /* Low-level queue manipulation functions */
 static bool queue_has_space(struct arm_smmu_ll_queue *q, u32 n)
 {
@@ -3828,113 +3799,6 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu)
 	return 0;
 }
 
-#ifdef CONFIG_ACPI
-#ifdef CONFIG_TEGRA241_CMDQV
-static void acpi_smmu_dsdt_probe_tegra241_cmdqv(struct acpi_iort_node *node,
-						struct arm_smmu_device *smmu)
-{
-	const char *uid = kasprintf(GFP_KERNEL, "%u", node->identifier);
-	struct acpi_device *adev;
-
-	/* Look for an NVDA200C node whose _UID matches the SMMU node ID */
-	adev = acpi_dev_get_first_match_dev("NVDA200C", uid, -1);
-	if (adev) {
-		/* Tegra241 CMDQV driver is responsible for put_device() */
-		smmu->impl_dev = &adev->dev;
-		smmu->options |= ARM_SMMU_OPT_TEGRA241_CMDQV;
-		dev_info(smmu->dev, "found companion CMDQV device: %s\n",
-			 dev_name(smmu->impl_dev));
-	}
-	kfree(uid);
-}
-#else
-static void acpi_smmu_dsdt_probe_tegra241_cmdqv(struct acpi_iort_node *node,
-						struct arm_smmu_device *smmu)
-{
-}
-#endif
-
-static int acpi_smmu_iort_probe_model(struct acpi_iort_node *node,
-				      struct arm_smmu_device *smmu)
-{
-	struct acpi_iort_smmu_v3 *iort_smmu =
-		(struct acpi_iort_smmu_v3 *)node->node_data;
-
-	switch (iort_smmu->model) {
-	case ACPI_IORT_SMMU_V3_CAVIUM_CN99XX:
-		smmu->options |= ARM_SMMU_OPT_PAGE0_REGS_ONLY;
-		break;
-	case ACPI_IORT_SMMU_V3_HISILICON_HI161X:
-		smmu->options |= ARM_SMMU_OPT_SKIP_PREFETCH;
-		break;
-	case ACPI_IORT_SMMU_V3_GENERIC:
-		/*
-		 * Tegra241 implementation stores its SMMU options and impl_dev
-		 * in DSDT. Thus, go through the ACPI tables unconditionally.
-		 */
-		acpi_smmu_dsdt_probe_tegra241_cmdqv(node, smmu);
-		break;
-	}
-
-	dev_notice(smmu->dev, "option mask 0x%x\n", smmu->options);
-	return 0;
-}
-
-static int arm_smmu_device_acpi_probe(struct platform_device *pdev,
-				      struct arm_smmu_device *smmu)
-{
-	struct acpi_iort_smmu_v3 *iort_smmu;
-	struct device *dev = smmu->dev;
-	struct acpi_iort_node *node;
-
-	node = *(struct acpi_iort_node **)dev_get_platdata(dev);
-
-	/* Retrieve SMMUv3 specific data */
-	iort_smmu = (struct acpi_iort_smmu_v3 *)node->node_data;
-
-	if (iort_smmu->flags & ACPI_IORT_SMMU_V3_COHACC_OVERRIDE)
-		smmu->features |= ARM_SMMU_FEAT_COHERENCY;
-
-	switch (FIELD_GET(ACPI_IORT_SMMU_V3_HTTU_OVERRIDE, iort_smmu->flags)) {
-	case IDR0_HTTU_ACCESS_DIRTY:
-		smmu->features |= ARM_SMMU_FEAT_HD;
-		fallthrough;
-	case IDR0_HTTU_ACCESS:
-		smmu->features |= ARM_SMMU_FEAT_HA;
-	}
-
-	return acpi_smmu_iort_probe_model(node, smmu);
-}
-#else
-static inline int arm_smmu_device_acpi_probe(struct platform_device *pdev,
-					     struct arm_smmu_device *smmu)
-{
-	return -ENODEV;
-}
-#endif
-
-static int arm_smmu_device_dt_probe(struct platform_device *pdev,
-				    struct arm_smmu_device *smmu)
-{
-	struct device *dev = &pdev->dev;
-	u32 cells;
-	int ret = -EINVAL;
-
-	if (of_property_read_u32(dev->of_node, "#iommu-cells", &cells))
-		dev_err(dev, "missing #iommu-cells property\n");
-	else if (cells != 1)
-		dev_err(dev, "invalid #iommu-cells value (%d)\n", cells);
-	else
-		ret = 0;
-
-	parse_driver_options(smmu);
-
-	if (of_dma_is_coherent(dev->of_node))
-		smmu->features |= ARM_SMMU_FEAT_COHERENCY;
-
-	return ret;
-}
-
 static unsigned long arm_smmu_resource_size(struct arm_smmu_device *smmu)
 {
 	if (smmu->options & ARM_SMMU_OPT_PAGE0_REGS_ONLY)
@@ -4030,11 +3894,7 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
 		return -ENOMEM;
 	smmu->dev = dev;
 
-	if (dev->of_node) {
-		ret = arm_smmu_device_dt_probe(pdev, smmu);
-	} else {
-		ret = arm_smmu_device_acpi_probe(pdev, smmu);
-	}
+	ret = arm_smmu_fw_probe(pdev, smmu);
 	if (ret)
 		return ret;
 
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 1a3452554ca8..2d658f15973a 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -334,6 +334,9 @@ int arm_smmu_of_xlate(struct device *dev, const struct of_phandle_args *args);
 void arm_smmu_get_resv_regions(struct device *dev,
 			       struct list_head *head);
 
+struct platform_device;
+int arm_smmu_fw_probe(struct platform_device *pdev,
+		      struct arm_smmu_device *smmu);
 int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu);
 int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
 			    struct arm_smmu_queue *q,
-- 
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
header.from=google.com; spf=pass smtp.mailfrom=flex--smostafa.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=PnPAGyRy; arc=none smtp.client-ip=209.85.128.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--smostafa.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="PnPAGyRy" Received: by mail-wm1-f73.google.com with SMTP id 5b1f17b1804b1-4361a8fc3bdso5504595e9.2 for ; Thu, 12 Dec 2024 10:05:03 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1734026702; x=1734631502; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=RRlluNjB7B7yhqcQsTcc7WzSgCbMOzbSDkNF00y+zZQ=; b=PnPAGyRyyUAGOGJTR+VGajaw44Oc5apJNcCA5H4dICriWBR+gyZVxOvi/rap10cmQ2 I2vIkHoq1EWufHNZ3dFEJ6ZFocznQKc1UUjBydY/Th0tot3rv3/hSovlhP7/uyu4Ixoj 44xoPcy2RbXGkciZdSv/JskU8xrw082zHUhEdsz3+kYG8ceJ7gdhcjARrCy25FOISXMX NWgKv84lXavgEGIHZ+bKqVtn43YbI6Fy2uIhs4HKVHzkv2rfCyEnyfaD/m2cuFAAYJOE xhenYR44RSEoe5gPq74vlH3IDYxfKzJhvvVS8NZplW0+Gn/WvfoBOCJdx4aNYiKL/rBy hfzw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1734026702; x=1734631502; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=RRlluNjB7B7yhqcQsTcc7WzSgCbMOzbSDkNF00y+zZQ=; b=QeKcTjOic51HlZJt7dSSR9y9J3L8mphdmtMwoFjs3PT5YyEhBOLekC5WPX7QT3bgnb UlWfYaUJH64OkLKkTTC1O6xH0A+nRD+qeftPp11DwEdGtFWA9UBuZJZ4dtzUjq+H9Jjt ki6jDyUYA2Zu6Dqz54BJwJjuUiMKYsW8UK+4o108gu8d8YG4N36GD/65Y1gJ0KfUfJyK ki7ITuD1R5ou46BdzwyCSkX1wz6HUPSbvuGNk8GEP85OyvGgTImSc1qS8atUUX5XQOt6 dzKQfDLwrKvp3yK6ipc8BMdpbH7OjepCSzKfvE/pdGMbtZTpuunP7HpehZ3yD9iKyZwG awyg== 
X-Forwarded-Encrypted: i=1; AJvYcCUy62OFFhWlu4JhtpJfXUfaownQV2yWtmjVv02YNjEs1rWZ7/iOTANiKZp0hPpXAKHfqmUBXPfnfETBTh8=@vger.kernel.org X-Gm-Message-State: AOJu0Yx3UGxShR+17d+k7NVKgeaFVXl4NTAz88whk0KF1DyE9K02GML1 YbeQYtT31QmyowYcgeMMg1umkjLjjIZISf/VCADuj/d0dnEqWoeEp+t3M0EHeVHYbg3P+9ISK2W A/m/9MIPYiw== X-Google-Smtp-Source: AGHT+IG8R7L09stXwqH3oYXhWOZy4o2kVZLVIfQiCvS7+nSVYxh7Lqp6AmiQAmgplzW2b5ucEblLkxA9Ex8Hjg== X-Received: from wmik26.prod.google.com ([2002:a7b:c41a:0:b0:434:f801:bf67]) (user=smostafa job=prod-delivery.src-stubby-dispatcher) by 2002:a05:600c:3b94:b0:434:f2bf:1708 with SMTP id 5b1f17b1804b1-4361c34672amr67605075e9.7.1734026702619; Thu, 12 Dec 2024 10:05:02 -0800 (PST) Date: Thu, 12 Dec 2024 18:03:33 +0000 In-Reply-To: <20241212180423.1578358-1-smostafa@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20241212180423.1578358-1-smostafa@google.com> X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog Message-ID: <20241212180423.1578358-10-smostafa@google.com> Subject: [RFC PATCH v2 09/58] iommu/arm-smmu-v3: Move IOMMU registration to arm-smmu-v3-common.c From: Mostafa Saleh To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Jean-Philippe Brucker The KVM driver will need to implement a few IOMMU ops, so move the helpers to arm-smmu-v3-common. 
Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Mostafa Saleh
---
 .../arm/arm-smmu-v3/arm-smmu-v3-common.c      | 27 +++++++++++++++++++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   | 17 ++----------
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   |  4 +++
 3 files changed, 33 insertions(+), 15 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c
index 04f1e2f1c458..b7bcac51cc7d 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c
@@ -645,3 +645,30 @@ void arm_smmu_write_strtab(struct arm_smmu_device *smmu)
 		       smmu->base + ARM_SMMU_STRTAB_BASE);
 	writel_relaxed(reg, smmu->base + ARM_SMMU_STRTAB_BASE_CFG);
 }
+
+int arm_smmu_register_iommu(struct arm_smmu_device *smmu,
+			    struct iommu_ops *ops, phys_addr_t ioaddr)
+{
+	int ret;
+	struct device *dev = smmu->dev;
+
+	ret = iommu_device_sysfs_add(&smmu->iommu, dev, NULL,
+				     "smmu3.%pa", &ioaddr);
+	if (ret)
+		return ret;
+
+	ret = iommu_device_register(&smmu->iommu, ops, dev);
+	if (ret) {
+		dev_err(dev, "Failed to register iommu\n");
+		iommu_device_sysfs_remove(&smmu->iommu);
+		return ret;
+	}
+
+	return 0;
+}
+
+void arm_smmu_unregister_iommu(struct arm_smmu_device *smmu)
+{
+	iommu_device_unregister(&smmu->iommu);
+	iommu_device_sysfs_remove(&smmu->iommu);
+}
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 91f64416900b..bcefa361f3d3 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -3980,27 +3980,14 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
 		return ret;
 
 	/* And we're up. Go go go! */
-	ret = iommu_device_sysfs_add(&smmu->iommu, dev, NULL,
-				     "smmu3.%pa", &ioaddr);
-	if (ret)
-		return ret;
-
-	ret = iommu_device_register(&smmu->iommu, &arm_smmu_ops, dev);
-	if (ret) {
-		dev_err(dev, "Failed to register iommu\n");
-		iommu_device_sysfs_remove(&smmu->iommu);
-		return ret;
-	}
-
-	return 0;
+	return arm_smmu_register_iommu(smmu, &arm_smmu_ops, ioaddr);
 }
 
 static void arm_smmu_device_remove(struct platform_device *pdev)
 {
 	struct arm_smmu_device *smmu = platform_get_drvdata(pdev);
 
-	iommu_device_unregister(&smmu->iommu);
-	iommu_device_sysfs_remove(&smmu->iommu);
+	arm_smmu_unregister_iommu(smmu);
 	arm_smmu_device_disable(smmu);
 	iopf_queue_free(smmu->evtq.iopf);
 	ida_destroy(&smmu->vmid_map);
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 2d658f15973a..63545fdf55f9 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -349,6 +349,10 @@ void arm_smmu_write_strtab_l1_desc(struct arm_smmu_strtab_l1 *dst,
 				   dma_addr_t l2ptr_dma);
 void arm_smmu_write_strtab(struct arm_smmu_device *smmu);
 
+int arm_smmu_register_iommu(struct arm_smmu_device *smmu,
+			    struct iommu_ops *ops, phys_addr_t ioaddr);
+void arm_smmu_unregister_iommu(struct arm_smmu_device *smmu);
+
 void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid);
 void arm_smmu_tlb_inv_range_asid(unsigned long iova, size_t size, int asid,
 				 size_t granule, bool leaf,
-- 
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:03:34 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-11-smostafa@google.com>
Subject: [RFC PATCH v2 10/58] iommu/arm-smmu-v3: Move common irq code to
 common file
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org,
 oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org,
 robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca,
 nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com,
 tabba@google.com, danielmentz@google.com, tzukui@google.com,
 Mostafa Saleh
Content-Type: text/plain; charset="utf-8"

Soon, the KVM SMMUv3 driver will support IRQs. Instead of re-implementing
the architectural bits and the firmware bindings, move this code to the
common file shared with KVM.

Signed-off-by: Mostafa Saleh
---
 .../arm/arm-smmu-v3/arm-smmu-v3-common.c      | 150 +++++++++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   | 297 +-----------------
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   | 159 ++++++++++
 3 files changed, 313 insertions(+), 293 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c
index b7bcac51cc7d..d842e592b351 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 #include
 #include
+#include
 #include
 #include
 #include
@@ -22,6 +23,24 @@ static struct arm_smmu_option_prop arm_smmu_options[] = {
 	{ 0, NULL},
 };
 
+static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = {
+	[EVTQ_MSI_INDEX] = {
+		ARM_SMMU_EVTQ_IRQ_CFG0,
+		ARM_SMMU_EVTQ_IRQ_CFG1,
+		ARM_SMMU_EVTQ_IRQ_CFG2,
+	},
+	[GERROR_MSI_INDEX] = {
+		ARM_SMMU_GERROR_IRQ_CFG0,
+		ARM_SMMU_GERROR_IRQ_CFG1,
+		ARM_SMMU_GERROR_IRQ_CFG2,
+	},
+	[PRIQ_MSI_INDEX] = {
+		ARM_SMMU_PRIQ_IRQ_CFG0,
+		ARM_SMMU_PRIQ_IRQ_CFG1,
+		ARM_SMMU_PRIQ_IRQ_CFG2,
+	},
+};
+
 static void parse_driver_options(struct arm_smmu_device *smmu)
 {
 	int i = 0;
@@ -646,6 +665,137 @@ void arm_smmu_write_strtab(struct arm_smmu_device *smmu)
 	writel_relaxed(reg, smmu->base + ARM_SMMU_STRTAB_BASE_CFG);
 }
 
+static void arm_smmu_free_msis(void *data)
+{
+	struct device *dev = data;
+
+	platform_device_msi_free_irqs_all(dev);
+}
+
+static void arm_smmu_write_msi_msg(struct msi_desc *desc, struct msi_msg *msg)
+{
+	phys_addr_t doorbell;
+	struct device *dev = msi_desc_to_dev(desc);
+	struct arm_smmu_device *smmu = dev_get_drvdata(dev);
+	phys_addr_t *cfg = arm_smmu_msi_cfg[desc->msi_index];
+
+	doorbell = (((u64)msg->address_hi) << 32) | msg->address_lo;
+	doorbell &= MSI_CFG0_ADDR_MASK;
+
+	writeq_relaxed(doorbell, smmu->base + cfg[0]);
+	writel_relaxed(msg->data, smmu->base + cfg[1]);
+	writel_relaxed(ARM_SMMU_MEMATTR_DEVICE_nGnRE, smmu->base + cfg[2]);
+}
+
+static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
+{
+	int ret, nvec = ARM_SMMU_MAX_MSIS;
+	struct device *dev = smmu->dev;
+
+	/* Clear the MSI address regs */
+	writeq_relaxed(0, smmu->base + ARM_SMMU_GERROR_IRQ_CFG0);
+	writeq_relaxed(0, smmu->base + ARM_SMMU_EVTQ_IRQ_CFG0);
+
+	if (smmu->features & ARM_SMMU_FEAT_PRI)
+		writeq_relaxed(0, smmu->base + ARM_SMMU_PRIQ_IRQ_CFG0);
+	else
+		nvec--;
+
+	if (!(smmu->features & ARM_SMMU_FEAT_MSI))
+		return;
+
+	if (!dev->msi.domain) {
+		dev_info(smmu->dev, "msi_domain absent - falling back to wired irqs\n");
+		return;
+	}
+
+	/* Allocate MSIs for evtq, gerror and priq. Ignore cmdq */
+	ret = platform_device_msi_init_and_alloc_irqs(dev, nvec, arm_smmu_write_msi_msg);
+	if (ret) {
+		dev_warn(dev, "failed to allocate MSIs - falling back to wired irqs\n");
+		return;
+	}
+
+	smmu->evtq.q.irq = msi_get_virq(dev, EVTQ_MSI_INDEX);
+	smmu->gerr_irq = msi_get_virq(dev, GERROR_MSI_INDEX);
+	smmu->priq.q.irq = msi_get_virq(dev, PRIQ_MSI_INDEX);
+
+	/* Add callback to free MSIs on teardown */
+	devm_add_action_or_reset(dev, arm_smmu_free_msis, dev);
+}
+
+void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu,
+				irqreturn_t evtqirq(int irq, void *dev),
+				irqreturn_t gerrorirq(int irq, void *dev),
+				irqreturn_t priirq(int irq, void *dev))
+{
+	int irq, ret;
+
+	arm_smmu_setup_msis(smmu);
+
+	/* Request interrupt lines */
+	irq = smmu->evtq.q.irq;
+	if (irq) {
+		ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
+						evtqirq,
+						IRQF_ONESHOT,
+						"arm-smmu-v3-evtq", smmu);
+		if (ret < 0)
+			dev_warn(smmu->dev, "failed to enable evtq irq\n");
+	} else {
+		dev_warn(smmu->dev, "no evtq irq - events will not be reported!\n");
+	}
+
+	irq = smmu->gerr_irq;
+	if (irq) {
+		ret = devm_request_irq(smmu->dev, irq, gerrorirq,
+				       0, "arm-smmu-v3-gerror", smmu);
+		if (ret < 0)
+			dev_warn(smmu->dev, "failed to enable gerror irq\n");
+	} else {
+		dev_warn(smmu->dev, "no gerr irq - errors will not be reported!\n");
+	}
+
+	if (smmu->features & ARM_SMMU_FEAT_PRI) {
+		irq = smmu->priq.q.irq;
+		if (irq) {
+			ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
+							priirq,
+							IRQF_ONESHOT,
+							"arm-smmu-v3-priq",
+							smmu);
+			if (ret < 0)
+				dev_warn(smmu->dev,
+					 "failed to enable priq irq\n");
+		} else {
+			dev_warn(smmu->dev, "no priq irq - PRI will be broken\n");
+		}
+	}
+}
+
+void arm_smmu_probe_irq(struct platform_device *pdev,
+			struct arm_smmu_device *smmu)
+{
+	int irq;
+
+	irq = platform_get_irq_byname_optional(pdev, "combined");
+	if (irq > 0)
+		smmu->combined_irq = irq;
+	else {
+		irq = platform_get_irq_byname_optional(pdev, "eventq");
+		if (irq > 0)
+			smmu->evtq.q.irq = irq;
+
+		irq = platform_get_irq_byname_optional(pdev, "priq");
+		if (irq > 0)
+			smmu->priq.q.irq = irq;
+
+		irq = platform_get_irq_byname_optional(pdev, "gerror");
+		if (irq > 0)
+			smmu->gerr_irq = irq;
+	}
+}
+
 int arm_smmu_register_iommu(struct arm_smmu_device *smmu,
 			    struct iommu_ops *ops, phys_addr_t ioaddr)
 {
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index bcefa361f3d3..8234a9754a04 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -12,7 +12,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -34,35 +33,10 @@ MODULE_PARM_DESC(disable_msipolling,
 static struct iommu_ops arm_smmu_ops;
 static struct iommu_dirty_ops arm_smmu_dirty_ops;
 
-enum arm_smmu_msi_index {
-	EVTQ_MSI_INDEX,
-	GERROR_MSI_INDEX,
-	PRIQ_MSI_INDEX,
-	ARM_SMMU_MAX_MSIS,
-};
-
 #define NUM_ENTRY_QWORDS 8
 static_assert(sizeof(struct arm_smmu_ste) == NUM_ENTRY_QWORDS * sizeof(u64));
 static_assert(sizeof(struct arm_smmu_cd) == NUM_ENTRY_QWORDS * sizeof(u64));
 
-static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = {
-	[EVTQ_MSI_INDEX] = {
-		ARM_SMMU_EVTQ_IRQ_CFG0,
-		ARM_SMMU_EVTQ_IRQ_CFG1,
-		ARM_SMMU_EVTQ_IRQ_CFG2,
-	},
-	[GERROR_MSI_INDEX] = {
-		ARM_SMMU_GERROR_IRQ_CFG0,
-		ARM_SMMU_GERROR_IRQ_CFG1,
-		ARM_SMMU_GERROR_IRQ_CFG2,
-	},
-	[PRIQ_MSI_INDEX] = {
-		ARM_SMMU_PRIQ_IRQ_CFG0,
-		ARM_SMMU_PRIQ_IRQ_CFG1,
-		ARM_SMMU_PRIQ_IRQ_CFG2,
-	},
-};
-
 DEFINE_XARRAY_ALLOC1(arm_smmu_asid_xa);
 DEFINE_MUTEX(arm_smmu_asid_lock);
 
@@ -70,149 +44,6 @@ static int arm_smmu_domain_finalise(struct arm_smmu_domain *smmu_domain,
 				    struct arm_smmu_device *smmu, u32 flags);
 static int arm_smmu_alloc_cd_tables(struct arm_smmu_master *master);
 
-/* Low-level queue manipulation functions */
-static bool queue_has_space(struct arm_smmu_ll_queue *q, u32 n)
-{
-	u32 space, prod, cons;
-
-	prod = Q_IDX(q, q->prod);
-	cons = Q_IDX(q, q->cons);
-
-	if (Q_WRP(q, q->prod) == Q_WRP(q, q->cons))
-		space = (1 << q->max_n_shift) - (prod - cons);
-	else
-		space = cons - prod;
-
-	return space >= n;
-}
-
-static bool queue_full(struct arm_smmu_ll_queue *q)
-{
-	return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
-	       Q_WRP(q, q->prod) != Q_WRP(q, q->cons);
-}
-
-static bool queue_empty(struct arm_smmu_ll_queue *q)
-{
-	return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
-	       Q_WRP(q, q->prod) == Q_WRP(q, q->cons);
-}
-
-static bool queue_consumed(struct arm_smmu_ll_queue *q, u32 prod)
-{
-	return ((Q_WRP(q, q->cons) == Q_WRP(q, prod)) &&
-		(Q_IDX(q, q->cons) > Q_IDX(q, prod))) ||
-	       ((Q_WRP(q, q->cons) != Q_WRP(q, prod)) &&
-		(Q_IDX(q, q->cons) <= Q_IDX(q, prod)));
-}
-
-static void queue_sync_cons_out(struct arm_smmu_queue *q)
-{
-	/*
-	 * Ensure that all CPU accesses (reads and writes) to the queue
-	 * are complete before we update the cons pointer.
-	 */
-	__iomb();
-	writel_relaxed(q->llq.cons, q->cons_reg);
-}
-
-static void queue_inc_cons(struct arm_smmu_ll_queue *q)
-{
-	u32 cons = (Q_WRP(q, q->cons) | Q_IDX(q, q->cons)) + 1;
-	q->cons = Q_OVF(q->cons) | Q_WRP(q, cons) | Q_IDX(q, cons);
-}
-
-static void queue_sync_cons_ovf(struct arm_smmu_queue *q)
-{
-	struct arm_smmu_ll_queue *llq = &q->llq;
-
-	if (likely(Q_OVF(llq->prod) == Q_OVF(llq->cons)))
-		return;
-
-	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
-		    Q_IDX(llq, llq->cons);
-	queue_sync_cons_out(q);
-}
-
-static int queue_sync_prod_in(struct arm_smmu_queue *q)
-{
-	u32 prod;
-	int ret = 0;
-
-	/*
-	 * We can't use the _relaxed() variant here, as we must prevent
-	 * speculative reads of the queue before we have determined that
-	 * prod has indeed moved.
- */ - prod =3D readl(q->prod_reg); - - if (Q_OVF(prod) !=3D Q_OVF(q->llq.prod)) - ret =3D -EOVERFLOW; - - q->llq.prod =3D prod; - return ret; -} - -static u32 queue_inc_prod_n(struct arm_smmu_ll_queue *q, int n) -{ - u32 prod =3D (Q_WRP(q, q->prod) | Q_IDX(q, q->prod)) + n; - return Q_OVF(q->prod) | Q_WRP(q, prod) | Q_IDX(q, prod); -} - -static void queue_poll_init(struct arm_smmu_device *smmu, - struct arm_smmu_queue_poll *qp) -{ - qp->delay =3D 1; - qp->spin_cnt =3D 0; - qp->wfe =3D !!(smmu->features & ARM_SMMU_FEAT_SEV); - qp->timeout =3D ktime_add_us(ktime_get(), ARM_SMMU_POLL_TIMEOUT_US); -} - -static int queue_poll(struct arm_smmu_queue_poll *qp) -{ - if (ktime_compare(ktime_get(), qp->timeout) > 0) - return -ETIMEDOUT; - - if (qp->wfe) { - wfe(); - } else if (++qp->spin_cnt < ARM_SMMU_POLL_SPIN_COUNT) { - cpu_relax(); - } else { - udelay(qp->delay); - qp->delay *=3D 2; - qp->spin_cnt =3D 0; - } - - return 0; -} - -static void queue_write(__le64 *dst, u64 *src, size_t n_dwords) -{ - int i; - - for (i =3D 0; i < n_dwords; ++i) - *dst++ =3D cpu_to_le64(*src++); -} - -static void queue_read(u64 *dst, __le64 *src, size_t n_dwords) -{ - int i; - - for (i =3D 0; i < n_dwords; ++i) - *dst++ =3D le64_to_cpu(*src++); -} - -static int queue_remove_raw(struct arm_smmu_queue *q, u64 *ent) -{ - if (queue_empty(&q->llq)) - return -EAGAIN; - - queue_read(ent, Q_ENT(q, q->llq.cons), q->ent_dwords); - queue_inc_cons(&q->llq); - queue_sync_cons_out(q); - return 0; -} - /* High-level queue accessors */ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent) { @@ -3520,111 +3351,6 @@ static int arm_smmu_init_structures(struct arm_smmu= _device *smmu) return 0; } =20 -static void arm_smmu_free_msis(void *data) -{ - struct device *dev =3D data; - - platform_device_msi_free_irqs_all(dev); -} - -static void arm_smmu_write_msi_msg(struct msi_desc *desc, struct msi_msg *= msg) -{ - phys_addr_t doorbell; - struct device *dev =3D msi_desc_to_dev(desc); - struct 
arm_smmu_device *smmu =3D dev_get_drvdata(dev); - phys_addr_t *cfg =3D arm_smmu_msi_cfg[desc->msi_index]; - - doorbell =3D (((u64)msg->address_hi) << 32) | msg->address_lo; - doorbell &=3D MSI_CFG0_ADDR_MASK; - - writeq_relaxed(doorbell, smmu->base + cfg[0]); - writel_relaxed(msg->data, smmu->base + cfg[1]); - writel_relaxed(ARM_SMMU_MEMATTR_DEVICE_nGnRE, smmu->base + cfg[2]); -} - -static void arm_smmu_setup_msis(struct arm_smmu_device *smmu) -{ - int ret, nvec =3D ARM_SMMU_MAX_MSIS; - struct device *dev =3D smmu->dev; - - /* Clear the MSI address regs */ - writeq_relaxed(0, smmu->base + ARM_SMMU_GERROR_IRQ_CFG0); - writeq_relaxed(0, smmu->base + ARM_SMMU_EVTQ_IRQ_CFG0); - - if (smmu->features & ARM_SMMU_FEAT_PRI) - writeq_relaxed(0, smmu->base + ARM_SMMU_PRIQ_IRQ_CFG0); - else - nvec--; - - if (!(smmu->features & ARM_SMMU_FEAT_MSI)) - return; - - if (!dev->msi.domain) { - dev_info(smmu->dev, "msi_domain absent - falling back to wired irqs\n"); - return; - } - - /* Allocate MSIs for evtq, gerror and priq. 
Ignore cmdq */ - ret =3D platform_device_msi_init_and_alloc_irqs(dev, nvec, arm_smmu_write= _msi_msg); - if (ret) { - dev_warn(dev, "failed to allocate MSIs - falling back to wired irqs\n"); - return; - } - - smmu->evtq.q.irq =3D msi_get_virq(dev, EVTQ_MSI_INDEX); - smmu->gerr_irq =3D msi_get_virq(dev, GERROR_MSI_INDEX); - smmu->priq.q.irq =3D msi_get_virq(dev, PRIQ_MSI_INDEX); - - /* Add callback to free MSIs on teardown */ - devm_add_action_or_reset(dev, arm_smmu_free_msis, dev); -} - -static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu) -{ - int irq, ret; - - arm_smmu_setup_msis(smmu); - - /* Request interrupt lines */ - irq =3D smmu->evtq.q.irq; - if (irq) { - ret =3D devm_request_threaded_irq(smmu->dev, irq, NULL, - arm_smmu_evtq_thread, - IRQF_ONESHOT, - "arm-smmu-v3-evtq", smmu); - if (ret < 0) - dev_warn(smmu->dev, "failed to enable evtq irq\n"); - } else { - dev_warn(smmu->dev, "no evtq irq - events will not be reported!\n"); - } - - irq =3D smmu->gerr_irq; - if (irq) { - ret =3D devm_request_irq(smmu->dev, irq, arm_smmu_gerror_handler, - 0, "arm-smmu-v3-gerror", smmu); - if (ret < 0) - dev_warn(smmu->dev, "failed to enable gerror irq\n"); - } else { - dev_warn(smmu->dev, "no gerr irq - errors will not be reported!\n"); - } - - if (smmu->features & ARM_SMMU_FEAT_PRI) { - irq =3D smmu->priq.q.irq; - if (irq) { - ret =3D devm_request_threaded_irq(smmu->dev, irq, NULL, - arm_smmu_priq_thread, - IRQF_ONESHOT, - "arm-smmu-v3-priq", - smmu); - if (ret < 0) - dev_warn(smmu->dev, - "failed to enable priq irq\n"); - } else { - dev_warn(smmu->dev, "no priq irq - PRI will be broken\n"); - } - } -} - static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu) { int ret, irq; @@ -3652,7 +3378,8 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device= *smmu) if (ret < 0) dev_warn(smmu->dev, "failed to enable combined irq\n"); } else - arm_smmu_setup_unique_irqs(smmu); + arm_smmu_setup_unique_irqs(smmu, arm_smmu_evtq_thread, + arm_smmu_gerror_handler, 
arm_smmu_priq_thread); =20 if (smmu->features & ARM_SMMU_FEAT_PRI) irqen_flags |=3D IRQ_CTRL_PRIQ_IRQEN; @@ -3883,7 +3610,7 @@ static struct arm_smmu_device *arm_smmu_impl_probe(st= ruct arm_smmu_device *smmu) =20 static int arm_smmu_device_probe(struct platform_device *pdev) { - int irq, ret; + int ret; struct resource *res; resource_size_t ioaddr; struct arm_smmu_device *smmu; @@ -3929,24 +3656,8 @@ static int arm_smmu_device_probe(struct platform_dev= ice *pdev) smmu->page1 =3D smmu->base; } =20 - /* Interrupt lines */ - - irq =3D platform_get_irq_byname_optional(pdev, "combined"); - if (irq > 0) - smmu->combined_irq =3D irq; - else { - irq =3D platform_get_irq_byname_optional(pdev, "eventq"); - if (irq > 0) - smmu->evtq.q.irq =3D irq; + arm_smmu_probe_irq(pdev, smmu); =20 - irq =3D platform_get_irq_byname_optional(pdev, "priq"); - if (irq > 0) - smmu->priq.q.irq =3D irq; - - irq =3D platform_get_irq_byname_optional(pdev, "gerror"); - if (irq > 0) - smmu->gerr_irq =3D irq; - } /* Probe the h/w */ ret =3D arm_smmu_device_hw_probe(smmu); if (ret) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/ar= m/arm-smmu-v3/arm-smmu-v3.h index 63545fdf55f9..d91dfe55835d 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h @@ -8,6 +8,7 @@ #ifndef _ARM_SMMU_V3_H #define _ARM_SMMU_V3_H =20 +#include #include #include #include @@ -349,6 +350,13 @@ void arm_smmu_write_strtab_l1_desc(struct arm_smmu_str= tab_l1 *dst, dma_addr_t l2ptr_dma); void arm_smmu_write_strtab(struct arm_smmu_device *smmu); =20 +void arm_smmu_probe_irq(struct platform_device *pdev, + struct arm_smmu_device *smmu); +void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu, + irqreturn_t evtqirq(int irq, void *dev), + irqreturn_t gerrorirq(int irq, void *dev), + irqreturn_t priirq(int irq, void *dev)); + int arm_smmu_register_iommu(struct arm_smmu_device *smmu, struct iommu_ops *ops, phys_addr_t ioaddr); void 
arm_smmu_unregister_iommu(struct arm_smmu_device *smmu); @@ -425,4 +433,155 @@ tegra241_cmdqv_probe(struct arm_smmu_device *smmu) return ERR_PTR(-ENODEV); } #endif /* CONFIG_TEGRA241_CMDQV */ + +/* Queue functions shared with common and kernel drivers */ +static bool __maybe_unused queue_has_space(struct arm_smmu_ll_queue *q, u3= 2 n) +{ + u32 space, prod, cons; + + prod =3D Q_IDX(q, q->prod); + cons =3D Q_IDX(q, q->cons); + + if (Q_WRP(q, q->prod) =3D=3D Q_WRP(q, q->cons)) + space =3D (1 << q->max_n_shift) - (prod - cons); + else + space =3D cons - prod; + + return space >=3D n; +} + +static bool __maybe_unused queue_full(struct arm_smmu_ll_queue *q) +{ + return Q_IDX(q, q->prod) =3D=3D Q_IDX(q, q->cons) && + Q_WRP(q, q->prod) !=3D Q_WRP(q, q->cons); +} + +static bool __maybe_unused queue_empty(struct arm_smmu_ll_queue *q) +{ + return Q_IDX(q, q->prod) =3D=3D Q_IDX(q, q->cons) && + Q_WRP(q, q->prod) =3D=3D Q_WRP(q, q->cons); +} + +static bool __maybe_unused queue_consumed(struct arm_smmu_ll_queue *q, u32= prod) +{ + return ((Q_WRP(q, q->cons) =3D=3D Q_WRP(q, prod)) && + (Q_IDX(q, q->cons) > Q_IDX(q, prod))) || + ((Q_WRP(q, q->cons) !=3D Q_WRP(q, prod)) && + (Q_IDX(q, q->cons) <=3D Q_IDX(q, prod))); +} + +static void __maybe_unused queue_sync_cons_out(struct arm_smmu_queue *q) +{ + /* + * Ensure that all CPU accesses (reads and writes) to the queue + * are complete before we update the cons pointer. 
+	 */
+	__iomb();
+	writel_relaxed(q->llq.cons, q->cons_reg);
+}
+
+static void __maybe_unused queue_inc_cons(struct arm_smmu_ll_queue *q)
+{
+	u32 cons = (Q_WRP(q, q->cons) | Q_IDX(q, q->cons)) + 1;
+	q->cons = Q_OVF(q->cons) | Q_WRP(q, cons) | Q_IDX(q, cons);
+}
+
+static void __maybe_unused queue_sync_cons_ovf(struct arm_smmu_queue *q)
+{
+	struct arm_smmu_ll_queue *llq = &q->llq;
+
+	if (likely(Q_OVF(llq->prod) == Q_OVF(llq->cons)))
+		return;
+
+	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
+		    Q_IDX(llq, llq->cons);
+	queue_sync_cons_out(q);
+}
+
+static int __maybe_unused queue_sync_prod_in(struct arm_smmu_queue *q)
+{
+	u32 prod;
+	int ret = 0;
+
+	/*
+	 * We can't use the _relaxed() variant here, as we must prevent
+	 * speculative reads of the queue before we have determined that
+	 * prod has indeed moved.
+	 */
+	prod = readl(q->prod_reg);
+
+	if (Q_OVF(prod) != Q_OVF(q->llq.prod))
+		ret = -EOVERFLOW;
+
+	q->llq.prod = prod;
+	return ret;
+}
+
+static u32 __maybe_unused queue_inc_prod_n(struct arm_smmu_ll_queue *q, int n)
+{
+	u32 prod = (Q_WRP(q, q->prod) | Q_IDX(q, q->prod)) + n;
+	return Q_OVF(q->prod) | Q_WRP(q, prod) | Q_IDX(q, prod);
+}
+
+static void __maybe_unused queue_poll_init(struct arm_smmu_device *smmu,
+					   struct arm_smmu_queue_poll *qp)
+{
+	qp->delay = 1;
+	qp->spin_cnt = 0;
+	qp->wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
+	qp->timeout = ktime_add_us(ktime_get(), ARM_SMMU_POLL_TIMEOUT_US);
+}
+
+static int __maybe_unused queue_poll(struct arm_smmu_queue_poll *qp)
+{
+	if (ktime_compare(ktime_get(), qp->timeout) > 0)
+		return -ETIMEDOUT;
+
+	if (qp->wfe) {
+		wfe();
+	} else if (++qp->spin_cnt < ARM_SMMU_POLL_SPIN_COUNT) {
+		cpu_relax();
+	} else {
+		udelay(qp->delay);
+		qp->delay *= 2;
+		qp->spin_cnt = 0;
+	}
+
+	return 0;
+}
+
+static void __maybe_unused queue_write(__le64 *dst, u64 *src, size_t n_dwords)
+{
+	int i;
+
+	for (i = 0; i < n_dwords; ++i)
+		*dst++ = cpu_to_le64(*src++);
+}
+
+static void __maybe_unused queue_read(u64 *dst, __le64 *src, size_t n_dwords)
+{
+	int i;
+
+	for (i = 0; i < n_dwords; ++i)
+		*dst++ = le64_to_cpu(*src++);
+}
+
+static int __maybe_unused queue_remove_raw(struct arm_smmu_queue *q, u64 *ent)
+{
+	if (queue_empty(&q->llq))
+		return -EAGAIN;
+
+	queue_read(ent, Q_ENT(q, q->llq.cons), q->ent_dwords);
+	queue_inc_cons(&q->llq);
+	queue_sync_cons_out(q);
+	return 0;
+}
+
+enum arm_smmu_msi_index {
+	EVTQ_MSI_INDEX,
+	GERROR_MSI_INDEX,
+	PRIQ_MSI_INDEX,
+	ARM_SMMU_MAX_MSIS,
+};
+
 #endif /* _ARM_SMMU_V3_H */
--
2.47.0.338.g60cca15819-goog
Date: Thu, 12 Dec 2024 18:03:35 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-12-smostafa@google.com>
Subject: [RFC PATCH v2 11/58] KVM: arm64: pkvm: Add pkvm_udelay()
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh

From: Jean-Philippe Brucker

Add a simple delay loop for drivers.

This could use more work. It should be possible to insert a wfe and save
power, but I haven't studied whether it is safe to do so with the host in
control of the event stream. The SMMU driver will use wfe anyway for
frequent waits (provided the implementation can send command queue
events).
Signed-off-by: Mostafa Saleh
Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h |  3 ++
 arch/arm64/kvm/hyp/nvhe/setup.c        |  4 +++
 arch/arm64/kvm/hyp/nvhe/timer-sr.c     | 42 ++++++++++++++++++++++++++
 3 files changed, 49 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index 3b515ce4c433..8a5554615e40 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -143,4 +143,7 @@ int pkvm_load_pvmfw_pages(struct pkvm_hyp_vm *vm, u64 ipa, phys_addr_t phys,
 			  u64 size);
 void pkvm_poison_pvmfw_pages(void);

+int pkvm_timer_init(void);
+void pkvm_udelay(unsigned long usecs);
+
 #endif /* __ARM64_KVM_NVHE_PKVM_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 46dd68161979..9d09f5f471b9 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -356,6 +356,10 @@ void __noreturn __pkvm_init_finalise(void)
 	if (ret)
 		goto out;

+	ret = pkvm_timer_init();
+	if (ret)
+		goto out;
+
 	ret = fix_host_ownership();
 	if (ret)
 		goto out;
diff --git a/arch/arm64/kvm/hyp/nvhe/timer-sr.c b/arch/arm64/kvm/hyp/nvhe/timer-sr.c
index 3aaab20ae5b4..732beb5fe24b 100644
--- a/arch/arm64/kvm/hyp/nvhe/timer-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/timer-sr.c
@@ -11,6 +11,10 @@
 #include
 #include

+#include
+
+static u32 timer_freq;
+
 void __kvm_timer_set_cntvoff(u64 cntvoff)
 {
 	write_sysreg(cntvoff, cntvoff_el2);
@@ -60,3 +64,41 @@ void __timer_enable_traps(struct kvm_vcpu *vcpu)

 	sysreg_clear_set(cnthctl_el2, clr, set);
 }
+
+static u64 pkvm_ticks_get(void)
+{
+	return __arch_counter_get_cntvct();
+}
+
+#define SEC_TO_US 1000000
+
+int pkvm_timer_init(void)
+{
+	timer_freq = read_sysreg(cntfrq_el0);
+	/*
+	 * TODO: The highest privileged level is supposed to initialize this
+	 * register.
+	 * But on some systems (which?), this information is only
+	 * contained in the device-tree, so we'll need to find it out some other
+	 * way.
+	 */
+	if (!timer_freq || timer_freq < SEC_TO_US)
+		return -ENODEV;
+	return 0;
+}
+
+#define pkvm_time_us_to_ticks(us) ((u64)(us) * timer_freq / SEC_TO_US)
+
+void pkvm_udelay(unsigned long usecs)
+{
+	u64 ticks = pkvm_time_us_to_ticks(usecs);
+	u64 start = pkvm_ticks_get();
+
+	while (true) {
+		u64 cur = pkvm_ticks_get();
+
+		if ((cur - start) >= ticks || cur < start)
+			break;
+		/* TODO wfe */
+		cpu_relax();
+	}
+}
--
2.47.0.338.g60cca15819-goog
Date: Thu, 12 Dec 2024 18:03:36 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-13-smostafa@google.com>
Subject: [RFC PATCH v2 12/58] KVM: arm64: Add __pkvm_{use, unuse}_dma()
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org

When a page is mapped in an IOMMU page table for DMA, it must not be
donated to a guest or to the hypervisor. We ensure this as follows:
- The host can only map pages that are OWNED.
- Any page that is mapped is refcounted.
- Donation/sharing is prevented by the refcount check in
  host_request_owned_transition().
- No MMIO transition is allowed beyond the IOMMU MMIO, which happens
  during de-privilege.
If shared pages are allowed to be mapped in the future, similar checks
will be needed in host_request_unshare() and host_ack_unshare().

Add two functions that are called before each IOMMU map and after each
successful IOMMU unmap.

Signed-off-by: Mostafa Saleh
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 97 +++++++++++++++++++
 2 files changed, 99 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 67466b4941b4..d75e64e59596 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -92,6 +92,8 @@ int __pkvm_remove_ioguard_page(struct pkvm_hyp_vcpu *hyp_vcpu, u64 ipa);
 bool __pkvm_check_ioguard_page(struct pkvm_hyp_vcpu *hyp_vcpu);
 int __pkvm_guest_relinquish_to_host(struct pkvm_hyp_vcpu *vcpu,
 				    u64 ipa, u64 *ppa);
+int __pkvm_host_use_dma(u64 phys_addr, size_t size);
+int __pkvm_host_unuse_dma(u64 phys_addr, size_t size);

 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index d14f4d63eb8b..0840af20c366 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -513,6 +513,20 @@ bool addr_is_memory(phys_addr_t phys)
 	return !!find_mem_range(phys, &range);
 }

+static bool is_range_refcounted(phys_addr_t addr, u64 nr_pages)
+{
+	struct hyp_page *p;
+	int i;
+
+	for (i = 0; i < nr_pages; ++i) {
+		p = hyp_phys_to_page(addr + i * PAGE_SIZE);
+		if (hyp_refcount_get(p->refcount))
+			return true;
+	}
+
+	return false;
+}
+
 static bool addr_is_allowed_memory(phys_addr_t phys)
 {
 	struct memblock_region *reg;
@@ -927,6 +941,9 @@ static int host_request_owned_transition(u64 *completer_addr,
 	u64 size = tx->nr_pages * PAGE_SIZE;
 	u64 addr = tx->initiator.addr;

+	if (range_is_memory(addr, addr + size) && is_range_refcounted(addr, tx->nr_pages))
+		return -EINVAL;
+
 	*completer_addr = tx->initiator.host.completer_addr;
 	return __host_check_page_state_range(addr, size, PKVM_PAGE_OWNED);
 }
@@ -938,6 +955,7 @@ static int host_request_unshare(u64 *completer_addr,
 	u64 addr = tx->initiator.addr;

 	*completer_addr = tx->initiator.host.completer_addr;
+
 	return __host_check_page_state_range(addr, size, PKVM_PAGE_SHARED_OWNED);
 }

@@ -2047,6 +2065,85 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages)
 	return ret;
 }

+static void __pkvm_host_use_dma_page(phys_addr_t phys_addr)
+{
+	struct hyp_page *p = hyp_phys_to_page(phys_addr);
+
+	hyp_page_ref_inc(p);
+}
+
+static void __pkvm_host_unuse_dma_page(phys_addr_t phys_addr)
+{
+	struct hyp_page *p = hyp_phys_to_page(phys_addr);
+
+	hyp_page_ref_dec(p);
+}
+
+/*
+ * __pkvm_host_use_dma - Mark host memory as used for DMA
+ * @phys_addr: physical address of the DMA region
+ * @size: size of the DMA region
+ * When a page is mapped in an IOMMU page table for DMA, it must
+ * not be donated to a guest or to the hypervisor. We ensure this with:
+ * - The host can only map pages that are OWNED
+ * - Any page that is mapped is refcounted
+ * - Donation/sharing is prevented by the refcount check in
+ *   host_request_owned_transition()
+ * - No MMIO transition is allowed beyond IOMMU MMIO, which
+ *   happens during de-privilege.
+ * If shared pages are allowed to be mapped in the future,
+ * similar checks are needed in host_request_unshare() and
+ * host_ack_unshare().
+ */
+int __pkvm_host_use_dma(phys_addr_t phys_addr, size_t size)
+{
+	int i;
+	int ret = 0;
+	size_t nr_pages = size >> PAGE_SHIFT;
+
+	if (WARN_ON(!PAGE_ALIGNED(phys_addr | size)))
+		return -EINVAL;
+
+	host_lock_component();
+	ret = __host_check_page_state_range(phys_addr, size, PKVM_PAGE_OWNED);
+	if (ret)
+		goto out_ret;
+
+	if (!range_is_memory(phys_addr, phys_addr + size))
+		goto out_ret;
+
+	for (i = 0; i < nr_pages; i++)
+		__pkvm_host_use_dma_page(phys_addr + i * PAGE_SIZE);
+
+out_ret:
+	host_unlock_component();
+	return ret;
+}
+
+int __pkvm_host_unuse_dma(phys_addr_t phys_addr, size_t size)
+{
+	int i;
+	size_t nr_pages = size >> PAGE_SHIFT;
+
+	if (WARN_ON(!PAGE_ALIGNED(phys_addr | size)))
+		return -EINVAL;
+
+	host_lock_component();
+	if (!range_is_memory(phys_addr, phys_addr + size))
+		goto out_ret;
+	/*
+	 * We end up here after the caller successfully unmapped the page
+	 * from the IOMMU table, which means a ref is held and the page is
+	 * shared in the host stage-2; there can be no failure.
+	 */
+	for (i = 0; i < nr_pages; i++)
+		__pkvm_host_unuse_dma_page(phys_addr + i * PAGE_SIZE);
+
+out_ret:
+	host_unlock_component();
+	return 0;
+}
+
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu,
 			    enum kvm_pgtable_prot prot)
 {
--
2.47.0.338.g60cca15819-goog
Date: Thu, 12 Dec 2024 18:03:37 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-14-smostafa@google.com>
Subject: [RFC PATCH v2 13/58] KVM: arm64: Introduce IOMMU driver infrastructure
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org

To establish DMA isolation, KVM needs an IOMMU driver that provides
certain ops. These ops are defined outside of iommu_ops and have two
components:
- kvm_iommu_driver (kernel): implements simple interaction with the
  kernel (init, remove, ...)
- kvm_iommu_ops (hypervisor): implements the paravirtual interface
  (map, unmap, attach, detach, ...)

Only one driver can be used; it is registered with
kvm_iommu_register_driver() by passing pointers to both ops.
KVM initialises the driver after KVM itself initialises and before the
de-privilege point, which is a suitable moment to establish trusted
interaction between the host and the hypervisor. This also lets the host
do the one-time initialisation from the kernel and avoids that
complexity in the hypervisor, as the kernel is still trusted at this
point.

Also, during the registration call, the pointer to the hypervisor ops is
initialised. The hypervisor init part is called from the finalise
hypercall, which is executed after the kernel KVM IOMMU driver init.

Signed-off-by: Mostafa Saleh
Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/include/asm/kvm_host.h       | 11 ++++++
 arch/arm64/kvm/Makefile                 |  2 +-
 arch/arm64/kvm/arm.c                    |  8 ++++-
 arch/arm64/kvm/hyp/include/nvhe/iommu.h | 13 +++++++
 arch/arm64/kvm/hyp/nvhe/Makefile        |  2 +-
 arch/arm64/kvm/hyp/nvhe/iommu/iommu.c   | 18 ++++++++++
 arch/arm64/kvm/hyp/nvhe/setup.c         |  5 +++
 arch/arm64/kvm/iommu.c                  | 47 +++++++++++++++++++++++++
 8 files changed, 103 insertions(+), 3 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/iommu.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
 create mode 100644 arch/arm64/kvm/iommu.c

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 53916a7f0def..54416cfea573 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1628,4 +1628,15 @@ void kvm_set_vm_id_reg(struct kvm *kvm, u32 reg, u64 val);

 unsigned long __pkvm_reclaim_hyp_alloc_mgt(unsigned long nr_pages);

+struct kvm_iommu_driver {
+	int (*init_driver)(void);
+	void (*remove_driver)(void);
+};
+
+struct kvm_iommu_ops;
+int kvm_iommu_register_driver(struct kvm_iommu_driver *kern_ops,
+			      struct kvm_iommu_ops *el2_ops);
+int kvm_iommu_init_driver(void);
+void kvm_iommu_remove_driver(void);
+
 #endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index f9e208273031..440897366e88 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -23,7 +23,7 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
	 vgic/vgic-v3.o vgic/vgic-v4.o \
	 vgic/vgic-mmio.o vgic/vgic-mmio-v2.o \
	 vgic/vgic-mmio-v3.o vgic/vgic-kvm-device.o \
-	 vgic/vgic-its.o vgic/vgic-debug.o
+	 vgic/vgic-its.o vgic/vgic-debug.o iommu.o

 kvm-$(CONFIG_HW_PERF_EVENTS)  += pmu-emul.o pmu.o
 kvm-$(CONFIG_ARM64_PTR_AUTH)  += pauth.o
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 94b210f36573..4b486323c0c9 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -2510,9 +2510,15 @@ static int __init kvm_hyp_init_protection(u32 hyp_va_bits)
 	if (ret)
 		return ret;

+	ret = kvm_iommu_init_driver();
+	if (ret < 0)
+		return ret;
+
 	ret = do_pkvm_init(hyp_va_bits);
-	if (ret)
+	if (ret) {
+		kvm_iommu_remove_driver();
 		return ret;
+	}

 	free_hyp_pgds();

diff --git a/arch/arm64/kvm/hyp/include/nvhe/iommu.h b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
new file mode 100644
index 000000000000..1ac70cc28a9e
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ARM64_KVM_NVHE_IOMMU_H__
+#define __ARM64_KVM_NVHE_IOMMU_H__
+
+#include
+
+struct kvm_iommu_ops {
+	int (*init)(void);
+};
+
+int kvm_iommu_init(void);
+
+#endif /* __ARM64_KVM_NVHE_IOMMU_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 415cc51fe391..9e1b74c661d2 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -8,7 +8,7 @@ CFLAGS_switch.nvhe.o += -Wno-override-init
 hyp-obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
	 hyp-main.o hyp-smp.o psci-relay.o alloc.o early_alloc.o page_alloc.o \
	 cache.o setup.o mm.o mem_protect.o sys_regs.o pkvm.o stacktrace.o ffa.o \
-	 serial.o alloc_mgt.o
+	 serial.o alloc_mgt.o iommu/iommu.o
 hyp-obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o hyp-obj-$(CONFIG_LIST_HARDENED) +=3D list_debug.o diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c b/arch/arm64/kvm/hyp/nvh= e/iommu/iommu.c new file mode 100644 index 000000000000..3bd87d2084e9 --- /dev/null +++ b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c @@ -0,0 +1,18 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * IOMMU operations for pKVM + * + * Copyright (C) 2022 Linaro Ltd. + */ +#include + +/* Only one set of ops supported, similary to the kernel */ +struct kvm_iommu_ops *kvm_iommu_ops; + +int kvm_iommu_init(void) +{ + if (!kvm_iommu_ops || !kvm_iommu_ops->init) + return -ENODEV; + + return kvm_iommu_ops->init(); +} diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setu= p.c index 9d09f5f471b9..4d36616a7f02 100644 --- a/arch/arm64/kvm/hyp/nvhe/setup.c +++ b/arch/arm64/kvm/hyp/nvhe/setup.c @@ -14,6 +14,7 @@ #include #include #include +#include #include #include #include @@ -360,6 +361,10 @@ void __noreturn __pkvm_init_finalise(void) if (ret) goto out; =20 + ret =3D kvm_iommu_init(); + if (ret) + goto out; + ret =3D fix_host_ownership(); if (ret) goto out; diff --git a/arch/arm64/kvm/iommu.c b/arch/arm64/kvm/iommu.c new file mode 100644 index 000000000000..ed77ea0d12bb --- /dev/null +++ b/arch/arm64/kvm/iommu.c @@ -0,0 +1,47 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2023 Google LLC + * Author: Mostafa Saleh + */ + +#include +#include + +struct kvm_iommu_driver *iommu_driver; +extern struct kvm_iommu_ops *kvm_nvhe_sym(kvm_iommu_ops); + +int kvm_iommu_register_driver(struct kvm_iommu_driver *kern_ops, struct kv= m_iommu_ops *el2_ops) +{ + int ret; + + if (WARN_ON(!kern_ops || !el2_ops)) + return -EINVAL; + + /* + * Paired with smp_load_acquire(&iommu_driver) + * Ensure memory stores happening during a driver + * init are observed before executing kvm iommu callbacks. + */ + ret =3D cmpxchg_release(&iommu_driver, NULL, kern_ops) ? 
		-EBUSY : 0;
+	if (ret)
+		return ret;
+
+	kvm_nvhe_sym(kvm_iommu_ops) = el2_ops;
+	return 0;
+}
+
+int kvm_iommu_init_driver(void)
+{
+	if (WARN_ON(!smp_load_acquire(&iommu_driver))) {
+		kvm_err("pKVM enabled without an IOMMU driver, do not run confidential workloads in virtual machines\n");
+		return -ENODEV;
+	}
+
+	return iommu_driver->init_driver();
+}
+
+void kvm_iommu_remove_driver(void)
+{
+	if (smp_load_acquire(&iommu_driver))
+		iommu_driver->remove_driver();
+}
--
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:03:38 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-15-smostafa@google.com>
Subject: [RFC PATCH v2 14/58] KVM: arm64: pkvm: Add IOMMU hypercalls
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh

The unprivileged host IOMMU driver forwards some of the IOMMU API calls
to the hypervisor, which installs and populates the page tables.

Note that this is not a stable ABI: these hypercalls change with the
kernel just like internal function calls.

One special aspect of some of the IOMMU hypercalls is that they use the
newly added hyp_reqs_smccc_encode() to encode memory requests in the HVC
return, leveraging the X1, X2 and X3 registers as allowed by SMCCC.
Signed-off-by: Mostafa Saleh
Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/include/asm/kvm_asm.h        |  7 ++
 arch/arm64/kvm/hyp/include/nvhe/iommu.h | 14 ++++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c      | 89 +++++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/iommu/iommu.c   | 40 +++++++++++
 4 files changed, 150 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index e4b391bdfdac..9ea155a04332 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -107,6 +107,13 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_hyp_alloc_mgt_refill,
 	__KVM_HOST_SMCCC_FUNC___pkvm_hyp_alloc_mgt_reclaimable,
 	__KVM_HOST_SMCCC_FUNC___pkvm_hyp_alloc_mgt_reclaim,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_iommu_alloc_domain,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_iommu_free_domain,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_iommu_attach_dev,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_iommu_detach_dev,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_iommu_map_pages,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_iommu_unmap_pages,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_iommu_iova_to_phys,

 	/*
	 * Start of the dynamically registered hypercalls.
	 * Start a bit
diff --git a/arch/arm64/kvm/hyp/include/nvhe/iommu.h b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
index 1ac70cc28a9e..908863f07b0b 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/iommu.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
@@ -4,6 +4,20 @@

 #include

+/* Hypercall handlers */
+int kvm_iommu_alloc_domain(pkvm_handle_t domain_id, int type);
+int kvm_iommu_free_domain(pkvm_handle_t domain_id);
+int kvm_iommu_attach_dev(pkvm_handle_t iommu_id, pkvm_handle_t domain_id,
+			 u32 endpoint_id, u32 pasid, u32 pasid_bits);
+int kvm_iommu_detach_dev(pkvm_handle_t iommu_id, pkvm_handle_t domain_id,
+			 u32 endpoint_id, u32 pasid);
+size_t kvm_iommu_map_pages(pkvm_handle_t domain_id,
+			   unsigned long iova, phys_addr_t paddr, size_t pgsize,
+			   size_t pgcount, int prot);
+size_t kvm_iommu_unmap_pages(pkvm_handle_t domain_id, unsigned long iova,
+			     size_t pgsize, size_t pgcount);
+phys_addr_t kvm_iommu_iova_to_phys(pkvm_handle_t domain_id, unsigned long iova);
+
 struct kvm_iommu_ops {
 	int (*init)(void);
 };
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 08c0ff823a55..9b224842c487 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1592,6 +1593,87 @@ static void handle___pkvm_hyp_alloc_mgt_reclaim(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 2) = mc.nr_pages;
 }

+static void handle___pkvm_host_iommu_alloc_domain(struct kvm_cpu_context *host_ctxt)
+{
+	int ret;
+	DECLARE_REG(pkvm_handle_t, domain, host_ctxt, 1);
+	DECLARE_REG(int, type, host_ctxt, 2);
+
+	ret = kvm_iommu_alloc_domain(domain, type);
+	hyp_reqs_smccc_encode(ret, host_ctxt, this_cpu_ptr(&host_hyp_reqs));
+}
+
+static void handle___pkvm_host_iommu_free_domain(struct kvm_cpu_context *host_ctxt)
+{
+	int ret;
+	DECLARE_REG(pkvm_handle_t, domain, host_ctxt, 1);
+
+	ret = kvm_iommu_free_domain(domain);
+	hyp_reqs_smccc_encode(ret, host_ctxt, this_cpu_ptr(&host_hyp_reqs));
+}
+
+static void handle___pkvm_host_iommu_attach_dev(struct kvm_cpu_context *host_ctxt)
+{
+	int ret;
+	DECLARE_REG(pkvm_handle_t, iommu, host_ctxt, 1);
+	DECLARE_REG(pkvm_handle_t, domain, host_ctxt, 2);
+	DECLARE_REG(unsigned int, endpoint, host_ctxt, 3);
+	DECLARE_REG(unsigned int, pasid, host_ctxt, 4);
+	DECLARE_REG(unsigned int, pasid_bits, host_ctxt, 5);
+
+	ret = kvm_iommu_attach_dev(iommu, domain, endpoint,
+				   pasid, pasid_bits);
+	hyp_reqs_smccc_encode(ret, host_ctxt, this_cpu_ptr(&host_hyp_reqs));
+}
+
+static void handle___pkvm_host_iommu_detach_dev(struct kvm_cpu_context *host_ctxt)
+{
+	int ret;
+	DECLARE_REG(pkvm_handle_t, iommu, host_ctxt, 1);
+	DECLARE_REG(pkvm_handle_t, domain, host_ctxt, 2);
+	DECLARE_REG(unsigned int, endpoint, host_ctxt, 3);
+	DECLARE_REG(unsigned int, pasid, host_ctxt, 4);
+
+	ret = kvm_iommu_detach_dev(iommu, domain, endpoint, pasid);
+	hyp_reqs_smccc_encode(ret, host_ctxt, this_cpu_ptr(&host_hyp_reqs));
+}
+
+static void handle___pkvm_host_iommu_map_pages(struct kvm_cpu_context *host_ctxt)
+{
+	unsigned long ret;
+	DECLARE_REG(pkvm_handle_t, domain, host_ctxt, 1);
+	DECLARE_REG(unsigned long, iova, host_ctxt, 2);
+	DECLARE_REG(phys_addr_t, paddr, host_ctxt, 3);
+	DECLARE_REG(size_t, pgsize, host_ctxt, 4);
+	DECLARE_REG(size_t, pgcount, host_ctxt, 5);
+	DECLARE_REG(unsigned int, prot, host_ctxt, 6);
+
+	ret = kvm_iommu_map_pages(domain, iova, paddr,
+				  pgsize, pgcount, prot);
+	hyp_reqs_smccc_encode(ret, host_ctxt, this_cpu_ptr(&host_hyp_reqs));
+}
+
+static void handle___pkvm_host_iommu_unmap_pages(struct kvm_cpu_context *host_ctxt)
+{
+	unsigned long ret;
+	DECLARE_REG(pkvm_handle_t, domain, host_ctxt, 1);
+	DECLARE_REG(unsigned long, iova, host_ctxt, 2);
+	DECLARE_REG(size_t, pgsize, host_ctxt, 3);
+	DECLARE_REG(size_t, pgcount, host_ctxt, 4);
+
+	ret = kvm_iommu_unmap_pages(domain, iova,
+				    pgsize, pgcount);
+	hyp_reqs_smccc_encode(ret,
			      host_ctxt, this_cpu_ptr(&host_hyp_reqs));
+}
+
+static void handle___pkvm_host_iommu_iova_to_phys(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, domain, host_ctxt, 1);
+	DECLARE_REG(unsigned long, iova, host_ctxt, 2);
+
+	cpu_reg(host_ctxt, 1) = kvm_iommu_iova_to_phys(domain, iova);
+}
+
 typedef void (*hcall_t)(struct kvm_cpu_context *);

 #define HANDLE_FUNC(x)	[__KVM_HOST_SMCCC_FUNC_##x] = (hcall_t)handle_##x
@@ -1649,6 +1731,13 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_hyp_alloc_mgt_refill),
 	HANDLE_FUNC(__pkvm_hyp_alloc_mgt_reclaimable),
 	HANDLE_FUNC(__pkvm_hyp_alloc_mgt_reclaim),
+	HANDLE_FUNC(__pkvm_host_iommu_alloc_domain),
+	HANDLE_FUNC(__pkvm_host_iommu_free_domain),
+	HANDLE_FUNC(__pkvm_host_iommu_attach_dev),
+	HANDLE_FUNC(__pkvm_host_iommu_detach_dev),
+	HANDLE_FUNC(__pkvm_host_iommu_map_pages),
+	HANDLE_FUNC(__pkvm_host_iommu_unmap_pages),
+	HANDLE_FUNC(__pkvm_host_iommu_iova_to_phys),
 };

 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
index 3bd87d2084e9..9022fd612a49 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
@@ -16,3 +16,43 @@ int kvm_iommu_init(void)

 	return kvm_iommu_ops->init();
 }
+
+int kvm_iommu_alloc_domain(pkvm_handle_t domain_id, int type)
+{
+	return -ENODEV;
+}
+
+int kvm_iommu_free_domain(pkvm_handle_t domain_id)
+{
+	return -ENODEV;
+}
+
+int kvm_iommu_attach_dev(pkvm_handle_t iommu_id, pkvm_handle_t domain_id,
+			 u32 endpoint_id, u32 pasid, u32 pasid_bits)
+{
+	return -ENODEV;
+}
+
+int kvm_iommu_detach_dev(pkvm_handle_t iommu_id, pkvm_handle_t domain_id,
+			 u32 endpoint_id, u32 pasid)
+{
+	return -ENODEV;
+}
+
+size_t kvm_iommu_map_pages(pkvm_handle_t domain_id,
+			   unsigned long iova, phys_addr_t paddr, size_t pgsize,
+			   size_t pgcount, int prot)
+{
+	return 0;
+}
+
+size_t kvm_iommu_unmap_pages(pkvm_handle_t domain_id, unsigned
long iova,
+			     size_t pgsize, size_t pgcount)
+{
+	return 0;
+}
+
+phys_addr_t kvm_iommu_iova_to_phys(pkvm_handle_t domain_id, unsigned long iova)
+{
+	return 0;
+}
--
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:03:39 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-16-smostafa@google.com>
Subject: [RFC PATCH v2 15/58] KVM: arm64: iommu: Add a memory pool for the IOMMU
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh

This patch defines a new hypervisor allocator, an instance of the hyp
buddy allocator, to be used by the IOMMU drivers for their page tables.
These pages generally have two properties:
- They can be multi-order
- They can be non-coherent

The interface provides functions and wrappers for those types of
allocations.

The IOMMU hypervisor code leverages the allocator interface, which
provides a standardized interface that the kernel part of the IOMMU
driver can call to top up the allocator, and that the pKVM shrinker
can reclaim from.
Also, the allocation function automatically creates a request when it
fails to allocate memory from the pool, so it is sufficient for the
driver to return an error code; the kernel part of the driver should
then check the requests in the return and refill the hypervisor
allocator.

Signed-off-by: Mostafa Saleh
---
 arch/arm64/include/asm/kvm_host.h       |  1 +
 arch/arm64/kvm/hyp/include/nvhe/iommu.h | 13 ++++
 arch/arm64/kvm/hyp/include/nvhe/mm.h    |  1 +
 arch/arm64/kvm/hyp/nvhe/alloc_mgt.c     |  2 +
 arch/arm64/kvm/hyp/nvhe/iommu/iommu.c   | 86 +++++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/mm.c            | 17 +++++
 6 files changed, 120 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 54416cfea573..a3b5d8dd8995 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1625,6 +1625,7 @@ void kvm_set_vm_id_reg(struct kvm *kvm, u32 reg, u64 val);

 /* Allocator interface IDs. */
 #define HYP_ALLOC_MGT_HEAP_ID		0
+#define HYP_ALLOC_MGT_IOMMU_ID		1

 unsigned long __pkvm_reclaim_hyp_alloc_mgt(unsigned long nr_pages);

diff --git a/arch/arm64/kvm/hyp/include/nvhe/iommu.h b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
index 908863f07b0b..5f91605cd48a 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/iommu.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
@@ -4,6 +4,8 @@

 #include

+#include
+
 /* Hypercall handlers */
 int kvm_iommu_alloc_domain(pkvm_handle_t domain_id, int type);
 int kvm_iommu_free_domain(pkvm_handle_t domain_id);
@@ -18,10 +20,21 @@ size_t kvm_iommu_unmap_pages(pkvm_handle_t domain_id, unsigned long iova,
 			     size_t pgsize, size_t pgcount);
 phys_addr_t kvm_iommu_iova_to_phys(pkvm_handle_t domain_id, unsigned long iova);

+/* Flags for memory allocation for IOMMU drivers */
+#define IOMMU_PAGE_NOCACHE	BIT(0)
+void *kvm_iommu_donate_pages(u8 order, int flags);
+void kvm_iommu_reclaim_pages(void *p, u8 order);
+
+#define kvm_iommu_donate_page()		kvm_iommu_donate_pages(0, 0)
+#define
 kvm_iommu_donate_page_nc()	kvm_iommu_donate_pages(0, IOMMU_PAGE_NOCACHE)
+#define kvm_iommu_reclaim_page(p)	kvm_iommu_reclaim_pages(p, 0)
+
 struct kvm_iommu_ops {
 	int (*init)(void);
 };

 int kvm_iommu_init(void);

+extern struct hyp_mgt_allocator_ops kvm_iommu_allocator_ops;
+
 #endif /* __ARM64_KVM_NVHE_IOMMU_H__ */
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index 5d33aca7d686..7b425f811efb 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -37,6 +37,7 @@ int __hyp_allocator_map(unsigned long start, phys_addr_t phys);
 int __pkvm_map_module_page(u64 pfn, void *va, enum kvm_pgtable_prot prot, bool is_protected);
 void __pkvm_unmap_module_page(u64 pfn, void *va);
 void *__pkvm_alloc_module_va(u64 nr_pages);
+int pkvm_remap_range(void *va, int nr_pages, bool nc);
 #ifdef CONFIG_NVHE_EL2_DEBUG
 void assert_in_mod_range(unsigned long addr);
 #else
diff --git a/arch/arm64/kvm/hyp/nvhe/alloc_mgt.c b/arch/arm64/kvm/hyp/nvhe/alloc_mgt.c
index 4a0f33b9820a..cfd903b30427 100644
--- a/arch/arm64/kvm/hyp/nvhe/alloc_mgt.c
+++ b/arch/arm64/kvm/hyp/nvhe/alloc_mgt.c
@@ -7,9 +7,11 @@

 #include
 #include
+#include

 static struct hyp_mgt_allocator_ops *registered_allocators[] = {
 	[HYP_ALLOC_MGT_HEAP_ID] = &hyp_alloc_ops,
+	[HYP_ALLOC_MGT_IOMMU_ID] = &kvm_iommu_allocator_ops,
 };

 #define MAX_ALLOC_ID (ARRAY_SIZE(registered_allocators))
diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
index 9022fd612a49..af6ae9b4dc51 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
@@ -5,15 +5,101 @@
  * Copyright (C) 2022 Linaro Ltd.
  */
 #include
+#include
+#include

 /* Only one set of ops supported, similarly to the kernel */
 struct kvm_iommu_ops *kvm_iommu_ops;

+/*
+ * Common pool that can be used by IOMMU driver to allocate pages.
+ */
+static struct hyp_pool iommu_host_pool;
+
+DECLARE_PER_CPU(struct kvm_hyp_req, host_hyp_reqs);
+
+static int kvm_iommu_refill(struct kvm_hyp_memcache *host_mc)
+{
+	if (!kvm_iommu_ops)
+		return -EINVAL;
+
+	return refill_hyp_pool(&iommu_host_pool, host_mc);
+}
+
+static void kvm_iommu_reclaim(struct kvm_hyp_memcache *host_mc, int target)
+{
+	if (!kvm_iommu_ops)
+		return;
+
+	reclaim_hyp_pool(&iommu_host_pool, host_mc, target);
+}
+
+static int kvm_iommu_reclaimable(void)
+{
+	if (!kvm_iommu_ops)
+		return 0;
+
+	return hyp_pool_free_pages(&iommu_host_pool);
+}
+
+struct hyp_mgt_allocator_ops kvm_iommu_allocator_ops = {
+	.refill		= kvm_iommu_refill,
+	.reclaim	= kvm_iommu_reclaim,
+	.reclaimable	= kvm_iommu_reclaimable,
+};
+
+void *kvm_iommu_donate_pages(u8 order, int flags)
+{
+	void *p;
+	struct kvm_hyp_req *req = this_cpu_ptr(&host_hyp_reqs);
+	int ret;
+
+	p = hyp_alloc_pages(&iommu_host_pool, order);
+	if (p) {
+		/*
+		 * If the page request is non-cacheable, remap it as such,
+		 * as all pages in the pool are mapped beforehand and
+		 * assumed to be cacheable.
+		 */
+		if (flags & IOMMU_PAGE_NOCACHE) {
+			ret = pkvm_remap_range(p, 1 << order, true);
+			if (ret) {
+				hyp_put_page(&iommu_host_pool, p);
+				return NULL;
+			}
+		}
+		return p;
+	}
+
+	req->type = KVM_HYP_REQ_TYPE_MEM;
+	req->mem.dest = REQ_MEM_DEST_HYP_IOMMU;
+	req->mem.sz_alloc = (1 << order) * PAGE_SIZE;
+	req->mem.nr_pages = 1;
+	return NULL;
+}
+
+void kvm_iommu_reclaim_pages(void *p, u8 order)
+{
+	/*
+	 * Remap all pages to cacheable, as we don't know; maybe use a flag
+	 * in the vmemmap, or trust the driver to pass the same
+	 * cacheability on free as at allocation time?
+	 */
+	pkvm_remap_range(p, 1 << order, false);
+	hyp_put_page(&iommu_host_pool, p);
+}
+
 int kvm_iommu_init(void)
 {
+	int ret;
+
 	if (!kvm_iommu_ops || !kvm_iommu_ops->init)
 		return -ENODEV;

+	ret = hyp_pool_init_empty(&iommu_host_pool, 64);
+	if (ret)
+		return ret;
+
 	return kvm_iommu_ops->init();
 }

diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index 76bbb4c9012e..7a18b31538ae 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -564,3 +564,20 @@ int reclaim_hyp_pool(struct hyp_pool *pool, struct kvm_hyp_memcache *host_mc,

 	return 0;
 }
+
+/* Remap hyp memory with different cacheability */
+int pkvm_remap_range(void *va, int nr_pages, bool nc)
+{
+	size_t size = nr_pages << PAGE_SHIFT;
+	phys_addr_t phys = hyp_virt_to_phys(va);
+	enum kvm_pgtable_prot prot = PKVM_HOST_MEM_PROT;
+	int ret;
+
+	if (nc)
+		prot |= KVM_PGTABLE_PROT_NORMAL_NC;
+	hyp_spin_lock(&pkvm_pgd_lock);
+	WARN_ON(kvm_pgtable_hyp_unmap(&pkvm_pgtable, (u64)va, size) != size);
+	ret = kvm_pgtable_hyp_map(&pkvm_pgtable, (u64)va, size, phys, prot);
+	hyp_spin_unlock(&pkvm_pgd_lock);
+	return ret;
+}
--
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:03:40 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-17-smostafa@google.com>
Subject: [RFC PATCH v2 16/58] KVM: arm64: iommu: Add domains
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com,
 qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh

The IOMMU domain abstraction allows sharing the same page tables
between multiple devices. That may be necessary due to hardware
constraints, if multiple devices cannot be isolated by the IOMMU
(a conventional PCI bus, for example). It may also help with optimizing
resource or TLB use. For pKVM in particular, it may be useful to reduce
the amount of memory required for page tables. All devices owned by the
host kernel could be attached to the same domain (though that requires
host changes).

There is one shared domain space across all IOMMUs, holding up to 2^16
domains.

Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Mostafa Saleh
---
 arch/arm64/kvm/hyp/hyp-constants.c      |   1 +
 arch/arm64/kvm/hyp/include/nvhe/iommu.h |   4 +
 arch/arm64/kvm/hyp/nvhe/iommu/iommu.c   | 102 +++++++++++++++++++++++-
 arch/arm64/kvm/iommu.c                  |  10 +++
 include/kvm/iommu.h                     |  48 +++++++++++
 5 files changed, 161 insertions(+), 4 deletions(-)
 create mode 100644 include/kvm/iommu.h

diff --git a/arch/arm64/kvm/hyp/hyp-constants.c b/arch/arm64/kvm/hyp/hyp-constants.c
index 5fb26cabd606..96a6b45b424a 100644
--- a/arch/arm64/kvm/hyp/hyp-constants.c
+++ b/arch/arm64/kvm/hyp/hyp-constants.c
@@ -8,5 +8,6 @@
 int main(void)
 {
 	DEFINE(STRUCT_HYP_PAGE_SIZE, sizeof(struct hyp_page));
+	DEFINE(HYP_SPINLOCK_SIZE, sizeof(hyp_spinlock_t));
 	return 0;
 }
diff --git a/arch/arm64/kvm/hyp/include/nvhe/iommu.h b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
index 5f91605cd48a..8f619f415d1f 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/iommu.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
@@ -4,6 +4,8 @@

 #include

+#include
+
 #include

 /* Hypercall handlers */
@@ -31,6 +33,8 @@ void kvm_iommu_reclaim_pages(void *p, u8 order);

 struct kvm_iommu_ops {
 	int (*init)(void);
+	int (*alloc_domain)(struct kvm_hyp_iommu_domain *domain, int type);
+
+	void (*free_domain)(struct kvm_hyp_iommu_domain *domain);
 };
 
 int kvm_iommu_init(void);
diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
index af6ae9b4dc51..ba2aed52a74f 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
@@ -4,12 +4,15 @@
  *
  * Copyright (C) 2022 Linaro Ltd.
  */
+#include
+
 #include
 #include
 #include
 
 /* Only one set of ops supported, similarly to the kernel */
 struct kvm_iommu_ops *kvm_iommu_ops;
+void **kvm_hyp_iommu_domains;
 
 /*
  * Common pool that can be used by IOMMU driver to allocate pages.
@@ -18,6 +21,9 @@ static struct hyp_pool iommu_host_pool;
 
 DECLARE_PER_CPU(struct kvm_hyp_req, host_hyp_reqs);
 
+/* Protects domains in kvm_hyp_iommu_domains */
+static DEFINE_HYP_SPINLOCK(kvm_iommu_domain_lock);
+
 static int kvm_iommu_refill(struct kvm_hyp_memcache *host_mc)
 {
 	if (!kvm_iommu_ops)
@@ -89,28 +95,116 @@ void kvm_iommu_reclaim_pages(void *p, u8 order)
 	hyp_put_page(&iommu_host_pool, p);
 }
 
+static struct kvm_hyp_iommu_domain *
+handle_to_domain(pkvm_handle_t domain_id)
+{
+	int idx;
+	struct kvm_hyp_iommu_domain *domains;
+
+	if (domain_id >= KVM_IOMMU_MAX_DOMAINS)
+		return NULL;
+	domain_id = array_index_nospec(domain_id, KVM_IOMMU_MAX_DOMAINS);
+
+	idx = domain_id / KVM_IOMMU_DOMAINS_PER_PAGE;
+	domains = (struct kvm_hyp_iommu_domain *)READ_ONCE(kvm_hyp_iommu_domains[idx]);
+	if (!domains) {
+		domains = kvm_iommu_donate_page();
+		if (!domains)
+			return NULL;
+		/*
+		 * handle_to_domain() does not have to be called under a lock,
+		 * but even though we allocate a leaf in all cases, it's only
+		 * really a valid thing to do under alloc_domain(), which uses a
+		 * lock. Races are therefore a host bug and we don't need to be
+		 * delicate about it.
+		 */
+		if (WARN_ON(cmpxchg64_relaxed(&kvm_hyp_iommu_domains[idx], 0,
+					      (void *)domains) != 0)) {
+			kvm_iommu_reclaim_page(domains);
+			return NULL;
+		}
+	}
+	return &domains[domain_id % KVM_IOMMU_DOMAINS_PER_PAGE];
+}
+
 int kvm_iommu_init(void)
 {
 	int ret;
+	u64 domain_root_pfn = __hyp_pa(kvm_hyp_iommu_domains) >> PAGE_SHIFT;
 
-	if (!kvm_iommu_ops || !kvm_iommu_ops->init)
+	if (!kvm_iommu_ops ||
+	    !kvm_iommu_ops->init ||
+	    !kvm_iommu_ops->alloc_domain ||
+	    !kvm_iommu_ops->free_domain)
 		return -ENODEV;
 
 	ret = hyp_pool_init_empty(&iommu_host_pool, 64);
 	if (ret)
 		return ret;
 
-	return kvm_iommu_ops->init();
+	ret = __pkvm_host_donate_hyp(domain_root_pfn,
+				     KVM_IOMMU_DOMAINS_ROOT_ORDER_NR);
+	if (ret)
+		return ret;
+
+	ret = kvm_iommu_ops->init();
+	if (ret)
+		goto out_reclaim_domain;
+
+	return ret;
+
+out_reclaim_domain:
+	__pkvm_hyp_donate_host(domain_root_pfn, KVM_IOMMU_DOMAINS_ROOT_ORDER_NR);
+	return ret;
 }
 
 int kvm_iommu_alloc_domain(pkvm_handle_t domain_id, int type)
 {
-	return -ENODEV;
+	int ret = -EINVAL;
+	struct kvm_hyp_iommu_domain *domain;
+
+	domain = handle_to_domain(domain_id);
+	if (!domain)
+		return -ENOMEM;
+
+	hyp_spin_lock(&kvm_iommu_domain_lock);
+	if (atomic_read(&domain->refs))
+		goto out_unlock;
+
+	domain->domain_id = domain_id;
+	ret = kvm_iommu_ops->alloc_domain(domain, type);
+	if (ret)
+		goto out_unlock;
+
+	atomic_set_release(&domain->refs, 1);
+out_unlock:
+	hyp_spin_unlock(&kvm_iommu_domain_lock);
+	return ret;
 }
 
 int kvm_iommu_free_domain(pkvm_handle_t domain_id)
 {
-	return -ENODEV;
+	int ret = 0;
+	struct kvm_hyp_iommu_domain *domain;
+
+	domain = handle_to_domain(domain_id);
+	if (!domain)
+		return -EINVAL;
+
+	hyp_spin_lock(&kvm_iommu_domain_lock);
+	if (WARN_ON(atomic_cmpxchg_acquire(&domain->refs, 1, 0) != 1)) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
+
+	kvm_iommu_ops->free_domain(domain);
+
+	memset(domain, 0, sizeof(*domain));
+
+out_unlock:
+	hyp_spin_unlock(&kvm_iommu_domain_lock);
+
+	return ret;
 }
 
 int kvm_iommu_attach_dev(pkvm_handle_t iommu_id, pkvm_handle_t domain_id,
diff --git a/arch/arm64/kvm/iommu.c b/arch/arm64/kvm/iommu.c
index ed77ea0d12bb..af3417e6259d 100644
--- a/arch/arm64/kvm/iommu.c
+++ b/arch/arm64/kvm/iommu.c
@@ -5,6 +5,9 @@
  */
 
 #include
+
+#include
+
 #include
 
 struct kvm_iommu_driver *iommu_driver;
@@ -37,6 +40,13 @@ int kvm_iommu_init_driver(void)
 		return -ENODEV;
 	}
 
+	kvm_hyp_iommu_domains = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+					get_order(KVM_IOMMU_DOMAINS_ROOT_SIZE));
+	if (!kvm_hyp_iommu_domains)
+		return -ENOMEM;
+
+	kvm_hyp_iommu_domains = kern_hyp_va(kvm_hyp_iommu_domains);
+
 	return iommu_driver->init_driver();
 }
 
diff --git a/include/kvm/iommu.h b/include/kvm/iommu.h
new file mode 100644
index 000000000000..10ecaae0f6a3
--- /dev/null
+++ b/include/kvm/iommu.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __KVM_IOMMU_H
+#define __KVM_IOMMU_H
+
+#include
+#include
+#ifdef __KVM_NVHE_HYPERVISOR__
+#include
+#else
+#include "hyp_constants.h"
+#endif
+
+struct kvm_hyp_iommu_domain {
+	atomic_t refs;
+	pkvm_handle_t domain_id;
+	void *priv;
+};
+
+extern void **kvm_nvhe_sym(kvm_hyp_iommu_domains);
+#define kvm_hyp_iommu_domains kvm_nvhe_sym(kvm_hyp_iommu_domains)
+
+/*
+ * At the moment the number of domains is limited to 2^16.
+ * In practice we're rarely going to need a lot of domains. To avoid allocating
+ * a large domain table, we use a two-level table, indexed by domain ID. With
+ * 4kB pages and 16-byte domains, the leaf table contains 256 domains, and the
+ * root table 256 pointers. With 64kB pages, the leaf table contains 4096
+ * domains and the root table 16 pointers. In this case, or when using 8-bit
+ * VMIDs, it may be more advantageous to use a single level. But using two
+ * levels allows to easily extend the domain size.
+ */
+#define KVM_IOMMU_MAX_DOMAINS	(1 << 16)
+
+/* Number of entries in the level-2 domain table */
+#define KVM_IOMMU_DOMAINS_PER_PAGE \
+	(PAGE_SIZE / sizeof(struct kvm_hyp_iommu_domain))
+
+/* Number of entries in the root domain table */
+#define KVM_IOMMU_DOMAINS_ROOT_ENTRIES \
+	(KVM_IOMMU_MAX_DOMAINS / KVM_IOMMU_DOMAINS_PER_PAGE)
+
+#define KVM_IOMMU_DOMAINS_ROOT_SIZE \
+	(KVM_IOMMU_DOMAINS_ROOT_ENTRIES * sizeof(void *))
+
+#define KVM_IOMMU_DOMAINS_ROOT_ORDER_NR \
+	(1 << get_order(KVM_IOMMU_DOMAINS_ROOT_SIZE))
+
+#endif /* __KVM_IOMMU_H */
-- 
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:03:41 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-18-smostafa@google.com>
Subject: [RFC PATCH v2 17/58] KVM: arm64: iommu: Add {attach, detach}_dev
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh
Content-Type: text/plain; charset="utf-8"

Add attach/detach dev operations, which are forwarded to the driver.

To avoid races between alloc/free domain and attach/detach dev, the
domain refcount is used. Attach/detach are per-IOMMU operations and
would require some form of locking, but nothing in the IOMMU core code
needs that lock, so locking is delegated to the driver, and the
hypervisor core only guarantees that there are no races with domain
alloc/free.
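The refcount protocol this relies on can be sketched in isolation. The following is a hypothetical, user-space model (illustrative names; the real hypervisor code BUG()s on protocol violations instead of failing gracefully): refs == 0 means the domain is free, refs >= 1 means it is allocated, and each attached device holds one extra reference, so free only succeeds once every device has detached.

```c
#include <assert.h>
#include <stdatomic.h>

/* Toy model of the domain refcount protocol (not the hyp code itself). */
struct domain { atomic_int refs; };

/* Attach path: take a reference, but only on an allocated domain. */
static int domain_get(struct domain *d)
{
	int old = atomic_load(&d->refs);

	while (old > 0)
		if (atomic_compare_exchange_weak(&d->refs, &old, old + 1))
			return 0;
	return -1; /* refs == 0: domain was never allocated, or was freed */
}

/* Detach path: drop the reference taken by domain_get(). */
static void domain_put(struct domain *d)
{
	atomic_fetch_sub(&d->refs, 1);
}

/* Free path: only the 1 -> 0 transition is allowed, i.e. no attached devices. */
static int domain_free(struct domain *d)
{
	int expected = 1;

	return atomic_compare_exchange_strong(&d->refs, &expected, 0) ? 0 : -1;
}
```

Because every transition is a single atomic operation on refs, attach/detach never needs the alloc/free lock, which is the property the commit message describes.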
Also, add a new function kvm_iommu_init_device() to initialise the
common fields of the IOMMU struct; at the moment that is only the lock.
The IOMMU core code will next need the lock for power management.

Signed-off-by: Mostafa Saleh
Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/kvm/hyp/include/nvhe/iommu.h | 29 +++++++++++++
 arch/arm64/kvm/hyp/nvhe/iommu/iommu.c   | 56 ++++++++++++++++++++++++-
 include/kvm/iommu.h                     |  8 ++++
 3 files changed, 91 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/iommu.h b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
index 8f619f415d1f..d6d7447fbac8 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/iommu.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
@@ -35,10 +35,39 @@ struct kvm_iommu_ops {
 	int (*init)(void);
 	int (*alloc_domain)(struct kvm_hyp_iommu_domain *domain, int type);
 	void (*free_domain)(struct kvm_hyp_iommu_domain *domain);
+	struct kvm_hyp_iommu *(*get_iommu_by_id)(pkvm_handle_t iommu_id);
+	int (*attach_dev)(struct kvm_hyp_iommu *iommu, struct kvm_hyp_iommu_domain *domain,
+			  u32 endpoint_id, u32 pasid, u32 pasid_bits);
+	int (*detach_dev)(struct kvm_hyp_iommu *iommu, struct kvm_hyp_iommu_domain *domain,
+			  u32 endpoint_id, u32 pasid);
 };
 
 int kvm_iommu_init(void);
 
+int kvm_iommu_init_device(struct kvm_hyp_iommu *iommu);
+
+static inline hyp_spinlock_t *kvm_iommu_get_lock(struct kvm_hyp_iommu *iommu)
+{
+	/* See struct kvm_hyp_iommu */
+	BUILD_BUG_ON(sizeof(iommu->lock) != sizeof(hyp_spinlock_t));
+	return (hyp_spinlock_t *)(&iommu->lock);
+}
+
+static inline void kvm_iommu_lock_init(struct kvm_hyp_iommu *iommu)
+{
+	hyp_spin_lock_init(kvm_iommu_get_lock(iommu));
+}
+
+static inline void kvm_iommu_lock(struct kvm_hyp_iommu *iommu)
+{
+	hyp_spin_lock(kvm_iommu_get_lock(iommu));
+}
+
+static inline void kvm_iommu_unlock(struct kvm_hyp_iommu *iommu)
+{
+	hyp_spin_unlock(kvm_iommu_get_lock(iommu));
+}
+
 extern struct hyp_mgt_allocator_ops kvm_iommu_allocator_ops;
 
 #endif /* __ARM64_KVM_NVHE_IOMMU_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
index ba2aed52a74f..df2dbe4c0121 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
@@ -127,6 +127,19 @@ handle_to_domain(pkvm_handle_t domain_id)
 	return &domains[domain_id % KVM_IOMMU_DOMAINS_PER_PAGE];
 }
 
+static int domain_get(struct kvm_hyp_iommu_domain *domain)
+{
+	int old = atomic_fetch_inc_acquire(&domain->refs);
+
+	BUG_ON(!old || (old + 1 < 0));
+	return 0;
+}
+
+static void domain_put(struct kvm_hyp_iommu_domain *domain)
+{
+	BUG_ON(!atomic_dec_return_release(&domain->refs));
+}
+
 int kvm_iommu_init(void)
 {
 	int ret;
@@ -210,13 +223,44 @@ int kvm_iommu_free_domain(pkvm_handle_t domain_id)
 int kvm_iommu_attach_dev(pkvm_handle_t iommu_id, pkvm_handle_t domain_id,
			 u32 endpoint_id, u32 pasid, u32 pasid_bits)
 {
-	return -ENODEV;
+	int ret;
+	struct kvm_hyp_iommu *iommu;
+	struct kvm_hyp_iommu_domain *domain;
+
+	iommu = kvm_iommu_ops->get_iommu_by_id(iommu_id);
+	if (!iommu)
+		return -EINVAL;
+
+	domain = handle_to_domain(domain_id);
+	if (!domain || domain_get(domain))
+		return -EINVAL;
+
+	ret = kvm_iommu_ops->attach_dev(iommu, domain, endpoint_id, pasid, pasid_bits);
+	if (ret)
+		domain_put(domain);
+	return ret;
 }
 
 int kvm_iommu_detach_dev(pkvm_handle_t iommu_id, pkvm_handle_t domain_id,
			 u32 endpoint_id, u32 pasid)
 {
-	return -ENODEV;
+	int ret;
+	struct kvm_hyp_iommu *iommu;
+	struct kvm_hyp_iommu_domain *domain;
+
+	iommu = kvm_iommu_ops->get_iommu_by_id(iommu_id);
+	if (!iommu)
+		return -EINVAL;
+
+	domain = handle_to_domain(domain_id);
+	if (!domain || atomic_read(&domain->refs) <= 1)
+		return -EINVAL;
+
+	ret = kvm_iommu_ops->detach_dev(iommu, domain, endpoint_id, pasid);
+	if (ret)
+		return ret;
+	domain_put(domain);
+	return ret;
 }
 
 size_t kvm_iommu_map_pages(pkvm_handle_t domain_id,
@@ -236,3 +280,11 @@ phys_addr_t kvm_iommu_iova_to_phys(pkvm_handle_t domain_id, unsigned long iova)
 {
 	return 0;
 }
+
+/* Must be called from the IOMMU driver per IOMMU */
+int kvm_iommu_init_device(struct kvm_hyp_iommu *iommu)
+{
+	kvm_iommu_lock_init(iommu);
+
+	return 0;
+}
diff --git a/include/kvm/iommu.h b/include/kvm/iommu.h
index 10ecaae0f6a3..6ff78d766466 100644
--- a/include/kvm/iommu.h
+++ b/include/kvm/iommu.h
@@ -45,4 +45,12 @@ extern void **kvm_nvhe_sym(kvm_hyp_iommu_domains);
 #define KVM_IOMMU_DOMAINS_ROOT_ORDER_NR \
	(1 << get_order(KVM_IOMMU_DOMAINS_ROOT_SIZE))
 
+struct kvm_hyp_iommu {
+#ifdef __KVM_NVHE_HYPERVISOR__
+	hyp_spinlock_t lock;
+#else
+	u32 unused;
+#endif
+};
+
 #endif /* __KVM_IOMMU_H */
-- 
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:03:42 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-19-smostafa@google.com>
Subject: [RFC PATCH v2 18/58] KVM: arm64: iommu: Add map/unmap() operations
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh
Content-Type: text/plain; charset="utf-8"

Handle map(), unmap() and iova_to_phys() hypercalls.

In addition to map/unmap, the hypervisor has to ensure that all mapped
pages are tracked, so before each map() __pkvm_host_use_dma() is called
to ensure that.
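The accounting rule in the map path (pin every page up front, then release whatever the driver failed to map) can be sketched as a stand-alone model. Everything below is hypothetical scaffolding: a plain per-page reference array stands in for the hyp page-table accounting that the real __pkvm_host_use_dma()/__pkvm_host_unuse_dma() perform.

```c
#include <assert.h>

#define NPAGES 8
static int dma_refs[NPAGES];	/* per-page "in use for DMA" refcounts */

static void use_dma(unsigned start, unsigned count)
{
	for (unsigned i = start; i < start + count; i++)
		dma_refs[i]++;
}

static void unuse_dma(unsigned start, unsigned count)
{
	for (unsigned i = start; i < start + count; i++)
		dma_refs[i]--;
}

/* Stand-in for the driver's map_pages() op: maps only 'mapped' pages. */
static unsigned driver_map(unsigned start, unsigned count, unsigned mapped)
{
	(void)start; (void)count;
	return mapped;
}

/*
 * Mirrors the shape of kvm_iommu_map_pages(): pin everything first, map,
 * then release the tail that the driver did not manage to map. The host
 * calls back to continue mapping, or to unmap what has been done so far.
 */
static unsigned map_with_accounting(unsigned start, unsigned count,
				    unsigned driver_maps)
{
	unsigned mapped;

	use_dma(start, count);
	mapped = driver_map(start, count, driver_maps);
	if (mapped < count)
		unuse_dma(start + mapped, count - mapped);
	return mapped;
}
```

Pinning before mapping means a page can never be visible to a device without being marked as in use; releasing the unmapped tail keeps the counts exact on partial failure.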
Similarly, on unmap() we need to decrement the refcount using
__pkvm_host_unuse_dma(). However, doing this in the standard way, as
mentioned in the comments, is challenging, so we leave that to the
driver.

Also, the hypervisor only guarantees that there are no races between
alloc/free domain operations, using the domain refcount to avoid extra
locks.

Signed-off-by: Mostafa Saleh
Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/kvm/hyp/include/nvhe/iommu.h |  7 +++
 arch/arm64/kvm/hyp/nvhe/iommu/iommu.c   | 80 ++++++++++++++++++++++++-
 2 files changed, 84 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/iommu.h b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
index d6d7447fbac8..17f24a8eb1b9 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/iommu.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
@@ -40,6 +40,13 @@ struct kvm_iommu_ops {
			  u32 endpoint_id, u32 pasid, u32 pasid_bits);
 	int (*detach_dev)(struct kvm_hyp_iommu *iommu, struct kvm_hyp_iommu_domain *domain,
			  u32 endpoint_id, u32 pasid);
+	int (*map_pages)(struct kvm_hyp_iommu_domain *domain, unsigned long iova,
+			 phys_addr_t paddr, size_t pgsize,
+			 size_t pgcount, int prot, size_t *total_mapped);
+	size_t (*unmap_pages)(struct kvm_hyp_iommu_domain *domain, unsigned long iova,
+			      size_t pgsize, size_t pgcount);
+	phys_addr_t (*iova_to_phys)(struct kvm_hyp_iommu_domain *domain, unsigned long iova);
+
 };
 
 int kvm_iommu_init(void);
diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
index df2dbe4c0121..83321cc5f466 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
@@ -263,22 +263,96 @@ int kvm_iommu_detach_dev(pkvm_handle_t iommu_id, pkvm_handle_t domain_id,
 	return ret;
 }
 
+#define IOMMU_PROT_MASK (IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE |\
+			 IOMMU_NOEXEC | IOMMU_MMIO | IOMMU_PRIV)
+
 size_t kvm_iommu_map_pages(pkvm_handle_t domain_id,
			   unsigned long iova, phys_addr_t paddr, size_t pgsize,
			   size_t pgcount, int prot)
 {
-	return 0;
+	size_t size;
+	int ret;
+	size_t total_mapped = 0;
+	struct kvm_hyp_iommu_domain *domain;
+
+	if (prot & ~IOMMU_PROT_MASK)
+		return 0;
+
+	if (__builtin_mul_overflow(pgsize, pgcount, &size) ||
+	    iova + size < iova || paddr + size < paddr)
+		return 0;
+
+	domain = handle_to_domain(domain_id);
+	if (!domain || domain_get(domain))
+		return 0;
+
+	ret = __pkvm_host_use_dma(paddr, size);
+	if (ret)
+		return 0;
+
+	kvm_iommu_ops->map_pages(domain, iova, paddr, pgsize, pgcount, prot, &total_mapped);
+
+	pgcount -= total_mapped / pgsize;
+	/*
+	 * unuse the bits that haven't been mapped yet. The host calls back
+	 * either to continue mapping, or to unmap and unuse what's been done
+	 * so far.
+	 */
+	if (pgcount)
+		__pkvm_host_unuse_dma(paddr + total_mapped, pgcount * pgsize);
+
+	domain_put(domain);
+	return total_mapped;
 }
 
 size_t kvm_iommu_unmap_pages(pkvm_handle_t domain_id, unsigned long iova,
			     size_t pgsize, size_t pgcount)
 {
-	return 0;
+	size_t size;
+	size_t unmapped;
+	struct kvm_hyp_iommu_domain *domain;
+
+	if (!pgsize || !pgcount)
+		return 0;
+
+	if (__builtin_mul_overflow(pgsize, pgcount, &size) ||
+	    iova + size < iova)
+		return 0;
+
+	domain = handle_to_domain(domain_id);
+	if (!domain || domain_get(domain))
+		return 0;
+
+	/*
+	 * Unlike map, the common code doesn't call the __pkvm_host_unuse_dma,
+	 * because this means that we need either walk the table using iova_to_phys
+	 * similar to VFIO then unmap and call this function, or unmap leaf (page or
+	 * block) at a time, where both might be suboptimal.
+	 * For some IOMMU, we can do 2 walks where one only invalidate the pages
+	 * and the other decrement the refcount.
+	 * As, semantics for this might differ between IOMMUs and it's hard to
+	 * standardized, we leave that to the driver.
+	 */
+	unmapped = kvm_iommu_ops->unmap_pages(domain, iova, pgsize,
+					      pgcount);
+
+	domain_put(domain);
+	return unmapped;
 }
 
 phys_addr_t kvm_iommu_iova_to_phys(pkvm_handle_t domain_id, unsigned long iova)
 {
-	return 0;
+	phys_addr_t phys = 0;
+	struct kvm_hyp_iommu_domain *domain;
+
+	domain = handle_to_domain(domain_id);
+
+	if (!domain || domain_get(domain))
+		return 0;
+
+	phys = kvm_iommu_ops->iova_to_phys(domain, iova);
+	domain_put(domain);
+	return phys;
 }
 
 /* Must be called from the IOMMU driver per IOMMU */
-- 
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:03:43 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-20-smostafa@google.com>
Subject: [RFC PATCH v2 19/58] KVM: arm64: iommu: support iommu_iotlb_gather
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh
Content-Type: text/plain; charset="utf-8"

To improve unmap performance, we can batch TLB invalidations at the end
of the unmap, similarly to what the kernel does.

We use the same data structure as the kernel and most of the same code.
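The batching idea can be illustrated with a small stand-alone model of the gather structure (assumed semantics mirroring struct iommu_iotlb_gather: merge contiguous pages of the same granule into one pending range, and flush only when a new page cannot be merged).

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the TLB-invalidation gather; not the kernel structure. */
struct gather {
	unsigned long start, end;	/* inclusive pending range */
	size_t pgsize;			/* granule of the pending range, 0 = empty */
};

static int syncs;	/* number of TLB flushes actually issued */

static void gather_init(struct gather *g)
{
	g->start = ~0UL;
	g->end = 0;
	g->pgsize = 0;
}

static void sync_tlb(struct gather *g)
{
	if (g->pgsize)		/* only flush when something is pending */
		syncs++;
	gather_init(g);
}

static int is_disjoint(struct gather *g, unsigned long iova, size_t size)
{
	return g->pgsize && (iova + size < g->start || iova > g->end + 1);
}

static void add_range(struct gather *g, unsigned long iova, size_t size)
{
	if (iova < g->start)
		g->start = iova;
	if (iova + size - 1 > g->end)
		g->end = iova + size - 1;
}

/* Same shape as the _iommu_iotlb_add_page() logic: flush only on a miss. */
static void add_page(struct gather *g, unsigned long iova, size_t size)
{
	if ((g->pgsize && g->pgsize != size) || is_disjoint(g, iova, size))
		sync_tlb(g);
	g->pgsize = size;
	add_range(g, iova, size);
}
```

A long run of contiguous unmapped pages thus costs a single invalidation at the final sync instead of one per page, which is the performance win the commit message refers to.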
Signed-off-by: Mostafa Saleh
---
 arch/arm64/kvm/hyp/include/nvhe/iommu.h | 11 +++++++++--
 arch/arm64/kvm/hyp/nvhe/iommu/iommu.c   | 22 +++++++++++++++++++++-
 include/linux/iommu.h                   | 24 +++++++++++++-----------
 3 files changed, 43 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/iommu.h b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
index 17f24a8eb1b9..06d12b35fa3e 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/iommu.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
@@ -44,15 +44,22 @@ struct kvm_iommu_ops {
			 phys_addr_t paddr, size_t pgsize,
			 size_t pgcount, int prot, size_t *total_mapped);
 	size_t (*unmap_pages)(struct kvm_hyp_iommu_domain *domain, unsigned long iova,
-			      size_t pgsize, size_t pgcount);
+			      size_t pgsize, size_t pgcount,
+			      struct iommu_iotlb_gather *gather);
 	phys_addr_t (*iova_to_phys)(struct kvm_hyp_iommu_domain *domain, unsigned long iova);
-
+	void (*iotlb_sync)(struct kvm_hyp_iommu_domain *domain,
+			   struct iommu_iotlb_gather *gather);
 };
 
 int kvm_iommu_init(void);
 
 int kvm_iommu_init_device(struct kvm_hyp_iommu *iommu);
 
+void kvm_iommu_iotlb_gather_add_page(struct kvm_hyp_iommu_domain *domain,
+				     struct iommu_iotlb_gather *gather,
+				     unsigned long iova,
+				     size_t size);
+
 static inline hyp_spinlock_t *kvm_iommu_get_lock(struct kvm_hyp_iommu *iommu)
 {
 	/* See struct kvm_hyp_iommu */
diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
index 83321cc5f466..a6e0f3634756 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
@@ -305,12 +305,30 @@ size_t kvm_iommu_map_pages(pkvm_handle_t domain_id,
 	return total_mapped;
 }
 
+static inline void kvm_iommu_iotlb_sync(struct kvm_hyp_iommu_domain *domain,
+					struct iommu_iotlb_gather *iotlb_gather)
+{
+	if (kvm_iommu_ops->iotlb_sync)
+		kvm_iommu_ops->iotlb_sync(domain, iotlb_gather);
+
+	iommu_iotlb_gather_init(iotlb_gather);
+}
+
+void kvm_iommu_iotlb_gather_add_page(struct kvm_hyp_iommu_domain *domain,
+				     struct iommu_iotlb_gather *gather,
+				     unsigned long iova,
+				     size_t size)
+{
+	_iommu_iotlb_add_page(domain, gather, iova, size, kvm_iommu_iotlb_sync);
+}
+
 size_t kvm_iommu_unmap_pages(pkvm_handle_t domain_id, unsigned long iova,
			     size_t pgsize, size_t pgcount)
 {
 	size_t size;
 	size_t unmapped;
 	struct kvm_hyp_iommu_domain *domain;
+	struct iommu_iotlb_gather iotlb_gather;
 
 	if (!pgsize || !pgcount)
 		return 0;
@@ -323,6 +341,7 @@ size_t kvm_iommu_unmap_pages(pkvm_handle_t domain_id, unsigned long iova,
 	if (!domain || domain_get(domain))
 		return 0;
 
+	iommu_iotlb_gather_init(&iotlb_gather);
 	/*
 	 * Unlike map, the common code doesn't call the __pkvm_host_unuse_dma,
 	 * because this means that we need either walk the table using iova_to_phys
@@ -334,7 +353,8 @@ size_t kvm_iommu_unmap_pages(pkvm_handle_t domain_id, unsigned long iova,
 	 * standardized, we leave that to the driver.
 	 */
 	unmapped = kvm_iommu_ops->unmap_pages(domain, iova, pgsize,
-					      pgcount);
+					      pgcount, &iotlb_gather);
+	kvm_iommu_iotlb_sync(domain, &iotlb_gather);
 
 	domain_put(domain);
 	return unmapped;
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index bd722f473635..c75877044185 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -911,6 +911,18 @@ static inline void iommu_iotlb_gather_add_range(struct iommu_iotlb_gather *gather
 	gather->end = end;
 }
 
+/*
+ * If the new page is disjoint from the current range or is mapped at
+ * a different granularity, then sync the TLB so that the gather
+ * structure can be rewritten.
+ */ +#define _iommu_iotlb_add_page(domain, gather, iova, size, sync) \ + if (((gather)->pgsize && (gather)->pgsize !=3D (size)) || \ + iommu_iotlb_gather_is_disjoint((gather), (iova), (size))) \ + sync((domain), (gather)); \ + (gather)->pgsize =3D (size); \ + iommu_iotlb_gather_add_range((gather), (iova), (size)) + /** * iommu_iotlb_gather_add_page - Gather for page-based TLB invalidation * @domain: IOMMU domain to be invalidated @@ -926,17 +938,7 @@ static inline void iommu_iotlb_gather_add_page(struct = iommu_domain *domain, struct iommu_iotlb_gather *gather, unsigned long iova, size_t size) { - /* - * If the new page is disjoint from the current range or is mapped at - * a different granularity, then sync the TLB so that the gather - * structure can be rewritten. - */ - if ((gather->pgsize && gather->pgsize !=3D size) || - iommu_iotlb_gather_is_disjoint(gather, iova, size)) - iommu_iotlb_sync(domain, gather); - - gather->pgsize =3D size; - iommu_iotlb_gather_add_range(gather, iova, size); + _iommu_iotlb_add_page(domain, gather, iova, size, iommu_iotlb_sync); } =20 static inline bool iommu_iotlb_gather_queued(struct iommu_iotlb_gather *ga= ther) --=20 2.47.0.338.g60cca15819-goog From nobody Sun Dec 14 19:14:26 2025 Received: from mail-wm1-f73.google.com (mail-wm1-f73.google.com [209.85.128.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AAEDC231A23 for ; Thu, 12 Dec 2024 18:05:26 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734026729; cv=none; b=WmgkI10vxAHMVw90p110r7bscM19+tAcsCsC9YhC28M20P3yCNyrid+b0UYdvzAbWbDrlywgAU7hvkP4Vib0iAcXNs4VP3Z7qKbAJHE3qIclDlYpxtrQO7RRk32KmTux2RsRrPdfcF3jniBF9WE2NdAM5vkOvhLpEyT2WV6MkNw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734026729; c=relaxed/simple; 
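The gather policy in this patch merges contiguous invalidations of the same granule and flushes eagerly when a new page cannot be merged. A standalone sketch of that policy, with simplified types and illustrative names rather than the kernel's structures:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified model of an iommu_iotlb_gather-style batch. */
struct gather {
	unsigned long start, end;	/* inclusive range of pending IOVAs */
	size_t pgsize;			/* granule of the pending entries */
	int syncs;			/* counts flushes, for demonstration */
};

static void gather_init(struct gather *g)
{
	g->start = ~0UL;
	g->end = 0;
	g->pgsize = 0;
}

static void gather_sync(struct gather *g)
{
	/* A real implementation issues the range invalidation here. */
	g->syncs++;
	gather_init(g);
}

static bool gather_disjoint(const struct gather *g, unsigned long iova,
			    size_t size)
{
	unsigned long end = iova + size - 1;

	/* An empty batch (end == 0 here) can never be disjoint. */
	return g->end != 0 && (end < g->start || iova > g->end + 1);
}

static void gather_add_page(struct gather *g, unsigned long iova, size_t size)
{
	/* Flush early if the new page cannot be merged into the batch. */
	if ((g->pgsize && g->pgsize != size) || gather_disjoint(g, iova, size))
		gather_sync(g);

	g->pgsize = size;
	if (iova < g->start)
		g->start = iova;
	if (iova + size - 1 > g->end)
		g->end = iova + size - 1;
}
```

Adjacent 4K pages accumulate into one range with no flush; a far-away IOVA forces a sync before the batch restarts, which is exactly why a single large unmap ends up issuing one invalidation instead of one per page.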
From: Mostafa Saleh
Date: Thu, 12 Dec 2024 18:03:44 +0000
Subject: [RFC PATCH v2 20/58] KVM: arm64: Support power domains
Message-ID: <20241212180423.1578358-21-smostafa@google.com>
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Unfortunately, power management is not widely standardized, so we have to
work around that. One implementation we can support is HOST_HVC, where
the host is in control of power management and notifies the hypervisor
about the updates.

This adds an extra constraint on the IOMMUs: they must reset to blocking
DMA traffic in order to use this PD interface. Unfortunately again, for
SMMUv3, the only IOMMU currently supported, there is no architectural way
to discover this, so we rely on enabling this driver only when it fits
the constraints; the driver also sets GBPA and assumes that the SMMU
retains its state across power cycling.

SCMI support is added in the next patch.

Signed-off-by: Mostafa Saleh
Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/include/asm/kvm_asm.h       |  1 +
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 30 ++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/Makefile       |  2 +-
 arch/arm64/kvm/hyp/nvhe/hyp-main.c     |  9 +++++
 arch/arm64/kvm/hyp/nvhe/power/hvc.c    | 47 ++++++++++++++++++++++++++
 include/kvm/power_domain.h             | 17 ++++++++++
 6 files changed, 105 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/kvm/hyp/nvhe/power/hvc.c
 create mode 100644 include/kvm/power_domain.h

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 9ea155a04332..3dbf30cd10f3 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -114,6 +114,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_iommu_map_pages,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_iommu_unmap_pages,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_iommu_iova_to_phys,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_hvc_pd,
 
 	/*
 	 * Start of the dynamically registered hypercalls. Start a bit
diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index 8a5554615e40..e4a94696b10e 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -8,6 +8,7 @@
 #define __ARM64_KVM_NVHE_PKVM_H__
 
 #include
+#include
 
 #include
 #include
@@ -146,4 +147,33 @@ void pkvm_poison_pvmfw_pages(void);
 int pkvm_timer_init(void);
 void pkvm_udelay(unsigned long usecs);
 
+#define MAX_POWER_DOMAINS 32
+
+struct kvm_power_domain_ops {
+	int (*power_on)(struct kvm_power_domain *pd);
+	int (*power_off)(struct kvm_power_domain *pd);
+};
+
+int pkvm_init_hvc_pd(struct kvm_power_domain *pd,
+		     const struct kvm_power_domain_ops *ops);
+
+int pkvm_host_hvc_pd(u64 device_id, u64 on);
+
+/*
+ * Register a power domain. When the hypervisor catches power requests from the
+ * host for this power domain, it calls the power ops with @pd as argument.
+ */
+static inline int pkvm_init_power_domain(struct kvm_power_domain *pd,
+					 const struct kvm_power_domain_ops *ops)
+{
+	switch (pd->type) {
+	case KVM_POWER_DOMAIN_NONE:
+		return 0;
+	case KVM_POWER_DOMAIN_HOST_HVC:
+		return pkvm_init_hvc_pd(pd, ops);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
 #endif /* __ARM64_KVM_NVHE_PKVM_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 9e1b74c661d2..950d34ba6e50 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -8,7 +8,7 @@ CFLAGS_switch.nvhe.o += -Wno-override-init
 hyp-obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
	 hyp-main.o hyp-smp.o psci-relay.o alloc.o early_alloc.o page_alloc.o \
	 cache.o setup.o mm.o mem_protect.o sys_regs.o pkvm.o stacktrace.o ffa.o \
-	 serial.o alloc_mgt.o iommu/iommu.o
+	 serial.o alloc_mgt.o iommu/iommu.o power/hvc.o
 hyp-obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
	 ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
 hyp-obj-$(CONFIG_LIST_HARDENED) += list_debug.o
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 9b224842c487..5df98bf04ef4 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -1674,6 +1674,14 @@ static void handle___pkvm_host_iommu_iova_to_phys(struct kvm_cpu_context *host_c
 	cpu_reg(host_ctxt, 1) = kvm_iommu_iova_to_phys(domain, iova);
 }
 
+static void handle___pkvm_host_hvc_pd(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(u64, device_id, host_ctxt, 1);
+	DECLARE_REG(u64, on, host_ctxt, 2);
+
+	cpu_reg(host_ctxt, 1) = pkvm_host_hvc_pd(device_id, on);
+}
+
 typedef void (*hcall_t)(struct kvm_cpu_context *);
 
 #define HANDLE_FUNC(x)	[__KVM_HOST_SMCCC_FUNC_##x] = (hcall_t)handle_##x
@@ -1738,6 +1746,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_iommu_map_pages),
 	HANDLE_FUNC(__pkvm_host_iommu_unmap_pages),
 	HANDLE_FUNC(__pkvm_host_iommu_iova_to_phys),
+	HANDLE_FUNC(__pkvm_host_hvc_pd),
 };
 
 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
diff --git a/arch/arm64/kvm/hyp/nvhe/power/hvc.c b/arch/arm64/kvm/hyp/nvhe/power/hvc.c
new file mode 100644
index 000000000000..f4d811847e73
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/power/hvc.c
@@ -0,0 +1,47 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2023 Google LLC
+ * Author: Mostafa Saleh
+ */
+
+#include
+
+struct hvc_power_domain {
+	struct kvm_power_domain *pd;
+	const struct kvm_power_domain_ops *ops;
+};
+
+struct hvc_power_domain handlers[MAX_POWER_DOMAINS];
+
+int pkvm_init_hvc_pd(struct kvm_power_domain *pd,
+		     const struct kvm_power_domain_ops *ops)
+{
+	if (pd->device_id >= MAX_POWER_DOMAINS)
+		return -E2BIG;
+
+	handlers[pd->device_id].ops = ops;
+	handlers[pd->device_id].pd = pd;
+
+	return 0;
+}
+
+int pkvm_host_hvc_pd(u64 device_id, u64 on)
+{
+	struct hvc_power_domain *pd;
+
+	if (device_id >= MAX_POWER_DOMAINS)
+		return -E2BIG;
+
+	device_id = array_index_nospec(device_id, MAX_POWER_DOMAINS);
+	pd = &handlers[device_id];
+
+	if (!pd->ops)
+		return -ENOENT;
+
+	if (on)
+		pd->ops->power_on(pd->pd);
+	else
+		pd->ops->power_off(pd->pd);
+
+	return 0;
+}
diff --git a/include/kvm/power_domain.h b/include/kvm/power_domain.h
new file mode 100644
index 000000000000..f6a9c5cdfebb
--- /dev/null
+++ b/include/kvm/power_domain.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __KVM_POWER_DOMAIN_H
+#define __KVM_POWER_DOMAIN_H
+
+enum kvm_power_domain_type {
+	KVM_POWER_DOMAIN_NONE,
+	KVM_POWER_DOMAIN_HOST_HVC,
+};
+
+struct kvm_power_domain {
+	enum kvm_power_domain_type type;
+	union {
+		u64 device_id; /* HOST_HVC device ID */
+	};
+};
+
+#endif /* __KVM_POWER_DOMAIN_H */
-- 
2.47.0.338.g60cca15819-goog
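The HOST_HVC registration and dispatch in power/hvc.c follows a plain table-of-ops pattern. A rough standalone model, with illustrative names and without the kernel's `array_index_nospec` hardening or error macros beyond `<errno.h>`:

```c
#include <errno.h>
#include <stddef.h>

#define MAX_PDS 32

struct pd;

/* Per-domain callbacks, analogous to kvm_power_domain_ops. */
struct pd_ops {
	int (*power_on)(struct pd *pd);
	int (*power_off)(struct pd *pd);
};

struct pd {
	unsigned long device_id;
	int is_on;		/* demo state, not in the real structure */
};

struct handler {
	struct pd *pd;
	const struct pd_ops *ops;
};

static struct handler handlers[MAX_PDS];

/* Analogous to pkvm_init_hvc_pd(): index the table by device ID. */
static int register_pd(struct pd *pd, const struct pd_ops *ops)
{
	if (pd->device_id >= MAX_PDS)
		return -E2BIG;

	handlers[pd->device_id] = (struct handler){ .pd = pd, .ops = ops };
	return 0;
}

/* Analogous to pkvm_host_hvc_pd(): dispatch a host power notification. */
static int host_hvc_pd(unsigned long device_id, unsigned long on)
{
	struct handler *h;

	if (device_id >= MAX_PDS)
		return -E2BIG;

	h = &handlers[device_id];
	if (!h->ops)
		return -ENOENT;

	return on ? h->ops->power_on(h->pd) : h->ops->power_off(h->pd);
}

static int demo_on(struct pd *pd)  { pd->is_on = 1; return 0; }
static int demo_off(struct pd *pd) { pd->is_on = 0; return 0; }
```

One difference worth noting: this sketch propagates the callback's return value, whereas the patch's `pkvm_host_hvc_pd()` ignores it and returns 0 once a handler is found.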
From: Mostafa Saleh
Date: Thu, 12 Dec 2024 18:03:45 +0000
Subject: [RFC PATCH v2 21/58] KVM: arm64: pkvm: Add __pkvm_host_add_remove_page()
Message-ID: <20241212180423.1578358-22-smostafa@google.com>
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org

From: Jean-Philippe Brucker

Add a small helper to remove and add back a page from the host stage-2.
This will be used to temporarily unmap a piece of shared SRAM (device
memory) from the host while we handle a SCMI request, preventing the
host from modifying the request after it is verified.

Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Mostafa Saleh
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 17 +++++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index d75e64e59596..c8f49b335093 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -94,6 +94,7 @@ int __pkvm_guest_relinquish_to_host(struct pkvm_hyp_vcpu *vcpu,
 				    u64 ipa, u64 *ppa);
 int __pkvm_host_use_dma(u64 phys_addr, size_t size);
 int __pkvm_host_unuse_dma(u64 phys_addr, size_t size);
+int __pkvm_host_add_remove_page(u64 pfn, bool remove);
 
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 0840af20c366..a428ad9ca871 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -2521,3 +2521,20 @@ int host_stage2_get_leaf(phys_addr_t phys, kvm_pte_t *ptep, s8 *level)
 
 	return ret;
 }
+
+/*
+ * Temporarily unmap a page from the host stage-2, if @remove is true, or put it
+ * back. After restoring the ownership to host, the page will be lazy-mapped.
+ */
+int __pkvm_host_add_remove_page(u64 pfn, bool remove)
+{
+	int ret;
+	u64 host_addr = hyp_pfn_to_phys(pfn);
+	u8 owner = remove ? PKVM_ID_HYP : PKVM_ID_HOST;
+
+	host_lock_component();
+	ret = host_stage2_set_owner_locked(host_addr, PAGE_SIZE, owner);
+	host_unlock_component();
+
+	return ret;
+}
-- 
2.47.0.338.g60cca15819-goog
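The intended usage pattern of such a helper — pull the page away from the host, act on the now-stable contents, then hand it back — can be modelled outside the hypervisor as a simple ownership toggle. All names below are illustrative stand-ins, not pKVM's API:

```c
#include <stdbool.h>

enum page_owner { OWNER_HOST, OWNER_HYP };

/* Toy ownership table standing in for the host stage-2. */
#define NR_PAGES 16
static enum page_owner owner[NR_PAGES];

/* Mirrors the shape of __pkvm_host_add_remove_page(): remove=true hands
 * the page to the hypervisor, remove=false returns it to the host. */
static int host_add_remove_page(unsigned int pfn, bool remove)
{
	if (pfn >= NR_PAGES)
		return -1;

	owner[pfn] = remove ? OWNER_HYP : OWNER_HOST;
	return 0;
}

static int process_noop(void)
{
	return 0;
}

/* The SCMI-style critical section: unmap the shmem page so the host
 * cannot race with verification, process, then map it back. */
static int handle_request(unsigned int shmem_pfn, int (*process)(void))
{
	int ret = host_add_remove_page(shmem_pfn, true);

	if (ret)
		return ret;

	ret = process();
	host_add_remove_page(shmem_pfn, false);
	return ret;
}
```

The point of the pairing is that `process()` runs while the host has no stage-2 mapping for the page, so whatever was verified cannot be modified underneath the hypervisor.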
From: Mostafa Saleh
Date: Thu, 12 Dec 2024 18:03:46 +0000
Subject: [RFC PATCH v2 22/58] KVM: arm64: pkvm: Support SCMI power domain
Message-ID: <20241212180423.1578358-23-smostafa@google.com>
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org

From: Jean-Philippe Brucker

The hypervisor needs to catch power domain changes for devices it owns,
such as the SMMU. Possible reasons:

* Ensure that software and hardware states are consistent. The driver
  does not attempt to modify the state while the device is off.
* Save and restore the device state.
* Enforce dependencies between consumers and suppliers. For example,
  ensure that endpoints are off before turning the SMMU off, in case a
  powered-off SMMU lets DMA through. However, this is normally enforced
  by firmware.

Add a SCMI power domain, as the standard method for device power
management on Arm. Other methods can be added to kvm_power_domain later.
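The SCMI shared-memory transport that the patch inspects packs token, protocol ID, message type and message ID into a single 32-bit header word. A sketch of the field extraction, re-implementing `GENMASK`/`FIELD_GET` in portable C for illustration (the mask values match the `SCMI_HDR_*` defines in the patch; the helper names are ours):

```c
#include <stdint.h>

/* Build a contiguous bitmask covering bits h..l, like the kernel's GENMASK. */
#define GENMASK32(h, l)	(((~0u) >> (31 - (h))) & ~((1u << (l)) - 1u))

/* SCMI message header fields, as in the patch. */
#define HDR_TOKEN	GENMASK32(27, 18)
#define HDR_PROTOCOL_ID	GENMASK32(17, 10)
#define HDR_MESSAGE_TYPE GENMASK32(9, 8)
#define HDR_MESSAGE_ID	GENMASK32(7, 0)

#define PROTOCOL_POWER_DOMAIN	0x11	/* SCMI power domain protocol */
#define PD_STATE_SET		0x4	/* POWER_STATE_SET message */

/* FIELD_GET equivalent: mask, then shift down by the mask's lowest set bit. */
static uint32_t field_get(uint32_t mask, uint32_t val)
{
	return (val & mask) / (mask & -mask);
}

/* Does this header carry a power-domain STATE_SET request? */
static int is_pd_state_set(uint32_t hdr)
{
	return field_get(HDR_PROTOCOL_ID, hdr) == PROTOCOL_POWER_DOMAIN &&
	       field_get(HDR_MESSAGE_ID, hdr) == PD_STATE_SET;
}
```

This is the filter the hypervisor applies before deciding whether to intercept a message or simply forward the SMC to EL3.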
Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Mostafa Saleh
---
 arch/arm64/include/asm/kvm_hyp.h               |   2 +
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h         |   4 +
 .../arm64/kvm/hyp/include/nvhe/trap_handler.h  |   2 +
 arch/arm64/kvm/hyp/nvhe/Makefile               |   2 +-
 arch/arm64/kvm/hyp/nvhe/hyp-main.c             |   2 +
 arch/arm64/kvm/hyp/nvhe/power/scmi.c           | 231 ++++++++++++++++++
 include/kvm/power_domain.h                     |   7 +
 7 files changed, 249 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/kvm/hyp/nvhe/power/scmi.c

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index ee85c6dfb504..0257e8e37434 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -119,7 +119,9 @@ void __sve_restore_state(void *sve_pffr, u32 *fpsr, int restore_ffr);
 
 u64 __guest_enter(struct kvm_vcpu *vcpu);
 
+
 bool kvm_host_psci_handler(struct kvm_cpu_context *host_ctxt, u32 func_id);
+bool kvm_host_scmi_handler(struct kvm_cpu_context *host_ctxt);
 
 #ifdef __KVM_NVHE_HYPERVISOR__
 void __noreturn __hyp_do_panic(struct kvm_cpu_context *host_ctxt, u64 spsr,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index e4a94696b10e..4d40c536d26a 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -158,6 +158,8 @@ int pkvm_init_hvc_pd(struct kvm_power_domain *pd,
 		     const struct kvm_power_domain_ops *ops);
 
 int pkvm_host_hvc_pd(u64 device_id, u64 on);
+int pkvm_init_scmi_pd(struct kvm_power_domain *pd,
+		      const struct kvm_power_domain_ops *ops);
 
 /*
  * Register a power domain. When the hypervisor catches power requests from the
@@ -171,6 +173,8 @@ static inline int pkvm_init_power_domain(struct kvm_power_domain *pd,
 		return 0;
 	case KVM_POWER_DOMAIN_HOST_HVC:
 		return pkvm_init_hvc_pd(pd, ops);
+	case KVM_POWER_DOMAIN_ARM_SCMI:
+		return pkvm_init_scmi_pd(pd, ops);
 	default:
 		return -EOPNOTSUPP;
 	}
diff --git a/arch/arm64/kvm/hyp/include/nvhe/trap_handler.h b/arch/arm64/kvm/hyp/include/nvhe/trap_handler.h
index 1e6d995968a1..0e6bb92ccdb7 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/trap_handler.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/trap_handler.h
@@ -15,4 +15,6 @@
 #define DECLARE_REG(type, name, ctxt, reg)	\
 		type name = (type)cpu_reg(ctxt, (reg))
 
+void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt);
+
 #endif /* __ARM64_KVM_NVHE_TRAP_HANDLER_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 950d34ba6e50..d846962e7246 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -8,7 +8,7 @@ CFLAGS_switch.nvhe.o += -Wno-override-init
 hyp-obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
	 hyp-main.o hyp-smp.o psci-relay.o alloc.o early_alloc.o page_alloc.o \
	 cache.o setup.o mm.o mem_protect.o sys_regs.o pkvm.o stacktrace.o ffa.o \
-	 serial.o alloc_mgt.o iommu/iommu.o power/hvc.o
+	 serial.o alloc_mgt.o iommu/iommu.o power/hvc.o power/scmi.o
 hyp-obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
	 ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
 hyp-obj-$(CONFIG_LIST_HARDENED) += list_debug.o
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 5df98bf04ef4..1ab8e5507825 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -1806,6 +1806,8 @@ static void handle_host_smc(struct kvm_cpu_context *host_ctxt)
 	handled = kvm_host_psci_handler(host_ctxt, func_id);
 	if (!handled)
 		handled = kvm_host_ffa_handler(host_ctxt, func_id);
+	if (!handled)
+		handled = kvm_host_scmi_handler(host_ctxt);
 	if (!handled && smp_load_acquire(&default_host_smc_handler))
 		handled = default_host_smc_handler(&host_ctxt->regs);
 	if (!handled) {
diff --git a/arch/arm64/kvm/hyp/nvhe/power/scmi.c b/arch/arm64/kvm/hyp/nvhe/power/scmi.c
new file mode 100644
index 000000000000..7de3feb2f1d9
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/power/scmi.c
@@ -0,0 +1,231 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2022 Linaro Ltd.
+ */
+
+#include
+
+#include
+#include
+#include
+#include
+
+/* SCMI protocol */
+#define SCMI_PROTOCOL_POWER_DOMAIN	0x11
+
+/* shmem registers */
+#define SCMI_SHM_CHANNEL_STATUS		0x4
+#define SCMI_SHM_CHANNEL_FLAGS		0x10
+#define SCMI_SHM_LENGTH			0x14
+#define SCMI_SHM_MESSAGE_HEADER		0x18
+#define SCMI_SHM_MESSAGE_PAYLOAD	0x1c
+
+/* channel status */
+#define SCMI_CHN_FREE			(1U << 0)
+#define SCMI_CHN_ERROR			(1U << 1)
+
+/* channel flags */
+#define SCMI_CHN_IRQ			(1U << 0)
+
+/* message header */
+#define SCMI_HDR_TOKEN			GENMASK(27, 18)
+#define SCMI_HDR_PROTOCOL_ID		GENMASK(17, 10)
+#define SCMI_HDR_MESSAGE_TYPE		GENMASK(9, 8)
+#define SCMI_HDR_MESSAGE_ID		GENMASK(7, 0)
+
+/* power domain */
+#define SCMI_PD_STATE_SET		0x4
+#define SCMI_PD_STATE_SET_FLAGS		0x0
+#define SCMI_PD_STATE_SET_DOMAIN_ID	0x4
+#define SCMI_PD_STATE_SET_POWER_STATE	0x8
+
+#define SCMI_PD_STATE_SET_STATUS	0x0
+
+#define SCMI_PD_STATE_SET_FLAGS_ASYNC	(1U << 0)
+
+#define SCMI_PD_POWER_ON		0
+#define SCMI_PD_POWER_OFF		(1U << 30)
+
+#define SCMI_SUCCESS			0
+
+static struct {
+	u32 smc_id;
+	phys_addr_t shmem_pfn;
+	size_t shmem_size;
+	void __iomem *shmem;
+} scmi_channel;
+
+struct scmi_power_domain {
+	struct kvm_power_domain *pd;
+	const struct kvm_power_domain_ops *ops;
+};
+
+static struct scmi_power_domain scmi_power_domains[MAX_POWER_DOMAINS];
+static int scmi_power_domain_count;
+
+#define SCMI_POLL_TIMEOUT_US	1000000 /* 1s! */
+
+/* Forward the command to EL3, and wait for completion */
+static int scmi_run_command(struct kvm_cpu_context *host_ctxt)
+{
+	u32 reg;
+	unsigned long i = 0;
+
+	__kvm_hyp_host_forward_smc(host_ctxt);
+
+	do {
+		reg = readl_relaxed(scmi_channel.shmem + SCMI_SHM_CHANNEL_STATUS);
+		if (reg & SCMI_CHN_FREE)
+			break;
+
+		if (WARN_ON(++i > SCMI_POLL_TIMEOUT_US))
+			return -ETIMEDOUT;
+
+		pkvm_udelay(1);
+	} while (!(reg & (SCMI_CHN_FREE | SCMI_CHN_ERROR)));
+
+	if (reg & SCMI_CHN_ERROR)
+		return -EIO;
+
+	reg = readl_relaxed(scmi_channel.shmem + SCMI_SHM_MESSAGE_PAYLOAD +
+			    SCMI_PD_STATE_SET_STATUS);
+	if (reg != SCMI_SUCCESS)
+		return -EIO;
+
+	return 0;
+}
+
+static void __kvm_host_scmi_handler(struct kvm_cpu_context *host_ctxt)
+{
+	int i;
+	u32 reg;
+	struct scmi_power_domain *scmi_pd = NULL;
+
+	/*
+	 * FIXME: the spec does not really allow for an intermediary filtering
+	 * messages on the channel: as soon as the host clears SCMI_CHN_FREE,
+	 * the server may process the message. It doesn't have to wait for a
+	 * doorbell and could just poll on the shared mem. Unlikely in practice,
+	 * but this code is not correct without a spec change requiring the
+	 * server to observe an SMC before processing the message.
+	 */
+	reg = readl_relaxed(scmi_channel.shmem + SCMI_SHM_CHANNEL_STATUS);
+	if (reg & (SCMI_CHN_FREE | SCMI_CHN_ERROR))
+		return;
+
+	reg = readl_relaxed(scmi_channel.shmem + SCMI_SHM_MESSAGE_HEADER);
+	if (FIELD_GET(SCMI_HDR_PROTOCOL_ID, reg) != SCMI_PROTOCOL_POWER_DOMAIN)
+		goto out_forward_smc;
+
+	if (FIELD_GET(SCMI_HDR_MESSAGE_ID, reg) != SCMI_PD_STATE_SET)
+		goto out_forward_smc;
+
+	reg = readl_relaxed(scmi_channel.shmem + SCMI_SHM_MESSAGE_PAYLOAD +
+			    SCMI_PD_STATE_SET_FLAGS);
+	if (WARN_ON(reg & SCMI_PD_STATE_SET_FLAGS_ASYNC))
+		/* We don't support async requests at the moment */
+		return;
+
+	reg = readl_relaxed(scmi_channel.shmem + SCMI_SHM_MESSAGE_PAYLOAD +
+			    SCMI_PD_STATE_SET_DOMAIN_ID);
+
+	for (i = 0; i < MAX_POWER_DOMAINS; i++) {
+		if (!scmi_power_domains[i].pd)
+			break;
+
+		if (reg == scmi_power_domains[i].pd->arm_scmi.domain_id) {
+			scmi_pd = &scmi_power_domains[i];
+			break;
+		}
+	}
+	if (!scmi_pd)
+		goto out_forward_smc;
+
+	reg = readl_relaxed(scmi_channel.shmem + SCMI_SHM_MESSAGE_PAYLOAD +
+			    SCMI_PD_STATE_SET_POWER_STATE);
+	switch (reg) {
+	case SCMI_PD_POWER_ON:
+		if (scmi_run_command(host_ctxt))
+			break;
+
+		scmi_pd->ops->power_on(scmi_pd->pd);
+		break;
+	case SCMI_PD_POWER_OFF:
+		scmi_pd->ops->power_off(scmi_pd->pd);
+
+		if (scmi_run_command(host_ctxt))
+			scmi_pd->ops->power_on(scmi_pd->pd);
+		break;
+	}
+	return;
+
+out_forward_smc:
+	__kvm_hyp_host_forward_smc(host_ctxt);
+}
+
+bool kvm_host_scmi_handler(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(u64, func_id, host_ctxt, 0);
+
+	if (!scmi_channel.shmem || func_id != scmi_channel.smc_id)
+		return false; /* Unhandled */
+
+	/*
+	 * Prevent the host from modifying the request while it is in flight.
+	 * One page is enough, SCMI messages are smaller than that.
+	 *
+	 * FIXME: the host is allowed to poll the shmem while the request is in
+	 * flight, or read shmem when receiving the SCMI interrupt.
Although + * it's unlikely with the SMC-based transport, this too requires some + * tightening in the spec. + */ + if (WARN_ON(__pkvm_host_add_remove_page(scmi_channel.shmem_pfn, true))) + return true; + + __kvm_host_scmi_handler(host_ctxt); + + WARN_ON(__pkvm_host_add_remove_page(scmi_channel.shmem_pfn, false)); + return true; /* Handled */ +} + +int pkvm_init_scmi_pd(struct kvm_power_domain *pd, + const struct kvm_power_domain_ops *ops) +{ + int ret; + + if (!IS_ALIGNED(pd->arm_scmi.shmem_base, PAGE_SIZE) || + pd->arm_scmi.shmem_size < PAGE_SIZE) { + return -EINVAL; + } + + if (!scmi_channel.shmem) { + unsigned long shmem; + + /* FIXME: Do we need to mark those pages shared in the host s2? */ + ret =3D __pkvm_create_private_mapping(pd->arm_scmi.shmem_base, + pd->arm_scmi.shmem_size, + PAGE_HYP_DEVICE, + &shmem); + if (ret) + return ret; + + scmi_channel.smc_id =3D pd->arm_scmi.smc_id; + scmi_channel.shmem_pfn =3D hyp_phys_to_pfn(pd->arm_scmi.shmem_base); + scmi_channel.shmem =3D (void *)shmem; + + } else if (scmi_channel.shmem_pfn !=3D + hyp_phys_to_pfn(pd->arm_scmi.shmem_base) || + scmi_channel.smc_id !=3D pd->arm_scmi.smc_id) { + /* We support a single channel at the moment */ + return -ENXIO; + } + + if (scmi_power_domain_count =3D=3D MAX_POWER_DOMAINS) + return -ENOSPC; + + scmi_power_domains[scmi_power_domain_count].pd =3D pd; + scmi_power_domains[scmi_power_domain_count].ops =3D ops; + scmi_power_domain_count++; + return 0; +} diff --git a/include/kvm/power_domain.h b/include/kvm/power_domain.h index f6a9c5cdfebb..9ade1d60f5f5 100644 --- a/include/kvm/power_domain.h +++ b/include/kvm/power_domain.h @@ -5,12 +5,19 @@ enum kvm_power_domain_type { KVM_POWER_DOMAIN_NONE, KVM_POWER_DOMAIN_HOST_HVC, + KVM_POWER_DOMAIN_ARM_SCMI, }; =20 struct kvm_power_domain { enum kvm_power_domain_type type; union { u64 device_id; /* HOST_HVC device ID*/ + struct { + u32 smc_id; + u32 domain_id; + phys_addr_t shmem_base; + size_t shmem_size; + } arm_scmi; /*ARM_SCMI channel */ 
	};
};

-- 
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:03:47 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-24-smostafa@google.com>
Subject: [RFC PATCH v2 23/58] KVM: arm64: iommu: Support power management
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org,
 oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org,
 robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca,
 nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com,
 tabba@google.com, danielmentz@google.com, tzukui@google.com,
 Mostafa Saleh
Content-Type: text/plain; charset="utf-8"

From: Jean-Philippe Brucker

Add power domain ops to the hypervisor IOMMU driver. We currently make
these assumptions:

* The register state is retained across power off.
* The TLBs are clean on power on.
* Another privileged software (EL3 or SCP FW) handles dependencies
  between SMMU and endpoints.

So we just need to make sure that the CPU does not touch the SMMU
registers while it is powered off.
Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Mostafa Saleh
---
 arch/arm64/kvm/hyp/nvhe/iommu/iommu.c | 33 ++++++++++++++++++++++++++-
 include/kvm/iommu.h                   |  3 +++
 2 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
index a6e0f3634756..fbab335d3490 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
@@ -375,10 +375,41 @@ phys_addr_t kvm_iommu_iova_to_phys(pkvm_handle_t domain_id, unsigned long iova)
 	return phys;
 }
 
+static int iommu_power_on(struct kvm_power_domain *pd)
+{
+	struct kvm_hyp_iommu *iommu = container_of(pd, struct kvm_hyp_iommu,
+						   power_domain);
+
+	/*
+	 * We currently assume that the device retains its architectural state
+	 * across power off, hence no save/restore.
+	 */
+	kvm_iommu_lock(iommu);
+	iommu->power_is_off = false;
+	kvm_iommu_unlock(iommu);
+	return 0;
+}
+
+static int iommu_power_off(struct kvm_power_domain *pd)
+{
+	struct kvm_hyp_iommu *iommu = container_of(pd, struct kvm_hyp_iommu,
+						   power_domain);
+
+	kvm_iommu_lock(iommu);
+	iommu->power_is_off = true;
+	kvm_iommu_unlock(iommu);
+	return 0;
+}
+
+static const struct kvm_power_domain_ops iommu_power_ops = {
+	.power_on	= iommu_power_on,
+	.power_off	= iommu_power_off,
+};
+
 /* Must be called from the IOMMU driver per IOMMU */
 int kvm_iommu_init_device(struct kvm_hyp_iommu *iommu)
 {
 	kvm_iommu_lock_init(iommu);
 
-	return 0;
+	return pkvm_init_power_domain(&iommu->power_domain, &iommu_power_ops);
 }
diff --git a/include/kvm/iommu.h b/include/kvm/iommu.h
index 6ff78d766466..c524ba84a9cf 100644
--- a/include/kvm/iommu.h
+++ b/include/kvm/iommu.h
@@ -3,6 +3,7 @@
 #define __KVM_IOMMU_H
 
 #include
+#include
 #include
 #ifdef __KVM_NVHE_HYPERVISOR__
 #include
@@ -51,6 +52,8 @@ struct kvm_hyp_iommu {
 #else
 	u32 unused;
 #endif
+	struct kvm_power_domain power_domain;
+	bool power_is_off;
 };
 
 #endif /* __KVM_IOMMU_H */
-- 
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:03:48 +0000
In-Reply-To:
<20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-25-smostafa@google.com>
Subject: [RFC PATCH v2 24/58] KVM: arm64: iommu: Support DABT for IOMMU
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org,
 oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org,
 robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca,
 nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com,
 tabba@google.com, danielmentz@google.com, tzukui@google.com,
 Mostafa Saleh
Content-Type: text/plain; charset="utf-8"

Soon, an SMMUv3 driver will be added, and it will need to emulate
accesses to some of its MMIO space. Add a DABT handler so that IOMMU
drivers can do this.
Signed-off-by: Mostafa Saleh
---
 arch/arm64/kvm/hyp/include/nvhe/iommu.h |  2 ++
 arch/arm64/kvm/hyp/nvhe/iommu/iommu.c   | 17 +++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c   | 19 +++++++++++++++++--
 3 files changed, 36 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/iommu.h b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
index 06d12b35fa3e..cff75d67d807 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/iommu.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
@@ -21,6 +21,7 @@ size_t kvm_iommu_map_pages(pkvm_handle_t domain_id,
 size_t kvm_iommu_unmap_pages(pkvm_handle_t domain_id, unsigned long iova,
 			     size_t pgsize, size_t pgcount);
 phys_addr_t kvm_iommu_iova_to_phys(pkvm_handle_t domain_id, unsigned long iova);
+bool kvm_iommu_host_dabt_handler(struct kvm_cpu_context *host_ctxt, u64 esr, u64 addr);
 
 /* Flags for memory allocation for IOMMU drivers */
 #define IOMMU_PAGE_NOCACHE	BIT(0)
@@ -49,6 +50,7 @@ struct kvm_iommu_ops {
 	phys_addr_t (*iova_to_phys)(struct kvm_hyp_iommu_domain *domain, unsigned long iova);
 	void (*iotlb_sync)(struct kvm_hyp_iommu_domain *domain,
 			   struct iommu_iotlb_gather *gather);
+	bool (*dabt_handler)(struct kvm_cpu_context *host_ctxt, u64 esr, u64 addr);
 };
 
 int kvm_iommu_init(void);
diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
index fbab335d3490..e45dadd0c4aa 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
@@ -4,6 +4,10 @@
  *
  * Copyright (C) 2022 Linaro Ltd.
  */
+#include
+
+#include
+
 #include
 
 #include
@@ -375,6 +379,19 @@ phys_addr_t kvm_iommu_iova_to_phys(pkvm_handle_t domain_id, unsigned long iova)
 	return phys;
 }
 
+bool kvm_iommu_host_dabt_handler(struct kvm_cpu_context *host_ctxt, u64 esr, u64 addr)
+{
+	bool ret = false;
+
+	if (kvm_iommu_ops && kvm_iommu_ops->dabt_handler)
+		ret = kvm_iommu_ops->dabt_handler(host_ctxt, esr, addr);
+
+	if (ret)
+		kvm_skip_host_instr();
+
+	return ret;
+}
+
 static int iommu_power_on(struct kvm_power_domain *pd)
 {
 	struct kvm_hyp_iommu *iommu = container_of(pd, struct kvm_hyp_iommu,
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index a428ad9ca871..0fae651107db 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -16,6 +16,7 @@
 #include
 
 #include
+#include
 #include
 #include
 #include
@@ -799,11 +800,16 @@ static int handle_host_perm_fault(struct kvm_cpu_context *host_ctxt, u64 esr, u64 addr)
 	return handled ? 0 : -EPERM;
 }
 
+static bool is_dabt(u64 esr)
+{
+	return ESR_ELx_EC(esr) == ESR_ELx_EC_DABT_LOW;
+}
+
 void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt)
 {
 	struct kvm_vcpu_fault_info fault;
 	u64 esr, addr;
-	int ret = 0;
+	int ret = -EPERM;
 
 	esr = read_sysreg_el2(SYS_ESR);
 	if (!__get_fault_info(esr, &fault)) {
@@ -817,7 +823,15 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt)
 	}
 
 	addr = (fault.hpfar_el2 & HPFAR_MASK) << 8;
-	ret = host_stage2_idmap(addr);
+	addr |= fault.far_el2 & FAR_MASK;
+
+	if (is_dabt(esr) && !addr_is_memory(addr) &&
+	    kvm_iommu_host_dabt_handler(host_ctxt, esr, addr))
+		goto return_to_host;
+
+	/* If not handled, attempt to map the page. */
+	if (ret == -EPERM)
+		ret = host_stage2_idmap(addr);
 
 	if ((esr & ESR_ELx_FSC_TYPE) == ESR_ELx_FSC_PERM)
 		ret = handle_host_perm_fault(host_ctxt, esr, addr);
@@ -827,6 +841,7 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt)
 	else
 		BUG_ON(ret && ret != -EAGAIN);
 
+return_to_host:
 	trace_host_mem_abort(esr, addr);
 }
 
-- 
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:03:49 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-26-smostafa@google.com>
Subject: [RFC PATCH v2 25/58] KVM: arm64: iommu: Add SMMUv3 driver
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org,
 oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org,
 robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca,
 nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com,
 tabba@google.com, danielmentz@google.com, tzukui@google.com,
 Mostafa Saleh
Content-Type: text/plain; charset="utf-8"

From: Jean-Philippe Brucker

Add the skeleton for an Arm SMMUv3 driver at EL2.
Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Mostafa Saleh
---
 arch/arm64/kvm/hyp/nvhe/Makefile            |  2 ++
 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c | 22 +++++++++++++++++++++
 drivers/iommu/Kconfig                       |  9 +++++++++
 include/kvm/arm_smmu_v3.h                   | 18 +++++++++++++++++
 4 files changed, 51 insertions(+)
 create mode 100644 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
 create mode 100644 include/kvm/arm_smmu_v3.h

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index d846962e7246..edfd8a11ac90 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -16,6 +16,8 @@ hyp-obj-$(CONFIG_TRACING) += clock.o events.o trace.o
 hyp-obj-$(CONFIG_MODULES) += modules.o
 hyp-obj-y += $(lib-objs)
 
+hyp-obj-$(CONFIG_ARM_SMMU_V3_PKVM) += iommu/arm-smmu-v3.o
+
 $(obj)/hyp.lds: $(src)/hyp.lds.S FORCE
 	$(call if_changed_dep,cpp_lds_S)
 
diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
new file mode 100644
index 000000000000..d2a570c9f3ec
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
@@ -0,0 +1,22 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * pKVM hyp driver for the Arm SMMUv3
+ *
+ * Copyright (C) 2022 Linaro Ltd.
+ */
+#include
+#include
+#include
+
+size_t __ro_after_init kvm_hyp_arm_smmu_v3_count;
+struct hyp_arm_smmu_v3_device *kvm_hyp_arm_smmu_v3_smmus;
+
+static int smmu_init(void)
+{
+	return -ENOSYS;
+}
+
+/* Shared with the kernel driver in EL1 */
+struct kvm_iommu_ops smmu_ops = {
+	.init = smmu_init,
+};
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index b3aa1f5d5321..fea5d6a8b90b 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -437,6 +437,15 @@ config TEGRA241_CMDQV
 	  CMDQ-V extension.
 endif
 
+config ARM_SMMU_V3_PKVM
+	bool "ARM SMMUv3 support for protected Virtual Machines"
+	depends on KVM && ARM64
+	help
+	  Enable a SMMUv3 driver in the KVM hypervisor, to protect VMs against
+	  memory accesses from devices owned by the host.
+
+	  Say Y here if you intend to enable KVM in protected mode.
+
 config S390_IOMMU
 	def_bool y if S390 && PCI
 	depends on S390 && PCI
diff --git a/include/kvm/arm_smmu_v3.h b/include/kvm/arm_smmu_v3.h
new file mode 100644
index 000000000000..521028b3ff71
--- /dev/null
+++ b/include/kvm/arm_smmu_v3.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __KVM_ARM_SMMU_V3_H
+#define __KVM_ARM_SMMU_V3_H
+
+#include
+#include
+
+struct hyp_arm_smmu_v3_device {
+	struct kvm_hyp_iommu iommu;
+};
+
+extern size_t kvm_nvhe_sym(kvm_hyp_arm_smmu_v3_count);
+#define kvm_hyp_arm_smmu_v3_count kvm_nvhe_sym(kvm_hyp_arm_smmu_v3_count)
+
+extern struct hyp_arm_smmu_v3_device *kvm_nvhe_sym(kvm_hyp_arm_smmu_v3_smmus);
+#define kvm_hyp_arm_smmu_v3_smmus kvm_nvhe_sym(kvm_hyp_arm_smmu_v3_smmus)
+
+#endif /* __KVM_ARM_SMMU_V3_H */
-- 
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
b=PoIER54v8/Dq0roWRCuz9C8YCgsN3h5McwiAZpLVWwqSPke8S3z2HmOQPm3MycflIe HnfcKGlbA4J9R+60SJT8OF3eMGGZCG5PQ2j83kNvVfZ9XiZESGplprZss94V9RFS2e2c +QJmaZAmqroMcDywjaL1Z+qFIU1x7NjqNasDTPHZzAaYIK04XScGRjaTkewVQsqp8xBz U5O6bp94BrFBqMxeUqaJ94TeaaRwAlaoc132OJO3JrWJeCeyJd5GFeGuRBLdXn7KNIOx u4g7mW01JUMbsUtm/pmnsBWwybp7+bnti85rpn9aMp+TWNg03AFmyvyFtU+MRmleBo4O 2LkQ== X-Forwarded-Encrypted: i=1; AJvYcCV8hftIGwY1tZvb0Byn9MrKVcO5/1nY3AvCP+9GS4uj/69zzNMi1gPGdcw5Cbj/V43vXc9epBLZPWaP75g=@vger.kernel.org X-Gm-Message-State: AOJu0YziDXOh5lv1BgrrtemqbMWU7iWV2Micxbipevv6BmQhGhMjcW3K wvKEQ1MS9xMYKB8VXNbm4wcv6fyUDmxB2o127+ZotNALmBRjhl3rLhrP/ejKALgKozG/oxd1YXE 7AdTWCj9oQw== X-Google-Smtp-Source: AGHT+IE0b7txsRIOyvGxW20/4w0Q8YO5iAlMJIT7VhEN6QmR2/Vq39IVy03R+zOk9tpZ9g+gCkzVceXKRYZllw== X-Received: from wmbg5.prod.google.com ([2002:a05:600c:a405:b0:434:fddf:5c1a]) (user=smostafa job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6000:18a2:b0:385:faf5:ebc8 with SMTP id ffacd0b85a97d-38788847313mr3909644f8f.21.1734026737836; Thu, 12 Dec 2024 10:05:37 -0800 (PST) Date: Thu, 12 Dec 2024 18:03:50 +0000 In-Reply-To: <20241212180423.1578358-1-smostafa@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20241212180423.1578358-1-smostafa@google.com> X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog Message-ID: <20241212180423.1578358-27-smostafa@google.com> Subject: [RFC PATCH v2 26/58] KVM: arm64: smmu-v3: Initialize registers From: Mostafa Saleh To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, 
danielmentz@google.com, tzukui@google.com, Mostafa Saleh Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Jean-Philippe Brucker Ensure all writable registers are properly initialized. We do not touch registers that will not be read by the SMMU due to disabled features. Signed-off-by: Jean-Philippe Brucker Signed-off-by: Mostafa Saleh --- arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c | 130 +++++++++++++++++++- include/kvm/arm_smmu_v3.h | 11 ++ 2 files changed, 140 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c b/arch/arm64/kvm/h= yp/nvhe/iommu/arm-smmu-v3.c index d2a570c9f3ec..f7e60c188cb0 100644 --- a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c +++ b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c @@ -4,16 +4,144 @@ * * Copyright (C) 2022 Linaro Ltd. */ +#include #include #include #include +#include +#include +#include + +#define ARM_SMMU_POLL_TIMEOUT_US 100000 /* 100ms arbitrary timeout */ =20 size_t __ro_after_init kvm_hyp_arm_smmu_v3_count; struct hyp_arm_smmu_v3_device *kvm_hyp_arm_smmu_v3_smmus; =20 +#define for_each_smmu(smmu) \ + for ((smmu) =3D kvm_hyp_arm_smmu_v3_smmus; \ + (smmu) !=3D &kvm_hyp_arm_smmu_v3_smmus[kvm_hyp_arm_smmu_v3_count]; \ + (smmu)++) + +/* + * Wait until @cond is true. 
+ * Return 0 on success, or -ETIMEDOUT + */ +#define smmu_wait(_cond) \ +({ \ + int __i =3D 0; \ + int __ret =3D 0; \ + \ + while (!(_cond)) { \ + if (++__i > ARM_SMMU_POLL_TIMEOUT_US) { \ + __ret =3D -ETIMEDOUT; \ + break; \ + } \ + pkvm_udelay(1); \ + } \ + __ret; \ +}) + +static int smmu_write_cr0(struct hyp_arm_smmu_v3_device *smmu, u32 val) +{ + writel_relaxed(val, smmu->base + ARM_SMMU_CR0); + return smmu_wait(readl_relaxed(smmu->base + ARM_SMMU_CR0ACK) =3D=3D val); +} + +/* Transfer ownership of structures from host to hyp */ +static int smmu_take_pages(u64 phys, size_t size) +{ + WARN_ON(!PAGE_ALIGNED(phys) || !PAGE_ALIGNED(size)); + return __pkvm_host_donate_hyp(phys >> PAGE_SHIFT, size >> PAGE_SHIFT); +} + +static void smmu_reclaim_pages(u64 phys, size_t size) +{ + WARN_ON(!PAGE_ALIGNED(phys) || !PAGE_ALIGNED(size)); + WARN_ON(__pkvm_hyp_donate_host(phys >> PAGE_SHIFT, size >> PAGE_SHIFT)); +} + +static int smmu_init_registers(struct hyp_arm_smmu_v3_device *smmu) +{ + u64 val, old; + int ret; + + if (!(readl_relaxed(smmu->base + ARM_SMMU_GBPA) & GBPA_ABORT)) + return -EINVAL; + + /* Initialize all RW registers that will be read by the SMMU */ + ret =3D smmu_write_cr0(smmu, 0); + if (ret) + return ret; + + val =3D FIELD_PREP(CR1_TABLE_SH, ARM_SMMU_SH_ISH) | + FIELD_PREP(CR1_TABLE_OC, CR1_CACHE_WB) | + FIELD_PREP(CR1_TABLE_IC, CR1_CACHE_WB) | + FIELD_PREP(CR1_QUEUE_SH, ARM_SMMU_SH_ISH) | + FIELD_PREP(CR1_QUEUE_OC, CR1_CACHE_WB) | + FIELD_PREP(CR1_QUEUE_IC, CR1_CACHE_WB); + writel_relaxed(val, smmu->base + ARM_SMMU_CR1); + writel_relaxed(CR2_PTM, smmu->base + ARM_SMMU_CR2); + writel_relaxed(0, smmu->base + ARM_SMMU_IRQ_CTRL); + + val =3D readl_relaxed(smmu->base + ARM_SMMU_GERROR); + old =3D readl_relaxed(smmu->base + ARM_SMMU_GERRORN); + /* Service Failure Mode is fatal */ + if ((val ^ old) & GERROR_SFM_ERR) + return -EIO; + /* Clear pending errors */ + writel_relaxed(val, smmu->base + ARM_SMMU_GERRORN); + + return 0; +} + +static int 
smmu_init_device(struct hyp_arm_smmu_v3_device *smmu) +{ + int ret; + + if (!PAGE_ALIGNED(smmu->mmio_addr | smmu->mmio_size)) + return -EINVAL; + + ret =3D ___pkvm_host_donate_hyp(smmu->mmio_addr >> PAGE_SHIFT, + smmu->mmio_size >> PAGE_SHIFT, + /* accept_mmio */ true); + if (ret) + return ret; + + smmu->base =3D hyp_phys_to_virt(smmu->mmio_addr); + + ret =3D smmu_init_registers(smmu); + if (ret) + return ret; + + return kvm_iommu_init_device(&smmu->iommu); +} + static int smmu_init(void) { - return -ENOSYS; + int ret; + struct hyp_arm_smmu_v3_device *smmu; + size_t smmu_arr_size =3D PAGE_ALIGN(sizeof(*kvm_hyp_arm_smmu_v3_smmus) * + kvm_hyp_arm_smmu_v3_count); + phys_addr_t smmu_arr_phys; + + kvm_hyp_arm_smmu_v3_smmus =3D kern_hyp_va(kvm_hyp_arm_smmu_v3_smmus); + + smmu_arr_phys =3D hyp_virt_to_phys(kvm_hyp_arm_smmu_v3_smmus); + + ret =3D smmu_take_pages(smmu_arr_phys, smmu_arr_size); + if (ret) + return ret; + + for_each_smmu(smmu) { + ret =3D smmu_init_device(smmu); + if (ret) + goto out_reclaim_smmu; + } + + return 0; +out_reclaim_smmu: + smmu_reclaim_pages(smmu_arr_phys, smmu_arr_size); + return ret; } =20 /* Shared with the kernel driver in EL1 */ diff --git a/include/kvm/arm_smmu_v3.h b/include/kvm/arm_smmu_v3.h index 521028b3ff71..fb24bcef1624 100644 --- a/include/kvm/arm_smmu_v3.h +++ b/include/kvm/arm_smmu_v3.h @@ -5,8 +5,19 @@ #include #include =20 +/* + * Parameters from the trusted host: + * @mmio_addr base address of the SMMU registers + * @mmio_size size of the registers resource + * + * Other members are filled and used at runtime by the SMMU driver. 
+ */ struct hyp_arm_smmu_v3_device { struct kvm_hyp_iommu iommu; + phys_addr_t mmio_addr; + size_t mmio_size; + + void __iomem *base; }; =20 extern size_t kvm_nvhe_sym(kvm_hyp_arm_smmu_v3_count); --=20 2.47.0.338.g60cca15819-goog From nobody Sun Dec 14 19:14:26 2025 Received: from mail-wr1-f74.google.com (mail-wr1-f74.google.com [209.85.221.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 75E95236FAB for ; Thu, 12 Dec 2024 18:05:41 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.221.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734026743; cv=none; b=eMROM9gv104lHTkRoSYQ5Iv+Uh2vc2M2hUtHHf6UDvxm9yTruwpyKPfS6qRMIgkrSsJzFitjX9zSMeU1eUyKZ22CDf9W9crm2RVWhrlRSzgM5WCirznrDig53jGqVgnzZEKJJNG8UFxfRWbG0I2pk2OEs3aU5u0GOi9vqYHJ+Dk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734026743; c=relaxed/simple; bh=euL9B2UCG6k2HW5isDJ95uh1CP91/CqfM0fov3cwjUc=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=f0/eK09BrpFR9cSygNv8EHaBZ5HdadndH9GwZi0R8zlpOm4B/w+RyUu3dH++FM1CXiTc6TxLGls/XNVeYALg2sxQkqtdUuU+Yxoy14/lcAIhGh2HL7Q/fTX7oFy9DTFfQsFs6fQAyCeTZC7MviTwe1nJO+61leEAOAIqO6oBaOA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--smostafa.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=gW8DEHLf; arc=none smtp.client-ip=209.85.221.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--smostafa.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="gW8DEHLf" Received: by 
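The CR0/CR0ACK handshake that smmu_write_cr0() relies on can be illustrated with a small, self-contained C model. This is only a sketch: the `toy_*` names and the instantly-acknowledging "hardware" are inventions of this example, not hypervisor code (the real driver polls MMIO with pkvm_udelay(1) per iteration).

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the SMMUv3 CR0/CR0ACK update protocol: a CR0 write only
 * takes effect once the device mirrors the value into CR0ACK, so the
 * driver polls with a bounded number of retries and times out otherwise. */
struct toy_smmu {
	uint32_t cr0;
	uint32_t cr0ack;
};

/* Stand-in for the hardware: acknowledge the last CR0 write. */
static void toy_hw_ack(struct toy_smmu *s)
{
	s->cr0ack = s->cr0;
}

static int toy_write_cr0(struct toy_smmu *s, uint32_t val, int max_polls)
{
	s->cr0 = val;
	while (max_polls--) {
		toy_hw_ack(s);		/* real code would re-read CR0ACK via MMIO */
		if (s->cr0ack == val)
			return 0;
	}
	return -1;			/* stands in for -ETIMEDOUT */
}
```

The bounded retry loop mirrors the smmu_wait() macro above, which caps the number of 1µs polls at ARM_SMMU_POLL_TIMEOUT_US.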
From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:03:51 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-28-smostafa@google.com>
Subject: [RFC PATCH v2 27/58] KVM: arm64: smmu-v3: Setup command queue
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org,
 oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org,
 robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca,
 nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com,
 tabba@google.com, danielmentz@google.com, tzukui@google.com,
 Mostafa Saleh
Content-Type: text/plain; charset="utf-8"

From: Jean-Philippe Brucker

Map the command queue allocated by the host into the hypervisor address
space. When the host mappings are finalized, the queue is unmapped from
the host.
Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Mostafa Saleh
---
 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c | 165 ++++++++++++++++++++
 include/kvm/arm_smmu_v3.h                   |   4 +
 2 files changed, 169 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
index f7e60c188cb0..e15356509424 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
@@ -41,6 +41,15 @@ struct hyp_arm_smmu_v3_device *kvm_hyp_arm_smmu_v3_smmus;
 	__ret; \
 })
 
+#define smmu_wait_event(_smmu, _cond) \
+({ \
+	if ((_smmu)->features & ARM_SMMU_FEAT_SEV) { \
+		while (!(_cond)) \
+			wfe(); \
+	} \
+	smmu_wait(_cond); \
+})
+
 static int smmu_write_cr0(struct hyp_arm_smmu_v3_device *smmu, u32 val)
 {
 	writel_relaxed(val, smmu->base + ARM_SMMU_CR0);
@@ -60,6 +69,123 @@ static void smmu_reclaim_pages(u64 phys, size_t size)
 	WARN_ON(__pkvm_hyp_donate_host(phys >> PAGE_SHIFT, size >> PAGE_SHIFT));
 }
 
+#define Q_WRAP(smmu, reg)	((reg) & (1 << (smmu)->cmdq_log2size))
+#define Q_IDX(smmu, reg)	((reg) & ((1 << (smmu)->cmdq_log2size) - 1))
+
+static bool smmu_cmdq_full(struct hyp_arm_smmu_v3_device *smmu)
+{
+	u64 cons = readl_relaxed(smmu->base + ARM_SMMU_CMDQ_CONS);
+
+	return Q_IDX(smmu, smmu->cmdq_prod) == Q_IDX(smmu, cons) &&
+	       Q_WRAP(smmu, smmu->cmdq_prod) != Q_WRAP(smmu, cons);
+}
+
+static bool smmu_cmdq_empty(struct hyp_arm_smmu_v3_device *smmu)
+{
+	u64 cons = readl_relaxed(smmu->base + ARM_SMMU_CMDQ_CONS);
+
+	return Q_IDX(smmu, smmu->cmdq_prod) == Q_IDX(smmu, cons) &&
+	       Q_WRAP(smmu, smmu->cmdq_prod) == Q_WRAP(smmu, cons);
+}
+
+static int smmu_add_cmd(struct hyp_arm_smmu_v3_device *smmu,
+			struct arm_smmu_cmdq_ent *ent)
+{
+	int i;
+	int ret;
+	u64 cmd[CMDQ_ENT_DWORDS] = {};
+	int idx = Q_IDX(smmu, smmu->cmdq_prod);
+	u64 *slot = smmu->cmdq_base + idx * CMDQ_ENT_DWORDS;
+
+	if (smmu->iommu.power_is_off)
+		return -EPIPE;
+
+	ret = smmu_wait_event(smmu, !smmu_cmdq_full(smmu));
+	if (ret)
+		return ret;
+
+	cmd[0] |= FIELD_PREP(CMDQ_0_OP, ent->opcode);
+
+	switch (ent->opcode) {
+	case CMDQ_OP_CFGI_ALL:
+		cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_RANGE, 31);
+		break;
+	case CMDQ_OP_CFGI_CD:
+		cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SSID, ent->cfgi.ssid);
+		fallthrough;
+	case CMDQ_OP_CFGI_STE:
+		cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SID, ent->cfgi.sid);
+		cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_LEAF, ent->cfgi.leaf);
+		break;
+	case CMDQ_OP_TLBI_NH_VA:
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_NUM, ent->tlbi.num);
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_SCALE, ent->tlbi.scale);
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
+		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
+		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TTL, ent->tlbi.ttl);
+		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TG, ent->tlbi.tg);
+		cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_VA_MASK;
+		break;
+	case CMDQ_OP_TLBI_NSNH_ALL:
+		break;
+	case CMDQ_OP_TLBI_NH_ASID:
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
+		fallthrough;
+	case CMDQ_OP_TLBI_S12_VMALL:
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
+		break;
+	case CMDQ_OP_TLBI_S2_IPA:
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_NUM, ent->tlbi.num);
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_SCALE, ent->tlbi.scale);
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
+		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
+		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TTL, ent->tlbi.ttl);
+		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TG, ent->tlbi.tg);
+		cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_IPA_MASK;
+		break;
+	case CMDQ_OP_CMD_SYNC:
+		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	for (i = 0; i < CMDQ_ENT_DWORDS; i++)
+		slot[i] = cpu_to_le64(cmd[i]);
+
+	smmu->cmdq_prod++;
+	writel(Q_IDX(smmu, smmu->cmdq_prod) | Q_WRAP(smmu, smmu->cmdq_prod),
+	       smmu->base + ARM_SMMU_CMDQ_PROD);
+	return 0;
+}
+
+static int smmu_sync_cmd(struct hyp_arm_smmu_v3_device *smmu)
+{
+	int ret;
+	struct arm_smmu_cmdq_ent cmd = {
+		.opcode = CMDQ_OP_CMD_SYNC,
+	};
+
+	ret = smmu_add_cmd(smmu, &cmd);
+	if (ret)
+		return ret;
+
+	return smmu_wait_event(smmu, smmu_cmdq_empty(smmu));
+}
+
+__maybe_unused
+static int smmu_send_cmd(struct hyp_arm_smmu_v3_device *smmu,
+			 struct arm_smmu_cmdq_ent *cmd)
+{
+	int ret = smmu_add_cmd(smmu, cmd);
+
+	if (ret)
+		return ret;
+
+	return smmu_sync_cmd(smmu);
+}
+
 static int smmu_init_registers(struct hyp_arm_smmu_v3_device *smmu)
 {
 	u64 val, old;
@@ -94,6 +220,41 @@ static int smmu_init_registers(struct hyp_arm_smmu_v3_device *smmu)
 	return 0;
 }
 
+static int smmu_init_cmdq(struct hyp_arm_smmu_v3_device *smmu)
+{
+	u64 cmdq_base;
+	size_t cmdq_nr_entries, cmdq_size;
+	int ret;
+	enum kvm_pgtable_prot prot = PAGE_HYP;
+
+	cmdq_base = readq_relaxed(smmu->base + ARM_SMMU_CMDQ_BASE);
+	if (cmdq_base & ~(Q_BASE_RWA | Q_BASE_ADDR_MASK | Q_BASE_LOG2SIZE))
+		return -EINVAL;
+
+	smmu->cmdq_log2size = cmdq_base & Q_BASE_LOG2SIZE;
+	cmdq_nr_entries = 1 << smmu->cmdq_log2size;
+	cmdq_size = cmdq_nr_entries * CMDQ_ENT_DWORDS * 8;
+
+	cmdq_base &= Q_BASE_ADDR_MASK;
+
+	if (!(smmu->features & ARM_SMMU_FEAT_COHERENCY))
+		prot |= KVM_PGTABLE_PROT_NORMAL_NC;
+
+	ret = ___pkvm_host_donate_hyp_prot(cmdq_base >> PAGE_SHIFT,
+					   PAGE_ALIGN(cmdq_size) >> PAGE_SHIFT,
+					   false, prot);
+	if (ret)
+		return ret;
+
+	smmu->cmdq_base = hyp_phys_to_virt(cmdq_base);
+
+	memset(smmu->cmdq_base, 0, cmdq_size);
+	writel_relaxed(0, smmu->base + ARM_SMMU_CMDQ_PROD);
+	writel_relaxed(0, smmu->base + ARM_SMMU_CMDQ_CONS);
+
+	return 0;
+}
+
 static int smmu_init_device(struct hyp_arm_smmu_v3_device *smmu)
 {
 	int ret;
@@ -113,6 +274,10 @@ static int smmu_init_device(struct hyp_arm_smmu_v3_device *smmu)
 	if (ret)
 		return ret;
 
+	ret = smmu_init_cmdq(smmu);
+	if (ret)
+		return ret;
+
 	return kvm_iommu_init_device(&smmu->iommu);
 }
 
diff --git a/include/kvm/arm_smmu_v3.h b/include/kvm/arm_smmu_v3.h
index fb24bcef1624..393a1a04edba 100644
--- a/include/kvm/arm_smmu_v3.h
+++ b/include/kvm/arm_smmu_v3.h
@@ -16,8 +16,12 @@ struct hyp_arm_smmu_v3_device {
 	struct kvm_hyp_iommu iommu;
 	phys_addr_t mmio_addr;
 	size_t mmio_size;
+	unsigned long features;
 
 	void __iomem *base;
+	u32 cmdq_prod;
+	u64 *cmdq_base;
+	size_t cmdq_log2size;
 };
 
 extern size_t kvm_nvhe_sym(kvm_hyp_arm_smmu_v3_count);
-- 
2.47.0.338.g60cca15819-goog
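The PROD/CONS encoding used by smmu_cmdq_full()/smmu_cmdq_empty() in the patch above can be modeled in a few lines of plain C: the low cmdq_log2size bits are the queue index and the next bit is a wrap flag that flips on each pass over the ring. This is a standalone sketch, not hypervisor code; the queue size of 8 entries (LOG2SIZE of 3) is an arbitrary choice for the example.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Model of the SMMUv3 index+wrap queue pointer encoding. */
#define LOG2SIZE	3
#define Q_WRAP(reg)	((reg) & (1u << LOG2SIZE))
#define Q_IDX(reg)	((reg) & ((1u << LOG2SIZE) - 1))

/* Full: same index, different wrap bits (the producer lapped the consumer). */
static bool q_full(uint32_t prod, uint32_t cons)
{
	return Q_IDX(prod) == Q_IDX(cons) && Q_WRAP(prod) != Q_WRAP(cons);
}

/* Empty: same index and same wrap bit. */
static bool q_empty(uint32_t prod, uint32_t cons)
{
	return Q_IDX(prod) == Q_IDX(cons) && Q_WRAP(prod) == Q_WRAP(cons);
}
```

The extra wrap bit is what lets the same index value distinguish a completely full ring from a completely empty one without wasting a slot.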
From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:03:52 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-29-smostafa@google.com>
Subject: [RFC PATCH v2 28/58] KVM: arm64: smmu-v3: Setup stream table
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org,
 oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org,
 robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca,
 nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com,
 tabba@google.com, danielmentz@google.com, tzukui@google.com,
 Mostafa Saleh
Content-Type: text/plain; charset="utf-8"

Map the stream table allocated by the host into the hypervisor address
space. When the host mappings are finalized, the table is unmapped from
the host. Depending on the host configuration, the stream table may
have one or two levels. Populate the level-2 stream table lazily.

Also, add accessors for STEs.
Signed-off-by: Mostafa Saleh
Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c | 157 +++++++++++++++++++-
 include/kvm/arm_smmu_v3.h                   |   3 +
 2 files changed, 159 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
index e15356509424..43d2ce7828c1 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
@@ -174,7 +174,6 @@ static int smmu_sync_cmd(struct hyp_arm_smmu_v3_device *smmu)
 	return smmu_wait_event(smmu, smmu_cmdq_empty(smmu));
 }
 
-__maybe_unused
 static int smmu_send_cmd(struct hyp_arm_smmu_v3_device *smmu,
 			 struct arm_smmu_cmdq_ent *cmd)
 {
@@ -186,6 +185,94 @@ static int smmu_send_cmd(struct hyp_arm_smmu_v3_device *smmu,
 	return smmu_sync_cmd(smmu);
 }
 
+__maybe_unused
+static int smmu_sync_ste(struct hyp_arm_smmu_v3_device *smmu, u32 sid)
+{
+	struct arm_smmu_cmdq_ent cmd = {
+		.opcode = CMDQ_OP_CFGI_STE,
+		.cfgi.sid = sid,
+		.cfgi.leaf = true,
+	};
+
+	return smmu_send_cmd(smmu, &cmd);
+}
+
+static int smmu_alloc_l2_strtab(struct hyp_arm_smmu_v3_device *smmu, u32 sid)
+{
+	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+	struct arm_smmu_strtab_l1 *l1_desc;
+	dma_addr_t l2ptr_dma;
+	struct arm_smmu_strtab_l2 *l2table;
+	size_t l2_order = get_order(sizeof(struct arm_smmu_strtab_l2));
+	int flags = 0;
+
+	l1_desc = &cfg->l2.l1tab[arm_smmu_strtab_l1_idx(sid)];
+	if (l1_desc->l2ptr)
+		return 0;
+
+	if (!(smmu->features & ARM_SMMU_FEAT_COHERENCY))
+		flags |= IOMMU_PAGE_NOCACHE;
+
+	l2table = kvm_iommu_donate_pages(l2_order, flags);
+	if (!l2table)
+		return -ENOMEM;
+
+	l2ptr_dma = hyp_virt_to_phys(l2table);
+
+	if (l2ptr_dma & (~STRTAB_L1_DESC_L2PTR_MASK | ~PAGE_MASK)) {
+		kvm_iommu_reclaim_pages(l2table, l2_order);
+		return -EINVAL;
+	}
+
+	/* Ensure the empty stream table is visible before the descriptor write */
+	wmb();
+
+	arm_smmu_write_strtab_l1_desc(l1_desc, l2ptr_dma);
+	return 0;
+}
+
+static struct arm_smmu_ste *
+smmu_get_ste_ptr(struct hyp_arm_smmu_v3_device *smmu, u32 sid)
+{
+	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+
+	if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
+		struct arm_smmu_strtab_l1 *l1_desc =
+			&cfg->l2.l1tab[arm_smmu_strtab_l1_idx(sid)];
+		struct arm_smmu_strtab_l2 *l2ptr;
+
+		if (arm_smmu_strtab_l1_idx(sid) > cfg->l2.num_l1_ents)
+			return NULL;
+		/* L2 should be allocated before calling this. */
+		if (WARN_ON(!l1_desc->l2ptr))
+			return NULL;
+
+		l2ptr = hyp_phys_to_virt(l1_desc->l2ptr & STRTAB_L1_DESC_L2PTR_MASK);
+		/* Two-level walk */
+		return &l2ptr->stes[arm_smmu_strtab_l2_idx(sid)];
+	}
+
+	if (sid > cfg->linear.num_ents)
+		return NULL;
+	/* Simple linear lookup */
+	return &cfg->linear.table[sid];
+}
+
+__maybe_unused
+static struct arm_smmu_ste *
+smmu_get_alloc_ste_ptr(struct hyp_arm_smmu_v3_device *smmu, u32 sid)
+{
+	if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
+		int ret = smmu_alloc_l2_strtab(smmu, sid);
+
+		if (ret) {
+			WARN_ON(ret != -ENOMEM);
+			return NULL;
+		}
+	}
+	return smmu_get_ste_ptr(smmu, sid);
+}
+
 static int smmu_init_registers(struct hyp_arm_smmu_v3_device *smmu)
 {
 	u64 val, old;
@@ -255,6 +342,70 @@ static int smmu_init_cmdq(struct hyp_arm_smmu_v3_device *smmu)
 	return 0;
 }
 
+static int smmu_init_strtab(struct hyp_arm_smmu_v3_device *smmu)
+{
+	int ret;
+	u64 strtab_base;
+	size_t strtab_size;
+	u32 strtab_cfg, fmt;
+	int split, log2size;
+	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+	enum kvm_pgtable_prot prot = PAGE_HYP;
+
+	if (!(smmu->features & ARM_SMMU_FEAT_COHERENCY))
+		prot |= KVM_PGTABLE_PROT_NORMAL_NC;
+
+	strtab_base = readq_relaxed(smmu->base + ARM_SMMU_STRTAB_BASE);
+	if (strtab_base & ~(STRTAB_BASE_ADDR_MASK | STRTAB_BASE_RA))
+		return -EINVAL;
+
+	strtab_cfg = readl_relaxed(smmu->base + ARM_SMMU_STRTAB_BASE_CFG);
+	if (strtab_cfg & ~(STRTAB_BASE_CFG_FMT | STRTAB_BASE_CFG_SPLIT |
+			   STRTAB_BASE_CFG_LOG2SIZE))
+		return -EINVAL;
+
+	fmt = FIELD_GET(STRTAB_BASE_CFG_FMT, strtab_cfg);
+	split = FIELD_GET(STRTAB_BASE_CFG_SPLIT, strtab_cfg);
+	log2size = FIELD_GET(STRTAB_BASE_CFG_LOG2SIZE, strtab_cfg);
+	strtab_base &= STRTAB_BASE_ADDR_MASK;
+
+	switch (fmt) {
+	case STRTAB_BASE_CFG_FMT_LINEAR:
+		if (split)
+			return -EINVAL;
+		cfg->linear.num_ents = 1 << log2size;
+		strtab_size = cfg->linear.num_ents * sizeof(struct arm_smmu_ste);
+		cfg->linear.ste_dma = strtab_base;
+		ret = ___pkvm_host_donate_hyp_prot(strtab_base >> PAGE_SHIFT,
+						   PAGE_ALIGN(strtab_size) >> PAGE_SHIFT,
+						   false, prot);
+		if (ret)
+			return -EINVAL;
+		cfg->linear.table = hyp_phys_to_virt(strtab_base);
+		/* Disable all STEs */
+		memset(cfg->linear.table, 0, strtab_size);
+		break;
+	case STRTAB_BASE_CFG_FMT_2LVL:
+		if (split != STRTAB_SPLIT)
+			return -EINVAL;
+		cfg->l2.num_l1_ents = 1 << max(0, log2size - split);
+		strtab_size = cfg->l2.num_l1_ents * sizeof(struct arm_smmu_strtab_l1);
+		cfg->l2.l1_dma = strtab_base;
+		ret = ___pkvm_host_donate_hyp_prot(strtab_base >> PAGE_SHIFT,
+						   PAGE_ALIGN(strtab_size) >> PAGE_SHIFT,
+						   false, prot);
+		if (ret)
+			return -EINVAL;
+		cfg->l2.l1tab = hyp_phys_to_virt(strtab_base);
+		/* Disable all STEs */
+		memset(cfg->l2.l1tab, 0, strtab_size);
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
 static int smmu_init_device(struct hyp_arm_smmu_v3_device *smmu)
 {
 	int ret;
@@ -278,6 +429,10 @@ static int smmu_init_device(struct hyp_arm_smmu_v3_device *smmu)
 	if (ret)
 		return ret;
 
+	ret = smmu_init_strtab(smmu);
+	if (ret)
+		return ret;
+
 	return kvm_iommu_init_device(&smmu->iommu);
 }
 
diff --git a/include/kvm/arm_smmu_v3.h b/include/kvm/arm_smmu_v3.h
index 393a1a04edba..352c1b2dc72a 100644
--- a/include/kvm/arm_smmu_v3.h
+++ b/include/kvm/arm_smmu_v3.h
@@ -2,6 +2,7 @@
 #ifndef __KVM_ARM_SMMU_V3_H
 #define __KVM_ARM_SMMU_V3_H
 
+#include
 #include
 #include
 
@@ -22,6 +23,8 @@ struct hyp_arm_smmu_v3_device {
 	u32 cmdq_prod;
 	u64 *cmdq_base;
 	size_t cmdq_log2size;
+	/* strtab_cfg.l2.l2ptrs is not used, instead computed from L1 */
+	struct arm_smmu_strtab_cfg strtab_cfg;
 };
 
 extern size_t kvm_nvhe_sym(kvm_hyp_arm_smmu_v3_count);
-- 
2.47.0.338.g60cca15819-goog
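The two-level walk done by smmu_get_ste_ptr() splits the StreamID in two: the low bits select an STE inside an L2 page and the remaining bits select the L1 descriptor, mirroring the kernel driver's arm_smmu_strtab_l1_idx()/arm_smmu_strtab_l2_idx() helpers. A minimal sketch of that split (the SPLIT value of 8 matches the driver's STRTAB_SPLIT, but treat it as an assumption of this example):

```c
#include <assert.h>
#include <stdint.h>

/* StreamID split for a two-level stream table: high bits pick the L1
 * descriptor, low SPLIT bits pick the STE within the L2 page it points to. */
#define SPLIT	8

static uint32_t l1_idx(uint32_t sid)
{
	return sid >> SPLIT;			/* which L1 descriptor */
}

static uint32_t l2_idx(uint32_t sid)
{
	return sid & ((1u << SPLIT) - 1);	/* which STE within the L2 page */
}
```

Lazy population then only needs to allocate an L2 page the first time some `l1_idx(sid)` is touched, which is exactly what smmu_alloc_l2_strtab() does before the walk.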
Date: Thu, 12 Dec 2024 18:03:53 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-30-smostafa@google.com>
Subject: [RFC PATCH v2 29/58] KVM: arm64: smmu-v3: Setup event queue
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org

The host can use the event queue for debugging, and unlike the command
queue, it is managed by the kernel. However, it must be put in a shared
state so that it can't be donated to the hypervisor later. This relies
on ARM_SMMU_EVTQ_BASE not being changeable after de-privilege.
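For reference, the event-queue sizing arithmetic can be checked in isolation. This is a hypothetical standalone sketch, not part of the patch: the macro values mirror the SMMUv3 driver's definitions, `evtq_nr_pages` is a made-up helper, and 4K pages are assumed.

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define Q_BASE_LOG2SIZE	0x1fULL	/* low 5 bits of the queue base register */
#define EVTQ_ENT_DWORDS	4	/* one event record is four 64-bit words */

/* Number of pages the host must share for a given ARM_SMMU_EVTQ_BASE value. */
static size_t evtq_nr_pages(uint64_t evtq_base)
{
	size_t nr_entries = (size_t)1 << (evtq_base & Q_BASE_LOG2SIZE);
	size_t q_size = nr_entries * EVTQ_ENT_DWORDS * 8;	/* bytes */

	/* PAGE_ALIGN(q_size) >> PAGE_SHIFT */
	return (q_size + PAGE_SIZE - 1) >> PAGE_SHIFT;
}
```

A 2^7-entry queue (128 records of 32 bytes) fills exactly one 4K page; each further log2 step doubles the page count shared with the hypervisor.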
Signed-off-by: Mostafa Saleh
---
 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c | 39 +++++++++++++++++++++
 1 file changed, 39 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
index 43d2ce7828c1..5020f74421ad 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
@@ -342,6 +342,41 @@ static int smmu_init_cmdq(struct hyp_arm_smmu_v3_device *smmu)
 	return 0;
 }

+/*
+ * Event queue support is optional and managed by the kernel.
+ * However, the queue must be put in a shared state so it can't be
+ * donated to the hypervisor later.
+ * This relies on ARM_SMMU_EVTQ_BASE not being changeable after
+ * de-privilege.
+ */
+static int smmu_init_evtq(struct hyp_arm_smmu_v3_device *smmu)
+{
+	u64 evtq_base, evtq_pfn;
+	size_t evtq_nr_entries, evtq_size, evtq_nr_pages;
+	size_t i;
+	int ret;
+
+	evtq_base = readq_relaxed(smmu->base + ARM_SMMU_EVTQ_BASE);
+	if (!evtq_base)
+		return 0;
+
+	if (evtq_base & ~(Q_BASE_RWA | Q_BASE_ADDR_MASK | Q_BASE_LOG2SIZE))
+		return -EINVAL;
+
+	evtq_nr_entries = 1 << (evtq_base & Q_BASE_LOG2SIZE);
+	evtq_size = evtq_nr_entries * EVTQ_ENT_DWORDS * 8;
+	evtq_nr_pages = PAGE_ALIGN(evtq_size) >> PAGE_SHIFT;
+
+	evtq_pfn = PAGE_ALIGN(evtq_base & Q_BASE_ADDR_MASK) >> PAGE_SHIFT;
+
+	for (i = 0; i < evtq_nr_pages; ++i) {
+		ret = __pkvm_host_share_hyp(evtq_pfn + i);
+		if (ret)
+			return ret;
+	}
+	return 0;
+}
+
 static int smmu_init_strtab(struct hyp_arm_smmu_v3_device *smmu)
 {
 	int ret;
@@ -429,6 +464,10 @@ static int smmu_init_device(struct hyp_arm_smmu_v3_device *smmu)
 	if (ret)
 		return ret;

+	ret = smmu_init_evtq(smmu);
+	if (ret)
+		return ret;
+
 	ret = smmu_init_strtab(smmu);
 	if (ret)
 		return ret;
--
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:03:54 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-31-smostafa@google.com>
Subject: [RFC PATCH v2 30/58] KVM: arm64: smmu-v3: Reset the device
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org

From: Jean-Philippe Brucker

Now that all structures are initialized, send global invalidations and
reset the SMMUv3 device.

Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Mostafa Saleh
---
 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c | 38 +++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
index 5020f74421ad..58662c2c4c97 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
@@ -441,6 +441,40 @@ static int smmu_init_strtab(struct hyp_arm_smmu_v3_device *smmu)
 	return 0;
 }

+static int smmu_reset_device(struct hyp_arm_smmu_v3_device *smmu)
+{
+	int ret;
+	struct arm_smmu_cmdq_ent cfgi_cmd = {
+		.opcode = CMDQ_OP_CFGI_ALL,
+	};
+	struct arm_smmu_cmdq_ent tlbi_cmd = {
+		.opcode = CMDQ_OP_TLBI_NSNH_ALL,
+	};
+
+	/* Invalidate all cached configs and TLBs */
+	ret = smmu_write_cr0(smmu, CR0_CMDQEN);
+	if (ret)
+		return ret;
+
+	ret = smmu_add_cmd(smmu, &cfgi_cmd);
+	if (ret)
+		goto err_disable_cmdq;
+
+	ret = smmu_add_cmd(smmu, &tlbi_cmd);
+	if (ret)
+		goto err_disable_cmdq;
+
+	ret = smmu_sync_cmd(smmu);
+	if (ret)
+		goto err_disable_cmdq;
+
+	/* Enable translation */
+	return smmu_write_cr0(smmu, CR0_SMMUEN | CR0_CMDQEN | CR0_ATSCHK);
+
+err_disable_cmdq:
+	return smmu_write_cr0(smmu, 0);
+}
+
 static int smmu_init_device(struct hyp_arm_smmu_v3_device *smmu)
 {
 	int ret;
@@ -472,6 +506,10 @@ static int smmu_init_device(struct hyp_arm_smmu_v3_device *smmu)
 	if (ret)
 		return ret;

+	ret = smmu_reset_device(smmu);
+	if (ret)
+		return ret;
+
 	return kvm_iommu_init_device(&smmu->iommu);
 }

--
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:03:55 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-32-smostafa@google.com>
Subject: [RFC PATCH v2 31/58] KVM: arm64: smmu-v3: Support io-pgtable
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org

From: Jean-Philippe Brucker

Implement the hypervisor version of the io-pgtable allocation functions,
mirroring drivers/iommu/io-pgtable-arm.c. Page allocation uses the IOMMU
pool filled by the host.
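Since the hypervisor pool hands out power-of-two runs of pages while callers may free a PGD with its exact byte size, the rounding has to agree on both paths. A standalone sketch of that arithmetic (assuming 4K pages; `get_order` mirrors the kernel helper of the same name, reimplemented here so the snippet is self-contained):

```c
#include <stddef.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

/* get_order(): smallest n such that 2^n pages cover 'size' bytes. */
static int get_order(size_t size)
{
	int order = 0;
	size_t pages = PAGE_ALIGN(size) >> PAGE_SHIFT;

	while (((size_t)1 << order) < pages)
		order++;
	return order;
}
```

Freeing with `get_order(PAGE_ALIGN(size))` therefore lands on the same order the allocation used, even when `size` was not page-aligned.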
Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Mostafa Saleh
---
 arch/arm64/kvm/hyp/nvhe/Makefile              |   2 +
 .../arm64/kvm/hyp/nvhe/iommu/io-pgtable-arm.c | 153 ++++++++++++++++++
 include/linux/io-pgtable-arm.h                |  11 ++
 3 files changed, 166 insertions(+)
 create mode 100644 arch/arm64/kvm/hyp/nvhe/iommu/io-pgtable-arm.c

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index edfd8a11ac90..e4f662b1447f 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -17,6 +17,8 @@ hyp-obj-$(CONFIG_MODULES) += modules.o
 hyp-obj-y += $(lib-objs)

 hyp-obj-$(CONFIG_ARM_SMMU_V3_PKVM) += iommu/arm-smmu-v3.o
+hyp-obj-$(CONFIG_ARM_SMMU_V3_PKVM) += iommu/io-pgtable-arm.o \
+	../../../../../drivers/iommu/io-pgtable-arm-common.o

 $(obj)/hyp.lds: $(src)/hyp.lds.S FORCE
	$(call if_changed_dep,cpp_lds_S)
diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/io-pgtable-arm.c b/arch/arm64/kvm/hyp/nvhe/iommu/io-pgtable-arm.c
new file mode 100644
index 000000000000..aa5bf7c0ed03
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/io-pgtable-arm.c
@@ -0,0 +1,153 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2022 Arm Ltd.
+ */
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+int arm_lpae_map_exists(void)
+{
+	return -EEXIST;
+}
+
+int arm_lpae_unmap_empty(void)
+{
+	return -EEXIST;
+}
+
+void *__arm_lpae_alloc_pages(size_t size, gfp_t gfp,
+			     struct io_pgtable_cfg *cfg, void *cookie)
+{
+	void *addr;
+
+	if (!PAGE_ALIGNED(size))
+		return NULL;
+
+	addr = kvm_iommu_donate_pages(get_order(size), 0);
+
+	if (addr && !cfg->coherent_walk)
+		kvm_flush_dcache_to_poc(addr, size);
+
+	return addr;
+}
+
+void __arm_lpae_free_pages(void *addr, size_t size, struct io_pgtable_cfg *cfg,
+			   void *cookie)
+{
+	u8 order;
+
+	/*
+	 * It's guaranteed that all allocations are aligned, but core code
+	 * might free the PGD with its actual size.
+	 */
+	order = get_order(PAGE_ALIGN(size));
+
+	if (!cfg->coherent_walk)
+		kvm_flush_dcache_to_poc(addr, size);
+
+	kvm_iommu_reclaim_pages(addr, order);
+}
+
+void __arm_lpae_sync_pte(arm_lpae_iopte *ptep, int num_entries,
+			 struct io_pgtable_cfg *cfg)
+{
+	if (!cfg->coherent_walk)
+		kvm_flush_dcache_to_poc(ptep, sizeof(*ptep) * num_entries);
+}
+
+static int kvm_arm_io_pgtable_init(struct io_pgtable_cfg *cfg,
+				   struct arm_lpae_io_pgtable *data)
+{
+	int ret = -EINVAL;
+
+	if (cfg->fmt == ARM_64_LPAE_S2)
+		ret = arm_lpae_init_pgtable_s2(cfg, data);
+	else if (cfg->fmt == ARM_64_LPAE_S1)
+		ret = arm_lpae_init_pgtable_s1(cfg, data);
+
+	if (ret)
+		return ret;
+
+	data->iop.cfg = *cfg;
+	data->iop.fmt = cfg->fmt;
+
+	return 0;
+}
+
+struct io_pgtable *kvm_arm_io_pgtable_alloc(struct io_pgtable_cfg *cfg,
+					    void *cookie,
+					    int *out_ret)
+{
+	size_t pgd_size, alignment;
+	struct arm_lpae_io_pgtable *data;
+	int ret;
+
+	data = hyp_alloc(sizeof(*data));
+	if (!data) {
+		*out_ret = hyp_alloc_errno();
+		return NULL;
+	}
+
+	ret = kvm_arm_io_pgtable_init(cfg, data);
+	if (ret)
+		goto out_free;
+
+	pgd_size = PAGE_ALIGN(ARM_LPAE_PGD_SIZE(data));
+	data->pgd = __arm_lpae_alloc_pages(pgd_size, 0, &data->iop.cfg, cookie);
+	if (!data->pgd) {
+		ret = -ENOMEM;
+		goto out_free;
+	}
+	/*
+	 * If it has eight or more entries, the table must be aligned on
+	 * its size. Otherwise 64 bytes.
+	 */
+	alignment = max(pgd_size, 8 * sizeof(arm_lpae_iopte));
+	if (!IS_ALIGNED(hyp_virt_to_phys(data->pgd), alignment)) {
+		__arm_lpae_free_pages(data->pgd, pgd_size,
+				      &data->iop.cfg, cookie);
+		ret = -EINVAL;
+		goto out_free;
+	}
+
+	data->iop.cookie = cookie;
+	if (cfg->fmt == ARM_64_LPAE_S2)
+		data->iop.cfg.arm_lpae_s2_cfg.vttbr = __arm_lpae_virt_to_phys(data->pgd);
+	else if (cfg->fmt == ARM_64_LPAE_S1)
+		data->iop.cfg.arm_lpae_s1_cfg.ttbr = __arm_lpae_virt_to_phys(data->pgd);
+
+	if (!data->iop.cfg.coherent_walk)
+		kvm_flush_dcache_to_poc(data->pgd, pgd_size);
+
+	/* Ensure the empty pgd is visible before any actual TTBR write */
+	wmb();
+
+	*out_ret = 0;
+	return &data->iop;
+out_free:
+	hyp_free(data);
+	*out_ret = ret;
+	return NULL;
+}
+
+int kvm_arm_io_pgtable_free(struct io_pgtable *iopt)
+{
+	struct arm_lpae_io_pgtable *data = io_pgtable_to_data(iopt);
+	size_t pgd_size = ARM_LPAE_PGD_SIZE(data);
+
+	if (!data->iop.cfg.coherent_walk)
+		kvm_flush_dcache_to_poc(data->pgd, pgd_size);
+
+	io_pgtable_tlb_flush_all(iopt);
+	__arm_lpae_free_pgtable(data, data->start_level, data->pgd);
+	hyp_free(data);
+	return 0;
+}
diff --git a/include/linux/io-pgtable-arm.h b/include/linux/io-pgtable-arm.h
index 337e9254fdbd..88922314157d 100644
--- a/include/linux/io-pgtable-arm.h
+++ b/include/linux/io-pgtable-arm.h
@@ -191,8 +191,19 @@ static inline bool iopte_table(arm_lpae_iopte pte, int lvl)
 	return iopte_type(pte) == ARM_LPAE_PTE_TYPE_TABLE;
 }

+#ifdef __KVM_NVHE_HYPERVISOR__
+#include
+#define __arm_lpae_virt_to_phys hyp_virt_to_phys
+#define __arm_lpae_phys_to_virt hyp_phys_to_virt
+
+struct io_pgtable *kvm_arm_io_pgtable_alloc(struct io_pgtable_cfg *cfg,
+					    void *cookie,
+					    int *out_ret);
+int kvm_arm_io_pgtable_free(struct io_pgtable *iop);
+#else
 #define __arm_lpae_virt_to_phys __pa
 #define __arm_lpae_phys_to_virt __va
+#endif

 /* Generic functions */
 void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl,
--
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:03:56 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-33-smostafa@google.com>
Subject: [RFC PATCH v2 32/58] KVM: arm64: smmu-v3: Add {alloc/free}_domain
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org

Add the SMMUv3 alloc/free domain operations. As these operations are
not tied to an SMMU instance, we can't do much with the io-pgtable
allocation or configuration at this point.

Signed-off-by: Mostafa Saleh
---
 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c | 64 +++++++++++++++++++++
 include/kvm/arm_smmu_v3.h                   |  6 ++
 2 files changed, 70 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
index 58662c2c4c97..3181933e9a34 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
@@ -7,6 +7,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
 #include
@@ -50,6 +52,22 @@ struct hyp_arm_smmu_v3_device *kvm_hyp_arm_smmu_v3_smmus;
 	smmu_wait(_cond);				\
 })

+/*
+ * SMMUv3 domain:
+ * @domain: Pointer to the IOMMU domain.
+ * @smmu: SMMU instance for this domain.
+ * @type: Type of domain (S1 or S2)
+ * @pgt_lock: Lock protecting the page table
+ * @pgtable: io_pgtable instance for this domain
+ */
+struct hyp_arm_smmu_v3_domain {
+	struct kvm_hyp_iommu_domain	*domain;
+	struct hyp_arm_smmu_v3_device	*smmu;
+	u32				type;
+	hyp_spinlock_t			pgt_lock;
+	struct io_pgtable		*pgtable;
+};
+
 static int smmu_write_cr0(struct hyp_arm_smmu_v3_device *smmu, u32 val)
 {
 	writel_relaxed(val, smmu->base + ARM_SMMU_CR0);
@@ -541,7 +559,53 @@ static int smmu_init(void)
 	return ret;
 }

+static struct kvm_hyp_iommu *smmu_id_to_iommu(pkvm_handle_t smmu_id)
+{
+	if (smmu_id >= kvm_hyp_arm_smmu_v3_count)
+		return NULL;
+	smmu_id = array_index_nospec(smmu_id, kvm_hyp_arm_smmu_v3_count);
+
+	return &kvm_hyp_arm_smmu_v3_smmus[smmu_id].iommu;
+}
+
+static int smmu_alloc_domain(struct kvm_hyp_iommu_domain *domain, int type)
+{
+	struct hyp_arm_smmu_v3_domain *smmu_domain;
+
+	if (type >= KVM_ARM_SMMU_DOMAIN_MAX)
+		return -EINVAL;
+
+	smmu_domain = hyp_alloc(sizeof(*smmu_domain));
+	if (!smmu_domain)
+		return -ENOMEM;
+
+	/*
+	 * Can't do much without knowing the SMMUv3 instance.
+	 * The page table will be allocated at attach_dev, but it can be
+	 * freed from free_domain.
+	 */
+	smmu_domain->domain = domain;
+	smmu_domain->type = type;
+	hyp_spin_lock_init(&smmu_domain->pgt_lock);
+	domain->priv = (void *)smmu_domain;
+
+	return 0;
+}
+
+static void smmu_free_domain(struct kvm_hyp_iommu_domain *domain)
+{
+	struct hyp_arm_smmu_v3_domain *smmu_domain = domain->priv;
+
+	if (smmu_domain->pgtable)
+		kvm_arm_io_pgtable_free(smmu_domain->pgtable);
+
+	hyp_free(smmu_domain);
+}
+
 /* Shared with the kernel driver in EL1 */
 struct kvm_iommu_ops smmu_ops = {
 	.init = smmu_init,
+	.get_iommu_by_id = smmu_id_to_iommu,
+	.alloc_domain = smmu_alloc_domain,
+	.free_domain = smmu_free_domain,
 };
diff --git a/include/kvm/arm_smmu_v3.h b/include/kvm/arm_smmu_v3.h
index 352c1b2dc72a..ded98cbaebc1 100644
--- a/include/kvm/arm_smmu_v3.h
+++ b/include/kvm/arm_smmu_v3.h
@@ -33,4 +33,10 @@ extern size_t kvm_nvhe_sym(kvm_hyp_arm_smmu_v3_count);
 extern struct hyp_arm_smmu_v3_device *kvm_nvhe_sym(kvm_hyp_arm_smmu_v3_smmus);
 #define kvm_hyp_arm_smmu_v3_smmus kvm_nvhe_sym(kvm_hyp_arm_smmu_v3_smmus)

+enum kvm_arm_smmu_domain_type {
+	KVM_ARM_SMMU_DOMAIN_S1,
+	KVM_ARM_SMMU_DOMAIN_S2,
+	KVM_ARM_SMMU_DOMAIN_MAX,
+};
+
 #endif /* __KVM_ARM_SMMU_V3_H */
--
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
t=1734026756; c=relaxed/simple; bh=aMJpNnqVCqfIZxdPd82zCfyFBJH1oACArK53wQftNKU=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=icJQ/2MqoiZqOG7qsacPtyjvr8h7Cmbfbs78B98A+bGGzFiPM5S+nCCPIvqRLifB2UMoW2fLAW2pZ+pzII0aYOHzEVS7k/UztJz4iN0HC52rBbXk+M/3QiA8f/2vpZQZYAN1pwV05DaNyMX8iEF1YafWQE+SLo+Y6j9CWk8aLP8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--smostafa.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=eKerZ1gr; arc=none smtp.client-ip=209.85.128.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--smostafa.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="eKerZ1gr" Received: by mail-wm1-f74.google.com with SMTP id 5b1f17b1804b1-4362153dcd6so5631125e9.2 for ; Thu, 12 Dec 2024 10:05:53 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1734026752; x=1734631552; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=0JvanTJSMR+XI/Wjo2HTOKln5DEsfQoh3Hho27FSuog=; b=eKerZ1grfZhALlOPnjE7gRyFq0qJWjpOa2Prn/96w1Qf3kcjJQXX7q1XF/kBfOPeAA DD/VaDIYnFRBFLILoLuzY7u8JPT31nXrd8p0Q5q8OZdLAn7E5h224VzrwiAwHD+PQ8i4 Fg3kklFCV1hEqXAbcKtpAsCxoR3dtoIxWGf7brABYFQpDxnqbcIUjTu6NZAnu54xoMlv cGPK1vzLJnRc6LT5SxO7sknIWk4bR1sIcFKuVooSps6k3I9+GPtFMvJpI7czisB1Nzle JJW84jHkO0uf7gs+EgIoRwTaMXMBby/MB+XhXt2AyuLmFAPQL526a/03qk50g7IfYdZX zSPw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1734026752; x=1734631552; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to 
:date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=0JvanTJSMR+XI/Wjo2HTOKln5DEsfQoh3Hho27FSuog=; b=AfG1i/30Q3uuBUOB1vdHLRWe0MVnaM1OsH4oX86Y4HiJmvV6Bklp5FeNEPRSJBBQ0J TtGkE4M/f5ujvpNo6vNqen+aA+41zumEWGHOn3hEnFXtI1rTp5xEyw/BCDEvRMOBvPJh /3RXOvxDTCC0ntLBk4B/+2HOyZKjBD2T6ZyXXf9LK2M2fd1jwBvP82Mg8dmR77JJ/m04 /qwJ1VcErq5QbDriAnRZq1lrWqoZ6PEGsTqOOJTxjhEEQEdA7A7o7mJR+o1r0D5vsh5p 9qZV6JQ0+wBcCzM+MJ6WpqYbasLovVsm0OO3j/l3LzIlJXkrtWoxXQEkZA02yXLaXTP1 53CQ== X-Forwarded-Encrypted: i=1; AJvYcCWg18RYLY7tn/lg1JrkRkqHCrRvQMoChTtqGR1C9iYkIVdF7s3AXfHxLz7Ovq14DMBFNV8iJx+TgOzs3kY=@vger.kernel.org X-Gm-Message-State: AOJu0YxTBz6xC/irMyxOsu5sR+3Gjgpp/PkI5JcMqwFkVjibXKgTJZ70 5D2m4Ucn/0h7dVAm7w0+QxOiVxS5vuN3tMtVRzWTGe5mrnLMHcfiFMEE36vz+JZWG3ZfEfM9C2Z rM+dJdOHEsQ== X-Google-Smtp-Source: AGHT+IGWGdZgTY7Fu4Fy8oS8GVUUpzVpI6h6GF8Kt9ZH/qx6w2Z6AjSEvrQ/zUVQNCjYoCTIQFsDcTLIxiWTaw== X-Received: from wmkz18.prod.google.com ([2002:a7b:c7d2:0:b0:434:a15f:e7ea]) (user=smostafa job=prod-delivery.src-stubby-dispatcher) by 2002:a05:600c:4e4b:b0:434:a239:d2fe with SMTP id 5b1f17b1804b1-4361c400c1bmr52688975e9.28.1734026752664; Thu, 12 Dec 2024 10:05:52 -0800 (PST) Date: Thu, 12 Dec 2024 18:03:57 +0000 In-Reply-To: <20241212180423.1578358-1-smostafa@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20241212180423.1578358-1-smostafa@google.com> X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog Message-ID: <20241212180423.1578358-34-smostafa@google.com> Subject: [RFC PATCH v2 33/58] KVM: arm64: smmu-v3: Add TLB ops From: Mostafa Saleh To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, 
 jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com,
 qperret@google.com, tabba@google.com, danielmentz@google.com,
 tzukui@google.com, Mostafa Saleh
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Add TLB invalidation functions that will be used next by the page table
code and by the attach/detach functions.

Signed-off-by: Mostafa Saleh
Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c | 167 ++++++++++++++++++++
 1 file changed, 167 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
index 3181933e9a34..5f00d5cdf5bc 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
@@ -602,10 +602,177 @@ static void smmu_free_domain(struct kvm_hyp_iommu_domain *domain)
 	hyp_free(smmu_domain);
 }
 
+static void smmu_inv_domain(struct hyp_arm_smmu_v3_domain *smmu_domain)
+{
+	struct kvm_hyp_iommu_domain *domain = smmu_domain->domain;
+	struct hyp_arm_smmu_v3_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_cmdq_ent cmd = {};
+
+	if (smmu_domain->pgtable->cfg.fmt == ARM_64_LPAE_S2) {
+		cmd.opcode = CMDQ_OP_TLBI_S12_VMALL;
+		cmd.tlbi.vmid = domain->domain_id;
+	} else {
+		cmd.opcode = CMDQ_OP_TLBI_NH_ASID;
+		cmd.tlbi.asid = domain->domain_id;
+	}
+
+	if (smmu->iommu.power_is_off)
+		return;
+
+	WARN_ON(smmu_send_cmd(smmu, &cmd));
+}
+
+static void smmu_tlb_flush_all(void *cookie)
+{
+	struct kvm_hyp_iommu_domain *domain = cookie;
+	struct hyp_arm_smmu_v3_domain *smmu_domain = domain->priv;
+	struct hyp_arm_smmu_v3_device *smmu = smmu_domain->smmu;
+
+	kvm_iommu_lock(&smmu->iommu);
+	smmu_inv_domain(smmu_domain);
+	kvm_iommu_unlock(&smmu->iommu);
+}
+
+static int smmu_tlb_inv_range_smmu(struct hyp_arm_smmu_v3_device *smmu,
+				   struct kvm_hyp_iommu_domain *domain,
+				   struct arm_smmu_cmdq_ent *cmd,
+				   unsigned long iova, size_t size, size_t granule)
+{
+	int ret = 0;
+	unsigned long end = iova + size, num_pages = 0, tg = 0;
+	size_t inv_range = granule;
+	struct hyp_arm_smmu_v3_domain *smmu_domain = domain->priv;
+
+	kvm_iommu_lock(&smmu->iommu);
+	if (smmu->iommu.power_is_off)
+		goto out_ret;
+
+	/* Almost copy-paste from the kernel driver. */
+	if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
+		/* Get the leaf page size */
+		tg = __ffs(smmu_domain->pgtable->cfg.pgsize_bitmap);
+
+		num_pages = size >> tg;
+
+		/* Convert page size of 12,14,16 (log2) to 1,2,3 */
+		cmd->tlbi.tg = (tg - 10) / 2;
+
+		/*
+		 * Determine what level the granule is at. For non-leaf, both
+		 * io-pgtable and SVA pass a nominal last-level granule because
+		 * they don't know what level(s) actually apply, so ignore that
+		 * and leave TTL=0. However for various errata reasons we still
+		 * want to use a range command, so avoid the SVA corner case
+		 * where both scale and num could be 0 as well.
+		 */
+		if (cmd->tlbi.leaf)
+			cmd->tlbi.ttl = 4 - ((ilog2(granule) - 3) / (tg - 3));
+		else if ((num_pages & CMDQ_TLBI_RANGE_NUM_MAX) == 1)
+			num_pages++;
+	}
+
+	while (iova < end) {
+		if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
+			/*
+			 * On each iteration of the loop, the range is 5 bits
+			 * worth of the aligned size remaining.
+			 * The range in pages is:
+			 *
+			 * range = (num_pages & (0x1f << __ffs(num_pages)))
+			 */
+			unsigned long scale, num;
+
+			/* Determine the power of 2 multiple number of pages */
+			scale = __ffs(num_pages);
+			cmd->tlbi.scale = scale;
+
+			/* Determine how many chunks of 2^scale size we have */
+			num = (num_pages >> scale) & CMDQ_TLBI_RANGE_NUM_MAX;
+			cmd->tlbi.num = num - 1;
+
+			/* range is num * 2^scale * pgsize */
+			inv_range = num << (scale + tg);
+
+			/* Clear out the lower order bits for the next iteration */
+			num_pages -= num << scale;
+		}
+		cmd->tlbi.addr = iova;
+		WARN_ON(smmu_add_cmd(smmu, cmd));
+		BUG_ON(iova + inv_range < iova);
+		iova += inv_range;
+	}
+
+	ret = smmu_sync_cmd(smmu);
+out_ret:
+	kvm_iommu_unlock(&smmu->iommu);
+	return ret;
+}
+
+static void smmu_tlb_inv_range(struct kvm_hyp_iommu_domain *domain,
+			       unsigned long iova, size_t size, size_t granule,
+			       bool leaf)
+{
+	struct hyp_arm_smmu_v3_domain *smmu_domain = domain->priv;
+	unsigned long end = iova + size;
+	struct arm_smmu_cmdq_ent cmd;
+
+	cmd.tlbi.leaf = leaf;
+	if (smmu_domain->pgtable->cfg.fmt == ARM_64_LPAE_S2) {
+		cmd.opcode = CMDQ_OP_TLBI_S2_IPA;
+		cmd.tlbi.vmid = domain->domain_id;
+	} else {
+		cmd.opcode = CMDQ_OP_TLBI_NH_VA;
+		cmd.tlbi.asid = domain->domain_id;
+		cmd.tlbi.vmid = 0;
+	}
+	/*
+	 * There are no mappings at high addresses since we don't use TTB1, so
+	 * no overflow possible.
+	 */
+	BUG_ON(end < iova);
+	WARN_ON(smmu_tlb_inv_range_smmu(smmu_domain->smmu, domain,
+					&cmd, iova, size, granule));
+}
+
+static void smmu_tlb_flush_walk(unsigned long iova, size_t size,
+				size_t granule, void *cookie)
+{
+	smmu_tlb_inv_range(cookie, iova, size, granule, false);
+}
+
+static void smmu_tlb_add_page(struct iommu_iotlb_gather *gather,
+			      unsigned long iova, size_t granule,
+			      void *cookie)
+{
+	if (gather)
+		kvm_iommu_iotlb_gather_add_page(cookie, gather, iova, granule);
+	else
+		smmu_tlb_inv_range(cookie, iova, granule, granule, true);
+}
+
+__maybe_unused
+static const struct iommu_flush_ops smmu_tlb_ops = {
+	.tlb_flush_all	= smmu_tlb_flush_all,
+	.tlb_flush_walk	= smmu_tlb_flush_walk,
+	.tlb_add_page	= smmu_tlb_add_page,
+};
+
+static void smmu_iotlb_sync(struct kvm_hyp_iommu_domain *domain,
+			    struct iommu_iotlb_gather *gather)
+{
+	size_t size;
+
+	if (!gather->pgsize)
+		return;
+	size = gather->end - gather->start + 1;
+	smmu_tlb_inv_range(domain, gather->start, size, gather->pgsize, true);
+}
+
 /* Shared with the kernel driver in EL1 */
 struct kvm_iommu_ops smmu_ops = {
 	.init				= smmu_init,
 	.get_iommu_by_id		= smmu_id_to_iommu,
 	.alloc_domain			= smmu_alloc_domain,
 	.free_domain			= smmu_free_domain,
+	.iotlb_sync			= smmu_iotlb_sync,
 };
-- 
2.47.0.338.g60cca15819-goog

Date: Thu, 12 Dec 2024 18:03:58 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20241212180423.1578358-1-smostafa@google.com>
X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog
Message-ID: <20241212180423.1578358-35-smostafa@google.com>
Subject: [RFC PATCH v2 34/58] KVM: arm64: smmu-v3: Add context descriptor functions
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org,
 oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org,
 robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca,
 nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com,
 tabba@google.com, danielmentz@google.com, tzukui@google.com,
 Mostafa Saleh
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Add functions to allocate and access the context descriptors that will be
used in stage-1 attach.

Signed-off-by: Mostafa Saleh
---
 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c | 53 +++++++++++++++++++++
 1 file changed, 53 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
index 5f00d5cdf5bc..d58424e45e1d 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
@@ -215,6 +215,19 @@ static int smmu_sync_ste(struct hyp_arm_smmu_v3_device *smmu, u32 sid)
 	return smmu_send_cmd(smmu, &cmd);
 }
 
+__maybe_unused
+static int smmu_sync_cd(struct hyp_arm_smmu_v3_device *smmu, u32 sid, u32 ssid)
+{
+	struct arm_smmu_cmdq_ent cmd = {
+		.opcode = CMDQ_OP_CFGI_CD,
+		.cfgi.sid = sid,
+		.cfgi.ssid = ssid,
+		.cfgi.leaf = true,
+	};
+
+	return smmu_send_cmd(smmu, &cmd);
+}
+
 static int smmu_alloc_l2_strtab(struct hyp_arm_smmu_v3_device *smmu, u32 sid)
 {
 	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
@@ -291,6 +304,46 @@ smmu_get_alloc_ste_ptr(struct hyp_arm_smmu_v3_device *smmu, u32 sid)
 	return smmu_get_ste_ptr(smmu, sid);
 }
 
+__maybe_unused
+static u64 *smmu_get_cd_ptr(u64 *cdtab, u32 ssid)
+{
+	/* Only linear supported for now. */
+	return cdtab + ssid * CTXDESC_CD_DWORDS;
+}
+
+__maybe_unused
+static u64 *smmu_alloc_cd(struct hyp_arm_smmu_v3_device *smmu, u32 pasid_bits)
+{
+	u64 *cd_table;
+	int flags = 0;
+	u32 requested_order = get_order((1 << pasid_bits) *
+					(CTXDESC_CD_DWORDS << 3));
+
+	/*
+	 * We support a max of 64K linear tables only; this should be enough
+	 * for 128 pasids
+	 */
+	if (WARN_ON(requested_order > 4))
+		return NULL;
+
+	if (!(smmu->features & ARM_SMMU_FEAT_COHERENCY))
+		flags |= IOMMU_PAGE_NOCACHE;
+
+	cd_table = kvm_iommu_donate_pages(requested_order, flags);
+	if (!cd_table)
+		return NULL;
+	return (u64 *)hyp_virt_to_phys(cd_table);
+}
+
+__maybe_unused
+static void smmu_free_cd(u64 *cd_table, u32 pasid_bits)
+{
+	u32 order = get_order((1 << pasid_bits) *
+			      (CTXDESC_CD_DWORDS << 3));
+
+	kvm_iommu_reclaim_pages(cd_table, order);
+}
+
 static int smmu_init_registers(struct hyp_arm_smmu_v3_device *smmu)
 {
 	u64 val, old;
-- 
2.47.0.338.g60cca15819-goog

Date: Thu, 12 Dec 2024 18:03:59 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20241212180423.1578358-1-smostafa@google.com>
X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog
Message-ID: <20241212180423.1578358-36-smostafa@google.com>
Subject: [RFC PATCH v2 35/58] KVM: arm64: smmu-v3: Add attach_dev
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org,
 oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org,
 robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca,
 nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com,
 tabba@google.com, danielmentz@google.com, tzukui@google.com,
 Mostafa Saleh
Content-Transfer-Encoding:
 quoted-printable
Content-Type: text/plain; charset="utf-8"

Add the attach_dev HVC code, which handles both stage-1 and stage-2.

Signed-off-by: Mostafa Saleh
---
 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c | 244 +++++++++++++++++++-
 include/kvm/arm_smmu_v3.h                   |   4 +
 2 files changed, 242 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
index d58424e45e1d..a96eb6625c48 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
@@ -68,6 +68,11 @@ struct hyp_arm_smmu_v3_domain {
 	struct io_pgtable *pgtable;
 };
 
+static struct hyp_arm_smmu_v3_device *to_smmu(struct kvm_hyp_iommu *iommu)
+{
+	return container_of(iommu, struct hyp_arm_smmu_v3_device, iommu);
+}
+
 static int smmu_write_cr0(struct hyp_arm_smmu_v3_device *smmu, u32 val)
 {
 	writel_relaxed(val, smmu->base + ARM_SMMU_CR0);
@@ -203,7 +208,6 @@ static int smmu_send_cmd(struct hyp_arm_smmu_v3_device *smmu,
 	return smmu_sync_cmd(smmu);
 }
 
-__maybe_unused
 static int smmu_sync_ste(struct hyp_arm_smmu_v3_device *smmu, u32 sid)
 {
 	struct arm_smmu_cmdq_ent cmd = {
@@ -215,7 +219,6 @@ static int smmu_sync_ste(struct hyp_arm_smmu_v3_device *smmu, u32 sid)
 	return smmu_send_cmd(smmu, &cmd);
 }
 
-__maybe_unused
 static int smmu_sync_cd(struct hyp_arm_smmu_v3_device *smmu, u32 sid, u32 ssid)
 {
 	struct arm_smmu_cmdq_ent cmd = {
@@ -289,7 +292,6 @@ smmu_get_ste_ptr(struct hyp_arm_smmu_v3_device *smmu, u32 sid)
 	return &cfg->linear.table[sid];
 }
 
-__maybe_unused
 static struct arm_smmu_ste *
 smmu_get_alloc_ste_ptr(struct hyp_arm_smmu_v3_device *smmu, u32 sid)
 {
@@ -304,14 +306,12 @@ smmu_get_alloc_ste_ptr(struct hyp_arm_smmu_v3_device *smmu, u32 sid)
 	return smmu_get_ste_ptr(smmu, sid);
 }
 
-__maybe_unused
 static u64 *smmu_get_cd_ptr(u64 *cdtab, u32 ssid)
 {
 	/* Only linear supported for now. */
 	return cdtab + ssid * CTXDESC_CD_DWORDS;
 }
 
-__maybe_unused
 static u64 *smmu_alloc_cd(struct hyp_arm_smmu_v3_device *smmu, u32 pasid_bits)
 {
 	u64 *cd_table;
@@ -803,7 +803,6 @@ static void smmu_tlb_add_page(struct iommu_iotlb_gather *gather,
 	smmu_tlb_inv_range(cookie, iova, granule, granule, true);
 }
 
-__maybe_unused
 static const struct iommu_flush_ops smmu_tlb_ops = {
 	.tlb_flush_all	= smmu_tlb_flush_all,
 	.tlb_flush_walk	= smmu_tlb_flush_walk,
@@ -821,6 +820,238 @@ static void smmu_iotlb_sync(struct kvm_hyp_iommu_domain *domain,
 	smmu_tlb_inv_range(domain, gather->start, size, gather->pgsize, true);
 }
 
+static int smmu_domain_config_s2(struct kvm_hyp_iommu_domain *domain,
+				 struct arm_smmu_ste *ste)
+{
+	struct io_pgtable_cfg *cfg;
+	u64 ts, sl, ic, oc, sh, tg, ps;
+	struct hyp_arm_smmu_v3_domain *smmu_domain = domain->priv;
+
+	cfg = &smmu_domain->pgtable->cfg;
+	ps = cfg->arm_lpae_s2_cfg.vtcr.ps;
+	tg = cfg->arm_lpae_s2_cfg.vtcr.tg;
+	sh = cfg->arm_lpae_s2_cfg.vtcr.sh;
+	oc = cfg->arm_lpae_s2_cfg.vtcr.orgn;
+	ic = cfg->arm_lpae_s2_cfg.vtcr.irgn;
+	sl = cfg->arm_lpae_s2_cfg.vtcr.sl;
+	ts = cfg->arm_lpae_s2_cfg.vtcr.tsz;
+
+	ste->data[0] = STRTAB_STE_0_V |
+		FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_S2_TRANS);
+	ste->data[1] = FIELD_PREP(STRTAB_STE_1_SHCFG, STRTAB_STE_1_SHCFG_INCOMING);
+	ste->data[2] = FIELD_PREP(STRTAB_STE_2_VTCR,
+			FIELD_PREP(STRTAB_STE_2_VTCR_S2PS, ps) |
+			FIELD_PREP(STRTAB_STE_2_VTCR_S2TG, tg) |
+			FIELD_PREP(STRTAB_STE_2_VTCR_S2SH0, sh) |
+			FIELD_PREP(STRTAB_STE_2_VTCR_S2OR0, oc) |
+			FIELD_PREP(STRTAB_STE_2_VTCR_S2IR0, ic) |
+			FIELD_PREP(STRTAB_STE_2_VTCR_S2SL0, sl) |
+			FIELD_PREP(STRTAB_STE_2_VTCR_S2T0SZ, ts)) |
+		FIELD_PREP(STRTAB_STE_2_S2VMID, domain->domain_id) |
+		STRTAB_STE_2_S2AA64 | STRTAB_STE_2_S2R;
+	ste->data[3] = cfg->arm_lpae_s2_cfg.vttbr & STRTAB_STE_3_S2TTB_MASK;
+
+	return 0;
+}
+
+static u64 *smmu_domain_config_s1_ste(struct hyp_arm_smmu_v3_device *smmu,
+				      u32 pasid_bits,
+				      struct arm_smmu_ste *ste)
+{
+	u64 *cd_table;
+
+	cd_table = smmu_alloc_cd(smmu, pasid_bits);
+	if (!cd_table)
+		return NULL;
+
+	ste->data[1] = FIELD_PREP(STRTAB_STE_1_S1DSS, STRTAB_STE_1_S1DSS_SSID0) |
+		FIELD_PREP(STRTAB_STE_1_S1CIR, STRTAB_STE_1_S1C_CACHE_WBRA) |
+		FIELD_PREP(STRTAB_STE_1_S1COR, STRTAB_STE_1_S1C_CACHE_WBRA) |
+		FIELD_PREP(STRTAB_STE_1_S1CSH, ARM_SMMU_SH_ISH);
+	ste->data[0] = ((u64)cd_table & STRTAB_STE_0_S1CTXPTR_MASK) |
+		FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_S1_TRANS) |
+		FIELD_PREP(STRTAB_STE_0_S1CDMAX, pasid_bits) |
+		FIELD_PREP(STRTAB_STE_0_S1FMT, STRTAB_STE_0_S1FMT_LINEAR) |
+		STRTAB_STE_0_V;
+
+	return cd_table;
+}
+
+/*
+ * This function handles configuration for pasid and non-pasid domains
+ * with the following assumptions:
+ * - pasid 0 is always attached first; this should be the typical flow
+ *   for the kernel, where attach_dev is always called before set_dev_pasid.
+ *   In that case only pasid 0 is allowed to allocate memory for the CD,
+ *   and the other pasids expect to find the table.
+ * - pasid 0 is detached last, which is also guaranteed by the kernel.
+ */
+static int smmu_domain_config_s1(struct hyp_arm_smmu_v3_device *smmu,
+				 struct kvm_hyp_iommu_domain *domain,
+				 u32 sid, u32 pasid, u32 pasid_bits,
+				 struct arm_smmu_ste *ste)
+{
+	struct arm_smmu_ste *dst;
+	u64 val;
+	u64 *cd_entry, *cd_table;
+	struct io_pgtable_cfg *cfg;
+	struct hyp_arm_smmu_v3_domain *smmu_domain = domain->priv;
+
+	cfg = &smmu_domain->pgtable->cfg;
+	dst = smmu_get_ste_ptr(smmu, sid);
+	val = dst->data[0];
+
+	if (FIELD_GET(STRTAB_STE_0_CFG, val) == STRTAB_STE_0_CFG_S2_TRANS)
+		return -EBUSY;
+
+	if (pasid == 0) {
+		cd_table = smmu_domain_config_s1_ste(smmu, pasid_bits, ste);
+		if (!cd_table)
+			return -ENOMEM;
+	} else {
+		u32 nr_entries;
+
+		cd_table = (u64 *)(FIELD_GET(STRTAB_STE_0_S1CTXPTR_MASK, val) << 6);
+		if (!cd_table)
+			return -EINVAL;
+		nr_entries = 1 << FIELD_GET(STRTAB_STE_0_S1CDMAX, val);
+		if (pasid >= nr_entries)
+			return -E2BIG;
+	}
+
+	/* Write CD. */
+	cd_entry = smmu_get_cd_ptr(hyp_phys_to_virt((u64)cd_table), pasid);
+
+	/* CD already used by another device. */
+	if (cd_entry[0])
+		return -EBUSY;
+
+	cd_entry[1] = cpu_to_le64(cfg->arm_lpae_s1_cfg.ttbr & CTXDESC_CD_1_TTB0_MASK);
+	cd_entry[2] = 0;
+	cd_entry[3] = cpu_to_le64(cfg->arm_lpae_s1_cfg.mair);
+
+	/* STE is live. */
+	if (pasid)
+		smmu_sync_cd(smmu, sid, pasid);
+	val = FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, cfg->arm_lpae_s1_cfg.tcr.tsz) |
+	      FIELD_PREP(CTXDESC_CD_0_TCR_TG0, cfg->arm_lpae_s1_cfg.tcr.tg) |
+	      FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0, cfg->arm_lpae_s1_cfg.tcr.irgn) |
+	      FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, cfg->arm_lpae_s1_cfg.tcr.orgn) |
+	      FIELD_PREP(CTXDESC_CD_0_TCR_SH0, cfg->arm_lpae_s1_cfg.tcr.sh) |
+	      FIELD_PREP(CTXDESC_CD_0_TCR_IPS, cfg->arm_lpae_s1_cfg.tcr.ips) |
+	      CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_AA64 |
+	      CTXDESC_CD_0_R | CTXDESC_CD_0_A |
+	      CTXDESC_CD_0_ASET |
+	      FIELD_PREP(CTXDESC_CD_0_ASID, domain->domain_id) |
+	      CTXDESC_CD_0_V;
+	WRITE_ONCE(cd_entry[0], cpu_to_le64(val));
+	/* STE is live. */
+	if (pasid)
+		smmu_sync_cd(smmu, sid, pasid);
+	return 0;
+}
+
+static int smmu_domain_finalise(struct hyp_arm_smmu_v3_device *smmu,
+				struct kvm_hyp_iommu_domain *domain)
+{
+	int ret;
+	struct io_pgtable_cfg cfg;
+	struct hyp_arm_smmu_v3_domain *smmu_domain = domain->priv;
+
+	if (smmu_domain->type == KVM_ARM_SMMU_DOMAIN_S1) {
+		size_t ias = (smmu->features & ARM_SMMU_FEAT_VAX) ? 52 : 48;
+
+		cfg = (struct io_pgtable_cfg) {
+			.fmt = ARM_64_LPAE_S1,
+			.pgsize_bitmap = smmu->pgsize_bitmap,
+			.ias = min_t(unsigned long, ias, VA_BITS),
+			.oas = smmu->ias,
+			.coherent_walk = smmu->features & ARM_SMMU_FEAT_COHERENCY,
+			.tlb = &smmu_tlb_ops,
+		};
+	} else {
+		cfg = (struct io_pgtable_cfg) {
+			.fmt = ARM_64_LPAE_S2,
+			.pgsize_bitmap = smmu->pgsize_bitmap,
+			.ias = smmu->ias,
+			.oas = smmu->oas,
+			.coherent_walk = smmu->features & ARM_SMMU_FEAT_COHERENCY,
+			.tlb = &smmu_tlb_ops,
+		};
+	}
+
+	hyp_spin_lock(&smmu_domain->pgt_lock);
+	smmu_domain->pgtable = kvm_arm_io_pgtable_alloc(&cfg, domain, &ret);
+	hyp_spin_unlock(&smmu_domain->pgt_lock);
+	return ret;
+}
+
+static int smmu_attach_dev(struct kvm_hyp_iommu *iommu, struct kvm_hyp_iommu_domain *domain,
+			   u32 sid, u32 pasid, u32 pasid_bits)
+{
+	int i;
+	int ret;
+	struct arm_smmu_ste *dst;
+	struct arm_smmu_ste ste = {};
+	struct hyp_arm_smmu_v3_device *smmu = to_smmu(iommu);
+	struct hyp_arm_smmu_v3_domain *smmu_domain = domain->priv;
+
+	kvm_iommu_lock(iommu);
+	dst = smmu_get_alloc_ste_ptr(smmu, sid);
+	if (!dst) {
+		ret = -ENOMEM;
+		goto out_unlock;
+	}
+
+	if (smmu_domain->smmu && (smmu != smmu_domain->smmu)) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
+
+	if (!smmu_domain->pgtable) {
+		ret = smmu_domain_finalise(smmu, domain);
+		if (ret)
+			goto out_unlock;
+	}
+
+	if (smmu_domain->type == KVM_ARM_SMMU_DOMAIN_S2) {
+		/* Device already attached or pasid for s2.
+		 */
+		if (dst->data[0] || pasid) {
+			ret = -EBUSY;
+			goto out_unlock;
+		}
+		ret = smmu_domain_config_s2(domain, &ste);
+	} else {
+		/*
+		 * Allocate and config CD, and update CD if possible.
+		 */
+		pasid_bits = min(pasid_bits, smmu->ssid_bits);
+		ret = smmu_domain_config_s1(smmu, domain, sid, pasid,
+					    pasid_bits, &ste);
+	}
+	smmu_domain->smmu = smmu;
+	/* We don't update STEs for pasid domains. */
+	if (ret || pasid)
+		goto out_unlock;
+
+	/*
+	 * The SMMU may cache a disabled STE.
+	 * Initialize all fields, sync, then enable it.
+	 */
+	for (i = 1; i < STRTAB_STE_DWORDS; i++)
+		dst->data[i] = ste.data[i];
+
+	ret = smmu_sync_ste(smmu, sid);
+	if (ret)
+		goto out_unlock;
+
+	WRITE_ONCE(dst->data[0], ste.data[0]);
+	ret = smmu_sync_ste(smmu, sid);
+	WARN_ON(ret);
+out_unlock:
+	kvm_iommu_unlock(iommu);
+	return ret;
+}
+
 /* Shared with the kernel driver in EL1 */
 struct kvm_iommu_ops smmu_ops = {
 	.init				= smmu_init,
@@ -828,4 +1059,5 @@ struct kvm_iommu_ops smmu_ops = {
 	.alloc_domain			= smmu_alloc_domain,
 	.free_domain			= smmu_free_domain,
 	.iotlb_sync			= smmu_iotlb_sync,
+	.attach_dev			= smmu_attach_dev,
 };
diff --git a/include/kvm/arm_smmu_v3.h b/include/kvm/arm_smmu_v3.h
index ded98cbaebc1..e8616ec5a048 100644
--- a/include/kvm/arm_smmu_v3.h
+++ b/include/kvm/arm_smmu_v3.h
@@ -25,6 +25,10 @@ struct hyp_arm_smmu_v3_device {
 	size_t cmdq_log2size;
 	/* strtab_cfg.l2.l2ptrs is not used, instead computed from L1 */
 	struct arm_smmu_strtab_cfg strtab_cfg;
+	size_t oas;
+	size_t ias;
+	size_t pgsize_bitmap;
+	size_t ssid_bits;
 };
 
 extern size_t kvm_nvhe_sym(kvm_hyp_arm_smmu_v3_count);
-- 
2.47.0.338.g60cca15819-goog

Date: Thu, 12 Dec 2024 18:04:00 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20241212180423.1578358-1-smostafa@google.com>
X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog
Message-ID: <20241212180423.1578358-37-smostafa@google.com>
Subject: [RFC PATCH v2 36/58] KVM: arm64:
smmu-v3: Add detach_dev
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh
Content-Type: text/plain; charset="utf-8"

Add detach_dev for stage-1 and stage-2 domains.

Signed-off-by: Mostafa Saleh
---
 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c | 76 ++++++++++++++++++++-
 1 file changed, 75 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
index a96eb6625c48..ec3f8d9749d3 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
@@ -335,7 +335,6 @@ static u64 *smmu_alloc_cd(struct hyp_arm_smmu_v3_device *smmu, u32 pasid_bits)
 	return (u64 *)hyp_virt_to_phys(cd_table);
 }
 
-__maybe_unused
 static void smmu_free_cd(u64 *cd_table, u32 pasid_bits)
 {
 	u32 order = get_order((1 << pasid_bits) *
@@ -1052,6 +1051,80 @@ static int smmu_attach_dev(struct kvm_hyp_iommu *iommu, struct kvm_hyp_iommu_dom
 	return ret;
 }
 
+static int smmu_detach_dev(struct kvm_hyp_iommu *iommu, struct kvm_hyp_iommu_domain *domain,
+			   u32 sid, u32 pasid)
+{
+	struct arm_smmu_ste *dst;
+	int i, ret;
+	struct hyp_arm_smmu_v3_device *smmu = to_smmu(iommu);
+	struct hyp_arm_smmu_v3_domain *smmu_domain = domain->priv;
+	u32 pasid_bits = 0;
+	u64 *cd_table, *cd;
+
+	kvm_iommu_lock(iommu);
+	dst = smmu_get_ste_ptr(smmu, sid);
+	if (!dst) {
+		ret = -ENODEV;
+		goto out_unlock;
+	}
+
+	/*
+	 * For stage-1:
+	 * - The kernel has to detach pasid 0 last.
+	 * - This will free the CD.
+	 */
+	if (smmu_domain->type == KVM_ARM_SMMU_DOMAIN_S1) {
+		pasid_bits = FIELD_GET(STRTAB_STE_0_S1CDMAX, dst->data[0]);
+		if (pasid >= (1 << pasid_bits)) {
+			ret = -E2BIG;
+			goto out_unlock;
+		}
+		cd_table = (u64 *)(dst->data[0] & STRTAB_STE_0_S1CTXPTR_MASK);
+		if (WARN_ON(!cd_table)) {
+			ret = -ENODEV;
+			goto out_unlock;
+		}
+
+		cd_table = hyp_phys_to_virt((phys_addr_t)cd_table);
+		if (pasid == 0) {
+			int j;
+
+			/* Ensure other pasids are detached. */
+			for (j = 1 ; j < (1 << pasid_bits) ; ++j) {
+				cd = smmu_get_cd_ptr(cd_table, j);
+				if (cd[0] & CTXDESC_CD_0_V) {
+					ret = -EINVAL;
+					goto out_unlock;
+				}
+			}
+		} else {
+			cd = smmu_get_cd_ptr(cd_table, pasid);
+			cd[0] = 0;
+			smmu_sync_cd(smmu, sid, pasid);
+			cd[1] = 0;
+			cd[2] = 0;
+			cd[3] = 0;
+			ret = smmu_sync_cd(smmu, sid, pasid);
+			goto out_unlock;
+		}
+	}
+	/* For stage-2 and pasid = 0 */
+	dst->data[0] = 0;
+	ret = smmu_sync_ste(smmu, sid);
+	if (ret)
+		goto out_unlock;
+	for (i = 1; i < STRTAB_STE_DWORDS; i++)
+		dst->data[i] = 0;
+
+	ret = smmu_sync_ste(smmu, sid);
+
+	smmu_free_cd(cd_table, pasid_bits);
+
+out_unlock:
+	kvm_iommu_unlock(iommu);
+	return ret;
+}
+
 /* Shared with the kernel driver in EL1 */
 struct kvm_iommu_ops smmu_ops = {
 	.init = smmu_init,
@@ -1060,4 +1133,5 @@ struct kvm_iommu_ops smmu_ops = {
 	.free_domain = smmu_free_domain,
 	.iotlb_sync = smmu_iotlb_sync,
 	.attach_dev = smmu_attach_dev,
+	.detach_dev = smmu_detach_dev,
 };
-- 
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:04:01 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Mime-Version: 1.0
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-38-smostafa@google.com>
Subject: [RFC PATCH v2 37/58] iommu/io-pgtable: Generalize walker interface
From: Mostafa Saleh
To:
iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh
Content-Type: text/plain; charset="utf-8"

Add a common walker struct with a visitor callback that takes
HW-agnostic information (physical address + size). Add a size argument
to the walker so it can walk a range of IOVAs. Also add a cookie for
the Arm walker.

Signed-off-by: Mostafa Saleh
---
 drivers/gpu/drm/msm/msm_iommu.c       |  5 +++-
 drivers/iommu/io-pgtable-arm-common.c | 35 ++++++++++++++++++---------
 include/linux/io-pgtable-arm.h        |  2 +-
 include/linux/io-pgtable.h            | 18 ++++++++++++--
 4 files changed, 45 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index 3e692818ba1f..8516861dd626 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -200,6 +200,9 @@ msm_iommu_pagetable_walk(struct msm_mmu *mmu, unsigned long iova, uint64_t ptes[
 {
 	struct msm_iommu_pagetable *pagetable;
 	struct arm_lpae_io_pgtable_walk_data wd = {};
+	struct io_pgtable_walk_common walker = {
+		.data = &wd,
+	};
 
 	if (mmu->type != MSM_MMU_IOMMU_PAGETABLE)
 		return -EINVAL;
@@ -209,7 +212,7 @@ msm_iommu_pagetable_walk(struct msm_mmu *mmu, unsigned long iova, uint64_t ptes[
 	if (!pagetable->pgtbl_ops->pgtable_walk)
 		return -EINVAL;
 
-	pagetable->pgtbl_ops->pgtable_walk(pagetable->pgtbl_ops, iova, &wd);
+	pagetable->pgtbl_ops->pgtable_walk(pagetable->pgtbl_ops, iova, 1, &walker);
 
 	for (int i = 0; i < ARRAY_SIZE(wd.ptes); i++)
 		ptes[i] = wd.ptes[i];
diff --git a/drivers/iommu/io-pgtable-arm-common.c b/drivers/iommu/io-pgtable-arm-common.c
index 21ee8ff7c881..4fc0b03494e3 100644
--- a/drivers/iommu/io-pgtable-arm-common.c
+++ b/drivers/iommu/io-pgtable-arm-common.c
@@ -481,7 +481,8 @@ struct iova_to_phys_data {
 static int visit_iova_to_phys(struct io_pgtable_walk_data *walk_data, int lvl,
 			      arm_lpae_iopte *ptep, size_t size)
 {
-	struct iova_to_phys_data *data = walk_data->data;
+	struct io_pgtable_walk_common *walker = walk_data->data;
+	struct iova_to_phys_data *data = walker->data;
 	data->pte = *ptep;
 	data->lvl = lvl;
 	return 0;
@@ -492,8 +493,11 @@ static phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops,
 {
 	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
 	struct iova_to_phys_data d;
-	struct io_pgtable_walk_data walk_data = {
+	struct io_pgtable_walk_common walker = {
 		.data = &d,
+	};
+	struct io_pgtable_walk_data walk_data = {
+		.data = &walker,
 		.visit = visit_iova_to_phys,
 		.addr = iova,
 		.end = iova + 1,
@@ -511,23 +515,25 @@ static phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops,
 static int visit_pgtable_walk(struct io_pgtable_walk_data *walk_data, int lvl,
 			      arm_lpae_iopte *ptep, size_t size)
 {
-	struct arm_lpae_io_pgtable_walk_data *data = walk_data->data;
-	data->ptes[data->level++] = *ptep;
+	struct io_pgtable_walk_common *walker = walk_data->data;
+	struct arm_lpae_io_pgtable_walk_data *data = walker->data;
+
+	data->ptes[lvl] = *ptep;
+	data->level = lvl + 1;
 	return 0;
 }
 
-static int arm_lpae_pgtable_walk(struct io_pgtable_ops *ops, unsigned long iova, void *wd)
+static int arm_lpae_pgtable_walk(struct io_pgtable_ops *ops, unsigned long iova,
+				 size_t size, struct io_pgtable_walk_common *walker)
 {
 	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
 	struct io_pgtable_walk_data walk_data = {
-		.data = wd,
+		.data = walker,
 		.visit = visit_pgtable_walk,
 		.addr = iova,
-		.end = iova + 1,
+		.end = iova + size,
 	};
 
-	((struct arm_lpae_io_pgtable_walk_data *)wd)->level = 0;
-
 	return __arm_lpae_iopte_walk(data, &walk_data, data->pgd, data->start_level);
 }
 
@@ -537,6 +543,7 @@ static int io_pgtable_visit(struct arm_lpae_io_pgtable *data,
 {
 	struct io_pgtable *iop = &data->iop;
 	arm_lpae_iopte pte = READ_ONCE(*ptep);
+	struct io_pgtable_walk_common *walker = walk_data->data;
 
 	size_t size = ARM_LPAE_BLOCK_SIZE(lvl, data);
 	int ret = walk_data->visit(walk_data, lvl, ptep, size);
@@ -544,6 +551,8 @@ static int io_pgtable_visit(struct arm_lpae_io_pgtable *data,
 		return ret;
 
 	if (iopte_leaf(pte, lvl, iop->fmt)) {
+		if (walker->visit_leaf)
+			walker->visit_leaf(iopte_to_paddr(pte, data), size, walker, ptep);
 		walk_data->addr += size;
 		return 0;
 	}
@@ -585,7 +594,8 @@ static int __arm_lpae_iopte_walk(struct arm_lpae_io_pgtable *data,
 static int visit_dirty(struct io_pgtable_walk_data *walk_data, int lvl,
 		       arm_lpae_iopte *ptep, size_t size)
 {
-	struct iommu_dirty_bitmap *dirty = walk_data->data;
+	struct io_pgtable_walk_common *walker = walk_data->data;
+	struct iommu_dirty_bitmap *dirty = walker->data;
 
 	if (!iopte_leaf(*ptep, lvl, walk_data->iop->fmt))
 		return 0;
@@ -606,9 +616,12 @@ static int arm_lpae_read_and_clear_dirty(struct io_pgtable_ops *ops,
 {
 	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
 	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	struct io_pgtable_walk_common walker = {
+		.data = dirty,
+	};
 	struct io_pgtable_walk_data walk_data = {
 		.iop = &data->iop,
-		.data = dirty,
+		.data = &walker,
 		.visit = visit_dirty,
 		.flags = flags,
 		.addr = iova,
diff --git a/include/linux/io-pgtable-arm.h b/include/linux/io-pgtable-arm.h
index 88922314157d..9e5878c37d78 100644
--- a/include/linux/io-pgtable-arm.h
+++ b/include/linux/io-pgtable-arm.h
@@ -18,7 +18,7 @@ struct arm_lpae_io_pgtable {
 
 struct io_pgtable_walk_data {
 	struct io_pgtable *iop;
-	void *data;
+	struct io_pgtable_walk_common *data;
 	int (*visit)(struct io_pgtable_walk_data *walk_data, int lvl,
 		     arm_lpae_iopte *ptep, size_t size);
 	unsigned long flags;
diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index f789234c703b..da50e17b0177 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -185,12 +185,25 @@ struct io_pgtable_cfg {
  *
  * @ptes: The recorded PTE values from the walk
  * @level: The level of the last PTE
+ * @cookie: Cookie set by the caller to identify itself
  *
  * @level also specifies the last valid index in @ptes
  */
 struct arm_lpae_io_pgtable_walk_data {
 	u64 ptes[4];
 	int level;
+	void *cookie;
+};
+
+/**
+ * struct io_pgtable_walk_common - common information for a pgtable walk
+ * @visit_leaf: callback for each leaf, providing its physical address and size
+ */
+struct io_pgtable_walk_common {
+	void (*visit_leaf)(phys_addr_t paddr, size_t size,
+			   struct io_pgtable_walk_common *data,
+			   void *wd);
+	void *data; /* pointer to walk data, e.g. arm_lpae_io_pgtable_walk_data */
 };
 
 /**
@@ -199,7 +212,7 @@ struct arm_lpae_io_pgtable_walk_data {
  * @map_pages:    Map a physically contiguous range of pages of the same size.
  * @unmap_pages:  Unmap a range of virtually contiguous pages of the same size.
  * @iova_to_phys: Translate iova to physical address.
- * @pgtable_walk: (optional) Perform a page table walk for a given iova.
+ * @pgtable_walk: (optional) Perform a page table walk for a given iova and size.
  *
  * These functions map directly onto the iommu_ops member functions with
 * the same names.
@@ -213,7 +226,8 @@ struct io_pgtable_ops {
 			    struct iommu_iotlb_gather *gather);
 	phys_addr_t (*iova_to_phys)(struct io_pgtable_ops *ops,
 				    unsigned long iova);
-	int (*pgtable_walk)(struct io_pgtable_ops *ops, unsigned long iova, void *wd);
+	int (*pgtable_walk)(struct io_pgtable_ops *ops, unsigned long iova,
+			    size_t size, struct io_pgtable_walk_common *walker);
 	int (*read_and_clear_dirty)(struct io_pgtable_ops *ops,
 				    unsigned long iova, size_t size,
 				    unsigned long flags,
-- 
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:04:02 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Mime-Version: 1.0
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-39-smostafa@google.com>
Subject: [RFC PATCH v2 38/58] iommu/io-pgtable-arm: Add post table walker callback
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh
Content-Type: text/plain; charset="utf-8"

Add a "post table" walker callback, invoked after a table has been
walked; this will be used next by pKVM to clean up page tables.
Signed-off-by: Mostafa Saleh
---
 drivers/iommu/io-pgtable-arm-common.c | 15 ++++++++++++++-
 include/linux/io-pgtable-arm.h        |  2 ++
 include/linux/io-pgtable.h            |  2 ++
 3 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/io-pgtable-arm-common.c b/drivers/iommu/io-pgtable-arm-common.c
index 4fc0b03494e3..076240eaec19 100644
--- a/drivers/iommu/io-pgtable-arm-common.c
+++ b/drivers/iommu/io-pgtable-arm-common.c
@@ -523,6 +523,13 @@ static int visit_pgtable_walk(struct io_pgtable_walk_data *walk_data, int lvl,
 	return 0;
 }
 
+static void visit_pgtable_post_table(struct arm_lpae_io_pgtable_walk_data *data,
+				     arm_lpae_iopte *ptep, int lvl)
+{
+	if (data->visit_post_table)
+		data->visit_post_table(data, ptep, lvl);
+}
+
 static int arm_lpae_pgtable_walk(struct io_pgtable_ops *ops, unsigned long iova,
 				 size_t size, struct io_pgtable_walk_common *walker)
 {
@@ -530,6 +537,7 @@ static int arm_lpae_pgtable_walk(struct io_pgtable_ops *ops, unsigned long iova,
 	struct io_pgtable_walk_data walk_data = {
 		.data = walker,
 		.visit = visit_pgtable_walk,
+		.visit_post_table = visit_pgtable_post_table,
 		.addr = iova,
 		.end = iova + size,
 	};
@@ -562,7 +570,12 @@ static int io_pgtable_visit(struct arm_lpae_io_pgtable *data,
 	}
 
 	ptep = iopte_deref(pte, data);
-	return __arm_lpae_iopte_walk(data, walk_data, ptep, lvl + 1);
+	ret = __arm_lpae_iopte_walk(data, walk_data, ptep, lvl + 1);
+
+	if (walk_data->visit_post_table)
+		walk_data->visit_post_table(data, ptep, lvl);
+
+	return ret;
 }
 
 static int __arm_lpae_iopte_walk(struct arm_lpae_io_pgtable *data,
diff --git a/include/linux/io-pgtable-arm.h b/include/linux/io-pgtable-arm.h
index 9e5878c37d78..c00eb0cb7e43 100644
--- a/include/linux/io-pgtable-arm.h
+++ b/include/linux/io-pgtable-arm.h
@@ -21,6 +21,8 @@ struct io_pgtable_walk_data {
 	struct io_pgtable_walk_common *data;
 	int (*visit)(struct io_pgtable_walk_data *walk_data, int lvl,
 		     arm_lpae_iopte *ptep, size_t size);
+	void (*visit_post_table)(struct arm_lpae_io_pgtable_walk_data *data,
+				 arm_lpae_iopte *ptep, int lvl);
 	unsigned long flags;
 	u64 addr;
 	const u64 end;
diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index da50e17b0177..86226571cdb8 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -193,6 +193,8 @@ struct arm_lpae_io_pgtable_walk_data {
 	u64 ptes[4];
 	int level;
 	void *cookie;
+	void (*visit_post_table)(struct arm_lpae_io_pgtable_walk_data *data,
+				 arm_lpae_iopte *ptep, int lvl);
 };
 
 /**
-- 
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:04:03 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Mime-Version: 1.0
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-40-smostafa@google.com>
Subject: [RFC PATCH v2 39/58] drivers/iommu: io-pgtable: Add IO_PGTABLE_QUIRK_UNMAP_INVAL
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh
Content-Type: text/plain; charset="utf-8"

Only invalidate PTEs on unmap instead of clearing them. For
io-pgtable-arm this also leaves tables allocated after an unmap, as
they can't be freed at that point. This quirk also allows the page
table walker to traverse tables invalidated by an unmap, so the caller
can do any bookkeeping and free the tables afterwards.
Signed-off-by: Mostafa Saleh
---
 drivers/iommu/io-pgtable-arm-common.c | 50 +++++++++++++++++++--------
 include/linux/io-pgtable-arm.h        |  7 +++-
 include/linux/io-pgtable.h            |  5 ++-
 3 files changed, 45 insertions(+), 17 deletions(-)

diff --git a/drivers/iommu/io-pgtable-arm-common.c b/drivers/iommu/io-pgtable-arm-common.c
index 076240eaec19..89be1aa72a6b 100644
--- a/drivers/iommu/io-pgtable-arm-common.c
+++ b/drivers/iommu/io-pgtable-arm-common.c
@@ -42,7 +42,10 @@ static phys_addr_t iopte_to_paddr(arm_lpae_iopte pte,
 static void __arm_lpae_clear_pte(arm_lpae_iopte *ptep, struct io_pgtable_cfg *cfg, int num_entries)
 {
 	for (int i = 0; i < num_entries; i++)
-		ptep[i] = 0;
+		if (cfg->quirks & IO_PGTABLE_QUIRK_UNMAP_INVAL)
+			ptep[i] &= ~ARM_LPAE_PTE_VALID;
+		else
+			ptep[i] = 0;

 	if (!cfg->coherent_walk && num_entries)
 		__arm_lpae_sync_pte(ptep, num_entries, cfg);
@@ -170,7 +173,7 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,

 	/* Grab a pointer to the next level */
 	pte = READ_ONCE(*ptep);
-	if (!pte) {
+	if (!iopte_valid(pte)) {
 		cptep = __arm_lpae_alloc_pages(tblsz, gfp, cfg, data->iop.cookie);
 		if (!cptep)
 			return -ENOMEM;
@@ -182,9 +185,9 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
 		__arm_lpae_sync_pte(ptep, 1, cfg);
 	}

-	if (pte && !iopte_leaf(pte, lvl, data->iop.fmt)) {
+	if (iopte_valid(pte) && !iopte_leaf(pte, lvl, data->iop.fmt)) {
 		cptep = iopte_deref(pte, data);
-	} else if (pte) {
+	} else if (iopte_valid(pte)) {
 		/* We require an unmap first */
 		return arm_lpae_unmap_empty();
 	}
@@ -316,7 +319,7 @@ void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl,
 	while (ptep != end) {
 		arm_lpae_iopte pte = *ptep++;

-		if (!pte || iopte_leaf(pte, lvl, data->iop.fmt))
+		if (!iopte_valid(pte) || iopte_leaf(pte, lvl, data->iop.fmt))
 			continue;

 		__arm_lpae_free_pgtable(data, lvl + 1, iopte_deref(pte, data));
@@ -401,7 +404,7 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 	unmap_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data);
 	ptep += unmap_idx_start;
 	pte = READ_ONCE(*ptep);
-	if (WARN_ON(!pte))
+	if (WARN_ON(!iopte_valid(pte)))
 		return 0;

 	/* If the size matches this level, we're in the right place */
@@ -412,7 +415,7 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 		/* Find and handle non-leaf entries */
 		for (i = 0; i < num_entries; i++) {
 			pte = READ_ONCE(ptep[i]);
-			if (WARN_ON(!pte))
+			if (WARN_ON(!iopte_valid(pte)))
 				break;

 			if (!iopte_leaf(pte, lvl, iop->fmt)) {
@@ -421,7 +424,9 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 				/* Also flush any partial walks */
 				io_pgtable_tlb_flush_walk(iop, iova + i * size, size,
 							  ARM_LPAE_GRANULE(data));
-				__arm_lpae_free_pgtable(data, lvl + 1, iopte_deref(pte, data));
+				if (!(iop->cfg.quirks & IO_PGTABLE_QUIRK_UNMAP_INVAL))
+					__arm_lpae_free_pgtable(data, lvl + 1,
+								iopte_deref(pte, data));
 			}
 		}

@@ -523,9 +528,12 @@ static int visit_pgtable_walk(struct io_pgtable_walk_data *walk_data, int lvl,
 	return 0;
 }

-static void visit_pgtable_post_table(struct arm_lpae_io_pgtable_walk_data *data,
+static void visit_pgtable_post_table(struct io_pgtable_walk_data *walk_data,
 				     arm_lpae_iopte *ptep, int lvl)
 {
+	struct io_pgtable_walk_common *walker = walk_data->data;
+	struct arm_lpae_io_pgtable_walk_data *data = walker->data;
+
 	if (data->visit_post_table)
 		data->visit_post_table(data, ptep, lvl);
 }
@@ -550,30 +558,41 @@ static int io_pgtable_visit(struct arm_lpae_io_pgtable *data,
 			    arm_lpae_iopte *ptep, int lvl)
 {
 	struct io_pgtable *iop = &data->iop;
+	struct io_pgtable_cfg *cfg = &iop->cfg;
 	arm_lpae_iopte pte = READ_ONCE(*ptep);
 	struct io_pgtable_walk_common *walker = walk_data->data;
+	arm_lpae_iopte *old_ptep = ptep;
+	bool is_leaf, is_table;

 	size_t size = ARM_LPAE_BLOCK_SIZE(lvl, data);
 	int ret = walk_data->visit(walk_data, lvl, ptep, size);
 	if (ret)
 		return ret;

-	if (iopte_leaf(pte, lvl, iop->fmt)) {
+	if (cfg->quirks & IO_PGTABLE_QUIRK_UNMAP_INVAL) {
+		/* Visit invalid tables too, as they may still have entries. */
+		is_table = pte && iopte_table(pte | ARM_LPAE_PTE_VALID, lvl);
+		is_leaf = pte && iopte_leaf(pte | ARM_LPAE_PTE_VALID, lvl, iop->fmt);
+	} else {
+		is_table = iopte_table(pte, lvl);
+		is_leaf = iopte_leaf(pte, lvl, iop->fmt);
+	}
+
+	if (is_leaf) {
 		if (walker->visit_leaf)
 			walker->visit_leaf(iopte_to_paddr(pte, data), size, walker, ptep);
 		walk_data->addr += size;
 		return 0;
 	}

-	if (!iopte_table(pte, lvl)) {
+	if (!is_table)
 		return -EINVAL;
-	}

 	ptep = iopte_deref(pte, data);
 	ret = __arm_lpae_iopte_walk(data, walk_data, ptep, lvl + 1);

 	if (walk_data->visit_post_table)
-		walk_data->visit_post_table(data, ptep, lvl);
+		walk_data->visit_post_table(walk_data, old_ptep, lvl);

 	return ret;
 }
@@ -744,7 +763,8 @@ int arm_lpae_init_pgtable_s1(struct io_pgtable_cfg *cfg,
 	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS |
 			    IO_PGTABLE_QUIRK_ARM_TTBR1 |
 			    IO_PGTABLE_QUIRK_ARM_OUTER_WBWA |
-			    IO_PGTABLE_QUIRK_ARM_HD))
+			    IO_PGTABLE_QUIRK_ARM_HD |
+			    IO_PGTABLE_QUIRK_UNMAP_INVAL))
 		return -EINVAL;

 	ret = arm_lpae_init_pgtable(cfg, data);
@@ -830,7 +850,7 @@ int arm_lpae_init_pgtable_s2(struct io_pgtable_cfg *cfg,
 	typeof(&cfg->arm_lpae_s2_cfg.vtcr) vtcr = &cfg->arm_lpae_s2_cfg.vtcr;

 	/* The NS quirk doesn't apply at stage 2 */
-	if (cfg->quirks)
+	if (cfg->quirks & ~IO_PGTABLE_QUIRK_UNMAP_INVAL)
 		return -EINVAL;

 	ret = arm_lpae_init_pgtable(cfg, data);
diff --git a/include/linux/io-pgtable-arm.h b/include/linux/io-pgtable-arm.h
index c00eb0cb7e43..407f05fb300a 100644
--- a/include/linux/io-pgtable-arm.h
+++ b/include/linux/io-pgtable-arm.h
@@ -21,7 +21,7 @@ struct io_pgtable_walk_data {
 	struct io_pgtable_walk_common *data;
 	int (*visit)(struct io_pgtable_walk_data *walk_data, int lvl,
 		     arm_lpae_iopte *ptep, size_t size);
-	void (*visit_post_table)(struct arm_lpae_io_pgtable_walk_data *data,
+	void (*visit_post_table)(struct io_pgtable_walk_data *walk_data,
 				 arm_lpae_iopte *ptep, int lvl);
 	unsigned long flags;
 	u64 addr;
@@ -193,6 +193,11 @@ static inline bool iopte_table(arm_lpae_iopte pte, int lvl)
 	return iopte_type(pte) == ARM_LPAE_PTE_TYPE_TABLE;
 }

+static inline bool iopte_valid(arm_lpae_iopte pte)
+{
+	return pte & ARM_LPAE_PTE_VALID;
+}
+
 #ifdef __KVM_NVHE_HYPERVISOR__
 #include
 #define __arm_lpae_virt_to_phys	hyp_virt_to_phys
diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index 86226571cdb8..ce0aed9c87d2 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -89,6 +89,8 @@ struct io_pgtable_cfg {
 	 *	attributes set in the TCR for a non-coherent page-table walker.
 	 *
 	 * IO_PGTABLE_QUIRK_ARM_HD: Enables dirty tracking in stage 1 pagetable.
+	 *
+	 * IO_PGTABLE_QUIRK_UNMAP_INVAL: Only invalidate the PTE on unmap, don't clear it.
 	 */
 	#define IO_PGTABLE_QUIRK_ARM_NS		BIT(0)
 	#define IO_PGTABLE_QUIRK_NO_PERMS	BIT(1)
@@ -97,6 +99,7 @@ struct io_pgtable_cfg {
 	#define IO_PGTABLE_QUIRK_ARM_TTBR1	BIT(5)
 	#define IO_PGTABLE_QUIRK_ARM_OUTER_WBWA	BIT(6)
 	#define IO_PGTABLE_QUIRK_ARM_HD		BIT(7)
+	#define IO_PGTABLE_QUIRK_UNMAP_INVAL	BIT(8)
 	unsigned long quirks;
 	unsigned long pgsize_bitmap;
 	unsigned int ias;
@@ -194,7 +197,7 @@ struct arm_lpae_io_pgtable_walk_data {
 	int level;
 	void *cookie;
 	void (*visit_post_table)(struct arm_lpae_io_pgtable_walk_data *data,
-				 arm_lpae_iopte *ptep, int lvl);
+				 u64 *ptep, int lvl);
 };

 /**
-- 
2.47.0.338.g60cca15819-goog
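The core idea of IO_PGTABLE_QUIRK_UNMAP_INVAL is that clearing only the valid bit stops the device's walk while keeping the rest of the PTE (including the output address) recoverable for a later software walk. A minimal user-space sketch of that distinction, with a simplified, illustrative bit layout rather than the kernel's ARM_LPAE types:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative layout: bit 0 is the valid bit, as in ARM_LPAE_PTE_VALID. */
#define PTE_VALID	1ULL
#define PTE_ADDR_MASK	(~0xfffULL)	/* hypothetical address field */

typedef uint64_t iopte;

static int iopte_valid(iopte pte)
{
	return pte & PTE_VALID;
}

/* Unmap a PTE: zero it entirely, or only drop the valid bit (quirk). */
static void clear_pte(iopte *pte, int quirk_unmap_inval)
{
	if (quirk_unmap_inval)
		*pte &= ~PTE_VALID;	/* device walk stops, payload survives */
	else
		*pte = 0;		/* payload is lost */
}
```

With the quirk, a post-unmap walk can still read the physical address out of the invalidated entry; without it, the entry is gone.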
Date: Thu, 12 Dec 2024 18:04:04 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-41-smostafa@google.com>
Subject: [RFC PATCH v2 40/58] KVM: arm64: smmu-v3: Add map/unmap pages and iova_to_phys
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org,
	oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org,
	robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca,
	nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com,
	tabba@google.com, danielmentz@google.com, tzukui@google.com,
	Mostafa Saleh

Add map_pages and iova_to_phys HVC code, which mainly calls into the
io-pgtable library.

For unmap_pages, we rely on IO_PGTABLE_QUIRK_UNMAP_INVAL: the driver
first calls unmap_pages, which invalidates all the pages as a typical
unmap would, issuing all the necessary TLB invalidations. Then we start
a page-table walk with 2 callbacks:
- visit_leaf: for each unmapped leaf, decrement the refcount of the
  page using __pkvm_host_unuse_dma(), reversing what the IOMMU core
  does in map.
- visit_post_table: free any invalidated tables, as they wouldn't have
  been freed because of the quirk.
Signed-off-by: Mostafa Saleh
---
 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c | 147 ++++++++++++++++++++
 1 file changed, 147 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
index ec3f8d9749d3..1821a3420a4d 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
@@ -808,15 +808,74 @@ static const struct iommu_flush_ops smmu_tlb_ops = {
 	.tlb_add_page	= smmu_tlb_add_page,
 };

+static void smmu_unmap_visit_leaf(phys_addr_t addr, size_t size,
+				  struct io_pgtable_walk_common *data,
+				  void *wd)
+{
+	u64 *ptep = wd;
+
+	WARN_ON(__pkvm_host_unuse_dma(addr, size));
+	*ptep = 0;
+}
+
+/*
+ * On unmap with IO_PGTABLE_QUIRK_UNMAP_INVAL, unmap doesn't clear
+ * or free any tables, so after the unmap we walk the table, and on
+ * the post walk we free invalid tables.
+ * One caveat is that a table can be unmapped while it still points
+ * to other tables, which would be valid, and we would need to free
+ * those as well.
+ * The simplest solution is to look at the walk PTE info: if any of
+ * the parents is invalid, this table also needs to be freed.
+ */
+static void smmu_unmap_visit_post_table(struct arm_lpae_io_pgtable_walk_data *walk_data,
+					arm_lpae_iopte *ptep, int lvl)
+{
+	struct arm_lpae_io_pgtable *data = walk_data->cookie;
+	size_t table_size;
+	int i;
+	bool invalid = false;

+	if (lvl == data->start_level)
+		table_size = ARM_LPAE_PGD_SIZE(data);
+	else
+		table_size = ARM_LPAE_GRANULE(data);
+
+	for (i = 0; i <= lvl; ++i)
+		invalid |= !iopte_valid(walk_data->ptes[i]);
+
+	if (!invalid)
+		return;
+
+	__arm_lpae_free_pages(ptep, table_size, &data->iop.cfg, data->iop.cookie);
+	*ptep = 0;
+}
+
 static void smmu_iotlb_sync(struct kvm_hyp_iommu_domain *domain,
 			    struct iommu_iotlb_gather *gather)
 {
 	size_t size;
+	struct hyp_arm_smmu_v3_domain *smmu_domain = domain->priv;
+	struct io_pgtable *pgtable = smmu_domain->pgtable;
+	struct arm_lpae_io_pgtable *data = io_pgtable_to_data(pgtable);
+	struct arm_lpae_io_pgtable_walk_data wd = {
+		.cookie = data,
+		.visit_post_table = smmu_unmap_visit_post_table,
+	};
+	struct io_pgtable_walk_common walk_data = {
+		.visit_leaf = smmu_unmap_visit_leaf,
+		.data = &wd,
+	};

 	if (!gather->pgsize)
 		return;
 	size = gather->end - gather->start + 1;
 	smmu_tlb_inv_range(domain, gather->start, size, gather->pgsize, true);
+
+	/*
+	 * Now decrement the refcount of unmapped pages, thanks to
+	 * IO_PGTABLE_QUIRK_UNMAP_INVAL.
+	 */
+	pgtable->ops.pgtable_walk(&pgtable->ops, gather->start, size, &walk_data);
 }

 static int smmu_domain_config_s2(struct kvm_hyp_iommu_domain *domain,
@@ -966,6 +1025,7 @@ static int smmu_domain_finalise(struct hyp_arm_smmu_v3_device *smmu,
 			.oas		= smmu->ias,
 			.coherent_walk	= smmu->features & ARM_SMMU_FEAT_COHERENCY,
 			.tlb		= &smmu_tlb_ops,
+			.quirks		= IO_PGTABLE_QUIRK_UNMAP_INVAL,
 		};
 	} else {
 		cfg = (struct io_pgtable_cfg) {
@@ -975,6 +1035,7 @@ static int smmu_domain_finalise(struct hyp_arm_smmu_v3_device *smmu,
 			.oas		= smmu->oas,
 			.coherent_walk	= smmu->features & ARM_SMMU_FEAT_COHERENCY,
 			.tlb		= &smmu_tlb_ops,
+			.quirks		= IO_PGTABLE_QUIRK_UNMAP_INVAL,
 		};
 	}

@@ -1125,6 +1186,89 @@ static int smmu_detach_dev(struct kvm_hyp_iommu *iommu, struct kvm_hyp_iommu_dom
 	return ret;
 }

+static int smmu_map_pages(struct kvm_hyp_iommu_domain *domain, unsigned long iova,
+			  phys_addr_t paddr, size_t pgsize,
+			  size_t pgcount, int prot, size_t *total_mapped)
+{
+	size_t mapped;
+	size_t granule;
+	int ret = 0;
+	struct hyp_arm_smmu_v3_domain *smmu_domain = domain->priv;
+	struct io_pgtable *pgtable = smmu_domain->pgtable;
+
+	if (!pgtable)
+		return -EINVAL;
+
+	granule = 1UL << __ffs(smmu_domain->pgtable->cfg.pgsize_bitmap);
+	if (!IS_ALIGNED(iova | paddr | pgsize, granule))
+		return -EINVAL;
+
+	hyp_spin_lock(&smmu_domain->pgt_lock);
+	while (pgcount && !ret) {
+		mapped = 0;
+		ret = pgtable->ops.map_pages(&pgtable->ops, iova, paddr,
+					     pgsize, pgcount, prot, 0, &mapped);
+		if (ret)
+			break;
+		WARN_ON(!IS_ALIGNED(mapped, pgsize));
+		WARN_ON(mapped > pgcount * pgsize);
+
+		pgcount -= mapped / pgsize;
+		*total_mapped += mapped;
+		iova += mapped;
+		paddr += mapped;
+	}
+	hyp_spin_unlock(&smmu_domain->pgt_lock);
+
+	return ret;
+}
+
+static size_t smmu_unmap_pages(struct kvm_hyp_iommu_domain *domain, unsigned long iova,
+			       size_t pgsize, size_t pgcount, struct iommu_iotlb_gather *gather)
+{
+	size_t granule, unmapped, total_unmapped = 0;
+	size_t size = pgsize * pgcount;
+	struct hyp_arm_smmu_v3_domain *smmu_domain = domain->priv;
+	struct io_pgtable *pgtable = smmu_domain->pgtable;
+
+	if (!pgtable)
+		return 0;
+
+	granule = 1UL << __ffs(smmu_domain->pgtable->cfg.pgsize_bitmap);
+	if (!IS_ALIGNED(iova | pgsize, granule))
+		return 0;
+
+	hyp_spin_lock(&smmu_domain->pgt_lock);
+	while (total_unmapped < size) {
+		unmapped = pgtable->ops.unmap_pages(&pgtable->ops, iova, pgsize,
+						    pgcount, gather);
+		if (!unmapped)
+			break;
+		iova += unmapped;
+		total_unmapped += unmapped;
+		pgcount -= unmapped / pgsize;
+	}
+	hyp_spin_unlock(&smmu_domain->pgt_lock);
+	return total_unmapped;
+}
+
+static phys_addr_t smmu_iova_to_phys(struct kvm_hyp_iommu_domain *domain,
+				     unsigned long iova)
+{
+	phys_addr_t paddr;
+	struct hyp_arm_smmu_v3_domain *smmu_domain = domain->priv;
+	struct io_pgtable *pgtable = smmu_domain->pgtable;
+
+	if (!pgtable)
+		return -EINVAL;
+
+	hyp_spin_lock(&smmu_domain->pgt_lock);
+	paddr = pgtable->ops.iova_to_phys(&pgtable->ops, iova);
+	hyp_spin_unlock(&smmu_domain->pgt_lock);
+
+	return paddr;
+}
+
 /* Shared with the kernel driver in EL1 */
 struct kvm_iommu_ops smmu_ops = {
 	.init				= smmu_init,
@@ -1134,4 +1278,7 @@ struct kvm_iommu_ops smmu_ops = {
 	.iotlb_sync			= smmu_iotlb_sync,
 	.attach_dev			= smmu_attach_dev,
 	.detach_dev			= smmu_detach_dev,
+	.map_pages			= smmu_map_pages,
+	.unmap_pages			= smmu_unmap_pages,
+	.iova_to_phys			= smmu_iova_to_phys,
 };
-- 
2.47.0.338.g60cca15819-goog
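The parent-validity rule described in the comment ("if any of the parents is invalid, this table also needs to be freed") can be modeled in isolation. A user-space sketch, where `ptes` stands in for the patch's `walk_data->ptes` path array and the helper name is hypothetical:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PTE_VALID 1ULL

/*
 * A table visited at level `lvl` must be freed if any PTE on the
 * path from the root (ptes[0]) down to and including ptes[lvl] was
 * invalidated by the preceding unmap.
 */
static bool table_needs_free(const uint64_t *ptes, int lvl)
{
	for (int i = 0; i <= lvl; i++)
		if (!(ptes[i] & PTE_VALID))
			return true;
	return false;
}
```

This is why an intermediate table whose own PTE is still valid is nevertheless freed when one of its ancestors was invalidated: the ancestor's invalidation cut the whole subtree out of the device-visible tree.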
Date: Thu, 12 Dec 2024 18:04:05 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-42-smostafa@google.com>
Subject: [RFC PATCH v2 41/58] KVM: arm64: smmu-v3: Add DABT handler
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org

Add a data abort handler for the SMMUv3; we allow access to the EVTQ
and GERROR registers for debugging purposes.

Signed-off-by: Mostafa Saleh
---
 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c | 58 +++++++++++++++++++++
 1 file changed, 58 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
index 1821a3420a4d..2a99873d980f 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include

 #define ARM_SMMU_POLL_TIMEOUT_US	100000 /* 100ms arbitrary timeout */

@@ -1269,6 +1270,62 @@ static phys_addr_t smmu_iova_to_phys(struct kvm_hyp_iommu_domain *domain,
 	return paddr;
 }

+static bool smmu_dabt_device(struct hyp_arm_smmu_v3_device *smmu,
+			     struct kvm_cpu_context *host_ctxt,
+			     u64 esr, u32 off)
+{
+	bool is_write = esr & ESR_ELx_WNR;
+	unsigned int len = BIT((esr & ESR_ELx_SAS) >> ESR_ELx_SAS_SHIFT);
+	int rd = (esr & ESR_ELx_SRT_MASK) >> ESR_ELx_SRT_SHIFT;
+	const u32 no_access = 0;
+	const u32 read_write = (u32)(-1);
+	const u32 read_only = is_write ? no_access : read_write;
+	u32 mask = no_access;
+
+	/*
+	 * Only handle MMIO accesses with u32 size and alignment.
+	 * We don't need to change 64-bit registers for now.
+	 */
+	if ((len != sizeof(u32)) || (off & (sizeof(u32) - 1)))
+		return false;
+
+	switch (off) {
+	case ARM_SMMU_EVTQ_PROD + SZ_64K:
+		mask = read_write;
+		break;
+	case ARM_SMMU_EVTQ_CONS + SZ_64K:
+		mask = read_write;
+		break;
+	case ARM_SMMU_GERROR:
+		mask = read_only;
+		break;
+	case ARM_SMMU_GERRORN:
+		mask = read_write;
+		break;
+	};
+
+	if (!mask)
+		return false;
+	if (is_write)
+		writel_relaxed(cpu_reg(host_ctxt, rd) & mask, smmu->base + off);
+	else
+		cpu_reg(host_ctxt, rd) = readl_relaxed(smmu->base + off);
+
+	return true;
+}
+
+static bool smmu_dabt_handler(struct kvm_cpu_context *host_ctxt, u64 esr, u64 addr)
+{
+	struct hyp_arm_smmu_v3_device *smmu;
+
+	for_each_smmu(smmu) {
+		if (addr < smmu->mmio_addr || addr >= smmu->mmio_addr + smmu->mmio_size)
+			continue;
+		return smmu_dabt_device(smmu, host_ctxt, esr, addr - smmu->mmio_addr);
+	}
+	return false;
+}
+
 /* Shared with the kernel driver in EL1 */
 struct kvm_iommu_ops smmu_ops = {
 	.init				= smmu_init,
@@ -1281,4 +1338,5 @@ struct kvm_iommu_ops smmu_ops = {
 	.map_pages			= smmu_map_pages,
 	.unmap_pages			= smmu_unmap_pages,
 	.iova_to_phys			= smmu_iova_to_phys,
+	.dabt_handler			= smmu_dabt_handler,
 };
-- 
2.47.0.338.g60cca15819-goog
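The ESR decoding at the top of smmu_dabt_device can be checked in isolation. A user-space sketch, using the data-abort ISS field positions from the Arm ARM (SAS bits [23:22], SRT bits [20:16], WNR bit [6]) that the kernel's ESR_ELx_* macros encode; the struct and helper names here are illustrative:

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n)		(1ULL << (n))
#define ESR_WNR		BIT(6)
#define ESR_SAS_SHIFT	22
#define ESR_SAS		(3ULL << ESR_SAS_SHIFT)
#define ESR_SRT_SHIFT	16
#define ESR_SRT_MASK	(0x1fULL << ESR_SRT_SHIFT)

struct dabt { unsigned int len; int rd; int is_write; };

static struct dabt decode_dabt(uint64_t esr)
{
	struct dabt d = {
		/* SAS encodes log2 of the access size: 0=byte .. 3=dword */
		.len = 1U << ((esr & ESR_SAS) >> ESR_SAS_SHIFT),
		.rd = (int)((esr & ESR_SRT_MASK) >> ESR_SRT_SHIFT),
		.is_write = !!(esr & ESR_WNR),
	};
	return d;
}
```

A 32-bit access thus decodes to `len == 4`, which is the only size the handler accepts.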
Date: Thu, 12 Dec 2024 18:04:06 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-43-smostafa@google.com>
Subject: [RFC PATCH v2 42/58] iommu/arm-smmu-v3-kvm: Add host driver for pKVM
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org

From: Jean-Philippe Brucker

Under protected KVM (pKVM), the host does not have access to guest or
hypervisor memory. This means that devices owned by the host must be
isolated by the SMMU, and the hypervisor is in charge of the SMMU.

Introduce the host component that replaces the normal SMMUv3 driver
when pKVM is enabled, and sends configuration and requests to the
actual driver running in the hypervisor (EL2).

Rather than rely on regular driver probe, pKVM directly calls
kvm_arm_smmu_v3_init(), which synchronously finds all SMMUs and hands
them to the hypervisor. If the regular driver is enabled, it will not
find any free SMMU to drive once it gets probed.

Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Mostafa Saleh
---
 drivers/iommu/arm/arm-smmu-v3/Makefile        |  6 ++
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c   | 64 +++++++++++++++++++
 2 files changed, 70 insertions(+)
 create mode 100644 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c

diff --git a/drivers/iommu/arm/arm-smmu-v3/Makefile b/drivers/iommu/arm/arm-smmu-v3/Makefile
index 515a84f14783..7a182adbebc1 100644
--- a/drivers/iommu/arm/arm-smmu-v3/Makefile
+++ b/drivers/iommu/arm/arm-smmu-v3/Makefile
@@ -6,3 +6,9 @@ arm_smmu_v3-$(CONFIG_ARM_SMMU_V3_SVA) += arm-smmu-v3-sva.o
 arm_smmu_v3-$(CONFIG_TEGRA241_CMDQV) += tegra241-cmdqv.o

 obj-$(CONFIG_ARM_SMMU_V3_KUNIT_TEST) += arm-smmu-v3-test.o
+
+obj-$(CONFIG_ARM_SMMU_V3_PKVM) += arm_smmu_v3_kvm.o
+ccflags-$(CONFIG_ARM_SMMU_V3_PKVM) += -Iarch/arm64/kvm/
+arm_smmu_v3_kvm-objs-y += arm-smmu-v3-kvm.o
+arm_smmu_v3_kvm-objs-y += arm-smmu-v3-common.o
+arm_smmu_v3_kvm-objs := $(arm_smmu_v3_kvm-objs-y)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
new file mode 100644
index 000000000000..8cea33d15e08
--- /dev/null
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
@@ -0,0 +1,64 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * pKVM host driver for the Arm SMMUv3
+ *
+ * Copyright (C) 2022 Linaro Ltd.
+ */
+#include
+
+#include
+#include
+
+#include
+
+#include "arm-smmu-v3.h"
+
+extern struct kvm_iommu_ops kvm_nvhe_sym(smmu_ops);
+
+static int kvm_arm_smmu_probe(struct platform_device *pdev)
+{
+	return -ENOSYS;
+}
+
+static void kvm_arm_smmu_remove(struct platform_device *pdev)
+{
+}
+
+static const struct of_device_id arm_smmu_of_match[] = {
+	{ .compatible = "arm,smmu-v3", },
+	{ },
+};
+
+static struct platform_driver kvm_arm_smmu_driver = {
+	.driver = {
+		.name = "kvm-arm-smmu-v3",
+		.of_match_table = arm_smmu_of_match,
+	},
+	.remove = kvm_arm_smmu_remove,
+};
+
+static int kvm_arm_smmu_v3_init_drv(void)
+{
+	return platform_driver_probe(&kvm_arm_smmu_driver, kvm_arm_smmu_probe);
+}
+
+static void kvm_arm_smmu_v3_remove_drv(void)
+{
+	platform_driver_unregister(&kvm_arm_smmu_driver);
+}
+
+struct kvm_iommu_driver kvm_smmu_v3_ops = {
+	.init_driver = kvm_arm_smmu_v3_init_drv,
+	.remove_driver = kvm_arm_smmu_v3_remove_drv,
+};
+
+static int kvm_arm_smmu_v3_register(void)
+{
+	if (!is_protected_kvm_enabled())
+		return 0;
+
+	return kvm_iommu_register_driver(&kvm_smmu_v3_ops,
+					 kern_hyp_va(lm_alias(&kvm_nvhe_sym(smmu_ops))));
+};
+
+core_initcall(kvm_arm_smmu_v3_register);
-- 
2.47.0.338.g60cca15819-goog
d=subspace.kernel.org; s=arc-20240116; t=1734026777; cv=none; b=OrGcxBoD5c5wMeHLIPt4nEHDQQedZXyZDf+tCPX1I09mYvdmZjgyoa90164N7e8E8S1g9zFad7KciLe7vcRp3tbGQpcT110yLNvdpkGPip1ErLxzUxQRsSIPsHIFVPz2XCIR2+mefdCtosDMndsuVMbkKdb6L5eNpS77Tv++dxM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734026777; c=relaxed/simple; bh=ydpi1l7ULDa9MBB6BZ9Z9gn/EuxJceatGhHzoQ9+coc=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=eqUkEHsdDt5UEx8+1DGgYhJyTsyP6Og9VS4TZNHnatrSA6SdLHmGVcZ4LkRgRw7lmOTqw8d5snVMLsghPrrNyQj9UiD+cZlYOyziAvPCUVX1cUEfTNqP7loI9HCEE98vEjYLPVlW2p8V73InqeRhOyYJF7/dqU6qNLmT59g066s= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--smostafa.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=ngWvNVnA; arc=none smtp.client-ip=209.85.128.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--smostafa.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="ngWvNVnA" Received: by mail-wm1-f74.google.com with SMTP id 5b1f17b1804b1-4361eb83f46so8314885e9.3 for ; Thu, 12 Dec 2024 10:06:15 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1734026774; x=1734631574; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=EVsW/B0gP9PKHwLUmI8ZMBosi9aDXXUah+1PSSjDx2o=; b=ngWvNVnApZSqj6mcz3LgcQBAmOnwZwpHt+KzNz1000a55j2c45xInqrN2langh52Kd EW67HTHpgOxVFJlYvgQBDRaGr3XJx9gmHo6VO6OKCf+nO9CxnQeYyWPYS51hJDWzirHV uy4PW3e3xFHAxm5A9GB/LQpYJN+K2cjqcnydfa1SNh4XlxeqgPbdqyZAOy3WxvwfygQB 
From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:04:07 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-44-smostafa@google.com>
Subject: [RFC PATCH v2 43/58] iommu/arm-smmu-v3-kvm: Pass a list of SMMU devices to the hypervisor
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh

From: Jean-Philippe Brucker

Build a list of SMMU devices and donate the page holding it to the
hypervisor. At this point the host is still trusted, so this is a good
opportunity to provide more information about the system: for example,
which devices are owned by the host (perhaps via the VMID and SW bits
in the stream table, although we currently populate the stream table
lazily).

Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Mostafa Saleh
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c | 128 +++++++++++++++++-
 1 file changed, 126 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
index 8cea33d15e08..e2d9bd97ddc5 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
@@ -15,9 +15,73 @@
 
 extern struct kvm_iommu_ops kvm_nvhe_sym(smmu_ops);
 
+struct host_arm_smmu_device {
+	struct arm_smmu_device smmu;
+	pkvm_handle_t id;
+};
+
+#define smmu_to_host(_smmu) \
+	container_of(_smmu, struct host_arm_smmu_device, smmu);
+
+static size_t kvm_arm_smmu_cur;
+static size_t kvm_arm_smmu_count;
+static struct hyp_arm_smmu_v3_device *kvm_arm_smmu_array;
+
 static int kvm_arm_smmu_probe(struct platform_device *pdev)
 {
-	return -ENOSYS;
+	int ret;
+	size_t size;
+	phys_addr_t ioaddr;
+	struct resource *res;
+	struct arm_smmu_device *smmu;
+	struct device *dev = &pdev->dev;
+	struct host_arm_smmu_device *host_smmu;
+	struct hyp_arm_smmu_v3_device *hyp_smmu;
+
+	if (kvm_arm_smmu_cur >= kvm_arm_smmu_count)
+		return -ENOSPC;
+
+	hyp_smmu = &kvm_arm_smmu_array[kvm_arm_smmu_cur];
+
+	host_smmu = devm_kzalloc(dev, sizeof(*host_smmu), GFP_KERNEL);
+	if (!host_smmu)
+		return -ENOMEM;
+
+	smmu = &host_smmu->smmu;
+	smmu->dev = dev;
+
+	ret = arm_smmu_fw_probe(pdev, smmu);
+	if (ret)
+		return ret;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	size = resource_size(res);
+	if (size < SZ_128K) {
+		dev_err(dev, "unsupported MMIO region size (%pr)\n", res);
+		return -EINVAL;
+	}
+	ioaddr = res->start;
+	host_smmu->id = kvm_arm_smmu_cur;
+
+	smmu->base = devm_ioremap_resource(dev, res);
+	if (IS_ERR(smmu->base))
+		return PTR_ERR(smmu->base);
+
+	ret = arm_smmu_device_hw_probe(smmu);
+	if (ret)
+		return ret;
+
+	platform_set_drvdata(pdev, smmu);
+
+	/* Hypervisor parameters */
+	hyp_smmu->pgsize_bitmap = smmu->pgsize_bitmap;
+	hyp_smmu->oas = smmu->oas;
+	hyp_smmu->ias = smmu->ias;
+	hyp_smmu->mmio_addr = ioaddr;
+	hyp_smmu->mmio_size = size;
+	kvm_arm_smmu_cur++;
+
+	return arm_smmu_register_iommu(smmu, &kvm_arm_smmu_ops, ioaddr);
 }
 
 static void kvm_arm_smmu_remove(struct platform_device *pdev)
@@ -37,9 +101,69 @@ static struct platform_driver kvm_arm_smmu_driver = {
 	.remove = kvm_arm_smmu_remove,
 };
 
+static int kvm_arm_smmu_array_alloc(void)
+{
+	int smmu_order;
+	struct device_node *np;
+
+	kvm_arm_smmu_count = 0;
+	for_each_compatible_node(np, NULL, "arm,smmu-v3")
+		kvm_arm_smmu_count++;
+
+	if (!kvm_arm_smmu_count)
+		return 0;
+
+	/* Allocate the parameter list shared with the hypervisor */
+	smmu_order = get_order(kvm_arm_smmu_count * sizeof(*kvm_arm_smmu_array));
+	kvm_arm_smmu_array = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+						      smmu_order);
+	if (!kvm_arm_smmu_array)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void kvm_arm_smmu_array_free(void)
+{
+	int order;
+
+	order = get_order(kvm_arm_smmu_count * sizeof(*kvm_arm_smmu_array));
+	free_pages((unsigned long)kvm_arm_smmu_array, order);
+}
+
 static int kvm_arm_smmu_v3_init_drv(void)
 {
-	return platform_driver_probe(&kvm_arm_smmu_driver, kvm_arm_smmu_probe);
+	int ret;
+
+	/*
+	 * Check whether any device owned by the host is behind an SMMU.
+	 */
+	ret = kvm_arm_smmu_array_alloc();
+	if (ret || !kvm_arm_smmu_count)
+		return ret;
+
+	ret = platform_driver_probe(&kvm_arm_smmu_driver, kvm_arm_smmu_probe);
+	if (ret)
+		goto err_free;
+
+	if (kvm_arm_smmu_cur != kvm_arm_smmu_count) {
+		/* A device exists but failed to probe */
+		ret = -EUNATCH;
+		goto err_free;
+	}
+
+	/*
+	 * These variables are stored in the nVHE image, and won't be accessible
+	 * after KVM initialization. Ownership of kvm_arm_smmu_array will be
+	 * transferred to the hypervisor as well.
+	 */
+	kvm_hyp_arm_smmu_v3_smmus = kvm_arm_smmu_array;
+	kvm_hyp_arm_smmu_v3_count = kvm_arm_smmu_count;
+	return 0;
+
+err_free:
+	kvm_arm_smmu_array_free();
+	return ret;
 }
 
 static void kvm_arm_smmu_v3_remove_drv(void)
-- 
2.47.0.338.g60cca15819-goog
From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:04:08 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-45-smostafa@google.com>
Subject: [RFC PATCH v2 44/58] iommu/arm-smmu-v3-kvm: Validate device features
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh

From: Jean-Philippe Brucker

The KVM hypervisor driver supports a small subset of features. Ensure
the implementation is compatible, and disable some unused features.

Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Mostafa Saleh
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c | 43 +++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
index e2d9bd97ddc5..4b0c9ff6e7f1 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
@@ -27,6 +27,45 @@ static size_t kvm_arm_smmu_cur;
 static size_t kvm_arm_smmu_count;
 static struct hyp_arm_smmu_v3_device *kvm_arm_smmu_array;
 
+static bool kvm_arm_smmu_validate_features(struct arm_smmu_device *smmu)
+{
+	unsigned int required_features =
+		ARM_SMMU_FEAT_TT_LE;
+	unsigned int forbidden_features =
+		ARM_SMMU_FEAT_STALL_FORCE;
+	unsigned int keep_features =
+		ARM_SMMU_FEAT_2_LVL_STRTAB |
+		ARM_SMMU_FEAT_2_LVL_CDTAB |
+		ARM_SMMU_FEAT_TT_LE |
+		ARM_SMMU_FEAT_SEV |
+		ARM_SMMU_FEAT_COHERENCY |
+		ARM_SMMU_FEAT_TRANS_S1 |
+		ARM_SMMU_FEAT_TRANS_S2 |
+		ARM_SMMU_FEAT_VAX |
+		ARM_SMMU_FEAT_RANGE_INV;
+
+	if (smmu->options & ARM_SMMU_OPT_PAGE0_REGS_ONLY) {
+		dev_err(smmu->dev, "unsupported layout\n");
+		return false;
+	}
+
+	if ((smmu->features & required_features) != required_features) {
+		dev_err(smmu->dev, "missing features 0x%x\n",
+			required_features & ~smmu->features);
+		return false;
+	}
+
+	if (smmu->features & forbidden_features) {
+		dev_err(smmu->dev, "features 0x%x forbidden\n",
+			smmu->features & forbidden_features);
+		return false;
+	}
+
+	smmu->features &= keep_features;
+
+	return true;
+}
+
 static int kvm_arm_smmu_probe(struct platform_device *pdev)
 {
 	int ret;
@@ -71,6 +110,9 @@ static int kvm_arm_smmu_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;
 
+	if (!kvm_arm_smmu_validate_features(smmu))
+		return -ENODEV;
+
 	platform_set_drvdata(pdev, smmu);
 
 	/* Hypervisor parameters */
@@ -79,6 +121,7 @@ static int kvm_arm_smmu_probe(struct platform_device *pdev)
 	hyp_smmu->ias = smmu->ias;
 	hyp_smmu->mmio_addr = ioaddr;
 	hyp_smmu->mmio_size = size;
+	hyp_smmu->features = smmu->features;
 	kvm_arm_smmu_cur++;
 
 	return arm_smmu_register_iommu(smmu, &kvm_arm_smmu_ops, ioaddr);
-- 
2.47.0.338.g60cca15819-goog
From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:04:09 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-46-smostafa@google.com>
Subject: [RFC PATCH v2 45/58] iommu/arm-smmu-v3-kvm: Allocate structures and reset device
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh

From: Jean-Philippe Brucker

Allocate the structures that will be shared between hypervisor and SMMU:
command queue and stream table. Install them in the MMIO registers, along
with some configuration bits. After hyp initialization, the host won't
have access to those pages anymore.

Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Mostafa Saleh
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c | 53 +++++++++++++++++++
 1 file changed, 53 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
index 4b0c9ff6e7f1..e4a5bdc830bc 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
@@ -18,6 +18,7 @@ extern struct kvm_iommu_ops kvm_nvhe_sym(smmu_ops);
 struct host_arm_smmu_device {
 	struct arm_smmu_device smmu;
 	pkvm_handle_t id;
+	u32 boot_gbpa;
 };
 
 #define smmu_to_host(_smmu) \
@@ -66,6 +67,35 @@ static bool kvm_arm_smmu_validate_features(struct arm_smmu_device *smmu)
 	return true;
 }
 
+static int kvm_arm_smmu_device_reset(struct host_arm_smmu_device *host_smmu)
+{
+	int ret;
+	u32 reg;
+	struct arm_smmu_device *smmu = &host_smmu->smmu;
+
+	reg = readl_relaxed(smmu->base + ARM_SMMU_CR0);
+	if (reg & CR0_SMMUEN)
+		dev_warn(smmu->dev, "SMMU currently enabled! Resetting...\n");
+
+	/* Disable bypass */
+	host_smmu->boot_gbpa = readl_relaxed(smmu->base + ARM_SMMU_GBPA);
+	ret = arm_smmu_update_gbpa(smmu, GBPA_ABORT, 0);
+	if (ret)
+		return ret;
+
+	ret = arm_smmu_device_disable(smmu);
+	if (ret)
+		return ret;
+
+	/* Stream table */
+	arm_smmu_write_strtab(smmu);
+
+	/* Command queue */
+	writeq_relaxed(smmu->cmdq.q.q_base, smmu->base + ARM_SMMU_CMDQ_BASE);
+
+	return 0;
+}
+
 static int kvm_arm_smmu_probe(struct platform_device *pdev)
 {
 	int ret;
@@ -113,6 +143,20 @@ static int kvm_arm_smmu_probe(struct platform_device *pdev)
 	if (!kvm_arm_smmu_validate_features(smmu))
 		return -ENODEV;
 
+	ret = arm_smmu_init_one_queue(smmu, &smmu->cmdq.q, smmu->base,
+				      ARM_SMMU_CMDQ_PROD, ARM_SMMU_CMDQ_CONS,
+				      CMDQ_ENT_DWORDS, "cmdq");
+	if (ret)
+		return ret;
+
+	ret = arm_smmu_init_strtab(smmu);
+	if (ret)
+		return ret;
+
+	ret = kvm_arm_smmu_device_reset(host_smmu);
+	if (ret)
+		return ret;
+
 	platform_set_drvdata(pdev, smmu);
 
 	/* Hypervisor parameters */
@@ -129,6 +173,15 @@ static int kvm_arm_smmu_probe(struct platform_device *pdev)
 
 static void kvm_arm_smmu_remove(struct platform_device *pdev)
 {
+	struct arm_smmu_device *smmu = platform_get_drvdata(pdev);
+	struct host_arm_smmu_device *host_smmu = smmu_to_host(smmu);
+
+	/*
+	 * There was an error during hypervisor setup. The hyp driver may
+	 * have already enabled the device, so disable it.
+	 */
+	arm_smmu_device_disable(smmu);
+	arm_smmu_update_gbpa(smmu, host_smmu->boot_gbpa, GBPA_ABORT);
 }
 
 static const struct of_device_id arm_smmu_of_match[] = {
-- 
2.47.0.338.g60cca15819-goog
From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:04:10 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-47-smostafa@google.com>
Subject: [RFC PATCH v2 46/58] KVM: arm64: Add function to topup generic allocator
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh

Soon, the IOMMU driver might need to top up the IOMMU pool from the
map_pages IOMMU operation, which takes a gfp flag since it can be
called from atomic context. Add a function that tops up an allocator
identified by ID and also accepts gfp flags.

Signed-off-by: Mostafa Saleh
---
 arch/arm64/include/asm/kvm_host.h |  4 ++++
 arch/arm64/kvm/mmu.c              | 20 ++++++++++++++++++++
 arch/arm64/kvm/pkvm.c             | 20 ++++++++++++++++++++
 3 files changed, 44 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a3b5d8dd8995..59a23828bd0e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -155,6 +155,8 @@ static inline void __free_hyp_memcache(struct kvm_hyp_memcache *mc,
 
 void free_hyp_memcache(struct kvm_hyp_memcache *mc);
 int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages, unsigned long order);
+int topup_hyp_memcache_gfp(struct kvm_hyp_memcache *mc, unsigned long min_pages,
+			   unsigned long order, gfp_t gfp);
 
 static inline void init_hyp_memcache(struct kvm_hyp_memcache *mc)
 {
@@ -1628,6 +1630,8 @@ void kvm_set_vm_id_reg(struct kvm *kvm, u32 reg, u64 val);
 #define HYP_ALLOC_MGT_IOMMU_ID 1
 
 unsigned long __pkvm_reclaim_hyp_alloc_mgt(unsigned long nr_pages);
+int __pkvm_topup_hyp_alloc_mgt_gfp(unsigned long id, unsigned long nr_pages,
+				   unsigned long sz_alloc, gfp_t gfp);
 
 struct kvm_iommu_driver {
 	int (*init_driver)(void);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index ef7e8c156afb..229338877c59 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1225,6 +1225,11 @@ static void *hyp_mc_alloc_fn(void *flags, unsigned long order)
 	return addr;
 }
 
+static void *hyp_mc_alloc_gfp_fn(void *flags, unsigned long order)
+{
+	return (void *)__get_free_pages(*(gfp_t *)flags, order);
+}
+
 void free_hyp_memcache(struct kvm_hyp_memcache *mc)
 {
 	unsigned long flags = mc->flags;
@@ -1249,6 +1254,21 @@ int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages,
 			    kvm_host_pa, (void *)flags, order);
 }
 
+int topup_hyp_memcache_gfp(struct kvm_hyp_memcache *mc, unsigned long min_pages,
+			   unsigned long order, gfp_t gfp)
+{
+	void *flags = &gfp;
+
+	if (!is_protected_kvm_enabled())
+		return 0;
+
+	if (order > PAGE_SHIFT)
+		return -E2BIG;
+
+	return __topup_hyp_memcache(mc, min_pages, hyp_mc_alloc_gfp_fn,
+				    kvm_host_pa, flags, order);
+}
+
 /**
  * kvm_phys_addr_ioremap - map a device range to guest IPA
  *
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index e6df35aae840..0c45acbbff6e 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -1114,3 +1114,23 @@ unsigned long __pkvm_reclaim_hyp_alloc_mgt(unsigned long nr_pages)
 
 	return reclaimed;
 }
+
+int __pkvm_topup_hyp_alloc_mgt_gfp(unsigned long id, unsigned long nr_pages,
+				   unsigned long sz_alloc, gfp_t gfp)
+{
+	struct kvm_hyp_memcache mc;
+	int ret;
+
+	init_hyp_memcache(&mc);
+
+	ret = topup_hyp_memcache_gfp(&mc, nr_pages, get_order(sz_alloc), gfp);
+	if (ret)
+		return ret;
+
+	ret = kvm_call_hyp_nvhe(__pkvm_hyp_alloc_mgt_refill, id,
+				mc.head, mc.nr_pages);
+	if (ret)
+		free_hyp_memcache(&mc);
+
+	return ret;
+}
-- 
2.47.0.338.g60cca15819-goog
Date: Thu, 12 Dec 2024 18:04:11 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-48-smostafa@google.com>
Subject: [RFC PATCH v2 47/58] KVM: arm64: Add macro for SMCCC call with all returns
From: Mostafa Saleh <smostafa@google.com>
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org,
 oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org,
 robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca,
 nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com,
 tabba@google.com, danielmentz@google.com, tzukui@google.com,
 Mostafa Saleh <smostafa@google.com>
Add a macro that returns all the SMCCC return registers from a
hypercall, instead of only the error code.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
---
 arch/arm64/include/asm/kvm_host.h | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 59a23828bd0e..3cdc99ebdd0d 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1248,7 +1248,18 @@ void kvm_arm_resume_guest(struct kvm *kvm);
 #define vcpu_has_run_once(vcpu)	!!rcu_access_pointer((vcpu)->pid)

 #ifndef __KVM_NVHE_HYPERVISOR__
-#define kvm_call_hyp_nvhe(f, ...)				\
+#define kvm_call_hyp_nvhe_smccc(f, ...)				\
+	({							\
+		struct arm_smccc_res res;			\
+								\
+		arm_smccc_1_1_hvc(KVM_HOST_SMCCC_FUNC(f),	\
+				  ##__VA_ARGS__, &res);		\
+		WARN_ON(res.a0 != SMCCC_RET_SUCCESS);		\
+								\
+		res;						\
+	})
+
+#define kvm_call_hyp_nvhe(f, ...)				\
 	({							\
 		struct arm_smccc_res res;			\
 								\
-- 
2.47.0.338.g60cca15819-goog
Date: Thu, 12 Dec 2024 18:04:12 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-49-smostafa@google.com>
Subject: [RFC PATCH v2 48/58] iommu/arm-smmu-v3-kvm: Add function to topup
 IOMMU allocator
From: Mostafa Saleh <smostafa@google.com>
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org,
 oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org,
 robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca,
 nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com,
 tabba@google.com, danielmentz@google.com, tzukui@google.com,
 Mostafa Saleh <smostafa@google.com>
The hypervisor encodes requests for memory allocation in the return
registers of HVCs. Add a function that checks those return values and
tops up the IOMMU allocator in response, and a macro that wraps an HVC
with a call to this function.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c   | 40 +++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
index e4a5bdc830bc..dab2d59b5a88 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
@@ -4,6 +4,7 @@
  *
  * Copyright (C) 2022 Linaro Ltd.
  */
+#include
 #include

 #include
@@ -28,6 +29,45 @@ static size_t kvm_arm_smmu_cur;
 static size_t kvm_arm_smmu_count;
 static struct hyp_arm_smmu_v3_device *kvm_arm_smmu_array;

+static int kvm_arm_smmu_topup_memcache(struct arm_smccc_res *res, gfp_t gfp)
+{
+	struct kvm_hyp_req req;
+
+	hyp_reqs_smccc_decode(res, &req);
+
+	if ((res->a1 == -ENOMEM) && (req.type != KVM_HYP_REQ_TYPE_MEM)) {
+		/*
+		 * There is no way for drivers to populate hyp_alloc requests,
+		 * so -ENOMEM + no request indicates that.
+		 */
+		return __pkvm_topup_hyp_alloc(1);
+	} else if (req.type != KVM_HYP_REQ_TYPE_MEM) {
+		return -EBADE;
+	}
+
+	if (req.mem.dest == REQ_MEM_DEST_HYP_IOMMU) {
+		return __pkvm_topup_hyp_alloc_mgt_gfp(HYP_ALLOC_MGT_IOMMU_ID,
+						      req.mem.nr_pages,
+						      req.mem.sz_alloc,
+						      gfp);
+	} else if (req.mem.dest == REQ_MEM_DEST_HYP_ALLOC) {
+		/* Fill hyp alloc */
+		return __pkvm_topup_hyp_alloc(req.mem.nr_pages);
+	}
+
+	pr_err("Bogus mem request");
+	return -EBADE;
+}
+
+#define kvm_call_hyp_nvhe_mc(...)				\
+({								\
+	struct arm_smccc_res __res;				\
+	do {							\
+		__res = kvm_call_hyp_nvhe_smccc(__VA_ARGS__);	\
+	} while (__res.a1 && !kvm_arm_smmu_topup_memcache(&__res, GFP_KERNEL));\
+	__res.a1;						\
+})
+
 static bool kvm_arm_smmu_validate_features(struct arm_smmu_device *smmu)
 {
 	unsigned int required_features =
-- 
2.47.0.338.g60cca15819-goog
Date: Thu, 12 Dec 2024 18:04:13 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-50-smostafa@google.com>
Subject: [RFC PATCH v2 49/58] iommu/arm-smmu-v3-kvm: Add IOMMU ops
From: Mostafa Saleh <smostafa@google.com>
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org,
 oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org,
 robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca,
 nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com,
 tabba@google.com, danielmentz@google.com, tzukui@google.com,
 Mostafa Saleh <smostafa@google.com>

Add iommu_ops: attach_dev, release_device, probe_device,
domain_alloc/free and capable, plus some ops shared with the kernel
SMMUv3 driver: device_group, of_xlate and get_resv_regions.
Other ops, such as map/unmap and iova_to_phys, are added next.
Signed-off-by: Mostafa Saleh <smostafa@google.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c   | 284 ++++++++++++++++++
 1 file changed, 284 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
index dab2d59b5a88..071743f5acf9 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
@@ -7,6 +7,7 @@
 #include
 #include

+#include
 #include
 #include

@@ -25,9 +26,26 @@ struct host_arm_smmu_device {
 #define smmu_to_host(_smmu) \
	container_of(_smmu, struct host_arm_smmu_device, smmu);

+struct kvm_arm_smmu_master {
+	struct arm_smmu_device		*smmu;
+	struct device			*dev;
+	struct kvm_arm_smmu_domain	*domain;
+};
+
+struct kvm_arm_smmu_domain {
+	struct iommu_domain		domain;
+	struct arm_smmu_device		*smmu;
+	struct mutex			init_mutex;
+	pkvm_handle_t			id;
+};
+
+#define to_kvm_smmu_domain(_domain) \
+	container_of(_domain, struct kvm_arm_smmu_domain, domain)
+
 static size_t kvm_arm_smmu_cur;
 static size_t kvm_arm_smmu_count;
 static struct hyp_arm_smmu_v3_device *kvm_arm_smmu_array;
+static DEFINE_IDA(kvm_arm_smmu_domain_ida);

 static int kvm_arm_smmu_topup_memcache(struct arm_smccc_res *res, gfp_t gfp)
 {
@@ -68,6 +86,267 @@ static int kvm_arm_smmu_topup_memcache(struct arm_smccc_res *res, gfp_t gfp)
 	__res.a1;						\
 })

+static struct platform_driver kvm_arm_smmu_driver;
+
+static struct arm_smmu_device *
+kvm_arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
+{
+	struct device *dev;
+
+	dev = driver_find_device_by_fwnode(&kvm_arm_smmu_driver.driver, fwnode);
+	put_device(dev);
+	return dev ? dev_get_drvdata(dev) : NULL;
+}
+
+static struct iommu_ops kvm_arm_smmu_ops;
+
+static struct iommu_device *kvm_arm_smmu_probe_device(struct device *dev)
+{
+	struct arm_smmu_device *smmu;
+	struct kvm_arm_smmu_master *master;
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+
+	if (WARN_ON_ONCE(dev_iommu_priv_get(dev)))
+		return ERR_PTR(-EBUSY);
+
+	smmu = kvm_arm_smmu_get_by_fwnode(fwspec->iommu_fwnode);
+	if (!smmu)
+		return ERR_PTR(-ENODEV);
+
+	master = kzalloc(sizeof(*master), GFP_KERNEL);
+	if (!master)
+		return ERR_PTR(-ENOMEM);
+
+	master->dev = dev;
+	master->smmu = smmu;
+	dev_iommu_priv_set(dev, master);
+
+	return &smmu->iommu;
+}
+
+static struct iommu_domain *kvm_arm_smmu_domain_alloc(unsigned type)
+{
+	struct kvm_arm_smmu_domain *kvm_smmu_domain;
+
+	/*
+	 * We don't support
+	 * - IOMMU_DOMAIN_DMA_FQ because lazy unmap would clash with memory
+	 *   donation to guests.
+	 * - IOMMU_DOMAIN_IDENTITY: Requires a stage-2 only transparent domain.
+	 */
+	if (type != IOMMU_DOMAIN_DMA &&
+	    type != IOMMU_DOMAIN_UNMANAGED)
+		return ERR_PTR(-EOPNOTSUPP);
+
+	kvm_smmu_domain = kzalloc(sizeof(*kvm_smmu_domain), GFP_KERNEL);
+	if (!kvm_smmu_domain)
+		return ERR_PTR(-ENOMEM);
+
+	mutex_init(&kvm_smmu_domain->init_mutex);
+
+	return &kvm_smmu_domain->domain;
+}
+
+static int kvm_arm_smmu_domain_finalize(struct kvm_arm_smmu_domain *kvm_smmu_domain,
+					struct kvm_arm_smmu_master *master)
+{
+	int ret = 0;
+	struct arm_smmu_device *smmu = master->smmu;
+	unsigned int max_domains;
+	enum kvm_arm_smmu_domain_type type;
+	struct io_pgtable_cfg cfg;
+	unsigned long ias;
+
+	if (kvm_smmu_domain->smmu && (kvm_smmu_domain->smmu != smmu))
+		return -EINVAL;
+
+	if (kvm_smmu_domain->smmu)
+		return 0;
+	/* Default to stage-1. */
+	if (smmu->features & ARM_SMMU_FEAT_TRANS_S1) {
+		ias = (smmu->features & ARM_SMMU_FEAT_VAX) ? 52 : 48;
+		cfg = (struct io_pgtable_cfg) {
+			.fmt = ARM_64_LPAE_S1,
+			.pgsize_bitmap = smmu->pgsize_bitmap,
+			.ias = min_t(unsigned long, ias, VA_BITS),
+			.oas = smmu->ias,
+			.coherent_walk = smmu->features & ARM_SMMU_FEAT_COHERENCY,
+		};
+		ret = io_pgtable_configure(&cfg);
+		if (ret)
+			return ret;
+
+		type = KVM_ARM_SMMU_DOMAIN_S1;
+		kvm_smmu_domain->domain.pgsize_bitmap = cfg.pgsize_bitmap;
+		kvm_smmu_domain->domain.geometry.aperture_end = (1UL << cfg.ias) - 1;
+		max_domains = 1 << smmu->asid_bits;
+	} else {
+		cfg = (struct io_pgtable_cfg) {
+			.fmt = ARM_64_LPAE_S2,
+			.pgsize_bitmap = smmu->pgsize_bitmap,
+			.ias = smmu->ias,
+			.oas = smmu->oas,
+			.coherent_walk = smmu->features & ARM_SMMU_FEAT_COHERENCY,
+		};
+		ret = io_pgtable_configure(&cfg);
+		if (ret)
+			return ret;
+
+		type = KVM_ARM_SMMU_DOMAIN_S2;
+		kvm_smmu_domain->domain.pgsize_bitmap = cfg.pgsize_bitmap;
+		kvm_smmu_domain->domain.geometry.aperture_end = (1UL << cfg.ias) - 1;
+		max_domains = 1 << smmu->vmid_bits;
+	}
+	kvm_smmu_domain->domain.geometry.force_aperture = true;
+
+	/*
+	 * The hypervisor uses the domain_id for asid/vmid so it has to be
+	 * unique, and it has to be in range of this smmu, which can be
+	 * either 8 or 16 bits.
+	 */
+	ret = ida_alloc_range(&kvm_arm_smmu_domain_ida, 0,
+			      min(KVM_IOMMU_MAX_DOMAINS, max_domains), GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+
+	kvm_smmu_domain->id = ret;
+
+	ret = kvm_call_hyp_nvhe_mc(__pkvm_host_iommu_alloc_domain,
+				   kvm_smmu_domain->id, type);
+	if (ret) {
+		ida_free(&kvm_arm_smmu_domain_ida, kvm_smmu_domain->id);
+		return ret;
+	}
+
+	kvm_smmu_domain->smmu = smmu;
+	return 0;
+}
+
+static void kvm_arm_smmu_domain_free(struct iommu_domain *domain)
+{
+	int ret;
+	struct kvm_arm_smmu_domain *kvm_smmu_domain = to_kvm_smmu_domain(domain);
+	struct arm_smmu_device *smmu = kvm_smmu_domain->smmu;
+
+	if (smmu) {
+		ret = kvm_call_hyp_nvhe(__pkvm_host_iommu_free_domain, kvm_smmu_domain->id);
+		ida_free(&kvm_arm_smmu_domain_ida, kvm_smmu_domain->id);
+	}
+	kfree(kvm_smmu_domain);
+}
+
+static int kvm_arm_smmu_detach_dev(struct host_arm_smmu_device *host_smmu,
+				   struct kvm_arm_smmu_master *master)
+{
+	int i, ret;
+	struct arm_smmu_device *smmu = &host_smmu->smmu;
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(master->dev);
+	struct kvm_arm_smmu_domain *domain = master->domain;
+
+	if (!domain)
+		return 0;
+
+	for (i = 0; i < fwspec->num_ids; i++) {
+		int sid = fwspec->ids[i];
+
+		ret = kvm_call_hyp_nvhe(__pkvm_host_iommu_detach_dev,
+					host_smmu->id, domain->id, sid, 0);
+		if (ret) {
+			dev_err(smmu->dev, "cannot detach device %s (0x%x): %d\n",
+				dev_name(master->dev), sid, ret);
+			break;
+		}
+	}
+
+	master->domain = NULL;
+
+	return ret;
+}
+
+static void kvm_arm_smmu_release_device(struct device *dev)
+{
+	struct kvm_arm_smmu_master *master = dev_iommu_priv_get(dev);
+	struct host_arm_smmu_device *host_smmu = smmu_to_host(master->smmu);
+
+	kvm_arm_smmu_detach_dev(host_smmu, master);
+	kfree(master);
+	iommu_fwspec_free(dev);
+}
+
+static int kvm_arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
+{
+	int i, ret;
+	struct arm_smmu_device *smmu;
+	struct host_arm_smmu_device *host_smmu;
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+	struct kvm_arm_smmu_master *master = dev_iommu_priv_get(dev);
+	struct kvm_arm_smmu_domain *kvm_smmu_domain = to_kvm_smmu_domain(domain);
+
+	if (!master)
+		return -ENODEV;
+
+	smmu = master->smmu;
+	host_smmu = smmu_to_host(smmu);
+
+	ret = kvm_arm_smmu_detach_dev(host_smmu, master);
+	if (ret)
+		return ret;
+
+	mutex_lock(&kvm_smmu_domain->init_mutex);
+	ret = kvm_arm_smmu_domain_finalize(kvm_smmu_domain, master);
+	mutex_unlock(&kvm_smmu_domain->init_mutex);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < fwspec->num_ids; i++) {
+		int sid = fwspec->ids[i];
+
+		ret = kvm_call_hyp_nvhe_mc(__pkvm_host_iommu_attach_dev,
+					   host_smmu->id, kvm_smmu_domain->id,
+					   sid, 0, 0);
+		if (ret) {
+			dev_err(smmu->dev, "cannot attach device %s (0x%x): %d\n",
+				dev_name(dev), sid, ret);
+			goto out_ret;
+		}
+	}
+	master->domain = kvm_smmu_domain;
+
+out_ret:
+	if (ret)
+		kvm_arm_smmu_detach_dev(host_smmu, master);
+	return ret;
+}
+
+static bool kvm_arm_smmu_capable(struct device *dev, enum iommu_cap cap)
+{
+	struct kvm_arm_smmu_master *master = dev_iommu_priv_get(dev);
+
+	switch (cap) {
+	case IOMMU_CAP_CACHE_COHERENCY:
+		return master->smmu->features & ARM_SMMU_FEAT_COHERENCY;
+	case IOMMU_CAP_NOEXEC:
+	default:
+		return false;
+	}
+}
+
+static struct iommu_ops kvm_arm_smmu_ops = {
+	.capable		= kvm_arm_smmu_capable,
+	.device_group		= arm_smmu_device_group,
+	.of_xlate		= arm_smmu_of_xlate,
+	.get_resv_regions	= arm_smmu_get_resv_regions,
+	.probe_device		= kvm_arm_smmu_probe_device,
+	.release_device		= kvm_arm_smmu_release_device,
+	.domain_alloc		= kvm_arm_smmu_domain_alloc,
+	.pgsize_bitmap		= -1UL,
+	.owner			= THIS_MODULE,
+	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.attach_dev	= kvm_arm_smmu_attach_dev,
+		.free		= kvm_arm_smmu_domain_free,
+	}
+};
+
 static bool kvm_arm_smmu_validate_features(struct arm_smmu_device *smmu)
 {
 	unsigned int required_features =
@@ -183,6 +462,11 @@ static int kvm_arm_smmu_probe(struct platform_device *pdev)
 	if (!kvm_arm_smmu_validate_features(smmu))
 		return -ENODEV;

+	if (kvm_arm_smmu_ops.pgsize_bitmap == -1UL)
+		kvm_arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
+	else
+		kvm_arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
+
 	ret = arm_smmu_init_one_queue(smmu, &smmu->cmdq.q, smmu->base,
				      ARM_SMMU_CMDQ_PROD, ARM_SMMU_CMDQ_CONS,
				      CMDQ_ENT_DWORDS, "cmdq");
-- 
2.47.0.338.g60cca15819-goog
Date: Thu, 12 Dec 2024 18:04:14 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-51-smostafa@google.com>
Subject: [RFC PATCH v2 50/58] iommu/arm-smmu-v3-kvm: Add map, unmap and
 iova_to_phys operations
From: Mostafa Saleh <smostafa@google.com>
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org,
 oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org,
 robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca,
 nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com,
 tabba@google.com, danielmentz@google.com, tzukui@google.com,
 Mostafa Saleh <smostafa@google.com>

Add map, unmap and iova_to_phys, which are forwarded to the hypervisor.
Signed-off-by: Mostafa Saleh <smostafa@google.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c   | 72 +++++++++++++++++++
 1 file changed, 72 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
index 071743f5acf9..82f0191b222c 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
@@ -331,6 +331,75 @@ static bool kvm_arm_smmu_capable(struct device *dev, enum iommu_cap cap)
 	}
 }

+static int kvm_arm_smmu_map_pages(struct iommu_domain *domain,
+				  unsigned long iova, phys_addr_t paddr,
+				  size_t pgsize, size_t pgcount, int prot,
+				  gfp_t gfp, size_t *total_mapped)
+{
+	size_t mapped;
+	size_t size = pgsize * pgcount;
+	struct kvm_arm_smmu_domain *kvm_smmu_domain = to_kvm_smmu_domain(domain);
+	struct arm_smccc_res res;
+
+	do {
+		res = kvm_call_hyp_nvhe_smccc(__pkvm_host_iommu_map_pages,
+					      kvm_smmu_domain->id,
+					      iova, paddr, pgsize, pgcount, prot);
+		mapped = res.a1;
+		iova += mapped;
+		paddr += mapped;
+		WARN_ON(mapped % pgsize);
+		WARN_ON(mapped > pgcount * pgsize);
+		pgcount -= mapped / pgsize;
+		*total_mapped += mapped;
+	} while (*total_mapped < size && !kvm_arm_smmu_topup_memcache(&res, gfp));
+	if (*total_mapped < size)
+		return -EINVAL;
+
+	return 0;
+}
+
+static size_t kvm_arm_smmu_unmap_pages(struct iommu_domain *domain,
+				       unsigned long iova, size_t pgsize,
+				       size_t pgcount,
+				       struct iommu_iotlb_gather *iotlb_gather)
+{
+	size_t unmapped;
+	size_t total_unmapped = 0;
+	size_t size = pgsize * pgcount;
+	struct kvm_arm_smmu_domain *kvm_smmu_domain = to_kvm_smmu_domain(domain);
+	struct arm_smccc_res res;
+
+	do {
+		res = kvm_call_hyp_nvhe_smccc(__pkvm_host_iommu_unmap_pages,
+					      kvm_smmu_domain->id,
+					      iova, pgsize, pgcount);
+		unmapped = res.a1;
+		total_unmapped += unmapped;
+		iova += unmapped;
+		WARN_ON(unmapped % pgsize);
+		pgcount -= unmapped / pgsize;
+
+		/*
+		 * The page table driver can unmap less than we asked for. If it
+		 * didn't unmap anything at all, then it either reached the end
+		 * of the range, or it needs a page in the memcache to break a
+		 * block mapping.
+		 */
+	} while (total_unmapped < size &&
+		 (unmapped || !kvm_arm_smmu_topup_memcache(&res, GFP_ATOMIC)));
+
+	return total_unmapped;
+}
+
+static phys_addr_t kvm_arm_smmu_iova_to_phys(struct iommu_domain *domain,
+					     dma_addr_t iova)
+{
+	struct kvm_arm_smmu_domain *kvm_smmu_domain = to_kvm_smmu_domain(domain);
+
+	return kvm_call_hyp_nvhe(__pkvm_host_iommu_iova_to_phys, kvm_smmu_domain->id, iova);
+}
+
 static struct iommu_ops kvm_arm_smmu_ops = {
 	.capable	= kvm_arm_smmu_capable,
 	.device_group	= arm_smmu_device_group,
@@ -344,6 +413,9 @@ static struct iommu_ops kvm_arm_smmu_ops = {
 	.default_domain_ops = &(const struct iommu_domain_ops) {
 		.attach_dev	= kvm_arm_smmu_attach_dev,
 		.free		= kvm_arm_smmu_domain_free,
+		.map_pages	= kvm_arm_smmu_map_pages,
+		.unmap_pages	= kvm_arm_smmu_unmap_pages,
+		.iova_to_phys	= kvm_arm_smmu_iova_to_phys,
 	}
 };
-- 
2.47.0.338.g60cca15819-goog
Date: Thu, 12 Dec 2024 18:04:15 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-52-smostafa@google.com>
Subject: [RFC PATCH v2 51/58] iommu/arm-smmu-v3-kvm: Support PASID operations
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com,
danielmentz@google.com, tzukui@google.com, Mostafa Saleh

Add support for set_dev_pasid and remove_dev_pasid. The hypervisor
already supports PASIDs, so we just need to forward them in the
hypercalls, in addition to properly tracking the domains per master.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c | 58 +++++++++++++++----
 1 file changed, 48 insertions(+), 10 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
index 82f0191b222c..cbcd8a75d562 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
@@ -29,7 +29,8 @@ struct host_arm_smmu_device {
 struct kvm_arm_smmu_master {
 	struct arm_smmu_device *smmu;
 	struct device *dev;
-	struct kvm_arm_smmu_domain *domain;
+	struct xarray domains;
+	u32 ssid_bits;
 };
 
 struct kvm_arm_smmu_domain {
@@ -119,6 +120,10 @@ static struct iommu_device *kvm_arm_smmu_probe_device(struct device *dev)
 
 	master->dev = dev;
 	master->smmu = smmu;
+
+	device_property_read_u32(dev, "pasid-num-bits", &master->ssid_bits);
+	master->ssid_bits = min(smmu->ssid_bits, master->ssid_bits);
+	xa_init(&master->domains);
 	dev_iommu_priv_set(dev, master);
 
 	return &smmu->iommu;
@@ -235,13 +240,14 @@ static void kvm_arm_smmu_domain_free(struct iommu_domain *domain)
 	kfree(kvm_smmu_domain);
 }
 
-static int kvm_arm_smmu_detach_dev(struct host_arm_smmu_device *host_smmu,
-				   struct kvm_arm_smmu_master *master)
+static int kvm_arm_smmu_detach_dev_pasid(struct host_arm_smmu_device *host_smmu,
+					 struct kvm_arm_smmu_master *master,
+					 ioasid_t pasid)
 {
 	int i, ret;
 	struct arm_smmu_device *smmu = &host_smmu->smmu;
 	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(master->dev);
-	struct kvm_arm_smmu_domain *domain = master->domain;
+	struct kvm_arm_smmu_domain *domain = xa_load(&master->domains, pasid);
 
 	if (!domain)
 		return 0;
@@ -250,7 +256,7 @@ static int kvm_arm_smmu_detach_dev(struct host_arm_smmu_device *host_smmu,
 		int sid = fwspec->ids[i];
 
 		ret = kvm_call_hyp_nvhe(__pkvm_host_iommu_detach_dev,
-					host_smmu->id, domain->id, sid, 0);
+					host_smmu->id, domain->id, sid, pasid);
 		if (ret) {
 			dev_err(smmu->dev, "cannot detach device %s (0x%x): %d\n",
 				dev_name(master->dev), sid, ret);
@@ -258,22 +264,39 @@ static int kvm_arm_smmu_detach_dev(struct host_arm_smmu_device *host_smmu,
 		}
 	}
 
-	master->domain = NULL;
+	xa_erase(&master->domains, pasid);
 
 	return ret;
 }
 
+static int kvm_arm_smmu_detach_dev(struct host_arm_smmu_device *host_smmu,
+				   struct kvm_arm_smmu_master *master)
+{
+	return kvm_arm_smmu_detach_dev_pasid(host_smmu, master, 0);
+}
+
+static void kvm_arm_smmu_remove_dev_pasid(struct device *dev, ioasid_t pasid,
+					  struct iommu_domain *domain)
+{
+	struct kvm_arm_smmu_master *master = dev_iommu_priv_get(dev);
+	struct host_arm_smmu_device *host_smmu = smmu_to_host(master->smmu);
+
+	kvm_arm_smmu_detach_dev_pasid(host_smmu, master, pasid);
+}
+
 static void kvm_arm_smmu_release_device(struct device *dev)
 {
 	struct kvm_arm_smmu_master *master = dev_iommu_priv_get(dev);
 	struct host_arm_smmu_device *host_smmu = smmu_to_host(master->smmu);
 
 	kvm_arm_smmu_detach_dev(host_smmu, master);
+	xa_destroy(&master->domains);
 	kfree(master);
 	iommu_fwspec_free(dev);
}
 
-static int kvm_arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
+static int kvm_arm_smmu_set_dev_pasid(struct iommu_domain *domain,
+				      struct device *dev, ioasid_t pasid)
 {
 	int i, ret;
 	struct arm_smmu_device *smmu;
@@ -288,7 +311,7 @@ static int kvm_arm_smmu_attach_dev(struct iommu_domain *domain, struct device *d
 	smmu = master->smmu;
 	host_smmu = smmu_to_host(smmu);
 
-	ret = kvm_arm_smmu_detach_dev(host_smmu, master);
+	ret = kvm_arm_smmu_detach_dev_pasid(host_smmu, master, pasid);
 	if (ret)
 		return ret;
 
@@ -303,14 +326,14 @@ static int
kvm_arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 
 		ret = kvm_call_hyp_nvhe_mc(__pkvm_host_iommu_attach_dev,
 					   host_smmu->id, kvm_smmu_domain->id,
-					   sid, 0, 0);
+					   sid, pasid, master->ssid_bits);
 		if (ret) {
 			dev_err(smmu->dev, "cannot attach device %s (0x%x): %d\n",
 				dev_name(dev), sid, ret);
 			goto out_ret;
 		}
 	}
-	master->domain = kvm_smmu_domain;
+	ret = xa_insert(&master->domains, pasid, kvm_smmu_domain, GFP_KERNEL);
 
 out_ret:
 	if (ret)
@@ -318,6 +341,19 @@ static int kvm_arm_smmu_attach_dev(struct iommu_domain *domain, struct device *d
 	return ret;
 }
 
+static int kvm_arm_smmu_attach_dev(struct iommu_domain *domain,
+				   struct device *dev)
+{
+	struct kvm_arm_smmu_master *master = dev_iommu_priv_get(dev);
+	unsigned long pasid = 0;
+
+	/* All pasids must be removed first. */
+	if (xa_find_after(&master->domains, &pasid, ULONG_MAX, XA_PRESENT))
+		return -EBUSY;
+
+	return kvm_arm_smmu_set_dev_pasid(domain, dev, 0);
+}
+
 static bool kvm_arm_smmu_capable(struct device *dev, enum iommu_cap cap)
 {
 	struct kvm_arm_smmu_master *master = dev_iommu_priv_get(dev);
@@ -409,6 +445,7 @@ static struct iommu_ops kvm_arm_smmu_ops = {
 	.release_device = kvm_arm_smmu_release_device,
 	.domain_alloc = kvm_arm_smmu_domain_alloc,
 	.pgsize_bitmap = -1UL,
+	.remove_dev_pasid = kvm_arm_smmu_remove_dev_pasid,
 	.owner = THIS_MODULE,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
 		.attach_dev	= kvm_arm_smmu_attach_dev,
@@ -416,6 +453,7 @@ static struct iommu_ops kvm_arm_smmu_ops = {
 		.map_pages	= kvm_arm_smmu_map_pages,
 		.unmap_pages	= kvm_arm_smmu_unmap_pages,
 		.iova_to_phys	= kvm_arm_smmu_iova_to_phys,
+		.set_dev_pasid	= kvm_arm_smmu_set_dev_pasid,
 	}
 };
-- 
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:04:16 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
References: <20241212180423.1578358-1-smostafa@google.com>
X-Mailer:
git-send-email 2.47.1.613.gc27f4b7a9f-goog
Message-ID: <20241212180423.1578358-53-smostafa@google.com>
Subject: [RFC PATCH v2 52/58] iommu/arm-smmu-v3-kvm: Add IRQs for the driver
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh

Handle IRQs in the KVM kernel driver; it should be safe to do this from
the kernel since this is a debug feature. Only the GERROR and EVTQ IRQs
are handled. Unlike the kernel driver, we don't do much here (no reset
of the SMMU or interaction with the cmdq), just printing.
Signed-off-by: Mostafa Saleh <smostafa@google.com>
---
 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c |   3 +-
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c | 138 ++++++++++++++++++
 2 files changed, 139 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
index 2a99873d980f..60f0760f49eb 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
@@ -365,7 +365,6 @@ static int smmu_init_registers(struct hyp_arm_smmu_v3_device *smmu)
 	      FIELD_PREP(CR1_QUEUE_IC, CR1_CACHE_WB);
 	writel_relaxed(val, smmu->base + ARM_SMMU_CR1);
 	writel_relaxed(CR2_PTM, smmu->base + ARM_SMMU_CR2);
-	writel_relaxed(0, smmu->base + ARM_SMMU_IRQ_CTRL);
 
 	val = readl_relaxed(smmu->base + ARM_SMMU_GERROR);
 	old = readl_relaxed(smmu->base + ARM_SMMU_GERRORN);
@@ -540,7 +539,7 @@ static int smmu_reset_device(struct hyp_arm_smmu_v3_device *smmu)
 		goto err_disable_cmdq;
 
 	/* Enable translation */
-	return smmu_write_cr0(smmu, CR0_SMMUEN | CR0_CMDQEN | CR0_ATSCHK);
+	return smmu_write_cr0(smmu, CR0_SMMUEN | CR0_CMDQEN | CR0_ATSCHK | CR0_EVTQEN);
 
 err_disable_cmdq:
 	return smmu_write_cr0(smmu, 0);
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
index cbcd8a75d562..674ce2b02a4b 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
@@ -496,11 +496,107 @@ static bool kvm_arm_smmu_validate_features(struct arm_smmu_device *smmu)
 	return true;
 }
 
+static irqreturn_t kvm_arm_smmu_evt_handler(int irq, void *dev)
+{
+	int i;
+	struct arm_smmu_device *smmu = dev;
+	struct arm_smmu_queue *q = &smmu->evtq.q;
+	struct arm_smmu_ll_queue *llq = &q->llq;
+	static DEFINE_RATELIMIT_STATE(rs, DEFAULT_RATELIMIT_INTERVAL,
+				      DEFAULT_RATELIMIT_BURST);
+	u64 evt[EVTQ_ENT_DWORDS];
+
+	do {
+		while (!queue_remove_raw(q, evt)) {
+			u8 id = FIELD_GET(EVTQ_0_ID, evt[0]);
+
+			if
(!__ratelimit(&rs))
+				continue;
+
+			dev_info(smmu->dev, "event 0x%02x received:\n", id);
+			for (i = 0; i < ARRAY_SIZE(evt); ++i)
+				dev_info(smmu->dev, "\t0x%016llx\n",
+					 (unsigned long long)evt[i]);
+
+			cond_resched();
+		}
+
+		/*
+		 * Not much we can do on overflow, so scream and pretend we're
+		 * trying harder.
+		 */
+		if (queue_sync_prod_in(q) == -EOVERFLOW)
+			dev_err(smmu->dev, "EVTQ overflow detected -- events lost\n");
+	} while (!queue_empty(llq));
+
+	/* Sync our overflow flag, as we believe we're up to speed */
+	queue_sync_cons_ovf(q);
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t kvm_arm_smmu_gerror_handler(int irq, void *dev)
+{
+	u32 gerror, gerrorn, active;
+	struct arm_smmu_device *smmu = dev;
+
+	gerror = readl_relaxed(smmu->base + ARM_SMMU_GERROR);
+	gerrorn = readl_relaxed(smmu->base + ARM_SMMU_GERRORN);
+
+	active = gerror ^ gerrorn;
+	if (!(active & GERROR_ERR_MASK))
+		return IRQ_NONE; /* No errors pending */
+
+	dev_warn(smmu->dev,
+		 "unexpected global error reported (0x%08x), this could be serious\n",
+		 active);
+
+	/* There is no API to reconfigure the device at the moment. */
+	if (active & GERROR_SFM_ERR)
+		dev_err(smmu->dev, "device has entered Service Failure Mode!\n");
+
+	if (active & GERROR_MSI_GERROR_ABT_ERR)
+		dev_warn(smmu->dev, "GERROR MSI write aborted\n");
+
+	if (active & GERROR_MSI_PRIQ_ABT_ERR)
+		dev_warn(smmu->dev, "PRIQ MSI write aborted\n");
+
+	if (active & GERROR_MSI_EVTQ_ABT_ERR)
+		dev_warn(smmu->dev, "EVTQ MSI write aborted\n");
+
+	if (active & GERROR_MSI_CMDQ_ABT_ERR)
+		dev_warn(smmu->dev, "CMDQ MSI write aborted\n");
+
+	if (active & GERROR_PRIQ_ABT_ERR)
+		dev_err(smmu->dev, "PRIQ write aborted -- events may have been lost\n");
+
+	if (active & GERROR_EVTQ_ABT_ERR)
+		dev_err(smmu->dev, "EVTQ write aborted -- events may have been lost\n");
+
+	if (active & GERROR_CMDQ_ERR) {
+		dev_err(smmu->dev, "CMDQ ERR -- Hypervisor cmdq corrupted?\n");
+		BUG();
+	}
+
+	writel(gerror, smmu->base + ARM_SMMU_GERRORN);
+
+
return IRQ_HANDLED;
+}
+
+static irqreturn_t kvm_arm_smmu_pri_handler(int irq, void *dev)
+{
+	struct arm_smmu_device *smmu = dev;
+
+	dev_err(smmu->dev, "PRI not supported in KVM driver!\n");
+
+	return IRQ_HANDLED;
+}
+
 static int kvm_arm_smmu_device_reset(struct host_arm_smmu_device *host_smmu)
 {
 	int ret;
 	u32 reg;
 	struct arm_smmu_device *smmu = &host_smmu->smmu;
+	u32 irqen_flags = IRQ_CTRL_EVTQ_IRQEN | IRQ_CTRL_GERROR_IRQEN;
 
 	reg = readl_relaxed(smmu->base + ARM_SMMU_CR0);
 	if (reg & CR0_SMMUEN)
@@ -522,6 +618,39 @@ static int kvm_arm_smmu_device_reset(struct host_arm_smmu_device *host_smmu)
 	/* Command queue */
 	writeq_relaxed(smmu->cmdq.q.q_base, smmu->base + ARM_SMMU_CMDQ_BASE);
 
+	/* Event queue */
+	writeq_relaxed(smmu->evtq.q.q_base, smmu->base + ARM_SMMU_EVTQ_BASE);
+	writel_relaxed(smmu->evtq.q.llq.prod, smmu->base + SZ_64K + ARM_SMMU_EVTQ_PROD);
+	writel_relaxed(smmu->evtq.q.llq.cons, smmu->base + SZ_64K + ARM_SMMU_EVTQ_CONS);
+
+	/* Disable IRQs first */
+	ret = arm_smmu_write_reg_sync(smmu, 0, ARM_SMMU_IRQ_CTRL,
+				      ARM_SMMU_IRQ_CTRLACK);
+	if (ret) {
+		dev_err(smmu->dev, "failed to disable irqs\n");
+		return ret;
+	}
+
+	/*
+	 * We don't support combined irqs for now, no specific reason, they are uncommon
+	 * so we just try to avoid bloating the code.
+	 */
+	if (smmu->combined_irq)
+		dev_err(smmu->dev, "Combined irqs not supported by this driver\n");
+	else
+		arm_smmu_setup_unique_irqs(smmu, kvm_arm_smmu_evt_handler,
+					   kvm_arm_smmu_gerror_handler,
+					   kvm_arm_smmu_pri_handler);
+
+	if (smmu->features & ARM_SMMU_FEAT_PRI)
+		irqen_flags |= IRQ_CTRL_PRIQ_IRQEN;
+
+	/* Enable interrupt generation on the SMMU */
+	ret = arm_smmu_write_reg_sync(smmu, irqen_flags,
+				      ARM_SMMU_IRQ_CTRL, ARM_SMMU_IRQ_CTRLACK);
+	if (ret)
+		dev_warn(smmu->dev, "failed to enable irqs\n");
+
 	return 0;
 }
 
@@ -565,6 +694,8 @@ static int kvm_arm_smmu_probe(struct platform_device *pdev)
 	if (IS_ERR(smmu->base))
 		return PTR_ERR(smmu->base);
 
+	arm_smmu_probe_irq(pdev, smmu);
+
 	ret = arm_smmu_device_hw_probe(smmu);
 	if (ret)
 		return ret;
@@ -583,6 +714,13 @@ static int kvm_arm_smmu_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;
 
+	/* evtq */
+	ret = arm_smmu_init_one_queue(smmu, &smmu->evtq.q, smmu->base + SZ_64K,
+				      ARM_SMMU_EVTQ_PROD, ARM_SMMU_EVTQ_CONS,
+				      EVTQ_ENT_DWORDS, "evtq");
+	if (ret)
+		return ret;
+
 	ret = arm_smmu_init_strtab(smmu);
 	if (ret)
 		return ret;
-- 
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:04:17 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-54-smostafa@google.com>
Subject: [RFC PATCH v2 53/58] iommu/arm-smmu-v3-kvm: Probe power domains
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com,
qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh

From: Jean-Philippe Brucker

Try to use SCMI if possible; otherwise rely on an HVC to the hypervisor
to notify it about power changes. This is ONLY safe if the SMMU resets
to blocking DMA.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c | 78 +++++++++++++++++++
 1 file changed, 78 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
index 674ce2b02a4b..deeed994a131 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
@@ -8,6 +8,7 @@
 #include
 
 #include
+#include
 #include
 #include
 
@@ -21,6 +22,7 @@ struct host_arm_smmu_device {
 	struct arm_smmu_device smmu;
 	pkvm_handle_t id;
 	u32 boot_gbpa;
+	struct kvm_power_domain power_domain;
 };
 
 #define smmu_to_host(_smmu) \
@@ -654,6 +656,77 @@ static int kvm_arm_smmu_device_reset(struct host_arm_smmu_device *host_smmu)
 	return 0;
 }
 
+static int kvm_arm_probe_scmi_pd(struct device_node *scmi_node,
+				 struct kvm_power_domain *pd)
+{
+	int ret;
+	struct resource res;
+	struct of_phandle_args args;
+
+	pd->type = KVM_POWER_DOMAIN_ARM_SCMI;
+
+	ret = of_parse_phandle_with_args(scmi_node, "shmem", NULL, 0, &args);
+	if (ret)
+		return ret;
+
+	ret = of_address_to_resource(args.np, 0, &res);
+	if (ret)
+		goto out_put_nodes;
+
+	ret = of_property_read_u32(scmi_node, "arm,smc-id",
+				   &pd->arm_scmi.smc_id);
+	if (ret)
+		goto out_put_nodes;
+
+	/*
+	 * The shared buffer is unmapped from the host while a request is in
+	 * flight, so it has to be on its own page.
+	 */
+	if (!IS_ALIGNED(res.start, SZ_64K) || resource_size(&res) < SZ_64K) {
+		ret = -EINVAL;
+		goto out_put_nodes;
+	}
+
+	pd->arm_scmi.shmem_base = res.start;
+	pd->arm_scmi.shmem_size = resource_size(&res);
+
+out_put_nodes:
+	of_node_put(args.np);
+	return ret;
+}
+
+/* TODO: Move this. None of it is specific to SMMU */
+static int kvm_arm_probe_power_domain(struct device *dev,
+				      struct kvm_power_domain *pd)
+{
+	int ret;
+	struct device_node *parent;
+	struct of_phandle_args args;
+
+	if (!of_get_property(dev->of_node, "power-domains", NULL))
+		return 0;
+
+	ret = of_parse_phandle_with_args(dev->of_node, "power-domains",
+					 "#power-domain-cells", 0, &args);
+	if (ret)
+		return ret;
+
+	parent = of_get_parent(args.np);
+	if (parent && of_device_is_compatible(parent, "arm,scmi-smc") &&
+	    args.args_count > 0) {
+		pd->arm_scmi.domain_id = args.args[0];
+		ret = kvm_arm_probe_scmi_pd(parent, pd);
+	} else {
+		dev_warn(dev, "Unknown PM method for %pOF, using HVC\n",
+			 args.np);
+		pd->type = KVM_POWER_DOMAIN_HOST_HVC;
+		pd->device_id = kvm_arm_smmu_cur;
+	}
+	of_node_put(parent);
+	of_node_put(args.np);
+	return ret;
+}
+
 static int kvm_arm_smmu_probe(struct platform_device *pdev)
 {
 	int ret;
@@ -681,6 +754,10 @@ static int kvm_arm_smmu_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;
 
+	ret = kvm_arm_probe_power_domain(dev, &host_smmu->power_domain);
+	if (ret)
+		return ret;
+
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	size = resource_size(res);
 	if (size < SZ_128K) {
@@ -738,6 +815,7 @@ static int kvm_arm_smmu_probe(struct platform_device *pdev)
 	hyp_smmu->mmio_addr = ioaddr;
 	hyp_smmu->mmio_size = size;
 	hyp_smmu->features = smmu->features;
+	hyp_smmu->iommu.power_domain = host_smmu->power_domain;
 	kvm_arm_smmu_cur++;
 
 	return arm_smmu_register_iommu(smmu, &kvm_arm_smmu_ops, ioaddr);
-- 
2.47.0.338.g60cca15819-goog

From nobody Sun Dec 14 19:14:26 2025
Date: Thu, 12 Dec 2024 18:04:18 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
References: <20241212180423.1578358-1-smostafa@google.com>
X-Mailing-List:
linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20241212180423.1578358-1-smostafa@google.com> X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog Message-ID: <20241212180423.1578358-55-smostafa@google.com> Subject: [RFC PATCH v2 54/58] iommu/arm-smmu-v3-kvm: Enable runtime PM From: Mostafa Saleh To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Enable runtime PM for the KVM SMMUv3 driver. The PM link to DMA masters dictates when the SMMU should be powered on. 
Signed-off-by: Mostafa Saleh <smostafa@google.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c | 66 +++++++++++++++++++
 1 file changed, 66 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
index deeed994a131..e987c273ff3c 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include

 #include

@@ -127,6 +128,12 @@ static struct iommu_device *kvm_arm_smmu_probe_device(struct device *dev)
 	master->ssid_bits = min(smmu->ssid_bits, master->ssid_bits);
 	xa_init(&master->domains);
 	dev_iommu_priv_set(dev, master);
+	if (!device_link_add(dev, smmu->dev,
+			     DL_FLAG_PM_RUNTIME |
+			     DL_FLAG_AUTOREMOVE_SUPPLIER)) {
+		kfree(master);
+		return ERR_PTR(-ENOLINK);
+	}

 	return &smmu->iommu;
 }
@@ -818,6 +825,14 @@ static int kvm_arm_smmu_probe(struct platform_device *pdev)
 	hyp_smmu->iommu.power_domain = host_smmu->power_domain;
 	kvm_arm_smmu_cur++;

+	pm_runtime_set_active(dev);
+	pm_runtime_enable(dev);
+	/*
+	 * Take a reference to keep the SMMU powered on while the hypervisor
+	 * initializes it.
+	 */
+	pm_runtime_resume_and_get(dev);
+
 	return arm_smmu_register_iommu(smmu, &kvm_arm_smmu_ops, ioaddr);
 }

@@ -826,6 +841,8 @@ static void kvm_arm_smmu_remove(struct platform_device *pdev)
 	struct arm_smmu_device *smmu = platform_get_drvdata(pdev);
 	struct host_arm_smmu_device *host_smmu = smmu_to_host(smmu);

+	pm_runtime_disable(&pdev->dev);
+	pm_runtime_set_suspended(&pdev->dev);
 	/*
	 * There was an error during hypervisor setup. The hyp driver may
	 * have already enabled the device, so disable it.
@@ -834,6 +851,30 @@ static void kvm_arm_smmu_remove(struct platform_device *pdev)
 	arm_smmu_update_gbpa(smmu, host_smmu->boot_gbpa, GBPA_ABORT);
 }

+static int kvm_arm_smmu_suspend(struct device *dev)
+{
+	struct arm_smmu_device *smmu = dev_get_drvdata(dev);
+	struct host_arm_smmu_device *host_smmu = smmu_to_host(smmu);
+
+	if (host_smmu->power_domain.type == KVM_POWER_DOMAIN_HOST_HVC)
+		return kvm_call_hyp_nvhe(__pkvm_host_hvc_pd, host_smmu->id, 0);
+	return 0;
+}
+
+static int kvm_arm_smmu_resume(struct device *dev)
+{
+	struct arm_smmu_device *smmu = dev_get_drvdata(dev);
+	struct host_arm_smmu_device *host_smmu = smmu_to_host(smmu);
+
+	if (host_smmu->power_domain.type == KVM_POWER_DOMAIN_HOST_HVC)
+		return kvm_call_hyp_nvhe(__pkvm_host_hvc_pd, host_smmu->id, 1);
+	return 0;
+}
+
+static const struct dev_pm_ops kvm_arm_smmu_pm_ops = {
+	SET_RUNTIME_PM_OPS(kvm_arm_smmu_suspend, kvm_arm_smmu_resume, NULL)
+};
+
 static const struct of_device_id arm_smmu_of_match[] = {
 	{ .compatible = "arm,smmu-v3", },
 	{ },
@@ -843,6 +884,7 @@ static struct platform_driver kvm_arm_smmu_driver = {
 	.driver = {
 		.name = "kvm-arm-smmu-v3",
 		.of_match_table = arm_smmu_of_match,
+		.pm = &kvm_arm_smmu_pm_ops,
 	},
 	.remove = kvm_arm_smmu_remove,
 };
@@ -877,6 +919,12 @@ static void kvm_arm_smmu_array_free(void)
 	free_pages((unsigned long)kvm_arm_smmu_array, order);
 }

+static int smmu_put_device(struct device *dev, void *data)
+{
+	pm_runtime_put(dev);
+	return 0;
+}
+
 static int kvm_arm_smmu_v3_init_drv(void)
 {
 	int ret;
@@ -905,6 +953,7 @@ static int kvm_arm_smmu_v3_init_drv(void)
 	 */
 	kvm_hyp_arm_smmu_v3_smmus = kvm_arm_smmu_array;
 	kvm_hyp_arm_smmu_v3_count = kvm_arm_smmu_count;
+
 	return 0;

 err_free:
@@ -931,4 +980,21 @@ static int kvm_arm_smmu_v3_register(void)
 			kern_hyp_va(lm_alias(&kvm_nvhe_sym(smmu_ops))));
 };

+/*
+ * KVM initializes the hypervisor at the device_sync initcall level, so
+ * drop the PM references taken at probe in a late initcall, where the
+ * hypervisor is guaranteed to have initialized the SMMUs.
+ */
+static int kvm_arm_smmu_v3_post_init(void)
+{
+	if (!kvm_arm_smmu_count)
+		return 0;
+
+	WARN_ON(driver_for_each_device(&kvm_arm_smmu_driver.driver, NULL,
+				       NULL, smmu_put_device));
+	return 0;
+}
+
 core_initcall(kvm_arm_smmu_v3_register);
+late_initcall(kvm_arm_smmu_v3_post_init);
-- 
2.47.0.338.g60cca15819-goog
From: Mostafa Saleh <smostafa@google.com>
Date: Thu, 12 Dec 2024 18:04:19 +0000
Subject: [RFC PATCH v2 55/58] drivers/iommu: Add deferred map_sg operations
Message-ID: <20241212180423.1578358-56-smostafa@google.com>
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org

With the pKVM SMMUv3 driver, which para-virtualizes the IOMMU in the
hypervisor, map_sg has extra overhead: it loops over iommu_map, and each
map requires a context switch, disabling interrupts, and so on.

Instead, add new domain operations:
- alloc_cookie_sg: Allocate a new deferred sg cookie
- add_deferred_map_sg: Add a mapping to the cookie
- consume_deferred_map_sg: Consume and release the cookie

Alternatively, we could pass the sg list as is.
However, this would duplicate some of the logic, and it makes more sense
to consolidate all the sg list parsing for IOMMU drivers in one place.

virtio-iommu is another IOMMU that can benefit from this, but it would
need a new operation that standardizes passing an sglist based on these
ops.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
---
 drivers/iommu/iommu.c | 53 +++++++++++++++++++++++++++++++++++++++++--
 include/linux/iommu.h | 19 ++++++++++++++++
 2 files changed, 70 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 83c8e617a2c5..3a3c48631dd6 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2608,6 +2608,37 @@ size_t iommu_unmap_fast(struct iommu_domain *domain,
 }
 EXPORT_SYMBOL_GPL(iommu_unmap_fast);

+static int __iommu_add_sg(struct iommu_map_cookie_sg *cookie_sg,
+			  unsigned long iova, phys_addr_t paddr, size_t size)
+{
+	struct iommu_domain *domain = cookie_sg->domain;
+	const struct iommu_domain_ops *ops = domain->ops;
+	unsigned int min_pagesz;
+	size_t pgsize, count;
+
+	if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
+		return -EINVAL;
+
+	if (WARN_ON(domain->pgsize_bitmap == 0UL))
+		return -ENODEV;
+
+	/* find out the minimum page size supported */
+	min_pagesz = 1 << __ffs(domain->pgsize_bitmap);
+
+	/*
+	 * both the virtual address and the physical one, as well as
+	 * the size of the mapping, must be aligned (at least) to the
+	 * size of the smallest page supported by the hardware
+	 */
+	if (!IS_ALIGNED(iova | paddr | size, min_pagesz)) {
+		pr_err("unaligned: iova 0x%lx pa %pa size 0x%zx min_pagesz 0x%x\n",
+		       iova, &paddr, size, min_pagesz);
+		return -EINVAL;
+	}
+
+	pgsize = iommu_pgsize(domain, iova, paddr, size, &count);
+	return ops->add_deferred_map_sg(cookie_sg, paddr, pgsize, count);
+}
+
 ssize_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 		     struct scatterlist *sg, unsigned int nents, int prot,
 		     gfp_t gfp)
@@ -2617,6 +2648,9 @@ ssize_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 	phys_addr_t start;
 	unsigned int i = 0;
 	int ret;
+	bool deferred_sg = ops->alloc_cookie_sg && ops->add_deferred_map_sg &&
+			   ops->consume_deferred_map_sg;
+	struct iommu_map_cookie_sg *cookie_sg;

 	might_sleep_if(gfpflags_allow_blocking(gfp));

@@ -2625,12 +2659,24 @@ ssize_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 			__GFP_HIGHMEM)))
 		return -EINVAL;

+	if (deferred_sg) {
+		cookie_sg = ops->alloc_cookie_sg(iova, prot, nents, gfp);
+		if (!cookie_sg) {
+			pr_err("iommu: failed alloc cookie\n");
+			return -ENOMEM;
+		}
+		cookie_sg->domain = domain;
+	}
+
 	while (i <= nents) {
 		phys_addr_t s_phys = sg_phys(sg);

 		if (len && s_phys != start + len) {
-			ret = __iommu_map(domain, iova + mapped, start,
-					  len, prot, gfp);
+			if (deferred_sg)
+				ret = __iommu_add_sg(cookie_sg, iova + mapped, start, len);
+			else
+				ret = __iommu_map(domain, iova + mapped, start,
+						  len, prot, gfp);

 			if (ret)
 				goto out_err;
@@ -2654,6 +2700,9 @@ ssize_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 		sg = sg_next(sg);
 	}

+	if (deferred_sg)
+		ops->consume_deferred_map_sg(cookie_sg);
+
 	if (ops->iotlb_sync_map) {
 		ret = ops->iotlb_sync_map(domain, iova, mapped);
 		if (ret)
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index c75877044185..5e60ac349228 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -601,6 +601,14 @@ struct iommu_ops {
 	u8 user_pasid_table:1;
 };

+/**
+ * struct iommu_map_cookie_sg - Cookie for a deferred map sg
+ * @domain: Domain for the sg list
+ */
+struct iommu_map_cookie_sg {
+	struct iommu_domain *domain;
+};
+
 /**
  * struct iommu_domain_ops - domain specific operations
  * @attach_dev: attach an iommu domain to a device
@@ -638,6 +646,11 @@ struct iommu_ops {
  * @enable_nesting: Enable nesting
  * @set_pgtable_quirks: Set io page table quirks (IO_PGTABLE_QUIRK_*)
  * @free: Release the domain after use.
+ * @alloc_cookie_sg: Allocate a cookie that will be used to build an
+ *                   sg list, filled by the following operations
+ * @add_deferred_map_sg: Add a mapping to the cookie of an sg list.
+ * @consume_deferred_map_sg: Consume the sg list once all mappings are
+ *                           added; it also releases the cookie.
  */
 struct iommu_domain_ops {
 	int (*attach_dev)(struct iommu_domain *domain, struct device *dev);
@@ -668,6 +681,12 @@ struct iommu_domain_ops {
 					  unsigned long quirks);

 	void (*free)(struct iommu_domain *domain);
+
+	struct iommu_map_cookie_sg *(*alloc_cookie_sg)(unsigned long iova, int prot,
+						       unsigned int nents, gfp_t gfp);
+	int (*add_deferred_map_sg)(struct iommu_map_cookie_sg *cookie,
+				   phys_addr_t paddr, size_t pgsize, size_t pgcount);
+	int (*consume_deferred_map_sg)(struct iommu_map_cookie_sg *cookie);
 };

 /**
-- 
2.47.0.338.g60cca15819-goog
From: Mostafa Saleh <smostafa@google.com>
Date: Thu, 12 Dec 2024 18:04:20 +0000
Subject: [RFC PATCH v2 56/58] KVM: arm64: iommu: Add hypercall for map_sg
Message-ID: <20241212180423.1578358-57-smostafa@google.com>
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org

Add a new type, struct kvm_iommu_sg, which describes a simple sglist,
and a hypercall that can consume it while calling the map_pages ops.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
---
 arch/arm64/include/asm/kvm_asm.h        |  1 +
 arch/arm64/include/asm/kvm_host.h       | 19 ++++++++
 arch/arm64/kvm/hyp/include/nvhe/iommu.h |  2 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c      | 14 ++++++
 arch/arm64/kvm/hyp/nvhe/iommu/iommu.c   | 58 +++++++++++++++++++++++++
 arch/arm64/kvm/iommu.c                  | 32 ++++++++++++++
 6 files changed, 126 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 3dbf30cd10f3..f2b86d1a62ed 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -115,6 +115,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_iommu_unmap_pages,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_iommu_iova_to_phys,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_hvc_pd,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_iommu_map_sg,

 	/*
	 * Start of the dynamically registered hypercalls. Start a bit
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 3cdc99ebdd0d..704648619d28 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1655,4 +1655,23 @@ int kvm_iommu_register_driver(struct kvm_iommu_driver *kern_ops,
 int kvm_iommu_init_driver(void);
 void kvm_iommu_remove_driver(void);

+struct kvm_iommu_sg {
+	phys_addr_t phys;
+	size_t pgsize;
+	unsigned int pgcount;
+};
+
+static inline struct kvm_iommu_sg *kvm_iommu_sg_alloc(unsigned int nents, gfp_t gfp)
+{
+	return alloc_pages_exact(PAGE_ALIGN(nents * sizeof(struct kvm_iommu_sg)), gfp);
+}
+
+static inline void kvm_iommu_sg_free(struct kvm_iommu_sg *sg, unsigned int nents)
+{
+	free_pages_exact(sg, PAGE_ALIGN(nents * sizeof(struct kvm_iommu_sg)));
+}
+
+int kvm_iommu_share_hyp_sg(struct kvm_iommu_sg *sg, unsigned int nents);
+int kvm_iommu_unshare_hyp_sg(struct kvm_iommu_sg *sg, unsigned int nents);
+
 #endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/kvm/hyp/include/nvhe/iommu.h b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
index cff75d67d807..1004465b680a 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/iommu.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
@@ -22,6 +22,8 @@ size_t kvm_iommu_unmap_pages(pkvm_handle_t domain_id, unsigned long iova,
 			     size_t pgsize, size_t pgcount);
 phys_addr_t kvm_iommu_iova_to_phys(pkvm_handle_t domain_id, unsigned long iova);
 bool kvm_iommu_host_dabt_handler(struct kvm_cpu_context *host_ctxt, u64 esr, u64 addr);
+size_t kvm_iommu_map_sg(pkvm_handle_t domain, unsigned long iova, struct kvm_iommu_sg *sg,
+			unsigned int nent, unsigned int prot);

 /* Flags for memory allocation for IOMMU drivers */
 #define IOMMU_PAGE_NOCACHE	BIT(0)
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 1ab8e5507825..5659aae0c758 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -1682,6 +1682,19 @@ static void handle___pkvm_host_hvc_pd(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = pkvm_host_hvc_pd(device_id, on);
 }

+static void handle___pkvm_host_iommu_map_sg(struct kvm_cpu_context *host_ctxt)
+{
+	unsigned long ret;
+	DECLARE_REG(pkvm_handle_t, domain, host_ctxt, 1);
+	DECLARE_REG(unsigned long, iova, host_ctxt, 2);
+	DECLARE_REG(struct kvm_iommu_sg *, sg, host_ctxt, 3);
+	DECLARE_REG(unsigned int, nent, host_ctxt, 4);
+	DECLARE_REG(unsigned int, prot, host_ctxt, 5);
+
+	ret = kvm_iommu_map_sg(domain, iova, kern_hyp_va(sg), nent, prot);
+	hyp_reqs_smccc_encode(ret, host_ctxt, this_cpu_ptr(&host_hyp_reqs));
+}
+
 typedef void (*hcall_t)(struct kvm_cpu_context *);

 #define HANDLE_FUNC(x)	[__KVM_HOST_SMCCC_FUNC_##x] = (hcall_t)handle_##x
@@ -1747,6 +1760,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_iommu_unmap_pages),
 	HANDLE_FUNC(__pkvm_host_iommu_iova_to_phys),
 	HANDLE_FUNC(__pkvm_host_hvc_pd),
+	HANDLE_FUNC(__pkvm_host_iommu_map_sg),
 };

 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
index e45dadd0c4aa..b0c9b9086fd1 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
@@ -392,6 +392,64 @@ bool kvm_iommu_host_dabt_handler(struct kvm_cpu_context *host_ctxt, u64 esr, u64
 	return ret;
 }

+size_t kvm_iommu_map_sg(pkvm_handle_t domain_id, unsigned long iova, struct kvm_iommu_sg *sg,
+			unsigned int nent, unsigned int prot)
+{
+	int ret;
+	size_t total_mapped = 0, mapped;
+	struct kvm_hyp_iommu_domain *domain;
+	phys_addr_t phys;
+	size_t size, pgsize, pgcount;
+	unsigned int orig_nent = nent;
+	struct kvm_iommu_sg *orig_sg = sg;
+
+	if (!kvm_iommu_ops || !kvm_iommu_ops->map_pages)
+		return 0;
+
+	if (prot & ~IOMMU_PROT_MASK)
+		return 0;
+
+	domain = handle_to_domain(domain_id);
+	if (!domain || domain_get(domain))
+		return 0;
+
+	ret = hyp_pin_shared_mem(sg, sg + nent);
+	if (ret)
+		goto out_put_domain;
+
+	while (nent--) {
+		phys = sg->phys;
+		pgsize = sg->pgsize;
+		pgcount = sg->pgcount;
+
+		if (__builtin_mul_overflow(pgsize, pgcount, &size) ||
+		    iova + size < iova)
+			goto out_unpin_sg;
+
+		ret = __pkvm_host_use_dma(phys, size);
+		if (ret)
+			goto out_unpin_sg;
+
+		mapped = 0;
+		kvm_iommu_ops->map_pages(domain, iova, phys, pgsize, pgcount, prot, &mapped);
+		total_mapped += mapped;
+		phys += mapped;
+		iova += mapped;
+		/* Might need memory */
+		if (mapped != size) {
+			__pkvm_host_unuse_dma(phys, size - mapped);
+			break;
+		}
+		sg++;
+	}
+
+out_unpin_sg:
+	hyp_unpin_shared_mem(orig_sg, orig_sg + orig_nent);
+out_put_domain:
+	domain_put(domain);
+	return total_mapped;
+}
+
 static int iommu_power_on(struct kvm_power_domain *pd)
 {
 	struct kvm_hyp_iommu *iommu = container_of(pd, struct kvm_hyp_iommu,
diff --git a/arch/arm64/kvm/iommu.c b/arch/arm64/kvm/iommu.c
index af3417e6259d..99718af0cba6 100644
--- a/arch/arm64/kvm/iommu.c
+++ b/arch/arm64/kvm/iommu.c
@@ -55,3 +55,35 @@ void kvm_iommu_remove_driver(void)
 	if (smp_load_acquire(&iommu_driver))
 		iommu_driver->remove_driver();
 }
+
+int kvm_iommu_share_hyp_sg(struct kvm_iommu_sg *sg, unsigned int nents)
+{
+	size_t nr_pages = PAGE_ALIGN(sizeof(*sg) * nents) >> PAGE_SHIFT;
+	phys_addr_t sg_pfn = virt_to_phys(sg) >> PAGE_SHIFT;
+	int i;
+	int ret;
+
+	for (i = 0 ; i < nr_pages ; ++i) {
+		ret = kvm_call_hyp_nvhe(__pkvm_host_share_hyp, sg_pfn + i);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+int kvm_iommu_unshare_hyp_sg(struct kvm_iommu_sg *sg, unsigned int nents)
+{
+	size_t nr_pages = PAGE_ALIGN(sizeof(*sg) * nents) >> PAGE_SHIFT;
+	phys_addr_t sg_pfn = virt_to_phys(sg) >> PAGE_SHIFT;
+	int i;
+	int ret;
+
+	for (i = 0 ; i < nr_pages ; ++i) {
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_hyp, sg_pfn + i);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
-- 
2.47.0.338.g60cca15819-goog
From: Mostafa Saleh <smostafa@google.com>
Date: Thu, 12 Dec 2024 18:04:21 +0000
Subject: [RFC PATCH v2 57/58] iommu/arm-smmu-v3-kvm: Implement sg operations
Message-ID: <20241212180423.1578358-58-smostafa@google.com>
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org

Implement the new map_sg ops, which mainly populate the kvm_iommu_sg
and pass it to the hypervisor.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c | 93 +++++++++++++++++++
 1 file changed, 93 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
index e987c273ff3c..ac45455b384d 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
@@ -445,6 +445,96 @@ static phys_addr_t kvm_arm_smmu_iova_to_phys(struct iommu_domain *domain,
 	return kvm_call_hyp_nvhe(__pkvm_host_iommu_iova_to_phys, kvm_smmu_domain->id, iova);
 }

+struct kvm_arm_smmu_map_sg {
+	struct iommu_map_cookie_sg cookie;
+	struct kvm_iommu_sg *sg;
+	unsigned int ptr;
+	unsigned long iova;
+	int prot;
+	gfp_t gfp;
+	unsigned int nents;
+};
+
+static struct iommu_map_cookie_sg *kvm_arm_smmu_alloc_cookie_sg(unsigned long iova,
+								int prot,
+								unsigned int nents,
+								gfp_t gfp)
+{
+	int ret;
+	struct kvm_arm_smmu_map_sg *map_sg = kzalloc(sizeof(*map_sg), gfp);
+
+	if (!map_sg)
+		return NULL;
+
+	map_sg->sg = kvm_iommu_sg_alloc(nents, gfp);
+	if (!map_sg->sg) {
+		kfree(map_sg);
+		return NULL;
+	}
+	map_sg->iova = iova;
+	map_sg->prot = prot;
+	map_sg->gfp = gfp;
+	map_sg->nents = nents;
+	ret = kvm_iommu_share_hyp_sg(map_sg->sg, nents);
+	if (ret) {
+		kvm_iommu_sg_free(map_sg->sg, nents);
+		kfree(map_sg);
+		return NULL;
+	}
+
+	return &map_sg->cookie;
+}
+
+static int kvm_arm_smmu_add_deferred_map_sg(struct iommu_map_cookie_sg *cookie,
+					    phys_addr_t paddr, size_t pgsize, size_t pgcount)
+{
+	struct kvm_arm_smmu_map_sg *map_sg = container_of(cookie, struct kvm_arm_smmu_map_sg,
+							  cookie);
+	struct kvm_iommu_sg *sg = map_sg->sg;
+
+	sg[map_sg->ptr].phys = paddr;
+	sg[map_sg->ptr].pgsize = pgsize;
+	sg[map_sg->ptr].pgcount = pgcount;
+	map_sg->ptr++;
+	return 0;
+}
+
+static int kvm_arm_smmu_consume_deferred_map_sg(struct iommu_map_cookie_sg *cookie)
+{
+	struct kvm_arm_smmu_map_sg *map_sg = container_of(cookie, struct kvm_arm_smmu_map_sg,
+							  cookie);
+	struct kvm_iommu_sg *sg = map_sg->sg;
+	size_t mapped, total_mapped = 0;
+	struct arm_smccc_res res;
+	struct kvm_arm_smmu_domain *kvm_smmu_domain = to_kvm_smmu_domain(map_sg->cookie.domain);
+
+	do {
+		res = kvm_call_hyp_nvhe_smccc(__pkvm_host_iommu_map_sg,
+					      kvm_smmu_domain->id,
+					      map_sg->iova, sg, map_sg->ptr, map_sg->prot);
+		mapped = res.a1;
+		map_sg->iova += mapped;
+		total_mapped += mapped;
+		/* Skip mapped */
+		while (mapped) {
+			if (mapped < (sg->pgsize * sg->pgcount)) {
+				sg->phys += mapped;
+				sg->pgcount -= mapped / sg->pgsize;
+				mapped = 0;
+			} else {
+				mapped -= sg->pgsize * sg->pgcount;
+				sg++;
+				map_sg->ptr--;
+			}
+		}
+
+		kvm_arm_smmu_topup_memcache(&res, map_sg->gfp);
+	} while (map_sg->ptr);
+
+	kvm_iommu_unshare_hyp_sg(map_sg->sg, map_sg->nents);
+	kvm_iommu_sg_free(map_sg->sg, map_sg->nents);
+	kfree(map_sg);
+	return 0;
+}
+
 static struct iommu_ops kvm_arm_smmu_ops = {
 	.capable = kvm_arm_smmu_capable,
 	.device_group = arm_smmu_device_group,
@@ -463,6 +553,9 @@ static struct iommu_ops kvm_arm_smmu_ops = {
 		.unmap_pages = kvm_arm_smmu_unmap_pages,
 		.iova_to_phys = kvm_arm_smmu_iova_to_phys,
 		.set_dev_pasid = kvm_arm_smmu_set_dev_pasid,
+		.alloc_cookie_sg = kvm_arm_smmu_alloc_cookie_sg,
+		.add_deferred_map_sg = kvm_arm_smmu_add_deferred_map_sg,
+		.consume_deferred_map_sg = kvm_arm_smmu_consume_deferred_map_sg,
 	}
 };

-- 
2.47.0.338.g60cca15819-goog
Date: Thu, 12 Dec 2024 18:04:22 +0000
In-Reply-To: <20241212180423.1578358-1-smostafa@google.com>
References: <20241212180423.1578358-1-smostafa@google.com>
Message-ID: <20241212180423.1578358-59-smostafa@google.com>
Subject: [RFC PATCH v2 58/58] iommu/arm-smmu-v3-kvm: Support command queue batching
From: Mostafa Saleh
To: iommu@lists.linux.dev, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, robdclark@gmail.com, joro@8bytes.org, robin.murphy@arm.com, jean-philippe@linaro.org, jgg@ziepe.ca, nicolinc@nvidia.com, vdonnefort@google.com, qperret@google.com, tabba@google.com, danielmentz@google.com, tzukui@google.com, Mostafa Saleh

Similar to the kernel driver, we can batch commands at EL2 to avoid
writing to the MMIO space. This is quite noticeable if the SMMU doesn't
support range invalidation, as it then has to invalidate page by page.

Signed-off-by: Mostafa Saleh
---
 arch/arm64/include/asm/arm-smmu-v3-common.h | 16 ++++
 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c | 95 ++++++++++++++++-----
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 15 ----
 3 files changed, 88 insertions(+), 38 deletions(-)

diff --git a/arch/arm64/include/asm/arm-smmu-v3-common.h b/arch/arm64/include/asm/arm-smmu-v3-common.h
index f2fbd286f674..2578c8e9202e 100644
--- a/arch/arm64/include/asm/arm-smmu-v3-common.h
+++ b/arch/arm64/include/asm/arm-smmu-v3-common.h
@@ -573,4 +573,20 @@ struct arm_smmu_cmdq_ent {
 	};
 };
 
+#define Q_OVERFLOW_FLAG		(1U << 31)
+#define Q_OVF(p)		((p) & Q_OVERFLOW_FLAG)
+
+/*
+ * This is used to size the command queue and therefore must be at least
+ * BITS_PER_LONG so that the valid_map works correctly (it relies on the
+ * total number of queue entries being a multiple of BITS_PER_LONG).
+ */
+#define CMDQ_BATCH_ENTRIES	BITS_PER_LONG
+
+struct arm_smmu_cmdq_batch {
+	u64 cmds[CMDQ_BATCH_ENTRIES * CMDQ_ENT_DWORDS];
+	struct arm_smmu_cmdq *cmdq;
+	int num;
+};
+
 #endif /* _ARM_SMMU_V3_COMMON_H */
diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
index 60f0760f49eb..62760136c6fb 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
@@ -96,12 +96,20 @@ static void smmu_reclaim_pages(u64 phys, size_t size)
 #define Q_WRAP(smmu, reg)	((reg) & (1 << (smmu)->cmdq_log2size))
 #define Q_IDX(smmu, reg)	((reg) & ((1 << (smmu)->cmdq_log2size) - 1))
 
-static bool smmu_cmdq_full(struct hyp_arm_smmu_v3_device *smmu)
+static bool smmu_cmdq_has_space(struct hyp_arm_smmu_v3_device *smmu, u32 n)
 {
-	u64 cons = readl_relaxed(smmu->base + ARM_SMMU_CMDQ_CONS);
+	u64 smmu_cons = readl_relaxed(smmu->base + ARM_SMMU_CMDQ_CONS);
+	u32 space, prod, cons;
 
-	return Q_IDX(smmu, smmu->cmdq_prod) == Q_IDX(smmu, cons) &&
-	       Q_WRAP(smmu, smmu->cmdq_prod) != Q_WRAP(smmu, cons);
+	prod = Q_IDX(smmu, smmu->cmdq_prod);
+	cons = Q_IDX(smmu, smmu_cons);
+
+	if (Q_WRAP(smmu, smmu->cmdq_prod) == Q_WRAP(smmu, smmu_cons))
+		space = (1 << smmu->cmdq_log2size) - (prod - cons);
+	else
+		space = cons - prod;
+
+	return space >= n;
 }
 
 static bool smmu_cmdq_empty(struct hyp_arm_smmu_v3_device *smmu)
@@ -112,22 +120,8 @@ static bool smmu_cmdq_empty(struct hyp_arm_smmu_v3_device *smmu)
 	       Q_WRAP(smmu, smmu->cmdq_prod) == Q_WRAP(smmu, cons);
 }
 
-static int smmu_add_cmd(struct hyp_arm_smmu_v3_device *smmu,
-			struct arm_smmu_cmdq_ent *ent)
+static int smmu_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 {
-	int i;
-	int ret;
-	u64 cmd[CMDQ_ENT_DWORDS] = {};
-	int idx = Q_IDX(smmu, smmu->cmdq_prod);
-	u64 *slot = smmu->cmdq_base + idx * CMDQ_ENT_DWORDS;
-
-	if (smmu->iommu.power_is_off)
-		return -EPIPE;
-
-	ret = smmu_wait_event(smmu, !smmu_cmdq_full(smmu));
-	if (ret)
-		return ret;
-
 	cmd[0] |= FIELD_PREP(CMDQ_0_OP, ent->opcode);
 
 	switch (ent->opcode) {
@@ -175,15 +169,49 @@ static int smmu_add_cmd(struct hyp_arm_smmu_v3_device *smmu,
 		return -EINVAL;
 	}
 
-	for (i = 0; i < CMDQ_ENT_DWORDS; i++)
-		slot[i] = cpu_to_le64(cmd[i]);
+	return 0;
+}
+
+static int smmu_issue_cmds(struct hyp_arm_smmu_v3_device *smmu,
+			   u64 *cmds, int n)
+{
+	int idx = Q_IDX(smmu, smmu->cmdq_prod);
+	u64 *slot = smmu->cmdq_base + idx * CMDQ_ENT_DWORDS;
+	int i;
+	int ret;
+	u32 prod;
+
+	if (smmu->iommu.power_is_off)
+		return -EPIPE;
+
+	ret = smmu_wait_event(smmu, smmu_cmdq_has_space(smmu, n));
+	if (ret)
+		return ret;
+
+	for (i = 0; i < CMDQ_ENT_DWORDS * n; i++)
+		slot[i] = cpu_to_le64(cmds[i]);
+
+	prod = (Q_WRAP(smmu, smmu->cmdq_prod) | Q_IDX(smmu, smmu->cmdq_prod)) + n;
+	smmu->cmdq_prod = Q_OVF(smmu->cmdq_prod) | Q_WRAP(smmu, prod) | Q_IDX(smmu, prod);
 
-	smmu->cmdq_prod++;
 	writel(Q_IDX(smmu, smmu->cmdq_prod) | Q_WRAP(smmu, smmu->cmdq_prod),
 	       smmu->base + ARM_SMMU_CMDQ_PROD);
 	return 0;
 }
 
+static int smmu_add_cmd(struct hyp_arm_smmu_v3_device *smmu,
+			struct arm_smmu_cmdq_ent *ent)
+{
+	u64 cmd[CMDQ_ENT_DWORDS] = {};
+	int ret;
+
+	ret = smmu_build_cmd(cmd, ent);
+	if (ret)
+		return ret;
+
+	return smmu_issue_cmds(smmu, cmd, 1);
+}
+
 static int smmu_sync_cmd(struct hyp_arm_smmu_v3_device *smmu)
 {
 	int ret;
@@ -685,6 +713,23 @@ static void smmu_tlb_flush_all(void *cookie)
 	kvm_iommu_unlock(&smmu->iommu);
 }
 
+static void smmu_cmdq_batch_add(struct hyp_arm_smmu_v3_device *smmu,
+				struct arm_smmu_cmdq_batch *cmds,
+				struct arm_smmu_cmdq_ent *cmd)
+{
+	int index;
+
+	if (cmds->num == CMDQ_BATCH_ENTRIES) {
+		smmu_issue_cmds(smmu, cmds->cmds, cmds->num);
+		cmds->num = 0;
+	}
+
+	index = cmds->num * CMDQ_ENT_DWORDS;
+	smmu_build_cmd(&cmds->cmds[index], cmd);
+
+	cmds->num++;
+}
+
 static int smmu_tlb_inv_range_smmu(struct hyp_arm_smmu_v3_device *smmu,
 				   struct kvm_hyp_iommu_domain *domain,
 				   struct arm_smmu_cmdq_ent *cmd,
@@ -694,6 +739,7 @@ static int smmu_tlb_inv_range_smmu(struct hyp_arm_smmu_v3_device *smmu,
 	unsigned long end = iova + size, num_pages = 0, tg = 0;
 	size_t inv_range = granule;
 	struct hyp_arm_smmu_v3_domain *smmu_domain = domain->priv;
+	struct arm_smmu_cmdq_batch cmds;
 
 	kvm_iommu_lock(&smmu->iommu);
 	if (smmu->iommu.power_is_off)
@@ -723,6 +769,8 @@ static int smmu_tlb_inv_range_smmu(struct hyp_arm_smmu_v3_device *smmu,
 		num_pages++;
 	}
 
+	cmds.num = 0;
+
 	while (iova < end) {
 		if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
 			/*
@@ -749,11 +797,12 @@ static int smmu_tlb_inv_range_smmu(struct hyp_arm_smmu_v3_device *smmu,
 			num_pages -= num << scale;
 		}
 		cmd->tlbi.addr = iova;
-		WARN_ON(smmu_add_cmd(smmu, cmd));
+		smmu_cmdq_batch_add(smmu, &cmds, cmd);
 		BUG_ON(iova + inv_range < iova);
 		iova += inv_range;
 	}
 
+	WARN_ON(smmu_issue_cmds(smmu, cmds.cmds, cmds.num));
 	ret = smmu_sync_cmd(smmu);
 out_ret:
 	kvm_iommu_unlock(&smmu->iommu);
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index d91dfe55835d..18f878bb7f98 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -20,8 +20,6 @@ struct arm_smmu_device;
 
 #define Q_IDX(llq, p)		((p) & ((1 << (llq)->max_n_shift) - 1))
 #define Q_WRP(llq, p)		((p) & (1 << (llq)->max_n_shift))
-#define Q_OVERFLOW_FLAG		(1U << 31)
-#define Q_OVF(p)		((p) & Q_OVERFLOW_FLAG)
 #define Q_ENT(q, p)		((q)->base +			\
 				 Q_IDX(&((q)->llq), p) *	\
 				 (q)->ent_dwords)
@@ -35,13 +33,6 @@ struct arm_smmu_device;
 
 #define CMDQ_PROD_OWNED_FLAG	Q_OVERFLOW_FLAG
 
-/*
- * This is used to size the command queue and therefore must be at least
- * BITS_PER_LONG so that the valid_map works correctly (it relies on the
- * total number of queue entries being a multiple of BITS_PER_LONG).
- */
-#define CMDQ_BATCH_ENTRIES	BITS_PER_LONG
-
 /* High-level queue structures */
 #define ARM_SMMU_POLL_TIMEOUT_US	1000000 /* 1s! */
 #define ARM_SMMU_POLL_SPIN_COUNT	10
@@ -100,12 +91,6 @@ static inline bool arm_smmu_cmdq_supports_cmd(struct arm_smmu_cmdq *cmdq,
 	return cmdq->supports_cmd ? cmdq->supports_cmd(ent) : true;
 }
 
-struct arm_smmu_cmdq_batch {
-	u64 cmds[CMDQ_BATCH_ENTRIES * CMDQ_ENT_DWORDS];
-	struct arm_smmu_cmdq *cmdq;
-	int num;
-};
-
 struct arm_smmu_evtq {
 	struct arm_smmu_queue q;
 	struct iopf_queue *iopf;
-- 
2.47.0.338.g60cca15819-goog