From nobody Wed Dec 17 09:20:03 2025
From: Pasha Tatashin <pasha.tatashin@soleen.com>
To: akpm@linux-foundation.org, alex.williamson@redhat.com,
 alim.akhtar@samsung.com, alyssa@rosenzweig.io, asahi@lists.linux.dev,
 baolu.lu@linux.intel.com, bhelgaas@google.com, cgroups@vger.kernel.org,
 corbet@lwn.net, david@redhat.com, dwmw2@infradead.org, hannes@cmpxchg.org,
 heiko@sntech.de, iommu@lists.linux.dev, jasowang@redhat.com,
 jernej.skrabec@gmail.com, jgg@ziepe.ca, jonathanh@nvidia.com,
 joro@8bytes.org, kevin.tian@intel.com, krzysztof.kozlowski@linaro.org,
 kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-rockchip@lists.infradead.org, linux-samsung-soc@vger.kernel.org,
 linux-sunxi@lists.linux.dev, linux-tegra@vger.kernel.org,
 lizefan.x@bytedance.com, marcan@marcan.st, mhiramat@kernel.org,
 mst@redhat.com, m.szyprowski@samsung.com, netdev@vger.kernel.org,
 pasha.tatashin@soleen.com, paulmck@kernel.org, rdunlap@infradead.org,
 robin.murphy@arm.com, samuel@sholland.org, suravee.suthikulpanit@amd.com,
 sven@svenpeter.dev, thierry.reding@gmail.com, tj@kernel.org,
 tomas.mudrunka@gmail.com, vdumpa@nvidia.com, virtualization@lists.linux.dev,
 wens@csie.org, will@kernel.org, yu-cheng.yu@intel.com
Subject: [PATCH 01/16] iommu/vt-d: add wrapper functions for page allocations
Date: Tue, 28 Nov 2023 20:49:23 +0000
Message-ID: <20231128204938.1453583-2-pasha.tatashin@soleen.com>
In-Reply-To: <20231128204938.1453583-1-pasha.tatashin@soleen.com>
References: <20231128204938.1453583-1-pasha.tatashin@soleen.com>

To improve observability and accountability of the IOMMU layer, we must
account for the number of pages that are allocated by functions that call
directly into the buddy allocator. This is achieved by first wrapping the
allocation-related functions in separate inline functions in a new file:
drivers/iommu/iommu-pages.h.

Convert all page allocation calls under iommu/intel to use these new
functions.
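To illustrate the conversion pattern, here is a condensed before/after
sketch. The function names old_style_alloc/new_style_alloc are hypothetical
and not taken from any one call site; the wrappers themselves are introduced
later in this patch:

	/* Hypothetical call site before the conversion: a direct buddy
	 * allocator call that IOMMU accounting cannot see.
	 */
	static void *old_style_alloc(int node, gfp_t gfp)
	{
		struct page *page = alloc_pages_node(node, gfp | __GFP_ZERO, 0);

		return page ? page_address(page) : NULL;
	}

	/* The same call site after the conversion: zeroing is implied, and
	 * the wrapper gives the IOMMU layer one place to add accounting
	 * later in the series.
	 */
	static void *new_style_alloc(int node, gfp_t gfp)
	{
		return iommu_alloc_page_node(node, gfp);
	}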
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 drivers/iommu/intel/dmar.c          |  10 +-
 drivers/iommu/intel/iommu.c         |  47 +++----
 drivers/iommu/intel/iommu.h         |   2 -
 drivers/iommu/intel/irq_remapping.c |  10 +-
 drivers/iommu/intel/pasid.c         |  12 +-
 drivers/iommu/intel/svm.c           |   7 +-
 drivers/iommu/iommu-pages.h         | 199 ++++++++++++++++++++++++++++
 7 files changed, 236 insertions(+), 51 deletions(-)
 create mode 100644 drivers/iommu/iommu-pages.h

diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
index a3414afe11b0..a6937e1e20a5 100644
--- a/drivers/iommu/intel/dmar.c
+++ b/drivers/iommu/intel/dmar.c
@@ -32,6 +32,7 @@

 #include "iommu.h"
 #include "../irq_remapping.h"
+#include "../iommu-pages.h"
 #include "perf.h"
 #include "trace.h"
 #include "perfmon.h"
@@ -1185,7 +1186,7 @@ static void free_iommu(struct intel_iommu *iommu)
 	}

 	if (iommu->qi) {
-		free_page((unsigned long)iommu->qi->desc);
+		iommu_free_page(iommu->qi->desc);
 		kfree(iommu->qi->desc_status);
 		kfree(iommu->qi);
 	}
@@ -1714,6 +1715,7 @@ int dmar_enable_qi(struct intel_iommu *iommu)
 {
 	struct q_inval *qi;
 	struct page *desc_page;
+	int order;

 	if (!ecap_qis(iommu->ecap))
 		return -ENOENT;
@@ -1734,8 +1736,8 @@ int dmar_enable_qi(struct intel_iommu *iommu)
 	 * Need two pages to accommodate 256 descriptors of 256 bits each
 	 * if the remapping hardware supports scalable mode translation.
 	 */
-	desc_page = alloc_pages_node(iommu->node, GFP_ATOMIC | __GFP_ZERO,
-				     !!ecap_smts(iommu->ecap));
+	order = ecap_smts(iommu->ecap) ? 1 : 0;
+	desc_page = __iommu_alloc_pages_node(iommu->node, GFP_ATOMIC, order);
 	if (!desc_page) {
 		kfree(qi);
 		iommu->qi = NULL;
@@ -1746,7 +1748,7 @@ int dmar_enable_qi(struct intel_iommu *iommu)

 	qi->desc_status = kcalloc(QI_LENGTH, sizeof(int), GFP_ATOMIC);
 	if (!qi->desc_status) {
-		free_page((unsigned long) qi->desc);
+		iommu_free_page(qi->desc);
 		kfree(qi);
 		iommu->qi = NULL;
 		return -ENOMEM;
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 3531b956556c..04f852175cbe 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -28,6 +28,7 @@
 #include "../dma-iommu.h"
 #include "../irq_remapping.h"
 #include "../iommu-sva.h"
+#include "../iommu-pages.h"
 #include "pasid.h"
 #include "cap_audit.h"
 #include "perfmon.h"
@@ -367,22 +368,6 @@ static int __init intel_iommu_setup(char *str)
 }
 __setup("intel_iommu=", intel_iommu_setup);

-void *alloc_pgtable_page(int node, gfp_t gfp)
-{
-	struct page *page;
-	void *vaddr = NULL;
-
-	page = alloc_pages_node(node, gfp | __GFP_ZERO, 0);
-	if (page)
-		vaddr = page_address(page);
-	return vaddr;
-}
-
-void free_pgtable_page(void *vaddr)
-{
-	free_page((unsigned long)vaddr);
-}
-
 static inline int domain_type_is_si(struct dmar_domain *domain)
 {
 	return domain->domain.type == IOMMU_DOMAIN_IDENTITY;
@@ -617,7 +602,7 @@ struct context_entry *iommu_context_addr(struct intel_iommu *iommu, u8 bus,
 	if (!alloc)
 		return NULL;

-	context = alloc_pgtable_page(iommu->node, GFP_ATOMIC);
+	context = iommu_alloc_page_node(iommu->node, GFP_ATOMIC);
 	if (!context)
 		return NULL;

@@ -791,17 +776,17 @@ static void free_context_table(struct intel_iommu *iommu)
 	for (i = 0; i < ROOT_ENTRY_NR; i++) {
 		context = iommu_context_addr(iommu, i, 0, 0);
 		if (context)
-			free_pgtable_page(context);
+			iommu_free_page(context);

 		if (!sm_supported(iommu))
 			continue;

 		context = iommu_context_addr(iommu, i, 0x80, 0);
 		if (context)
-			free_pgtable_page(context);
+			iommu_free_page(context);
 	}

-	free_pgtable_page(iommu->root_entry);
+	iommu_free_page(iommu->root_entry);
 	iommu->root_entry = NULL;
 }

@@ -939,7 +924,7 @@ static struct dma_pte *pfn_to_dma_pte(struct dmar_domain *domain,
 		if (!dma_pte_present(pte)) {
 			uint64_t pteval;

-			tmp_page = alloc_pgtable_page(domain->nid, gfp);
+			tmp_page = iommu_alloc_page_node(domain->nid, gfp);

 			if (!tmp_page)
 				return NULL;
@@ -951,7 +936,7 @@ static struct dma_pte *pfn_to_dma_pte(struct dmar_domain *domain,

 			if (cmpxchg64(&pte->val, 0ULL, pteval))
 				/* Someone else set it while we were thinking; use theirs. */
-				free_pgtable_page(tmp_page);
+				iommu_free_page(tmp_page);
 			else
 				domain_flush_cache(domain, pte, sizeof(*pte));
 		}
@@ -1064,7 +1049,7 @@ static void dma_pte_free_level(struct dmar_domain *domain, int level,
 		      last_pfn < level_pfn + level_size(level) - 1)) {
 			dma_clear_pte(pte);
 			domain_flush_cache(domain, pte, sizeof(*pte));
-			free_pgtable_page(level_pte);
+			iommu_free_page(level_pte);
 		}
next:
 		pfn += level_size(level);
@@ -1088,7 +1073,7 @@ static void dma_pte_free_pagetable(struct dmar_domain *domain,

 	/* free pgd */
 	if (start_pfn == 0 && last_pfn == DOMAIN_MAX_PFN(domain->gaw)) {
-		free_pgtable_page(domain->pgd);
+		iommu_free_page(domain->pgd);
 		domain->pgd = NULL;
 	}
 }
@@ -1190,7 +1175,7 @@ static int iommu_alloc_root_entry(struct intel_iommu *iommu)
 {
 	struct root_entry *root;

-	root = alloc_pgtable_page(iommu->node, GFP_ATOMIC);
+	root = iommu_alloc_page_node(iommu->node, GFP_ATOMIC);
 	if (!root) {
 		pr_err("Allocating root entry for %s failed\n",
 		       iommu->name);
@@ -1863,7 +1848,7 @@ static void domain_exit(struct dmar_domain *domain)
 		LIST_HEAD(freelist);

 		domain_unmap(domain, 0, DOMAIN_MAX_PFN(domain->gaw), &freelist);
-		put_pages_list(&freelist);
+		iommu_free_pages_list(&freelist);
 	}

 	if (WARN_ON(!list_empty(&domain->devices)))
@@ -2637,7 +2622,7 @@ static int copy_context_table(struct intel_iommu *iommu,
 			if (!old_ce)
 				goto out;

-			new_ce = alloc_pgtable_page(iommu->node, GFP_KERNEL);
+			new_ce = iommu_alloc_page_node(iommu->node, GFP_KERNEL);
 			if (!new_ce)
 				goto out_unmap;

@@ -3570,7 +3555,7 @@ static int intel_iommu_memory_notifier(struct notifier_block *nb,
 					start_vpfn, mhp->nr_pages,
 					list_empty(&freelist), 0);
 			rcu_read_unlock();
-			put_pages_list(&freelist);
+			iommu_free_pages_list(&freelist);
 		}
 		break;
 	}
@@ -4001,7 +3986,7 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width)
 	domain->max_addr = 0;

 	/* always allocate the top pgd */
-	domain->pgd = alloc_pgtable_page(domain->nid, GFP_ATOMIC);
+	domain->pgd = iommu_alloc_page_node(domain->nid, GFP_ATOMIC);
 	if (!domain->pgd)
 		return -ENOMEM;
 	domain_flush_cache(domain, domain->pgd, PAGE_SIZE);
@@ -4148,7 +4133,7 @@ int prepare_domain_attach_device(struct iommu_domain *domain,
 		pte = dmar_domain->pgd;
 		if (dma_pte_present(pte)) {
 			dmar_domain->pgd = phys_to_virt(dma_pte_addr(pte));
-			free_pgtable_page(pte);
+			iommu_free_page(pte);
 		}
 		dmar_domain->agaw--;
 	}
@@ -4295,7 +4280,7 @@ static void intel_iommu_tlb_sync(struct iommu_domain *domain,
 				      start_pfn, nrpages,
 				      list_empty(&gather->freelist), 0);

-	put_pages_list(&gather->freelist);
+	iommu_free_pages_list(&gather->freelist);
 }

 static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain,
diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
index 65d37a138c75..b505f3f44d0a 100644
--- a/drivers/iommu/intel/iommu.h
+++ b/drivers/iommu/intel/iommu.h
@@ -894,8 +894,6 @@ void domain_update_iommu_cap(struct dmar_domain *domain);

 int dmar_ir_support(void);

-void *alloc_pgtable_page(int node, gfp_t gfp);
-void free_pgtable_page(void *vaddr);
 void iommu_flush_write_buffer(struct intel_iommu *iommu);
 struct intel_iommu *device_to_iommu(struct device *dev, u8 *bus, u8 *devfn);
 struct iommu_domain *intel_nested_domain_alloc(struct iommu_domain *parent,
diff --git a/drivers/iommu/intel/irq_remapping.c b/drivers/iommu/intel/irq_remapping.c
index 29b9e55dcf26..72e1c1342c13 100644
--- a/drivers/iommu/intel/irq_remapping.c
+++ b/drivers/iommu/intel/irq_remapping.c
@@ -22,6 +22,7 @@

 #include "iommu.h"
 #include "../irq_remapping.h"
+#include "../iommu-pages.h"
 #include "cap_audit.h"

 enum irq_mode {
@@ -536,8 +537,8 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu)
 	if (!ir_table)
 		return -ENOMEM;

-	pages = alloc_pages_node(iommu->node, GFP_KERNEL | __GFP_ZERO,
-				 INTR_REMAP_PAGE_ORDER);
+	pages = __iommu_alloc_pages_node(iommu->node, GFP_KERNEL,
+					 INTR_REMAP_PAGE_ORDER);
 	if (!pages) {
 		pr_err("IR%d: failed to allocate pages of order %d\n",
 		       iommu->seq_id, INTR_REMAP_PAGE_ORDER);
@@ -622,7 +623,7 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu)
out_free_bitmap:
 	bitmap_free(bitmap);
out_free_pages:
-	__free_pages(pages, INTR_REMAP_PAGE_ORDER);
+	__iommu_free_pages(pages, INTR_REMAP_PAGE_ORDER);
out_free_table:
 	kfree(ir_table);

@@ -643,8 +644,7 @@ static void intel_teardown_irq_remapping(struct intel_iommu *iommu)
 		irq_domain_free_fwnode(fn);
 		iommu->ir_domain = NULL;
 	}
-	free_pages((unsigned long)iommu->ir_table->base,
-		   INTR_REMAP_PAGE_ORDER);
+	iommu_free_pages(iommu->ir_table->base, INTR_REMAP_PAGE_ORDER);
 	bitmap_free(iommu->ir_table->bitmap);
 	kfree(iommu->ir_table);
 	iommu->ir_table = NULL;
diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
index 74e8e4c17e81..1856e74bba78 100644
--- a/drivers/iommu/intel/pasid.c
+++ b/drivers/iommu/intel/pasid.c
@@ -20,6 +20,7 @@

 #include "iommu.h"
 #include "pasid.h"
+#include "../iommu-pages.h"

 /*
  * Intel IOMMU system wide PASID name space:
@@ -116,8 +117,7 @@ int intel_pasid_alloc_table(struct device *dev)

 	size = max_pasid >> (PASID_PDE_SHIFT - 3);
 	order = size ? get_order(size) : 0;
-	pages = alloc_pages_node(info->iommu->node,
-				 GFP_KERNEL | __GFP_ZERO, order);
+	pages = __iommu_alloc_pages_node(info->iommu->node, GFP_KERNEL, order);
 	if (!pages) {
 		kfree(pasid_table);
 		return -ENOMEM;
 	}
@@ -154,10 +154,10 @@ void intel_pasid_free_table(struct device *dev)
 	max_pde = pasid_table->max_pasid >> PASID_PDE_SHIFT;
 	for (i = 0; i < max_pde; i++) {
 		table = get_pasid_table_from_pde(&dir[i]);
-		free_pgtable_page(table);
+		iommu_free_page(table);
 	}

-	free_pages((unsigned long)pasid_table->table, pasid_table->order);
+	iommu_free_pages(pasid_table->table, pasid_table->order);
 	kfree(pasid_table);
 }

@@ -203,7 +203,7 @@ static struct pasid_entry *intel_pasid_get_entry(struct device *dev, u32 pasid)
retry:
 	entries = get_pasid_table_from_pde(&dir[dir_index]);
 	if (!entries) {
-		entries = alloc_pgtable_page(info->iommu->node, GFP_ATOMIC);
+		entries = iommu_alloc_page_node(info->iommu->node, GFP_ATOMIC);
 		if (!entries)
 			return NULL;

@@ -215,7 +215,7 @@ static struct pasid_entry *intel_pasid_get_entry(struct device *dev, u32 pasid)
 		 */
 		if (cmpxchg64(&dir[dir_index].val, 0ULL,
 			      (u64)virt_to_phys(entries) | PASID_PTE_PRESENT)) {
-			free_pgtable_page(entries);
+			iommu_free_page(entries);
 			goto retry;
 		}
 		if (!ecap_coherent(info->iommu->ecap)) {
diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index 50a481c895b8..4cf8826b30e1 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -23,6 +23,7 @@
 #include "pasid.h"
 #include "perf.h"
 #include "../iommu-sva.h"
+#include "../iommu-pages.h"
 #include "trace.h"

 static irqreturn_t prq_event_thread(int irq, void *d);
@@ -67,7 +68,7 @@ int intel_svm_enable_prq(struct intel_iommu *iommu)
 	struct page *pages;
 	int irq, ret;

-	pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, PRQ_ORDER);
+	pages = __iommu_alloc_pages(GFP_KERNEL, PRQ_ORDER);
 	if (!pages) {
 		pr_warn("IOMMU: %s: Failed to allocate page request queue\n",
 			iommu->name);
@@ -118,7 +119,7 @@ int intel_svm_enable_prq(struct intel_iommu *iommu)
 	dmar_free_hwirq(irq);
 	iommu->pr_irq = 0;
free_prq:
-	free_pages((unsigned long)iommu->prq, PRQ_ORDER);
+	iommu_free_pages(iommu->prq, PRQ_ORDER);
 	iommu->prq = NULL;

 	return ret;
@@ -141,7 +142,7 @@ int intel_svm_finish_prq(struct intel_iommu *iommu)
 		iommu->iopf_queue = NULL;
 	}

-	free_pages((unsigned long)iommu->prq, PRQ_ORDER);
+	iommu_free_pages(iommu->prq, PRQ_ORDER);
 	iommu->prq = NULL;

 	return 0;
diff --git a/drivers/iommu/iommu-pages.h b/drivers/iommu/iommu-pages.h
new file mode 100644
index 000000000000..2332f807d514
--- /dev/null
+++ b/drivers/iommu/iommu-pages.h
@@ -0,0 +1,199 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2023, Google LLC.
+ * Pasha Tatashin <pasha.tatashin@soleen.com>
+ */
+
+#ifndef __IOMMU_PAGES_H
+#define __IOMMU_PAGES_H
+
+#include <linux/gfp.h>
+#include <linux/mm.h>
+#include <linux/list.h>
+
+/*
+ * All page allocations performed in the IOMMU subsystem must use one of
+ * the functions below. This is necessary for proper accounting, as IOMMU
+ * state can be rather large, i.e. multiple gigabytes in size.
+ */
+
+/**
+ * __iommu_alloc_pages_node - allocate a zeroed page of a given order from
+ * a specific NUMA node.
+ * @nid: memory NUMA node id
+ * @gfp: buddy allocator flags
+ * @order: page order
+ *
+ * returns the head struct page of the allocated page.
+ */
+static inline struct page *__iommu_alloc_pages_node(int nid, gfp_t gfp,
+						    int order)
+{
+	struct page *pages;
+
+	pages = alloc_pages_node(nid, gfp | __GFP_ZERO, order);
+	if (!pages)
+		return NULL;
+
+	return pages;
+}
+
+/**
+ * __iommu_alloc_pages - allocate a zeroed page of a given order.
+ * @gfp: buddy allocator flags
+ * @order: page order
+ *
+ * returns the head struct page of the allocated page.
+ */
+static inline struct page *__iommu_alloc_pages(gfp_t gfp, int order)
+{
+	struct page *pages;
+
+	pages = alloc_pages(gfp | __GFP_ZERO, order);
+	if (!pages)
+		return NULL;
+
+	return pages;
+}
+
+/**
+ * __iommu_alloc_page_node - allocate a zeroed page at a specific NUMA node.
+ * @nid: memory NUMA node id
+ * @gfp: buddy allocator flags
+ *
+ * returns the struct page of the allocated page.
+ */
+static inline struct page *__iommu_alloc_page_node(int nid, gfp_t gfp)
+{
+	return __iommu_alloc_pages_node(nid, gfp, 0);
+}
+
+/**
+ * __iommu_alloc_page - allocate a zeroed page
+ * @gfp: buddy allocator flags
+ *
+ * returns the struct page of the allocated page.
+ */
+static inline struct page *__iommu_alloc_page(gfp_t gfp)
+{
+	return __iommu_alloc_pages(gfp, 0);
+}
+
+/**
+ * __iommu_free_pages - free a page of a given order
+ * @pages: head struct page of the page
+ * @order: page order
+ */
+static inline void __iommu_free_pages(struct page *pages, int order)
+{
+	if (!pages)
+		return;
+
+	__free_pages(pages, order);
+}
+
+/**
+ * __iommu_free_page - free a page
+ * @page: struct page of the page
+ */
+static inline void __iommu_free_page(struct page *page)
+{
+	__iommu_free_pages(page, 0);
+}
+
+/**
+ * iommu_alloc_pages_node - allocate a zeroed page of a given order from
+ * a specific NUMA node.
+ * @nid: memory NUMA node id
+ * @gfp: buddy allocator flags
+ * @order: page order
+ *
+ * returns the virtual address of the allocated page
+ */
+static inline void *iommu_alloc_pages_node(int nid, gfp_t gfp, int order)
+{
+	struct page *pages = __iommu_alloc_pages_node(nid, gfp, order);
+
+	if (!pages)
+		return NULL;
+
+	return page_address(pages);
+}
+
+/**
+ * iommu_alloc_pages - allocate a zeroed page of a given order
+ * @gfp: buddy allocator flags
+ * @order: page order
+ *
+ * returns the virtual address of the allocated page
+ */
+static inline void *iommu_alloc_pages(gfp_t gfp, int order)
+{
+	struct page *pages = __iommu_alloc_pages(gfp, order);
+
+	if (!pages)
+		return NULL;
+
+	return page_address(pages);
+}
+
+/**
+ * iommu_alloc_page_node - allocate a zeroed page at a specific NUMA node.
+ * @nid: memory NUMA node id
+ * @gfp: buddy allocator flags
+ *
+ * returns the virtual address of the allocated page
+ */
+static inline void *iommu_alloc_page_node(int nid, gfp_t gfp)
+{
+	return iommu_alloc_pages_node(nid, gfp, 0);
+}
+
+/**
+ * iommu_alloc_page - allocate a zeroed page
+ * @gfp: buddy allocator flags
+ *
+ * returns the virtual address of the allocated page
+ */
+static inline void *iommu_alloc_page(gfp_t gfp)
+{
+	return iommu_alloc_pages(gfp, 0);
+}
+
+/**
+ * iommu_free_pages - free a page of a given order
+ * @virt: virtual address of the page to be freed.
+ * @order: page order
+ */
+static inline void iommu_free_pages(void *virt, int order)
+{
+	if (!virt)
+		return;
+
+	__iommu_free_pages(virt_to_page(virt), order);
+}
+
+/**
+ * iommu_free_page - free a page
+ * @virt: virtual address of the page to be freed.
+ */
+static inline void iommu_free_page(void *virt)
+{
+	iommu_free_pages(virt, 0);
+}
+
+/**
+ * iommu_free_pages_list - free a list of pages.
+ * @pages: the head of the lru list to be freed.
+ */
+static inline void iommu_free_pages_list(struct list_head *pages)
+{
+	while (!list_empty(pages)) {
+		struct page *p = list_entry(pages->prev, struct page, lru);
+
+		list_del(&p->lru);
+		put_page(p);
+	}
+}
+
+#endif /* __IOMMU_PAGES_H */
-- 
2.43.0.rc2.451.g8631bc7472-goog
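Before the next patch in the series, a minimal usage sketch of the API
above. This is hypothetical driver code, not part of the series; the
function name example_table_lifecycle is invented for illustration:

	#include "iommu-pages.h"

	/* Hypothetical example: build and tear down a one-page table. */
	static int example_table_lifecycle(int nid)
	{
		LIST_HEAD(freelist);
		u64 *table = iommu_alloc_page_node(nid, GFP_KERNEL); /* zeroed, virtual address */
		struct page *extra = __iommu_alloc_page(GFP_KERNEL); /* zeroed, struct page */

		if (!table || !extra) {
			iommu_free_page(table);		/* both free helpers are NULL-safe */
			__iommu_free_page(extra);
			return -ENOMEM;
		}

		/* Deferred freeing: queue order-0 pages on a list and release
		 * them in one sweep, as the Intel flush-queue code does.
		 */
		list_add_tail(&extra->lru, &freelist);
		iommu_free_pages_list(&freelist);

		iommu_free_page(table);
		return 0;
	}

Note the split between the __iommu_* helpers, which traffic in struct page,
and the iommu_* helpers, which traffic in virtual addresses; call sites pick
whichever form they already use.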
From nobody Wed Dec 17 09:20:03 2025
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH 02/16] iommu/amd: use page allocation function provided by iommu-pages.h
Date: Tue, 28 Nov 2023 20:49:24 +0000
Message-ID: <20231128204938.1453583-3-pasha.tatashin@soleen.com>
In-Reply-To: <20231128204938.1453583-1-pasha.tatashin@soleen.com>
References: <20231128204938.1453583-1-pasha.tatashin@soleen.com>

Convert iommu/amd/* files to use the new page allocation functions
provided in iommu-pages.h.
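One property of the conversions below is worth spelling out: call sites stop
passing __GFP_ZERO because the wrappers OR it in themselves. A condensed
sketch of the equivalence, using the flags from alloc_dev_table() below
(ptr and order stand in for the real variables):

	/* after the conversion */
	ptr = iommu_alloc_pages(GFP_KERNEL | GFP_DMA32, order);

	/* behaves like the pre-series call */
	ptr = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO | GFP_DMA32, order);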
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 drivers/iommu/amd/amd_iommu.h     |  8 ---
 drivers/iommu/amd/init.c          | 91 ++++++++++++++-----------------
 drivers/iommu/amd/io_pgtable.c    | 13 +++--
 drivers/iommu/amd/io_pgtable_v2.c | 20 +++----
 drivers/iommu/amd/iommu.c         | 13 +++--
 5 files changed, 64 insertions(+), 81 deletions(-)

diff --git a/drivers/iommu/amd/amd_iommu.h b/drivers/iommu/amd/amd_iommu.h
index 86be1edd50ee..bf697d566e0b 100644
--- a/drivers/iommu/amd/amd_iommu.h
+++ b/drivers/iommu/amd/amd_iommu.h
@@ -136,14 +136,6 @@ static inline int get_pci_sbdf_id(struct pci_dev *pdev)
 	return PCI_SEG_DEVID_TO_SBDF(seg, devid);
 }

-static inline void *alloc_pgtable_page(int nid, gfp_t gfp)
-{
-	struct page *page;
-
-	page = alloc_pages_node(nid, gfp | __GFP_ZERO, 0);
-	return page ? page_address(page) : NULL;
-}
-
 bool translation_pre_enabled(struct amd_iommu *iommu);
 bool amd_iommu_is_attach_deferred(struct device *dev);
 int __init add_special_device(u8 type, u8 id, u32 *devid, bool cmd_line);
diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
index 64bcf3df37ee..5b8a80fc7e50 100644
--- a/drivers/iommu/amd/init.c
+++ b/drivers/iommu/amd/init.c
@@ -35,6 +35,7 @@

 #include "amd_iommu.h"
 #include "../irq_remapping.h"
+#include "../iommu-pages.h"

 /*
  * definitions for the ACPI scanning code
@@ -648,8 +649,8 @@ static int __init find_last_devid_acpi(struct acpi_table_header *table, u16 pci_
 /* Allocate per PCI segment device table */
 static inline int __init alloc_dev_table(struct amd_iommu_pci_seg *pci_seg)
 {
-	pci_seg->dev_table = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO | GFP_DMA32,
-						      get_order(pci_seg->dev_table_size));
+	pci_seg->dev_table = iommu_alloc_pages(GFP_KERNEL | GFP_DMA32,
+					       get_order(pci_seg->dev_table_size));
 	if (!pci_seg->dev_table)
 		return -ENOMEM;

@@ -658,17 +659,16 @@ static inline int __init alloc_dev_table(struct amd_iommu_pci_seg *pci_seg)

 static inline void free_dev_table(struct amd_iommu_pci_seg *pci_seg)
 {
-	free_pages((unsigned long)pci_seg->dev_table,
-		   get_order(pci_seg->dev_table_size));
+	iommu_free_pages(pci_seg->dev_table,
+			 get_order(pci_seg->dev_table_size));
 	pci_seg->dev_table = NULL;
 }

 /* Allocate per PCI segment IOMMU rlookup table. */
 static inline int __init alloc_rlookup_table(struct amd_iommu_pci_seg *pci_seg)
 {
-	pci_seg->rlookup_table = (void *)__get_free_pages(
-						GFP_KERNEL | __GFP_ZERO,
-						get_order(pci_seg->rlookup_table_size));
+	pci_seg->rlookup_table = iommu_alloc_pages(GFP_KERNEL,
+						   get_order(pci_seg->rlookup_table_size));
 	if (pci_seg->rlookup_table == NULL)
 		return -ENOMEM;

@@ -677,16 +677,15 @@ static inline int __init alloc_rlookup_table(struct amd_iommu_pci_seg *pci_seg)

 static inline void free_rlookup_table(struct amd_iommu_pci_seg *pci_seg)
 {
-	free_pages((unsigned long)pci_seg->rlookup_table,
-		   get_order(pci_seg->rlookup_table_size));
+	iommu_free_pages(pci_seg->rlookup_table,
+			 get_order(pci_seg->rlookup_table_size));
 	pci_seg->rlookup_table = NULL;
 }

 static inline int __init alloc_irq_lookup_table(struct amd_iommu_pci_seg *pci_seg)
 {
-	pci_seg->irq_lookup_table = (void *)__get_free_pages(
-						GFP_KERNEL | __GFP_ZERO,
-						get_order(pci_seg->rlookup_table_size));
+	pci_seg->irq_lookup_table = iommu_alloc_pages(GFP_KERNEL,
+						      get_order(pci_seg->rlookup_table_size));
 	kmemleak_alloc(pci_seg->irq_lookup_table,
 		       pci_seg->rlookup_table_size, 1, GFP_KERNEL);
 	if (pci_seg->irq_lookup_table == NULL)
@@ -698,8 +697,8 @@ static inline int __init alloc_irq_lookup_table(struct amd_iommu_pci_seg *pci_se
 static inline void free_irq_lookup_table(struct amd_iommu_pci_seg *pci_seg)
 {
 	kmemleak_free(pci_seg->irq_lookup_table);
-	free_pages((unsigned long)pci_seg->irq_lookup_table,
-		   get_order(pci_seg->rlookup_table_size));
+	iommu_free_pages(pci_seg->irq_lookup_table,
+			 get_order(pci_seg->rlookup_table_size));
 	pci_seg->irq_lookup_table = NULL;
 }

@@ -707,8 +706,8 @@ static int __init alloc_alias_table(struct amd_iommu_pci_seg *pci_seg)
 {
 	int i;

-	pci_seg->alias_table = (void *)__get_free_pages(GFP_KERNEL,
-							get_order(pci_seg->alias_table_size));
+	pci_seg->alias_table = iommu_alloc_pages(GFP_KERNEL,
+						 get_order(pci_seg->alias_table_size));
 	if (!pci_seg->alias_table)
 		return -ENOMEM;

@@ -723,8 +722,8 @@ static int __init alloc_alias_table(struct amd_iommu_pci_seg *pci_seg)

 static void __init free_alias_table(struct amd_iommu_pci_seg *pci_seg)
 {
-	free_pages((unsigned long)pci_seg->alias_table,
-		   get_order(pci_seg->alias_table_size));
+	iommu_free_pages(pci_seg->alias_table,
+			 get_order(pci_seg->alias_table_size));
 	pci_seg->alias_table = NULL;
 }

@@ -735,8 +734,8 @@ static void __init free_alias_table(struct amd_iommu_pci_seg *pci_seg)
  */
 static int __init alloc_command_buffer(struct amd_iommu *iommu)
 {
-	iommu->cmd_buf = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
-						  get_order(CMD_BUFFER_SIZE));
+	iommu->cmd_buf = iommu_alloc_pages(GFP_KERNEL,
+					   get_order(CMD_BUFFER_SIZE));

 	return iommu->cmd_buf ? 0 : -ENOMEM;
 }
@@ -844,19 +843,19 @@ static void iommu_disable_command_buffer(struct amd_iommu *iommu)

 static void __init free_command_buffer(struct amd_iommu *iommu)
 {
-	free_pages((unsigned long)iommu->cmd_buf, get_order(CMD_BUFFER_SIZE));
+	iommu_free_pages(iommu->cmd_buf, get_order(CMD_BUFFER_SIZE));
 }

 static void *__init iommu_alloc_4k_pages(struct amd_iommu *iommu,
					  gfp_t gfp, size_t size)
 {
 	int order = get_order(size);
-	void *buf = (void *)__get_free_pages(gfp, order);
+	void *buf = iommu_alloc_pages(gfp, order);

 	if (buf &&
 	    check_feature(FEATURE_SNP) &&
 	    set_memory_4k((unsigned long)buf, (1 << order))) {
-		free_pages((unsigned long)buf, order);
+		iommu_free_pages(buf, order);
 		buf = NULL;
 	}

@@ -866,7 +865,7 @@ static void *__init iommu_alloc_4k_pages(struct amd_iommu *iommu,
 /* allocates the memory where the IOMMU will log its events to */
 static int __init alloc_event_buffer(struct amd_iommu *iommu)
 {
-	iommu->evt_buf = iommu_alloc_4k_pages(iommu, GFP_KERNEL | __GFP_ZERO,
+	iommu->evt_buf = iommu_alloc_4k_pages(iommu, GFP_KERNEL,
 					      EVT_BUFFER_SIZE);

 	return iommu->evt_buf ? 0 : -ENOMEM;
@@ -900,14 +899,13 @@ static void iommu_disable_event_buffer(struct amd_iommu *iommu)

 static void __init free_event_buffer(struct amd_iommu *iommu)
 {
-	free_pages((unsigned long)iommu->evt_buf, get_order(EVT_BUFFER_SIZE));
+	iommu_free_pages(iommu->evt_buf, get_order(EVT_BUFFER_SIZE));
 }

 /* allocates the memory where the IOMMU will log its events to */
 static int __init alloc_ppr_log(struct amd_iommu *iommu)
 {
-	iommu->ppr_log = iommu_alloc_4k_pages(iommu, GFP_KERNEL | __GFP_ZERO,
-					      PPR_LOG_SIZE);
+	iommu->ppr_log = iommu_alloc_4k_pages(iommu, GFP_KERNEL, PPR_LOG_SIZE);

 	return iommu->ppr_log ? 0 : -ENOMEM;
 }
@@ -936,14 +934,14 @@ static void iommu_enable_ppr_log(struct amd_iommu *iommu)

 static void __init free_ppr_log(struct amd_iommu *iommu)
 {
-	free_pages((unsigned long)iommu->ppr_log, get_order(PPR_LOG_SIZE));
+	iommu_free_pages(iommu->ppr_log, get_order(PPR_LOG_SIZE));
 }

 static void free_ga_log(struct amd_iommu *iommu)
 {
 #ifdef CONFIG_IRQ_REMAP
-	free_pages((unsigned long)iommu->ga_log, get_order(GA_LOG_SIZE));
-	free_pages((unsigned long)iommu->ga_log_tail, get_order(8));
+	iommu_free_pages(iommu->ga_log, get_order(GA_LOG_SIZE));
+	iommu_free_pages(iommu->ga_log_tail, get_order(8));
 #endif
 }

@@ -988,13 +986,11 @@ static int iommu_init_ga_log(struct amd_iommu *iommu)
 	if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir))
 		return 0;

-	iommu->ga_log = (u8 *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
-					       get_order(GA_LOG_SIZE));
+	iommu->ga_log = iommu_alloc_pages(GFP_KERNEL, get_order(GA_LOG_SIZE));
 	if (!iommu->ga_log)
 		goto err_out;

-	iommu->ga_log_tail = (u8 *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
-						    get_order(8));
+	iommu->ga_log_tail = iommu_alloc_pages(GFP_KERNEL, get_order(8));
 	if (!iommu->ga_log_tail)
 		goto err_out;

@@ -1007,7 +1003,7 @@ static int iommu_init_ga_log(struct amd_iommu *iommu)

 static int __init alloc_cwwb_sem(struct amd_iommu *iommu)
 {
-	iommu->cmd_sem = iommu_alloc_4k_pages(iommu, GFP_KERNEL | __GFP_ZERO, 1);
+	iommu->cmd_sem = iommu_alloc_4k_pages(iommu, GFP_KERNEL, 1);

 	return iommu->cmd_sem ? 0 : -ENOMEM;
 }
@@ -1015,7 +1011,7 @@ static int __init alloc_cwwb_sem(struct amd_iommu *iommu)
 static void __init free_cwwb_sem(struct amd_iommu *iommu)
 {
 	if (iommu->cmd_sem)
-		free_page((unsigned long)iommu->cmd_sem);
+		iommu_free_page((void *)iommu->cmd_sem);
 }

 static void iommu_enable_xt(struct amd_iommu *iommu)
@@ -1080,7 +1076,6 @@ static bool __copy_device_table(struct amd_iommu *iommu)
 	u32 lo, hi, devid, old_devtb_size;
 	phys_addr_t old_devtb_phys;
 	u16 dom_id, dte_v, irq_v;
-	gfp_t gfp_flag;
 	u64 tmp;

 	/* Each IOMMU use separate device table with the same size */
@@ -1114,9 +1109,8 @@ static bool __copy_device_table(struct amd_iommu *iommu)
 	if (!old_devtb)
 		return false;

-	gfp_flag = GFP_KERNEL | __GFP_ZERO | GFP_DMA32;
-	pci_seg->old_dev_tbl_cpy = (void *)__get_free_pages(gfp_flag,
-							    get_order(pci_seg->dev_table_size));
+	pci_seg->old_dev_tbl_cpy = iommu_alloc_pages(GFP_KERNEL | GFP_DMA32,
+						     get_order(pci_seg->dev_table_size));
 	if (pci_seg->old_dev_tbl_cpy == NULL) {
 		pr_err("Failed to allocate memory for copying old device table!\n");
 		memunmap(old_devtb);
@@ -2800,8 +2794,8 @@ static void early_enable_iommus(void)

 		for_each_pci_segment(pci_seg) {
 			if (pci_seg->old_dev_tbl_cpy != NULL) {
-				free_pages((unsigned long)pci_seg->old_dev_tbl_cpy,
-					   get_order(pci_seg->dev_table_size));
+				iommu_free_pages(pci_seg->old_dev_tbl_cpy,
+						 get_order(pci_seg->dev_table_size));
 				pci_seg->old_dev_tbl_cpy = NULL;
 			}
 		}
@@ -2814,8 +2808,8 @@ static void early_enable_iommus(void)
 		pr_info("Copied DEV table from previous kernel.\n");

 		for_each_pci_segment(pci_seg) {
-			free_pages((unsigned long)pci_seg->dev_table,
-				   get_order(pci_seg->dev_table_size));
+			iommu_free_pages(pci_seg->dev_table,
+					 get_order(pci_seg->dev_table_size));
 			pci_seg->dev_table = pci_seg->old_dev_tbl_cpy;
 		}

@@ -3018,8 +3012,8 @@ static bool __init check_ioapic_information(void)

 static void __init free_dma_resources(void)
 {
-	free_pages((unsigned long)amd_iommu_pd_alloc_bitmap,
-		   get_order(MAX_DOMAIN_ID/8));
+	iommu_free_pages(amd_iommu_pd_alloc_bitmap,
+			 get_order(MAX_DOMAIN_ID / 8));
 	amd_iommu_pd_alloc_bitmap = NULL;

 	free_unity_maps();
@@ -3091,9 +3085,8 @@ static int __init early_amd_iommu_init(void)
 	/* Device table - directly used by all IOMMUs */
 	ret = -ENOMEM;

-	amd_iommu_pd_alloc_bitmap = (void *)__get_free_pages(
-						GFP_KERNEL | __GFP_ZERO,
-						get_order(MAX_DOMAIN_ID/8));
+	amd_iommu_pd_alloc_bitmap = iommu_alloc_pages(GFP_KERNEL,
+						      get_order(MAX_DOMAIN_ID / 8));
 	if (amd_iommu_pd_alloc_bitmap == NULL)
 		goto out;

diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
index 6c0621f6f572..f8b7d4c39a9f 100644
--- a/drivers/iommu/amd/io_pgtable.c
+++ b/drivers/iommu/amd/io_pgtable.c
@@ -22,6 +22,7 @@

 #include "amd_iommu_types.h"
 #include "amd_iommu.h"
+#include "../iommu-pages.h"

 static void v1_tlb_flush_all(void *cookie)
 {
@@ -156,7 +157,7 @@ static bool increase_address_space(struct protection_domain *domain,
 	bool ret = true;
 	u64 *pte;

-	pte = alloc_pgtable_page(domain->nid, gfp);
+	pte = iommu_alloc_page_node(domain->nid, gfp);
 	if (!pte)
 		return false;

@@ -187,7 +188,7 @@ static bool increase_address_space(struct protection_domain *domain,

out:
 	spin_unlock_irqrestore(&domain->lock, flags);
-	free_page((unsigned long)pte);
+	iommu_free_page(pte);

 	return ret;
 }
@@ -250,7 +251,7 @@ static u64 *alloc_pte(struct protection_domain *domain,

 		if (!IOMMU_PTE_PRESENT(__pte) ||
 		    pte_level == PAGE_MODE_NONE) {
-			page = alloc_pgtable_page(domain->nid, gfp);
+			page = iommu_alloc_page_node(domain->nid, gfp);

 			if (!page)
 				return NULL;
@@ -259,7 +260,7 @@ static u64 *alloc_pte(struct protection_domain *domain,

 			/* pte could have been changed somewhere. */
 			if (!try_cmpxchg64(pte, &__pte, __npte))
-				free_page((unsigned long)page);
+				iommu_free_page(page);
 			else if (IOMMU_PTE_PRESENT(__pte))
 				*updated = true;

@@ -430,7 +431,7 @@ static int iommu_v1_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
 	}

 	/* Everything flushed out, free pages now */
-	put_pages_list(&freelist);
+	iommu_free_pages_list(&freelist);

 	return ret;
 }
@@ -579,7 +580,7 @@ static void v1_free_pgtable(struct io_pgtable *iop)
 	/* Make changes visible to IOMMUs */
 	amd_iommu_domain_update(dom);

-	put_pages_list(&freelist);
+	iommu_free_pages_list(&freelist);
 }

 static struct io_pgtable *v1_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
diff --git a/drivers/iommu/amd/io_pgtable_v2.c b/drivers/iommu/amd/io_pgtable_v2.c
index f818a7e254d4..1e08dab93686 100644
--- a/drivers/iommu/amd/io_pgtable_v2.c
+++ b/drivers/iommu/amd/io_pgtable_v2.c
@@ -18,6 +18,7 @@

 #include "amd_iommu_types.h"
 #include "amd_iommu.h"
+#include "../iommu-pages.h"

 #define IOMMU_PAGE_PRESENT	BIT_ULL(0)	/* Is present */
 #define IOMMU_PAGE_RW		BIT_ULL(1)	/* Writeable */
@@ -99,11 +100,6 @@ static inline int page_size_to_level(u64 pg_size)
 	return PAGE_MODE_1_LEVEL;
 }

-static inline void free_pgtable_page(u64 *pt)
-{
-	free_page((unsigned long)pt);
-}
-
 static void free_pgtable(u64 *pt, int level)
 {
 	u64 *p;
@@ -125,10 +121,10 @@ static void free_pgtable(u64 *pt, int level)
 		if (level > 2)
 			free_pgtable(p, level - 1);
 		else
-			free_pgtable_page(p);
+			iommu_free_page(p);
 	}

-	free_pgtable_page(pt);
+	iommu_free_page(pt);
 }

 /* Allocate page table */
@@ -156,14 +152,14 @@ static u64 *v2_alloc_pte(int nid, u64 *pgd, unsigned long iova,
 	}

 	if (!IOMMU_PTE_PRESENT(__pte)) {
-		page = alloc_pgtable_page(nid, gfp);
+		page = iommu_alloc_page_node(nid, gfp);
 		if (!page)
 			return NULL;

 		__npte = set_pgtable_attr(page);
 		/* pte could have been changed somewhere. */
 		if (cmpxchg64(pte, __pte, __npte) != __pte)
-			free_pgtable_page(page);
+			iommu_free_page(page);
 		else if (IOMMU_PTE_PRESENT(__pte))
 			*updated = true;

@@ -185,7 +181,7 @@ static u64 *v2_alloc_pte(int nid, u64 *pgd, unsigned long iova,
 		if (pg_size == IOMMU_PAGE_SIZE_1G)
 			free_pgtable(__pte, end_level - 1);
 		else if (pg_size == IOMMU_PAGE_SIZE_2M)
-			free_pgtable_page(__pte);
+			iommu_free_page(__pte);
 	}

 	return pte;
@@ -380,7 +376,7 @@ static struct io_pgtable *v2_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
 	int ret;
 	int ias = IOMMU_IN_ADDR_BIT_SIZE;

-	pgtable->pgd = alloc_pgtable_page(pdom->nid, GFP_ATOMIC);
+	pgtable->pgd = iommu_alloc_page_node(pdom->nid, GFP_ATOMIC);
 	if (!pgtable->pgd)
 		return NULL;

@@ -403,7 +399,7 @@ static struct io_pgtable *v2_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
 	return &pgtable->iop;

err_free_pgd:
-	free_pgtable_page(pgtable->pgd);
+	iommu_free_page(pgtable->pgd);

 	return NULL;
 }
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index fcc987f5d4ed..9a228a95da0e 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -42,6 +42,7 @@
 #include "amd_iommu.h"
 #include "../dma-iommu.h"
 #include "../irq_remapping.h"
+#include "../iommu-pages.h"

 #define CMD_SET_TYPE(cmd, t) ((cmd)->data[1] |= ((t) << 28))

@@ -1642,7 +1643,7 @@ static void free_gcr3_tbl_level1(u64 *tbl)

 		ptr = iommu_phys_to_virt(tbl[i] & PAGE_MASK);

-		free_page((unsigned long)ptr);
+		iommu_free_page(ptr);
 	}
 }

@@ -1670,7 +1671,7 @@ static void free_gcr3_table(struct protection_domain *domain)
 	else
 		BUG_ON(domain->glx != 0);

-	free_page((unsigned long)domain->gcr3_tbl);
+	iommu_free_page(domain->gcr3_tbl);
 }

 /*
@@ -1697,7 +1698,7 @@ static int setup_gcr3_table(struct protection_domain *domain, int pasids)
 	if (levels > amd_iommu_max_glx_val)
 		return -EINVAL;

-	domain->gcr3_tbl = alloc_pgtable_page(domain->nid, GFP_ATOMIC);
+	domain->gcr3_tbl = iommu_alloc_page_node(domain->nid, GFP_ATOMIC);
 	if (domain->gcr3_tbl == NULL)
 		return -ENOMEM;

@@ -2092,7 +2093,7 @@ static void protection_domain_free(struct protection_domain *domain)
 		free_gcr3_table(domain);

 	if (domain->iop.root)
-		free_page((unsigned long)domain->iop.root);
+		iommu_free_page(domain->iop.root);

 	if (domain->id)
 		domain_id_free(domain->id);
@@ -2107,7 +2108,7 @@ static int protection_domain_init_v1(struct protection_domain *domain, int mode)
 	BUG_ON(mode < PAGE_MODE_NONE || mode > PAGE_MODE_6_LEVEL);

 	if (mode != PAGE_MODE_NONE) {
-		pt_root = (void *)get_zeroed_page(GFP_KERNEL);
+		pt_root = iommu_alloc_page(GFP_KERNEL);
 		if (!pt_root)
 			return -ENOMEM;
 	}
@@ -2783,7 +2784,7 @@ static u64 *__get_gcr3_pte(u64 *root, int level, u32 pasid, bool alloc)
 		if (!alloc)
 			return NULL;

-		root = (void *)get_zeroed_page(GFP_ATOMIC);
+		root = iommu_alloc_page(GFP_ATOMIC);
 		if (root == NULL)
 			return NULL;

-- 
2.43.0.rc2.451.g8631bc7472-goog

From nobody Wed Dec 17 09:20:03 2025
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH 03/16] iommu/io-pgtable-arm: use page allocation function provided by iommu-pages.h
Date: Tue, 28 Nov 2023 20:49:25 +0000
Message-ID: <20231128204938.1453583-4-pasha.tatashin@soleen.com>
In-Reply-To: <20231128204938.1453583-1-pasha.tatashin@soleen.com>
References: <20231128204938.1453583-1-pasha.tatashin@soleen.com>

Convert iommu/io-pgtable-arm.c to use the new page allocation functions
provided in iommu-pages.h.
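Here the wrapper replaces only the raw allocation; when the page-table
walker is not cache-coherent, the driver must still DMA-map the table
itself. A condensed sketch of the resulting pattern, simplified from
__arm_lpae_alloc_pages() as modified below (error unwinding trimmed):

	struct page *p = __iommu_alloc_pages_node(dev_to_node(dev), gfp, order);
	dma_addr_t dma;

	if (!p)
		return NULL;
	if (!cfg->coherent_walk) {
		/* hand the zeroed table to the device */
		dma = dma_map_single(dev, page_address(p), size, DMA_TO_DEVICE);
		if (dma_mapping_error(dev, dma)) {
			__iommu_free_pages(p, order);
			return NULL;
		}
	}
	return page_address(p);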
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 drivers/iommu/io-pgtable-arm.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 72dcdd468cf3..21d315151ad6 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -21,6 +21,7 @@
 #include <asm/barrier.h>

 #include "io-pgtable-arm.h"
+#include "iommu-pages.h"

 #define ARM_LPAE_MAX_ADDR_BITS		52
 #define ARM_LPAE_S2_MAX_CONCAT_PAGES	16
@@ -197,7 +198,7 @@ static void *__arm_lpae_alloc_pages(size_t size, gfp_t gfp,
 	void *pages;

 	VM_BUG_ON((gfp & __GFP_HIGHMEM));
-	p = alloc_pages_node(dev_to_node(dev), gfp | __GFP_ZERO, order);
+	p = __iommu_alloc_pages_node(dev_to_node(dev), gfp, order);
 	if (!p)
 		return NULL;

@@ -221,7 +222,7 @@ static void *__arm_lpae_alloc_pages(size_t size, gfp_t gfp,
 	dev_err(dev, "Cannot accommodate DMA translation for IOMMU page tables\n");
 	dma_unmap_single(dev, dma, size, DMA_TO_DEVICE);
out_free:
-	__free_pages(p, order);
+	__iommu_free_pages(p, order);
 	return NULL;
 }

@@ -231,7 +232,7 @@ static void __arm_lpae_free_pages(void *pages, size_t size,
 	if (!cfg->coherent_walk)
 		dma_unmap_single(cfg->iommu_dev, __arm_lpae_dma_addr(pages),
 				 size, DMA_TO_DEVICE);
-	free_pages((unsigned long)pages, get_order(size));
+	iommu_free_pages(pages, get_order(size));
 }

 static void __arm_lpae_sync_pte(arm_lpae_iopte *ptep, int num_entries,
-- 
2.43.0.rc2.451.g8631bc7472-goog

From nobody Wed Dec 17 09:20:03 2025
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH 04/16] iommu/io-pgtable-dart: use page allocation function provided by iommu-pages.h
Date: Tue, 28 Nov 2023 20:49:26 +0000
Message-ID: <20231128204938.1453583-5-pasha.tatashin@soleen.com>
In-Reply-To: <20231128204938.1453583-1-pasha.tatashin@soleen.com>
References: <20231128204938.1453583-1-pasha.tatashin@soleen.com>

Convert iommu/io-pgtable-dart.c to use the new page allocation functions
provided in iommu-pages.h.
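The main simplification in this patch is that iommu_alloc_pages() already
returns a zeroed virtual address, so the open-coded struct page handling in
__dart_alloc_pages() collapses to a single call. A condensed before/after
sketch (body of the helper only):

	/* before: manual struct page handling */
	struct page *p = alloc_pages(gfp | __GFP_ZERO, order);

	if (!p)
		return NULL;
	return page_address(p);

	/* after: the wrapper does the same work in one call */
	return iommu_alloc_pages(gfp, order);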
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: Janne Grunau
---
 drivers/iommu/io-pgtable-dart.c | 37 +++++++++++++--------------------
 1 file changed, 14 insertions(+), 23 deletions(-)

diff --git a/drivers/iommu/io-pgtable-dart.c b/drivers/iommu/io-pgtable-dart.c
index 74b1ef2b96be..ad28031e1e93 100644
--- a/drivers/iommu/io-pgtable-dart.c
+++ b/drivers/iommu/io-pgtable-dart.c
@@ -23,6 +23,7 @@
 #include <linux/types.h>

 #include <asm/barrier.h>
+#include "iommu-pages.h"

 #define DART1_MAX_ADDR_BITS	36

@@ -106,18 +107,12 @@ static phys_addr_t iopte_to_paddr(dart_iopte pte,
 	return paddr;
 }

-static void *__dart_alloc_pages(size_t size, gfp_t gfp,
-				struct io_pgtable_cfg *cfg)
+static void *__dart_alloc_pages(size_t size, gfp_t gfp)
 {
 	int order = get_order(size);
-	struct page *p;

 	VM_BUG_ON((gfp & __GFP_HIGHMEM));
-	p = alloc_pages(gfp | __GFP_ZERO, order);
-	if (!p)
-		return NULL;
-
-	return page_address(p);
+	return iommu_alloc_pages(gfp, order);
 }

 static int dart_init_pte(struct dart_io_pgtable *data,
@@ -262,13 +257,13 @@ static int dart_map_pages(struct io_pgtable_ops *ops, unsigned long iova,

 		/* no L2 table present */
 		if (!pte) {
-			cptep = __dart_alloc_pages(tblsz, gfp, cfg);
+			cptep = __dart_alloc_pages(tblsz, gfp);
 			if (!cptep)
 				return -ENOMEM;

 			pte = dart_install_table(cptep, ptep, 0, data);
 			if (pte)
-				free_pages((unsigned long)cptep, get_order(tblsz));
+				iommu_free_pages(cptep, get_order(tblsz));

 			/* L2 table is present (now) */
 			pte = READ_ONCE(*ptep);
@@ -419,8 +414,7 @@ apple_dart_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
 	cfg->apple_dart_cfg.n_ttbrs = 1 << data->tbl_bits;

 	for (i = 0; i < cfg->apple_dart_cfg.n_ttbrs; ++i) {
-		data->pgd[i] = __dart_alloc_pages(DART_GRANULE(data), GFP_KERNEL,
-					   cfg);
+		data->pgd[i] = __dart_alloc_pages(DART_GRANULE(data), GFP_KERNEL);
 		if (!data->pgd[i])
 			goto out_free_data;
 		cfg->apple_dart_cfg.ttbr[i] = virt_to_phys(data->pgd[i]);
@@ -429,9 +423,10 @@ apple_dart_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
 	return &data->iop;

out_free_data:
-	while (--i >= 0)
-		free_pages((unsigned long)data->pgd[i],
-			   get_order(DART_GRANULE(data)));
+	while (--i >= 0) {
+		iommu_free_pages(data->pgd[i],
+				 get_order(DART_GRANULE(data)));
+	}
 	kfree(data);
 	return NULL;
 }
@@ -439,6 +434,7 @@ apple_dart_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
 static void apple_dart_free_pgtable(struct io_pgtable *iop)
 {
 	struct dart_io_pgtable *data = io_pgtable_to_data(iop);
+	int order = get_order(DART_GRANULE(data));
 	dart_iopte *ptep, *end;
 	int i;

@@ -449,15 +445,10 @@ static void apple_dart_free_pgtable(struct io_pgtable *iop)
 		while (ptep != end) {
 			dart_iopte pte = *ptep++;

-			if (pte) {
-				unsigned long page =
-					(unsigned long)iopte_deref(pte, data);
-
-				free_pages(page, get_order(DART_GRANULE(data)));
-			}
+			if (pte)
+				iommu_free_pages(iopte_deref(pte, data), order);
 		}
-		free_pages((unsigned long)data->pgd[i],
-			   get_order(DART_GRANULE(data)));
+		iommu_free_pages(data->pgd[i], order);
 	}

 	kfree(data);
-- 
2.43.0.rc2.451.g8631bc7472-goog

From nobody Wed Dec 17 09:20:03 2025
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH 05/16] iommu/io-pgtable-arm-v7s: use page allocation function provided by iommu-pages.h
Date: Tue, 28 Nov 2023 20:49:27 +0000
Message-ID: <20231128204938.1453583-6-pasha.tatashin@soleen.com>

Convert iommu/io-pgtable-arm-v7s.c to use the new page allocation
functions provided in iommu-pages.h.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 drivers/iommu/io-pgtable-arm-v7s.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
index 75f244a3e12d..3d494ca1f671 100644
--- a/drivers/iommu/io-pgtable-arm-v7s.c
+++ b/drivers/iommu/io-pgtable-arm-v7s.c
@@ -34,6 +34,7 @@
 #include

 #include
+#include "iommu-pages.h"

 /* Struct accessors */
 #define io_pgtable_to_data(x) \
@@ -255,7 +256,7 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
                         GFP_KERNEL : ARM_V7S_TABLE_GFP_DMA;

         if (lvl == 1)
-                table = (void *)__get_free_pages(gfp_l1 | __GFP_ZERO, get_order(size));
+                table = iommu_alloc_pages(gfp_l1, get_order(size));
         else if (lvl == 2)
                 table = kmem_cache_zalloc(data->l2_tables, gfp);

@@ -283,6 +284,7 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
         }
         if (lvl == 2)
                 kmemleak_ignore(table);
+
         return table;

 out_unmap:
@@ -290,7 +292,7 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
         dma_unmap_single(dev, dma, size, DMA_TO_DEVICE);
 out_free:
         if (lvl == 1)
-                free_pages((unsigned long)table, get_order(size));
+                iommu_free_pages(table, get_order(size));
         else
                 kmem_cache_free(data->l2_tables, table);
         return NULL;
@@ -306,8 +308,9 @@ static void __arm_v7s_free_table(void *table, int lvl,
         if (!cfg->coherent_walk)
                 dma_unmap_single(dev, __arm_v7s_dma_addr(table), size,
                                  DMA_TO_DEVICE);
+
         if (lvl == 1)
-                free_pages((unsigned long)table, get_order(size));
+                iommu_free_pages(table, get_order(size));
         else
                 kmem_cache_free(data->l2_tables, table);
 }
-- 
2.43.0.rc2.451.g8631bc7472-goog
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH 06/16] iommu/dma: use page allocation function provided by iommu-pages.h
Date: Tue, 28 Nov 2023 20:49:28 +0000
Message-ID: <20231128204938.1453583-7-pasha.tatashin@soleen.com>

Convert iommu/dma-iommu.c to use the new page allocation functions
provided in iommu-pages.h.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 drivers/iommu/dma-iommu.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 85163a83df2f..822adad464c2 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -31,6 +31,7 @@
 #include

 #include "dma-iommu.h"
+#include "iommu-pages.h"

 struct iommu_dma_msi_page {
         struct list_head list;
@@ -874,7 +875,7 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
 static void __iommu_dma_free_pages(struct page **pages, int count)
 {
         while (count--)
-                __free_page(pages[count]);
+                __iommu_free_page(pages[count]);
         kvfree(pages);
 }

@@ -912,7 +913,8 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
                 order_size = 1U << order;
                 if (order_mask > order_size)
                         alloc_flags |= __GFP_NORETRY;
-                page = alloc_pages_node(nid, alloc_flags, order);
+                page = __iommu_alloc_pages_node(nid, alloc_flags,
+                                                order);
                 if (!page)
                         continue;
                 if (order)
@@ -1572,7 +1574,7 @@ static void *iommu_dma_alloc_pages(struct device *dev, size_t size,

         page = dma_alloc_contiguous(dev, alloc_size, gfp);
         if (!page)
-                page = alloc_pages_node(node, gfp, get_order(alloc_size));
+                page = __iommu_alloc_pages_node(node, gfp, get_order(alloc_size));
         if (!page)
                 return NULL;

-- 
2.43.0.rc2.451.g8631bc7472-goog
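[Editorial aside: the dma-iommu path is the one caller in the series
that needs struct page pointers placed on a specific NUMA node, hence
the double-underscore __iommu_alloc_pages_node() variant. The hunk
above also shows that __iommu_dma_alloc_pages() satisfies a request by
repeatedly taking the largest power-of-two chunk that still fits. A
rough stand-alone model of that splitting strategy follows; the real
loop additionally honors an order mask and adjusts GFP flags, both
deliberately omitted here.]

#include <stdio.h>

/* Largest order whose chunk (2^order pages) still fits in `count`. */
static int biggest_order(unsigned long count, int max_order)
{
        int order = max_order;

        while (order > 0 && (1UL << order) > count)
                order--;
        return order;
}

int main(void)
{
        unsigned long count = 13;       /* pages still needed */
        int max_order = 3;

        while (count) {
                int order = biggest_order(count, max_order);

                printf("alloc order-%d chunk (%lu pages)\n",
                       order, 1UL << order);
                count -= 1UL << order;  /* 13 pages -> 8 + 4 + 1 */
        }
        return 0;
}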
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH 07/16] iommu/exynos: use page allocation function provided by iommu-pages.h
Date: Tue, 28 Nov 2023 20:49:29 +0000
Message-ID: <20231128204938.1453583-8-pasha.tatashin@soleen.com>

Convert iommu/exynos-iommu.c to use the new page allocation functions
provided in iommu-pages.h.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 drivers/iommu/exynos-iommu.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
index 2c6e9094f1e9..3eab0ae65a4f 100644
--- a/drivers/iommu/exynos-iommu.c
+++ b/drivers/iommu/exynos-iommu.c
@@ -22,6 +22,8 @@
 #include
 #include

+#include "iommu-pages.h"
+
 typedef u32 sysmmu_iova_t;
 typedef u32 sysmmu_pte_t;
 static struct iommu_domain exynos_identity_domain;
@@ -900,11 +902,11 @@ static struct iommu_domain *exynos_iommu_domain_alloc_paging(struct device *dev)
         if (!domain)
                 return NULL;

-        domain->pgtable = (sysmmu_pte_t *)__get_free_pages(GFP_KERNEL, 2);
+        domain->pgtable = iommu_alloc_pages(GFP_KERNEL, 2);
         if (!domain->pgtable)
                 goto err_pgtable;

-        domain->lv2entcnt = (short *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 1);
+        domain->lv2entcnt = iommu_alloc_pages(GFP_KERNEL, 1);
         if (!domain->lv2entcnt)
                 goto err_counter;

@@ -930,9 +932,9 @@ static struct iommu_domain *exynos_iommu_domain_alloc_paging(struct device *dev)
         return &domain->domain;

 err_lv2ent:
-        free_pages((unsigned long)domain->lv2entcnt, 1);
+        iommu_free_pages(domain->lv2entcnt, 1);
 err_counter:
-        free_pages((unsigned long)domain->pgtable, 2);
+        iommu_free_pages(domain->pgtable, 2);
 err_pgtable:
         kfree(domain);
         return NULL;
@@ -973,8 +975,8 @@ static void exynos_iommu_domain_free(struct iommu_domain *iommu_domain)
                                 phys_to_virt(base));
         }

-        free_pages((unsigned long)domain->pgtable, 2);
-        free_pages((unsigned long)domain->lv2entcnt, 1);
+        iommu_free_pages(domain->pgtable, 2);
+        iommu_free_pages(domain->lv2entcnt, 1);
         kfree(domain);
 }

-- 
2.43.0.rc2.451.g8631bc7472-goog
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH 08/16] iommu/fsl: use page allocation function provided by iommu-pages.h
Date: Tue, 28 Nov 2023 20:49:30 +0000
Message-ID: <20231128204938.1453583-9-pasha.tatashin@soleen.com>

Convert iommu/fsl_pamu.c to use the new page allocation functions
provided in iommu-pages.h.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 drivers/iommu/fsl_pamu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/fsl_pamu.c b/drivers/iommu/fsl_pamu.c
index f37d3b044131..7bfb49940f0c 100644
--- a/drivers/iommu/fsl_pamu.c
+++ b/drivers/iommu/fsl_pamu.c
@@ -16,6 +16,7 @@
 #include

 #include
+#include "iommu-pages.h"

 /* define indexes for each operation mapping scenario */
 #define OMI_QMAN 0x00
@@ -828,7 +829,7 @@ static int fsl_pamu_probe(struct platform_device *pdev)
                    (PAGE_SIZE << get_order(OMT_SIZE));
         order = get_order(mem_size);

-        p = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
+        p = __iommu_alloc_pages(GFP_KERNEL, order);
         if (!p) {
                 dev_err(dev, "unable to allocate PAACT/SPAACT/OMT block\n");
                 ret = -ENOMEM;
@@ -916,7 +917,7 @@ static int fsl_pamu_probe(struct platform_device *pdev)
                 iounmap(guts_regs);

         if (ppaact)
-                free_pages((unsigned long)ppaact, order);
+                iommu_free_pages(ppaact, order);

         ppaact = NULL;

-- 
2.43.0.rc2.451.g8631bc7472-goog
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH 09/16] iommu/iommufd: use page allocation function provided by iommu-pages.h
Date: Tue, 28 Nov 2023 20:49:31 +0000
Message-ID: <20231128204938.1453583-10-pasha.tatashin@soleen.com>

Convert iommu/iommufd/* files to use the new page allocation functions
provided in iommu-pages.h.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 drivers/iommu/iommufd/iova_bitmap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/iommufd/iova_bitmap.c b/drivers/iommu/iommufd/iova_bitmap.c
index 0a92c9eeaf7f..6b8575b93f17 100644
--- a/drivers/iommu/iommufd/iova_bitmap.c
+++ b/drivers/iommu/iommufd/iova_bitmap.c
@@ -253,7 +253,7 @@ struct iova_bitmap *iova_bitmap_alloc(unsigned long iova, size_t length,
         bitmap->iova = iova;
         bitmap->length = length;
         mapped->iova = iova;
-        mapped->pages = (struct page **)__get_free_page(GFP_KERNEL);
+        mapped->pages = iommu_alloc_page(GFP_KERNEL);
         if (!mapped->pages) {
                 rc = -ENOMEM;
                 goto err;
@@ -284,7 +284,7 @@ void iova_bitmap_free(struct iova_bitmap *bitmap)
         iova_bitmap_put(bitmap);

         if (mapped->pages) {
-                free_page((unsigned long)mapped->pages);
+                iommu_free_page(mapped->pages);
                 mapped->pages = NULL;
         }

-- 
2.43.0.rc2.451.g8631bc7472-goog
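[Editorial aside: the iova bitmap needs exactly one zeroed page to hold
its array of page pointers, so it uses the single-page helpers. Their
definitions are not shown anywhere in this part of the series; judging
from the call sites above, they are presumably just order-0 shorthands,
along the lines of this hypothetical sketch, which iommu-pages.h may
spell differently.]

/* Hypothetical shapes, inferred from the call sites above. */
static inline void *iommu_alloc_page(gfp_t gfp)
{
        return iommu_alloc_pages(gfp, 0);       /* one zeroed page */
}

static inline void iommu_free_page(void *virt)
{
        iommu_free_pages(virt, 0);
}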
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH 10/16] iommu/rockchip: use page allocation function provided by iommu-pages.h
Date: Tue, 28 Nov 2023 20:49:32 +0000
Message-ID: <20231128204938.1453583-11-pasha.tatashin@soleen.com>

Convert iommu/rockchip-iommu.c to use the new page allocation functions
provided in iommu-pages.h.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 drivers/iommu/iommufd/iova_bitmap.c |  2 ++
 drivers/iommu/rockchip-iommu.c      | 14 ++++++++------
 2 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/drivers/iommu/iommufd/iova_bitmap.c b/drivers/iommu/iommufd/iova_bitmap.c
index 6b8575b93f17..4d5d1be807fe 100644
--- a/drivers/iommu/iommufd/iova_bitmap.c
+++ b/drivers/iommu/iommufd/iova_bitmap.c
@@ -8,6 +8,8 @@
 #include
 #include

+#include "../iommu-pages.h"
+
 #define BITS_PER_PAGE (PAGE_SIZE * BITS_PER_BYTE)

 /*
diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
index 2685861c0a12..e04f22d481d0 100644
--- a/drivers/iommu/rockchip-iommu.c
+++ b/drivers/iommu/rockchip-iommu.c
@@ -26,6 +26,8 @@
 #include
 #include

+#include "iommu-pages.h"
+
 /** MMU register offsets */
 #define RK_MMU_DTE_ADDR 0x00 /* Directory table address */
 #define RK_MMU_STATUS 0x04
@@ -727,14 +729,14 @@ static u32 *rk_dte_get_page_table(struct rk_iommu_domain *rk_domain,
         if (rk_dte_is_pt_valid(dte))
                 goto done;

-        page_table = (u32 *)get_zeroed_page(GFP_ATOMIC | rk_ops->gfp_flags);
+        page_table = iommu_alloc_page(GFP_ATOMIC | rk_ops->gfp_flags);
         if (!page_table)
                 return ERR_PTR(-ENOMEM);

         pt_dma = dma_map_single(dma_dev, page_table, SPAGE_SIZE, DMA_TO_DEVICE);
         if (dma_mapping_error(dma_dev, pt_dma)) {
                 dev_err(dma_dev, "DMA mapping error while allocating page table\n");
-                free_page((unsigned long)page_table);
+                iommu_free_page(page_table);
                 return ERR_PTR(-ENOMEM);
         }

@@ -1061,7 +1063,7 @@ static struct iommu_domain *rk_iommu_domain_alloc_paging(struct device *dev)
          * Each level1 (dt) and level2 (pt) table has 1024 4-byte entries.
          * Allocate one 4 KiB page for each table.
          */
-        rk_domain->dt = (u32 *)get_zeroed_page(GFP_KERNEL | rk_ops->gfp_flags);
+        rk_domain->dt = iommu_alloc_page(GFP_KERNEL | rk_ops->gfp_flags);
         if (!rk_domain->dt)
                 goto err_free_domain;

@@ -1083,7 +1085,7 @@ static struct iommu_domain *rk_iommu_domain_alloc_paging(struct device *dev)
         return &rk_domain->domain;

 err_free_dt:
-        free_page((unsigned long)rk_domain->dt);
+        iommu_free_page(rk_domain->dt);
 err_free_domain:
         kfree(rk_domain);

@@ -1104,13 +1106,13 @@ static void rk_iommu_domain_free(struct iommu_domain *domain)
                         u32 *page_table = phys_to_virt(pt_phys);
                         dma_unmap_single(dma_dev, pt_phys, SPAGE_SIZE,
                                          DMA_TO_DEVICE);
-                        free_page((unsigned long)page_table);
+                        iommu_free_page(page_table);
                 }
         }

         dma_unmap_single(dma_dev, rk_domain->dt_dma, SPAGE_SIZE,
                          DMA_TO_DEVICE);
-        free_page((unsigned long)rk_domain->dt);
+        iommu_free_page(rk_domain->dt);

         kfree(rk_domain);
 }
-- 
2.43.0.rc2.451.g8631bc7472-goog
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH 11/16] iommu/sun50i: use page allocation function provided by iommu-pages.h
Date: Tue, 28 Nov 2023 20:49:33 +0000
Message-ID: <20231128204938.1453583-12-pasha.tatashin@soleen.com>

Convert iommu/sun50i-iommu.c to use the new page allocation functions
provided in iommu-pages.h.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 drivers/iommu/sun50i-iommu.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/sun50i-iommu.c b/drivers/iommu/sun50i-iommu.c
index 41484a5a399b..172ddb717eb5 100644
--- a/drivers/iommu/sun50i-iommu.c
+++ b/drivers/iommu/sun50i-iommu.c
@@ -26,6 +26,8 @@
 #include
 #include

+#include "iommu-pages.h"
+
 #define IOMMU_RESET_REG 0x010
 #define IOMMU_RESET_RELEASE_ALL 0xffffffff
 #define IOMMU_ENABLE_REG 0x020
@@ -679,8 +681,7 @@ sun50i_iommu_domain_alloc_paging(struct device *dev)
         if (!sun50i_domain)
                 return NULL;

-        sun50i_domain->dt = (u32 *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
-                                                    get_order(DT_SIZE));
+        sun50i_domain->dt = iommu_alloc_pages(GFP_KERNEL, get_order(DT_SIZE));
         if (!sun50i_domain->dt)
                 goto err_free_domain;

@@ -702,7 +703,7 @@ static void sun50i_iommu_domain_free(struct iommu_domain *domain)
 {
         struct sun50i_iommu_domain *sun50i_domain = to_sun50i_domain(domain);

-        free_pages((unsigned long)sun50i_domain->dt, get_order(DT_SIZE));
+        iommu_free_pages(sun50i_domain->dt, get_order(DT_SIZE));
         sun50i_domain->dt = NULL;

         kfree(sun50i_domain);
-- 
2.43.0.rc2.451.g8631bc7472-goog
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH 12/16] iommu/tegra-smmu: use page allocation function provided by iommu-pages.h
Date: Tue, 28 Nov 2023 20:49:34 +0000
Message-ID: <20231128204938.1453583-13-pasha.tatashin@soleen.com>

Convert iommu/tegra-smmu.c to use the new page allocation functions
provided in iommu-pages.h.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 drivers/iommu/tegra-smmu.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index 310871728ab4..5e0730dc1b0e 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -19,6 +19,8 @@
 #include
 #include

+#include "iommu-pages.h"
+
 struct tegra_smmu_group {
         struct list_head list;
         struct tegra_smmu *smmu;
@@ -282,7 +284,7 @@ static struct iommu_domain *tegra_smmu_domain_alloc_paging(struct device *dev)

         as->attr = SMMU_PD_READABLE | SMMU_PD_WRITABLE | SMMU_PD_NONSECURE;

-        as->pd = alloc_page(GFP_KERNEL | __GFP_DMA | __GFP_ZERO);
+        as->pd = __iommu_alloc_page(GFP_KERNEL | __GFP_DMA);
         if (!as->pd) {
                 kfree(as);
                 return NULL;
@@ -290,7 +292,7 @@ static struct iommu_domain *tegra_smmu_domain_alloc_paging(struct device *dev)

         as->count = kcalloc(SMMU_NUM_PDE, sizeof(u32), GFP_KERNEL);
         if (!as->count) {
-                __free_page(as->pd);
+                __iommu_free_page(as->pd);
                 kfree(as);
                 return NULL;
         }
@@ -298,7 +300,7 @@ static struct iommu_domain *tegra_smmu_domain_alloc_paging(struct device *dev)
         as->pts = kcalloc(SMMU_NUM_PDE, sizeof(*as->pts), GFP_KERNEL);
         if (!as->pts) {
                 kfree(as->count);
-                __free_page(as->pd);
+                __iommu_free_page(as->pd);
                 kfree(as);
                 return NULL;
         }
@@ -599,14 +601,14 @@ static u32 *as_get_pte(struct tegra_smmu_as *as, dma_addr_t iova,
                 dma = dma_map_page(smmu->dev, page, 0, SMMU_SIZE_PT,
                                    DMA_TO_DEVICE);
                 if (dma_mapping_error(smmu->dev, dma)) {
-                        __free_page(page);
+                        __iommu_free_page(page);
                         return NULL;
                 }

                 if (!smmu_dma_addr_valid(smmu, dma)) {
                         dma_unmap_page(smmu->dev, dma, SMMU_SIZE_PT,
                                        DMA_TO_DEVICE);
-                        __free_page(page);
+                        __iommu_free_page(page);
                         return NULL;
                 }

@@ -649,7 +651,7 @@ static void tegra_smmu_pte_put_use(struct tegra_smmu_as *as, unsigned long iova)
                 tegra_smmu_set_pde(as, iova, 0);

                 dma_unmap_page(smmu->dev, pte_dma, SMMU_SIZE_PT, DMA_TO_DEVICE);
-                __free_page(page);
+                __iommu_free_page(page);
                 as->pts[pde] = NULL;
         }
 }
@@ -688,7 +690,7 @@ static struct page *as_get_pde_page(struct tegra_smmu_as *as,
         if (gfpflags_allow_blocking(gfp))
                 spin_unlock_irqrestore(&as->lock, *flags);

-        page = alloc_page(gfp | __GFP_DMA | __GFP_ZERO);
+        page = __iommu_alloc_page(gfp | __GFP_DMA);

         if (gfpflags_allow_blocking(gfp))
                 spin_lock_irqsave(&as->lock, *flags);
@@ -700,7 +702,7 @@ static struct page *as_get_pde_page(struct tegra_smmu_as *as,
          */
         if (as->pts[pde]) {
                 if (page)
-                        __free_page(page);
+                        __iommu_free_page(page);

                 page = as->pts[pde];
         }
-- 
2.43.0.rc2.451.g8631bc7472-goog
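[Editorial aside: tegra-smmu highlights the split between the two helper
families. It stores struct page pointers in as->pd and as->pts[] and
feeds them to dma_map_page(), so it uses the __-prefixed variants that
traffic in struct page * rather than the plain ones that return a
virtual address. Given that __iommu_alloc_pages() returns struct page *
(see the patch 13 hunks later in the series), the single-page variants
are presumably the following order-0 shorthands; this is inferred from
the call sites, not copied from iommu-pages.h.]

/* Presumed order-0 shorthands for the struct page * family. */
static inline struct page *__iommu_alloc_page(gfp_t gfp)
{
        return __iommu_alloc_pages(gfp, 0);
}

static inline void __iommu_free_page(struct page *page)
{
        __iommu_free_pages(page, 0);
}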
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH 13/16] iommu: observability of the IOMMU allocations
Date: Tue, 28 Nov 2023 20:49:35 +0000
Message-ID: <20231128204938.1453583-14-pasha.tatashin@soleen.com>

Add NR_IOMMU_PAGES to node_stat_item; it counts the number of pages
allocated by the IOMMU subsystem. The allocations can be viewed per
node via:
/sys/devices/system/node/nodeN/vmstat

For example:

$ grep iommu /sys/devices/system/node/node*/vmstat
/sys/devices/system/node/node0/vmstat:nr_iommu_pages 106025
/sys/devices/system/node/node1/vmstat:nr_iommu_pages 3464

The value is a page count, so in the above example the IOMMU
allocations amount to ~428M.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 drivers/iommu/iommu-pages.h | 30 ++++++++++++++++++++++++++++++
 include/linux/mmzone.h      |  3 +++
 mm/vmstat.c                 |  3 +++
 3 files changed, 36 insertions(+)

diff --git a/drivers/iommu/iommu-pages.h b/drivers/iommu/iommu-pages.h
index 2332f807d514..69895a355c0c 100644
--- a/drivers/iommu/iommu-pages.h
+++ b/drivers/iommu/iommu-pages.h
@@ -17,6 +17,30 @@
  * state can be rather large, i.e. multiple gigabytes in size.
  */

+/**
+ * __iommu_alloc_account - account for newly allocated page.
+ * @pages: head struct page of the page.
+ * @order: order of the page
+ */
+static inline void __iommu_alloc_account(struct page *pages, int order)
+{
+        const long pgcnt = 1l << order;
+
+        mod_node_page_state(page_pgdat(pages), NR_IOMMU_PAGES, pgcnt);
+}
+
+/**
+ * __iommu_free_account - account a page that is about to be freed.
+ * @pages: head struct page of the page.
+ * @order: order of the page
+ */
+static inline void __iommu_free_account(struct page *pages, int order)
+{
+	const long pgcnt = 1l << order;
+
+	mod_node_page_state(page_pgdat(pages), NR_IOMMU_PAGES, -pgcnt);
+}
+
 /**
  * __iommu_alloc_pages_node - allocate a zeroed page of a given order from
  * specific NUMA node.
@@ -35,6 +59,8 @@ static inline struct page *__iommu_alloc_pages_node(int nid, gfp_t gfp,
 	if (!pages)
 		return NULL;
 
+	__iommu_alloc_account(pages, order);
+
 	return pages;
 }
 
@@ -53,6 +79,8 @@ static inline struct page *__iommu_alloc_pages(gfp_t gfp, int order)
 	if (!pages)
 		return NULL;
 
+	__iommu_alloc_account(pages, order);
+
 	return pages;
 }
 
@@ -89,6 +117,7 @@ static inline void __iommu_free_pages(struct page *pages, int order)
 	if (!pages)
 		return;
 
+	__iommu_free_account(pages, order);
 	__free_pages(pages, order);
 }
 
@@ -192,6 +221,7 @@ static inline void iommu_free_pages_list(struct list_head *pages)
 		struct page *p = list_entry(pages->prev, struct page, lru);
 
 		list_del(&p->lru);
+		__iommu_free_account(p, 0);
 		put_page(p);
 	}
 }
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 3c25226beeed..1a4d0bba3e8b 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -200,6 +200,9 @@ enum node_stat_item {
 #endif
 	NR_PAGETABLE,		/* used for pagetables */
 	NR_SECONDARY_PAGETABLE, /* secondary pagetables, e.g. KVM pagetables */
+#ifdef CONFIG_IOMMU_SUPPORT
+	NR_IOMMU_PAGES,		/* # of pages allocated by IOMMU */
+#endif
 #ifdef CONFIG_SWAP
 	NR_SWAPCACHE,
 #endif
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 359460deb377..801b58890b6c 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1242,6 +1242,9 @@ const char * const vmstat_text[] = {
 #endif
 	"nr_page_table_pages",
 	"nr_sec_page_table_pages",
+#ifdef CONFIG_IOMMU_SUPPORT
+	"nr_iommu_pages",
+#endif
#ifdef CONFIG_SWAP
 	"nr_swapcached",
 #endif
-- 
2.43.0.rc2.451.g8631bc7472-goog
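[Editor's note: the per-node counter introduced above can be aggregated
from userspace. The following is a minimal sketch, not part of the
series, assuming only the /sys/devices/system/node/nodeN/vmstat layout
shown in the commit message; it sums nr_iommu_pages across nodes and
converts the page count to bytes.]

/* Sum nr_iommu_pages over all NUMA nodes and report bytes. */
#include <glob.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	glob_t g;
	long long total_pages = 0;

	if (glob("/sys/devices/system/node/node*/vmstat", 0, NULL, &g))
		return 1;

	for (size_t i = 0; i < g.gl_pathc; i++) {
		FILE *f = fopen(g.gl_pathv[i], "r");
		char name[64];
		long long val;

		if (!f)
			continue;
		/* vmstat is "name value" pairs, one per line. */
		while (fscanf(f, "%63s %lld", name, &val) == 2) {
			if (!strcmp(name, "nr_iommu_pages"))
				total_pages += val;
		}
		fclose(f);
	}
	globfree(&g);

	printf("iommu: %lld pages, %lld bytes\n",
	       total_pages, total_pages * sysconf(_SC_PAGESIZE));
	return 0;
}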
From nobody Wed Dec 17 09:20:03 2025

From: Pasha Tatashin
Subject: [PATCH 14/16] iommu: account IOMMU allocated memory
Date: Tue, 28 Nov 2023 20:49:36 +0000
Message-ID: <20231128204938.1453583-15-pasha.tatashin@soleen.com>
In-Reply-To: <20231128204938.1453583-1-pasha.tatashin@soleen.com>
References: <20231128204938.1453583-1-pasha.tatashin@soleen.com>

In order to be able to limit the amount of memory that is allocated
by the IOMMU subsystem, that memory must be accounted. Account IOMMU
allocations as part of the secondary page tables, as was discussed at
LPC. The value of SecPageTables now includes memory allocated by both
KVM and the IOMMU.
Signed-off-by: Pasha Tatashin
---
 Documentation/admin-guide/cgroup-v2.rst | 2 +-
 Documentation/filesystems/proc.rst      | 4 ++--
 drivers/iommu/iommu-pages.h             | 2 ++
 include/linux/mmzone.h                  | 2 +-
 4 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 3f85254f3cef..e004e05a7cde 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1418,7 +1418,7 @@ PAGE_SIZE multiple when read back.
 	  sec_pagetables
 		Amount of memory allocated for secondary page tables,
 		this currently includes KVM mmu allocations on x86
-		and arm64.
+		and arm64 and IOMMU page tables.
 
 	  percpu (npn)
 		Amount of memory used for storing per-cpu kernel
diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
index 49ef12df631b..86f137a9b66b 100644
--- a/Documentation/filesystems/proc.rst
+++ b/Documentation/filesystems/proc.rst
@@ -1110,8 +1110,8 @@ KernelStack
 PageTables
               Memory consumed by userspace page tables
 SecPageTables
-              Memory consumed by secondary page tables, this currently
-              currently includes KVM mmu allocations on x86 and arm64.
+              Memory consumed by secondary page tables, this currently includes
+              KVM mmu and IOMMU allocations on x86 and arm64.
 NFS_Unstable
               Always zero. Previous counted pages which had been written
               to the server, but has not been committed to stable storage.
diff --git a/drivers/iommu/iommu-pages.h b/drivers/iommu/iommu-pages.h
index 69895a355c0c..cdd257585284 100644
--- a/drivers/iommu/iommu-pages.h
+++ b/drivers/iommu/iommu-pages.h
@@ -27,6 +27,7 @@ static inline void __iommu_alloc_account(struct page *pages, int order)
 	const long pgcnt = 1l << order;
 
 	mod_node_page_state(page_pgdat(pages), NR_IOMMU_PAGES, pgcnt);
+	mod_lruvec_page_state(pages, NR_SECONDARY_PAGETABLE, pgcnt);
 }
 
 /**
@@ -39,6 +40,7 @@ static inline void __iommu_free_account(struct page *pages, int order)
 	const long pgcnt = 1l << order;
 
 	mod_node_page_state(page_pgdat(pages), NR_IOMMU_PAGES, -pgcnt);
+	mod_lruvec_page_state(pages, NR_SECONDARY_PAGETABLE, -pgcnt);
 }
 
 /**
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 1a4d0bba3e8b..aaabb385663c 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -199,7 +199,7 @@ enum node_stat_item {
 	NR_KERNEL_SCS_KB,	/* measured in KiB */
 #endif
 	NR_PAGETABLE,		/* used for pagetables */
-	NR_SECONDARY_PAGETABLE, /* secondary pagetables, e.g. KVM pagetables */
+	NR_SECONDARY_PAGETABLE, /* secondary pagetables, KVM & IOMMU */
 #ifdef CONFIG_IOMMU_SUPPORT
 	NR_IOMMU_PAGES,		/* # of pages allocated by IOMMU */
 #endif
-- 
2.43.0.rc2.451.g8631bc7472-goog
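[Editor's note: after this patch, IOMMU page-table pages are included
in the SecPageTables line of /proc/meminfo and in the sec_pagetables
field of a cgroup's memory.stat. A minimal sketch, not part of the
series, that surfaces the system-wide value:]

/* Print the SecPageTables line from /proc/meminfo, e.g. to watch the
 * effect of creating and destroying IOMMU domains. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "SecPageTables:", 14))
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}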
From nobody Wed Dec 17 09:20:03 2025

From: Pasha Tatashin
Subject: [PATCH 15/16] vhost-vdpa: account iommu allocations
Date: Tue, 28 Nov 2023 20:49:37 +0000
Message-ID: <20231128204938.1453583-16-pasha.tatashin@soleen.com>
In-Reply-To: <20231128204938.1453583-1-pasha.tatashin@soleen.com>
References: <20231128204938.1453583-1-pasha.tatashin@soleen.com>

IOMMU allocations should be accounted in order to allow admins to
monitor and limit the amount of IOMMU memory.

Signed-off-by: Pasha Tatashin
Acked-by: Michael S. Tsirkin
---
 drivers/vhost/vdpa.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index da7ec77cdaff..a51c69c078d9 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -968,7 +968,8 @@ static int vhost_vdpa_map(struct vhost_vdpa *v, struct vhost_iotlb *iotlb,
 		r = ops->set_map(vdpa, asid, iotlb);
 	} else {
 		r = iommu_map(v->domain, iova, pa, size,
-			      perm_to_iommu_flags(perm), GFP_KERNEL);
+			      perm_to_iommu_flags(perm),
+			      GFP_KERNEL_ACCOUNT);
 	}
 	if (r) {
 		vhost_iotlb_del_range(iotlb, iova, iova + size - 1);
-- 
2.43.0.rc2.451.g8631bc7472-goog
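[Editor's note: for readers unfamiliar with the flag, GFP_KERNEL_ACCOUNT
differs from GFP_KERNEL only in that __GFP_ACCOUNT is set, which charges
the allocation to the calling task's memory cgroup; that is what makes
the vhost-vdpa (and, in the next patch, vfio) mappings limitable.
Condensed from include/linux/gfp_types.h of the same era:]

/* GFP_KERNEL_ACCOUNT is GFP_KERNEL plus memcg accounting. */
#define GFP_KERNEL	   (__GFP_RECLAIM | __GFP_IO | __GFP_FS)
#define GFP_KERNEL_ACCOUNT (GFP_KERNEL | __GFP_ACCOUNT)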
From nobody Wed Dec 17 09:20:03 2025

From: Pasha Tatashin
Subject: [PATCH 16/16] vfio: account iommu allocations
Date: Tue, 28 Nov 2023 20:49:38 +0000
Message-ID: <20231128204938.1453583-17-pasha.tatashin@soleen.com>
In-Reply-To: <20231128204938.1453583-1-pasha.tatashin@soleen.com>
References: <20231128204938.1453583-1-pasha.tatashin@soleen.com>

IOMMU allocations should be accounted in order to allow admins to
monitor and limit the amount of IOMMU memory.

Signed-off-by: Pasha Tatashin
---
 drivers/vfio/vfio_iommu_type1.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index eacd6ec04de5..b2854d7939ce 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1436,7 +1436,7 @@ static int vfio_iommu_map(struct vfio_iommu *iommu, dma_addr_t iova,
 	list_for_each_entry(d, &iommu->domain_list, next) {
 		ret = iommu_map(d->domain, iova, (phys_addr_t)pfn << PAGE_SHIFT,
 				npage << PAGE_SHIFT, prot | IOMMU_CACHE,
-				GFP_KERNEL);
+				GFP_KERNEL_ACCOUNT);
 		if (ret)
 			goto unwind;
 
@@ -1750,7 +1750,8 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
 		}
 
 		ret = iommu_map(domain->domain, iova, phys, size,
-				dma->prot | IOMMU_CACHE, GFP_KERNEL);
+				dma->prot | IOMMU_CACHE,
+				GFP_KERNEL_ACCOUNT);
 		if (ret) {
 			if (!dma->iommu_mapped) {
 				vfio_unpin_pages_remote(dma, iova,
@@ -1845,7 +1846,8 @@ static void vfio_test_domain_fgsp(struct vfio_domain *domain, struct list_head *
 		continue;
 
 		ret = iommu_map(domain->domain, start, page_to_phys(pages), PAGE_SIZE * 2,
-				IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE, GFP_KERNEL);
+				IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE,
+				GFP_KERNEL_ACCOUNT);
 		if (!ret) {
 			size_t unmapped = iommu_unmap(domain->domain, start, PAGE_SIZE);
 
-- 
2.43.0.rc2.451.g8631bc7472-goog
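[Editor's note: combined with patch 14, the accounting added here lets
an administrator observe and cap a workload's IOMMU memory per cgroup.
A minimal sketch, not part of the series; the "vm1" cgroup name is
hypothetical, and cgroup v2 is assumed to be mounted at /sys/fs/cgroup.]

/* Report the sec_pagetables charge (in bytes) of one cgroup. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/sys/fs/cgroup/vm1/memory.stat", "r");
	char key[64];
	unsigned long long bytes;

	if (!f)
		return 1;
	/* memory.stat is "key value" pairs; values are in bytes. */
	while (fscanf(f, "%63s %llu", key, &bytes) == 2) {
		if (!strcmp(key, "sec_pagetables"))
			printf("sec_pagetables: %llu bytes\n", bytes);
	}
	fclose(f);
	return 0;
}

Since the pages are charged via __GFP_ACCOUNT, capping works through the
cgroup's ordinary memory.max limit; no IOMMU-specific knob is needed.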