From: Lu Baolu
To: Joerg Roedel, Will Deacon, Robin Murphy, Kevin Tian, Jason Gunthorpe,
    Jann Horn, Vasant Hegde, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen, Alistair Popple, Peter Zijlstra, Uladzislau Rezki,
    Jean-Philippe Brucker, Andy Lutomirski, Yi Lai, David Hildenbrand,
    Lorenzo Stoakes, Liam R. Howlett, Andrew Morton, Vlastimil Babka,
    Mike Rapoport, Michal Hocko, Matthew Wilcox, Vinicius Costa Gomes
Cc: iommu@lists.linux.dev, security@kernel.org, x86@kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Dave Hansen, Lu Baolu
Subject: [PATCH v7 7/8] mm: Introduce deferred freeing for kernel page tables
Date: Wed, 22 Oct 2025 16:26:33 +0800
Message-ID: <20251022082635.2462433-8-baolu.lu@linux.intel.com>
In-Reply-To: <20251022082635.2462433-1-baolu.lu@linux.intel.com>
References: <20251022082635.2462433-1-baolu.lu@linux.intel.com>

From: Dave Hansen

This introduces a conditional asynchronous mechanism, enabled by
CONFIG_ASYNC_KERNEL_PGTABLE_FREE. When enabled, it defers the freeing of
pages that are used as page tables for kernel address mappings: such pages
are queued to a work struct instead of being freed immediately.

This deferred freeing allows page tables to be freed in batches, which
provides a safe context for performing a single expensive operation (a TLB
flush) for a whole batch of kernel page tables, instead of performing that
expensive operation once per page table.

Signed-off-by: Dave Hansen
Signed-off-by: Lu Baolu
Reviewed-by: Jason Gunthorpe
Reviewed-by: Kevin Tian
Acked-by: David Hildenbrand
Acked-by: Mike Rapoport (Microsoft)
---
 mm/Kconfig           |  3 +++
 include/linux/mm.h   | 16 +++++++++++++---
 mm/pgtable-generic.c | 37 +++++++++++++++++++++++++++++++++++++
 3 files changed, 53 insertions(+), 3 deletions(-)
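
[ Reviewer note, illustration only -- not part of this patch. The work
  function added below is where a follow-up change (presumably a later
  patch in this series) can issue one batched flush before freeing the
  whole batch; the flush_kernel_pgtable_batch() helper in this sketch is
  hypothetical:

static void kernel_pgtable_work_func(struct work_struct *work)
{
	struct ptdesc *pt, *next;
	LIST_HEAD(page_list);

	/* Grab the whole pending batch in one go. */
	spin_lock(&kernel_pgtable_work.lock);
	list_splice_tail_init(&kernel_pgtable_work.list, &page_list);
	spin_unlock(&kernel_pgtable_work.lock);

	/* Hypothetical: one flush covers every queued page table. */
	flush_kernel_pgtable_batch();

	list_for_each_entry_safe(pt, next, &page_list, pt_list)
		__pagetable_free(pt);
}
]
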
diff --git a/mm/Kconfig b/mm/Kconfig
index 0e26f4fc8717..a83df9934acd 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -908,6 +908,9 @@ config PAGE_MAPCOUNT
 config PGTABLE_HAS_HUGE_LEAVES
 	def_bool TRANSPARENT_HUGEPAGE || HUGETLB_PAGE
 
+config ASYNC_KERNEL_PGTABLE_FREE
+	def_bool n
+
 # TODO: Allow to be enabled without THP
 config ARCH_SUPPORTS_HUGE_PFNMAP
 	def_bool n
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 52ae551d0eb4..d521abd33164 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3031,6 +3031,14 @@ static inline void __pagetable_free(struct ptdesc *pt)
 	__free_pages(page, compound_order(page));
 }
 
+#ifdef CONFIG_ASYNC_KERNEL_PGTABLE_FREE
+void pagetable_free_kernel(struct ptdesc *pt);
+#else
+static inline void pagetable_free_kernel(struct ptdesc *pt)
+{
+	__pagetable_free(pt);
+}
+#endif
 /**
  * pagetable_free - Free pagetables
  * @pt: The page table descriptor
@@ -3040,10 +3048,12 @@ static inline void __pagetable_free(struct ptdesc *pt)
  */
 static inline void pagetable_free(struct ptdesc *pt)
 {
-	if (ptdesc_test_kernel(pt))
+	if (ptdesc_test_kernel(pt)) {
 		ptdesc_clear_kernel(pt);
-
-	__pagetable_free(pt);
+		pagetable_free_kernel(pt);
+	} else {
+		__pagetable_free(pt);
+	}
 }
 
 #if defined(CONFIG_SPLIT_PTE_PTLOCKS)
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 567e2d084071..1c7caa8ef164 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -406,3 +406,40 @@ pte_t *__pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd,
 	pte_unmap_unlock(pte, ptl);
 	goto again;
 }
+
+#ifdef CONFIG_ASYNC_KERNEL_PGTABLE_FREE
+static void kernel_pgtable_work_func(struct work_struct *work);
+
+static struct {
+	struct list_head list;
+	/* protect above ptdesc lists */
+	spinlock_t lock;
+	struct work_struct work;
+} kernel_pgtable_work = {
+	.list = LIST_HEAD_INIT(kernel_pgtable_work.list),
+	.lock = __SPIN_LOCK_UNLOCKED(kernel_pgtable_work.lock),
+	.work = __WORK_INITIALIZER(kernel_pgtable_work.work, kernel_pgtable_work_func),
+};
+
+static void kernel_pgtable_work_func(struct work_struct *work)
+{
+	struct ptdesc *pt, *next;
+	LIST_HEAD(page_list);
+
+	spin_lock(&kernel_pgtable_work.lock);
+	list_splice_tail_init(&kernel_pgtable_work.list, &page_list);
+	spin_unlock(&kernel_pgtable_work.lock);
+
+	list_for_each_entry_safe(pt, next, &page_list, pt_list)
+		__pagetable_free(pt);
+}
+
+void pagetable_free_kernel(struct ptdesc *pt)
+{
+	spin_lock(&kernel_pgtable_work.lock);
+	list_add(&pt->pt_list, &kernel_pgtable_work.list);
+	spin_unlock(&kernel_pgtable_work.lock);
+
+	schedule_work(&kernel_pgtable_work.work);
+}
+#endif
-- 
2.43.0
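
[ Appendix, illustration only -- not part of this patch. Since
  ASYNC_KERNEL_PGTABLE_FREE is default-off (def_bool n), the deferred path
  only takes effect once some other config option selects it. A sketch of
  the resulting lifecycle, assuming the ptdesc "kernel" marker setter
  (ptdesc_set_kernel(), presumably introduced by an earlier patch in this
  series, alongside the test/clear helpers used above):

/* Illustrative sketch; ptdesc_set_kernel() is an assumed helper. */
static void example_kernel_pgtable_lifecycle(struct ptdesc *pt)
{
	/* At allocation: mark the table as backing a kernel mapping. */
	ptdesc_set_kernel(pt);

	/*
	 * At teardown: pagetable_free() sees the marker, clears it, and
	 * calls pagetable_free_kernel(), which queues the page on
	 * kernel_pgtable_work.list and schedules the worker. Without
	 * CONFIG_ASYNC_KERNEL_PGTABLE_FREE, pagetable_free_kernel()
	 * falls back to an immediate __pagetable_free().
	 */
	pagetable_free(pt);
}
]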