From: "Roy, Patrick"
To: "david@redhat.com", "seanjc@google.com"
CC: Elliot Berman, "tabba@google.com", "ackerleytng@google.com",
	"pbonzini@redhat.com", "kvm@vger.kernel.org",
	"linux-arm-kernel@lists.infradead.org", "kvmarm@lists.linux.dev",
	"linux-kernel@vger.kernel.org", "linux-mm@kvack.org",
	"rppt@kernel.org", "will@kernel.org", "vbabka@suse.cz",
	"Cali, Marco", "Kalyazin, Nikita", "Thomson, Jack",
	"Manwaring, Derek", "Roy, Patrick"
Subject: [PATCH v5 01/12] filemap: Pass address_space mapping to ->free_folio()
Date: Thu, 28 Aug 2025 09:39:15 +0000
Message-ID: <20250828093902.2719-2-roypat@amazon.co.uk>
References: <20250828093902.2719-1-roypat@amazon.co.uk>
In-Reply-To: <20250828093902.2719-1-roypat@amazon.co.uk>

From: Elliot Berman

When guest_memfd removes memory from the host kernel's direct map,
direct map entries must be restored before the memory is freed again.
For this, ->free_folio() needs to know whether a gmem folio was removed
from the direct map in the first place.

While it would be possible to track this information on each individual
folio (e.g. via page flags), direct map removal is an all-or-nothing
property of the entire guest_memfd, so it is less error-prone to simply
check the flag stored in the gmem inode's private data.

However, by the time ->free_folio() is called, folio->mapping might
already be cleared. To still allow access to the address space from
which the folio was just removed, pass it in as an additional argument
to ->free_folio(), as the mapping is already known to all callers.
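As a rough illustration (not part of this patch; the helpers below are
hypothetical), an implementation can now reach per-inode state through
the mapping argument even when folio->mapping has already been cleared:

	/*
	 * Hypothetical sketch only: gmem_no_direct_map() and
	 * gmem_restore_direct_map() stand in for whatever per-inode
	 * flag and restore helper an implementation actually uses.
	 */
	static void example_free_folio(struct address_space *mapping,
				       struct folio *folio)
	{
		/* folio->mapping may be NULL here; the passed-in mapping is not */
		if (gmem_no_direct_map(mapping->host))
			gmem_restore_direct_map(folio);
	}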
Link: https://lore.kernel.org/all/15f665b4-2d33-41ca-ac50-fafe24ade32f@redhat.com/
Suggested-by: David Hildenbrand
Acked-by: David Hildenbrand
Signed-off-by: Elliot Berman
[patrick: rewrite shortlog for new usecase]
Signed-off-by: Patrick Roy
---
 Documentation/filesystems/locking.rst |  2 +-
 fs/nfs/dir.c                          | 11 ++++++-----
 fs/orangefs/inode.c                   |  3 ++-
 include/linux/fs.h                    |  2 +-
 mm/filemap.c                          |  9 +++++----
 mm/secretmem.c                        |  3 ++-
 mm/vmscan.c                           |  4 ++--
 virt/kvm/guest_memfd.c                |  3 ++-
 8 files changed, 21 insertions(+), 16 deletions(-)

diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
index aa287ccdac2f..74c97287ec40 100644
--- a/Documentation/filesystems/locking.rst
+++ b/Documentation/filesystems/locking.rst
@@ -262,7 +262,7 @@ prototypes::
 	sector_t (*bmap)(struct address_space *, sector_t);
 	void (*invalidate_folio) (struct folio *, size_t start, size_t len);
 	bool (*release_folio)(struct folio *, gfp_t);
-	void (*free_folio)(struct folio *);
+	void (*free_folio)(struct address_space *, struct folio *);
 	int (*direct_IO)(struct kiocb *, struct iov_iter *iter);
 	int (*migrate_folio)(struct address_space *, struct folio *dst,
 			struct folio *src, enum migrate_mode);
diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
index d81217923936..644bd54e052c 100644
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -55,7 +55,7 @@ static int nfs_closedir(struct inode *, struct file *);
 static int nfs_readdir(struct file *, struct dir_context *);
 static int nfs_fsync_dir(struct file *, loff_t, loff_t, int);
 static loff_t nfs_llseek_dir(struct file *, loff_t, int);
-static void nfs_readdir_clear_array(struct folio *);
+static void nfs_readdir_clear_array(struct address_space *, struct folio *);
 static int nfs_do_create(struct inode *dir, struct dentry *dentry,
 			 umode_t mode, int open_flags);
 
@@ -218,7 +218,8 @@ static void nfs_readdir_folio_init_array(struct folio *folio, u64 last_cookie,
 /*
  * we are freeing strings created by nfs_add_to_readdir_array()
  */
-static void nfs_readdir_clear_array(struct folio *folio)
+static void nfs_readdir_clear_array(struct address_space *mapping,
+				    struct folio *folio)
 {
 	struct nfs_cache_array *array;
 	unsigned int i;
@@ -233,7 +234,7 @@ static void nfs_readdir_clear_array(struct folio *folio)
 static void nfs_readdir_folio_reinit_array(struct folio *folio, u64 last_cookie,
 					   u64 change_attr)
 {
-	nfs_readdir_clear_array(folio);
+	nfs_readdir_clear_array(folio->mapping, folio);
 	nfs_readdir_folio_init_array(folio, last_cookie, change_attr);
 }
 
@@ -249,7 +250,7 @@ nfs_readdir_folio_array_alloc(u64 last_cookie, gfp_t gfp_flags)
 static void nfs_readdir_folio_array_free(struct folio *folio)
 {
 	if (folio) {
-		nfs_readdir_clear_array(folio);
+		nfs_readdir_clear_array(folio->mapping, folio);
 		folio_put(folio);
 	}
 }
@@ -391,7 +392,7 @@ static void nfs_readdir_folio_init_and_validate(struct folio *folio, u64 cookie,
 	if (folio_test_uptodate(folio)) {
 		if (nfs_readdir_folio_validate(folio, cookie, change_attr))
 			return;
-		nfs_readdir_clear_array(folio);
+		nfs_readdir_clear_array(folio->mapping, folio);
 	}
 	nfs_readdir_folio_init_array(folio, cookie, change_attr);
 	folio_mark_uptodate(folio);
diff --git a/fs/orangefs/inode.c b/fs/orangefs/inode.c
index a01400cd41fd..37227ba71593 100644
--- a/fs/orangefs/inode.c
+++ b/fs/orangefs/inode.c
@@ -452,7 +452,8 @@ static bool orangefs_release_folio(struct folio *folio, gfp_t foo)
 	return !folio_test_private(folio);
 }
 
-static void orangefs_free_folio(struct folio *folio)
+static void orangefs_free_folio(struct address_space *mapping,
+				struct folio *folio)
 {
 	kfree(folio_detach_private(folio));
 }
diff --git a/include/linux/fs.h b/include/linux/fs.h
index d7ab4f96d705..afb0748ffda6 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -457,7 +457,7 @@ struct address_space_operations {
 	sector_t (*bmap)(struct address_space *, sector_t);
 	void (*invalidate_folio) (struct folio *, size_t offset, size_t len);
 	bool (*release_folio)(struct folio *, gfp_t);
-	void (*free_folio)(struct folio *folio);
+	void (*free_folio)(struct address_space *, struct folio *folio);
 	ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter);
 	/*
 	 * migrate the contents of a folio to the specified target. If
diff --git a/mm/filemap.c b/mm/filemap.c
index 751838ef05e5..3dd8ad922d80 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -226,11 +226,11 @@ void __filemap_remove_folio(struct folio *folio, void *shadow)
 
 void filemap_free_folio(struct address_space *mapping, struct folio *folio)
 {
-	void (*free_folio)(struct folio *);
+	void (*free_folio)(struct address_space *, struct folio *);
 
 	free_folio = mapping->a_ops->free_folio;
 	if (free_folio)
-		free_folio(folio);
+		free_folio(mapping, folio);
 
 	folio_put_refs(folio, folio_nr_pages(folio));
 }
@@ -820,7 +820,8 @@ EXPORT_SYMBOL(file_write_and_wait_range);
 void replace_page_cache_folio(struct folio *old, struct folio *new)
 {
 	struct address_space *mapping = old->mapping;
-	void (*free_folio)(struct folio *) = mapping->a_ops->free_folio;
+	void (*free_folio)(struct address_space *, struct folio *) =
+		mapping->a_ops->free_folio;
 	pgoff_t offset = old->index;
 	XA_STATE(xas, &mapping->i_pages, offset);
 
@@ -849,7 +850,7 @@ void replace_page_cache_folio(struct folio *old, struct folio *new)
 	__lruvec_stat_add_folio(new, NR_SHMEM);
 	xas_unlock_irq(&xas);
 	if (free_folio)
-		free_folio(old);
+		free_folio(mapping, old);
 	folio_put(old);
 }
 EXPORT_SYMBOL_GPL(replace_page_cache_folio);
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 60137305bc20..422dcaa32506 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -150,7 +150,8 @@ static int secretmem_migrate_folio(struct address_space *mapping,
 	return -EBUSY;
 }
 
-static void secretmem_free_folio(struct folio *folio)
+static void secretmem_free_folio(struct address_space *mapping,
+				 struct folio *folio)
 {
 	set_direct_map_default_noflush(folio_page(folio, 0));
 	folio_zero_segment(folio, 0, folio_size(folio));
diff --git a/mm/vmscan.c b/mm/vmscan.c
index a48aec8bfd92..559bd6ac965c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -788,7 +788,7 @@ static int __remove_mapping(struct address_space *mapping, struct folio *folio,
 		xa_unlock_irq(&mapping->i_pages);
 		put_swap_folio(folio, swap);
 	} else {
-		void (*free_folio)(struct folio *);
+		void (*free_folio)(struct address_space *, struct folio *);
 
 		free_folio = mapping->a_ops->free_folio;
 		/*
@@ -817,7 +817,7 @@ static int __remove_mapping(struct address_space *mapping, struct folio *folio,
 		spin_unlock(&mapping->host->i_lock);
 
 		if (free_folio)
-			free_folio(folio);
+			free_folio(mapping, folio);
 	}
 
 	return 1;
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 08a6bc7d25b6..9ec4c45e3cf2 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -430,7 +430,8 @@ static int kvm_gmem_error_folio(struct address_space *mapping, struct folio *fol
 }
 
 #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE
-static void kvm_gmem_free_folio(struct folio *folio)
+static void kvm_gmem_free_folio(struct address_space *mapping,
+				struct folio *folio)
 {
 	struct page *page = folio_page(folio, 0);
 	kvm_pfn_t pfn = page_to_pfn(page);
-- 
2.50.1