From: Chao Peng
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Hugh Dickins, Jeff Layton, "J. Bruce Fields", Andrew Morton, Mike Rapoport, Steven Price, "Maciej S. Szmigiero", Vlastimil Babka, Vishal Annapurve, Yu Zhang, Chao Peng, "Kirill A. Shutemov", luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret, Michael Roth, mhocko@suse.com
Subject: [PATCH v6 2/8] mm/shmem: Support memfile_notifier
Date: Thu, 19 May 2022 23:37:07 +0800
Message-Id: <20220519153713.819591-3-chao.p.peng@linux.intel.com>
In-Reply-To: <20220519153713.819591-1-chao.p.peng@linux.intel.com>
References: <20220519153713.819591-1-chao.p.peng@linux.intel.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: "Kirill A. Shutemov"

Implement shmem as a memfile_notifier backing store. It wires shmem up to
the memfile_notifier feature flags that restrict userspace access, page
reclaim and page migration, and implements the necessary
memfile_backing_store callbacks.
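As a rough illustration (not part of this patch, and assuming the consumer
has already obtained the registered struct memfile_backing_store from the
memfile_notifier core introduced earlier in this series), the callbacks
implemented here are expected to be used along these lines; the helper
name example_pin_page() and the 'bs' pointer are hypothetical:

	/*
	 * Illustrative sketch only: pin one page of a shmem-backed memfile
	 * through the backing store callbacks, then release it.
	 */
	static int example_pin_page(struct memfile_backing_store *bs,
				    struct file *file, pgoff_t offset)
	{
		pfn_t pfn;
		int order;
		int ret;

		/* Returns the page locked, with a reference held. */
		ret = bs->get_lock_pfn(file, offset, &pfn, &order);
		if (ret)
			return ret;

		/* ... map pfn into the secondary MMU here ... */

		/* Dirties, unlocks and drops the reference on the page. */
		bs->put_unlock_pfn(pfn);
		return 0;
	}

The MEMFILE_F_USER_INACCESSIBLE, MEMFILE_F_UNRECLAIMABLE and
MEMFILE_F_UNMOVABLE checks added below keep such memory from being mapped
to userspace, swapped out or migrated while it is in use this way.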
Signed-off-by: Kirill A. Shutemov
Signed-off-by: Chao Peng
---
 include/linux/shmem_fs.h |   2 +
 mm/shmem.c               | 120 ++++++++++++++++++++++++++++++++++++++-
 2 files changed, 121 insertions(+), 1 deletion(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index ab51d3cd39bd..a8e98bdd121e 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include <linux/memfile_notifier.h>

 /* inode in-kernel data */

@@ -25,6 +26,7 @@ struct shmem_inode_info {
	struct simple_xattrs	xattrs;		/* list of xattrs */
	atomic_t		stop_eviction;	/* hold when working on inode */
	struct timespec64	i_crtime;	/* file creation time */
+	struct memfile_node	memfile_node;	/* memfile node */
	struct inode		vfs_inode;
 };

diff --git a/mm/shmem.c b/mm/shmem.c
index 529c9ad3e926..f97ae328c87a 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -905,6 +905,24 @@ static struct folio *shmem_get_partial_folio(struct inode *inode, pgoff_t index)
	return page ? page_folio(page) : NULL;
 }

+static void notify_populate(struct inode *inode, pgoff_t start, pgoff_t end)
+{
+	struct shmem_inode_info *info = SHMEM_I(inode);
+
+	memfile_notifier_populate(&info->memfile_node, start, end);
+}
+
+static void notify_invalidate(struct inode *inode, struct folio *folio,
+			      pgoff_t start, pgoff_t end)
+{
+	struct shmem_inode_info *info = SHMEM_I(inode);
+
+	start = max(start, folio->index);
+	end = min(end, folio->index + folio_nr_pages(folio));
+
+	memfile_notifier_invalidate(&info->memfile_node, start, end);
+}
+
 /*
  * Remove range of pages and swap entries from page cache, and free them.
  * If !unfalloc, truncate or punch hole; if unfalloc, undo failed fallocate.
@@ -948,6 +966,8 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
			}
			index += folio_nr_pages(folio) - 1;

+			notify_invalidate(inode, folio, start, end);
+
			if (!unfalloc || !folio_test_uptodate(folio))
				truncate_inode_folio(mapping, folio);
			folio_unlock(folio);
@@ -1021,6 +1041,9 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
					index--;
					break;
				}
+
+				notify_invalidate(inode, folio, start, end);
+
				VM_BUG_ON_FOLIO(folio_test_writeback(folio),
						folio);
				truncate_inode_folio(mapping, folio);
@@ -1092,6 +1115,13 @@ static int shmem_setattr(struct user_namespace *mnt_userns,
		    (newsize > oldsize && (info->seals & F_SEAL_GROW)))
			return -EPERM;

+		if (info->memfile_node.flags & MEMFILE_F_USER_INACCESSIBLE) {
+			if (oldsize)
+				return -EPERM;
+			if (!PAGE_ALIGNED(newsize))
+				return -EINVAL;
+		}
+
		if (newsize != oldsize) {
			error = shmem_reacct_size(SHMEM_I(inode)->flags,
					oldsize, newsize);
@@ -1340,6 +1370,8 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
		goto redirty;
	if (!total_swap_pages)
		goto redirty;
+	if (info->memfile_node.flags & MEMFILE_F_UNRECLAIMABLE)
+		goto redirty;

	/*
	 * Our capabilities prevent regular writeback or sync from ever calling
@@ -2234,6 +2266,9 @@ static int shmem_mmap(struct file *file, struct vm_area_struct *vma)
	if (ret)
		return ret;

+	if (info->memfile_node.flags & MEMFILE_F_USER_INACCESSIBLE)
+		return -EPERM;
+
	/* arm64 - allow memory tagging on RAM-based files */
	vma->vm_flags |= VM_MTE_ALLOWED;

@@ -2274,6 +2309,7 @@ static struct inode *shmem_get_inode(struct super_block *sb, const struct inode
		info->i_crtime = inode->i_mtime;
		INIT_LIST_HEAD(&info->shrinklist);
		INIT_LIST_HEAD(&info->swaplist);
+		memfile_node_init(&info->memfile_node);
		simple_xattrs_init(&info->xattrs);
		cache_no_acl(inode);
		mapping_set_large_folios(inode->i_mapping);
@@ -2442,6 +2478,8 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
		if ((info->seals & F_SEAL_GROW) && pos + len > inode->i_size)
			return -EPERM;
	}
+	if (unlikely(info->memfile_node.flags & MEMFILE_F_USER_INACCESSIBLE))
+		return -EPERM;

	ret = shmem_getpage(inode, index, pagep, SGP_WRITE);

@@ -2518,6 +2556,13 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
		end_index = i_size >> PAGE_SHIFT;
		if (index > end_index)
			break;
+
+		if (SHMEM_I(inode)->memfile_node.flags &
+		    MEMFILE_F_USER_INACCESSIBLE) {
+			error = -EPERM;
+			break;
+		}
+
		if (index == end_index) {
			nr = i_size & ~PAGE_MASK;
			if (nr <= offset)
@@ -2649,6 +2694,12 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
			goto out;
		}

+		if ((info->memfile_node.flags & MEMFILE_F_USER_INACCESSIBLE) &&
+		    (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))) {
+			error = -EINVAL;
+			goto out;
+		}
+
		shmem_falloc.waitq = &shmem_falloc_waitq;
		shmem_falloc.start = (u64)unmap_start >> PAGE_SHIFT;
		shmem_falloc.next = (unmap_end + 1) >> PAGE_SHIFT;
@@ -2768,6 +2819,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
	if (!(mode & FALLOC_FL_KEEP_SIZE) && offset + len > inode->i_size)
		i_size_write(inode, offset + len);
	inode->i_ctime = current_time(inode);
+	notify_populate(inode, start, end);
 undone:
	spin_lock(&inode->i_lock);
	inode->i_private = NULL;
@@ -3754,6 +3806,20 @@ static int shmem_error_remove_page(struct address_space *mapping,
	return 0;
 }

+#ifdef CONFIG_MIGRATION
+static int shmem_migrate_page(struct address_space *mapping,
+			      struct page *newpage, struct page *page,
+			      enum migrate_mode mode)
+{
+	struct inode *inode = mapping->host;
+	struct shmem_inode_info *info = SHMEM_I(inode);
+
+	if (info->memfile_node.flags & MEMFILE_F_UNMOVABLE)
+		return -ENOTSUPP;
+	return migrate_page(mapping, newpage, page, mode);
+}
+#endif
+
 const struct address_space_operations shmem_aops = {
	.writepage	= shmem_writepage,
	.dirty_folio	= noop_dirty_folio,
@@ -3762,7 +3828,7 @@ const struct address_space_operations shmem_aops = {
	.write_end	= shmem_write_end,
 #endif
 #ifdef CONFIG_MIGRATION
-	.migratepage	= migrate_page,
+	.migratepage	= shmem_migrate_page,
 #endif
	.error_remove_page = shmem_error_remove_page,
 };
@@ -3879,6 +3945,54 @@ static struct file_system_type shmem_fs_type = {
	.fs_flags	= FS_USERNS_MOUNT,
 };

+#ifdef CONFIG_MEMFILE_NOTIFIER
+static struct memfile_node *shmem_lookup_memfile_node(struct file *file)
+{
+	struct inode *inode = file_inode(file);
+
+	if (!shmem_mapping(inode->i_mapping))
+		return NULL;
+
+	return &SHMEM_I(inode)->memfile_node;
+}
+
+
+static int shmem_get_lock_pfn(struct file *file, pgoff_t offset, pfn_t *pfn,
+			      int *order)
+{
+	struct page *page;
+	int ret;
+
+	ret = shmem_getpage(file_inode(file), offset, &page, SGP_NOALLOC);
+	if (ret)
+		return ret;
+
+	*pfn = page_to_pfn_t(page);
+	*order = thp_order(compound_head(page));
+	return 0;
+}
+
+static void shmem_put_unlock_pfn(pfn_t pfn)
+{
+	struct page *page = pfn_t_to_page(pfn);
+
+	if (!page)
+		return;
+
+	VM_BUG_ON_PAGE(!PageLocked(page), page);
+
+	set_page_dirty(page);
+	unlock_page(page);
+	put_page(page);
+}
+
+static struct memfile_backing_store shmem_backing_store = {
+	.lookup_memfile_node = shmem_lookup_memfile_node,
+	.get_lock_pfn = shmem_get_lock_pfn,
+	.put_unlock_pfn = shmem_put_unlock_pfn,
+};
+#endif /* CONFIG_MEMFILE_NOTIFIER */
+
 int __init shmem_init(void)
 {
	int error;
@@ -3904,6 +4018,10 @@ int __init shmem_init(void)
	else
		shmem_huge = SHMEM_HUGE_NEVER; /* just in case it was patched */
 #endif
+
+#ifdef CONFIG_MEMFILE_NOTIFIER
+	memfile_register_backing_store(&shmem_backing_store);
+#endif
	return 0;

 out1:
-- 
2.25.1