From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, wangkefeng.wang@huawei.com,
    ying.huang@intel.com, 21cnbao@gmail.com, ryan.roberts@arm.com,
    shy828301@gmail.com, ziy@nvidia.com, ioworker0@gmail.com,
    da.gomez@samsung.com, p.raghav@samsung.com, baolin.wang@linux.alibaba.com,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5 1/6] mm: memory: extend finish_fault() to support large folio
Date: Tue, 11 Jun 2024 18:11:05 +0800
Message-Id: <3a190892355989d42f59cf9f2f98b94694b0d24d.1718090413.git.baolin.wang@linux.alibaba.com>

Add large folio mapping establishment support for finish_fault() as a
preparation, to support multi-size THP allocation of anonymous shmem pages
in the following patches.

Keep the same behavior (per-page fault) for non-anon shmem to avoid
inflating the RSS unintentionally, and we can discuss what size of mapping
to build when extending mTHP to control non-anon shmem in the future.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Kefeng Wang
---
 mm/memory.c | 57 +++++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 47 insertions(+), 10 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index eef4e482c0c2..72775ee99ff3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4831,9 +4831,12 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct page *page;
+	struct folio *folio;
 	vm_fault_t ret;
 	bool is_cow = (vmf->flags & FAULT_FLAG_WRITE) &&
 		      !(vma->vm_flags & VM_SHARED);
+	int type, nr_pages;
+	unsigned long addr = vmf->address;

 	/* Did we COW the page? */
 	if (is_cow)
@@ -4864,24 +4867,58 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 			return VM_FAULT_OOM;
 	}

+	folio = page_folio(page);
+	nr_pages = folio_nr_pages(folio);
+
+	/*
+	 * Using per-page fault to maintain the uffd semantics, and same
+	 * approach also applies to non-anonymous-shmem faults to avoid
+	 * inflating the RSS of the process.
+	 */
+	if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))) {
+		nr_pages = 1;
+	} else if (nr_pages > 1) {
+		pgoff_t idx = folio_page_idx(folio, page);
+		/* The page offset of vmf->address within the VMA. */
+		pgoff_t vma_off = vmf->pgoff - vmf->vma->vm_pgoff;
+
+		/*
+		 * Fallback to per-page fault in case the folio size in page
+		 * cache beyond the VMA limits.
+		 */
+		if (unlikely(vma_off < idx ||
+			     vma_off + (nr_pages - idx) > vma_pages(vma))) {
+			nr_pages = 1;
+		} else {
+			/* Now we can set mappings for the whole large folio. */
+			addr = vmf->address - idx * PAGE_SIZE;
+			page = &folio->page;
+		}
+	}
+
 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
-				       vmf->address, &vmf->ptl);
+				       addr, &vmf->ptl);
 	if (!vmf->pte)
 		return VM_FAULT_NOPAGE;

 	/* Re-check under ptl */
-	if (likely(!vmf_pte_changed(vmf))) {
-		struct folio *folio = page_folio(page);
-		int type = is_cow ? MM_ANONPAGES : mm_counter_file(folio);
-
-		set_pte_range(vmf, folio, page, 1, vmf->address);
-		add_mm_counter(vma->vm_mm, type, 1);
-		ret = 0;
-	} else {
-		update_mmu_tlb(vma, vmf->address, vmf->pte);
+	if (nr_pages == 1 && unlikely(vmf_pte_changed(vmf))) {
+		update_mmu_tlb(vma, addr, vmf->pte);
+		ret = VM_FAULT_NOPAGE;
+		goto unlock;
+	} else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
+		update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
 		ret = VM_FAULT_NOPAGE;
+		goto unlock;
 	}

+	folio_ref_add(folio, nr_pages - 1);
+	set_pte_range(vmf, folio, page, nr_pages, addr);
+	type = is_cow ? MM_ANONPAGES : mm_counter_file(folio);
+	add_mm_counter(vma->vm_mm, type, nr_pages);
+	ret = 0;
+
+unlock:
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	return ret;
 }
--
2.39.3
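
[Editor's illustration, not part of the patch.] The whole folio is only mapped when it
fits inside the VMA, which is decided by comparing the faulting page's index inside the
folio (idx) against its page offset inside the VMA (vma_off). The standalone sketch below
mirrors that bounds check with plain integers; the helper name and the values in main()
are made up for illustration.

  #include <stdio.h>

  /*
   * Mirrors the finish_fault() bounds check with plain integers:
   * vma_off  - the faulting page's offset inside the VMA (in pages)
   * idx      - the faulting page's index inside the folio
   * nr_pages - folio size in pages
   * vma_size - VMA size in pages
   */
  static int can_map_whole_folio(long vma_off, long idx, long nr_pages, long vma_size)
  {
  	/* The folio would start before the VMA or end past it: fall back. */
  	if (vma_off < idx || vma_off + (nr_pages - idx) > vma_size)
  		return 0;
  	return 1;
  }

  int main(void)
  {
  	/* 16-page folio, fault at the folio's 3rd page, 256-page VMA. */
  	printf("%d\n", can_map_whole_folio(100, 2, 16, 256));	/* 1: map all 16 pages */
  	/* Same folio, but the fault sits at page 1 of the VMA, so the folio
  	 * would start before the VMA: only the single page is mapped. */
  	printf("%d\n", can_map_whole_folio(1, 2, 16, 256));	/* 0: per-page fault */
  	return 0;
  }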

From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, wangkefeng.wang@huawei.com,
    ying.huang@intel.com, 21cnbao@gmail.com, ryan.roberts@arm.com,
    shy828301@gmail.com, ziy@nvidia.com, ioworker0@gmail.com,
    da.gomez@samsung.com, p.raghav@samsung.com, baolin.wang@linux.alibaba.com,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5 2/6] mm: shmem: add THP validation for PMD-mapped THP related statistics
Date: Tue, 11 Jun 2024 18:11:06 +0800

In order to extend support for mTHP, add THP validation for the PMD-mapped
THP related statistics to avoid statistical confusion.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Barry Song
---
 mm/shmem.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 6868c0af3a69..ae358efc397a 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1647,7 +1647,7 @@ static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
 			return ERR_PTR(-E2BIG);

 		folio = shmem_alloc_folio(gfp, HPAGE_PMD_ORDER, info, index);
-		if (!folio)
+		if (!folio && pages == HPAGE_PMD_NR)
 			count_vm_event(THP_FILE_FALLBACK);
 	} else {
 		pages = 1;
@@ -1665,7 +1665,7 @@ static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
 		if (xa_find(&mapping->i_pages, &index,
 				index + pages - 1, XA_PRESENT)) {
 			error = -EEXIST;
-		} else if (huge) {
+		} else if (pages == HPAGE_PMD_NR) {
 			count_vm_event(THP_FILE_FALLBACK);
 			count_vm_event(THP_FILE_FALLBACK_CHARGE);
 		}
@@ -2031,7 +2031,8 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		folio = shmem_alloc_and_add_folio(huge_gfp,
 				inode, index, fault_mm, true);
 		if (!IS_ERR(folio)) {
-			count_vm_event(THP_FILE_ALLOC);
+			if (folio_test_pmd_mappable(folio))
+				count_vm_event(THP_FILE_ALLOC);
 			goto alloced;
 		}
 		if (PTR_ERR(folio) == -EEXIST)
--
2.39.3
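
[Editor's illustration, not part of the patch.] The THP_FILE_* events touched here are
exported through /proc/vmstat as thp_file_alloc, thp_file_fallback and
thp_file_fallback_charge. A minimal userspace sketch for watching them, assuming only
those counter names:

  #include <stdio.h>
  #include <string.h>

  /* Print the PMD-sized file-THP counters from /proc/vmstat. */
  int main(void)
  {
  	char line[256];
  	FILE *f = fopen("/proc/vmstat", "r");

  	if (!f)
  		return 1;
  	while (fgets(line, sizeof(line), f)) {
  		if (!strncmp(line, "thp_file_", 9))
  			fputs(line, stdout);
  	}
  	fclose(f);
  	return 0;
  }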

From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, wangkefeng.wang@huawei.com,
    ying.huang@intel.com, 21cnbao@gmail.com, ryan.roberts@arm.com,
    shy828301@gmail.com, ziy@nvidia.com, ioworker0@gmail.com,
    da.gomez@samsung.com, p.raghav@samsung.com, baolin.wang@linux.alibaba.com,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5 3/6] mm: shmem: add multi-size THP sysfs interface for anonymous shmem
Date: Tue, 11 Jun 2024 18:11:07 +0800

To support the use of mTHP with anonymous shmem, add a new sysfs interface
'shmem_enabled' in the '/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/'
directory for each mTHP to control whether shmem is enabled for that mTHP,
with a value similar to the top level 'shmem_enabled', which can be set to:
"always", "inherit (to inherit the top level setting)", "within_size",
"advise", "never".

An 'inherit' option is added to ensure compatibility with these global
settings, and the options 'force' and 'deny' are dropped, which are rather
testing artifacts from the old ages.

By default, PMD-sized hugepages have enabled="inherit" and all other
hugepage sizes have enabled="never" for
'/sys/kernel/mm/transparent_hugepage/hugepages-xxkB/shmem_enabled'.

In addition, if the top-level value is 'force', only PMD-sized hugepages
can have enabled="inherit"; otherwise the configuration will fail, and vice
versa. That means we now avoid using non-PMD-sized THP to override the
global huge allocation.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 Documentation/admin-guide/mm/transhuge.rst | 23 ++++++
 include/linux/huge_mm.h                    | 10 +++
 mm/huge_memory.c                           | 11 +--
 mm/shmem.c                                 | 96 ++++++++++++++++++++++
 4 files changed, 132 insertions(+), 8 deletions(-)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index d414d3f5592a..b76d15e408b3 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -332,6 +332,29 @@ deny
 force
 	Force the huge option on for all - very useful for testing;

+Shmem can also use "multi-size THP" (mTHP) by adding a new sysfs knob to control
+mTHP allocation: '/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/shmem_enabled',
+and its value for each mTHP is essentially consistent with the global setting.
+An 'inherit' option is added to ensure compatibility with these global settings.
+Conversely, the options 'force' and 'deny' are dropped, which are rather testing
+artifacts from the old ages.
+always
+	Attempt to allocate huge pages every time we need a new page;
+
+inherit
+	Inherit the top-level "shmem_enabled" value. By default, PMD-sized hugepages
+	have enabled="inherit" and all other hugepage sizes have enabled="never";
+
+never
+	Do not allocate huge pages;
+
+within_size
+	Only allocate huge page if it will be fully within i_size.
+	Also respect fadvise()/madvise() hints;
+
+advise
+	Only allocate huge pages if requested with fadvise()/madvise();
+
 Need of application restart
 ===========================

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 020e2344eb86..fac21548c5de 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -6,6 +6,7 @@
 #include <linux/mm_types.h>

 #include <linux/fs.h> /* only for vma_is_dax() */
+#include <linux/kobject.h>

 vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf);
 int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
@@ -63,6 +64,7 @@ ssize_t single_hugepage_flag_show(struct kobject *kobj,
 				  struct kobj_attribute *attr, char *buf,
 				  enum transparent_hugepage_flag flag);
 extern struct kobj_attribute shmem_enabled_attr;
+extern struct kobj_attribute thpsize_shmem_enabled_attr;

 /*
  * Mask of all large folio orders supported for anonymous THP; all orders up to
@@ -265,6 +267,14 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
 	return __thp_vma_allowable_orders(vma, vm_flags, tva_flags, orders);
 }

+struct thpsize {
+	struct kobject kobj;
+	struct list_head node;
+	int order;
+};
+
+#define to_thpsize(kobj) container_of(kobj, struct thpsize, kobj)
+
 enum mthp_stat_item {
 	MTHP_STAT_ANON_FAULT_ALLOC,
 	MTHP_STAT_ANON_FAULT_FALLBACK,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8e49f402d7c7..1360a1903b66 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -449,14 +449,6 @@ static void thpsize_release(struct kobject *kobj);
 static DEFINE_SPINLOCK(huge_anon_orders_lock);
 static LIST_HEAD(thpsize_list);

-struct thpsize {
-	struct kobject kobj;
-	struct list_head node;
-	int order;
-};
-
-#define to_thpsize(kobj) container_of(kobj, struct thpsize, kobj)
-
 static ssize_t thpsize_enabled_show(struct kobject *kobj,
 				    struct kobj_attribute *attr, char *buf)
 {
@@ -517,6 +509,9 @@ static struct kobj_attribute thpsize_enabled_attr =

 static struct attribute *thpsize_attrs[] = {
 	&thpsize_enabled_attr.attr,
+#ifdef CONFIG_SHMEM
+	&thpsize_shmem_enabled_attr.attr,
+#endif
 	NULL,
 };

diff --git a/mm/shmem.c b/mm/shmem.c
index ae358efc397a..bb19ea2770e3 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -131,6 +131,14 @@ struct shmem_options {
 #define SHMEM_SEEN_QUOTA 32
 };

+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static unsigned long huge_shmem_orders_always __read_mostly;
+static unsigned long huge_shmem_orders_madvise __read_mostly;
+static unsigned long huge_shmem_orders_inherit __read_mostly;
+static unsigned long huge_shmem_orders_within_size __read_mostly;
+static DEFINE_SPINLOCK(huge_shmem_orders_lock);
+#endif
+
 #ifdef CONFIG_TMPFS
 static unsigned long shmem_default_max_blocks(void)
 {
@@ -4672,6 +4680,12 @@ void __init shmem_init(void)
 		SHMEM_SB(shm_mnt->mnt_sb)->huge = shmem_huge;
 	else
 		shmem_huge = SHMEM_HUGE_NEVER; /* just in case it was patched */
+
+	/*
+	 * Default to setting PMD-sized THP to inherit the global setting and
+	 * disable all other multi-size THPs.
+	 */
+	huge_shmem_orders_inherit = BIT(HPAGE_PMD_ORDER);
 #endif
 	return;

@@ -4731,6 +4745,11 @@ static ssize_t shmem_enabled_store(struct kobject *kobj,
 	    huge != SHMEM_HUGE_NEVER && huge != SHMEM_HUGE_DENY)
 		return -EINVAL;

+	/* Do not override huge allocation policy with non-PMD sized mTHP */
+	if (huge == SHMEM_HUGE_FORCE &&
+	    huge_shmem_orders_inherit != BIT(HPAGE_PMD_ORDER))
+		return -EINVAL;
+
 	shmem_huge = huge;
 	if (shmem_huge > SHMEM_HUGE_DENY)
 		SHMEM_SB(shm_mnt->mnt_sb)->huge = shmem_huge;
@@ -4738,6 +4757,83 @@ static ssize_t shmem_enabled_store(struct kobject *kobj,
 }

 struct kobj_attribute shmem_enabled_attr = __ATTR_RW(shmem_enabled);
+
+static ssize_t thpsize_shmem_enabled_show(struct kobject *kobj,
+					  struct kobj_attribute *attr, char *buf)
+{
+	int order = to_thpsize(kobj)->order;
+	const char *output;
+
+	if (test_bit(order, &huge_shmem_orders_always))
+		output = "[always] inherit within_size advise never";
+	else if (test_bit(order, &huge_shmem_orders_inherit))
+		output = "always [inherit] within_size advise never";
+	else if (test_bit(order, &huge_shmem_orders_within_size))
+		output = "always inherit [within_size] advise never";
+	else if (test_bit(order, &huge_shmem_orders_madvise))
+		output = "always inherit within_size [advise] never";
+	else
+		output = "always inherit within_size advise [never]";
+
+	return sysfs_emit(buf, "%s\n", output);
+}
+
+static ssize_t thpsize_shmem_enabled_store(struct kobject *kobj,
+					   struct kobj_attribute *attr,
+					   const char *buf, size_t count)
+{
+	int order = to_thpsize(kobj)->order;
+	ssize_t ret = count;
+
+	if (sysfs_streq(buf, "always")) {
+		spin_lock(&huge_shmem_orders_lock);
+		clear_bit(order, &huge_shmem_orders_inherit);
+		clear_bit(order, &huge_shmem_orders_madvise);
+		clear_bit(order, &huge_shmem_orders_within_size);
+		set_bit(order, &huge_shmem_orders_always);
+		spin_unlock(&huge_shmem_orders_lock);
+	} else if (sysfs_streq(buf, "inherit")) {
+		/* Do not override huge allocation policy with non-PMD sized mTHP */
+		if (shmem_huge == SHMEM_HUGE_FORCE &&
+		    order != HPAGE_PMD_ORDER)
+			return -EINVAL;
+
+		spin_lock(&huge_shmem_orders_lock);
+		clear_bit(order, &huge_shmem_orders_always);
+		clear_bit(order, &huge_shmem_orders_madvise);
+		clear_bit(order, &huge_shmem_orders_within_size);
+		set_bit(order, &huge_shmem_orders_inherit);
+		spin_unlock(&huge_shmem_orders_lock);
+	} else if (sysfs_streq(buf, "within_size")) {
+		spin_lock(&huge_shmem_orders_lock);
+		clear_bit(order, &huge_shmem_orders_always);
+		clear_bit(order, &huge_shmem_orders_inherit);
+		clear_bit(order, &huge_shmem_orders_madvise);
+		set_bit(order, &huge_shmem_orders_within_size);
+		spin_unlock(&huge_shmem_orders_lock);
+	} else if (sysfs_streq(buf, "madvise")) {
+		spin_lock(&huge_shmem_orders_lock);
+		clear_bit(order, &huge_shmem_orders_always);
+		clear_bit(order, &huge_shmem_orders_inherit);
+		clear_bit(order, &huge_shmem_orders_within_size);
+		set_bit(order, &huge_shmem_orders_madvise);
+		spin_unlock(&huge_shmem_orders_lock);
+	} else if (sysfs_streq(buf, "never")) {
+		spin_lock(&huge_shmem_orders_lock);
+		clear_bit(order, &huge_shmem_orders_always);
+		clear_bit(order, &huge_shmem_orders_inherit);
+		clear_bit(order, &huge_shmem_orders_within_size);
+		clear_bit(order, &huge_shmem_orders_madvise);
+		spin_unlock(&huge_shmem_orders_lock);
+	} else {
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+struct kobj_attribute thpsize_shmem_enabled_attr =
+	__ATTR(shmem_enabled, 0644, thpsize_shmem_enabled_show, thpsize_shmem_enabled_store);
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE && CONFIG_SYSFS */

 #else /* !CONFIG_SHMEM */
--
2.39.3
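
[Editor's illustration, not part of the patch.] Once this series is applied, the new
per-size knob can be read and written like any other transparent_hugepage sysfs file;
hugepages-64kB below is just an example size, and which hugepages-<size>kB directories
exist depends on the architecture and base page size. Selecting a policy would be the
usual "echo inherit > .../shmem_enabled" as root; the sketch only reads the knob back:

  #include <stdio.h>

  int main(void)
  {
  	const char *path =
  		"/sys/kernel/mm/transparent_hugepage/hugepages-64kB/shmem_enabled";
  	char buf[128];
  	FILE *f = fopen(path, "r");

  	if (!f) {
  		perror(path);	/* kernel without this patch, or size not present */
  		return 1;
  	}
  	if (fgets(buf, sizeof(buf), f))
  		printf("%s", buf);	/* e.g. "always inherit within_size advise [never]" */
  	fclose(f);
  	return 0;
  }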

From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, wangkefeng.wang@huawei.com,
    ying.huang@intel.com, 21cnbao@gmail.com, ryan.roberts@arm.com,
    shy828301@gmail.com, ziy@nvidia.com, ioworker0@gmail.com,
    da.gomez@samsung.com, p.raghav@samsung.com, baolin.wang@linux.alibaba.com,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5 4/6] mm: shmem: add mTHP support for anonymous shmem
Date: Tue, 11 Jun 2024 18:11:08 +0800
Message-Id: <65796c1e72e51e15f3410195b5c2d5b6c160d411.1718090413.git.baolin.wang@linux.alibaba.com>

Commit 19eaf44954df adds multi-size THP (mTHP) for anonymous pages, which
allows THP to be configured through the sysfs interface located at
'/sys/kernel/mm/transparent_hugepage/hugepage-XXkb/enabled'.

However, anonymous shmem ignores the anonymous mTHP rule configured through
the sysfs interface and can only use PMD-mapped THP, which is not
reasonable. Users expect the mTHP rule to apply to all anonymous pages,
including anonymous shmem, in order to enjoy the benefits of mTHP: for
example, lower latency than PMD-mapped THP, smaller memory bloat than
PMD-mapped THP, and contiguous PTEs on ARM architectures to reduce TLB
misses. In addition, the mTHP interfaces can be extended to support all
shmem/tmpfs scenarios in the future, especially for the shmem mmap() case.

The primary strategy is similar to supporting anonymous mTHP. Introduce a
new interface '/sys/kernel/mm/transparent_hugepage/hugepage-XXkb/shmem_enabled',
which can have almost the same values as the top-level
'/sys/kernel/mm/transparent_hugepage/shmem_enabled', adding a new "inherit"
option and dropping the testing options 'force' and 'deny'. By default all
sizes will be set to "never" except PMD size, which is set to "inherit".
This ensures backward compatibility with the top-level anonymous shmem
enablement, while also allowing independent control of anonymous shmem
enablement for each mTHP.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 include/linux/huge_mm.h |  10 +++
 mm/shmem.c              | 187 +++++++++++++++++++++++++++++++-------
 2 files changed, 167 insertions(+), 30 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index fac21548c5de..909cfc67521d 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -575,6 +575,16 @@ static inline bool thp_migration_supported(void)
 {
 	return false;
 }
+
+static inline int highest_order(unsigned long orders)
+{
+	return 0;
+}
+
+static inline int next_order(unsigned long *orders, int prev)
+{
+	return 0;
+}
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */

 static inline int split_folio_to_list_to_order(struct folio *folio,
diff --git a/mm/shmem.c b/mm/shmem.c
index bb19ea2770e3..e849c88452b2 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1611,6 +1611,107 @@ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
 	return result;
 }

+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static unsigned long shmem_allowable_huge_orders(struct inode *inode,
+				struct vm_area_struct *vma, pgoff_t index,
+				bool global_huge)
+{
+	unsigned long mask = READ_ONCE(huge_shmem_orders_always);
+	unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
+	unsigned long vm_flags = vma->vm_flags;
+	/*
+	 * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
+	 * are enabled for this vma.
+	 */
+	unsigned long orders = BIT(PMD_ORDER + 1) - 1;
+	loff_t i_size;
+	int order;
+
+	if ((vm_flags & VM_NOHUGEPAGE) ||
+	    test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
+		return 0;
+
+	/* If the hardware/firmware marked hugepage support disabled. */
+	if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED))
+		return 0;
+
+	/*
+	 * Following the 'deny' semantics of the top level, force the huge
+	 * option off from all mounts.
+	 */
+	if (shmem_huge == SHMEM_HUGE_DENY)
+		return 0;
+
+	/*
+	 * Only allow inherit orders if the top-level value is 'force', which
+	 * means non-PMD sized THP can not override 'huge' mount option now.
+	 */
+	if (shmem_huge == SHMEM_HUGE_FORCE)
+		return READ_ONCE(huge_shmem_orders_inherit);
+
+	/* Allow mTHP that will be fully within i_size. */
+	order = highest_order(within_size_orders);
+	while (within_size_orders) {
+		index = round_up(index + 1, order);
+		i_size = round_up(i_size_read(inode), PAGE_SIZE);
+		if (i_size >> PAGE_SHIFT >= index) {
+			mask |= within_size_orders;
+			break;
+		}
+
+		order = next_order(&within_size_orders, order);
+	}
+
+	if (vm_flags & VM_HUGEPAGE)
+		mask |= READ_ONCE(huge_shmem_orders_madvise);
+
+	if (global_huge)
+		mask |= READ_ONCE(huge_shmem_orders_inherit);
+
+	return orders & mask;
+}
+
+static unsigned long shmem_suitable_orders(struct inode *inode, struct vm_fault *vmf,
+					   struct address_space *mapping, pgoff_t index,
+					   unsigned long orders)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	unsigned long pages;
+	int order;
+
+	orders = thp_vma_suitable_orders(vma, vmf->address, orders);
+	if (!orders)
+		return 0;
+
+	/* Find the highest order that can add into the page cache */
+	order = highest_order(orders);
+	while (orders) {
+		pages = 1UL << order;
+		index = round_down(index, pages);
+		if (!xa_find(&mapping->i_pages, &index,
+			     index + pages - 1, XA_PRESENT))
+			break;
+		order = next_order(&orders, order);
+	}
+
+	return orders;
+}
+#else
+static unsigned long shmem_allowable_huge_orders(struct inode *inode,
+				struct vm_area_struct *vma, pgoff_t index,
+				bool global_huge)
+{
+	return 0;
+}
+
+static unsigned long shmem_suitable_orders(struct inode *inode, struct vm_fault *vmf,
+					   struct address_space *mapping, pgoff_t index,
+					   unsigned long orders)
+{
+	return 0;
+}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
 static struct folio *shmem_alloc_folio(gfp_t gfp, int order,
 		struct shmem_inode_info *info, pgoff_t index)
 {
@@ -1625,38 +1726,55 @@ static struct folio *shmem_alloc_folio(gfp_t gfp, int order,
 	return folio;
 }

-static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
-		struct inode *inode, pgoff_t index,
-		struct mm_struct *fault_mm, bool huge)
+static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
+		gfp_t gfp, struct inode *inode, pgoff_t index,
+		struct mm_struct *fault_mm, unsigned long orders)
 {
 	struct address_space *mapping = inode->i_mapping;
 	struct shmem_inode_info *info = SHMEM_I(inode);
-	struct folio *folio;
+	struct vm_area_struct *vma = vmf ? vmf->vma : NULL;
+	unsigned long suitable_orders = 0;
+	struct folio *folio = NULL;
 	long pages;
-	int error;
+	int error, order;

 	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
-		huge = false;
+		orders = 0;

-	if (huge) {
-		pages = HPAGE_PMD_NR;
-		index = round_down(index, HPAGE_PMD_NR);
+	if (orders > 0) {
+		if (vma && vma_is_anon_shmem(vma)) {
+			suitable_orders = shmem_suitable_orders(inode, vmf,
+							mapping, index, orders);
+		} else if (orders & BIT(HPAGE_PMD_ORDER)) {
+			pages = HPAGE_PMD_NR;
+			suitable_orders = BIT(HPAGE_PMD_ORDER);
+			index = round_down(index, HPAGE_PMD_NR);

-		/*
-		 * Check for conflict before waiting on a huge allocation.
-		 * Conflict might be that a huge page has just been allocated
-		 * and added to page cache by a racing thread, or that there
-		 * is already at least one small page in the huge extent.
-		 * Be careful to retry when appropriate, but not forever!
-		 * Elsewhere -EEXIST would be the right code, but not here.
-		 */
-		if (xa_find(&mapping->i_pages, &index,
-				index + HPAGE_PMD_NR - 1, XA_PRESENT))
-			return ERR_PTR(-E2BIG);
+			/*
+			 * Check for conflict before waiting on a huge allocation.
+			 * Conflict might be that a huge page has just been allocated
+			 * and added to page cache by a racing thread, or that there
+			 * is already at least one small page in the huge extent.
+			 * Be careful to retry when appropriate, but not forever!
+			 * Elsewhere -EEXIST would be the right code, but not here.
+			 */
+			if (xa_find(&mapping->i_pages, &index,
+				    index + HPAGE_PMD_NR - 1, XA_PRESENT))
+				return ERR_PTR(-E2BIG);
+		}

-		folio = shmem_alloc_folio(gfp, HPAGE_PMD_ORDER, info, index);
-		if (!folio && pages == HPAGE_PMD_NR)
-			count_vm_event(THP_FILE_FALLBACK);
+		order = highest_order(suitable_orders);
+		while (suitable_orders) {
+			pages = 1UL << order;
+			index = round_down(index, pages);
+			folio = shmem_alloc_folio(gfp, order, info, index);
+			if (folio)
+				goto allocated;
+
+			if (pages == HPAGE_PMD_NR)
+				count_vm_event(THP_FILE_FALLBACK);
+			order = next_order(&suitable_orders, order);
+		}
 	} else {
 		pages = 1;
 		folio = shmem_alloc_folio(gfp, 0, info, index);
@@ -1664,6 +1782,7 @@ static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
 	if (!folio)
 		return ERR_PTR(-ENOMEM);

+allocated:
 	__folio_set_locked(folio);
 	__folio_set_swapbacked(folio);

@@ -1958,7 +2077,8 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 	struct mm_struct *fault_mm;
 	struct folio *folio;
 	int error;
-	bool alloced;
+	bool alloced, huge;
+	unsigned long orders = 0;

 	if (WARN_ON_ONCE(!shmem_mapping(inode->i_mapping)))
 		return -EINVAL;
@@ -2030,14 +2150,21 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		return 0;
 	}

-	if (shmem_is_huge(inode, index, false, fault_mm,
-			  vma ? vma->vm_flags : 0)) {
+	huge = shmem_is_huge(inode, index, false, fault_mm,
+			     vma ? vma->vm_flags : 0);
+	/* Find hugepage orders that are allowed for anonymous shmem. */
+	if (vma && vma_is_anon_shmem(vma))
+		orders = shmem_allowable_huge_orders(inode, vma, index, huge);
+	else if (huge)
+		orders = BIT(HPAGE_PMD_ORDER);
+
+	if (orders > 0) {
 		gfp_t huge_gfp;

 		huge_gfp = vma_thp_gfp_mask(vma);
 		huge_gfp = limit_gfp_mask(huge_gfp, gfp);
-		folio = shmem_alloc_and_add_folio(huge_gfp,
-				inode, index, fault_mm, true);
+		folio = shmem_alloc_and_add_folio(vmf, huge_gfp,
+				inode, index, fault_mm, orders);
 		if (!IS_ERR(folio)) {
 			if (folio_test_pmd_mappable(folio))
 				count_vm_event(THP_FILE_ALLOC);
@@ -2047,7 +2174,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		goto repeat;
 	}

-	folio = shmem_alloc_and_add_folio(gfp, inode, index, fault_mm, false);
+	folio = shmem_alloc_and_add_folio(vmf, gfp, inode, index, fault_mm, 0);
 	if (IS_ERR(folio)) {
 		error = PTR_ERR(folio);
 		if (error == -EEXIST)
@@ -2058,7 +2185,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,

 alloced:
 	alloced = true;
-	if (folio_test_pmd_mappable(folio) &&
+	if (folio_test_large(folio) &&
 	    DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE) <
 					folio_next_index(folio) - 1) {
 		struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
--
2.39.3
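
[Editor's illustration, not part of the patch.] The allocation path above walks the
bitmask of allowed orders from the highest set bit downwards, retrying with the next
lower enabled order on failure. The sketch below reimplements that
highest_order()/next_order() walk in plain userspace C (local stand-ins, not the kernel
helpers), with a hypothetical mask enabling orders 4 and 9:

  #include <stdio.h>

  /* A set bit N in 'orders' means order N (2^N pages) is allowed. */
  static int highest_order(unsigned long orders)
  {
  	int order = 0;

  	while (orders >>= 1)
  		order++;
  	return order;
  }

  static int next_order(unsigned long *orders, int prev)
  {
  	*orders &= ~(1UL << prev);	/* drop the order we just tried */
  	return highest_order(*orders);
  }

  int main(void)
  {
  	/* Orders 4 and 9 enabled: 64K and 2M folios with 4K base pages. */
  	unsigned long orders = (1UL << 9) | (1UL << 4);
  	int order = highest_order(orders);

  	while (orders) {
  		printf("try order %d (%lu pages)\n", order, 1UL << order);
  		order = next_order(&orders, order);
  	}
  	return 0;
  }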

From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, wangkefeng.wang@huawei.com,
    ying.huang@intel.com, 21cnbao@gmail.com, ryan.roberts@arm.com,
    shy828301@gmail.com, ziy@nvidia.com, ioworker0@gmail.com,
    da.gomez@samsung.com, p.raghav@samsung.com, baolin.wang@linux.alibaba.com,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5 5/6] mm: shmem: add mTHP size alignment in shmem_get_unmapped_area
Date: Tue, 11 Jun 2024 18:11:09 +0800
Message-Id: <0c549b57cf7db07503af692d8546ecfad0fcce52.1718090413.git.baolin.wang@linux.alibaba.com>

Although the top-level hugepage allocation can be turned off, anonymous
shmem can still use mTHP by configuring the sysfs interface located at
'/sys/kernel/mm/transparent_hugepage/hugepage-XXkb/shmem_enabled'.
Therefore, add alignment for mTHP size to provide a suitable alignment
address in shmem_get_unmapped_area().

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Tested-by: Lance Yang
---
 mm/shmem.c | 40 +++++++++++++++++++++++++++++++---------
 1 file changed, 31 insertions(+), 9 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index e849c88452b2..f5469c357be6 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2394,6 +2394,7 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 	unsigned long inflated_len;
 	unsigned long inflated_addr;
 	unsigned long inflated_offset;
+	unsigned long hpage_size;

 	if (len > TASK_SIZE)
 		return -ENOMEM;
@@ -2412,8 +2413,6 @@ unsigned long shmem_get_unmapped_area(struct file *file,

 	if (shmem_huge == SHMEM_HUGE_DENY)
 		return addr;
-	if (len < HPAGE_PMD_SIZE)
-		return addr;
 	if (flags & MAP_FIXED)
 		return addr;
 	/*
@@ -2425,8 +2424,11 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 	if (uaddr == addr)
 		return addr;

+	hpage_size = HPAGE_PMD_SIZE;
 	if (shmem_huge != SHMEM_HUGE_FORCE) {
 		struct super_block *sb;
+		unsigned long __maybe_unused hpage_orders;
+		int order = 0;

 		if (file) {
 			VM_BUG_ON(file->f_op != &shmem_file_operations);
@@ -2439,18 +2441,38 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 			if (IS_ERR(shm_mnt))
 				return addr;
 			sb = shm_mnt->mnt_sb;
+
+			/*
+			 * Find the highest mTHP order used for anonymous shmem to
+			 * provide a suitable alignment address.
+			 */
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+			hpage_orders = READ_ONCE(huge_shmem_orders_always);
+			hpage_orders |= READ_ONCE(huge_shmem_orders_within_size);
+			hpage_orders |= READ_ONCE(huge_shmem_orders_madvise);
+			if (SHMEM_SB(sb)->huge != SHMEM_HUGE_NEVER)
+				hpage_orders |= READ_ONCE(huge_shmem_orders_inherit);
+
+			if (hpage_orders > 0) {
+				order = highest_order(hpage_orders);
+				hpage_size = PAGE_SIZE << order;
+			}
+#endif
 		}
-		if (SHMEM_SB(sb)->huge == SHMEM_HUGE_NEVER)
+		if (SHMEM_SB(sb)->huge == SHMEM_HUGE_NEVER && !order)
 			return addr;
 	}

-	offset = (pgoff << PAGE_SHIFT) & (HPAGE_PMD_SIZE-1);
-	if (offset && offset + len < 2 * HPAGE_PMD_SIZE)
+	if (len < hpage_size)
+		return addr;
+
+	offset = (pgoff << PAGE_SHIFT) & (hpage_size - 1);
+	if (offset && offset + len < 2 * hpage_size)
 		return addr;
-	if ((addr & (HPAGE_PMD_SIZE-1)) == offset)
+	if ((addr & (hpage_size - 1)) == offset)
 		return addr;

-	inflated_len = len + HPAGE_PMD_SIZE - PAGE_SIZE;
+	inflated_len = len + hpage_size - PAGE_SIZE;
 	if (inflated_len > TASK_SIZE)
 		return addr;
 	if (inflated_len < len)
@@ -2463,10 +2485,10 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 	if (inflated_addr & ~PAGE_MASK)
 		return addr;

-	inflated_offset = inflated_addr & (HPAGE_PMD_SIZE-1);
+	inflated_offset = inflated_addr & (hpage_size - 1);
 	inflated_addr += offset - inflated_offset;
 	if (inflated_offset > offset)
-		inflated_addr += HPAGE_PMD_SIZE;
+		inflated_addr += hpage_size;

 	if (inflated_addr > TASK_SIZE - len)
 		return addr;
--
2.39.3
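
[Editor's illustration, not part of the patch.] The inflation logic asks the allocator
for len + hpage_size - PAGE_SIZE bytes and then shifts the returned address so that the
mapping's file offset becomes hpage_size-aligned. A small arithmetic sketch with made-up
inputs (64K as the highest enabled mTHP size, an arbitrary address) shows the adjustment:

  #include <stdio.h>

  #define PAGE_SHIFT	12

  int main(void)
  {
  	unsigned long hpage_size = 64 * 1024;			/* assume 64K mTHP */
  	unsigned long pgoff = 3;				/* file offset: 3 pages */
  	unsigned long inflated_addr = 0x7f0000001000UL;	/* mmap()'s pick for the inflated request */
  	unsigned long offset, inflated_offset;

  	offset = (pgoff << PAGE_SHIFT) & (hpage_size - 1);
  	inflated_offset = inflated_addr & (hpage_size - 1);
  	inflated_addr += offset - inflated_offset;
  	if (inflated_offset > offset)
  		inflated_addr += hpage_size;

  	/* Now (inflated_addr & (hpage_size - 1)) == offset, so each aligned
  	 * 64K chunk of the file can be mapped by a single 64K folio. */
  	printf("aligned addr: %#lx\n", inflated_addr);	/* 0x7f0000003000 */
  	return 0;
  }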

From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, wangkefeng.wang@huawei.com,
    ying.huang@intel.com, 21cnbao@gmail.com, ryan.roberts@arm.com,
    shy828301@gmail.com, ziy@nvidia.com, ioworker0@gmail.com,
    da.gomez@samsung.com, p.raghav@samsung.com, baolin.wang@linux.alibaba.com,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5 6/6] mm: shmem: add mTHP counters for anonymous shmem
Date: Tue, 11 Jun 2024 18:11:10 +0800
Message-Id: <4fd9e467d49ae4a747e428bcd821c7d13125ae67.1718090413.git.baolin.wang@linux.alibaba.com>

Add mTHP counters for anonymous shmem.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Lance Yang
---
 include/linux/huge_mm.h |  3 +++
 mm/huge_memory.c        |  6 ++++++
 mm/shmem.c              | 18 +++++++++++++---
 3 files changed, 24 insertions(+), 3 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 909cfc67521d..212cca384d7e 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -281,6 +281,9 @@ enum mthp_stat_item {
 	MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
 	MTHP_STAT_SWPOUT,
 	MTHP_STAT_SWPOUT_FALLBACK,
+	MTHP_STAT_FILE_ALLOC,
+	MTHP_STAT_FILE_FALLBACK,
+	MTHP_STAT_FILE_FALLBACK_CHARGE,
 	__MTHP_STAT_COUNT
 };

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1360a1903b66..3fbcd77f5957 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -555,6 +555,9 @@ DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
 DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
 DEFINE_MTHP_STAT_ATTR(swpout, MTHP_STAT_SWPOUT);
 DEFINE_MTHP_STAT_ATTR(swpout_fallback, MTHP_STAT_SWPOUT_FALLBACK);
+DEFINE_MTHP_STAT_ATTR(file_alloc, MTHP_STAT_FILE_ALLOC);
+DEFINE_MTHP_STAT_ATTR(file_fallback, MTHP_STAT_FILE_FALLBACK);
+DEFINE_MTHP_STAT_ATTR(file_fallback_charge, MTHP_STAT_FILE_FALLBACK_CHARGE);

 static struct attribute *stats_attrs[] = {
 	&anon_fault_alloc_attr.attr,
@@ -562,6 +565,9 @@ static struct attribute *stats_attrs[] = {
 	&anon_fault_fallback_charge_attr.attr,
 	&swpout_attr.attr,
 	&swpout_fallback_attr.attr,
+	&file_alloc_attr.attr,
+	&file_fallback_attr.attr,
+	&file_fallback_charge_attr.attr,
 	NULL,
 };

diff --git a/mm/shmem.c b/mm/shmem.c
index f5469c357be6..99bd3c34f0fb 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1773,6 +1773,9 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,

 			if (pages == HPAGE_PMD_NR)
 				count_vm_event(THP_FILE_FALLBACK);
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+			count_mthp_stat(order, MTHP_STAT_FILE_FALLBACK);
+#endif
 			order = next_order(&suitable_orders, order);
 		}
 	} else {
@@ -1792,9 +1795,15 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
 		if (xa_find(&mapping->i_pages, &index,
 				index + pages - 1, XA_PRESENT)) {
 			error = -EEXIST;
-		} else if (pages == HPAGE_PMD_NR) {
-			count_vm_event(THP_FILE_FALLBACK);
-			count_vm_event(THP_FILE_FALLBACK_CHARGE);
+		} else if (pages > 1) {
+			if (pages == HPAGE_PMD_NR) {
+				count_vm_event(THP_FILE_FALLBACK);
+				count_vm_event(THP_FILE_FALLBACK_CHARGE);
+			}
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+			count_mthp_stat(folio_order(folio), MTHP_STAT_FILE_FALLBACK);
+			count_mthp_stat(folio_order(folio), MTHP_STAT_FILE_FALLBACK_CHARGE);
+#endif
 		}
 		goto unlock;
 	}
@@ -2168,6 +2177,9 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		if (!IS_ERR(folio)) {
 			if (folio_test_pmd_mappable(folio))
 				count_vm_event(THP_FILE_ALLOC);
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+			count_mthp_stat(folio_order(folio), MTHP_STAT_FILE_ALLOC);
+#endif
 			goto alloced;
 		}
 		if (PTR_ERR(folio) == -EEXIST)
--
2.39.3
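
[Editor's illustration, not part of the patch.] With this patch applied, each
/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats/ directory should gain
file_alloc, file_fallback and file_fallback_charge next to the existing anon_fault_* and
swpout* counters. A reading sketch, with hugepages-64kB chosen arbitrarily and the paths
assumed from the attributes added above:

  #include <stdio.h>

  int main(void)
  {
  	static const char *names[] = {
  		"file_alloc", "file_fallback", "file_fallback_charge",
  	};
  	char path[256], buf[64];
  	unsigned int i;

  	for (i = 0; i < 3; i++) {
  		snprintf(path, sizeof(path),
  			 "/sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats/%s",
  			 names[i]);
  		FILE *f = fopen(path, "r");

  		if (!f)
  			continue;	/* kernel without this patch, or size not present */
  		if (fgets(buf, sizeof(buf), f))
  			printf("%s: %s", names[i], buf);
  		fclose(f);
  	}
  	return 0;
  }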