From nobody Fri Nov 29 00:40:03 2024
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, wangkefeng.wang@huawei.com,
 21cnbao@gmail.com, ryan.roberts@arm.com, ioworker0@gmail.com,
 da.gomez@samsung.com, baolin.wang@linux.alibaba.com, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v3 1/6] mm: factor out the order calculation into a new helper
Date: Thu, 28 Nov 2024 15:40:39 +0800
Message-Id: <5505f9ea50942820c1924d1803bfdd3a524e54f6.1732779148.git.baolin.wang@linux.alibaba.com>

Factor out the order calculation into a new helper, which can be reused
by shmem in the following patch.
Suggested-by: Matthew Wilcox
Signed-off-by: Baolin Wang
Reviewed-by: Barry Song
Reviewed-by: David Hildenbrand
Reviewed-by: Daniel Gomez
---
 include/linux/pagemap.h | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index bcf0865a38ae..d796c8a33647 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -727,6 +727,16 @@ typedef unsigned int __bitwise fgf_t;
 
 #define FGP_WRITEBEGIN	(FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE)
 
+static inline unsigned int filemap_get_order(size_t size)
+{
+	unsigned int shift = ilog2(size);
+
+	if (shift <= PAGE_SHIFT)
+		return 0;
+
+	return shift - PAGE_SHIFT;
+}
+
 /**
  * fgf_set_order - Encode a length in the fgf_t flags.
  * @size: The suggested size of the folio to create.
@@ -740,11 +750,11 @@ typedef unsigned int __bitwise fgf_t;
  */
 static inline fgf_t fgf_set_order(size_t size)
 {
-	unsigned int shift = ilog2(size);
+	unsigned int order = filemap_get_order(size);
 
-	if (shift <= PAGE_SHIFT)
+	if (!order)
 		return 0;
-	return (__force fgf_t)((shift - PAGE_SHIFT) << 26);
+	return (__force fgf_t)(order << 26);
 }
 
 void *filemap_get_entry(struct address_space *mapping, pgoff_t index);
-- 
2.39.3

From nobody Fri Nov 29 00:40:03 2024
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, wangkefeng.wang@huawei.com,
 21cnbao@gmail.com, ryan.roberts@arm.com, ioworker0@gmail.com,
 da.gomez@samsung.com, baolin.wang@linux.alibaba.com, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v3 2/6] mm: shmem: change shmem_huge_global_enabled() to return huge order bitmap
Date: Thu, 28 Nov 2024 15:40:40 +0800
Message-Id: <9dce1cfad3e9c1587cf1a0ea782ddbebd0e92984.1732779148.git.baolin.wang@linux.alibaba.com>

Change shmem_huge_global_enabled() to return a bitmap of the suitable
huge orders, and 0 if huge pages are not allowed. This is a preparation
for supporting allocation of various huge orders for tmpfs in the
following patches.

No functional changes.

Signed-off-by: Baolin Wang
Acked-by: David Hildenbrand
---
 mm/shmem.c | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index ccb9629a0f70..7595c3db4c1c 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -554,37 +554,37 @@ static bool shmem_confirm_swap(struct address_space *mapping,
 
 static int shmem_huge __read_mostly = SHMEM_HUGE_NEVER;
 
-static bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
-				      loff_t write_end, bool shmem_huge_force,
-				      unsigned long vm_flags)
+static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
+					      loff_t write_end, bool shmem_huge_force,
+					      unsigned long vm_flags)
 {
 	loff_t i_size;
 
 	if (HPAGE_PMD_ORDER > MAX_PAGECACHE_ORDER)
-		return false;
+		return 0;
 	if (!S_ISREG(inode->i_mode))
-		return false;
+		return 0;
 	if (shmem_huge == SHMEM_HUGE_DENY)
-		return false;
+		return 0;
 	if (shmem_huge_force || shmem_huge == SHMEM_HUGE_FORCE)
-		return true;
+		return BIT(HPAGE_PMD_ORDER);
 
 	switch (SHMEM_SB(inode->i_sb)->huge) {
 	case SHMEM_HUGE_ALWAYS:
-		return true;
+		return BIT(HPAGE_PMD_ORDER);
 	case SHMEM_HUGE_WITHIN_SIZE:
 		index = round_up(index + 1, HPAGE_PMD_NR);
 		i_size = max(write_end, i_size_read(inode));
 		i_size = round_up(i_size, PAGE_SIZE);
 		if (i_size >> PAGE_SHIFT >= index)
-			return true;
+			return BIT(HPAGE_PMD_ORDER);
 		fallthrough;
 	case SHMEM_HUGE_ADVISE:
 		if (vm_flags & VM_HUGEPAGE)
-			return true;
+			return BIT(HPAGE_PMD_ORDER);
 		fallthrough;
 	default:
-		return false;
+		return 0;
 	}
 }
 
@@ -779,11 +779,11 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
 	return 0;
 }
 
-static bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
-				      loff_t write_end, bool shmem_huge_force,
-				      unsigned long vm_flags)
+static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
+					      loff_t write_end, bool shmem_huge_force,
+					      unsigned long vm_flags)
 {
-	return false;
+	return 0;
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
@@ -1685,21 +1685,21 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
 	unsigned long mask = READ_ONCE(huge_shmem_orders_always);
 	unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
 	unsigned long vm_flags = vma ? vma->vm_flags : 0;
-	bool global_huge;
+	unsigned int global_orders;
 	loff_t i_size;
 	int order;
 
 	if (thp_disabled_by_hw() || (vma && vma_thp_disabled(vma, vm_flags)))
 		return 0;
 
-	global_huge = shmem_huge_global_enabled(inode, index, write_end,
-						shmem_huge_force, vm_flags);
+	global_orders = shmem_huge_global_enabled(inode, index, write_end,
+						  shmem_huge_force, vm_flags);
 	if (!vma || !vma_is_anon_shmem(vma)) {
 		/*
 		 * For tmpfs, we now only support PMD sized THP if huge page
 		 * is enabled, otherwise fallback to order 0.
 		 */
-		return global_huge ? BIT(HPAGE_PMD_ORDER) : 0;
+		return global_orders;
 	}
 
 	/*
@@ -1732,7 +1732,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
 	if (vm_flags & VM_HUGEPAGE)
 		mask |= READ_ONCE(huge_shmem_orders_madvise);
 
-	if (global_huge)
+	if (global_orders > 0)
 		mask |= READ_ONCE(huge_shmem_orders_inherit);
 
 	return THP_ORDERS_ALL_FILE_DEFAULT & mask;
-- 
2.39.3

From nobody Fri Nov 29 00:40:03 2024
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, wangkefeng.wang@huawei.com,
 21cnbao@gmail.com, ryan.roberts@arm.com, ioworker0@gmail.com,
 da.gomez@samsung.com, baolin.wang@linux.alibaba.com, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v3 3/6] mm: shmem: add large folio support for tmpfs
Date: Thu, 28 Nov 2024 15:40:41 +0800
Message-Id: <035bf55fbdebeff65f5cb2cdb9907b7d632c3228.1732779148.git.baolin.wang@linux.alibaba.com>

Add large folio support for the tmpfs write and fallocate paths, matching
the same high-order preference mechanism used by the iomap buffered IO
path in __filemap_get_folio(). Add shmem_mapping_size_orders() to get a
hint for the orders of the folio based on the file size, which takes the
mapping requirements into account.

Traditionally, tmpfs only supported PMD-sized large folios. Nowadays,
however, other file systems support large folios of any size, and
anonymous memory has been extended to support mTHP, so we should not
restrict tmpfs to allocating only PMD-sized large folios, which makes it
a special case. Instead, tmpfs should be allowed to allocate large folios
of any size.

Considering that tmpfs already has the 'huge=' option to control the
PMD-sized large folio allocation, extend the 'huge=' option to allow
large folios of any size. The semantics of the 'huge=' mount option are:

huge=never: no large folios of any size
huge=always: large folios of any size
huge=within_size: like 'always', but respect the i_size
huge=advise: like 'always' if requested with madvise()

Note: for tmpfs mmap() faults, PMD-sized huge folios are still allocated
if huge=always/within_size/advise is set, due to the lack of a write size
hint.

Moreover, the 'deny' and 'force' testing options controlled by
'/sys/kernel/mm/transparent_hugepage/shmem_enabled' retain the same
semantics: 'deny' disables large folios of any size for tmpfs, while
'force' enables PMD-sized large folios for tmpfs.

Co-developed-by: Daniel Gomez
Signed-off-by: Daniel Gomez
Signed-off-by: Baolin Wang
---
 mm/shmem.c | 99 ++++++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 81 insertions(+), 18 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 7595c3db4c1c..54eaa724c153 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -554,34 +554,100 @@ static bool shmem_confirm_swap(struct address_space *mapping,
 
 static int shmem_huge __read_mostly = SHMEM_HUGE_NEVER;
 
+/**
+ * shmem_mapping_size_orders - Get allowable folio orders for the given file size.
+ * @mapping: Target address_space.
+ * @index: The page index.
+ * @write_end: end of a write, could extend inode size.
+ *
+ * This returns huge orders for folios (when supported) based on the file size
+ * which the mapping currently allows at the given index. The index is relevant
+ * due to alignment considerations the mapping might have. The returned order
+ * may be less than the size passed.
+ *
+ * Return: The orders.
+ */
+static inline unsigned int
+shmem_mapping_size_orders(struct address_space *mapping, pgoff_t index, loff_t write_end)
+{
+	unsigned int order;
+	size_t size;
+
+	if (!mapping_large_folio_support(mapping) || !write_end)
+		return 0;
+
+	/* Calculate the write size based on the write_end */
+	size = write_end - (index << PAGE_SHIFT);
+	order = filemap_get_order(size);
+	if (!order)
+		return 0;
+
+	/* If we're not aligned, allocate a smaller folio */
+	if (index & ((1UL << order) - 1))
+		order = __ffs(index);
+
+	order = min_t(size_t, order, MAX_PAGECACHE_ORDER);
+	return order > 0 ? BIT(order + 1) - 1 : 0;
+}
+
 static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
 					      loff_t write_end, bool shmem_huge_force,
+					      struct vm_area_struct *vma,
 					      unsigned long vm_flags)
 {
+	unsigned int maybe_pmd_order = HPAGE_PMD_ORDER > MAX_PAGECACHE_ORDER ?
+		0 : BIT(HPAGE_PMD_ORDER);
+	unsigned long within_size_orders;
+	unsigned int order;
+	pgoff_t aligned_index;
 	loff_t i_size;
 
-	if (HPAGE_PMD_ORDER > MAX_PAGECACHE_ORDER)
-		return 0;
 	if (!S_ISREG(inode->i_mode))
 		return 0;
 	if (shmem_huge == SHMEM_HUGE_DENY)
 		return 0;
 	if (shmem_huge_force || shmem_huge == SHMEM_HUGE_FORCE)
-		return BIT(HPAGE_PMD_ORDER);
+		return maybe_pmd_order;
 
+	/*
+	 * The huge order allocation for anon shmem is controlled through
+	 * the mTHP interface, so we still use PMD-sized huge order to
+	 * check whether global control is enabled.
+	 *
+	 * For tmpfs mmap()'s huge order, we still use PMD-sized order to
+	 * allocate huge pages due to lack of a write size hint.
+	 *
+	 * Otherwise, tmpfs will allow getting a highest order hint based on
+	 * the size of write and fallocate paths, then will try each allowable
+	 * huge orders.
+	 */
 	switch (SHMEM_SB(inode->i_sb)->huge) {
 	case SHMEM_HUGE_ALWAYS:
-		return BIT(HPAGE_PMD_ORDER);
+		if (vma)
+			return maybe_pmd_order;
+
+		return shmem_mapping_size_orders(inode->i_mapping, index, write_end);
 	case SHMEM_HUGE_WITHIN_SIZE:
-		index = round_up(index + 1, HPAGE_PMD_NR);
-		i_size = max(write_end, i_size_read(inode));
-		i_size = round_up(i_size, PAGE_SIZE);
-		if (i_size >> PAGE_SHIFT >= index)
-			return BIT(HPAGE_PMD_ORDER);
+		if (vma)
+			within_size_orders = maybe_pmd_order;
+		else
+			within_size_orders = shmem_mapping_size_orders(inode->i_mapping,
+								       index, write_end);
+
+		order = highest_order(within_size_orders);
+		while (within_size_orders) {
+			aligned_index = round_up(index + 1, 1 << order);
+			i_size = max(write_end, i_size_read(inode));
+			i_size = round_up(i_size, PAGE_SIZE);
+			if (i_size >> PAGE_SHIFT >= aligned_index)
+				return within_size_orders;
+
+			order = next_order(&within_size_orders, order);
+		}
 		fallthrough;
 	case SHMEM_HUGE_ADVISE:
 		if (vm_flags & VM_HUGEPAGE)
-			return BIT(HPAGE_PMD_ORDER);
+			return maybe_pmd_order;
 		fallthrough;
 	default:
 		return 0;
@@ -781,6 +847,7 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
 
 static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
 					      loff_t write_end, bool shmem_huge_force,
+					      struct vm_area_struct *vma,
 					      unsigned long vm_flags)
 {
 	return 0;
@@ -1176,7 +1243,7 @@ static int shmem_getattr(struct mnt_idmap *idmap,
 				STATX_ATTR_NODUMP);
 	generic_fillattr(idmap, request_mask, inode, stat);
 
-	if (shmem_huge_global_enabled(inode, 0, 0, false, 0))
+	if (shmem_huge_global_enabled(inode, 0, 0, false, NULL, 0))
 		stat->blksize = HPAGE_PMD_SIZE;
 
 	if (request_mask & STATX_BTIME) {
@@ -1693,14 +1760,10 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
 		return 0;
 
 	global_orders = shmem_huge_global_enabled(inode, index, write_end,
-						  shmem_huge_force, vm_flags);
-	if (!vma || !vma_is_anon_shmem(vma)) {
-		/*
-		 * For tmpfs, we now only support PMD sized THP if huge page
-		 * is enabled, otherwise fallback to order 0.
-		 */
+						  shmem_huge_force, vma, vm_flags);
+	/* Tmpfs huge pages allocation */
+	if (!vma || !vma_is_anon_shmem(vma))
 		return global_orders;
-	}
 
 	/*
 	 * Following the 'deny' semantics of the top level, force the huge
-- 
2.39.3

From nobody Fri Nov 29 00:40:03 2024
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, wangkefeng.wang@huawei.com,
 21cnbao@gmail.com, ryan.roberts@arm.com, ioworker0@gmail.com,
 da.gomez@samsung.com, baolin.wang@linux.alibaba.com, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v3 4/6] mm: shmem: add a kernel command line to change the default huge policy for tmpfs
Date: Thu, 28 Nov 2024 15:40:42 +0800

Now that tmpfs can allocate large folios of any size, the default huge
policy is still preferred to be 'never'. This is because tmpfs does not
behave like other file systems in some cases, as previously explained
by David[1]:

" I think I raised this in the past, but tmpfs/shmem is just like any
other file system .. except it sometimes really isn't and behaves much
more like (swappable) anonymous memory.

(or mlocked files)

There are many systems out there that run without swap enabled, or with
extremely minimal swap (IIRC until recently kubernetes was completely
incompatible with swapping). Swap can even be disabled today for shmem
using a mount option.

That's a big difference to all other file systems where you are
guaranteed to have backend storage where you can simply evict under
memory pressure (might temporarily fail, of course).

I *think* that's the reason why we have the "huge=" parameter that also
controls the THP allocations during page faults (IOW possible memory
over-allocation). Maybe also because it was a new feature, and we only
had a single THP size. "

Thus, add a new kernel command line parameter to change the default huge
policy, which helps to make use of large folios for tmpfs; it is similar
to the 'transparent_hugepage_shmem=' cmdline for shmem.

[1] https://lore.kernel.org/all/cbadd5fe-69d5-4c21-8eb8-3344ed36c721@redhat.com/

Signed-off-by: Baolin Wang
---
 .../admin-guide/kernel-parameters.txt      |  7 ++++++
 Documentation/admin-guide/mm/transhuge.rst |  6 +++++
 mm/shmem.c                                 | 23 ++++++++++++++++++-
 3 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index dc663c0ca670..e73383450240 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -6987,6 +6987,13 @@
 			See Documentation/admin-guide/mm/transhuge.rst
 			for more details.
 
+	transparent_hugepage_tmpfs= [KNL]
+			Format: [always|within_size|advise|never]
+			Can be used to control the default hugepage allocation policy
+			for the tmpfs mount.
+			See Documentation/admin-guide/mm/transhuge.rst
+			for more details.
+
 	trusted.source= [KEYS]
 			Format:
 			This parameter identifies the trust source as a backend

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 5034915f4e8e..9ae775eaacbe 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -332,6 +332,12 @@ allocation policy for the internal shmem mount by using the kernel parameter
 seven valid policies for shmem (``always``, ``within_size``, ``advise``,
 ``never``, ``deny``, and ``force``).
 
+Similarly to ``transparent_hugepage_shmem``, you can control the default
+hugepage allocation policy for the tmpfs mount by using the kernel parameter
+``transparent_hugepage_tmpfs=<policy>``, where ``<policy>`` is one of the
+four valid policies for tmpfs (``always``, ``within_size``, ``advise``,
+``never``). The tmpfs mount default policy is ``never``.
+
 In the same manner as ``thp_anon`` controls each supported anonymous THP
 size, ``thp_shmem`` controls each supported shmem THP size. ``thp_shmem``
 has the same format as ``thp_anon``, but also supports the policy

diff --git a/mm/shmem.c b/mm/shmem.c
index 54eaa724c153..8a602fc61edb 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -553,6 +553,7 @@ static bool shmem_confirm_swap(struct address_space *mapping,
 /* ifdef here to avoid bloating shmem.o when not necessary */
 
 static int shmem_huge __read_mostly = SHMEM_HUGE_NEVER;
+static int tmpfs_huge __read_mostly = SHMEM_HUGE_NEVER;
 
 /**
  * shmem_mapping_size_orders - Get allowable folio orders for the given file size.
@@ -4951,7 +4952,12 @@ static int shmem_fill_super(struct super_block *sb, struct fs_context *fc)
 	sbinfo->gid = ctx->gid;
 	sbinfo->full_inums = ctx->full_inums;
 	sbinfo->mode = ctx->mode;
-	sbinfo->huge = ctx->huge;
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	if (ctx->seen & SHMEM_SEEN_HUGE)
+		sbinfo->huge = ctx->huge;
+	else
+		sbinfo->huge = tmpfs_huge;
+#endif
 	sbinfo->mpol = ctx->mpol;
 	ctx->mpol = NULL;
 
@@ -5502,6 +5508,21 @@ static int __init setup_transparent_hugepage_shmem(char *str)
 }
 __setup("transparent_hugepage_shmem=", setup_transparent_hugepage_shmem);
 
+static int __init setup_transparent_hugepage_tmpfs(char *str)
+{
+	int huge;
+
+	huge = shmem_parse_huge(str);
+	if (huge < 0) {
+		pr_warn("transparent_hugepage_tmpfs= cannot parse, ignored\n");
+		return huge;
+	}
+
+	tmpfs_huge = huge;
+	return 1;
+}
+__setup("transparent_hugepage_tmpfs=", setup_transparent_hugepage_tmpfs);
+
 static char str_dup[PAGE_SIZE] __initdata;
 static int __init setup_thp_shmem(char *str)
 {
-- 
2.39.3

From nobody Fri Nov 29 00:40:03 2024
b=EoclGHhq9BThDF7A/X4WILUX+3+YdsomI2SW9v8Y8b+PdkackN2ax4zkZotg47JejS1tAGnkZV3pAsGEzf+Dy3bTL8w1Yt6XnLHpmNKddPMjn8wSwiHeIX2nf/Z/REMy/L9fpTHI+yzbCjSnqq+qcv+cwXfRJgSu8tWZH8yq1nc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com; spf=pass smtp.mailfrom=linux.alibaba.com; dkim=pass (1024-bit key) header.d=linux.alibaba.com header.i=@linux.alibaba.com header.b=aJpe0Hmg; arc=none smtp.client-ip=115.124.30.101 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.alibaba.com header.i=@linux.alibaba.com header.b="aJpe0Hmg" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.alibaba.com; s=default; t=1732779660; h=From:To:Subject:Date:Message-Id:MIME-Version; bh=4geB5O7mEfI/X7677yTwrrvC9HQS1VjRPAOpYyMK/zg=; b=aJpe0Hmg0o3VTImpuD6hAmY2tJNdzR4FocU5j+RovPSRXoJDhknGhgcBRlsvgFSc6u5i27C8sTopDd6mkSDV+ZhpumMfMnv1rqQYKev8MsC7JsaAVum044jpMxR+UgVsKNN9vfbwH+QHyZjlSdbGNTTkYc4/L4lNKpu5IklA2aI= Received: from localhost(mailfrom:baolin.wang@linux.alibaba.com fp:SMTPD_---0WKOx1qd_1732779658 cluster:ay36) by smtp.aliyun-inc.com; Thu, 28 Nov 2024 15:40:59 +0800 From: Baolin Wang To: akpm@linux-foundation.org, hughd@google.com Cc: willy@infradead.org, david@redhat.com, wangkefeng.wang@huawei.com, 21cnbao@gmail.com, ryan.roberts@arm.com, ioworker0@gmail.com, da.gomez@samsung.com, baolin.wang@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 5/6] docs: tmpfs: update the large folios policy for tmpfs and shmem Date: Thu, 28 Nov 2024 15:40:43 +0800 Message-Id: <9b7418af30e300d1eb05721b81d79074d0bb0ec9.1732779148.git.baolin.wang@linux.alibaba.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: References: Precedence: bulk X-Mailing-List: 
linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: David Hildenbrand Update the large folios policy for tmpfs and shmem. Signed-off-by: David Hildenbrand Signed-off-by: Baolin Wang --- Documentation/admin-guide/mm/transhuge.rst | 58 +++++++++++++++------- 1 file changed, 41 insertions(+), 17 deletions(-) diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/adm= in-guide/mm/transhuge.rst index 9ae775eaacbe..ba6edff728ed 100644 --- a/Documentation/admin-guide/mm/transhuge.rst +++ b/Documentation/admin-guide/mm/transhuge.rst @@ -358,8 +358,21 @@ default to ``never``. Hugepages in tmpfs/shmem =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D =20 -You can control hugepage allocation policy in tmpfs with mount option -``huge=3D``. It can have following values: +Traditionally, tmpfs only supported a single huge page size ("PMD"). Today, +it also supports smaller sizes just like anonymous memory, often referred +to as "multi-size THP" (mTHP). Huge pages of any size are commonly +represented in the kernel as "large folios". + +While there is fine control over the huge page sizes to use for the intern= al +shmem mount (see below), ordinary tmpfs mounts will make use of all availa= ble +huge page sizes without any control over the exact sizes, behaving more li= ke +other file systems. + +tmpfs mounts +------------ + +The THP allocation policy for tmpfs mounts can be adjusted using the mount +option: ``huge=3D``. It can have following values: =20 always Attempt to allocate huge pages every time we need a new page; @@ -374,19 +387,19 @@ within_size advise Only allocate huge pages if requested with fadvise()/madvise(); =20 -The default policy is ``never``. +Remember, that the kernel may use huge pages of all available sizes, and +that no fine control as for the internal tmpfs mount is available. 
+
+The default policy in the past was ``never``, but it can now be adjusted
+using the kernel parameter ``transparent_hugepage_tmpfs=``.
 
 ``mount -o remount,huge= /mountpoint`` works fine after mount: remounting
 ``huge=never`` will not attempt to break up huge pages at all, just stop more
 from being allocated.
 
-There's also sysfs knob to control hugepage allocation policy for internal
-shmem mount: /sys/kernel/mm/transparent_hugepage/shmem_enabled. The mount
-is used for SysV SHM, memfds, shared anonymous mmaps (of /dev/zero or
-MAP_ANONYMOUS), GPU drivers' DRM objects, Ashmem.
-
-In addition to policies listed above, shmem_enabled allows two further
-values:
+In addition to policies listed above, the sysfs knob
+/sys/kernel/mm/transparent_hugepage/shmem_enabled will affect the
+allocation policy of tmpfs mounts, when set to the following values:
 
 deny
     For use in emergencies, to force the huge option off from
@@ -394,13 +407,24 @@ deny
 force
     Force the huge option on for all - very useful for testing;
 
-Shmem can also use "multi-size THP" (mTHP) by adding a new sysfs knob to
-control mTHP allocation:
-'/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/shmem_enabled',
-and its value for each mTHP is essentially consistent with the global
-setting. An 'inherit' option is added to ensure compatibility with these
-global settings. Conversely, the options 'force' and 'deny' are dropped,
-which are rather testing artifacts from the old ages.
+shmem / internal tmpfs
+----------------------
+The internal tmpfs mount is used for SysV SHM, memfds, shared anonymous
+mmaps (of /dev/zero or MAP_ANONYMOUS), GPU drivers' DRM objects, Ashmem.
+
+To control the THP allocation policy for this internal tmpfs mount, the
+sysfs knob /sys/kernel/mm/transparent_hugepage/shmem_enabled and the knobs
+per THP size in
+'/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/shmem_enabled'
+can be used.
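The global and per-size knobs described above can be inspected without privileges; writing them requires root. A sketch, assuming a kernel built with CONFIG_TRANSPARENT_HUGEPAGE (the `echo` lines in the comments are illustrative, and the 64kB size is just an example):

```shell
# Sketch: read the shmem THP knobs.  Root-only configuration would look
# like:
#   echo advise  > /sys/kernel/mm/transparent_hugepage/shmem_enabled
#   echo inherit > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/shmem_enabled
base=/sys/kernel/mm/transparent_hugepage
if [ -r "$base/shmem_enabled" ]; then
    printf 'global shmem_enabled: %s\n' "$(cat "$base/shmem_enabled")"
    # One knob per supported mTHP size; 'inherit' defers to the global one.
    for knob in "$base"/hugepages-*kB/shmem_enabled; do
        [ -r "$knob" ] || continue
        printf '%s: %s\n' "${knob#"$base"/}" "$(cat "$knob")"
    done
else
    echo "shmem_enabled knob absent (CONFIG_TRANSPARENT_HUGEPAGE disabled?)"
fi
```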
+
+The global knob has the same semantics as the ``huge=`` mount options
+for tmpfs mounts, except that the different huge page sizes can be controlled
+individually, and will only use the setting of the global knob when the
+per-size knob is set to 'inherit'.
+
+The options 'force' and 'deny' are dropped for the individual sizes, which
+are rather testing artifacts from the old ages.
 
 always
     Attempt to allocate huge pages every time we need a new page;
-- 
2.39.3

From nobody Fri Nov 29 00:40:03 2024
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, wangkefeng.wang@huawei.com,
 21cnbao@gmail.com, ryan.roberts@arm.com, ioworker0@gmail.com,
 da.gomez@samsung.com, baolin.wang@linux.alibaba.com,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 6/6] docs: tmpfs: drop 'fadvise()' from the documentation
Date: Thu, 28 Nov 2024 15:40:44 +0800
Message-Id: <3a10bb49832f6d9827dc2c76aec0bf43a892876b.1732779148.git.baolin.wang@linux.alibaba.com>

Drop 'fadvise()' from the doc, since fadvise() has no HUGEPAGE advise
currently.
Signed-off-by: Baolin Wang
Reviewed-by: Barry Song
Acked-by: David Hildenbrand
---
 Documentation/admin-guide/mm/transhuge.rst | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index ba6edff728ed..333958ef0d5f 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -382,10 +382,10 @@ never
 
 within_size
     Only allocate huge page if it will be fully within i_size.
-    Also respect fadvise()/madvise() hints;
+    Also respect madvise() hints;
 
 advise
-    Only allocate huge pages if requested with fadvise()/madvise();
+    Only allocate huge pages if requested with madvise();
 
 Remember, that the kernel may use huge pages of all available sizes, and
 that no fine control as for the internal tmpfs mount is available.
@@ -438,10 +438,10 @@ never
 
 within_size
     Only allocate huge page if it will be fully within i_size.
-    Also respect fadvise()/madvise() hints;
+    Also respect madvise() hints;
 
 advise
-    Only allocate huge pages if requested with fadvise()/madvise();
+    Only allocate huge pages if requested with madvise();
 
 Need of application restart
 ===========================
-- 
2.39.3
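Patch 5/6 above notes that the historical ``never`` default for tmpfs can now be overridden with the ``transparent_hugepage_tmpfs=`` kernel parameter. A sketch of checking whether the running kernel was booted with it (no root needed; falls back gracefully where /proc/cmdline is unavailable):

```shell
# Sketch: look for the tmpfs THP boot-time override on the kernel
# command line.  If absent, the built-in tmpfs default policy applies.
if grep -q 'transparent_hugepage_tmpfs=' /proc/cmdline 2>/dev/null; then
    tr ' ' '\n' < /proc/cmdline | grep 'transparent_hugepage_tmpfs='
else
    echo "transparent_hugepage_tmpfs not set; tmpfs default policy applies"
fi
```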