From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, wangkefeng.wang@huawei.com,
    21cnbao@gmail.com, ryan.roberts@arm.com, ioworker0@gmail.com,
    da.gomez@samsung.com, baolin.wang@linux.alibaba.com,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 3/6] mm: shmem: add large folio support for tmpfs
Date: Thu, 28 Nov 2024 15:40:41 +0800
Message-Id: <035bf55fbdebeff65f5cb2cdb9907b7d632c3228.1732779148.git.baolin.wang@linux.alibaba.com>

Add large folio support for the tmpfs write and fallocate paths, matching
the high-order preference mechanism used by the iomap buffered IO path, as
in __filemap_get_folio().

Add shmem_mapping_size_orders() to get a hint for the orders of the folio
based on the file size, taking the mapping's requirements into account.
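To make the hint concrete, here is a minimal userspace sketch of the same
math (not kernel code; it assumes 4KiB pages, and size_to_order() is a
hypothetical stand-in for filemap_get_order(), paired with the
BIT(order + 1) - 1 bitmap convention used by shmem_mapping_size_orders()
in the patch below):

	#include <stdio.h>

	#define PAGE_SHIFT	12	/* assumes 4KiB pages */

	/* Highest order that fits the write size: ilog2(size in pages). */
	static unsigned int size_to_order(unsigned long size)
	{
		unsigned int order = 0;

		size >>= PAGE_SHIFT;
		while (size > 1) {
			size >>= 1;
			order++;
		}
		return order;
	}

	int main(void)
	{
		/* A 2MiB write gives order 9, i.e. PMD size on x86-64. */
		unsigned int order = size_to_order(2UL << 20);

		/* Bitmap of all allowable orders 0..order. */
		unsigned long orders = order ? (1UL << (order + 1)) - 1 : 0;

		printf("order hint %u, orders bitmap 0x%lx\n", order, orders);
		return 0;
	}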
Traditionally, tmpfs only supported PMD-sized large folios. However, now
that other file systems support large folios of any size, and anonymous
memory has been extended to support mTHP, we should not restrict tmpfs to
allocating only PMD-sized large folios, making it a special case. Instead,
we should allow tmpfs to allocate large folios of any size.

Considering that tmpfs already has the 'huge=' option to control PMD-sized
large folio allocation, extend the 'huge=' option to allow large folios of
any size. The semantics of the 'huge=' mount option become (a usage sketch
follows the tags below):

huge=never: no large folios of any size
huge=always: large folios of any size
huge=within_size: like 'always', but respect i_size
huge=advise: like 'always' if requested with madvise()

Note: for tmpfs mmap() faults, PMD-sized huge folios are still allocated
when huge=always/within_size/advise is set, due to the lack of a write
size hint.

Moreover, the 'deny' and 'force' testing options controlled by
'/sys/kernel/mm/transparent_hugepage/shmem_enabled' retain the same
semantics: 'deny' disables large folios of any size for tmpfs, while
'force' enables PMD-sized large folios for tmpfs.

Co-developed-by: Daniel Gomez <da.gomez@samsung.com>
Signed-off-by: Daniel Gomez <da.gomez@samsung.com>
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
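As the usage sketch referenced above: a tmpfs instance opting in to
any-sized large folios can be mounted via mount(2). The mount point
/mnt/tmp and the size=1G option are hypothetical, and the call needs
CAP_SYS_ADMIN:

	#include <stdio.h>
	#include <sys/mount.h>

	int main(void)
	{
		/* Equivalent to: mount -t tmpfs -o huge=within_size,size=1G tmpfs /mnt/tmp */
		if (mount("tmpfs", "/mnt/tmp", "tmpfs", 0,
			  "huge=within_size,size=1G")) {
			perror("mount");
			return 1;
		}
		return 0;
	}

With huge=within_size, a subsequent large write() can then be served with
folios sized up to the write size, rather than only order-0 or PMD-sized
folios.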
---
 mm/shmem.c | 99 ++++++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 81 insertions(+), 18 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 7595c3db4c1c..54eaa724c153 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -554,34 +554,100 @@ static bool shmem_confirm_swap(struct address_space *mapping,
 
 static int shmem_huge __read_mostly = SHMEM_HUGE_NEVER;
 
+/**
+ * shmem_mapping_size_orders - Get allowable folio orders for the given file size.
+ * @mapping: Target address_space.
+ * @index: The page index.
+ * @write_end: end of a write, could extend inode size.
+ *
+ * Returns the huge orders for folios (when supported) that the mapping
+ * currently allows at the given index, based on the file size. The index
+ * is relevant due to alignment considerations the mapping might have. The
+ * returned order may correspond to a size smaller than the one passed.
+ *
+ * Return: The orders.
+ */
+static inline unsigned int
+shmem_mapping_size_orders(struct address_space *mapping, pgoff_t index, loff_t write_end)
+{
+	unsigned int order;
+	size_t size;
+
+	if (!mapping_large_folio_support(mapping) || !write_end)
+		return 0;
+
+	/* Calculate the write size based on the write_end */
+	size = write_end - (index << PAGE_SHIFT);
+	order = filemap_get_order(size);
+	if (!order)
+		return 0;
+
+	/* If we're not aligned, allocate a smaller folio */
+	if (index & ((1UL << order) - 1))
+		order = __ffs(index);
+
+	order = min_t(size_t, order, MAX_PAGECACHE_ORDER);
+	return order > 0 ? BIT(order + 1) - 1 : 0;
+}
+
 static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
 					      loff_t write_end, bool shmem_huge_force,
+					      struct vm_area_struct *vma,
 					      unsigned long vm_flags)
 {
+	unsigned int maybe_pmd_order = HPAGE_PMD_ORDER > MAX_PAGECACHE_ORDER ?
+		0 : BIT(HPAGE_PMD_ORDER);
+	unsigned long within_size_orders;
+	unsigned int order;
+	pgoff_t aligned_index;
 	loff_t i_size;
 
-	if (HPAGE_PMD_ORDER > MAX_PAGECACHE_ORDER)
-		return 0;
 	if (!S_ISREG(inode->i_mode))
 		return 0;
 	if (shmem_huge == SHMEM_HUGE_DENY)
 		return 0;
 	if (shmem_huge_force || shmem_huge == SHMEM_HUGE_FORCE)
-		return BIT(HPAGE_PMD_ORDER);
+		return maybe_pmd_order;
 
+	/*
+	 * The huge order allocation for anon shmem is controlled through
+	 * the mTHP interface, so we still use a PMD-sized huge order to
+	 * check whether global control is enabled.
+	 *
+	 * For tmpfs mmap()'s huge order, we still use a PMD-sized order to
+	 * allocate huge pages due to the lack of a write size hint.
+	 *
+	 * Otherwise, tmpfs gets a highest order hint based on the size of
+	 * the write and fallocate paths, then tries each allowable huge
+	 * order.
+	 */
 	switch (SHMEM_SB(inode->i_sb)->huge) {
 	case SHMEM_HUGE_ALWAYS:
-		return BIT(HPAGE_PMD_ORDER);
+		if (vma)
+			return maybe_pmd_order;
+
+		return shmem_mapping_size_orders(inode->i_mapping, index, write_end);
 	case SHMEM_HUGE_WITHIN_SIZE:
-		index = round_up(index + 1, HPAGE_PMD_NR);
-		i_size = max(write_end, i_size_read(inode));
-		i_size = round_up(i_size, PAGE_SIZE);
-		if (i_size >> PAGE_SHIFT >= index)
-			return BIT(HPAGE_PMD_ORDER);
+		if (vma)
+			within_size_orders = maybe_pmd_order;
+		else
+			within_size_orders = shmem_mapping_size_orders(inode->i_mapping,
+								       index, write_end);
+
+		order = highest_order(within_size_orders);
+		while (within_size_orders) {
+			aligned_index = round_up(index + 1, 1 << order);
+			i_size = max(write_end, i_size_read(inode));
+			i_size = round_up(i_size, PAGE_SIZE);
+			if (i_size >> PAGE_SHIFT >= aligned_index)
+				return within_size_orders;
+
+			order = next_order(&within_size_orders, order);
+		}
 		fallthrough;
 	case SHMEM_HUGE_ADVISE:
 		if (vm_flags & VM_HUGEPAGE)
-			return BIT(HPAGE_PMD_ORDER);
+			return maybe_pmd_order;
 		fallthrough;
 	default:
 		return 0;
@@ -781,6 +847,7 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
 
 static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
 					      loff_t write_end, bool shmem_huge_force,
+					      struct vm_area_struct *vma,
 					      unsigned long vm_flags)
 {
 	return 0;
@@ -1176,7 +1243,7 @@ static int shmem_getattr(struct mnt_idmap *idmap,
 				STATX_ATTR_NODUMP);
 	generic_fillattr(idmap, request_mask, inode, stat);
 
-	if (shmem_huge_global_enabled(inode, 0, 0, false, 0))
+	if (shmem_huge_global_enabled(inode, 0, 0, false, NULL, 0))
 		stat->blksize = HPAGE_PMD_SIZE;
 
 	if (request_mask & STATX_BTIME) {
@@ -1693,14 +1760,10 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
 		return 0;
 
 	global_orders = shmem_huge_global_enabled(inode, index, write_end,
-						  shmem_huge_force, vm_flags);
-	if (!vma || !vma_is_anon_shmem(vma)) {
-		/*
-		 * For tmpfs, we now only support PMD sized THP if huge page
-		 * is enabled, otherwise fallback to order 0.
-		 */
+						  shmem_huge_force, vma, vm_flags);
+	/* Tmpfs huge pages allocation */
+	if (!vma || !vma_is_anon_shmem(vma))
 		return global_orders;
-	}
 
 	/*
 	 * Following the 'deny' semantics of the top level, force the huge
-- 
2.39.3
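For reference, a minimal userspace sketch of the bitmap walk done in the
SHMEM_HUGE_WITHIN_SIZE case above. These highest_order()/next_order()
functions are hypothetical stand-ins for the kernel helpers of the same
names, rewritten here with GCC builtins so the sketch is self-contained:

	#include <stdio.h>

	/* Highest set bit = largest candidate order. */
	static int highest_order(unsigned long orders)
	{
		return 8 * sizeof(orders) - 1 - __builtin_clzl(orders);
	}

	/* Clear the current order, return the next lower one, or -1 if none. */
	static int next_order(unsigned long *orders, int prev)
	{
		*orders &= ~(1UL << prev);
		return *orders ? highest_order(*orders) : -1;
	}

	int main(void)
	{
		unsigned long orders = (1UL << 10) - 1;	/* orders 0..9 */

		/* Walk from the largest order down, as the within_size loop does. */
		for (int o = highest_order(orders); o >= 0;
		     o = next_order(&orders, o))
			printf("try order %d\n", o);
		return 0;
	}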