From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: david@redhat.com, lorenzo.stoakes@oracle.com, willy@infradead.org, baolin.wang@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH] mm: shmem: allow fallback to smaller large orders for tmpfs mmap() access
Date: Fri, 14 Nov 2025 08:46:32 +0800
Message-ID: <283a0bdfd6ac7aa334a491422bcae70919c572bd.1763008453.git.baolin.wang@linux.alibaba.com>

After commit 69e0a3b49003 ("mm: shmem: fix the strategy for the tmpfs 'huge='
options"), we have fixed the large order allocation strategy for tmpfs, which
always tries PMD-sized large folios first and, if that fails, falls back to
smaller large folios.

For tmpfs large folio allocation via mmap(), we should maintain the same
strategy. Let's unify the large order allocation strategy for tmpfs. There is
no functional change for large folio allocation of anonymous shmem.
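To illustrate the strategy: conceptually the allocator walks the bitmask of
allowed folio orders from the largest down to the smallest and returns the
first size it can actually satisfy. Below is a rough, self-contained C sketch
of that fallback idea only; the try_alloc() and alloc_with_fallback() helpers
are made up for illustration and this is not the mm/shmem.c code:

/*
 * Illustrative sketch of the order-fallback loop (not kernel code).
 * 'orders' is a bitmask of allowed folio orders, e.g. the set of orders
 * a mount option permits. try_alloc() stands in for the real folio
 * allocation, which may fail for large orders under fragmentation.
 */
#include <stdbool.h>
#include <stdio.h>

static bool try_alloc(unsigned int order)
{
	/* Pretend only order-2 and smaller allocations succeed. */
	return order <= 2;
}

static int alloc_with_fallback(unsigned long orders)
{
	/* Walk the allowed orders from highest to lowest. */
	for (int order = 8 * sizeof(orders) - 1; order >= 0; order--) {
		if (!(orders & (1UL << order)))
			continue;
		if (try_alloc(order))
			return order;	/* first size the allocator can satisfy */
	}
	return -1;			/* even order-0 failed */
}

int main(void)
{
	/* Allow PMD order (9 with 4K pages on x86-64) plus a couple of mTHP orders. */
	unsigned long orders = (1UL << 9) | (1UL << 4) | (1UL << 2);

	printf("allocated order %d\n", alloc_with_fallback(orders));
	return 0;
}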
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/shmem.c | 17 +++--------------
 1 file changed, 3 insertions(+), 14 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 395ca58ac4a5..fc835b3e4914 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -645,34 +645,23 @@ static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t index
 	 * the mTHP interface, so we still use PMD-sized huge order to
 	 * check whether global control is enabled.
 	 *
-	 * For tmpfs mmap()'s huge order, we still use PMD-sized order to
-	 * allocate huge pages due to lack of a write size hint.
-	 *
 	 * For tmpfs with 'huge=always' or 'huge=within_size' mount option,
 	 * we will always try PMD-sized order first. If that failed, it will
 	 * fall back to small large folios.
 	 */
 	switch (SHMEM_SB(inode->i_sb)->huge) {
 	case SHMEM_HUGE_ALWAYS:
-		if (vma)
-			return maybe_pmd_order;
-		return THP_ORDERS_ALL_FILE_DEFAULT;
 	case SHMEM_HUGE_WITHIN_SIZE:
-		if (vma)
-			within_size_orders = maybe_pmd_order;
-		else
-			within_size_orders = THP_ORDERS_ALL_FILE_DEFAULT;
-
-		within_size_orders = shmem_get_orders_within_size(inode, within_size_orders,
-								  index, write_end);
+		within_size_orders = shmem_get_orders_within_size(inode,
+				THP_ORDERS_ALL_FILE_DEFAULT, index, write_end);
 		if (within_size_orders > 0)
 			return within_size_orders;

 		fallthrough;
 	case SHMEM_HUGE_ADVISE:
 		if (vm_flags & VM_HUGEPAGE)
-			return maybe_pmd_order;
+			return THP_ORDERS_ALL_FILE_DEFAULT;
 		fallthrough;
 	default:
 		return 0;
-- 
2.43.7