From: Yin Fengwei
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, stable@vger.kernel.org,
    akpm@linux-foundation.org, willy@infradead.org, vishal.moola@gmail.com,
    wangkefeng.wang@huawei.com, minchan@kernel.org, yuzhao@google.com,
    david@redhat.com, ryan.roberts@arm.com, shy828301@gmail.com
Cc: fengwei.yin@intel.com
Subject: [PATCH 1/2] madvise: don't use mapcount() against large folio for sharing check
Date: Sat, 29 Jul 2023 00:13:55 +0800
Message-Id: <20230728161356.1784568-2-fengwei.yin@intel.com>
In-Reply-To: <20230728161356.1784568-1-fengwei.yin@intel.com>
References: <20230728161356.1784568-1-fengwei.yin@intel.com>

Commit 07e8c82b5eff ("madvise: convert madvise_cold_or_pageout_pte_range()
to use folios") replaced page_mapcount() with folio_mapcount() to check
whether the folio is shared by another mapping. That is not correct for a
large folio: folio_mapcount() returns the total mapcount of the large folio,
which is not suitable for detecting whether the folio is shared.

Use folio_estimated_sharers() instead, which returns an estimated number of
sharers. The estimate is not 100% accurate, but it is good enough for the
madvise case here.
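To make the problem concrete: a large folio that is PTE-mapped by a single
process contributes one mapcount per subpage, so the total mapcount equals
the number of subpages even though nothing is shared, while sampling a
single subpage's mapcount (the idea behind folio_estimated_sharers()) still
gives 1. The toy userspace model below is illustration only, not kernel
code; struct toy_folio, NR_PAGES and the helper names are made-up for this
sketch.

/*
 * Toy userspace model -- NOT kernel code.  The struct and helpers are
 * invented for illustration only.
 */
#include <stdio.h>

#define NR_PAGES 16			/* pretend this is a 64KB large folio */

struct toy_folio {
	int page_mapcount[NR_PAGES];	/* per-subpage mapcount */
};

/* Models folio_mapcount() on a large folio: the total of all mapcounts. */
static int toy_total_mapcount(const struct toy_folio *f)
{
	int total = 0;

	for (int i = 0; i < NR_PAGES; i++)
		total += f->page_mapcount[i];
	return total;
}

/* Models the idea behind folio_estimated_sharers(): sample one subpage. */
static int toy_estimated_sharers(const struct toy_folio *f)
{
	return f->page_mapcount[0];
}

int main(void)
{
	struct toy_folio folio = { { 0 } };

	/* One process PTE-maps every subpage of the large folio exactly once. */
	for (int i = 0; i < NR_PAGES; i++)
		folio.page_mapcount[i] = 1;

	printf("total mapcount    = %d\n", toy_total_mapcount(&folio));    /* 16 */
	printf("estimated sharers = %d\n", toy_estimated_sharers(&folio)); /* 1 */
	return 0;
}

Run as-is, the model reports a total of 16 but an estimate of 1, which is
why the total mapcount is a poor "is this folio shared?" signal for large
folios.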
Fixes: 07e8c82b5eff ("madvise: convert madvise_cold_or_pageout_pte_range() to use folios")
Signed-off-by: Yin Fengwei
Reviewed-by: Yu Zhao
Reviewed-by: Ryan Roberts
---
 mm/madvise.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/madvise.c b/mm/madvise.c
index 886f06066622..148b46beb039 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -383,7 +383,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		folio = pfn_folio(pmd_pfn(orig_pmd));
 
 		/* Do not interfere with other mappings of this folio */
-		if (folio_mapcount(folio) != 1)
+		if (folio_estimated_sharers(folio) != 1)
 			goto huge_unlock;
 
 		if (pageout_anon_only_filter && !folio_test_anon(folio))
@@ -457,7 +457,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		if (folio_test_large(folio)) {
 			int err;
 
-			if (folio_mapcount(folio) != 1)
+			if (folio_estimated_sharers(folio) != 1)
 				break;
 			if (pageout_anon_only_filter && !folio_test_anon(folio))
 				break;
-- 
2.39.2


From: Yin Fengwei
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, stable@vger.kernel.org,
    akpm@linux-foundation.org, willy@infradead.org, vishal.moola@gmail.com,
    wangkefeng.wang@huawei.com, minchan@kernel.org, yuzhao@google.com,
    david@redhat.com, ryan.roberts@arm.com, shy828301@gmail.com
Cc: fengwei.yin@intel.com
Subject: [PATCH 2/2] madvise: don't use mapcount() against large folio for sharing check
Date: Sat, 29 Jul 2023 00:13:56 +0800
Message-Id: <20230728161356.1784568-3-fengwei.yin@intel.com>
In-Reply-To: <20230728161356.1784568-1-fengwei.yin@intel.com>
References: <20230728161356.1784568-1-fengwei.yin@intel.com>

Commits 98b211d6415f ("madvise: convert madvise_free_pte_range() to use a
folio") and fc986a38b670 ("mm: huge_memory: convert madvise_free_huge_pmd
to use a folio") replaced page_mapcount() with folio_mapcount() to check
whether the folio is shared by another mapping. That is not correct for a
large folio: folio_mapcount() returns the total mapcount of the large folio,
which is not suitable for detecting whether the folio is shared.

Use folio_estimated_sharers() instead, which returns an estimated number of
sharers. The estimate is not 100% accurate, but it is good enough for the
madvise case here.

Fixes: 98b211d6415f ("madvise: convert madvise_free_pte_range() to use a folio")
Fixes: fc986a38b670 ("mm: huge_memory: convert madvise_free_huge_pmd to use a folio")
Signed-off-by: Yin Fengwei
Reviewed-by: Yu Zhao
Reviewed-by: Ryan Roberts
---
 mm/huge_memory.c | 2 +-
 mm/madvise.c     | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index eb3678360b97..68c890875257 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1613,7 +1613,7 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	 * If other processes are mapping this folio, we couldn't discard
 	 * the folio unless they all do MADV_FREE so let's skip the folio.
 	 */
-	if (folio_mapcount(folio) != 1)
+	if (folio_estimated_sharers(folio) != 1)
 		goto out;
 
 	if (!folio_trylock(folio))
diff --git a/mm/madvise.c b/mm/madvise.c
index 148b46beb039..55bdf641abfa 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -678,7 +678,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 		if (folio_test_large(folio)) {
 			int err;
 
-			if (folio_mapcount(folio) != 1)
+			if (folio_estimated_sharers(folio) != 1)
 				break;
 			if (!folio_trylock(folio))
 				break;
-- 
2.39.2
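As a rough way to exercise the paths this series touches from userspace,
the sketch below maps an anonymous region, asks for THP so it is likely
backed by a large folio, and then issues MADV_PAGEOUT (the patch 1/2 path)
and MADV_FREE (the patch 2/2 paths). It is illustration only: it assumes
THP is enabled on the system, the 2 MiB size is an arbitrary choice, and
the program only issues the advice; it does not verify the in-kernel
sharing check.

/*
 * Minimal sketch: drive the MADV_PAGEOUT and MADV_FREE paths touched by
 * this series.  Assumes THP is available; illustration only.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MADV_FREE
#define MADV_FREE 8		/* lazily free the pages (Linux >= 4.5) */
#endif
#ifndef MADV_PAGEOUT
#define MADV_PAGEOUT 21		/* reclaim the pages (Linux >= 5.4) */
#endif

int main(void)
{
	const size_t len = 2UL << 20;	/* 2 MiB: one PMD-sized region on x86-64 */

	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Ask for THP so the region is likely backed by a large folio. */
	if (madvise(buf, len, MADV_HUGEPAGE))
		perror("madvise(MADV_HUGEPAGE)");

	memset(buf, 0xaa, len);		/* fault the memory in */

	/* Patch 1/2: madvise_cold_or_pageout_pte_range() */
	if (madvise(buf, len, MADV_PAGEOUT))
		perror("madvise(MADV_PAGEOUT)");

	memset(buf, 0x55, len);		/* fault it back in */

	/* Patch 2/2: madvise_free_huge_pmd() / madvise_free_pte_range() */
	if (madvise(buf, len, MADV_FREE))
		perror("madvise(MADV_FREE)");

	munmap(buf, len);
	return 0;
}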