From: Yin Fengwei <fengwei.yin@intel.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, yuzhao@google.com,
    ryan.roberts@arm.com, shy828301@gmail.com, akpm@linux-foundation.org,
    willy@infradead.org, david@redhat.com
Cc: fengwei.yin@intel.com
Subject: [RFC PATCH 1/3] mm: add function folio_in_range()
Date: Sat, 8 Jul 2023 00:52:19 +0800
Message-Id: <20230707165221.4076590-2-fengwei.yin@intel.com>
In-Reply-To: <20230707165221.4076590-1-fengwei.yin@intel.com>
References: <20230707165221.4076590-1-fengwei.yin@intel.com>

folio_in_range() will be used to check whether a folio is mapped to a
specific VMA and whether the mapping address of the folio is within the
given range.
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
---
 mm/internal.h | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/mm/internal.h b/mm/internal.h
index f1276d90484ad..66117523d7d71 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -585,6 +585,32 @@ extern long faultin_vma_page_range(struct vm_area_struct *vma,
 				   bool write, int *locked);
 extern bool mlock_future_ok(struct mm_struct *mm, unsigned long flags,
 			    unsigned long bytes);
+
+static inline bool
+folio_in_range(struct folio *folio, struct vm_area_struct *vma,
+		unsigned long start, unsigned long end)
+{
+	pgoff_t pgoff, addr;
+	unsigned long vma_pglen = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
+
+	VM_WARN_ON_FOLIO(folio_test_ksm(folio), folio);
+	if (start < vma->vm_start)
+		start = vma->vm_start;
+
+	if (end > vma->vm_end)
+		end = vma->vm_end;
+
+	pgoff = folio_pgoff(folio);
+
+	/* if folio start address is not in vma range */
+	if (pgoff < vma->vm_pgoff || pgoff > vma->vm_pgoff + vma_pglen)
+		return false;
+
+	addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
+
+	return ((addr >= start) && (addr + folio_size(folio) <= end));
+}
+
 /*
  * mlock_vma_folio() and munlock_vma_folio():
  * should be called with vma's mmap_lock held for read or write,
-- 
2.39.2
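As a quick way to sanity-check the containment arithmetic above outside
the kernel, here is a minimal userspace sketch. The vma_stub type and
folio_in_range_stub() are illustrative stand-ins rather than kernel
APIs, and all addresses are invented: a "VMA" maps [vm_start, vm_end)
starting at file offset vm_pgoff, and a folio of nr pages sits at file
offset pgoff.

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

struct vma_stub {
	unsigned long vm_start, vm_end;	/* virtual address range */
	unsigned long vm_pgoff;		/* file offset of vm_start, in pages */
};

static bool folio_in_range_stub(unsigned long pgoff, unsigned long nr,
				const struct vma_stub *vma,
				unsigned long start, unsigned long end)
{
	unsigned long vma_pglen = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
	unsigned long addr;

	/* Clamp the queried range to the VMA, as the patch does. */
	if (start < vma->vm_start)
		start = vma->vm_start;
	if (end > vma->vm_end)
		end = vma->vm_end;

	/* The folio's first page must be inside the range the VMA maps. */
	if (pgoff < vma->vm_pgoff || pgoff > vma->vm_pgoff + vma_pglen)
		return false;

	/* Virtual address where the folio's first page maps. */
	addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);

	return addr >= start && addr + (nr << PAGE_SHIFT) <= end;
}

int main(void)
{
	/* A VMA mapping 16 pages at 0x100000, from file offset page 0. */
	struct vma_stub vma = { 0x100000, 0x100000 + 16 * PAGE_SIZE, 0 };

	/* 4-page folio at file offset page 4: fully inside, prints 1. */
	printf("%d\n", folio_in_range_stub(4, 4, &vma,
					   vma.vm_start, vma.vm_end));
	/* 4-page folio at file offset page 14: crosses vm_end, prints 0. */
	printf("%d\n", folio_in_range_stub(14, 4, &vma,
					   vma.vm_start, vma.vm_end));
	return 0;
}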
From: Yin Fengwei <fengwei.yin@intel.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, yuzhao@google.com,
    ryan.roberts@arm.com, shy828301@gmail.com, akpm@linux-foundation.org,
    willy@infradead.org, david@redhat.com
Cc: fengwei.yin@intel.com
Subject: [RFC PATCH 2/3] mm: handle large folio when large folio in VM_LOCKED VMA range
Date: Sat, 8 Jul 2023 00:52:20 +0800
Message-Id: <20230707165221.4076590-3-fengwei.yin@intel.com>
In-Reply-To: <20230707165221.4076590-1-fengwei.yin@intel.com>
References: <20230707165221.4076590-1-fengwei.yin@intel.com>

If a large folio is fully within the range of a VM_LOCKED VMA, mlock it
so that page reclaim does not pick it; otherwise, reclaim may split the
large folio and then mlock each page again.

For a large folio which crosses the boundary of a VM_LOCKED VMA, it is
better not to mlock it. That way, if the system comes under memory
pressure, the folio can be split and the pages outside the VM_LOCKED
VMA can be reclaimed.

Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
---
 mm/internal.h | 11 ++++++++---
 mm/rmap.c     |  3 ++-
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 66117523d7d71..c7b8f0b008d81 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -637,7 +637,8 @@ static inline void mlock_vma_folio(struct folio *folio,
 	 * still be set while VM_SPECIAL bits are added: so ignore it then.
 	 */
 	if (unlikely((vma->vm_flags & (VM_LOCKED|VM_SPECIAL)) == VM_LOCKED) &&
-	    (compound || !folio_test_large(folio)))
+	    (compound || !folio_test_large(folio) ||
+	     folio_in_range(folio, vma, vma->vm_start, vma->vm_end)))
 		mlock_folio(folio);
 }
 
@@ -645,8 +646,12 @@ void munlock_folio(struct folio *folio);
 static inline void munlock_vma_folio(struct folio *folio,
 			struct vm_area_struct *vma, bool compound)
 {
-	if (unlikely(vma->vm_flags & VM_LOCKED) &&
-	    (compound || !folio_test_large(folio)))
+	/*
+	 * To handle the case that a mlocked large folio is unmapped from
+	 * the VMA piece by piece, allow munlocking a large folio which is
+	 * only partially mapped to the VMA.
+	 */
+	if (unlikely(vma->vm_flags & VM_LOCKED))
 		munlock_folio(folio);
 }
 
diff --git a/mm/rmap.c b/mm/rmap.c
index 2668f5ea35342..7d6547d1bd096 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -817,7 +817,8 @@ static bool folio_referenced_one(struct folio *folio,
 		address = pvmw.address;
 
 		if ((vma->vm_flags & VM_LOCKED) &&
-		    (!folio_test_large(folio) || !pvmw.pte)) {
+		    (!folio_test_large(folio) || !pvmw.pte ||
+		     folio_in_range(folio, vma, vma->vm_start, vma->vm_end))) {
 			/* Restore the mlock which got missed */
 			mlock_vma_folio(folio, vma, !pvmw.pte);
 			page_vma_mapped_walk_done(&pvmw);
-- 
2.39.2
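The net effect of the changed test in mlock_vma_folio() can be read as a
small predicate. Below is an illustrative stand-in: should_mlock() is a
made-up name, and each argument models one kernel-side check (VM_LOCKED,
VM_SPECIAL, the compound flag, folio_test_large(), folio_in_range()).

#include <stdbool.h>
#include <stdio.h>

/* Sketch of the mlock decision; all inputs are stand-in booleans. */
static bool should_mlock(bool vm_locked, bool vm_special, bool compound,
			 bool large, bool fully_in_vma)
{
	/* VM_LOCKED must be set with no VM_SPECIAL bits. */
	if (!vm_locked || vm_special)
		return false;
	/*
	 * Mapped as a whole compound page, a base-size folio, or a large
	 * folio known to sit entirely inside the VMA: safe to mlock.
	 */
	return compound || !large || fully_in_vma;
}

int main(void)
{
	/* Large folio straddling the VMA boundary: left for reclaim, 0. */
	printf("%d\n", should_mlock(true, false, false, true, false));
	/* Large folio fully inside a VM_LOCKED VMA: mlocked, 1. */
	printf("%d\n", should_mlock(true, false, false, true, true));
	return 0;
}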
From: Yin Fengwei <fengwei.yin@intel.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, yuzhao@google.com,
    ryan.roberts@arm.com, shy828301@gmail.com, akpm@linux-foundation.org,
    willy@infradead.org, david@redhat.com
Cc: fengwei.yin@intel.com
Subject: [RFC PATCH 3/3] mm: mlock: update mlock_pte_range to handle large folio
Date: Sat, 8 Jul 2023 00:52:21 +0800
Message-Id: <20230707165221.4076590-4-fengwei.yin@intel.com>
In-Reply-To: <20230707165221.4076590-1-fengwei.yin@intel.com>
References: <20230707165221.4076590-1-fengwei.yin@intel.com>

Currently, the kernel only mlocks base-size folios during the mlock
syscall. Add large folio support with the following rules:

  - Only mlock a large folio when it is fully within the VM_LOCKED VMA
    range.

  - If there is a CoW folio, mlock the CoW folio as well, since it is
    also within the VM_LOCKED VMA range.

  - munlock applies to any large folio which is within the VMA range or
    which crosses the VMA boundary.

The last rule handles the case where a large folio is mlocked and the
VMA is later split in the middle of the folio, leaving the folio
crossing a VMA boundary.

Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
---
 mm/mlock.c | 103 ++++++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 95 insertions(+), 8 deletions(-)

diff --git a/mm/mlock.c b/mm/mlock.c
index d7db94519884d..e09a481062ef5 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -305,6 +305,64 @@ void munlock_folio(struct folio *folio)
 	local_unlock(&mlock_fbatch.lock);
 }
 
+void mlock_folio_range(struct folio *folio, struct vm_area_struct *vma,
+		pte_t *pte, unsigned long addr, unsigned int nr)
+{
+	struct folio *cow_folio;
+	unsigned int step = 1;
+
+	mlock_folio(folio);
+	if (nr == 1)
+		return;
+
+	for (; nr > 0; pte += step, addr += (step << PAGE_SHIFT), nr -= step) {
+		pte_t ptent;
+
+		step = 1;
+		ptent = ptep_get(pte);
+
+		if (!pte_present(ptent))
+			continue;
+
+		cow_folio = vm_normal_folio(vma, addr, ptent);
+		if (!cow_folio || cow_folio == folio)
+			continue;
+
+		mlock_folio(cow_folio);
+		step = min_t(unsigned int, nr, folio_nr_pages(cow_folio));
+	}
+}
+
+void munlock_folio_range(struct folio *folio, struct vm_area_struct *vma,
+		pte_t *pte, unsigned long addr, unsigned int nr)
+{
+	struct folio *cow_folio;
+	unsigned int step = 1;
+
+	munlock_folio(folio);
+	if (nr == 1)
+		return;
+
+	for (; nr > 0; pte += step, addr += (step << PAGE_SHIFT), nr -= step) {
+		pte_t ptent;
+
+		step = 1;
+		ptent = ptep_get(pte);
+
+		if (!pte_present(ptent))
+			continue;
+
+		cow_folio = vm_normal_folio(vma, addr, ptent);
+		if (!cow_folio || cow_folio == folio)
+			continue;
+
+		munlock_folio(cow_folio);
+		step = min_t(unsigned int, nr, folio_nr_pages(cow_folio));
+	}
+}
+
 static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 			   unsigned long end, struct mm_walk *walk)
 
@@ -314,6 +372,7 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 	pte_t *start_pte, *pte;
 	pte_t ptent;
 	struct folio *folio;
+	unsigned int step = 1, nr;
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl) {
@@ -329,24 +388,52 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 		goto out;
 	}
 
-	start_pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
+	pte = start_pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
 	if (!start_pte) {
 		walk->action = ACTION_AGAIN;
 		return 0;
 	}
-	for (pte = start_pte; addr != end; pte++, addr += PAGE_SIZE) {
+
+	for (; addr != end; pte += step, addr += (step << PAGE_SHIFT)) {
+		step = 1;
 		ptent = ptep_get(pte);
 		if (!pte_present(ptent))
 			continue;
 		folio = vm_normal_folio(vma, addr, ptent);
 		if (!folio || folio_is_zone_device(folio))
 			continue;
-		if (folio_test_large(folio))
-			continue;
-		if (vma->vm_flags & VM_LOCKED)
-			mlock_folio(folio);
-		else
-			munlock_folio(folio);
+
+		folio_get(folio);
+		nr = folio_nr_pages(folio) + folio_pfn(folio) - pte_pfn(ptent);
+		nr = min_t(unsigned int, nr, (end - addr) >> PAGE_SHIFT);
+
+		if (vma->vm_flags & VM_LOCKED) {
+			/*
+			 * Only mlock a base-size folio, or a large folio
+			 * fully within the VMA range.
+			 */
+			if (folio_test_large(folio) &&
+			    !folio_in_range(folio, vma,
+					vma->vm_start, vma->vm_end)) {
+				folio_put(folio);
+				continue;
+			}
+			mlock_folio_range(folio, vma, pte, addr, nr);
+		} else {
+			/*
+			 * Allow munlocking a large folio which is only
+			 * partially mapped to the VMA, as it is possible
+			 * that the folio was mlocked and the VMA split
+			 * later.
+			 *
+			 * Under memory pressure, such a large folio can be
+			 * split, and the pages which are not in a VM_LOCKED
+			 * VMA can be reclaimed.
+			 */
+			munlock_folio_range(folio, vma, pte, addr, nr);
+		}
+
+		step = nr;
+		folio_put(folio);
 	}
 	pte_unmap(start_pte);
 out:
-- 
2.39.2
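The per-folio stepping in mlock_pte_range() above is driven by
nr = folio_nr_pages() + folio_pfn() - pte_pfn(), clamped to what is left
of the walk range. Here is a small userspace sketch of just that
arithmetic; folio_step() is a made-up helper and all pfn and address
values are invented.

#include <stdio.h>

#define PAGE_SHIFT 12

/* Pages to advance: the rest of the folio, clamped to the walk range. */
static unsigned int folio_step(unsigned long folio_pfn,
			       unsigned int folio_nr_pages,
			       unsigned long pte_pfn,
			       unsigned long addr, unsigned long end)
{
	/* Pages of this folio left, starting at the page this pte maps. */
	unsigned int nr = folio_nr_pages + folio_pfn - pte_pfn;
	unsigned long walk_left = (end - addr) >> PAGE_SHIFT;

	return nr < walk_left ? nr : (unsigned int)walk_left;
}

int main(void)
{
	/*
	 * A 16-page folio at pfn 0x1000, first met at its 4th page with
	 * 32 pages of walk range left: step over the remaining 12 pages.
	 */
	printf("%u\n", folio_step(0x1000, 16, 0x1004, 0,
				  32UL << PAGE_SHIFT));
	/* Same folio, but only 8 pages left before end: step is 8. */
	printf("%u\n", folio_step(0x1000, 16, 0x1004, 24UL << PAGE_SHIFT,
				  32UL << PAGE_SHIFT));
	return 0;
}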