From nobody Fri Dec 19 02:51:34 2025
From: Lance Yang
To: akpm@linux-foundation.org
Cc: willy@infradead.org, maskray@google.com, ziy@nvidia.com,
	ryan.roberts@arm.com, david@redhat.com, 21cnbao@gmail.com,
	mhocko@suse.com, fengwei.yin@intel.com, zokeefe@google.com,
	shy828301@gmail.com, xiehuan09@gmail.com, libang.li@antgroup.com,
	wangkefeng.wang@huawei.com, songmuchun@bytedance.com, peterx@redhat.com,
	minchan@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Lance Yang
Subject: [PATCH v3 1/3] mm/rmap: remove duplicated exit code in pagewalk loop
Date: Mon, 29 Apr 2024 21:23:06 +0800
Message-Id: <20240429132308.38794-2-ioworker0@gmail.com>
In-Reply-To: <20240429132308.38794-1-ioworker0@gmail.com>
References: <20240429132308.38794-1-ioworker0@gmail.com>

Introduce the labels walk_done and walk_done_err as exit points to
eliminate duplicated exit code in the pagewalk loop.

Signed-off-by: Lance Yang
---
 mm/rmap.c | 40 +++++++++++++++-------------------------
 1 file changed, 15 insertions(+), 25 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 7faa60bc3e4d..7e2575d669a9 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1675,9 +1675,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			/* Restore the mlock which got missed */
 			if (!folio_test_large(folio))
 				mlock_vma_folio(folio, vma);
-			page_vma_mapped_walk_done(&pvmw);
-			ret = false;
-			break;
+			goto walk_done_err;
 		}

 		pfn = pte_pfn(ptep_get(pvmw.pte));
@@ -1715,11 +1713,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			 */
 			if (!anon) {
 				VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
-				if (!hugetlb_vma_trylock_write(vma)) {
-					page_vma_mapped_walk_done(&pvmw);
-					ret = false;
-					break;
-				}
+				if (!hugetlb_vma_trylock_write(vma))
+					goto walk_done_err;
 				if (huge_pmd_unshare(mm, vma, address, pvmw.pte)) {
 					hugetlb_vma_unlock_write(vma);
 					flush_tlb_range(vma,
@@ -1734,8 +1729,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 					 * actual page and drop map count
 					 * to zero.
 					 */
-					page_vma_mapped_walk_done(&pvmw);
-					break;
+					goto walk_done;
 				}
 				hugetlb_vma_unlock_write(vma);
 			}
@@ -1807,9 +1801,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			if (unlikely(folio_test_swapbacked(folio) !=
 					folio_test_swapcache(folio))) {
 				WARN_ON_ONCE(1);
-				ret = false;
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				goto walk_done_err;
 			}

 			/* MADV_FREE page check */
@@ -1848,23 +1840,17 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				 */
 				set_pte_at(mm, address, pvmw.pte, pteval);
 				folio_set_swapbacked(folio);
-				ret = false;
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				goto walk_done_err;
 			}

 			if (swap_duplicate(entry) < 0) {
 				set_pte_at(mm, address, pvmw.pte, pteval);
-				ret = false;
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				goto walk_done_err;
 			}
 			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
 				swap_free(entry);
 				set_pte_at(mm, address, pvmw.pte, pteval);
-				ret = false;
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				goto walk_done_err;
 			}

 			/* See folio_try_share_anon_rmap(): clear PTE first. */
@@ -1872,9 +1858,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			    folio_try_share_anon_rmap_pte(folio, subpage)) {
 				swap_free(entry);
 				set_pte_at(mm, address, pvmw.pte, pteval);
-				ret = false;
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				goto walk_done_err;
 			}
 			if (list_empty(&mm->mmlist)) {
 				spin_lock(&mmlist_lock);
@@ -1914,6 +1898,12 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 		if (vma->vm_flags & VM_LOCKED)
 			mlock_drain_local();
 		folio_put(folio);
+		continue;
+walk_done_err:
+		ret = false;
+walk_done:
+		page_vma_mapped_walk_done(&pvmw);
+		break;
 	}

 	mmu_notifier_invalidate_range_end(&range);
-- 
2.33.1
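For readers unfamiliar with this cleanup idiom, here is a minimal, stand-alone C sketch of the exit-label pattern the patch introduces. It is a toy loop, not the kernel pagewalk; the function and label names only echo the ones above.

```c
#include <stdbool.h>
#include <stdio.h>

/*
 * Toy stand-in for the consolidated exit points: every failure path jumps
 * to walk_done_err (which records the failure), falls through to walk_done
 * (which performs the common cleanup exactly once), and leaves the loop.
 */
static bool walk_items(const int *items, int n)
{
	bool ret = true;

	for (int i = 0; i < n; i++) {
		if (items[i] < 0)
			goto walk_done_err;	/* was: cleanup; ret = false; break; */
		if (items[i] == 0)
			goto walk_done;		/* was: cleanup; break; */
		printf("handled %d\n", items[i]);
		continue;
walk_done_err:
		ret = false;
walk_done:
		printf("cleanup runs exactly once\n");
		break;
	}
	return ret;
}

int main(void)
{
	const int items[] = { 3, 7, -1, 5 };

	return walk_items(items, 4) ? 0 : 1;
}
```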
From nobody Fri Dec 19 02:51:34 2025
From: Lance Yang
To: akpm@linux-foundation.org
Cc: willy@infradead.org, maskray@google.com, ziy@nvidia.com,
	ryan.roberts@arm.com, david@redhat.com, 21cnbao@gmail.com,
	mhocko@suse.com, fengwei.yin@intel.com, zokeefe@google.com,
	shy828301@gmail.com, xiehuan09@gmail.com, libang.li@antgroup.com,
	wangkefeng.wang@huawei.com, songmuchun@bytedance.com, peterx@redhat.com,
	minchan@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Lance Yang
Subject: [PATCH v3 2/3] mm/rmap: integrate PMD-mapped folio splitting into pagewalk loop
Date: Mon, 29 Apr 2024 21:23:07 +0800
Message-Id: <20240429132308.38794-3-ioworker0@gmail.com>
In-Reply-To: <20240429132308.38794-1-ioworker0@gmail.com>
References: <20240429132308.38794-1-ioworker0@gmail.com>

In preparation for supporting try_to_unmap_one() to unmap PMD-mapped
folios, start the pagewalk first, then call split_huge_pmd_address()
to split the folio.
Suggested-by: David Hildenbrand
Signed-off-by: Lance Yang
---
 include/linux/huge_mm.h |  2 ++
 mm/huge_memory.c        | 42 +++++++++++++++++++++--------------------
 mm/rmap.c               | 26 +++++++++++++++++++------
 3 files changed, 44 insertions(+), 26 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index c8d3ec116e29..2daadfcc6776 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -36,6 +36,8 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		    pmd_t *pmd, unsigned long addr, pgprot_t newprot,
 		    unsigned long cp_flags);
+void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
+			   pmd_t *pmd, bool freeze, struct folio *folio);

 vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write);
 vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8261b5669397..145505a1dd05 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2584,6 +2584,27 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	pmd_populate(mm, pmd, pgtable);
 }

+void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
+			   pmd_t *pmd, bool freeze, struct folio *folio)
+{
+	VM_WARN_ON_ONCE(folio && !folio_test_pmd_mappable(folio));
+	VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
+	VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
+	VM_BUG_ON(freeze && !folio);
+
+	/*
+	 * When the caller requests to set up a migration entry, we
+	 * require a folio to check the PMD against. Otherwise, there
+	 * is a risk of replacing the wrong folio.
+	 */
+	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
+	    is_pmd_migration_entry(*pmd)) {
+		if (folio && folio != pmd_folio(*pmd))
+			return;
+		__split_huge_pmd_locked(vma, pmd, address, freeze);
+	}
+}
+
 void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 		      unsigned long address, bool freeze, struct folio *folio)
 {
@@ -2595,26 +2616,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 				(address & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE);
 	mmu_notifier_invalidate_range_start(&range);
 	ptl = pmd_lock(vma->vm_mm, pmd);
-
-	/*
-	 * If caller asks to setup a migration entry, we need a folio to check
-	 * pmd against. Otherwise we can end up replacing wrong folio.
-	 */
-	VM_BUG_ON(freeze && !folio);
-	VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
-
-	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
-	    is_pmd_migration_entry(*pmd)) {
-		/*
-		 * It's safe to call pmd_page when folio is set because it's
-		 * guaranteed that pmd is present.
-		 */
-		if (folio && folio != pmd_folio(*pmd))
-			goto out;
-		__split_huge_pmd_locked(vma, pmd, range.start, freeze);
-	}
-
-out:
+	split_huge_pmd_locked(vma, range.start, pmd, freeze, folio);
 	spin_unlock(ptl);
 	mmu_notifier_invalidate_range_end(&range);
 }
diff --git a/mm/rmap.c b/mm/rmap.c
index 7e2575d669a9..e42f436c7ff3 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1636,9 +1636,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	if (flags & TTU_SYNC)
 		pvmw.flags = PVMW_SYNC;

-	if (flags & TTU_SPLIT_HUGE_PMD)
-		split_huge_pmd_address(vma, address, false, folio);
-
 	/*
 	 * For THP, we have to assume the worse case ie pmd for invalidation.
 	 * For hugetlb, it could be much worse if we need to do pud
@@ -1650,6 +1647,10 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	range.end = vma_address_end(&pvmw);
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
 				address, range.end);
+	if (flags & TTU_SPLIT_HUGE_PMD) {
+		range.start = address & HPAGE_PMD_MASK;
+		range.end = (address & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE;
+	}
 	if (folio_test_hugetlb(folio)) {
 		/*
 		 * If sharing is possible, start and end will be adjusted
@@ -1664,9 +1665,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	mmu_notifier_invalidate_range_start(&range);

 	while (page_vma_mapped_walk(&pvmw)) {
-		/* Unexpected PMD-mapped THP? */
-		VM_BUG_ON_FOLIO(!pvmw.pte, folio);
-
 		/*
 		 * If the folio is in an mlock()d vma, we must not swap it out.
 		 */
@@ -1678,6 +1676,22 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			goto walk_done_err;
 		}

+		if (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)) {
+			/*
+			 * We temporarily have to drop the PTL and start once
+			 * again from that now-PTE-mapped page table.
+			 */
+			split_huge_pmd_locked(vma, range.start, pvmw.pmd, false,
+					      folio);
+			pvmw.pmd = NULL;
+			spin_unlock(pvmw.ptl);
+			flags &= ~TTU_SPLIT_HUGE_PMD;
+			continue;
+		}
+
+		/* Unexpected PMD-mapped THP? */
+		VM_BUG_ON_FOLIO(!pvmw.pte, folio);
+
 		pfn = pte_pfn(ptep_get(pvmw.pte));
 		subpage = folio_page(folio, pfn - folio_pfn(folio));
 		address = pvmw.address;
-- 
2.33.1
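As a side note on the mmu_notifier range widening above, here is a minimal user-space sketch of the mask arithmetic, assuming the common 2 MiB PMD size. HPAGE_PMD_SIZE and HPAGE_PMD_MASK are redefined locally purely for illustration; the kernel provides its own definitions.

```c
#include <stdio.h>

#define HPAGE_PMD_SIZE	(2UL * 1024 * 1024)	/* assumed: 2 MiB PMD */
#define HPAGE_PMD_MASK	(~(HPAGE_PMD_SIZE - 1))

int main(void)
{
	unsigned long address = 0x7f1234607000UL;		/* arbitrary page-aligned VA */
	unsigned long start = address & HPAGE_PMD_MASK;		/* round down to the PMD boundary */
	unsigned long end = start + HPAGE_PMD_SIZE;		/* cover the whole PMD */

	printf("addr  %#lx\nstart %#lx\nend   %#lx\n", address, start, end);
	return 0;
}
```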
From nobody Fri Dec 19 02:51:34 2025
From: Lance Yang
To: akpm@linux-foundation.org
Cc: willy@infradead.org, maskray@google.com, ziy@nvidia.com,
	ryan.roberts@arm.com, david@redhat.com, 21cnbao@gmail.com,
	mhocko@suse.com, fengwei.yin@intel.com, zokeefe@google.com,
	shy828301@gmail.com, xiehuan09@gmail.com, libang.li@antgroup.com,
	wangkefeng.wang@huawei.com, songmuchun@bytedance.com, peterx@redhat.com,
	minchan@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Lance Yang
Subject: [PATCH v3 3/3] mm/vmscan: avoid split lazyfree THP during shrink_folio_list()
Date: Mon, 29 Apr 2024 21:23:08 +0800
Message-Id: <20240429132308.38794-4-ioworker0@gmail.com>
In-Reply-To: <20240429132308.38794-1-ioworker0@gmail.com>
References: <20240429132308.38794-1-ioworker0@gmail.com>

When the user no longer requires the pages, they would use
madvise(MADV_FREE) to mark the pages as lazy free. Subsequently, they
typically would not re-write to that memory again.

During memory reclaim, if we detect that the large folio and its PMD are
both still marked as clean and there are no unexpected references (such
as GUP), we can just discard the memory lazily, improving the efficiency
of memory reclamation in this case.
On an Intel i5 CPU, reclaiming 1GiB of lazyfree THPs using
mem_cgroup_force_empty() results in the following runtimes in seconds
(shorter is better):

--------------------------------------------
|     Old      |     New      |   Change    |
--------------------------------------------
|   0.683426   |   0.049197   |   -92.80%   |
--------------------------------------------

Suggested-by: Zi Yan
Suggested-by: David Hildenbrand
Signed-off-by: Lance Yang
---
 include/linux/huge_mm.h |  2 ++
 mm/huge_memory.c        | 75 +++++++++++++++++++++++++++++++++++++++++
 mm/rmap.c               |  3 ++
 3 files changed, 80 insertions(+)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 2daadfcc6776..fd330f72b4f3 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -38,6 +38,8 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		    unsigned long cp_flags);
 void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
 			   pmd_t *pmd, bool freeze, struct folio *folio);
+bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
+			   pmd_t *pmdp, struct folio *folio);

 vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write);
 vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 145505a1dd05..d35d526ed48f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2690,6 +2690,81 @@ static void unmap_folio(struct folio *folio)
 	try_to_unmap_flush();
 }

+static bool __discard_trans_pmd_locked(struct vm_area_struct *vma,
+				       unsigned long addr, pmd_t *pmdp,
+				       struct folio *folio)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	int ref_count, map_count;
+	pmd_t orig_pmd = *pmdp;
+	struct mmu_gather tlb;
+	struct page *page;
+
+	if (pmd_dirty(orig_pmd) || folio_test_dirty(folio))
+		return false;
+	if (unlikely(!pmd_present(orig_pmd) || !pmd_trans_huge(orig_pmd)))
+		return false;
+
+	page = pmd_page(orig_pmd);
+	if (unlikely(page_folio(page) != folio))
+		return false;
+
+	tlb_gather_mmu(&tlb, mm);
+	orig_pmd = pmdp_huge_get_and_clear(mm, addr, pmdp);
+	tlb_remove_pmd_tlb_entry(&tlb, pmdp, addr);
+
+	/*
+	 * Syncing against concurrent GUP-fast:
+	 * - clear PMD; barrier; read refcount
+	 * - inc refcount; barrier; read PMD
+	 */
+	smp_mb();
+
+	ref_count = folio_ref_count(folio);
+	map_count = folio_mapcount(folio);
+
+	/*
+	 * Order reads for folio refcount and dirty flag
+	 * (see comments in __remove_mapping()).
+	 */
+	smp_rmb();
+
+	/*
+	 * If the PMD or folio is redirtied at this point, or if there are
+	 * unexpected references, we will give up to discard this folio
+	 * and remap it.
+	 *
+	 * The only folio refs must be one from isolation plus the rmap(s).
+	 */
+	if (ref_count != map_count + 1 || folio_test_dirty(folio) ||
+	    pmd_dirty(orig_pmd)) {
+		set_pmd_at(mm, addr, pmdp, orig_pmd);
+		return false;
+	}
+
+	folio_remove_rmap_pmd(folio, page, vma);
+	zap_deposited_table(mm, pmdp);
+	add_mm_counter(mm, MM_ANONPAGES, -HPAGE_PMD_NR);
+	folio_put(folio);
+
+	return true;
+}
+
+bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
+			   pmd_t *pmdp, struct folio *folio)
+{
+	VM_WARN_ON_FOLIO(!folio_test_pmd_mappable(folio), folio);
+	VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+	VM_WARN_ON_ONCE(!IS_ALIGNED(addr, HPAGE_PMD_SIZE));
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	if (folio_test_anon(folio) && !folio_test_swapbacked(folio))
+		return __discard_trans_pmd_locked(vma, addr, pmdp, folio);
+#endif
+
+	return false;
+}
+
 static void remap_page(struct folio *folio, unsigned long nr)
 {
 	int i = 0;
diff --git a/mm/rmap.c b/mm/rmap.c
index e42f436c7ff3..ab37af4f47aa 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1677,6 +1677,9 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 		}

 		if (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)) {
+			if (unmap_huge_pmd_locked(vma, range.start, pvmw.pmd,
+						  folio))
+				goto walk_done;
 			/*
 			 * We temporarily have to drop the PTL and start once
 			 * again from that now-PTE-mapped page table.
-- 
2.33.1
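As background for the reclaim path this series optimizes, here is a minimal user-space sketch of the madvise(MADV_FREE) usage pattern described in the commit message. The 2 MiB size, the MADV_HUGEPAGE hint, and the minimal error handling are illustrative choices only; whether the region actually ends up backed by a THP depends on alignment and system configuration.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define SZ_2M	(2UL * 1024 * 1024)

int main(void)
{
	/*
	 * Anonymous private mapping; MADV_HUGEPAGE asks the kernel to back it
	 * with a THP (whether that happens depends on alignment and config).
	 */
	void *buf = mmap(NULL, SZ_2M, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;
	madvise(buf, SZ_2M, MADV_HUGEPAGE);

	memset(buf, 0x5a, SZ_2M);	/* fault in and dirty the memory */

	/*
	 * Lazy free: the range stays mapped and readable, but reclaim may
	 * discard the clean pages instead of swapping them out, as long as
	 * the process does not write to them again.
	 */
	if (madvise(buf, SZ_2M, MADV_FREE))
		perror("madvise(MADV_FREE)");

	munmap(buf, SZ_2M);
	return 0;
}
```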