From: Barry Song <21cnbao@gmail.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: baolin.wang@linux.alibaba.com, chrisl@kernel.org, david@redhat.com,
	ioworker0@gmail.com, kasong@tencent.com,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	ryan.roberts@arm.com, v-songbaohua@oppo.com, x86@kernel.org,
	linux-riscv@lists.infradead.org, ying.huang@intel.com,
	zhengtangquan@oppo.com, lorenzo.stoakes@oracle.com
Subject: [PATCH v2 3/4] mm: Support batched unmap for lazyfree large folios during reclamation
Date: Mon, 13 Jan 2025 16:39:00 +1300
Message-Id: <20250113033901.68951-4-21cnbao@gmail.com>
X-Mailer: git-send-email 2.39.3 (Apple Git-146)
In-Reply-To: <20250113033901.68951-1-21cnbao@gmail.com>
References: <20250113033901.68951-1-21cnbao@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Barry Song

Currently, the PTEs and rmap of a large folio are removed one at a time.
This is not only slow but also causes the large folio to be unnecessarily
added to deferred_split, which can lead to races between the deferred_split
shrinker callback and memory reclamation. This patch releases all PTEs and
rmap entries in a batch. Currently, it only handles lazyfree large folios.
The microbenchmark below repeatedly reclaims 128MB of lazyfree large folios
whose size is 64KiB:

#include <stdio.h>
#include <sys/mman.h>
#include <string.h>
#include <time.h>

#define SIZE 128*1024*1024 // 128 MB

unsigned long read_split_deferred()
{
	FILE *file = fopen("/sys/kernel/mm/transparent_hugepage"
			"/hugepages-64kB/stats/split_deferred", "r");
	if (!file) {
		perror("Error opening file");
		return 0;
	}

	unsigned long value;
	if (fscanf(file, "%lu", &value) != 1) {
		perror("Error reading value");
		fclose(file);
		return 0;
	}

	fclose(file);
	return value;
}

int main(int argc, char *argv[])
{
	while (1) {
		volatile int *p = mmap(0, SIZE, PROT_READ | PROT_WRITE,
				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		memset((void *)p, 1, SIZE);
		madvise((void *)p, SIZE, MADV_FREE);

		clock_t start_time = clock();
		unsigned long start_split = read_split_deferred();
		madvise((void *)p, SIZE, MADV_PAGEOUT);
		clock_t end_time = clock();
		unsigned long end_split = read_split_deferred();

		double elapsed_time = (double)(end_time - start_time) / CLOCKS_PER_SEC;
		printf("Time taken by reclamation: %f seconds, split_deferred: %ld\n",
		       elapsed_time, end_split - start_split);

		munmap((void *)p, SIZE);
	}
	return 0;
}

w/o patch:
~ # ./a.out
Time taken by reclamation: 0.177418 seconds, split_deferred: 2048
Time taken by reclamation: 0.178348 seconds, split_deferred: 2048
Time taken by reclamation: 0.174525 seconds, split_deferred: 2048
Time taken by reclamation: 0.171620 seconds, split_deferred: 2048
Time taken by reclamation: 0.172241 seconds, split_deferred: 2048
Time taken by reclamation: 0.174003 seconds, split_deferred: 2048
Time taken by reclamation: 0.171058 seconds, split_deferred: 2048
Time taken by reclamation: 0.171993 seconds, split_deferred: 2048
Time taken by reclamation: 0.169829 seconds, split_deferred: 2048
Time taken by reclamation: 0.172895 seconds, split_deferred: 2048
Time taken by reclamation: 0.176063 seconds, split_deferred: 2048
Time taken by reclamation: 0.172568 seconds, split_deferred: 2048
Time taken by reclamation: 0.171185 seconds, split_deferred: 2048
Time taken by reclamation: 0.170632 seconds, split_deferred: 2048
Time taken by reclamation: 0.170208 seconds, split_deferred: 2048
Time taken by reclamation: 0.174192 seconds, split_deferred: 2048
...

w/ patch:
~ # ./a.out
Time taken by reclamation: 0.074231 seconds, split_deferred: 0
Time taken by reclamation: 0.071026 seconds, split_deferred: 0
Time taken by reclamation: 0.072029 seconds, split_deferred: 0
Time taken by reclamation: 0.071873 seconds, split_deferred: 0
Time taken by reclamation: 0.073573 seconds, split_deferred: 0
Time taken by reclamation: 0.071906 seconds, split_deferred: 0
Time taken by reclamation: 0.073604 seconds, split_deferred: 0
Time taken by reclamation: 0.075903 seconds, split_deferred: 0
Time taken by reclamation: 0.073191 seconds, split_deferred: 0
Time taken by reclamation: 0.071228 seconds, split_deferred: 0
Time taken by reclamation: 0.071391 seconds, split_deferred: 0
Time taken by reclamation: 0.071468 seconds, split_deferred: 0
Time taken by reclamation: 0.071896 seconds, split_deferred: 0
Time taken by reclamation: 0.072508 seconds, split_deferred: 0
Time taken by reclamation: 0.071884 seconds, split_deferred: 0
Time taken by reclamation: 0.072433 seconds, split_deferred: 0
Time taken by reclamation: 0.071939 seconds, split_deferred: 0
...
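Note on reproducing the numbers above: the benchmark relies on the kernel
backing the anonymous mapping with 64KiB mTHP (the split_deferred counter it
reads lives under the hugepages-64kB sysfs directory). The commands below are
an illustrative sketch, not part of the patch; the file name microbench.c and
the choice of the "always" policy are assumptions.

# Make anonymous mappings eligible for 64KiB mTHP so that MADV_FREE
# produces lazyfree large folios (per-size transparent_hugepage knob).
echo always > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled

# Build and run the microbenchmark above ("microbench.c" is a placeholder name).
gcc -O2 microbench.c -o a.out
./a.out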
Signed-off-by: Barry Song
---
 mm/rmap.c | 46 ++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 40 insertions(+), 6 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 365112af5291..3ef659310797 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1642,6 +1642,25 @@ void folio_remove_rmap_pmd(struct folio *folio, struct page *page,
 #endif
 }
 
+/* We support batch unmapping of PTEs for lazyfree large folios */
+static inline bool can_batch_unmap_folio_ptes(unsigned long addr,
+			struct folio *folio, pte_t *ptep)
+{
+	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
+	int max_nr = folio_nr_pages(folio);
+	pte_t pte = ptep_get(ptep);
+
+	if (!folio_test_anon(folio) || folio_test_swapbacked(folio))
+		return false;
+	if (pte_none(pte) || pte_unused(pte) || !pte_present(pte))
+		return false;
+	if (pte_pfn(pte) != folio_pfn(folio))
+		return false;
+
+	return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags, NULL,
+			       NULL, NULL) == max_nr;
+}
+
 /*
  * @arg: enum ttu_flags will be passed to this argument
  */
@@ -1655,6 +1674,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	bool anon_exclusive, ret = true;
 	struct mmu_notifier_range range;
 	enum ttu_flags flags = (enum ttu_flags)(long)arg;
+	int nr_pages = 1;
 	unsigned long pfn;
 	unsigned long hsz = 0;
 
@@ -1780,6 +1800,15 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				hugetlb_vma_unlock_write(vma);
 			}
 			pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
+		} else if (folio_test_large(folio) && !(flags & TTU_HWPOISON) &&
+			   can_batch_unmap_folio_ptes(address, folio, pvmw.pte)) {
+			nr_pages = folio_nr_pages(folio);
+			flush_cache_range(vma, range.start, range.end);
+			pteval = get_and_clear_full_ptes(mm, address, pvmw.pte, nr_pages, 0);
+			if (should_defer_flush(mm, flags))
+				set_tlb_ubc_flush_pending(mm, pteval, address, folio_size(folio));
+			else
+				flush_tlb_range(vma, range.start, range.end);
 		} else {
 			flush_cache_page(vma, address, pfn);
 			/* Nuke the page table entry. */
@@ -1875,7 +1904,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				 * redirtied either using the page table or a previously
 				 * obtained GUP reference.
 				 */
-				set_pte_at(mm, address, pvmw.pte, pteval);
+				set_ptes(mm, address, pvmw.pte, pteval, nr_pages);
 				folio_set_swapbacked(folio);
 				goto walk_abort;
 			} else if (ref_count != 1 + map_count) {
@@ -1888,10 +1917,10 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				 * We'll come back here later and detect if the folio was
 				 * dirtied when the additional reference is gone.
 				 */
-				set_pte_at(mm, address, pvmw.pte, pteval);
+				set_ptes(mm, address, pvmw.pte, pteval, nr_pages);
 				goto walk_abort;
 			}
-			dec_mm_counter(mm, MM_ANONPAGES);
+			add_mm_counter(mm, MM_ANONPAGES, -nr_pages);
 			goto discard;
 		}
 
@@ -1943,13 +1972,18 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			dec_mm_counter(mm, mm_counter_file(folio));
 		}
 discard:
-		if (unlikely(folio_test_hugetlb(folio)))
+		if (unlikely(folio_test_hugetlb(folio))) {
 			hugetlb_remove_rmap(folio);
-		else
-			folio_remove_rmap_pte(folio, subpage, vma);
+		} else {
+			folio_remove_rmap_ptes(folio, subpage, nr_pages, vma);
+			folio_ref_sub(folio, nr_pages - 1);
+		}
 		if (vma->vm_flags & VM_LOCKED)
 			mlock_drain_local();
 		folio_put(folio);
+		/* We have already batched the entire folio */
+		if (nr_pages > 1)
+			goto walk_done;
 		continue;
 walk_abort:
 		ret = false;
-- 
2.39.3 (Apple Git-146)