From: Pedro Demarchi Gomes
To: David Hildenbrand, Andrew Morton
Cc: Xu Xin, Chengming Zhou, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Pedro Demarchi Gomes
Subject: [PATCH 1/3] Revert "mm/ksm: convert break_ksm() from walk_page_range_vma() to folio_walk"
Date: Tue, 28 Oct 2025 10:19:43 -0300
Message-ID: <20251028131945.26445-2-pedrodemargomes@gmail.com>
In-Reply-To: <20251028131945.26445-1-pedrodemargomes@gmail.com>
References: <20251028131945.26445-1-pedrodemargomes@gmail.com>

This reverts commit e317a8d8b4f600fc7ec9725e26417030ee594f52, with one
adjustment: PageKsm(page) is changed to folio_test_ksm(page_folio(page)).

This returns break_ksm() to using walk_page_range_vma() instead of
folio_walk_start(), which will make it easier to later modify break_ksm()
to perform a proper range walk.

Suggested-by: David Hildenbrand
Signed-off-by: Pedro Demarchi Gomes
---
 mm/ksm.c | 63 ++++++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 47 insertions(+), 16 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 4f672f4f2140..2a9a7fd4c777 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -607,6 +607,47 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
 	return atomic_read(&mm->mm_users) == 0;
 }
 
+static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
+			struct mm_walk *walk)
+{
+	struct page *page = NULL;
+	spinlock_t *ptl;
+	pte_t *pte;
+	pte_t ptent;
+	int ret;
+
+	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+	if (!pte)
+		return 0;
+	ptent = ptep_get(pte);
+	if (pte_present(ptent)) {
+		page = vm_normal_page(walk->vma, addr, ptent);
+	} else if (!pte_none(ptent)) {
+		swp_entry_t entry = pte_to_swp_entry(ptent);
+
+		/*
+		 * As KSM pages remain KSM pages until freed, no need to wait
+		 * here for migration to end.
+		 */
+		if (is_migration_entry(entry))
+			page = pfn_swap_entry_to_page(entry);
+	}
+	/* return 1 if the page is a normal KSM page or a KSM-placed zero page */
+	ret = (page && folio_test_ksm(page_folio(page))) || is_ksm_zero_pte(ptent);
+	pte_unmap_unlock(pte, ptl);
+	return ret;
+}
+
+static const struct mm_walk_ops break_ksm_ops = {
+	.pmd_entry = break_ksm_pmd_entry,
+	.walk_lock = PGWALK_RDLOCK,
+};
+
+static const struct mm_walk_ops break_ksm_lock_vma_ops = {
+	.pmd_entry = break_ksm_pmd_entry,
+	.walk_lock = PGWALK_WRLOCK,
+};
+
 /*
  * We use break_ksm to break COW on a ksm page by triggering unsharing,
  * such that the ksm page will get replaced by an exclusive anonymous page.
@@ -623,26 +664,16 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
 static int break_ksm(struct vm_area_struct *vma, unsigned long addr, bool lock_vma)
 {
 	vm_fault_t ret = 0;
-
-	if (lock_vma)
-		vma_start_write(vma);
+	const struct mm_walk_ops *ops = lock_vma ?
+				&break_ksm_lock_vma_ops : &break_ksm_ops;
 
 	do {
-		bool ksm_page = false;
-		struct folio_walk fw;
-		struct folio *folio;
+		int ksm_page;
 
 		cond_resched();
-		folio = folio_walk_start(&fw, vma, addr,
-					 FW_MIGRATION | FW_ZEROPAGE);
-		if (folio) {
-			/* Small folio implies FW_LEVEL_PTE. */
-			if (!folio_test_large(folio) &&
-			    (folio_test_ksm(folio) || is_ksm_zero_pte(fw.pte)))
-				ksm_page = true;
-			folio_walk_end(&fw, vma);
-		}
-
+		ksm_page = walk_page_range_vma(vma, addr, addr + 1, ops, NULL);
+		if (WARN_ON_ONCE(ksm_page < 0))
+			return ksm_page;
 		if (!ksm_page)
 			return 0;
 		ret = handle_mm_fault(vma, addr,
-- 
2.43.0
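
For readers less familiar with the page-walk API this series builds on, here
is a minimal, illustrative sketch of the callback contract break_ksm() relies
on after this revert. It is not part of the patch; sample_pmd_entry,
sample_ops, sample_walk and condition_at() are hypothetical names. A pmd_entry
callback returns 0 to continue the walk, a positive value to stop it (and
walk_page_range_vma() hands that value back to its caller), or a negative
errno to abort:

static int sample_pmd_entry(pmd_t *pmd, unsigned long addr,
			    unsigned long next, struct mm_walk *walk)
{
	/* condition_at() is a hypothetical predicate, for illustration. */
	if (condition_at(addr))
		return 1;	/* stop; walk_page_range_vma() returns 1 */
	return 0;		/* keep walking */
}

static const struct mm_walk_ops sample_ops = {
	.pmd_entry = sample_pmd_entry,
	.walk_lock = PGWALK_RDLOCK,	/* mmap_lock held for read */
};

/* Caller sketch: mmap_lock must already be held in the chosen mode. */
static int sample_walk(struct vm_area_struct *vma,
		       unsigned long start, unsigned long end)
{
	/* <0: error; 0: whole range walked; 1: stopped early by callback */
	return walk_page_range_vma(vma, start, end, &sample_ops, NULL);
}

This is exactly the convention break_ksm_pmd_entry() above uses: returning 1
for a KSM-mapped address, which break_ksm() then turns into an UNSHARE fault.
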
From: Pedro Demarchi Gomes
To: David Hildenbrand, Andrew Morton
Cc: Xu Xin, Chengming Zhou, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Pedro Demarchi Gomes
Subject: [PATCH 2/3] ksm: perform a range-walk in break_ksm
Date: Tue, 28 Oct 2025 10:19:44 -0300
Message-ID: <20251028131945.26445-3-pedrodemargomes@gmail.com>
In-Reply-To: <20251028131945.26445-1-pedrodemargomes@gmail.com>
References: <20251028131945.26445-1-pedrodemargomes@gmail.com>

Make break_ksm() receive an address range, and change break_ksm_pmd_entry()
to perform a range walk and return the address of the first KSM page found.
This allows break_ksm() to skip unmapped regions instead of iterating over
every page address.

When unmerging large sparse VMAs, this significantly reduces runtime, as
confirmed by the benchmark in the cover letter.

Suggested-by: David Hildenbrand
Signed-off-by: Pedro Demarchi Gomes
---
 mm/ksm.c | 88 +++++++++++++++++++++++++++++++-------------------------
 1 file changed, 49 insertions(+), 39 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 2a9a7fd4c777..1d1ef0554c7c 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -607,34 +607,54 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
 	return atomic_read(&mm->mm_users) == 0;
 }
 
-static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
+struct break_ksm_arg {
+	unsigned long addr;
+};
+
+static int break_ksm_pmd_entry(pmd_t *pmdp, unsigned long addr, unsigned long end,
 			struct mm_walk *walk)
 {
-	struct page *page = NULL;
+	struct page *page;
 	spinlock_t *ptl;
-	pte_t *pte;
-	pte_t ptent;
-	int ret;
+	pte_t *start_ptep = NULL, *ptep, pte;
+	int ret = 0;
+	struct mm_struct *mm = walk->mm;
+	struct break_ksm_arg *private = (struct break_ksm_arg *) walk->private;
 
-	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
-	if (!pte)
+	if (ksm_test_exit(walk->mm))
 		return 0;
-	ptent = ptep_get(pte);
-	if (pte_present(ptent)) {
-		page = vm_normal_page(walk->vma, addr, ptent);
-	} else if (!pte_none(ptent)) {
-		swp_entry_t entry = pte_to_swp_entry(ptent);
-
-		/*
-		 * As KSM pages remain KSM pages until freed, no need to wait
-		 * here for migration to end.
-		 */
-		if (is_migration_entry(entry))
-			page = pfn_swap_entry_to_page(entry);
+
+	if (signal_pending(current))
+		return -ERESTARTSYS;
+
+	start_ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
+	if (!start_ptep)
+		return 0;
+
+	for (ptep = start_ptep; addr < end; ptep++, addr += PAGE_SIZE) {
+		pte = ptep_get(ptep);
+		page = NULL;
+		if (pte_present(pte)) {
+			page = vm_normal_page(walk->vma, addr, pte);
+		} else if (!pte_none(pte)) {
+			swp_entry_t entry = pte_to_swp_entry(pte);
+
+			/*
+			 * As KSM pages remain KSM pages until freed, no need to wait
+			 * here for migration to end.
+			 */
+			if (is_migration_entry(entry))
+				page = pfn_swap_entry_to_page(entry);
+		}
+		/* return 1 if the page is a normal KSM page or a KSM-placed zero page */
+		ret = (page && folio_test_ksm(page_folio(page))) || is_ksm_zero_pte(pte);
+		if (ret) {
+			private->addr = addr;
+			goto out_unlock;
+		}
 	}
-	/* return 1 if the page is a normal KSM page or a KSM-placed zero page */
-	ret = (page && folio_test_ksm(page_folio(page))) || is_ksm_zero_pte(ptent);
-	pte_unmap_unlock(pte, ptl);
+out_unlock:
+	pte_unmap_unlock(ptep, ptl);
 	return ret;
 }
 
@@ -661,9 +681,11 @@ static const struct mm_walk_ops break_ksm_lock_vma_ops = {
  * of the process that owns 'vma'. We also do not want to enforce
  * protection keys here anyway.
  */
-static int break_ksm(struct vm_area_struct *vma, unsigned long addr, bool lock_vma)
+static int break_ksm(struct vm_area_struct *vma, unsigned long addr,
+		unsigned long end, bool lock_vma)
 {
 	vm_fault_t ret = 0;
+	struct break_ksm_arg break_ksm_arg;
 	const struct mm_walk_ops *ops = lock_vma ?
 				&break_ksm_lock_vma_ops : &break_ksm_ops;
 
@@ -671,11 +693,10 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr, bool lock_v
 		int ksm_page;
 
 		cond_resched();
-		ksm_page = walk_page_range_vma(vma, addr, addr + 1, ops, NULL);
-		if (WARN_ON_ONCE(ksm_page < 0))
+		ksm_page = walk_page_range_vma(vma, addr, end, ops, &break_ksm_arg);
+		if (ksm_page <= 0)
 			return ksm_page;
-		if (!ksm_page)
-			return 0;
+		addr = break_ksm_arg.addr;
 		ret = handle_mm_fault(vma, addr,
 				      FAULT_FLAG_UNSHARE | FAULT_FLAG_REMOTE,
 				      NULL);
@@ -761,7 +782,7 @@ static void break_cow(struct ksm_rmap_item *rmap_item)
 	mmap_read_lock(mm);
 	vma = find_mergeable_vma(mm, addr);
 	if (vma)
-		break_ksm(vma, addr, false);
+		break_ksm(vma, addr, addr + 1, false);
 	mmap_read_unlock(mm);
 }
 
@@ -1072,18 +1093,7 @@ static void remove_trailing_rmap_items(struct ksm_rmap_item **rmap_list)
 static int unmerge_ksm_pages(struct vm_area_struct *vma,
 			unsigned long start, unsigned long end, bool lock_vma)
 {
-	unsigned long addr;
-	int err = 0;
-
-	for (addr = start; addr < end && !err; addr += PAGE_SIZE) {
-		if (ksm_test_exit(vma->vm_mm))
-			break;
-		if (signal_pending(current))
-			err = -ERESTARTSYS;
-		else
-			err = break_ksm(vma, addr, lock_vma);
-	}
-	return err;
+	return break_ksm(vma, start, end, lock_vma);
}
 
 static inline
-- 
2.43.0
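
Taken together with patch 1, the new break_ksm() loop can be read as the
condensed sketch below. It paraphrases the diff above rather than quoting it
(the loop-exit test in particular is simplified here): the walk reports the
address of the first KSM-mapped PTE, the page there is unshared via an
UNSHARE fault, and the walk then resumes from that address, skipping
unmapped regions entirely:

do {
	cond_resched();
	/*
	 * < 0: error (e.g. -ERESTARTSYS on a pending signal);
	 *   0: no KSM page left in [addr, end) -- we are done;
	 *   1: found one; its address is now in break_ksm_arg.addr.
	 */
	ksm_page = walk_page_range_vma(vma, addr, end, ops, &break_ksm_arg);
	if (ksm_page <= 0)
		return ksm_page;
	addr = break_ksm_arg.addr;
	/* Break COW: replace the shared KSM page with an exclusive one. */
	ret = handle_mm_fault(vma, addr,
			      FAULT_FLAG_UNSHARE | FAULT_FLAG_REMOTE, NULL);
} while (!(ret & VM_FAULT_ERROR));	/* exit test simplified here */

Because break_ksm_pmd_entry() is only invoked for PMDs that are actually
populated, sparse regions of the VMA cost one callback check at most instead
of one full walk per page address.
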
From: Pedro Demarchi Gomes
To: David Hildenbrand, Andrew Morton
Cc: Xu Xin, Chengming Zhou, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Pedro Demarchi Gomes
Subject: [PATCH 3/3] ksm: replace function unmerge_ksm_pages with break_ksm
Date: Tue, 28 Oct 2025 10:19:45 -0300
Message-ID: <20251028131945.26445-4-pedrodemargomes@gmail.com>
In-Reply-To: <20251028131945.26445-1-pedrodemargomes@gmail.com>
References: <20251028131945.26445-1-pedrodemargomes@gmail.com>

The function unmerge_ksm_pages() is unnecessary now that break_ksm()
itself walks an address range, so replace its callers with direct calls
to break_ksm().

Suggested-by: David Hildenbrand
Signed-off-by: Pedro Demarchi Gomes
Acked-by: David Hildenbrand
---
 mm/ksm.c | 39 ++++++++++++++++-----------------------
 1 file changed, 16 insertions(+), 23 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 1d1ef0554c7c..18c9e3bda285 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -669,6 +669,18 @@ static const struct mm_walk_ops break_ksm_lock_vma_ops = {
 };
 
 /*
+ * Though it's very tempting to unmerge rmap_items from stable tree rather
+ * than check every pte of a given vma, the locking doesn't quite work for
+ * that - an rmap_item is assigned to the stable tree after inserting ksm
+ * page and upping mmap_lock. Nor does it fit with the way we skip dup'ing
+ * rmap_items from parent to child at fork time (so as not to waste time
+ * if exit comes before the next scan reaches it).
+ *
+ * Similarly, although we'd like to remove rmap_items (so updating counts
+ * and freeing memory) when unmerging an area, it's easier to leave that
+ * to the next pass of ksmd - consider, for example, how ksmd might be
+ * in cmp_and_merge_page on one of the rmap_items we would be removing.
+ *
  * We use break_ksm to break COW on a ksm page by triggering unsharing,
  * such that the ksm page will get replaced by an exclusive anonymous page.
  *
@@ -1077,25 +1089,6 @@ static void remove_trailing_rmap_items(struct ksm_rmap_item **rmap_list)
 	}
 }
 
-/*
- * Though it's very tempting to unmerge rmap_items from stable tree rather
- * than check every pte of a given vma, the locking doesn't quite work for
- * that - an rmap_item is assigned to the stable tree after inserting ksm
- * page and upping mmap_lock. Nor does it fit with the way we skip dup'ing
- * rmap_items from parent to child at fork time (so as not to waste time
- * if exit comes before the next scan reaches it).
- *
- * Similarly, although we'd like to remove rmap_items (so updating counts
- * and freeing memory) when unmerging an area, it's easier to leave that
- * to the next pass of ksmd - consider, for example, how ksmd might be
- * in cmp_and_merge_page on one of the rmap_items we would be removing.
- */
-static int unmerge_ksm_pages(struct vm_area_struct *vma,
-		unsigned long start, unsigned long end, bool lock_vma)
-{
-	return break_ksm(vma, start, end, lock_vma);
-}
-
 static inline
 struct ksm_stable_node *folio_stable_node(const struct folio *folio)
 {
@@ -1233,7 +1226,7 @@ static int unmerge_and_remove_all_rmap_items(void)
 		for_each_vma(vmi, vma) {
 			if (!(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma)
 				continue;
-			err = unmerge_ksm_pages(vma,
+			err = break_ksm(vma,
 					vma->vm_start, vma->vm_end, false);
 			if (err)
 				goto error;
@@ -2861,7 +2854,7 @@ static int __ksm_del_vma(struct vm_area_struct *vma)
 		return 0;
 
 	if (vma->anon_vma) {
-		err = unmerge_ksm_pages(vma, vma->vm_start, vma->vm_end, true);
+		err = break_ksm(vma, vma->vm_start, vma->vm_end, true);
 		if (err)
 			return err;
 	}
@@ -3013,7 +3006,7 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 		return 0;		/* just ignore the advice */
 
 	if (vma->anon_vma) {
-		err = unmerge_ksm_pages(vma, start, end, true);
+		err = break_ksm(vma, start, end, true);
 		if (err)
 			return err;
 	}
@@ -3395,7 +3388,7 @@ static int ksm_memory_callback(struct notifier_block *self,
 		 * Prevent ksm_do_scan(), unmerge_and_remove_all_rmap_items()
 		 * and remove_all_stable_nodes() while memory is going offline:
 		 * it is unsafe for them to touch the stable tree at this time.
-		 * But break_ksm(), rmap lookups and other entry points
+		 * But break_ksm(), rmap lookups and other entry points
 		 * which do not need the ksm_thread_mutex are all safe.
 		 */
 		mutex_lock(&ksm_thread_mutex);
-- 
2.43.0
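
The caller-visible effect of the whole series, side by side. This is
condensed from the series itself: the "before" loop is the code patch 2
removed from unmerge_ksm_pages(), and the "after" call is what patch 3
leaves at each former call site:

/* Before: one full page-table walk per page address in the range. */
for (addr = start; addr < end && !err; addr += PAGE_SIZE)
	err = break_ksm(vma, addr, lock_vma);

/* After: a single range walk that skips unmapped regions. */
err = break_ksm(vma, start, end, lock_vma);

The exit and signal checks that used to live in the removed loop
(ksm_test_exit() and signal_pending()) now run once per populated PMD
inside break_ksm_pmd_entry(), so behavior is preserved while sparse
VMAs are no longer visited page by page.
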