From nobody Thu Feb 12 17:28:57 2026
From: Lance Yang
To: akpm@linux-foundation.org
Cc: willy@infradead.org, sj@kernel.org, baolin.wang@linux.alibaba.com, maskray@google.com, ziy@nvidia.com, ryan.roberts@arm.com, david@redhat.com, 21cnbao@gmail.com, mhocko@suse.com, fengwei.yin@intel.com, zokeefe@google.com, shy828301@gmail.com, xiehuan09@gmail.com, libang.li@antgroup.com, wangkefeng.wang@huawei.com, songmuchun@bytedance.com, peterx@redhat.com, minchan@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Lance Yang
Subject: [PATCH v7 1/4] mm/rmap: remove duplicated exit code in pagewalk loop
Date: Mon, 10 Jun 2024 20:02:06 +0800
Message-Id: <20240610120209.66311-2-ioworker0@gmail.com>
In-Reply-To: <20240610120209.66311-1-ioworker0@gmail.com>
References: <20240610120209.66311-1-ioworker0@gmail.com>

Introduce the labels walk_done and walk_done_err as exit points to
eliminate duplicated exit code in the pagewalk loop.
Reviewed-by: Zi Yan
Reviewed-by: Baolin Wang
Reviewed-by: David Hildenbrand
Signed-off-by: Lance Yang
Reviewed-by: Barry Song
---
 mm/rmap.c | 40 +++++++++++++++-------------------------
 1 file changed, 15 insertions(+), 25 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index e8fc5ecb59b2..ddffa30c79fb 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1679,9 +1679,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			/* Restore the mlock which got missed */
 			if (!folio_test_large(folio))
 				mlock_vma_folio(folio, vma);
-			page_vma_mapped_walk_done(&pvmw);
-			ret = false;
-			break;
+			goto walk_done_err;
 		}
 
 		pfn = pte_pfn(ptep_get(pvmw.pte));
@@ -1719,11 +1717,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			 */
 			if (!anon) {
 				VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
-				if (!hugetlb_vma_trylock_write(vma)) {
-					page_vma_mapped_walk_done(&pvmw);
-					ret = false;
-					break;
-				}
+				if (!hugetlb_vma_trylock_write(vma))
+					goto walk_done_err;
 				if (huge_pmd_unshare(mm, vma, address, pvmw.pte)) {
 					hugetlb_vma_unlock_write(vma);
 					flush_tlb_range(vma,
@@ -1738,8 +1733,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 					 * actual page and drop map count
 					 * to zero.
 					 */
-					page_vma_mapped_walk_done(&pvmw);
-					break;
+					goto walk_done;
 				}
 				hugetlb_vma_unlock_write(vma);
 			}
@@ -1811,9 +1805,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			if (unlikely(folio_test_swapbacked(folio) !=
 					folio_test_swapcache(folio))) {
 				WARN_ON_ONCE(1);
-				ret = false;
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				goto walk_done_err;
 			}
 
 			/* MADV_FREE page check */
@@ -1852,23 +1844,17 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				 */
 				set_pte_at(mm, address, pvmw.pte, pteval);
 				folio_set_swapbacked(folio);
-				ret = false;
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				goto walk_done_err;
 			}
 
 			if (swap_duplicate(entry) < 0) {
 				set_pte_at(mm, address, pvmw.pte, pteval);
-				ret = false;
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				goto walk_done_err;
 			}
 			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
 				swap_free(entry);
 				set_pte_at(mm, address, pvmw.pte, pteval);
-				ret = false;
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				goto walk_done_err;
 			}
 
 			/* See folio_try_share_anon_rmap(): clear PTE first.
			 */
@@ -1876,9 +1862,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			    folio_try_share_anon_rmap_pte(folio, subpage)) {
 				swap_free(entry);
 				set_pte_at(mm, address, pvmw.pte, pteval);
-				ret = false;
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				goto walk_done_err;
 			}
 			if (list_empty(&mm->mmlist)) {
 				spin_lock(&mmlist_lock);
@@ -1918,6 +1902,12 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 		if (vma->vm_flags & VM_LOCKED)
 			mlock_drain_local();
 		folio_put(folio);
+		continue;
+walk_done_err:
+		ret = false;
+walk_done:
+		page_vma_mapped_walk_done(&pvmw);
+		break;
 	}
 
 	mmu_notifier_invalidate_range_end(&range);
-- 
2.33.1

From nobody Thu Feb 12 17:28:57 2026
From: Lance Yang
To: akpm@linux-foundation.org
Cc: willy@infradead.org, sj@kernel.org, baolin.wang@linux.alibaba.com, maskray@google.com, ziy@nvidia.com, ryan.roberts@arm.com, david@redhat.com, 21cnbao@gmail.com, mhocko@suse.com, fengwei.yin@intel.com, zokeefe@google.com, shy828301@gmail.com, xiehuan09@gmail.com, libang.li@antgroup.com, wangkefeng.wang@huawei.com, songmuchun@bytedance.com, peterx@redhat.com, minchan@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Lance Yang
Subject: [PATCH v7 2/4] mm/rmap: add helper to restart pgtable walk on changes
Date: Mon, 10 Jun 2024 20:02:07 +0800
Message-Id: <20240610120209.66311-3-ioworker0@gmail.com>
In-Reply-To: <20240610120209.66311-1-ioworker0@gmail.com>
References: <20240610120209.66311-1-ioworker0@gmail.com>

Introduce the page_vma_mapped_walk_restart() helper to handle scenarios
where the page table walk needs to be restarted due to changes in the
page table, such as when a PMD is split.
It releases the PTL held during the previous walk and resets the state,
allowing a new walk to start at the current address stored in
pvmw->address.

Suggested-by: David Hildenbrand
Signed-off-by: Lance Yang
---
 include/linux/rmap.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 7229b9baf20d..5f18509610cc 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -710,6 +710,28 @@ static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
 		spin_unlock(pvmw->ptl);
 }
 
+/**
+ * page_vma_mapped_walk_restart - Restart the page table walk.
+ * @pvmw: Pointer to struct page_vma_mapped_walk.
+ *
+ * It restarts the page table walk when changes occur in the page
+ * table, such as splitting a PMD. Ensures that the PTL held during
+ * the previous walk is released and resets the state to allow for
+ * a new walk starting at the current address stored in pvmw->address.
+ */
+static inline void
+page_vma_mapped_walk_restart(struct page_vma_mapped_walk *pvmw)
+{
+	WARN_ON_ONCE(!pvmw->pmd);
+	WARN_ON_ONCE(!pvmw->ptl);
+
+	if (pvmw->ptl)
+		spin_unlock(pvmw->ptl);
+
+	pvmw->ptl = NULL;
+	pvmw->pmd = NULL;
+}
+
 bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
 
 /*
-- 
2.33.1

From nobody Thu Feb 12 17:28:57 2026
From: Lance Yang
To: ioworker0@gmail.com
Cc: 21cnbao@gmail.com, akpm@linux-foundation.org, baolin.wang@linux.alibaba.com, david@redhat.com, fengwei.yin@intel.com, libang.li@antgroup.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, maskray@google.com, mhocko@suse.com, minchan@kernel.org, peterx@redhat.com, ryan.roberts@arm.com, shy828301@gmail.com, sj@kernel.org, songmuchun@bytedance.com, wangkefeng.wang@huawei.com, willy@infradead.org, xiehuan09@gmail.com, ziy@nvidia.com, zokeefe@google.com
Subject: [PATCH v7 3/4] mm/rmap: integrate PMD-mapped folio splitting into pagewalk loop
Date: Mon, 10 Jun 2024 20:06:18 +0800
Message-Id: <20240610120618.66520-1-ioworker0@gmail.com>
In-Reply-To: <20240610120209.66311-1-ioworker0@gmail.com>
References: <20240610120209.66311-1-ioworker0@gmail.com>

In preparation for supporting try_to_unmap_one() to unmap PMD-mapped
folios, start the pagewalk first, then call split_huge_pmd_address()
to split the folio.

Suggested-by: David Hildenbrand
Suggested-by: Baolin Wang
Signed-off-by: Lance Yang
---
 include/linux/huge_mm.h |  6 ++++++
 mm/huge_memory.c        | 42 +++++++++++++++++++++--------------------
 mm/rmap.c               | 21 +++++++++++++++------
 3 files changed, 43 insertions(+), 26 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 088d66a54643..4670c6ee118b 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -415,6 +415,9 @@ static inline bool thp_migration_supported(void)
 	return IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION);
 }
 
+void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
+			   pmd_t *pmd, bool freeze, struct folio *folio);
+
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 static inline bool folio_test_pmd_mappable(struct folio *folio)
@@ -477,6 +480,9 @@ static inline void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 		unsigned long address, bool freeze, struct folio *folio) {}
 static inline void split_huge_pmd_address(struct vm_area_struct *vma,
 		unsigned long address, bool freeze, struct folio *folio) {}
+static inline void split_huge_pmd_locked(struct vm_area_struct *vma,
+					 unsigned long address, pmd_t *pmd,
+					 bool freeze, struct folio *folio) {}
 
 #define split_huge_pud(__vma, __pmd, __address)	\
 	do { } while (0)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e6d26c2eb670..d2697cc8f9d4 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2581,6 +2581,27 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	pmd_populate(mm, pmd, pgtable);
 }
 
+void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
+			   pmd_t *pmd, bool freeze, struct folio *folio)
+{
+	VM_WARN_ON_ONCE(folio && !folio_test_pmd_mappable(folio));
+	VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
+	VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
+	VM_BUG_ON(freeze && !folio);
+
+	/*
+	 * When the caller requests to set up a migration entry, we
+	 * require a folio to check the PMD against. Otherwise, there
+	 * is a risk of replacing the wrong folio.
+	 */
+	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
+	    is_pmd_migration_entry(*pmd)) {
+		if (folio && folio != pmd_folio(*pmd))
+			return;
+		__split_huge_pmd_locked(vma, pmd, address, freeze);
+	}
+}
+
 void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 		unsigned long address, bool freeze, struct folio *folio)
 {
@@ -2592,26 +2613,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 				(address & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE);
 	mmu_notifier_invalidate_range_start(&range);
 	ptl = pmd_lock(vma->vm_mm, pmd);
-
-	/*
-	 * If caller asks to setup a migration entry, we need a folio to check
-	 * pmd against. Otherwise we can end up replacing wrong folio.
-	 */
-	VM_BUG_ON(freeze && !folio);
-	VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
-
-	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
-	    is_pmd_migration_entry(*pmd)) {
-		/*
-		 * It's safe to call pmd_page when folio is set because it's
-		 * guaranteed that pmd is present.
-		 */
-		if (folio && folio != pmd_folio(*pmd))
-			goto out;
-		__split_huge_pmd_locked(vma, pmd, range.start, freeze);
-	}
-
-out:
+	split_huge_pmd_locked(vma, range.start, pmd, freeze, folio);
 	spin_unlock(ptl);
 	mmu_notifier_invalidate_range_end(&range);
 }
diff --git a/mm/rmap.c b/mm/rmap.c
index ddffa30c79fb..b77f88695588 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1640,9 +1640,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	if (flags & TTU_SYNC)
 		pvmw.flags = PVMW_SYNC;
 
-	if (flags & TTU_SPLIT_HUGE_PMD)
-		split_huge_pmd_address(vma, address, false, folio);
-
 	/*
 	 * For THP, we have to assume the worse case ie pmd for invalidation.
 	 * For hugetlb, it could be much worse if we need to do pud
@@ -1668,9 +1665,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	mmu_notifier_invalidate_range_start(&range);
 
 	while (page_vma_mapped_walk(&pvmw)) {
-		/* Unexpected PMD-mapped THP? */
-		VM_BUG_ON_FOLIO(!pvmw.pte, folio);
-
 		/*
 		 * If the folio is in an mlock()d vma, we must not swap it out.
 		 */
@@ -1682,6 +1676,21 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			goto walk_done_err;
 		}
 
+		if (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)) {
+			/*
+			 * We temporarily have to drop the PTL and start once
+			 * again from that now-PTE-mapped page table.
+			 */
+			split_huge_pmd_locked(vma, pvmw.address, pvmw.pmd,
+					      false, folio);
+			flags &= ~TTU_SPLIT_HUGE_PMD;
+			page_vma_mapped_walk_restart(&pvmw);
+			continue;
+		}
+
+		/* Unexpected PMD-mapped THP?
+		 */
+		VM_BUG_ON_FOLIO(!pvmw.pte, folio);
+
 		pfn = pte_pfn(ptep_get(pvmw.pte));
 		subpage = folio_page(folio, pfn - folio_pfn(folio));
 		address = pvmw.address;
-- 
2.33.1

From nobody Thu Feb 12 17:28:57 2026
From: Lance Yang
To: ioworker0@gmail.com
Cc: 21cnbao@gmail.com, akpm@linux-foundation.org, baolin.wang@linux.alibaba.com, david@redhat.com, fengwei.yin@intel.com, libang.li@antgroup.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, maskray@google.com, mhocko@suse.com, minchan@kernel.org, peterx@redhat.com, ryan.roberts@arm.com, shy828301@gmail.com, sj@kernel.org, songmuchun@bytedance.com, wangkefeng.wang@huawei.com, willy@infradead.org, xiehuan09@gmail.com, ziy@nvidia.com, zokeefe@google.com
Subject: [PATCH v7 4/4] mm/vmscan: avoid split lazyfree THP during shrink_folio_list()
Date: Mon, 10 Jun 2024 20:08:09 +0800
Message-Id: <20240610120809.66601-1-ioworker0@gmail.com>
In-Reply-To: <20240610120209.66311-1-ioworker0@gmail.com>
References: <20240610120209.66311-1-ioworker0@gmail.com>

When users no longer require the pages, they mark them as lazy-free
with madvise(MADV_FREE) and typically do not write to that memory
again. During memory reclaim, if we detect that the large folio and
its PMD are both still marked as clean and there are no unexpected
references (such as GUP), we can just discard the memory lazily,
improving the efficiency of memory reclamation in this case.
On an Intel i5 CPU, reclaiming 1GiB of lazyfree THPs using
mem_cgroup_force_empty() results in the following runtimes in seconds
(shorter is better):

--------------------------------------------
|     Old      |     New      |   Change   |
--------------------------------------------
|   0.683426   |   0.049197   |   -92.80%  |
--------------------------------------------

Suggested-by: Zi Yan
Suggested-by: David Hildenbrand
Signed-off-by: Lance Yang
---
 include/linux/huge_mm.h |  9 +++++
 mm/huge_memory.c        | 80 +++++++++++++++++++++++++++++++++++++++++
 mm/rmap.c               | 36 +++++++++++++------
 3 files changed, 114 insertions(+), 11 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 4670c6ee118b..020e2344eb86 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -417,6 +417,8 @@ static inline bool thp_migration_supported(void)
 
 void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
 			   pmd_t *pmd, bool freeze, struct folio *folio);
+bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
+			   pmd_t *pmdp, struct folio *folio);
 
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
 
@@ -484,6 +486,13 @@ static inline void split_huge_pmd_locked(struct vm_area_struct *vma,
 					 unsigned long address, pmd_t *pmd,
 					 bool freeze, struct folio *folio) {}
 
+static inline bool unmap_huge_pmd_locked(struct vm_area_struct *vma,
+					 unsigned long addr, pmd_t *pmdp,
+					 struct folio *folio)
+{
+	return false;
+}
+
 #define split_huge_pud(__vma, __pmd, __address)	\
 	do { } while (0)
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d2697cc8f9d4..19592d3f1167 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2687,6 +2687,86 @@ static void unmap_folio(struct folio *folio)
 	try_to_unmap_flush();
 }
 
+static bool __discard_anon_folio_pmd_locked(struct vm_area_struct *vma,
+					    unsigned long addr, pmd_t *pmdp,
+					    struct folio *folio)
+{
+	VM_WARN_ON_FOLIO(folio_test_swapbacked(folio), folio);
+	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
+
+	struct mm_struct *mm = vma->vm_mm;
+	int ref_count, map_count;
+	pmd_t orig_pmd = *pmdp;
+	struct page *page;
+
+	if (unlikely(!pmd_present(orig_pmd) || !pmd_trans_huge(orig_pmd)))
+		return false;
+
+	page = pmd_page(orig_pmd);
+	if (unlikely(page_folio(page) != folio))
+		return false;
+
+	if (folio_test_dirty(folio) || pmd_dirty(orig_pmd)) {
+		folio_set_swapbacked(folio);
+		return false;
+	}
+
+	orig_pmd = pmdp_huge_clear_flush(vma, addr, pmdp);
+
+	/*
+	 * Syncing against concurrent GUP-fast:
+	 * - clear PMD; barrier; read refcount
+	 * - inc refcount; barrier; read PMD
+	 */
+	smp_mb();
+
+	ref_count = folio_ref_count(folio);
+	map_count = folio_mapcount(folio);
+
+	/*
+	 * Order reads for folio refcount and dirty flag
+	 * (see comments in __remove_mapping()).
+	 */
+	smp_rmb();
+
+	/*
+	 * If the folio or its PMD is redirtied at this point, or if there
+	 * are unexpected references, we will give up discarding this folio
+	 * and remap it.
+	 *
+	 * The only folio refs must be one from isolation plus the rmap(s).
+	 */
+	if (folio_test_dirty(folio) || pmd_dirty(orig_pmd))
+		folio_set_swapbacked(folio);
+
+	if (folio_test_swapbacked(folio) || ref_count != map_count + 1) {
+		set_pmd_at(mm, addr, pmdp, orig_pmd);
+		return false;
+	}
+
+	folio_remove_rmap_pmd(folio, page, vma);
+	zap_deposited_table(mm, pmdp);
+	add_mm_counter(mm, MM_ANONPAGES, -HPAGE_PMD_NR);
+	if (vma->vm_flags & VM_LOCKED)
+		mlock_drain_local();
+	folio_put(folio);
+
+	return true;
+}
+
+bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
+			   pmd_t *pmdp, struct folio *folio)
+{
+	VM_WARN_ON_FOLIO(!folio_test_pmd_mappable(folio), folio);
+	VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+	VM_WARN_ON_ONCE(!IS_ALIGNED(addr, HPAGE_PMD_SIZE));
+
+	if (folio_test_anon(folio) && !folio_test_swapbacked(folio))
+		return __discard_anon_folio_pmd_locked(vma, addr, pmdp, folio);
+
+	return false;
+}
+
 static void remap_page(struct folio *folio, unsigned long nr)
 {
 	int i = 0;
diff --git a/mm/rmap.c b/mm/rmap.c
index b77f88695588..8e901636ade9 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1630,6 +1630,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	enum ttu_flags flags = (enum ttu_flags)(long)arg;
 	unsigned long pfn;
 	unsigned long hsz = 0;
+	bool pmd_mapped = false;
 
 	/*
 	 * When racing against e.g. zap_pte_range() on another cpu,
@@ -1676,16 +1677,24 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			goto walk_done_err;
 		}
 
-		if (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)) {
-			/*
-			 * We temporarily have to drop the PTL and start once
-			 * again from that now-PTE-mapped page table.
-			 */
-			split_huge_pmd_locked(vma, pvmw.address, pvmw.pmd,
-					      false, folio);
-			flags &= ~TTU_SPLIT_HUGE_PMD;
-			page_vma_mapped_walk_restart(&pvmw);
-			continue;
+		if (!pvmw.pte) {
+			pmd_mapped = true;
+			if (unmap_huge_pmd_locked(vma, pvmw.address, pvmw.pmd,
+						  folio))
+				goto walk_done;
+
+			if (flags & TTU_SPLIT_HUGE_PMD) {
+				/*
+				 * We temporarily have to drop the PTL and start
+				 * once again from that now-PTE-mapped page
+				 * table.
+				 */
+				split_huge_pmd_locked(vma, pvmw.address,
+						      pvmw.pmd, false, folio);
+				flags &= ~TTU_SPLIT_HUGE_PMD;
+				page_vma_mapped_walk_restart(&pvmw);
+				continue;
+			}
 		}
 
 		/* Unexpected PMD-mapped THP? */
@@ -1813,7 +1822,12 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 		 */
 		if (unlikely(folio_test_swapbacked(folio) !=
 					folio_test_swapcache(folio))) {
-			WARN_ON_ONCE(1);
+			/*
+			 * unmap_huge_pmd_locked() will unmark a
+			 * PMD-mapped folio as lazyfree if the folio or
+			 * its PMD was redirtied.
+			 */
+			WARN_ON_ONCE(!pmd_mapped);
 			goto walk_done_err;
 		}
 
-- 
2.33.1