From: "Vishal Moola (Oracle)"
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH 1/2] mm/khugepaged: Convert __collapse_huge_page_isolate() to use folios
Date: Fri, 22 Sep 2023 12:36:38 -0700
Message-Id: <20230922193639.10158-2-vishal.moola@gmail.com>
In-Reply-To: <20230922193639.10158-1-vishal.moola@gmail.com>
References: <20230922193639.10158-1-vishal.moola@gmail.com>

This is in preparation for the removal of the khugepaged
compound_pagelist. Replaces 11 calls to compound_head() with 1, and
removes 499 bytes of kernel text.
Signed-off-by: Vishal Moola (Oracle)
Reviewed-by: Matthew Wilcox (Oracle)
---
 mm/khugepaged.c | 52 ++++++++++++++++++++++++-------------------------
 1 file changed, 26 insertions(+), 26 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 88433cc25d8a..f46a7a7c489f 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -541,7 +541,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 					struct collapse_control *cc,
 					struct list_head *compound_pagelist)
 {
-	struct page *page = NULL;
+	struct folio *folio = NULL;
 	pte_t *_pte;
 	int none_or_zero = 0, shared = 0, result = SCAN_FAIL, referenced = 0;
 	bool writable = false;
@@ -570,15 +570,15 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 			result = SCAN_PTE_UFFD_WP;
 			goto out;
 		}
-		page = vm_normal_page(vma, address, pteval);
-		if (unlikely(!page) || unlikely(is_zone_device_page(page))) {
+		folio = vm_normal_folio(vma, address, pteval);
+		if (unlikely(!folio) || unlikely(folio_is_zone_device(folio))) {
 			result = SCAN_PAGE_NULL;
 			goto out;
 		}

-		VM_BUG_ON_PAGE(!PageAnon(page), page);
+		VM_BUG_ON_FOLIO(!folio_test_anon(folio), folio);

-		if (page_mapcount(page) > 1) {
+		if (folio_estimated_sharers(folio) > 1) {
 			++shared;
 			if (cc->is_khugepaged &&
 			    shared > khugepaged_max_ptes_shared) {
@@ -588,16 +588,15 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 			}
 		}

-		if (PageCompound(page)) {
-			struct page *p;
-			page = compound_head(page);
+		if (folio_test_large(folio)) {
+			struct folio *f;

 			/*
 			 * Check if we have dealt with the compound page
 			 * already
 			 */
-			list_for_each_entry(p, compound_pagelist, lru) {
-				if (page == p)
+			list_for_each_entry(f, compound_pagelist, lru) {
+				if (folio == f)
 					goto next;
 			}
 		}
@@ -608,7 +607,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		 * is needed to serialize against split_huge_page
 		 * when invoked from the VM.
 		 */
-		if (!trylock_page(page)) {
+		if (!folio_trylock(folio)) {
 			result = SCAN_PAGE_LOCK;
 			goto out;
 		}
@@ -624,8 +623,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		 * but not from this process. The other process cannot write to
 		 * the page, only trigger CoW.
 		 */
-		if (!is_refcount_suitable(page)) {
-			unlock_page(page);
+		if (!is_refcount_suitable(&folio->page)) {
+			folio_unlock(folio);
 			result = SCAN_PAGE_COUNT;
 			goto out;
 		}
@@ -634,32 +633,33 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		 * Isolate the page to avoid collapsing an hugepage
 		 * currently in use by the VM.
 		 */
-		if (!isolate_lru_page(page)) {
-			unlock_page(page);
+		if (!folio_isolate_lru(folio)) {
+			folio_unlock(folio);
 			result = SCAN_DEL_PAGE_LRU;
 			goto out;
 		}
-		mod_node_page_state(page_pgdat(page),
-				NR_ISOLATED_ANON + page_is_file_lru(page),
-				compound_nr(page));
-		VM_BUG_ON_PAGE(!PageLocked(page), page);
-		VM_BUG_ON_PAGE(PageLRU(page), page);
+		node_stat_mod_folio(folio,
+				NR_ISOLATED_ANON + folio_is_file_lru(folio),
+				folio_nr_pages(folio));
+		VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+		VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);

-		if (PageCompound(page))
-			list_add_tail(&page->lru, compound_pagelist);
+		if (folio_test_large(folio))
+			list_add_tail(&folio->lru, compound_pagelist);
 next:
 		/*
 		 * If collapse was initiated by khugepaged, check that there is
 		 * enough young pte to justify collapsing the page
 		 */
 		if (cc->is_khugepaged &&
-		    (pte_young(pteval) || page_is_young(page) ||
-		     PageReferenced(page) || mmu_notifier_test_young(vma->vm_mm,
+		    (pte_young(pteval) || folio_test_young(folio) ||
+		     folio_test_referenced(folio) || mmu_notifier_test_young(vma->vm_mm,
								     address)))
 			referenced++;

 		if (pte_write(pteval))
 			writable = true;
+	}

 	if (unlikely(!writable)) {
@@ -668,13 +668,13 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		result = SCAN_LACK_REFERENCED_PAGE;
 	} else {
 		result = SCAN_SUCCEED;
-		trace_mm_collapse_huge_page_isolate(page, none_or_zero,
+		trace_mm_collapse_huge_page_isolate(&folio->page, none_or_zero,
						    referenced, writable, result);
 		return result;
 	}
out:
 	release_pte_pages(pte, _pte, compound_pagelist);
-	trace_mm_collapse_huge_page_isolate(page, none_or_zero,
+	trace_mm_collapse_huge_page_isolate(&folio->page, none_or_zero,
					    referenced, writable, result);
 	return result;
 }
-- 
2.40.1
From: "Vishal Moola (Oracle)"
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH 2/2] mm/khugepaged: Remove compound_pagelist
Date: Fri, 22 Sep 2023 12:36:39 -0700
Message-Id: <20230922193639.10158-3-vishal.moola@gmail.com>
In-Reply-To: <20230922193639.10158-1-vishal.moola@gmail.com>
References: <20230922193639.10158-1-vishal.moola@gmail.com>

Currently, khugepaged builds a compound_pagelist while scanning, which
is used to properly account for compound pages. We can now account for
a compound page as a singular folio instead, so remove this list.

Large folios are guaranteed to have consecutive ptes and addresses, so
once the first pte of a large folio is found, skip over the rest.

This helps convert khugepaged to use folios. It removes 3 compound_head
calls in __collapse_huge_page_copy_succeeded(), and removes 980 bytes of
kernel text.
Signed-off-by: Vishal Moola (Oracle)
---
 mm/khugepaged.c | 76 ++++++++++++-------------------------------------
 1 file changed, 18 insertions(+), 58 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index f46a7a7c489f..b6c7d55a8231 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -498,10 +498,9 @@ static void release_pte_page(struct page *page)
 	release_pte_folio(page_folio(page));
 }

-static void release_pte_pages(pte_t *pte, pte_t *_pte,
-		struct list_head *compound_pagelist)
+static void release_pte_folios(pte_t *pte, pte_t *_pte)
 {
-	struct folio *folio, *tmp;
+	struct folio *folio;

 	while (--_pte >= pte) {
 		pte_t pteval = ptep_get(_pte);
@@ -514,12 +513,7 @@ static void release_pte_pages(pte_t *pte, pte_t *_pte,
 			continue;
 		folio = pfn_folio(pfn);
 		if (folio_test_large(folio))
-			continue;
-		release_pte_folio(folio);
-	}
-
-	list_for_each_entry_safe(folio, tmp, compound_pagelist, lru) {
-		list_del(&folio->lru);
+			_pte -= folio_nr_pages(folio) - 1;
 		release_pte_folio(folio);
 	}
 }
@@ -538,8 +532,7 @@ static bool is_refcount_suitable(struct page *page)
 static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 					unsigned long address,
 					pte_t *pte,
-					struct collapse_control *cc,
-					struct list_head *compound_pagelist)
+					struct collapse_control *cc)
 {
 	struct folio *folio = NULL;
 	pte_t *_pte;
@@ -588,19 +581,6 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 			}
 		}

-		if (folio_test_large(folio)) {
-			struct folio *f;
-
-			/*
-			 * Check if we have dealt with the compound page
-			 * already
-			 */
-			list_for_each_entry(f, compound_pagelist, lru) {
-				if (folio == f)
-					goto next;
-			}
-		}
-
 		/*
 		 * We can do it before isolate_lru_page because the
 		 * page can't be freed from under us. NOTE: PG_lock
@@ -644,9 +624,6 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 		VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);

-		if (folio_test_large(folio))
-			list_add_tail(&folio->lru, compound_pagelist);
-next:
 		/*
 		 * If collapse was initiated by khugepaged, check that there is
 		 * enough young pte to justify collapsing the page
@@ -660,6 +637,10 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		if (pte_write(pteval))
 			writable = true;

+		if (folio_test_large(folio)) {
+			_pte += folio_nr_pages(folio) - 1;
+			address += folio_size(folio) - PAGE_SIZE;
+		}
 	}

 	if (unlikely(!writable)) {
@@ -673,7 +654,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		return result;
 	}
out:
-	release_pte_pages(pte, _pte, compound_pagelist);
+	release_pte_folios(pte, _pte);
 	trace_mm_collapse_huge_page_isolate(&folio->page, none_or_zero,
 					    referenced, writable, result);
 	return result;
@@ -682,11 +663,9 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 static void __collapse_huge_page_copy_succeeded(pte_t *pte,
 						struct vm_area_struct *vma,
 						unsigned long address,
-						spinlock_t *ptl,
-						struct list_head *compound_pagelist)
+						spinlock_t *ptl)
 {
 	struct page *src_page;
-	struct page *tmp;
 	pte_t *_pte;
 	pte_t pteval;

@@ -706,8 +685,7 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
 		}
 	} else {
 		src_page = pte_page(pteval);
-		if (!PageCompound(src_page))
-			release_pte_page(src_page);
+		release_pte_page(src_page);
 		/*
 		 * ptl mostly unnecessary, but preempt has to
 		 * be disabled to update the per-cpu stats
@@ -720,23 +698,12 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
 			free_page_and_swap_cache(src_page);
 		}
 	}
-
-	list_for_each_entry_safe(src_page, tmp, compound_pagelist, lru) {
-		list_del(&src_page->lru);
-		mod_node_page_state(page_pgdat(src_page),
-				NR_ISOLATED_ANON + page_is_file_lru(src_page),
-				-compound_nr(src_page));
-		unlock_page(src_page);
-		free_swap_cache(src_page);
-		putback_lru_page(src_page);
-	}
 }

 static void __collapse_huge_page_copy_failed(pte_t *pte,
 					     pmd_t *pmd,
 					     pmd_t orig_pmd,
-					     struct vm_area_struct *vma,
-					     struct list_head *compound_pagelist)
+					     struct vm_area_struct *vma)
 {
 	spinlock_t *pmd_ptl;

@@ -753,7 +720,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
 	 * Release both raw and compound pages isolated
 	 * in __collapse_huge_page_isolate.
 	 */
-	release_pte_pages(pte, pte + HPAGE_PMD_NR, compound_pagelist);
+	release_pte_folios(pte, pte + HPAGE_PMD_NR);
 }

 /*
@@ -769,7 +736,6 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
  * @vma: the original raw pages' virtual memory area
  * @address: starting address to copy
  * @ptl: lock on raw pages' PTEs
- * @compound_pagelist: list that stores compound pages
  */
 static int __collapse_huge_page_copy(pte_t *pte,
 				     struct page *page,
@@ -777,8 +743,7 @@ static int __collapse_huge_page_copy(pte_t *pte,
 				     pmd_t orig_pmd,
 				     struct vm_area_struct *vma,
 				     unsigned long address,
-				     spinlock_t *ptl,
-				     struct list_head *compound_pagelist)
+				     spinlock_t *ptl)
 {
 	struct page *src_page;
 	pte_t *_pte;
@@ -804,11 +769,9 @@ static int __collapse_huge_page_copy(pte_t *pte,
 	}

 	if (likely(result == SCAN_SUCCEED))
-		__collapse_huge_page_copy_succeeded(pte, vma, address, ptl,
-						    compound_pagelist);
+		__collapse_huge_page_copy_succeeded(pte, vma, address, ptl);
 	else
-		__collapse_huge_page_copy_failed(pte, pmd, orig_pmd, vma,
-						 compound_pagelist);
+		__collapse_huge_page_copy_failed(pte, pmd, orig_pmd, vma);

 	return result;
 }
@@ -1081,7 +1044,6 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 			      int referenced, int unmapped,
 			      struct collapse_control *cc)
 {
-	LIST_HEAD(compound_pagelist);
 	pmd_t *pmd, _pmd;
 	pte_t *pte;
 	pgtable_t pgtable;
@@ -1168,8 +1130,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,

 	pte = pte_offset_map_lock(mm, &_pmd, address, &pte_ptl);
 	if (pte) {
-		result = __collapse_huge_page_isolate(vma, address, pte, cc,
-						      &compound_pagelist);
+		result = __collapse_huge_page_isolate(vma, address, pte, cc);
 		spin_unlock(pte_ptl);
 	} else {
 		result = SCAN_PMD_NULL;
@@ -1198,8 +1159,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	anon_vma_unlock_write(vma->anon_vma);

 	result = __collapse_huge_page_copy(pte, hpage, pmd, _pmd,
-					   vma, address, pte_ptl);
+					   vma, address, pte_ptl);
 	pte_unmap(pte);
 	if (unlikely(result != SCAN_SUCCEED))
 		goto out_up_write;
-- 
2.40.1