From: mpenttil@redhat.com
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Mika Penttilä, David Hildenbrand,
	Jason Gunthorpe, Leon Romanovsky, Alistair Popple, Balbir Singh,
	Zi Yan, Matthew Brost
Subject: [PATCH v7 4/6] mm: setup device page migration in HMM pagewalk
Date: Mon, 30 Mar 2026 07:30:15 +0300
Message-ID: <20260330043017.251808-5-mpenttil@redhat.com>
X-Mailer: git-send-email 2.50.0
In-Reply-To: <20260330043017.251808-1-mpenttil@redhat.com>
References: <20260330043017.251808-1-mpenttil@redhat.com>

From: Mika Penttilä

Implement the needed hmm_vma_handle_migrate_prepare_pmd() and
hmm_vma_handle_migrate_prepare() functions, which are mostly carried
over from migrate_device.c, as well as the needed split functions.
Make migrate_device use the HMM pagewalk for the collect part of
migration.
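As an illustration (editor's sketch, not part of this patch), here is a
minimal driver-side flow that now runs on top of the HMM pagewalk.
example_migrate_one() and the elided allocate/copy step are hypothetical;
the migrate_vma_*() calls and struct fields are the existing API:

/* Illustrative sketch only, not part of this patch. */
static int example_migrate_one(struct vm_area_struct *vma,
			       unsigned long addr, void *pgmap_owner)
{
	unsigned long src_pfn = 0, dst_pfn = 0;
	struct migrate_vma args = {
		.vma		= vma,
		.start		= addr,
		.end		= addr + PAGE_SIZE,
		.src		= &src_pfn,
		.dst		= &dst_pfn,
		.flags		= MIGRATE_VMA_SELECT_SYSTEM,
		.pgmap_owner	= pgmap_owner,
	};
	int ret;

	/* Collect phase: with this patch, backed by hmm_range_fault() */
	ret = migrate_vma_setup(&args);
	if (ret)
		return ret;

	/* ... allocate a device page, copy data, set dst_pfn ... */

	migrate_vma_pages(&args);
	migrate_vma_finalize(&args);
	return 0;
}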
Cc: David Hildenbrand
Cc: Jason Gunthorpe
Cc: Leon Romanovsky
Cc: Alistair Popple
Cc: Balbir Singh
Cc: Zi Yan
Cc: Matthew Brost
Suggested-by: Alistair Popple
Signed-off-by: Mika Penttilä
---
 include/linux/migrate.h |   9 +-
 mm/hmm.c                | 420 ++++++++++++++++++++++++++++++++++++++--
 mm/migrate_device.c     |  26 ++-
 3 files changed, 438 insertions(+), 17 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 037e7430edb9..9e1081847d1f 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -163,6 +163,7 @@ enum migrate_vma_info {
 	MIGRATE_VMA_SELECT_DEVICE_PRIVATE = 1 << 1,
 	MIGRATE_VMA_SELECT_DEVICE_COHERENT = 1 << 2,
 	MIGRATE_VMA_SELECT_COMPOUND = 1 << 3,
+	MIGRATE_VMA_FAULT = 1 << 4,
 };
 
 struct migrate_vma {
@@ -200,10 +201,14 @@ struct migrate_vma {
 	struct page *fault_page;
 };
 
-// TODO: enable migration
 static inline enum migrate_vma_info hmm_select_migrate(struct hmm_range *range)
 {
-	return 0;
+	enum migrate_vma_info minfo;
+
+	minfo = (range->default_flags & HMM_PFN_REQ_MIGRATE) ?
+		range->migrate->flags : 0;
+
+	return minfo;
 }
 
 int migrate_vma_setup(struct migrate_vma *args);
diff --git a/mm/hmm.c b/mm/hmm.c
index 642593c3505f..ce693938e5dc 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -476,34 +476,424 @@ static int hmm_vma_handle_absent_pmd(struct mm_walk *walk, unsigned long start,
 #endif /* CONFIG_ARCH_ENABLE_THP_MIGRATION */
 
 #ifdef CONFIG_DEVICE_MIGRATION
+/**
+ * migrate_vma_split_folio() - Helper function to split a THP folio
+ * @folio: the folio to split
+ * @fault_page: struct page associated with the fault if any
+ *
+ * Returns 0 on success
+ */
+static int migrate_vma_split_folio(struct folio *folio,
+				   struct page *fault_page)
+{
+	int ret;
+	struct folio *fault_folio = fault_page ? page_folio(fault_page) : NULL;
+	struct folio *new_fault_folio = NULL;
+
+	if (folio != fault_folio) {
+		folio_get(folio);
+		folio_lock(folio);
+	}
+
+	ret = split_folio(folio);
+	if (ret) {
+		if (folio != fault_folio) {
+			folio_unlock(folio);
+			folio_put(folio);
+		}
+		return ret;
+	}
+
+	new_fault_folio = fault_page ? page_folio(fault_page) : NULL;
+
+	/*
+	 * Ensure the lock is held on the correct
+	 * folio after the split
+	 */
+	if (!new_fault_folio) {
+		folio_unlock(folio);
+		folio_put(folio);
+	} else if (folio != new_fault_folio) {
+		if (new_fault_folio != fault_folio) {
+			folio_get(new_fault_folio);
+			folio_lock(new_fault_folio);
+		}
+		folio_unlock(folio);
+		folio_put(folio);
+	}
+
+	return 0;
+}
+
 static int hmm_vma_handle_migrate_prepare_pmd(const struct mm_walk *walk,
 					      pmd_t *pmdp,
 					      unsigned long start,
 					      unsigned long end,
 					      unsigned long *hmm_pfn)
 {
-	// TODO: implement migration entry insertion
-	return 0;
+	struct hmm_vma_walk *hmm_vma_walk = walk->private;
+	struct hmm_range *range = hmm_vma_walk->range;
+	struct migrate_vma *migrate = range->migrate;
+	struct folio *fault_folio = NULL;
+	struct folio *folio;
+	enum migrate_vma_info minfo;
+	unsigned long i;
+	int r = 0;
+
+	minfo = hmm_select_migrate(range);
+	if (!minfo)
+		return r;
+
+	WARN_ON_ONCE(!migrate);
+	HMM_ASSERT_PMD_LOCKED(hmm_vma_walk, true);
+
+	fault_folio = migrate->fault_page ?
+			page_folio(migrate->fault_page) : NULL;
+
+	if (pmd_none(*pmdp))
+		return hmm_pfns_fill(start, end, hmm_vma_walk, 0);
+
+	if (!(hmm_pfn[0] & HMM_PFN_VALID))
+		goto out;
+
+	if (pmd_trans_huge(*pmdp)) {
+		if (!(minfo & MIGRATE_VMA_SELECT_SYSTEM))
+			goto out;
+
+		folio = pmd_folio(*pmdp);
+		if (is_huge_zero_folio(folio))
+			return hmm_pfns_fill(start, end, hmm_vma_walk, 0);
+
+	} else if (!pmd_present(*pmdp)) {
+		const softleaf_t entry = softleaf_from_pmd(*pmdp);
+
+		folio = softleaf_to_folio(entry);
+
+		if (!softleaf_is_device_private(entry))
+			goto out;
+
+		if (!(minfo & MIGRATE_VMA_SELECT_DEVICE_PRIVATE))
+			goto out;
+
+		if (folio->pgmap->owner != migrate->pgmap_owner)
+			goto out;
+
+	} else {
+		hmm_vma_walk->last = start;
+		return -EBUSY;
+	}
+
+	folio_get(folio);
+
+	if (folio != fault_folio && unlikely(!folio_trylock(folio))) {
+		folio_put(folio);
+		hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR);
+		return 0;
+	}
+
+	if (thp_migration_supported() &&
+	    (migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) &&
+	    (IS_ALIGNED(start, HPAGE_PMD_SIZE) &&
+	     IS_ALIGNED(end, HPAGE_PMD_SIZE))) {
+
+		struct page_vma_mapped_walk pvmw = {
+			.ptl = hmm_vma_walk->ptl,
+			.address = start,
+			.pmd = pmdp,
+			.vma = walk->vma,
+		};
+
+		hmm_pfn[0] |= HMM_PFN_MIGRATE | HMM_PFN_COMPOUND;
+
+		r = set_pmd_migration_entry(&pvmw, folio_page(folio, 0));
+		if (r) {
+			hmm_pfn[0] &= ~(HMM_PFN_MIGRATE | HMM_PFN_COMPOUND);
+			r = -ENOENT; // fallback
+			goto unlock_out;
+		}
+		for (i = 1, start += PAGE_SIZE; start < end; start += PAGE_SIZE, i++)
+			hmm_pfn[i] &= HMM_PFN_INOUT_FLAGS;
+
+	} else {
+		r = -ENOENT; // fallback
+		goto unlock_out;
+	}
+
+
+out:
+	return r;
+
+unlock_out:
+	if (folio != fault_folio)
+		folio_unlock(folio);
+	folio_put(folio);
+	goto out;
 }
 
+/*
+ * Install migration entries if migration is requested, either from
+ * the fault or the migrate path.
+ *
+ */
 static int hmm_vma_handle_migrate_prepare(const struct mm_walk *walk,
 					  pmd_t *pmdp,
-					  pte_t *pte,
+					  pte_t *ptep,
 					  unsigned long addr,
-					  unsigned long *hmm_pfn)
+					  unsigned long *hmm_pfn,
+					  bool *unmapped)
 {
-	// TODO: implement migration entry insertion
+	struct hmm_vma_walk *hmm_vma_walk = walk->private;
+	struct hmm_range *range = hmm_vma_walk->range;
+	struct migrate_vma *migrate = range->migrate;
+	struct mm_struct *mm = walk->vma->vm_mm;
+	struct folio *fault_folio = NULL;
+	enum migrate_vma_info minfo;
+	struct dev_pagemap *pgmap;
+	bool anon_exclusive;
+	struct folio *folio;
+	unsigned long pfn;
+	struct page *page;
+	softleaf_t entry;
+	pte_t pte, swp_pte;
+	bool writable = false;
+
+	// Do we want to migrate at all?
+	minfo = hmm_select_migrate(range);
+	if (!minfo)
+		return 0;
+
+	WARN_ON_ONCE(!migrate);
+	HMM_ASSERT_PTE_LOCKED(hmm_vma_walk, true);
+
+	fault_folio = migrate->fault_page ?
+			page_folio(migrate->fault_page) : NULL;
+
+	pte = ptep_get(ptep);
+
+	if (pte_none(pte)) {
+		// migrate without faulting case
+		if (vma_is_anonymous(walk->vma)) {
+			*hmm_pfn &= HMM_PFN_INOUT_FLAGS;
+			*hmm_pfn |= HMM_PFN_MIGRATE;
+			goto out;
+		}
+	}
+
+	if (!(hmm_pfn[0] & HMM_PFN_VALID))
+		goto out;
+
+	if (!pte_present(pte)) {
+		/*
+		 * Only care about unaddressable device page special
+		 * page table entry. Other special swap entries are not
+		 * migratable, and we ignore regular swapped pages.
+		 */
+		entry = softleaf_from_pte(pte);
+		if (!softleaf_is_device_private(entry))
+			goto out;
+
+		if (!(minfo & MIGRATE_VMA_SELECT_DEVICE_PRIVATE))
+			goto out;
+
+		page = softleaf_to_page(entry);
+		folio = page_folio(page);
+		if (folio->pgmap->owner != migrate->pgmap_owner)
+			goto out;
+
+		if (folio_test_large(folio)) {
+			int ret;
+
+			pte_unmap_unlock(ptep, hmm_vma_walk->ptl);
+			hmm_vma_walk->ptelocked = false;
+			ret = migrate_vma_split_folio(folio,
+						      migrate->fault_page);
+			if (ret)
+				goto out_error;
+			return -EAGAIN;
+		}
+
+		pfn = page_to_pfn(page);
+		if (softleaf_is_device_private_write(entry))
+			writable = true;
+	} else {
+		pfn = pte_pfn(pte);
+		if (is_zero_pfn(pfn) &&
+		    (minfo & MIGRATE_VMA_SELECT_SYSTEM)) {
+			*hmm_pfn = HMM_PFN_MIGRATE;
+			goto out;
+		}
+		page = vm_normal_page(walk->vma, addr, pte);
+		if (page && !is_zone_device_page(page) &&
+		    !(minfo & MIGRATE_VMA_SELECT_SYSTEM)) {
+			goto out;
+		} else if (page && is_device_coherent_page(page)) {
+			pgmap = page_pgmap(page);
+
+			if (!(minfo &
+			      MIGRATE_VMA_SELECT_DEVICE_COHERENT) ||
+			    pgmap->owner != migrate->pgmap_owner)
+				goto out;
+		}
+
+		folio = page ? page_folio(page) : NULL;
+		if (folio && folio_test_large(folio)) {
+			int ret;
+
+			pte_unmap_unlock(ptep, hmm_vma_walk->ptl);
+			hmm_vma_walk->ptelocked = false;
+
+			ret = migrate_vma_split_folio(folio,
+						      migrate->fault_page);
+			if (ret)
+				goto out_error;
+			return -EAGAIN;
+		}
+
+		writable = pte_write(pte);
+	}
+
+	if (!page || !page->mapping)
+		goto out;
+
+	/*
+	 * By getting a reference on the folio we pin it and that blocks
+	 * any kind of migration. Side effect is that it "freezes" the
+	 * pte.
+	 *
+	 * We drop this reference after isolating the folio from the lru
+	 * for non-device folios (device folios are not on the lru and thus
+	 * can't be dropped from it).
+	 */
+	folio = page_folio(page);
+	folio_get(folio);
+
+	/*
+	 * We rely on folio_trylock() to avoid deadlock between
+	 * concurrent migrations where each is waiting on the other's
+	 * folio lock. If we can't immediately lock the folio we fail this
+	 * migration as it is only best effort anyway.
+	 *
+	 * If we can lock the folio it's safe to set up a migration entry
+	 * now. In the common case where the folio is mapped once in a
+	 * single process setting up the migration entry now is an
+	 * optimisation to avoid walking the rmap later with
+	 * try_to_migrate().
+	 */
+
+	if (fault_folio == folio || folio_trylock(folio)) {
+		anon_exclusive = folio_test_anon(folio) &&
+				 PageAnonExclusive(page);
+
+		flush_cache_page(walk->vma, addr, pfn);
+
+		if (anon_exclusive) {
+			pte = ptep_clear_flush(walk->vma, addr, ptep);
+
+			if (folio_try_share_anon_rmap_pte(folio, page)) {
+				set_pte_at(mm, addr, ptep, pte);
+				folio_unlock(folio);
+				folio_put(folio);
+				goto out;
+			}
+		} else {
+			pte = ptep_get_and_clear(mm, addr, ptep);
+		}
+
+		if (pte_dirty(pte))
+			folio_mark_dirty(folio);
+
+		/* Setup special migration page table entry */
+		if (writable)
+			entry = make_writable_migration_entry(pfn);
+		else if (anon_exclusive)
+			entry = make_readable_exclusive_migration_entry(pfn);
+		else
+			entry = make_readable_migration_entry(pfn);
+
+		if (pte_present(pte)) {
+			if (pte_young(pte))
+				entry = make_migration_entry_young(entry);
+			if (pte_dirty(pte))
+				entry = make_migration_entry_dirty(entry);
+		}
+
+		swp_pte = swp_entry_to_pte(entry);
+		if (pte_present(pte)) {
+			if (pte_soft_dirty(pte))
+				swp_pte = pte_swp_mksoft_dirty(swp_pte);
+			if (pte_uffd_wp(pte))
+				swp_pte = pte_swp_mkuffd_wp(swp_pte);
+		} else {
+			if (pte_swp_soft_dirty(pte))
+				swp_pte = pte_swp_mksoft_dirty(swp_pte);
+			if (pte_swp_uffd_wp(pte))
+				swp_pte = pte_swp_mkuffd_wp(swp_pte);
+		}
+
+		set_pte_at(mm, addr, ptep, swp_pte);
+		folio_remove_rmap_pte(folio, page, walk->vma);
+		folio_put(folio);
+		*hmm_pfn |= HMM_PFN_MIGRATE;
+
+		if (pte_present(pte))
+			*unmapped = true;
+	} else
+		folio_put(folio);
+out:
 	return 0;
+out_error:
+	return -EFAULT;
 }
 
 static int hmm_vma_walk_split(pmd_t *pmdp,
 			      unsigned long addr,
 			      struct mm_walk *walk)
 {
-	// TODO : implement split
-	return 0;
-}
+	struct hmm_vma_walk *hmm_vma_walk = walk->private;
+	struct hmm_range *range = hmm_vma_walk->range;
+	struct migrate_vma *migrate = range->migrate;
+	struct folio *folio, *fault_folio;
+	spinlock_t *ptl;
+	int ret = 0;
 
+	HMM_ASSERT_UNLOCKED(hmm_vma_walk);
+
+	fault_folio = (migrate && migrate->fault_page) ?
+			page_folio(migrate->fault_page) : NULL;
+
+	ptl = pmd_lock(walk->mm, pmdp);
+	if (unlikely(!pmd_trans_huge(*pmdp))) {
+		spin_unlock(ptl);
+		goto out;
+	}
+
+	folio = pmd_folio(*pmdp);
+	if (is_huge_zero_folio(folio)) {
+		spin_unlock(ptl);
+		split_huge_pmd(walk->vma, pmdp, addr);
+	} else {
+		folio_get(folio);
+		spin_unlock(ptl);
+
+		if (folio != fault_folio) {
+			if (unlikely(!folio_trylock(folio))) {
+				folio_put(folio);
+				ret = -EBUSY;
+				goto out;
+			}
+		} else
+			folio_put(folio);
+
+		ret = split_folio(folio);
+		if (fault_folio != folio) {
+			folio_unlock(folio);
+			folio_put(folio);
+		}
+
+	}
+out:
+	return ret;
+}
 #else
 static int hmm_vma_handle_migrate_prepare_pmd(const struct mm_walk *walk,
 					      pmd_t *pmdp,
@@ -518,7 +908,8 @@ static int hmm_vma_handle_migrate_prepare(const struct mm_walk *walk,
 					  pmd_t *pmdp,
 					  pte_t *pte,
 					  unsigned long addr,
-					  unsigned long *hmm_pfn)
+					  unsigned long *hmm_pfn,
+					  bool *unmapped)
 {
 	return 0;
 }
@@ -573,6 +964,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 	enum migrate_vma_info minfo;
 	unsigned long addr = start;
 	unsigned long *hmm_pfns;
+	bool unmapped = false;
 	unsigned long i;
 	pte_t *ptep;
 	pmd_t pmd;
@@ -654,7 +1046,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 		goto again;
 	}
 
-	r = hmm_vma_handle_pmd(walk, addr, end, hmm_pfns, pmd);
+	r = hmm_vma_handle_pmd(walk, start, end, hmm_pfns, pmd);
 
 	// If not migrating we are done
 	if (r || !minfo) {
@@ -723,9 +1115,13 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 			return r;
 		}
 
-		r = hmm_vma_handle_migrate_prepare(walk, pmdp, ptep, addr, hmm_pfns);
+		r = hmm_vma_handle_migrate_prepare(walk, pmdp, ptep, addr, hmm_pfns, &unmapped);
 		if (r == -EAGAIN) {
 			HMM_ASSERT_UNLOCKED(hmm_vma_walk);
+			if (unmapped) {
+				flush_tlb_range(walk->vma, start, addr);
+				unmapped = false;
+			}
 			goto again;
 		}
 		if (r) {
@@ -733,6 +1129,8 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 			break;
 		}
 	}
+	if (unmapped)
+		flush_tlb_range(walk->vma, start, addr);
 
 	if (hmm_vma_walk->ptelocked) {
 		pte_unmap_unlock(ptep - 1, hmm_vma_walk->ptl);
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index a4062fd21490..7ca5dc80d39b 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -734,7 +734,17 @@ static void migrate_vma_unmap(struct migrate_vma *migrate)
  */
 int migrate_vma_setup(struct migrate_vma *args)
 {
+	int ret;
 	long nr_pages = (args->end - args->start) >> PAGE_SHIFT;
+	struct hmm_range range = {
+		.notifier = NULL,
+		.start = args->start,
+		.end = args->end,
+		.hmm_pfns = args->src,
+		.dev_private_owner = args->pgmap_owner,
+		.migrate = args,
+		.default_flags = HMM_PFN_REQ_MIGRATE
+	};
 
 	args->start &= PAGE_MASK;
 	args->end &= PAGE_MASK;
@@ -759,17 +769,25 @@ int migrate_vma_setup(struct migrate_vma *args)
 	args->cpages = 0;
 	args->npages = 0;
 
-	migrate_vma_collect(args);
+	if (args->flags & MIGRATE_VMA_FAULT)
+		range.default_flags |= HMM_PFN_REQ_FAULT;
+
+	ret = hmm_range_fault(&range);
 
-	if (args->cpages)
-		migrate_vma_unmap(args);
+	migrate_hmm_range_setup(&range);
+
+	/* On error, remove the installed migration PTEs */
+	if (ret) {
+		migrate_vma_pages(args);
+		migrate_vma_finalize(args);
+	}
 
 	/*
 	 * At this point pages are locked and unmapped, and thus they have
 	 * stable content and can safely be copied to destination memory that
 	 * is allocated by the drivers.
 	 */
-	return 0;
+	return ret;
 
 }
 EXPORT_SYMBOL(migrate_vma_setup);
-- 
2.50.0
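A closing usage note (editor's sketch, not part of the patch): the new
MIGRATE_VMA_FAULT flag lets a caller ask the collect walk to also fault
pages in, which migrate_vma_setup() translates into HMM_PFN_REQ_FAULT on
the hmm_range. Reusing the hypothetical args from the sketch above:

	/* Also fault in absent pages during the collect pagewalk */
	args.flags |= MIGRATE_VMA_FAULT;
	ret = migrate_vma_setup(&args);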