From nobody Sat Feb 7 18:20:28 2026
From: mpenttil@redhat.com
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Mika Penttilä <mpenttil@redhat.com>,
	David Hildenbrand, Jason Gunthorpe, Leon Romanovsky, Alistair Popple,
	Balbir Singh, Zi Yan, Matthew Brost
Subject: [PATCH v2 1/3] mm: unified hmm fault and migrate device pagewalk paths
Date: Mon, 19 Jan 2026 13:25:00 +0200
Message-ID: <20260119112502.645059-2-mpenttil@redhat.com>
X-Mailer: git-send-email 2.50.0
In-Reply-To: <20260119112502.645059-1-mpenttil@redhat.com>
References: <20260119112502.645059-1-mpenttil@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Mika Penttilä <mpenttil@redhat.com>

Currently, the way device page faulting and migration works is not
optimal if you want to do both fault handling and migration at once.

Being able to migrate non-present pages (or pages mapped with incorrect
permissions, e.g. COW) to the GPU requires doing either of the
following sequences:

1. hmm_range_fault() - fault in non-present pages with correct
   permissions, etc.
2. migrate_vma_*() - migrate the pages

Or:

1. migrate_vma_*() - migrate present pages
2. If non-present pages are detected by migrate_vma_*():
   a) call hmm_range_fault() to fault the pages in
   b) call migrate_vma_*() again to migrate the now-present pages

The problem with the first sequence is that you always have to do two
page walks, even though most of the time the pages are present or
zero-page mappings, so the common case takes a performance hit. The
second sequence is better for the common case, but far worse if pages
aren't present, because then you have to walk the page tables three
times (once to find that the page is not present, once so
hmm_range_fault() can find a non-present page to fault in, and once
again to set up the migration). It is also tricky to code correctly.

We should be able to walk the page table once, faulting pages in as
required and replacing them with migration entries if requested.

Add a new flag to the HMM APIs, HMM_PFN_REQ_MIGRATE, which requests
that migration entries also be prepared during fault handling. Also,
for the migrate_vma_setup() call paths, add a flag, MIGRATE_VMA_FAULT,
to request fault handling as part of the migration walk.
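The intended driver-side flow is sketched below; it follows the updated
hmm_range_fault() documentation and the dmirror test in this series.
The helper name and the device allocation/copy step are placeholders
for driver-specific code, and locking/notifier-retry handling is
omitted:

	/*
	 * Sketch only: one pagewalk faults pages in and installs
	 * migration entries; afterwards the normal migrate_vma_*()
	 * sequence applies. Caller holds mmap_read_lock(mm).
	 */
	static int fault_and_migrate(struct mmu_interval_notifier *notifier,
				     unsigned long start, unsigned long end,
				     unsigned long *src_pfns,
				     unsigned long *dst_pfns, void *owner)
	{
		struct migrate_vma migrate = {
			.src = src_pfns,
			.dst = dst_pfns,
			.pgmap_owner = owner,
		};
		struct hmm_range range = {
			.notifier = notifier,
			.start = start,
			.end = end,
			.hmm_pfns = src_pfns,
			.default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_MIGRATE,
			.dev_private_owner = owner,
			.migrate = &migrate,
		};
		int ret;

		/* One walk: fault pages in and prepare migration entries. */
		ret = hmm_range_fault(&range);

		/* Convert hmm_pfns to migrate pfns; replaces migrate_vma_setup(). */
		migrate_hmm_range_setup(&range);

		/* ... allocate device pages and fill migrate.dst[] here ... */

		migrate_vma_pages(&migrate);
		migrate_vma_finalize(&migrate);
		return ret;
	}

For the plain migration entry point, migrate_vma_setup() now performs
the same walk internally; setting MIGRATE_VMA_FAULT in migrate_vma.flags
additionally faults non-present pages in during that walk.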
Cc: David Hildenbrand
Cc: Jason Gunthorpe
Cc: Leon Romanovsky
Cc: Alistair Popple
Cc: Balbir Singh
Cc: Zi Yan
Cc: Matthew Brost
Suggested-by: Alistair Popple
Signed-off-by: Mika Penttilä
---
 include/linux/hmm.h     |  19 +-
 include/linux/migrate.h |  27 +-
 mm/hmm.c                | 770 +++++++++++++++++++++++++++++++++++++---
 mm/migrate_device.c     |  86 ++++-
 4 files changed, 839 insertions(+), 63 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index db75ffc949a7..e2f53e155af2 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -12,7 +12,7 @@
 #include
 
 struct mmu_interval_notifier;
-
+struct migrate_vma;
 /*
  * On output:
  *  0             - The page is faultable and a future call with
@@ -27,6 +27,7 @@ struct mmu_interval_notifier;
  * HMM_PFN_P2PDMA_BUS - Bus mapped P2P transfer
  * HMM_PFN_DMA_MAPPED - Flag preserved on input-to-output transformation
  *                      to mark that page is already DMA mapped
+ * HMM_PFN_MIGRATE - Migrate PTE installed
  *
  * On input:
  *  0                 - Return the current state of the page, do not fault it.
@@ -34,6 +35,7 @@ struct mmu_interval_notifier;
  *                     will fail
  * HMM_PFN_REQ_WRITE - The output must have HMM_PFN_WRITE or hmm_range_fault()
  *                     will fail. Must be combined with HMM_PFN_REQ_FAULT.
+ * HMM_PFN_REQ_MIGRATE - For default_flags, request to migrate to device
  */
 enum hmm_pfn_flags {
 	/* Output fields and flags */
@@ -48,15 +50,25 @@ enum hmm_pfn_flags {
 	HMM_PFN_P2PDMA     = 1UL << (BITS_PER_LONG - 5),
 	HMM_PFN_P2PDMA_BUS = 1UL << (BITS_PER_LONG - 6),
 
-	HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 11),
+	/* Migrate request */
+	HMM_PFN_MIGRATE     = 1UL << (BITS_PER_LONG - 7),
+	HMM_PFN_COMPOUND    = 1UL << (BITS_PER_LONG - 8),
+	HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 13),
 
 	/* Input flags */
 	HMM_PFN_REQ_FAULT = HMM_PFN_VALID,
 	HMM_PFN_REQ_WRITE = HMM_PFN_WRITE,
+	HMM_PFN_REQ_MIGRATE = HMM_PFN_MIGRATE,
 
 	HMM_PFN_FLAGS = ~((1UL << HMM_PFN_ORDER_SHIFT) - 1),
 };
 
+enum {
+	/* These flags are carried from input-to-output */
+	HMM_PFN_INOUT_FLAGS = HMM_PFN_DMA_MAPPED | HMM_PFN_P2PDMA |
+			      HMM_PFN_P2PDMA_BUS,
+};
+
 /*
  * hmm_pfn_to_page() - return struct page pointed to by a device entry
  *
@@ -107,6 +119,7 @@ static inline unsigned int hmm_pfn_to_map_order(unsigned long hmm_pfn)
  * @default_flags: default flags for the range (write, read, ... see hmm doc)
  * @pfn_flags_mask: allows to mask pfn flags so that only default_flags matter
  * @dev_private_owner: owner of device private pages
+ * @migrate: structure for migrating the associated vma
  */
 struct hmm_range {
 	struct mmu_interval_notifier *notifier;
@@ -117,12 +130,14 @@ struct hmm_range {
 	unsigned long		default_flags;
 	unsigned long		pfn_flags_mask;
 	void			*dev_private_owner;
+	struct migrate_vma	*migrate;
 };
 
 /*
  * Please see Documentation/mm/hmm.rst for how to use the range API.
  */
 int hmm_range_fault(struct hmm_range *range);
+int hmm_range_migrate_prepare(struct hmm_range *range, struct migrate_vma **pargs);
 
 /*
  * HMM_RANGE_DEFAULT_TIMEOUT - default timeout (ms) when waiting for a range
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 26ca00c325d9..104eda2dd881 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -3,6 +3,7 @@
 #define _LINUX_MIGRATE_H
 
 #include
+#include
 #include
 #include
 #include
@@ -97,6 +98,16 @@ static inline int set_movable_ops(const struct movable_operations *ops, enum pag
 	return -ENOSYS;
 }
 
+enum migrate_vma_info {
+	MIGRATE_VMA_SELECT_NONE = 0,
+	MIGRATE_VMA_SELECT_COMPOUND = MIGRATE_VMA_SELECT_NONE,
+};
+
+static inline enum migrate_vma_info hmm_select_migrate(struct hmm_range *range)
+{
+	return MIGRATE_VMA_SELECT_NONE;
+}
+
 #endif /* CONFIG_MIGRATION */
 
 #ifdef CONFIG_NUMA_BALANCING
@@ -140,11 +151,12 @@ static inline unsigned long migrate_pfn(unsigned long pfn)
 	return (pfn << MIGRATE_PFN_SHIFT) | MIGRATE_PFN_VALID;
 }
 
-enum migrate_vma_direction {
+enum migrate_vma_info {
 	MIGRATE_VMA_SELECT_SYSTEM = 1 << 0,
 	MIGRATE_VMA_SELECT_DEVICE_PRIVATE = 1 << 1,
 	MIGRATE_VMA_SELECT_DEVICE_COHERENT = 1 << 2,
 	MIGRATE_VMA_SELECT_COMPOUND = 1 << 3,
+	MIGRATE_VMA_FAULT = 1 << 4,
 };
 
 struct migrate_vma {
@@ -182,6 +194,17 @@ struct migrate_vma {
 	struct page		*fault_page;
 };
 
+static inline enum migrate_vma_info hmm_select_migrate(struct hmm_range *range)
+{
+	enum migrate_vma_info minfo;
+
+	minfo = range->migrate ? range->migrate->flags : 0;
+	minfo |= (range->default_flags & HMM_PFN_REQ_MIGRATE) ?
+		MIGRATE_VMA_SELECT_SYSTEM : 0;
+
+	return minfo;
+}
+
 int migrate_vma_setup(struct migrate_vma *args);
 void migrate_vma_pages(struct migrate_vma *migrate);
 void migrate_vma_finalize(struct migrate_vma *migrate);
@@ -192,7 +215,7 @@ void migrate_device_pages(unsigned long *src_pfns, unsigned long *dst_pfns,
 			unsigned long npages);
 void migrate_device_finalize(unsigned long *src_pfns,
 			unsigned long *dst_pfns, unsigned long npages);
-
+void migrate_hmm_range_setup(struct hmm_range *range);
 #endif /* CONFIG_MIGRATION */
 
 #endif /* _LINUX_MIGRATE_H */
diff --git a/mm/hmm.c b/mm/hmm.c
index 4ec74c18bef6..1fdb8665eeec 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -27,12 +28,20 @@
 #include
 #include
 #include
+#include
 
 #include "internal.h"
 
 struct hmm_vma_walk {
-	struct hmm_range	*range;
-	unsigned long		last;
+	struct mmu_notifier_range	mmu_range;
+	struct vm_area_struct		*vma;
+	struct hmm_range		*range;
+	unsigned long			start;
+	unsigned long			end;
+	unsigned long			last;
+	bool				locked;
+	bool				pmdlocked;
+	spinlock_t			*ptl;
 };
 
 enum {
@@ -41,21 +50,38 @@ enum {
 	HMM_NEED_ALL_BITS = HMM_NEED_FAULT | HMM_NEED_WRITE_FAULT,
 };
 
-enum {
-	/* These flags are carried from input-to-output */
-	HMM_PFN_INOUT_FLAGS = HMM_PFN_DMA_MAPPED | HMM_PFN_P2PDMA |
-			      HMM_PFN_P2PDMA_BUS,
-};
-
 static int hmm_pfns_fill(unsigned long addr, unsigned long end,
-			 struct hmm_range *range, unsigned long cpu_flags)
+			 struct hmm_vma_walk *hmm_vma_walk, unsigned long cpu_flags)
 {
+	struct hmm_range *range = hmm_vma_walk->range;
 	unsigned long i = (addr - range->start) >> PAGE_SHIFT;
+	enum migrate_vma_info minfo;
+	bool migrate = false;
+
+	minfo = hmm_select_migrate(range);
+	if (cpu_flags != HMM_PFN_ERROR) {
+		if (minfo && (vma_is_anonymous(hmm_vma_walk->vma))) {
+			cpu_flags |= (HMM_PFN_VALID | HMM_PFN_MIGRATE);
+			migrate = true;
+		}
+	}
+
+	if (migrate && thp_migration_supported() &&
+	    (minfo & MIGRATE_VMA_SELECT_COMPOUND) &&
+	    IS_ALIGNED(addr, HPAGE_PMD_SIZE) &&
+	    IS_ALIGNED(end, HPAGE_PMD_SIZE)) {
+		range->hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
+		range->hmm_pfns[i] |= cpu_flags | HMM_PFN_COMPOUND;
+		addr += PAGE_SIZE;
+		i++;
+		cpu_flags = 0;
+	}
 
 	for (; addr < end; addr += PAGE_SIZE, i++) {
 		range->hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
 		range->hmm_pfns[i] |= cpu_flags;
 	}
+
 	return 0;
 }
 
@@ -171,11 +197,11 @@ static int hmm_vma_walk_hole(unsigned long addr, unsigned long end,
 	if (!walk->vma) {
 		if (required_fault)
 			return -EFAULT;
-		return hmm_pfns_fill(addr, end, range, HMM_PFN_ERROR);
+		return hmm_pfns_fill(addr, end, hmm_vma_walk, HMM_PFN_ERROR);
 	}
 	if (required_fault)
 		return hmm_vma_fault(addr, end, required_fault, walk);
-	return hmm_pfns_fill(addr, end, range, 0);
+	return hmm_pfns_fill(addr, end, hmm_vma_walk, 0);
 }
 
 static inline unsigned long hmm_pfn_flags_order(unsigned long order)
@@ -208,8 +234,13 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
 	cpu_flags = pmd_to_hmm_pfn_flags(range, pmd);
 	required_fault = hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages,
 					      cpu_flags);
-	if (required_fault)
+	if (required_fault) {
+		if (hmm_vma_walk->pmdlocked) {
+			spin_unlock(hmm_vma_walk->ptl);
+			hmm_vma_walk->pmdlocked = false;
+		}
 		return hmm_vma_fault(addr, end, required_fault, walk);
+	}
 
 	pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
 	for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) {
@@ -289,14 +320,28 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 		goto fault;
 
 	if (softleaf_is_migration(entry)) {
-		pte_unmap(ptep);
-		hmm_vma_walk->last = addr;
-		migration_entry_wait(walk->mm, pmdp, addr);
-		return -EBUSY;
+		if (!hmm_select_migrate(range)) {
+			if (hmm_vma_walk->locked) {
+				pte_unmap_unlock(ptep, hmm_vma_walk->ptl);
+				hmm_vma_walk->locked = false;
+			} else
+				pte_unmap(ptep);
+
+			hmm_vma_walk->last = addr;
+			migration_entry_wait(walk->mm, pmdp, addr);
+			return -EBUSY;
+		} else
+			goto out;
 	}
 
 	/* Report error for everything else */
-	pte_unmap(ptep);
+
+	if (hmm_vma_walk->locked) {
+		pte_unmap_unlock(ptep, hmm_vma_walk->ptl);
+		hmm_vma_walk->locked = false;
+	} else
+		pte_unmap(ptep);
+
 	return -EFAULT;
 }
 
@@ -313,7 +358,12 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 	if (!vm_normal_page(walk->vma, addr, pte) &&
 	    !is_zero_pfn(pte_pfn(pte))) {
 		if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) {
-			pte_unmap(ptep);
+			if (hmm_vma_walk->locked) {
+				pte_unmap_unlock(ptep, hmm_vma_walk->ptl);
+				hmm_vma_walk->locked = false;
+			} else
+				pte_unmap(ptep);
+
 			return -EFAULT;
 		}
 		new_pfn_flags = HMM_PFN_ERROR;
@@ -326,7 +376,11 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 	return 0;
 
 fault:
-	pte_unmap(ptep);
+	if (hmm_vma_walk->locked) {
+		pte_unmap_unlock(ptep, hmm_vma_walk->ptl);
+		hmm_vma_walk->locked = false;
+	} else
+		pte_unmap(ptep);
 	/* Fault any virtual address we were asked to fault */
 	return hmm_vma_fault(addr, end, required_fault, walk);
 }
@@ -370,13 +424,18 @@ static int hmm_vma_handle_absent_pmd(struct mm_walk *walk, unsigned long start,
 	required_fault = hmm_range_need_fault(hmm_vma_walk, hmm_pfns,
 					      npages, 0);
 	if (required_fault) {
-		if (softleaf_is_device_private(entry))
+		if (softleaf_is_device_private(entry)) {
+			if (hmm_vma_walk->pmdlocked) {
+				spin_unlock(hmm_vma_walk->ptl);
+				hmm_vma_walk->pmdlocked = false;
+			}
 			return hmm_vma_fault(addr, end, required_fault, walk);
+		}
 		else
 			return -EFAULT;
 	}
 
-	return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
+	return hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR);
 }
 #else
 static int hmm_vma_handle_absent_pmd(struct mm_walk *walk, unsigned long start,
@@ -384,15 +443,486 @@ static int hmm_vma_handle_absent_pmd(struct mm_walk *walk, unsigned long start,
 				     pmd_t pmd)
 {
 	struct hmm_vma_walk *hmm_vma_walk = walk->private;
-	struct hmm_range *range = hmm_vma_walk->range;
 	unsigned long npages = (end - start) >> PAGE_SHIFT;
 
 	if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0))
 		return -EFAULT;
-	return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
+	return hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR);
 }
 #endif /* CONFIG_ARCH_ENABLE_THP_MIGRATION */
 
+#ifdef CONFIG_DEVICE_MIGRATION
+/**
+ * migrate_vma_split_folio() - Helper function to split a THP folio
+ * @folio: the folio to split
+ * @fault_page: struct page associated with the fault if any
+ *
+ * Returns 0 on success
+ */
+static int migrate_vma_split_folio(struct folio *folio,
+				   struct page *fault_page)
+{
+	int ret;
+	struct folio *fault_folio = fault_page ? page_folio(fault_page) : NULL;
+	struct folio *new_fault_folio = NULL;
+
+	if (folio != fault_folio) {
+		folio_get(folio);
+		folio_lock(folio);
+	}
+
+	ret = split_folio(folio);
+	if (ret) {
+		if (folio != fault_folio) {
+			folio_unlock(folio);
+			folio_put(folio);
+		}
+		return ret;
+	}
+
+	new_fault_folio = fault_page ? page_folio(fault_page) : NULL;
+
+	/*
+	 * Ensure the lock is held on the correct
+	 * folio after the split
+	 */
+	if (!new_fault_folio) {
+		folio_unlock(folio);
+		folio_put(folio);
+	} else if (folio != new_fault_folio) {
+		if (new_fault_folio != fault_folio) {
+			folio_get(new_fault_folio);
+			folio_lock(new_fault_folio);
+		}
+		folio_unlock(folio);
+		folio_put(folio);
+	}
+
+	return 0;
+}
+
+static int hmm_vma_handle_migrate_prepare_pmd(const struct mm_walk *walk,
+					      pmd_t *pmdp,
+					      unsigned long start,
+					      unsigned long end,
+					      unsigned long *hmm_pfn)
+{
+	struct hmm_vma_walk *hmm_vma_walk = walk->private;
+	struct hmm_range *range = hmm_vma_walk->range;
+	struct migrate_vma *migrate = range->migrate;
+	struct folio *fault_folio = NULL;
+	struct folio *folio;
+	enum migrate_vma_info minfo;
+	unsigned long i;
+	int r = 0;
+
+	minfo = hmm_select_migrate(range);
+	if (!minfo)
+		return r;
+
+	fault_folio = (migrate && migrate->fault_page) ?
+		page_folio(migrate->fault_page) : NULL;
+
+	if (pmd_none(*pmdp))
+		return hmm_pfns_fill(start, end, hmm_vma_walk, 0);
+
+	if (!(hmm_pfn[0] & HMM_PFN_VALID))
+		goto out;
+
+	if (pmd_trans_huge(*pmdp)) {
+		if (!(minfo & MIGRATE_VMA_SELECT_SYSTEM))
+			goto out;
+
+		folio = pmd_folio(*pmdp);
+		if (is_huge_zero_folio(folio))
+			return hmm_pfns_fill(start, end, hmm_vma_walk, 0);
+
+	} else if (!pmd_present(*pmdp)) {
+		const softleaf_t entry = softleaf_from_pmd(*pmdp);
+
+		folio = softleaf_to_folio(entry);
+
+		if (!softleaf_is_device_private(entry))
+			goto out;
+
+		if (!(minfo & MIGRATE_VMA_SELECT_DEVICE_PRIVATE))
+			goto out;
+		if (folio->pgmap->owner != migrate->pgmap_owner)
+			goto out;
+
+	} else {
+		hmm_vma_walk->last = start;
+		return -EBUSY;
+	}
+
+	folio_get(folio);
+
+	if (folio != fault_folio && unlikely(!folio_trylock(folio))) {
+		folio_put(folio);
+		hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR);
+		return 0;
+	}
+
+	if (thp_migration_supported() &&
+	    (migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) &&
+	    (IS_ALIGNED(start, HPAGE_PMD_SIZE) &&
+	     IS_ALIGNED(end, HPAGE_PMD_SIZE))) {
+
+		struct page_vma_mapped_walk pvmw = {
+			.ptl = hmm_vma_walk->ptl,
+			.address = start,
+			.pmd = pmdp,
+			.vma = walk->vma,
+		};
+
+		hmm_pfn[0] |= HMM_PFN_MIGRATE | HMM_PFN_COMPOUND;
+
+		r = set_pmd_migration_entry(&pvmw, folio_page(folio, 0));
+		if (r) {
+			hmm_pfn[0] &= ~(HMM_PFN_MIGRATE | HMM_PFN_COMPOUND);
+			r = -ENOENT; // fallback
+			goto unlock_out;
+		}
+		for (i = 1, start += PAGE_SIZE; start < end; start += PAGE_SIZE, i++)
+			hmm_pfn[i] &= HMM_PFN_INOUT_FLAGS;
+
+	} else {
+		r = -ENOENT; // fallback
+		goto unlock_out;
+	}
+
+out:
+	return r;
+
+unlock_out:
+	if (folio != fault_folio)
+		folio_unlock(folio);
+	folio_put(folio);
+	goto out;
+}
+
+/*
+ * Install migration entries if migration is requested, either from the
+ * fault or the migrate path.
+ */
+static int hmm_vma_handle_migrate_prepare(const struct mm_walk *walk,
+					  pmd_t *pmdp,
+					  pte_t *ptep,
+					  unsigned long addr,
+					  unsigned long *hmm_pfn)
+{
+	struct hmm_vma_walk *hmm_vma_walk = walk->private;
+	struct hmm_range *range = hmm_vma_walk->range;
+	struct migrate_vma *migrate = range->migrate;
+	struct mm_struct *mm = walk->vma->vm_mm;
+	struct folio *fault_folio = NULL;
+	enum migrate_vma_info minfo;
+	struct dev_pagemap *pgmap;
+	bool anon_exclusive;
+	struct folio *folio;
+	unsigned long pfn;
+	struct page *page;
+	softleaf_t entry;
+	pte_t pte, swp_pte;
+	bool writable = false;
+
+	// Do we want to migrate at all?
+	minfo = hmm_select_migrate(range);
+	if (!minfo)
+		return 0;
+
+	fault_folio = (migrate && migrate->fault_page) ?
+		page_folio(migrate->fault_page) : NULL;
+
+	if (!hmm_vma_walk->locked) {
+		ptep = pte_offset_map_lock(mm, pmdp, addr, &hmm_vma_walk->ptl);
+		hmm_vma_walk->locked = true;
+	}
+	pte = ptep_get(ptep);
+
+	if (pte_none(pte)) {
+		// migrate without faulting case
+		if (vma_is_anonymous(walk->vma)) {
+			*hmm_pfn &= HMM_PFN_INOUT_FLAGS;
+			*hmm_pfn |= HMM_PFN_MIGRATE | HMM_PFN_VALID;
+			goto out;
+		}
+	}
+
+	if (!(hmm_pfn[0] & HMM_PFN_VALID))
+		goto out;
+
+	if (!pte_present(pte)) {
+		/*
+		 * Only care about unaddressable device page special
+		 * page table entry. Other special swap entries are not
+		 * migratable, and we ignore regular swapped page.
+		 */
+		entry = softleaf_from_pte(pte);
+		if (!softleaf_is_device_private(entry))
+			goto out;
+
+		if (!(minfo & MIGRATE_VMA_SELECT_DEVICE_PRIVATE))
+			goto out;
+
+		page = softleaf_to_page(entry);
+		folio = page_folio(page);
+		if (folio->pgmap->owner != migrate->pgmap_owner)
+			goto out;
+
+		if (folio_test_large(folio)) {
+			int ret;
+
+			pte_unmap_unlock(ptep, hmm_vma_walk->ptl);
+			hmm_vma_walk->locked = false;
+			ret = migrate_vma_split_folio(folio,
+						      migrate->fault_page);
+			if (ret)
+				goto out_error;
+			return -EAGAIN;
+		}
+
+		pfn = page_to_pfn(page);
+		if (softleaf_is_device_private_write(entry))
+			writable = true;
+	} else {
+		pfn = pte_pfn(pte);
+		if (is_zero_pfn(pfn) &&
+		    (minfo & MIGRATE_VMA_SELECT_SYSTEM)) {
+			*hmm_pfn = HMM_PFN_MIGRATE | HMM_PFN_VALID;
+			goto out;
+		}
+		page = vm_normal_page(walk->vma, addr, pte);
+		if (page && !is_zone_device_page(page) &&
+		    !(minfo & MIGRATE_VMA_SELECT_SYSTEM)) {
+			goto out;
+		} else if (page && is_device_coherent_page(page)) {
+			pgmap = page_pgmap(page);
+
+			if (!(minfo &
+			      MIGRATE_VMA_SELECT_DEVICE_COHERENT) ||
+			    pgmap->owner != migrate->pgmap_owner)
+				goto out;
+		}
+
+		folio = page ? page_folio(page) : NULL;
+		if (folio && folio_test_large(folio)) {
+			int ret;
+
+			pte_unmap_unlock(ptep, hmm_vma_walk->ptl);
+			hmm_vma_walk->locked = false;
+
+			ret = migrate_vma_split_folio(folio,
+						      migrate->fault_page);
+			if (ret)
+				goto out_error;
+			return -EAGAIN;
+		}
+
+		writable = pte_write(pte);
+	}
+
+	if (!page || !page->mapping)
+		goto out;
+
+	/*
+	 * By getting a reference on the folio we pin it and that blocks
+	 * any kind of migration. Side effect is that it "freezes" the
+	 * pte.
+	 *
+	 * We drop this reference after isolating the folio from the lru
+	 * for non device folio (device folio are not on the lru and thus
+	 * can't be dropped from it).
+	 */
+	folio = page_folio(page);
+	folio_get(folio);
+
+	/*
+	 * We rely on folio_trylock() to avoid deadlock between
+	 * concurrent migrations where each is waiting on the others
+	 * folio lock. If we can't immediately lock the folio we fail this
+	 * migration as it is only best effort anyway.
+	 *
+	 * If we can lock the folio it's safe to set up a migration entry
+	 * now. In the common case where the folio is mapped once in a
+	 * single process setting up the migration entry now is an
+	 * optimisation to avoid walking the rmap later with
+	 * try_to_migrate().
+	 */
+
+	if (fault_folio == folio || folio_trylock(folio)) {
+		anon_exclusive = folio_test_anon(folio) &&
+				 PageAnonExclusive(page);
+
+		flush_cache_page(walk->vma, addr, pfn);
+
+		if (anon_exclusive) {
+			pte = ptep_clear_flush(walk->vma, addr, ptep);
+
+			if (folio_try_share_anon_rmap_pte(folio, page)) {
+				set_pte_at(mm, addr, ptep, pte);
+				folio_unlock(folio);
+				folio_put(folio);
+				goto out;
+			}
+		} else {
+			pte = ptep_get_and_clear(mm, addr, ptep);
+		}
+
+		if (pte_dirty(pte))
+			folio_mark_dirty(folio);
+
+		/* Setup special migration page table entry */
+		if (writable)
+			entry = make_writable_migration_entry(pfn);
+		else if (anon_exclusive)
+			entry = make_readable_exclusive_migration_entry(pfn);
+		else
+			entry = make_readable_migration_entry(pfn);
+
+		if (pte_present(pte)) {
+			if (pte_young(pte))
+				entry = make_migration_entry_young(entry);
+			if (pte_dirty(pte))
+				entry = make_migration_entry_dirty(entry);
+		}
+
+		swp_pte = swp_entry_to_pte(entry);
+		if (pte_present(pte)) {
+			if (pte_soft_dirty(pte))
+				swp_pte = pte_swp_mksoft_dirty(swp_pte);
+			if (pte_uffd_wp(pte))
+				swp_pte = pte_swp_mkuffd_wp(swp_pte);
+		} else {
+			if (pte_swp_soft_dirty(pte))
+				swp_pte = pte_swp_mksoft_dirty(swp_pte);
+			if (pte_swp_uffd_wp(pte))
+				swp_pte = pte_swp_mkuffd_wp(swp_pte);
+		}
+
+		set_pte_at(mm, addr, ptep, swp_pte);
+		folio_remove_rmap_pte(folio, page, walk->vma);
+		folio_put(folio);
+		*hmm_pfn |= HMM_PFN_MIGRATE;
+
+		if (pte_present(pte))
+			flush_tlb_range(walk->vma, addr, addr + PAGE_SIZE);
+	} else
+		folio_put(folio);
+out:
+	return 0;
+out_error:
+	return -EFAULT;
+}
+
+static int hmm_vma_walk_split(pmd_t *pmdp,
+			      unsigned long addr,
+			      struct mm_walk *walk)
+{
+	struct hmm_vma_walk *hmm_vma_walk = walk->private;
+	struct hmm_range *range = hmm_vma_walk->range;
+	struct migrate_vma *migrate = range->migrate;
+	struct folio *folio, *fault_folio;
+	spinlock_t *ptl;
+	int ret = 0;
+
+	fault_folio = (migrate && migrate->fault_page) ?
+		page_folio(migrate->fault_page) : NULL;
+
+	ptl = pmd_lock(walk->mm, pmdp);
+	if (unlikely(!pmd_trans_huge(*pmdp))) {
+		spin_unlock(ptl);
+		goto out;
+	}
+
+	folio = pmd_folio(*pmdp);
+	if (is_huge_zero_folio(folio)) {
+		spin_unlock(ptl);
+		split_huge_pmd(walk->vma, pmdp, addr);
+	} else {
+		folio_get(folio);
+		spin_unlock(ptl);
+
+		if (folio != fault_folio) {
+			if (unlikely(!folio_trylock(folio))) {
+				folio_put(folio);
+				ret = -EBUSY;
+				goto out;
+			}
+		} else
+			folio_put(folio);
+
+		ret = split_folio(folio);
+		if (fault_folio != folio) {
+			folio_unlock(folio);
+			folio_put(folio);
+		}
+	}
+out:
+	return ret;
+}
+#else
+static int hmm_vma_handle_migrate_prepare_pmd(const struct mm_walk *walk,
+					      pmd_t *pmdp,
+					      unsigned long start,
+					      unsigned long end,
+					      unsigned long *hmm_pfn)
+{
+	return 0;
+}
+
+static int hmm_vma_handle_migrate_prepare(const struct mm_walk *walk,
+					  pmd_t *pmdp,
+					  pte_t *pte,
+					  unsigned long addr,
+					  unsigned long *hmm_pfn)
+{
+	return 0;
+}
+
+static int hmm_vma_walk_split(pmd_t *pmdp,
+			      unsigned long addr,
+			      struct mm_walk *walk)
+{
+	return 0;
+}
+#endif
+
+static int hmm_vma_capture_migrate_range(unsigned long start,
+					 unsigned long end,
+					 struct mm_walk *walk)
+{
+	struct hmm_vma_walk *hmm_vma_walk = walk->private;
+	struct hmm_range *range = hmm_vma_walk->range;
+
+	if (!hmm_select_migrate(range))
+		return 0;
+
+	if (hmm_vma_walk->vma && (hmm_vma_walk->vma != walk->vma))
+		return -ERANGE;
+
+	hmm_vma_walk->vma = walk->vma;
+	hmm_vma_walk->start = start;
+	hmm_vma_walk->end = end;
+
+	if (end - start > range->end - range->start)
+		return -ERANGE;
+
+	if (!hmm_vma_walk->mmu_range.owner) {
+		mmu_notifier_range_init_owner(&hmm_vma_walk->mmu_range, MMU_NOTIFY_MIGRATE, 0,
+					      walk->vma->vm_mm, start, end,
+					      range->dev_private_owner);
+		mmu_notifier_invalidate_range_start(&hmm_vma_walk->mmu_range);
+	}
+
+	return 0;
+}
+
 static int hmm_vma_walk_pmd(pmd_t *pmdp,
 			    unsigned long start,
 			    unsigned long end,
@@ -403,43 +933,112 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 	unsigned long *hmm_pfns =
 		&range->hmm_pfns[(start - range->start) >> PAGE_SHIFT];
 	unsigned long npages = (end - start) >> PAGE_SHIFT;
+	struct mm_struct *mm = walk->vma->vm_mm;
 	unsigned long addr = start;
+	enum migrate_vma_info minfo;
+	unsigned long i;
+	spinlock_t *ptl;
 	pte_t *ptep;
 	pmd_t pmd;
+	int r;
+
+	minfo = hmm_select_migrate(range);
 
 again:
+	hmm_vma_walk->locked = false;
+	hmm_vma_walk->pmdlocked = false;
 	pmd = pmdp_get_lockless(pmdp);
-	if (pmd_none(pmd))
-		return hmm_vma_walk_hole(start, end, -1, walk);
+	if (pmd_none(pmd)) {
+		r = hmm_vma_walk_hole(start, end, -1, walk);
+		if (r || !minfo)
+			return r;
+
+		ptl = pmd_lock(walk->mm, pmdp);
+		if (pmd_none(*pmdp)) {
+			// hmm_vma_walk_hole() filled migration needs
+			spin_unlock(ptl);
+			return r;
+		}
+		spin_unlock(ptl);
+	}
 
 	if (thp_migration_supported() && pmd_is_migration_entry(pmd)) {
-		if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0)) {
+		if (!minfo) {
+			if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0)) {
+				hmm_vma_walk->last = addr;
+				pmd_migration_entry_wait(walk->mm, pmdp);
+				return -EBUSY;
+			}
+		}
+		for (i = 0; addr < end; addr += PAGE_SIZE, i++)
+			hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
+
+		return 0;
+	}
+
+	if (minfo) {
+		hmm_vma_walk->ptl = pmd_lock(mm, pmdp);
+		hmm_vma_walk->pmdlocked = true;
+		pmd = pmdp_get(pmdp);
+	} else
+		pmd = pmdp_get_lockless(pmdp);
+
+	if (pmd_trans_huge(pmd) || !pmd_present(pmd)) {
+
+		if (!pmd_present(pmd)) {
+			r =
 hmm_vma_handle_absent_pmd(walk, start, end, hmm_pfns,
+						      pmd);
+			if (r || !minfo)
+				return r;
+		} else {
+
+			/*
+			 * No need to take pmd_lock here if not migrating,
+			 * even if some other thread is splitting the huge
+			 * pmd we will get that event through mmu_notifier callback.
+			 *
+			 * So just read pmd value and check again it's a transparent
+			 * huge or device mapping one and compute corresponding pfn
+			 * values.
+			 */
+
+			if (!pmd_trans_huge(pmd)) {
+				// must be lockless
+				goto again;
+			}
+
+			r = hmm_vma_handle_pmd(walk, addr, end, hmm_pfns, pmd);
+
+			if (r || !minfo)
+				return r;
+		}
+
+		r = hmm_vma_handle_migrate_prepare_pmd(walk, pmdp, start, end, hmm_pfns);
+
+		if (hmm_vma_walk->pmdlocked) {
+			spin_unlock(hmm_vma_walk->ptl);
+			hmm_vma_walk->pmdlocked = false;
+		}
+
+		if (r == -ENOENT) {
+			r = hmm_vma_walk_split(pmdp, addr, walk);
+			if (r) {
+				/* Split not successful, skip */
+				return hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR);
+			}
+
+			/* Split successful or "again", reloop */
 			hmm_vma_walk->last = addr;
-			pmd_migration_entry_wait(walk->mm, pmdp);
 			return -EBUSY;
 		}
-		return hmm_pfns_fill(start, end, range, 0);
-	}
 
-	if (!pmd_present(pmd))
-		return hmm_vma_handle_absent_pmd(walk, start, end, hmm_pfns,
-						 pmd);
+		return r;
 
-	if (pmd_trans_huge(pmd)) {
-		/*
-		 * No need to take pmd_lock here, even if some other thread
-		 * is splitting the huge pmd we will get that event through
-		 * mmu_notifier callback.
-		 *
-		 * So just read pmd value and check again it's a transparent
-		 * huge or device mapping one and compute corresponding pfn
-		 * values.
-		 */
-		pmd = pmdp_get_lockless(pmdp);
-		if (!pmd_trans_huge(pmd))
-			goto again;
+	}
 
-		return hmm_vma_handle_pmd(walk, addr, end, hmm_pfns, pmd);
+	if (hmm_vma_walk->pmdlocked) {
+		spin_unlock(hmm_vma_walk->ptl);
+		hmm_vma_walk->pmdlocked = false;
 	}
 
 	/*
@@ -451,22 +1050,41 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 	if (pmd_bad(pmd)) {
 		if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0))
 			return -EFAULT;
-		return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
+		return hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR);
 	}
 
-	ptep = pte_offset_map(pmdp, addr);
+	if (minfo) {
+		ptep = pte_offset_map_lock(mm, pmdp, addr, &hmm_vma_walk->ptl);
+		if (ptep)
+			hmm_vma_walk->locked = true;
+	} else
+		ptep = pte_offset_map(pmdp, addr);
 	if (!ptep)
 		goto again;
+
 	for (; addr < end; addr += PAGE_SIZE, ptep++, hmm_pfns++) {
-		int r;
 
 		r = hmm_vma_handle_pte(walk, addr, end, pmdp, ptep, hmm_pfns);
 		if (r) {
 			/* hmm_vma_handle_pte() did pte_unmap() */
 			return r;
 		}
+
+		r = hmm_vma_handle_migrate_prepare(walk, pmdp, ptep, addr, hmm_pfns);
+		if (r == -EAGAIN) {
+			goto again;
+		}
+		if (r) {
+			hmm_pfns_fill(addr, end, hmm_vma_walk, HMM_PFN_ERROR);
+			break;
+		}
 	}
-	pte_unmap(ptep - 1);
+
+	if (hmm_vma_walk->locked)
+		pte_unmap_unlock(ptep - 1, hmm_vma_walk->ptl);
+	else
+		pte_unmap(ptep - 1);
+
 	return 0;
 }
 
@@ -600,6 +1218,11 @@ static int hmm_vma_walk_test(unsigned long start, unsigned long end,
 	struct hmm_vma_walk *hmm_vma_walk = walk->private;
 	struct hmm_range *range = hmm_vma_walk->range;
 	struct vm_area_struct *vma = walk->vma;
+	int r;
+
+	r = hmm_vma_capture_migrate_range(start, end, walk);
+	if (r)
+		return r;
 
 	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)) &&
 	    vma->vm_flags & VM_READ)
@@ -622,7 +1245,7 @@ static int hmm_vma_walk_test(unsigned long start, unsigned long end,
 			      (end - start) >> PAGE_SHIFT, 0))
 		return -EFAULT;
 
-	hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
+	hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR);
 
 	/* Skip this vma and continue processing the next vma. */
 	return 1;
@@ -652,9 +1275,17 @@ static const struct mm_walk_ops hmm_walk_ops = {
  *		the invalidation to finish.
  * -EFAULT:	A page was requested to be valid and could not be made valid
  *		ie it has no backing VMA or it is illegal to access
+ * -ERANGE:	The range crosses multiple VMAs, or the space for the
+ *		hmm_pfns array is too small.
  *
  * This is similar to get_user_pages(), except that it can read the page tables
  * without mutating them (ie causing faults).
+ *
+ * If you want to migrate after faulting, call hmm_range_fault() with
+ * HMM_PFN_REQ_MIGRATE and initialize the range.migrate field. After
+ * hmm_range_fault(), call migrate_hmm_range_setup() instead of
+ * migrate_vma_setup() and then follow the normal migrate call path.
+ *
  */
 int hmm_range_fault(struct hmm_range *range)
 {
@@ -662,16 +1293,32 @@ int hmm_range_fault(struct hmm_range *range)
 		.range = range,
 		.last = range->start,
 	};
-	struct mm_struct *mm = range->notifier->mm;
+	bool is_fault_path = !!range->notifier;
+	struct mm_struct *mm;
 	int ret;
 
+	/*
+	 * Could be serving a device fault or coming from the migrate
+	 * entry point. In the former case we have not resolved the vma
+	 * yet, and in the latter there is no notifier (but there is a vma).
+	 */
+#ifdef CONFIG_DEVICE_MIGRATION
+	mm = is_fault_path ? range->notifier->mm : range->migrate->vma->vm_mm;
+#else
+	mm = range->notifier->mm;
+#endif
 	mmap_assert_locked(mm);
 
 	do {
 		/* If range is no longer valid force retry. */
-		if (mmu_interval_check_retry(range->notifier,
-					     range->notifier_seq))
-			return -EBUSY;
+		if (is_fault_path && mmu_interval_check_retry(range->notifier,
+							      range->notifier_seq)) {
+			ret = -EBUSY;
+			break;
+		}
+
 		ret = walk_page_range(mm, hmm_vma_walk.last, range->end,
 				      &hmm_walk_ops, &hmm_vma_walk);
 		/*
@@ -681,6 +1328,19 @@ int hmm_range_fault(struct hmm_range *range)
 		 * output, and all >= are still at their input values.
 		 */
 	} while (ret == -EBUSY);
+
+#ifdef CONFIG_DEVICE_MIGRATION
+	if (hmm_select_migrate(range) && range->migrate &&
+	    hmm_vma_walk.mmu_range.owner) {
+		// The migrate_vma path has the following initialized
+		if (is_fault_path) {
+			range->migrate->vma = hmm_vma_walk.vma;
+			range->migrate->start = range->start;
+			range->migrate->end = hmm_vma_walk.end;
+		}
+		mmu_notifier_invalidate_range_end(&hmm_vma_walk.mmu_range);
+	}
+#endif
 	return ret;
 }
 EXPORT_SYMBOL(hmm_range_fault);
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 23379663b1e1..bda6320f6242 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -734,7 +734,16 @@ static void migrate_vma_unmap(struct migrate_vma *migrate)
  */
 int migrate_vma_setup(struct migrate_vma *args)
 {
+	int ret;
 	long nr_pages = (args->end - args->start) >> PAGE_SHIFT;
+	struct hmm_range range = {
+		.notifier = NULL,
+		.start = args->start,
+		.end = args->end,
+		.hmm_pfns = args->src,
+		.dev_private_owner = args->pgmap_owner,
+		.migrate = args
+	};
 
 	args->start &= PAGE_MASK;
 	args->end &= PAGE_MASK;
@@ -759,17 +768,25 @@ int migrate_vma_setup(struct migrate_vma *args)
 	args->cpages = 0;
 	args->npages = 0;
 
-	migrate_vma_collect(args);
+	if (args->flags & MIGRATE_VMA_FAULT)
+		range.default_flags |= HMM_PFN_REQ_FAULT;
+
+	ret = hmm_range_fault(&range);
+
+	migrate_hmm_range_setup(&range);
 
-	if (args->cpages)
-		migrate_vma_unmap(args);
+	/* Remove migration PTEs */
+	if (ret) {
+		migrate_vma_pages(args);
+		migrate_vma_finalize(args);
+	}
 
 	/*
 	 * At this point pages are locked and unmapped, and thus they have
 	 * stable content and can safely be copied to destination memory that
 	 * is allocated by the drivers.
	 */
-	return 0;
+	return ret;
 
 }
 EXPORT_SYMBOL(migrate_vma_setup);
@@ -1489,3 +1506,64 @@ int migrate_device_coherent_folio(struct folio *folio)
 		return 0;
 	return -EBUSY;
 }
+
+void migrate_hmm_range_setup(struct hmm_range *range)
+{
+	struct migrate_vma *migrate = range->migrate;
+
+	if (!migrate)
+		return;
+
+	migrate->npages = (migrate->end - migrate->start) >> PAGE_SHIFT;
+	migrate->cpages = 0;
+
+	for (unsigned long i = 0; i < migrate->npages; i++) {
+
+		unsigned long pfn = range->hmm_pfns[i];
+
+		pfn &= ~HMM_PFN_INOUT_FLAGS;
+
+		/*
+		 * Don't do migration if the valid and migrate flags are
+		 * not both set.
+		 */
+		if ((pfn & (HMM_PFN_VALID | HMM_PFN_MIGRATE)) !=
+		    (HMM_PFN_VALID | HMM_PFN_MIGRATE)) {
+			migrate->src[i] = 0;
+			migrate->dst[i] = 0;
+			continue;
+		}
+
+		migrate->cpages++;
+
+		/*
+		 * The zero page is encoded in a special way: valid and
+		 * migrate are set, and the pfn part is zero. Encode it
+		 * specially for migrate as well.
+		 */
+		if (pfn == (HMM_PFN_VALID | HMM_PFN_MIGRATE)) {
+			migrate->src[i] = MIGRATE_PFN_MIGRATE;
+			migrate->dst[i] = 0;
+			continue;
+		}
+		if (pfn == (HMM_PFN_VALID | HMM_PFN_MIGRATE | HMM_PFN_COMPOUND)) {
+			migrate->src[i] = MIGRATE_PFN_MIGRATE | MIGRATE_PFN_COMPOUND;
+			migrate->dst[i] = 0;
+			continue;
+		}
+
+		migrate->src[i] = migrate_pfn(page_to_pfn(hmm_pfn_to_page(pfn)))
+				| MIGRATE_PFN_MIGRATE;
+		migrate->src[i] |= (pfn & HMM_PFN_WRITE) ? MIGRATE_PFN_WRITE : 0;
+		migrate->src[i] |= (pfn & HMM_PFN_COMPOUND) ? MIGRATE_PFN_COMPOUND : 0;
+		migrate->dst[i] = 0;
+	}
+
+	if (migrate->cpages)
+		migrate_vma_unmap(migrate);
+
+}
+EXPORT_SYMBOL(migrate_hmm_range_setup);
-- 
2.50.0

From nobody Sat Feb 7 18:20:28 2026
From: mpenttil@redhat.com
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Mika Penttilä <mpenttil@redhat.com>,
	David Hildenbrand, Jason Gunthorpe, Leon Romanovsky, Alistair Popple,
	Balbir Singh, Zi Yan, Matthew Brost, Marco Pagani
Subject: [PATCH v2 2/3] mm: add new testcase for the migrate on fault case
Date: Mon, 19 Jan 2026 13:25:01 +0200
Message-ID: <20260119112502.645059-3-mpenttil@redhat.com>
X-Mailer: git-send-email 2.50.0
In-Reply-To: <20260119112502.645059-1-mpenttil@redhat.com>
References: <20260119112502.645059-1-mpenttil@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Mika Penttilä <mpenttil@redhat.com>

Add a HMM_DMIRROR_MIGRATE_ON_FAULT_TO_DEV ioctl to the HMM test driver
and a migrate_on_fault selftest that faults anonymous memory in and
migrates it to device private memory in a single pass.

Cc: David Hildenbrand
Cc: Jason Gunthorpe
Cc: Leon Romanovsky
Cc: Alistair Popple
Cc: Balbir Singh
Cc: Zi Yan
Cc: Matthew Brost
Signed-off-by: Marco Pagani
Signed-off-by: Mika Penttilä
Suggested-by: Alistair Popple
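For reference, a userspace sketch of driving the new ioctl (modelled on
the selftest below; cmd fields as in test_hmm_uapi.h, error handling
omitted):

	struct hmm_dmirror_cmd cmd = { 0 };

	cmd.addr = (uintptr_t)buffer->ptr;	/* start of the range */
	cmd.ptr = (uintptr_t)buffer->mirror;	/* migrated data is read back here */
	cmd.npages = npages;
	ret = ioctl(fd, HMM_DMIRROR_MIGRATE_ON_FAULT_TO_DEV, &cmd);
	/* on success, cmd.cpages pages were faulted in and migrated */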
---
 lib/test_hmm.c                         | 100 ++++++++++++++++++++++++-
 lib/test_hmm_uapi.h                    |  19 ++---
 tools/testing/selftests/mm/hmm-tests.c |  54 +++++++++++++
 3 files changed, 163 insertions(+), 10 deletions(-)

diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 8af169d3873a..b82517cfd616 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -36,6 +36,7 @@
 #define DMIRROR_RANGE_FAULT_TIMEOUT	1000
 #define DEVMEM_CHUNK_SIZE		(256 * 1024 * 1024U)
 #define DEVMEM_CHUNKS_RESERVE		16
+#define PFNS_ARRAY_SIZE			64
 
 /*
  * For device_private pages, dpage is just a dummy struct page
@@ -145,7 +146,7 @@ static bool dmirror_is_private_zone(struct dmirror_device *mdevice)
 		HMM_DMIRROR_MEMORY_DEVICE_PRIVATE);
 }
 
-static enum migrate_vma_direction
+static enum migrate_vma_info
 dmirror_select_device(struct dmirror *dmirror)
 {
 	return (dmirror->mdevice->zone_device_type ==
@@ -1194,6 +1195,99 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
 	return ret;
 }
 
+static int do_fault_and_migrate(struct dmirror *dmirror, struct hmm_range *range)
+{
+	struct migrate_vma *migrate = range->migrate;
+	int ret;
+
+	mmap_read_lock(dmirror->notifier.mm);
+
+	/* Fault-in pages for migration and update device page table */
+	ret = dmirror_range_fault(dmirror, range);
+
+	pr_debug("Migrating from sys mem to device mem\n");
+	migrate_hmm_range_setup(range);
+
+	dmirror_migrate_alloc_and_copy(migrate, dmirror);
+	migrate_vma_pages(migrate);
+	dmirror_migrate_finalize_and_map(migrate, dmirror);
+	migrate_vma_finalize(migrate);
+
+	mmap_read_unlock(dmirror->notifier.mm);
+	return ret;
+}
+
+static int dmirror_fault_and_migrate_to_device(struct dmirror *dmirror,
+					       struct hmm_dmirror_cmd *cmd)
+{
+	unsigned long start, size, end, next;
+	unsigned long src_pfns[PFNS_ARRAY_SIZE] = { 0 };
+	unsigned long dst_pfns[PFNS_ARRAY_SIZE] = { 0 };
+	struct migrate_vma migrate = { 0 };
+	struct hmm_range range = { 0 };
+	struct dmirror_bounce bounce;
+	int ret = 0;
+
+	/* Whole range */
+	start = cmd->addr;
+	size = cmd->npages << PAGE_SHIFT;
+	end = start + size;
+
+	if (!mmget_not_zero(dmirror->notifier.mm)) {
+		ret = -EFAULT;
+		goto out;
+	}
+
+	migrate.pgmap_owner = dmirror->mdevice;
+	migrate.src = src_pfns;
+	migrate.dst = dst_pfns;
+
+	range.migrate = &migrate;
+	range.hmm_pfns = src_pfns;
+	range.pfn_flags_mask = 0;
+	range.default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_MIGRATE;
+	range.dev_private_owner = dmirror->mdevice;
+	range.notifier = &dmirror->notifier;
+
+	for (next = start; next < end; next = range.end) {
+		range.start = next;
+		range.end = min(end, next + (PFNS_ARRAY_SIZE << PAGE_SHIFT));
+
+		pr_debug("Fault and migrate range start:%#lx end:%#lx\n",
+			 range.start, range.end);
+
+		ret = do_fault_and_migrate(dmirror, &range);
+		if (ret)
+			goto out_mmput;
+	}
+
+	/*
+	 * Return the migrated data for verification.
+	 * Only for pages in device zone
+	 */
+	ret = dmirror_bounce_init(&bounce, start, size);
+	if (ret)
+		goto out_mmput;
+
+	mutex_lock(&dmirror->mutex);
+	ret = dmirror_do_read(dmirror, start, end, &bounce);
+	mutex_unlock(&dmirror->mutex);
+	if (ret == 0) {
+		ret = copy_to_user(u64_to_user_ptr(cmd->ptr), bounce.ptr, bounce.size);
+		if (ret)
+			ret = -EFAULT;
+	}
+
+	cmd->cpages = bounce.cpages;
+	dmirror_bounce_fini(&bounce);
+
+out_mmput:
+	mmput(dmirror->notifier.mm);
+out:
+	return ret;
+}
+
 static void dmirror_mkentry(struct dmirror *dmirror, struct hmm_range *range,
 			    unsigned char *perm, unsigned long entry)
 {
@@ -1510,6 +1604,10 @@ static long dmirror_fops_unlocked_ioctl(struct file *filp,
 		ret = dmirror_migrate_to_device(dmirror, &cmd);
 		break;
 
+	case HMM_DMIRROR_MIGRATE_ON_FAULT_TO_DEV:
+		ret = dmirror_fault_and_migrate_to_device(dmirror, &cmd);
+		break;
+
 	case HMM_DMIRROR_MIGRATE_TO_SYS:
 		ret = dmirror_migrate_to_system(dmirror, &cmd);
 		break;
diff --git a/lib/test_hmm_uapi.h b/lib/test_hmm_uapi.h
index f94c6d457338..0b6e7a419e36 100644
--- a/lib/test_hmm_uapi.h
+++ b/lib/test_hmm_uapi.h
@@ -29,15 +29,16 @@ struct hmm_dmirror_cmd {
 };
 
 /* Expose the address space of the calling process through hmm device file */
-#define HMM_DMIRROR_READ		_IOWR('H', 0x00, struct hmm_dmirror_cmd)
-#define HMM_DMIRROR_WRITE		_IOWR('H', 0x01, struct hmm_dmirror_cmd)
-#define HMM_DMIRROR_MIGRATE_TO_DEV	_IOWR('H', 0x02, struct hmm_dmirror_cmd)
-#define HMM_DMIRROR_MIGRATE_TO_SYS	_IOWR('H', 0x03, struct hmm_dmirror_cmd)
-#define HMM_DMIRROR_SNAPSHOT		_IOWR('H', 0x04, struct hmm_dmirror_cmd)
-#define HMM_DMIRROR_EXCLUSIVE		_IOWR('H', 0x05, struct hmm_dmirror_cmd)
-#define HMM_DMIRROR_CHECK_EXCLUSIVE	_IOWR('H', 0x06, struct hmm_dmirror_cmd)
-#define HMM_DMIRROR_RELEASE		_IOWR('H', 0x07, struct hmm_dmirror_cmd)
-#define HMM_DMIRROR_FLAGS		_IOWR('H', 0x08, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_READ			_IOWR('H', 0x00, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_WRITE			_IOWR('H', 0x01, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_MIGRATE_TO_DEV		_IOWR('H', 0x02, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_MIGRATE_ON_FAULT_TO_DEV	_IOWR('H', 0x03, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_MIGRATE_TO_SYS		_IOWR('H', 0x04, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_SNAPSHOT			_IOWR('H', 0x05, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_EXCLUSIVE			_IOWR('H', 0x06, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_CHECK_EXCLUSIVE		_IOWR('H', 0x07, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_RELEASE			_IOWR('H', 0x08, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_FLAGS			_IOWR('H', 0x09, struct hmm_dmirror_cmd)
 
 #define HMM_DMIRROR_FLAG_FAIL_ALLOC	(1ULL << 0)
 
diff --git a/tools/testing/selftests/mm/hmm-tests.c b/tools/testing/selftests/mm/hmm-tests.c
index e8328c89d855..c75616875c9e 100644
--- a/tools/testing/selftests/mm/hmm-tests.c
+++ b/tools/testing/selftests/mm/hmm-tests.c
@@ -277,6 +277,13 @@ static int hmm_migrate_sys_to_dev(int fd,
 	return hmm_dmirror_cmd(fd, HMM_DMIRROR_MIGRATE_TO_DEV, buffer, npages);
 }
 
+static int hmm_migrate_on_fault_sys_to_dev(int fd,
+					   struct hmm_buffer *buffer,
+					   unsigned long npages)
+{
+	return hmm_dmirror_cmd(fd, HMM_DMIRROR_MIGRATE_ON_FAULT_TO_DEV, buffer, npages);
+}
+
 static int hmm_migrate_dev_to_sys(int fd,
 				  struct hmm_buffer *buffer,
 				  unsigned long npages)
@@ -1034,6 +1041,53 @@ TEST_F(hmm, migrate)
 	hmm_buffer_free(buffer);
 }
 
+
+/*
+ * Fault and migrate anonymous memory to device private memory.
+ */
+TEST_F(hmm, migrate_on_fault)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	unsigned long i;
+	int *ptr;
+	int ret;
+
+	npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift;
+	ASSERT_NE(npages, 0);
+	size = npages << self->page_shift;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = size;
+	buffer->mirror = malloc(size);
+	ASSERT_NE(buffer->mirror, NULL);
+
+	buffer->ptr = mmap(NULL, size,
+			   PROT_READ | PROT_WRITE,
+			   MAP_PRIVATE | MAP_ANONYMOUS,
+			   buffer->fd, 0);
+	ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+	/* Initialize buffer in system memory. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ptr[i] = i;
+
+	/* Fault and migrate memory to device. */
+	ret = hmm_migrate_on_fault_sys_to_dev(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/* Check what the device read. */
+	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], i);
+
+	hmm_buffer_free(buffer);
+}
+
 /*
  * Migrate anonymous memory to device private memory and fault some of it back
  * to system memory, then try migrating the resulting mix of system and device
-- 
2.50.0

From nobody Sat Feb 7 18:20:28 2026
-- 
2.50.0

From nobody Sat Feb 7 18:20:28 2026
From: mpenttil@redhat.com
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Mika Penttilä, David Hildenbrand,
	Jason Gunthorpe, Leon Romanovsky, Alistair Popple, Balbir Singh,
	Zi Yan, Matthew Brost
Subject: [PATCH v2 3/3] mm/migrate_device.c: remove migrate_vma_collect_*() functions
Date: Mon, 19 Jan 2026 13:25:02 +0200
Message-ID: <20260119112502.645059-4-mpenttil@redhat.com>
In-Reply-To: <20260119112502.645059-1-mpenttil@redhat.com>
References: <20260119112502.645059-1-mpenttil@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Mika Penttilä

With the unified fault handling and migrate path, the
migrate_vma_collect_*() functions are unused, so remove them.

Cc: David Hildenbrand
Cc: Jason Gunthorpe
Cc: Leon Romanovsky
Cc: Alistair Popple
Cc: Balbir Singh
Cc: Zi Yan
Cc: Matthew Brost
Signed-off-by: Mika Penttilä
Suggested-by: Alistair Popple
---
 mm/migrate_device.c | 508 --------------------------------------------
 1 file changed, 508 deletions(-)

diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index bda6320f6242..c896a4d8bca2 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -18,514 +18,6 @@
 #include <...>
 #include "internal.h"
 
-static int migrate_vma_collect_skip(unsigned long start,
-				    unsigned long end,
-				    struct mm_walk *walk)
-{
-	struct migrate_vma *migrate = walk->private;
-	unsigned long addr;
-
-	for (addr = start; addr < end; addr += PAGE_SIZE) {
-		migrate->dst[migrate->npages] = 0;
-		migrate->src[migrate->npages++] = 0;
-	}
-
-	return 0;
-}
-
-static int migrate_vma_collect_hole(unsigned long start,
-				    unsigned long end,
-				    __always_unused int depth,
-				    struct mm_walk *walk)
-{
-	struct migrate_vma *migrate = walk->private;
-	unsigned long addr;
-
-	/* Only allow populating anonymous memory. */
-	if (!vma_is_anonymous(walk->vma))
-		return migrate_vma_collect_skip(start, end, walk);
-
-	if (thp_migration_supported() &&
-	    (migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) &&
-	    (IS_ALIGNED(start, HPAGE_PMD_SIZE) &&
-	     IS_ALIGNED(end, HPAGE_PMD_SIZE))) {
-		migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE |
-						MIGRATE_PFN_COMPOUND;
-		migrate->dst[migrate->npages] = 0;
-		migrate->npages++;
-		migrate->cpages++;
-
-		/*
-		 * Collect the remaining entries as holes, in case we
-		 * need to split later
-		 */
-		return migrate_vma_collect_skip(start + PAGE_SIZE, end, walk);
-	}
-
-	for (addr = start; addr < end; addr += PAGE_SIZE) {
-		migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
-		migrate->dst[migrate->npages] = 0;
-		migrate->npages++;
-		migrate->cpages++;
-	}
-
-	return 0;
-}
-
-/**
- * migrate_vma_split_folio() - Helper function to split a THP folio
- * @folio: the folio to split
- * @fault_page: struct page associated with the fault if any
- *
- * Returns 0 on success
- */
-static int migrate_vma_split_folio(struct folio *folio,
-				   struct page *fault_page)
-{
-	int ret;
-	struct folio *fault_folio = fault_page ? page_folio(fault_page) : NULL;
-	struct folio *new_fault_folio = NULL;
-
-	if (folio != fault_folio) {
-		folio_get(folio);
-		folio_lock(folio);
-	}
-
-	ret = split_folio(folio);
-	if (ret) {
-		if (folio != fault_folio) {
-			folio_unlock(folio);
-			folio_put(folio);
-		}
-		return ret;
-	}
-
-	new_fault_folio = fault_page ? page_folio(fault_page) : NULL;
-
-	/*
-	 * Ensure the lock is held on the correct
-	 * folio after the split
-	 */
-	if (!new_fault_folio) {
-		folio_unlock(folio);
-		folio_put(folio);
-	} else if (folio != new_fault_folio) {
-		if (new_fault_folio != fault_folio) {
-			folio_get(new_fault_folio);
-			folio_lock(new_fault_folio);
-		}
-		folio_unlock(folio);
-		folio_put(folio);
-	}
-
-	return 0;
-}
-
-/** migrate_vma_collect_huge_pmd - collect THP pages without splitting the
- * folio for device private pages.
- * @pmdp: pointer to pmd entry
- * @start: start address of the range for migration
- * @end: end address of the range for migration
- * @walk: mm_walk callback structure
- * @fault_folio: folio associated with the fault if any
- *
- * Collect the huge pmd entry at @pmdp for migration and set the
- * MIGRATE_PFN_COMPOUND flag in the migrate src entry to indicate that
- * migration will occur at HPAGE_PMD granularity
- */
-static int migrate_vma_collect_huge_pmd(pmd_t *pmdp, unsigned long start,
-					unsigned long end, struct mm_walk *walk,
-					struct folio *fault_folio)
-{
-	struct mm_struct *mm = walk->mm;
-	struct folio *folio;
-	struct migrate_vma *migrate = walk->private;
-	spinlock_t *ptl;
-	int ret;
-	unsigned long write = 0;
-
-	ptl = pmd_lock(mm, pmdp);
-	if (pmd_none(*pmdp)) {
-		spin_unlock(ptl);
-		return migrate_vma_collect_hole(start, end, -1, walk);
-	}
-
-	if (pmd_trans_huge(*pmdp)) {
-		if (!(migrate->flags & MIGRATE_VMA_SELECT_SYSTEM)) {
-			spin_unlock(ptl);
-			return migrate_vma_collect_skip(start, end, walk);
-		}
-
-		folio = pmd_folio(*pmdp);
-		if (is_huge_zero_folio(folio)) {
-			spin_unlock(ptl);
-			return migrate_vma_collect_hole(start, end, -1, walk);
-		}
-		if (pmd_write(*pmdp))
-			write = MIGRATE_PFN_WRITE;
-	} else if (!pmd_present(*pmdp)) {
-		const softleaf_t entry = softleaf_from_pmd(*pmdp);
-
-		folio = softleaf_to_folio(entry);
-
-		if (!softleaf_is_device_private(entry) ||
-		    !(migrate->flags & MIGRATE_VMA_SELECT_DEVICE_PRIVATE) ||
-		    (folio->pgmap->owner != migrate->pgmap_owner)) {
-			spin_unlock(ptl);
-			return migrate_vma_collect_skip(start, end, walk);
-		}
-
-		if (softleaf_is_migration(entry)) {
-			migration_entry_wait_on_locked(entry, ptl);
-			spin_unlock(ptl);
-			return -EAGAIN;
-		}
-
-		if (softleaf_is_device_private_write(entry))
-			write = MIGRATE_PFN_WRITE;
-	} else {
-		spin_unlock(ptl);
-		return -EAGAIN;
-	}
-
-	folio_get(folio);
-	if (folio != fault_folio && unlikely(!folio_trylock(folio))) {
-		spin_unlock(ptl);
-		folio_put(folio);
-		return migrate_vma_collect_skip(start, end, walk);
-	}
-
-	if (thp_migration_supported() &&
-	    (migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) &&
-	    (IS_ALIGNED(start, HPAGE_PMD_SIZE) &&
-	     IS_ALIGNED(end, HPAGE_PMD_SIZE))) {
-
-		struct page_vma_mapped_walk pvmw = {
-			.ptl = ptl,
-			.address = start,
-			.pmd = pmdp,
-			.vma = walk->vma,
-		};
-
-		unsigned long pfn = page_to_pfn(folio_page(folio, 0));
-
-		migrate->src[migrate->npages] = migrate_pfn(pfn) | write
-						| MIGRATE_PFN_MIGRATE
-						| MIGRATE_PFN_COMPOUND;
-		migrate->dst[migrate->npages++] = 0;
-		migrate->cpages++;
-		ret = set_pmd_migration_entry(&pvmw, folio_page(folio, 0));
-		if (ret) {
-			migrate->npages--;
-			migrate->cpages--;
-			migrate->src[migrate->npages] = 0;
-			migrate->dst[migrate->npages] = 0;
-			goto fallback;
-		}
-		migrate_vma_collect_skip(start + PAGE_SIZE, end, walk);
-		spin_unlock(ptl);
-		return 0;
-	}
-
-fallback:
-	spin_unlock(ptl);
-	if (!folio_test_large(folio))
-		goto done;
-	ret = split_folio(folio);
-	if (fault_folio != folio)
-		folio_unlock(folio);
-	folio_put(folio);
-	if (ret)
-		return migrate_vma_collect_skip(start, end, walk);
-	if (pmd_none(pmdp_get_lockless(pmdp)))
-		return migrate_vma_collect_hole(start, end, -1, walk);
-
-done:
-	return -ENOENT;
-}
-
-static int migrate_vma_collect_pmd(pmd_t *pmdp,
-				   unsigned long start,
-				   unsigned long end,
-				   struct mm_walk *walk)
-{
-	struct migrate_vma *migrate = walk->private;
-	struct vm_area_struct *vma = walk->vma;
-	struct mm_struct *mm = vma->vm_mm;
-	unsigned long addr = start, unmapped = 0;
-	spinlock_t *ptl;
-	struct folio *fault_folio = migrate->fault_page ?
-		page_folio(migrate->fault_page) : NULL;
-	pte_t *ptep;
-
-again:
-	if (pmd_trans_huge(*pmdp) || !pmd_present(*pmdp)) {
-		int ret = migrate_vma_collect_huge_pmd(pmdp, start, end, walk, fault_folio);
-
-		if (ret == -EAGAIN)
-			goto again;
-		if (ret == 0)
-			return 0;
-	}
-
-	ptep = pte_offset_map_lock(mm, pmdp, start, &ptl);
-	if (!ptep)
-		goto again;
-	arch_enter_lazy_mmu_mode();
-	ptep += (addr - start) / PAGE_SIZE;
-
-	for (; addr < end; addr += PAGE_SIZE, ptep++) {
-		struct dev_pagemap *pgmap;
-		unsigned long mpfn = 0, pfn;
-		struct folio *folio;
-		struct page *page;
-		softleaf_t entry;
-		pte_t pte;
-
-		pte = ptep_get(ptep);
-
-		if (pte_none(pte)) {
-			if (vma_is_anonymous(vma)) {
-				mpfn = MIGRATE_PFN_MIGRATE;
-				migrate->cpages++;
-			}
-			goto next;
-		}
-
-		if (!pte_present(pte)) {
-			/*
-			 * Only care about unaddressable device page special
-			 * page table entry. Other special swap entries are not
-			 * migratable, and we ignore regular swapped page.
-			 */
-			entry = softleaf_from_pte(pte);
-			if (!softleaf_is_device_private(entry))
-				goto next;
-
-			page = softleaf_to_page(entry);
-			pgmap = page_pgmap(page);
-			if (!(migrate->flags &
-			      MIGRATE_VMA_SELECT_DEVICE_PRIVATE) ||
-			    pgmap->owner != migrate->pgmap_owner)
-				goto next;
-
-			folio = page_folio(page);
-			if (folio_test_large(folio)) {
-				int ret;
-
-				arch_leave_lazy_mmu_mode();
-				pte_unmap_unlock(ptep, ptl);
-				ret = migrate_vma_split_folio(folio,
-							      migrate->fault_page);
-
-				if (ret) {
-					if (unmapped)
-						flush_tlb_range(walk->vma, start, end);
-
-					return migrate_vma_collect_skip(addr, end, walk);
-				}
-
-				goto again;
-			}
-
-			mpfn = migrate_pfn(page_to_pfn(page)) |
-					MIGRATE_PFN_MIGRATE;
-			if (softleaf_is_device_private_write(entry))
-				mpfn |= MIGRATE_PFN_WRITE;
-		} else {
-			pfn = pte_pfn(pte);
-			if (is_zero_pfn(pfn) &&
-			    (migrate->flags & MIGRATE_VMA_SELECT_SYSTEM)) {
-				mpfn = MIGRATE_PFN_MIGRATE;
-				migrate->cpages++;
-				goto next;
-			}
-			page = vm_normal_page(migrate->vma, addr, pte);
-			if (page && !is_zone_device_page(page) &&
-			    !(migrate->flags & MIGRATE_VMA_SELECT_SYSTEM)) {
-				goto next;
-			} else if (page && is_device_coherent_page(page)) {
-				pgmap = page_pgmap(page);
-
-				if (!(migrate->flags &
-				      MIGRATE_VMA_SELECT_DEVICE_COHERENT) ||
-				    pgmap->owner != migrate->pgmap_owner)
-					goto next;
-			}
-			folio = page ? page_folio(page) : NULL;
-			if (folio && folio_test_large(folio)) {
-				int ret;
-
-				arch_leave_lazy_mmu_mode();
-				pte_unmap_unlock(ptep, ptl);
-				ret = migrate_vma_split_folio(folio,
-							      migrate->fault_page);
-
-				if (ret) {
-					if (unmapped)
-						flush_tlb_range(walk->vma, start, end);
-
-					return migrate_vma_collect_skip(addr, end, walk);
-				}
-
-				goto again;
-			}
-			mpfn = migrate_pfn(pfn) | MIGRATE_PFN_MIGRATE;
-			mpfn |= pte_write(pte) ? MIGRATE_PFN_WRITE : 0;
-		}
-
-		if (!page || !page->mapping) {
-			mpfn = 0;
-			goto next;
-		}
-
-		/*
-		 * By getting a reference on the folio we pin it and that blocks
-		 * any kind of migration. Side effect is that it "freezes" the
-		 * pte.
-		 *
-		 * We drop this reference after isolating the folio from the lru
-		 * for non device folio (device folio are not on the lru and thus
-		 * can't be dropped from it).
-		 */
-		folio = page_folio(page);
-		folio_get(folio);
-
-		/*
-		 * We rely on folio_trylock() to avoid deadlock between
-		 * concurrent migrations where each is waiting on the others
-		 * folio lock. If we can't immediately lock the folio we fail this
-		 * migration as it is only best effort anyway.
-		 *
-		 * If we can lock the folio it's safe to set up a migration entry
-		 * now. In the common case where the folio is mapped once in a
-		 * single process setting up the migration entry now is an
-		 * optimisation to avoid walking the rmap later with
-		 * try_to_migrate().
-		 */
-		if (fault_folio == folio || folio_trylock(folio)) {
-			bool anon_exclusive;
-			pte_t swp_pte;
-
-			flush_cache_page(vma, addr, pte_pfn(pte));
-			anon_exclusive = folio_test_anon(folio) &&
-					 PageAnonExclusive(page);
-			if (anon_exclusive) {
-				pte = ptep_clear_flush(vma, addr, ptep);
-
-				if (folio_try_share_anon_rmap_pte(folio, page)) {
-					set_pte_at(mm, addr, ptep, pte);
-					if (fault_folio != folio)
-						folio_unlock(folio);
-					folio_put(folio);
-					mpfn = 0;
-					goto next;
-				}
-			} else {
-				pte = ptep_get_and_clear(mm, addr, ptep);
-			}
-
-			migrate->cpages++;
-
-			/* Set the dirty flag on the folio now the pte is gone. */
-			if (pte_dirty(pte))
-				folio_mark_dirty(folio);
-
-			/* Setup special migration page table entry */
-			if (mpfn & MIGRATE_PFN_WRITE)
-				entry = make_writable_migration_entry(
-							page_to_pfn(page));
-			else if (anon_exclusive)
-				entry = make_readable_exclusive_migration_entry(
-							page_to_pfn(page));
-			else
-				entry = make_readable_migration_entry(
-							page_to_pfn(page));
-			if (pte_present(pte)) {
-				if (pte_young(pte))
-					entry = make_migration_entry_young(entry);
-				if (pte_dirty(pte))
-					entry = make_migration_entry_dirty(entry);
-			}
-			swp_pte = swp_entry_to_pte(entry);
-			if (pte_present(pte)) {
-				if (pte_soft_dirty(pte))
-					swp_pte = pte_swp_mksoft_dirty(swp_pte);
-				if (pte_uffd_wp(pte))
-					swp_pte = pte_swp_mkuffd_wp(swp_pte);
-			} else {
-				if (pte_swp_soft_dirty(pte))
-					swp_pte = pte_swp_mksoft_dirty(swp_pte);
-				if (pte_swp_uffd_wp(pte))
-					swp_pte = pte_swp_mkuffd_wp(swp_pte);
-			}
-			set_pte_at(mm, addr, ptep, swp_pte);
-
-			/*
-			 * This is like regular unmap: we remove the rmap and
-			 * drop the folio refcount. The folio won't be freed, as
-			 * we took a reference just above.
-			 */
-			folio_remove_rmap_pte(folio, page, vma);
-			folio_put(folio);
-
-			if (pte_present(pte))
-				unmapped++;
-		} else {
-			folio_put(folio);
-			mpfn = 0;
-		}
-
-next:
-		migrate->dst[migrate->npages] = 0;
-		migrate->src[migrate->npages++] = mpfn;
-	}
-
-	/* Only flush the TLB if we actually modified any entries */
-	if (unmapped)
-		flush_tlb_range(walk->vma, start, end);
-
-	arch_leave_lazy_mmu_mode();
-	pte_unmap_unlock(ptep - 1, ptl);
-
-	return 0;
-}
-
-static const struct mm_walk_ops migrate_vma_walk_ops = {
-	.pmd_entry		= migrate_vma_collect_pmd,
-	.pte_hole		= migrate_vma_collect_hole,
-	.walk_lock		= PGWALK_RDLOCK,
-};
-
-/*
- * migrate_vma_collect() - collect pages over a range of virtual addresses
- * @migrate: migrate struct containing all migration information
- *
- * This will walk the CPU page table. For each virtual address backed by a
- * valid page, it updates the src array and takes a reference on the page, in
- * order to pin the page until we lock it and unmap it.
- */
-static void migrate_vma_collect(struct migrate_vma *migrate)
-{
-	struct mmu_notifier_range range;
-
-	/*
-	 * Note that the pgmap_owner is passed to the mmu notifier callback so
-	 * that the registered device driver can skip invalidating device
-	 * private page mappings that won't be migrated.
-	 */
-	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_MIGRATE, 0,
-		migrate->vma->vm_mm, migrate->start, migrate->end,
-		migrate->pgmap_owner);
-	mmu_notifier_invalidate_range_start(&range);
-
-	walk_page_range(migrate->vma->vm_mm, migrate->start, migrate->end,
-			&migrate_vma_walk_ops, migrate);
-
-	mmu_notifier_invalidate_range_end(&range);
-	migrate->end = migrate->start + (migrate->npages << PAGE_SHIFT);
-}
-
 /*
  * migrate_vma_check_page() - check if page is pinned or not
  * @page: struct page to check
-- 
2.50.0
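For reference, the caller-visible API is untouched by this removal:
device drivers still request collection through migrate_vma_setup() and
complete the migration with migrate_vma_pages() and
migrate_vma_finalize() (see Documentation/mm/hmm.rst); only the
implementation behind migrate_vma_setup() moves to the unified pagewalk
with this series. A minimal sketch of that unchanged caller flow,
assuming a small on-stack range; the my_* names and the elided
device-specific allocation/copy steps are placeholders:

	#include <linux/migrate.h>
	#include <linux/mm.h>

	static int my_migrate_range_to_device(struct vm_area_struct *vma,
					      unsigned long start,
					      unsigned long end,
					      void *my_pgmap_owner)
	{
		unsigned long src[16], dst[16];
		struct migrate_vma args = {
			.vma		= vma,
			.start		= start,
			.end		= end,
			.src		= src,
			.dst		= dst,
			.flags		= MIGRATE_VMA_SELECT_SYSTEM,
			.pgmap_owner	= my_pgmap_owner,
		};
		int ret;

		if (((end - start) >> PAGE_SHIFT) > ARRAY_SIZE(src))
			return -EINVAL;

		/* Page table walk and collection happen inside setup. */
		ret = migrate_vma_setup(&args);
		if (ret || !args.cpages)
			return ret;

		/*
		 * Allocate device private pages, fill args.dst[] with
		 * migrate_pfn() entries and copy the data over
		 * (device specific, elided here).
		 */

		migrate_vma_pages(&args);	/* install the new pages */
		migrate_vma_finalize(&args);	/* drop refs, clean up ptes */
		return 0;
	}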