From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, nvdimm@lists.linux.dev,
    David Hildenbrand, Andrew Morton, Juergen Gross, Stefano Stabellini,
    Oleksandr Tyshchenko, Dan Williams, Alistair Popple, Matthew Wilcox,
    Jan Kara, Alexander Viro, Christian Brauner, Zi Yan, Baolin Wang,
    Lorenzo Stoakes, "Liam R. Howlett", Nico Pache, Ryan Roberts, Dev Jain,
    Barry Song, Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan,
    Michal Hocko, Jann Horn, Pedro Falcato
Subject: [PATCH RFC 11/14] mm: remove "horrible special case to handle copy-on-write behaviour"
Date: Tue, 17 Jun 2025 17:43:42 +0200
Message-ID: <20250617154345.2494405-12-david@redhat.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250617154345.2494405-1-david@redhat.com>
References: <20250617154345.2494405-1-david@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Let's make the kernel a bit less horrible by removing the linearity
requirement in CoW PFNMAP mappings with !CONFIG_ARCH_HAS_PTE_SPECIAL.
In particular, stop messing with vma->vm_pgoff in weird ways.

Simply look up in applicable (i.e., CoW PFNMAP) mappings whether we have
an anon folio. Nobody should ever try mapping anon folios using PFNs;
that just screams for other possible issues. To be sure, let's
sanity-check when inserting PFNs. Are the checks really required?
Probably not, but they are a good safety net, at least for now.

The runtime overhead should be limited: there is nothing to do for !CoW
mappings (the common case), and archs that care about performance (i.e.,
GUP-fast) should be supporting CONFIG_ARCH_HAS_PTE_SPECIAL either way.

Likely the sanity checks added in mm/huge_memory.c are not required for
now, because that code is probably only wired up with
CONFIG_ARCH_HAS_PTE_SPECIAL, but this way is certainly cleaner and more
consistent -- and doesn't really cost us anything in the cases we really
care about.
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mm.h |  16 ++++++
 mm/huge_memory.c   |  16 +++++-
 mm/memory.c        | 118 +++++++++++++++++++++++++--------------------
 3 files changed, 96 insertions(+), 54 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 98a606908307b..3f52871becd3f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2339,6 +2339,22 @@ static inline bool can_do_mlock(void) { return false; }
 extern int user_shm_lock(size_t, struct ucounts *);
 extern void user_shm_unlock(size_t, struct ucounts *);
 
+#ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
+static inline struct page *vm_pfnmap_normal_page_pfn(struct vm_area_struct *vma,
+		unsigned long pfn)
+{
+	/*
+	 * We don't identify normal pages using PFNs. So if we reach
+	 * this point, it's just for sanity checks that don't apply with
+	 * pte_special() etc.
+	 */
+	return NULL;
+}
+#else
+struct page *vm_pfnmap_normal_page_pfn(struct vm_area_struct *vma,
+		unsigned long pfn);
+#endif
+
 struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
 			      pte_t pte);
 struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8f03cd4e40397..67220c30e7818 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1479,7 +1479,13 @@ vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, unsigned long pfn,
 	BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)));
 	BUG_ON((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) ==
 						(VM_PFNMAP|VM_MIXEDMAP));
-	BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags));
+
+	/*
+	 * Refuse this pfn if we could mistake it as a refcounted folio
+	 * in a CoW mapping later in vm_normal_page_pmd().
+	 */
+	if ((vma->vm_flags & VM_PFNMAP) && vm_pfnmap_normal_page_pfn(vma, pfn))
+		return VM_FAULT_SIGBUS;
 
 	pfnmap_setup_cachemode_pfn(pfn, &pgprot);
 
@@ -1587,7 +1593,13 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, unsigned long pfn,
 	BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)));
 	BUG_ON((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) ==
 						(VM_PFNMAP|VM_MIXEDMAP));
-	BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags));
+
+	/*
+	 * Refuse this pfn if we could mistake it as a refcounted folio
+	 * in a CoW mapping later in vm_normal_page_pud().
+	 */
+	if ((vma->vm_flags & VM_PFNMAP) && vm_pfnmap_normal_page_pfn(vma, pfn))
+		return VM_FAULT_SIGBUS;
 
 	pfnmap_setup_cachemode_pfn(pfn, &pgprot);
 
diff --git a/mm/memory.c b/mm/memory.c
index 3d3fa01cd217e..ace9c59e97181 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -536,9 +536,35 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
 	add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
 }
 
+#ifndef CONFIG_ARCH_HAS_PTE_SPECIAL
+struct page *vm_pfnmap_normal_page_pfn(struct vm_area_struct *vma,
+		unsigned long pfn)
+{
+	struct folio *folio;
+	struct page *page;
+
+	VM_WARN_ON_ONCE(!(vma->vm_flags & VM_PFNMAP));
+
+	/*
+	 * If we have a CoW mapping and spot an anon folio, then it can
+	 * only be due to CoW: the page is "normal".
+	 */
+	if (likely(!is_cow_mapping(vma->vm_flags)))
+		return NULL;
+	if (likely(!pfn_valid(pfn)))
+		return NULL;
+
+	page = pfn_to_page(pfn);
+	folio = page_folio(page);
+	if (folio_test_slab(folio) || !folio_test_anon(folio))
+		return NULL;
+	return page;
+}
+#endif /* !CONFIG_ARCH_HAS_PTE_SPECIAL */
+
 /* Called only if the page table entry is not marked special. */
 static inline struct page *vm_normal_page_pfn(struct vm_area_struct *vma,
-		unsigned long addr, unsigned long pfn)
+		unsigned long pfn)
 {
 	/*
 	 * With CONFIG_ARCH_HAS_PTE_SPECIAL, any special page table mappings
@@ -553,13 +579,8 @@ static inline struct page *vm_normal_page_pfn(struct vm_area_struct *vma,
 			if (!pfn_valid(pfn))
 				return NULL;
 		} else {
-			unsigned long off = (addr - vma->vm_start) >> PAGE_SHIFT;
-
 			/* Only CoW'ed anon folios are "normal". */
-			if (pfn == vma->vm_pgoff + off)
-				return NULL;
-			if (!is_cow_mapping(vma->vm_flags))
-				return NULL;
+			return vm_pfnmap_normal_page_pfn(vma, pfn);
 		}
 	}
 
@@ -589,30 +610,19 @@ static inline struct page *vm_normal_page_pfn(struct vm_area_struct *vma,
  * (such as GUP) can still identify these mappings and work with the
  * underlying "struct page".
  *
- * There are 2 broad cases. Firstly, an architecture may define a pte_special()
- * pte bit, in which case this function is trivial. Secondly, an architecture
- * may not have a spare pte bit, which requires a more complicated scheme,
- * described below.
+ * An architecture may support pte_special() to distinguish "special"
+ * from "normal" mappings more efficiently, and even without the VMA at hand.
+ * For example, in order to support GUP-fast, whereby we don't have the VMA
+ * available when walking the page tables, support for pte_special() is
+ * crucial.
+ *
+ * If an architecture does not support pte_special(), this function is less
+ * trivial and more expensive in some cases.
  *
  * A raw VM_PFNMAP mapping (ie. one that is not COWed) is always considered a
  * special mapping (even if there are underlying and valid "struct pages").
  * COWed pages of a VM_PFNMAP are always normal.
  *
- * The way we recognize COWed pages within VM_PFNMAP mappings is through the
- * rules set up by "remap_pfn_range()": the vma will have the VM_PFNMAP bit
- * set, and the vm_pgoff will point to the first PFN mapped: thus every special
- * mapping will always honor the rule
- *
- *	pfn_of_page == vma->vm_pgoff + ((addr - vma->vm_start) >> PAGE_SHIFT)
- *
- * And for normal mappings this is false.
- *
- * This restricts such mappings to be a linear translation from virtual address
- * to pfn. To get around this restriction, we allow arbitrary mappings so long
- * as the vma is not a COW mapping; in that case, we know that all ptes are
- * special (because none can have been COWed).
- *
- *
  * In order to support COW of arbitrary special mappings, we have VM_MIXEDMAP.
  *
  * VM_MIXEDMAP mappings can likewise contain memory with or without "struct
@@ -621,10 +631,7 @@ static inline struct page *vm_normal_page_pfn(struct vm_area_struct *vma,
  * folios) are refcounted and considered normal pages by the VM.
  *
  * The disadvantage is that pages are refcounted (which can be slower and
- * simply not an option for some PFNMAP users). The advantage is that we
- * don't have to follow the strict linearity rule of PFNMAP mappings in
- * order to support COWable mappings.
- *
+ * simply not an option for some PFNMAP users).
  */
 struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 			    pte_t pte)
@@ -642,7 +649,7 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 		print_bad_pte(vma, addr, pte, NULL);
 		return NULL;
 	}
-	return vm_normal_page_pfn(vma, addr, pfn);
+	return vm_normal_page_pfn(vma, pfn);
 }
 
 struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
@@ -666,7 +673,7 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 						 !is_huge_zero_pfn(pfn));
 		return NULL;
 	}
-	return vm_normal_page_pfn(vma, addr, pfn);
+	return vm_normal_page_pfn(vma, pfn);
 }
 
 struct folio *vm_normal_folio_pmd(struct vm_area_struct *vma,
@@ -2422,6 +2429,13 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
 	pte_t *pte, entry;
 	spinlock_t *ptl;
 
+	/*
+	 * Refuse this pfn if we could mistake it as a refcounted folio
+	 * in a CoW mapping later in vm_normal_page().
+	 */
+	if ((vma->vm_flags & VM_PFNMAP) && vm_pfnmap_normal_page_pfn(vma, pfn))
+		return VM_FAULT_SIGBUS;
+
 	pte = get_locked_pte(mm, addr, &ptl);
 	if (!pte)
 		return VM_FAULT_OOM;
@@ -2511,7 +2525,6 @@ vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
 	BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)));
 	BUG_ON((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) ==
 						(VM_PFNMAP|VM_MIXEDMAP));
-	BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags));
 	BUG_ON((vma->vm_flags & VM_MIXEDMAP) && pfn_valid(pfn));
 
 	if (addr < vma->vm_start || addr >= vma->vm_end)
@@ -2656,10 +2669,11 @@ vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
  * mappings are removed. any references to nonexistent pages results
  * in null mappings (currently treated as "copy-on-access")
  */
-static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
+static int remap_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 			unsigned long addr, unsigned long end,
 			unsigned long pfn, pgprot_t prot)
 {
+	struct mm_struct *mm = vma->vm_mm;
 	pte_t *pte, *mapped_pte;
 	spinlock_t *ptl;
 	int err = 0;
@@ -2674,6 +2688,14 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
 			err = -EACCES;
 			break;
 		}
+		/*
+		 * Refuse this pfn if we could mistake it as a refcounted folio
+		 * in a CoW mapping later in vm_normal_page().
+		 */
+		if (vm_pfnmap_normal_page_pfn(vma, pfn)) {
+			err = -EINVAL;
+			break;
+		}
 		set_pte_at(mm, addr, pte, pte_mkspecial(pfn_pte(pfn, prot)));
 		pfn++;
 	} while (pte++, addr += PAGE_SIZE, addr != end);
@@ -2682,10 +2704,11 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
 	return err;
 }
 
-static inline int remap_pmd_range(struct mm_struct *mm, pud_t *pud,
+static inline int remap_pmd_range(struct vm_area_struct *vma, pud_t *pud,
 			unsigned long addr, unsigned long end,
 			unsigned long pfn, pgprot_t prot)
 {
+	struct mm_struct *mm = vma->vm_mm;
 	pmd_t *pmd;
 	unsigned long next;
 	int err;
@@ -2697,7 +2720,7 @@ static inline int remap_pmd_range(struct mm_struct *mm, pud_t *pud,
 	VM_BUG_ON(pmd_trans_huge(*pmd));
 	do {
 		next = pmd_addr_end(addr, end);
-		err = remap_pte_range(mm, pmd, addr, next,
+		err = remap_pte_range(vma, pmd, addr, next,
 				pfn + (addr >> PAGE_SHIFT), prot);
 		if (err)
 			return err;
@@ -2705,10 +2728,11 @@ static inline int remap_pmd_range(struct mm_struct *mm, pud_t *pud,
 	return 0;
 }
 
-static inline int remap_pud_range(struct mm_struct *mm, p4d_t *p4d,
+static inline int remap_pud_range(struct vm_area_struct *vma, p4d_t *p4d,
 			unsigned long addr, unsigned long end,
 			unsigned long pfn, pgprot_t prot)
 {
+	struct mm_struct *mm = vma->vm_mm;
 	pud_t *pud;
 	unsigned long next;
 	int err;
@@ -2719,7 +2743,7 @@ static inline int remap_pud_range(struct mm_struct *mm, p4d_t *p4d,
 		return -ENOMEM;
 	do {
 		next = pud_addr_end(addr, end);
-		err = remap_pmd_range(mm, pud, addr, next,
+		err = remap_pmd_range(vma, pud, addr, next,
 				pfn + (addr >> PAGE_SHIFT), prot);
 		if (err)
 			return err;
@@ -2727,10 +2751,11 @@ static inline int remap_pud_range(struct mm_struct *mm, p4d_t *p4d,
 	return 0;
 }
 
-static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
+static inline int remap_p4d_range(struct vm_area_struct *vma, pgd_t *pgd,
 			unsigned long addr, unsigned long end,
 			unsigned long pfn, pgprot_t prot)
 {
+	struct mm_struct *mm = vma->vm_mm;
 	p4d_t *p4d;
 	unsigned long next;
 	int err;
@@ -2741,7 +2766,7 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
 		return -ENOMEM;
 	do {
 		next = p4d_addr_end(addr, end);
-		err = remap_pud_range(mm, p4d, addr, next,
+		err = remap_pud_range(vma, p4d, addr, next,
 				pfn + (addr >> PAGE_SHIFT), prot);
 		if (err)
 			return err;
@@ -2773,18 +2798,7 @@ static int remap_pfn_range_internal(struct vm_area_struct *vma, unsigned long ad
 	 *   Disable vma merging and expanding with mremap().
 	 *   VM_DONTDUMP
 	 *   Omit vma from core dump, even when VM_IO turned off.
-	 *
-	 * There's a horrible special case to handle copy-on-write
-	 * behaviour that some programs depend on. We mark the "original"
-	 * un-COW'ed pages by matching them up with "vma->vm_pgoff".
-	 * See vm_normal_page() for details.
 	 */
-	if (is_cow_mapping(vma->vm_flags)) {
-		if (addr != vma->vm_start || end != vma->vm_end)
-			return -EINVAL;
-		vma->vm_pgoff = pfn;
-	}
-
 	vm_flags_set(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
 
 	BUG_ON(addr >= end);
@@ -2793,7 +2807,7 @@ static int remap_pfn_range_internal(struct vm_area_struct *vma, unsigned long ad
 	flush_cache_range(vma, addr, end);
 	do {
 		next = pgd_addr_end(addr, end);
-		err = remap_p4d_range(mm, pgd, addr, next,
+		err = remap_p4d_range(vma, pgd, addr, next,
 				pfn + (addr >> PAGE_SHIFT), prot);
 		if (err)
 			return err;
-- 
2.49.0
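
For readers who want to see the two schemes side by side, here is a small
standalone sketch in plain C (simplified stand-in types, not kernel code;
fake_vma, pfn_is_anon_folio and the constants below are made up purely for
illustration). It contrasts the vm_pgoff linearity rule that this patch
removes with the anon-folio lookup that replaces it:

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT 12

/* Simplified stand-in for struct vm_area_struct -- illustrative only. */
struct fake_vma {
	unsigned long vm_start;	/* start address of the mapping */
	unsigned long vm_pgoff;	/* first PFN mapped (old remap_pfn_range() trick) */
	bool cow;		/* stand-in for is_cow_mapping() */
};

/*
 * Old scheme (removed by this patch): in a CoW PFNMAP mapping, a PFN that
 * matches the linear address->PFN translation is the "original" special
 * PFN; any other PFN must be a CoW'ed anon page and is therefore "normal".
 */
static bool old_pte_is_normal(const struct fake_vma *vma, unsigned long addr,
			      unsigned long pfn)
{
	unsigned long off = (addr - vma->vm_start) >> PAGE_SHIFT;

	if (pfn == vma->vm_pgoff + off)
		return false;
	return vma->cow;
}

/*
 * Stand-in for the pfn_valid()/page_folio()/folio_test_anon() checks done
 * by vm_pfnmap_normal_page_pfn(); here we simply pretend that no anon
 * folio backs the PFN.
 */
static bool pfn_is_anon_folio(unsigned long pfn)
{
	(void)pfn;
	return false;
}

/*
 * New scheme (sketched): the address no longer matters; a PFN in a CoW
 * PFNMAP mapping is "normal" only if an anon folio actually backs it.
 */
static bool new_pte_is_normal(const struct fake_vma *vma, unsigned long pfn)
{
	if (!vma->cow)
		return false;
	return pfn_is_anon_folio(pfn);
}

int main(void)
{
	struct fake_vma vma = { .vm_start = 0x10000000UL, .vm_pgoff = 0x100, .cow = true };
	unsigned long addr = vma.vm_start + (3UL << PAGE_SHIFT);

	/* PFN 0x103 is exactly vm_pgoff + 3, i.e. the linear ("special") case. */
	printf("old scheme says normal: %d\n", old_pte_is_normal(&vma, addr, 0x103));
	printf("new scheme says normal: %d\n", new_pte_is_normal(&vma, 0x103));
	return 0;
}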