From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Andrea Arcangeli, Andi Kleen, Andrew Morton, David Hildenbrand,
    Hugh Dickins, Huang Ying, Nadav Amit, Kirill A. Shutemov,
    Vlastimil Babka
Subject: [PATCH 1/2] mm/swap: Add swp_offset_pfn() to fetch PFN from swap entry
Date: Tue, 2 Aug 2022 21:21:58 -0400
Message-Id: <20220803012159.36551-2-peterx@redhat.com>
In-Reply-To: <20220803012159.36551-1-peterx@redhat.com>

We've got a bunch of special swap entries that store a PFN inside the
swap offset field.  To fetch the PFN, the user normally just calls
swp_offset() and assumes the result is the PFN.

Add a helper, swp_offset_pfn(), to fetch the PFN instead.  It fetches
only the maximum possible width of a PFN on the host, while
is_pfn_swap_entry() now uses a BUILD_BUG_ON() against MAX_PHYSMEM_BITS
to make sure the swap offset can always store a full PFN.  One reason
to do so is that we have never sanitized whether the swap offset can
really fit a PFN.  Meanwhile, this patch also prepares us for the
future possibility of storing more information inside the swp offset
field, so the assumption that "swp_offset(entry)" is the PFN will soon
no longer hold.

Replace the swp_offset() callers with swp_offset_pfn() where
appropriate.  Note that many of the existing users are not candidates
for replacement, e.g.:

  (1) when the swap entry is not a pfn swap entry at all, or

  (2) when we want to keep the whole swp offset but only change the
      swp type.

The latter can happen when fork() hits a write-migration swap entry
pte: we may want to change only the migration type from write to read
but keep the rest, so it is "changing the swap type only", not
"fetching the PFN".  Those callers are left alone so that, once there
is more information within the swp offset, it will be carried over
naturally in those cases.

While at it, drop hwpoison_entry_to_pfn(), because that is exactly
what the new swp_offset_pfn() does.

Signed-off-by: Peter Xu <peterx@redhat.com>
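To make the layout concrete, here is a minimal userspace sketch of the
masking idea, compilable on its own.  All constants are illustrative
64-bit values and SWP_FAKE_FLAG is a hypothetical flag bit invented
for the example; none of this is the kernel's code:

#include <assert.h>
#include <stdio.h>

/* Illustrative 64-bit values, not from any particular kernel config */
#define MAX_PHYSMEM_BITS	46
#define PAGE_SHIFT		12
#define SWP_PFN_BITS		(MAX_PHYSMEM_BITS - PAGE_SHIFT)
#define SWP_PFN_MASK		((1UL << SWP_PFN_BITS) - 1)

/* A hypothetical flag stored above the PFN inside the offset field */
#define SWP_FAKE_FLAG		(1UL << SWP_PFN_BITS)

/* Like the new helper: the PFN is the low SWP_PFN_BITS of the offset */
static unsigned long swp_offset_pfn(unsigned long offset)
{
	return offset & SWP_PFN_MASK;
}

int main(void)
{
	unsigned long pfn = 0x123456UL;
	unsigned long offset = pfn | SWP_FAKE_FLAG;	/* PFN plus a flag */

	assert(offset != pfn);			/* raw offset is NOT the PFN */
	assert(swp_offset_pfn(offset) == pfn);	/* the masked fetch is */
	printf("pfn=%#lx offset=%#lx\n", pfn, offset);
	return 0;
}

Once any bit above SWP_PFN_BITS is in use, every caller that treats
the raw swp_offset() as a PFN silently breaks, which is what the
conversion below prevents.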
---
 arch/arm64/mm/hugetlbpage.c |  2 +-
 include/linux/swapops.h     | 35 +++++++++++++++++++++++++++++------
 mm/hmm.c                    |  2 +-
 mm/memory-failure.c         |  2 +-
 mm/page_vma_mapped.c        |  6 +++---
 5 files changed, 35 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 7430060cb0d6..f897d40821dd 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -242,7 +242,7 @@ static inline struct folio *hugetlb_swap_entry_to_folio(swp_entry_t entry)
 {
 	VM_BUG_ON(!is_migration_entry(entry) && !is_hwpoison_entry(entry));
 
-	return page_folio(pfn_to_page(swp_offset(entry)));
+	return page_folio(pfn_to_page(swp_offset_pfn(entry)));
 }
 
 void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index a3d435bf9f97..1d17e4bb3d2f 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -23,6 +23,20 @@
 #define SWP_TYPE_SHIFT	(BITS_PER_XA_VALUE - MAX_SWAPFILES_SHIFT)
 #define SWP_OFFSET_MASK	((1UL << SWP_TYPE_SHIFT) - 1)
 
+/*
+ * Definitions only for PFN swap entries (see is_pfn_swap_entry()).  To
+ * store PFN, we only need SWP_PFN_BITS bits.  Each of the pfn swap entries
+ * can use the extra bits to store other information besides PFN.
+ */
+#ifdef MAX_PHYSMEM_BITS
+#define SWP_PFN_BITS		(MAX_PHYSMEM_BITS - PAGE_SHIFT)
+#else
+#define SWP_PFN_BITS		(BITS_PER_LONG - PAGE_SHIFT)
+#endif
+#define SWP_PFN_MASK		((1UL << SWP_PFN_BITS) - 1)
+
+static inline bool is_pfn_swap_entry(swp_entry_t entry);
+
 /* Clear all flags but only keep swp_entry_t related information */
 static inline pte_t pte_swp_clear_flags(pte_t pte)
 {
@@ -64,6 +78,17 @@ static inline pgoff_t swp_offset(swp_entry_t entry)
 	return entry.val & SWP_OFFSET_MASK;
 }
 
+/*
+ * This should only be called upon a pfn swap entry to get the PFN stored
+ * in the swap entry.  Please refer to is_pfn_swap_entry() for the
+ * definition of a pfn swap entry.
+ */
+static inline unsigned long swp_offset_pfn(swp_entry_t entry)
+{
+	VM_BUG_ON(!is_pfn_swap_entry(entry));
+	return swp_offset(entry) & SWP_PFN_MASK;
+}
+
 /* check whether a pte points to a swap entry */
 static inline int is_swap_pte(pte_t pte)
 {
@@ -369,7 +394,7 @@ static inline int pte_none_mostly(pte_t pte)
 
 static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
 {
-	struct page *p = pfn_to_page(swp_offset(entry));
+	struct page *p = pfn_to_page(swp_offset_pfn(entry));
 
 	/*
 	 * Any use of migration entries may only occur while the
@@ -387,6 +412,9 @@ static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
  */
 static inline bool is_pfn_swap_entry(swp_entry_t entry)
 {
+	/* Make sure the swp offset can always store the needed fields */
+	BUILD_BUG_ON(SWP_TYPE_SHIFT < SWP_PFN_BITS);
+
 	return is_migration_entry(entry) || is_device_private_entry(entry) ||
 	       is_device_exclusive_entry(entry);
 }
@@ -475,11 +503,6 @@ static inline int is_hwpoison_entry(swp_entry_t entry)
 	return swp_type(entry) == SWP_HWPOISON;
 }
 
-static inline unsigned long hwpoison_entry_to_pfn(swp_entry_t entry)
-{
-	return swp_offset(entry);
-}
-
 static inline void num_poisoned_pages_inc(void)
 {
 	atomic_long_inc(&num_poisoned_pages);
diff --git a/mm/hmm.c b/mm/hmm.c
index f2aa63b94d9b..3850fb625dda 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -253,7 +253,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 			cpu_flags = HMM_PFN_VALID;
 			if (is_writable_device_private_entry(entry))
 				cpu_flags |= HMM_PFN_WRITE;
-			*hmm_pfn = swp_offset(entry) | cpu_flags;
+			*hmm_pfn = swp_offset_pfn(entry) | cpu_flags;
 			return 0;
 		}
 
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index cc6fc9be8d22..e451219124dd 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -632,7 +632,7 @@ static int check_hwpoisoned_entry(pte_t pte, unsigned long addr, short shift,
 		swp_entry_t swp = pte_to_swp_entry(pte);
 
 		if (is_hwpoison_entry(swp))
-			pfn = hwpoison_entry_to_pfn(swp);
+			pfn = swp_offset_pfn(swp);
 	}
 
 	if (!pfn || pfn != poisoned_pfn)
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 8e9e574d535a..93e13fc17d3c 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -86,7 +86,7 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
 		    !is_device_exclusive_entry(entry))
 			return false;
 
-		pfn = swp_offset(entry);
+		pfn = swp_offset_pfn(entry);
 	} else if (is_swap_pte(*pvmw->pte)) {
 		swp_entry_t entry;
 
@@ -96,7 +96,7 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
 		    !is_device_exclusive_entry(entry))
 			return false;
 
-		pfn = swp_offset(entry);
+		pfn = swp_offset_pfn(entry);
 	} else {
 		if (!pte_present(*pvmw->pte))
 			return false;
@@ -221,7 +221,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 				return not_found(pvmw);
 			entry = pmd_to_swp_entry(pmde);
 			if (!is_migration_entry(entry) ||
-			    !check_pmd(swp_offset(entry), pvmw))
+			    !check_pmd(swp_offset_pfn(entry), pvmw))
 				return not_found(pvmw);
 			return true;
 		}
-- 
2.32.0
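To see concretely what the new BUILD_BUG_ON() guards, the arithmetic
can be worked through for one assumed configuration.  The values below
(a 64-bit host with a 4-level-paging-style MAX_PHYSMEM_BITS of 46 and
4K pages) are illustrative assumptions, not a claim about any
particular arch:

#include <assert.h>

/* Assumed 64-bit configuration; other arches will differ */
#define BITS_PER_LONG		64
#define BITS_PER_XA_VALUE	(BITS_PER_LONG - 1)			/* 63 */
#define MAX_SWAPFILES_SHIFT	5
#define SWP_TYPE_SHIFT		(BITS_PER_XA_VALUE - MAX_SWAPFILES_SHIFT) /* 58 */
#define MAX_PHYSMEM_BITS	46					/* illustrative */
#define PAGE_SHIFT		12
#define SWP_PFN_BITS		(MAX_PHYSMEM_BITS - PAGE_SHIFT)		/* 34 */

/*
 * The same invariant the patch checks in is_pfn_swap_entry(): the
 * swap offset field (SWP_TYPE_SHIFT bits wide) must be able to hold
 * a full PFN (SWP_PFN_BITS bits).  Here 58 >= 34, so it compiles.
 */
static_assert(SWP_TYPE_SHIFT >= SWP_PFN_BITS,
	      "swp offset cannot hold a PFN");

int main(void) { return 0; }

With 24 spare bits above the PFN in this assumed layout, there is
plenty of room for extra per-entry flags, which is exactly what the
next patch uses.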
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Andrea Arcangeli, Andi Kleen, Andrew Morton, David Hildenbrand,
    Hugh Dickins, Huang Ying, Nadav Amit, Kirill A. Shutemov,
    Vlastimil Babka
Subject: [PATCH 2/2] mm: Remember young bit for page migrations
Date: Tue, 2 Aug 2022 21:21:59 -0400
Message-Id: <20220803012159.36551-3-peterx@redhat.com>
In-Reply-To: <20220803012159.36551-1-peterx@redhat.com>

When a page is migrated, we always ignore the young bit in the old
pgtable and mark the page as old in the new page table using either
pte_mkold() or pmd_mkold().  That is fine functionally, but it is not
friendly to page reclaim, because the page being moved may have been
actively accessed during the procedure.  Not to mention that having
the hardware set the young bit again brings measurable overhead on
some systems; e.g., x86_64 needs a few hundred nanoseconds to set the
bit.

We can easily remember the young bit and recover that information
after the page is migrated.  To achieve this, define a new bit in the
migration swap offset field that records whether the old pte had the
young bit set.  Then, when removing/recovering the migration entry,
we can restore the young bit even though the underlying page has
changed.

One thing to mention is that max_swapfile_size() is used here to
detect how many swp offset bits we have, and the feature is enabled
only when the swp offset is known to be big enough to store both the
PFN value and the young bit.  Otherwise the young bit is dropped as
before.

Signed-off-by: Peter Xu <peterx@redhat.com>
Reported-by: kernel test robot
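As a rough illustration of the encoding, here is a minimal userspace
sketch of the young-bit round trip.  The stubbed max_swapfile_size()
and the SWP_PFN_BITS value are assumptions made up for the example,
not real kernel behavior:

#include <assert.h>
#include <stdbool.h>

#define SWP_PFN_BITS		34	/* assumed, see patch 1 */
#define SWP_PFN_MASK		((1UL << SWP_PFN_BITS) - 1)
#define SWP_MIG_YOUNG_BIT	(1UL << SWP_PFN_BITS)

/* Stand-in: pretend the arch pte format has one spare offset bit */
static unsigned long max_swapfile_size(void)
{
	return 1UL << (SWP_PFN_BITS + 1);
}

static bool migration_entry_supports_young(void)
{
	/* max offset (plus 1) must exceed the young bit's position */
	return max_swapfile_size() > SWP_MIG_YOUNG_BIT;
}

static unsigned long make_migration_entry_young(unsigned long offset)
{
	if (migration_entry_supports_young())
		return offset | SWP_MIG_YOUNG_BIT;
	return offset;			/* no room: silently drop the bit */
}

static bool is_migration_entry_young(unsigned long offset)
{
	if (migration_entry_supports_young())
		return offset & SWP_MIG_YOUNG_BIT;
	return false;			/* old behavior: age after migration */
}

int main(void)
{
	unsigned long pfn = 0xabcdeUL;
	unsigned long offset = make_migration_entry_young(pfn);

	assert(is_migration_entry_young(offset));
	assert((offset & SWP_PFN_MASK) == pfn);	/* PFN is preserved */
	return 0;
}

The key design point is that the check is a runtime one: whether the
spare bit exists depends on the arch pte layout, which is why the
kernel probes it via max_swapfile_size() rather than at compile time.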
---
 include/linux/swapops.h | 49 +++++++++++++++++++++++++++++++++++++++++
 mm/huge_memory.c        | 10 +++++++--
 mm/migrate.c            |  4 +++-
 mm/migrate_device.c     |  2 ++
 mm/rmap.c               |  3 ++-
 5 files changed, 64 insertions(+), 4 deletions(-)

diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 1d17e4bb3d2f..9ddede3790a4 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -8,6 +8,8 @@
 
 #ifdef CONFIG_MMU
 
+#include <linux/swapfile.h>
+
 /*
  * swapcache pages are stored in the swapper_space radix tree.  We want to
  * get good packing density in that tree, so the index should be dense in
@@ -35,6 +37,16 @@
 #endif
 #define SWP_PFN_MASK		((1UL << SWP_PFN_BITS) - 1)
 
+/**
+ * Migration swap entry specific bitfield definitions.
+ *
+ * @SWP_MIG_YOUNG_BIT: Whether the page used to have young bit set
+ *
+ * Note: these bits will be used only if there are free bits in the
+ * arch-specific swp offset field.
+ */
+#define SWP_MIG_YOUNG_BIT	(1UL << SWP_PFN_BITS)
+
 static inline bool is_pfn_swap_entry(swp_entry_t entry);
 
 /* Clear all flags but only keep swp_entry_t related information */
@@ -265,6 +277,33 @@ static inline swp_entry_t make_writable_migration_entry(pgoff_t offset)
 	return swp_entry(SWP_MIGRATION_WRITE, offset);
 }
 
+static inline bool migration_entry_supports_young(void)
+{
+	/*
+	 * max_swapfile_size() returns the max supported swp-offset plus 1.
+	 * We can support the migration young bit only if the swp offset
+	 * is wider than what is needed to store the PFN value, meaning
+	 * there are extra bit(s) where we can store the young bit.
+	 */
+	return max_swapfile_size() > SWP_MIG_YOUNG_BIT;
+}
+
+static inline swp_entry_t make_migration_entry_young(swp_entry_t entry)
+{
+	if (migration_entry_supports_young())
+		return swp_entry(swp_type(entry),
+				 swp_offset(entry) | SWP_MIG_YOUNG_BIT);
+	return entry;
+}
+
+static inline bool is_migration_entry_young(swp_entry_t entry)
+{
+	if (migration_entry_supports_young())
+		return swp_offset(entry) & SWP_MIG_YOUNG_BIT;
+	/* Keep the old behavior of aging page after migration */
+	return false;
+}
+
 extern void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
 					spinlock_t *ptl);
 extern void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
@@ -311,6 +350,16 @@ static inline int is_readable_migration_entry(swp_entry_t entry)
 	return 0;
 }
 
+static inline swp_entry_t make_migration_entry_young(swp_entry_t entry)
+{
+	return entry;
+}
+
+static inline bool is_migration_entry_young(swp_entry_t entry)
+{
+	return false;
+}
+
 #endif
 
 typedef unsigned long pte_marker;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 29e3628687a6..131fe5754d8f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2088,7 +2088,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		write = is_writable_migration_entry(entry);
 		if (PageAnon(page))
 			anon_exclusive = is_readable_exclusive_migration_entry(entry);
-		young = false;
+		young = is_migration_entry_young(entry);
 		soft_dirty = pmd_swp_soft_dirty(old_pmd);
 		uffd_wp = pmd_swp_uffd_wp(old_pmd);
 	} else {
@@ -2146,6 +2146,8 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 			else
 				swp_entry = make_readable_migration_entry(
 							page_to_pfn(page + i));
+			if (young)
+				swp_entry = make_migration_entry_young(swp_entry);
 			entry = swp_entry_to_pte(swp_entry);
 			if (soft_dirty)
 				entry = pte_swp_mksoft_dirty(entry);
@@ -3148,6 +3150,8 @@ int set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
 		entry = make_readable_exclusive_migration_entry(page_to_pfn(page));
 	else
 		entry = make_readable_migration_entry(page_to_pfn(page));
+	if (pmd_young(pmdval))
+		entry = make_migration_entry_young(entry);
 	pmdswp = swp_entry_to_pmd(entry);
 	if (pmd_soft_dirty(pmdval))
 		pmdswp = pmd_swp_mksoft_dirty(pmdswp);
@@ -3173,13 +3177,15 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 
 	entry = pmd_to_swp_entry(*pvmw->pmd);
 	get_page(new);
-	pmde = pmd_mkold(mk_huge_pmd(new, READ_ONCE(vma->vm_page_prot)));
+	pmde = mk_huge_pmd(new, READ_ONCE(vma->vm_page_prot));
 	if (pmd_swp_soft_dirty(*pvmw->pmd))
 		pmde = pmd_mksoft_dirty(pmde);
 	if (is_writable_migration_entry(entry))
 		pmde = maybe_pmd_mkwrite(pmde, vma);
 	if (pmd_swp_uffd_wp(*pvmw->pmd))
 		pmde = pmd_wrprotect(pmd_mkuffd_wp(pmde));
+	if (!is_migration_entry_young(entry))
+		pmde = pmd_mkold(pmde);
 
 	if (PageAnon(new)) {
 		rmap_t rmap_flags = RMAP_COMPOUND;
diff --git a/mm/migrate.c b/mm/migrate.c
index 1649270bc1a7..62cb3a9451de 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -199,7 +199,7 @@ static bool remove_migration_pte(struct folio *folio,
 #endif
 
 		folio_get(folio);
-		pte = pte_mkold(mk_pte(new, READ_ONCE(vma->vm_page_prot)));
+		pte = mk_pte(new, READ_ONCE(vma->vm_page_prot));
 		if (pte_swp_soft_dirty(*pvmw.pte))
 			pte = pte_mksoft_dirty(pte);
 
@@ -207,6 +207,8 @@ static bool remove_migration_pte(struct folio *folio,
 		 * Recheck VMA as permissions can change since migration started
 		 */
 		entry = pte_to_swp_entry(*pvmw.pte);
+		if (!is_migration_entry_young(entry))
+			pte = pte_mkold(pte);
 		if (is_writable_migration_entry(entry))
 			pte = maybe_mkwrite(pte, vma);
 		else if (pte_swp_uffd_wp(*pvmw.pte))
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 7feeb447e3b9..fd8daf45c1a6 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -221,6 +221,8 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			else
 				entry = make_readable_migration_entry(
 							page_to_pfn(page));
+			if (pte_young(pte))
+				entry = make_migration_entry_young(entry);
 			swp_pte = swp_entry_to_pte(entry);
 			if (pte_present(pte)) {
 				if (pte_soft_dirty(pte))
diff --git a/mm/rmap.c b/mm/rmap.c
index af775855e58f..605fb37ae95e 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2065,7 +2065,8 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			else
 				entry = make_readable_migration_entry(
 							page_to_pfn(subpage));
-
+			if (pte_young(pteval))
+				entry = make_migration_entry_young(entry);
 			swp_pte = swp_entry_to_pte(entry);
 			if (pte_soft_dirty(pteval))
 				swp_pte = pte_swp_mksoft_dirty(swp_pte);
-- 
2.32.0
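The restore side in remove_migration_pte() and remove_migration_pmd()
flips the old unconditional pte_mkold()/pmd_mkold() into a conditional
one.  A tiny stand-alone sketch of that control flow, with stand-in
pte helpers (these stubs are invented for the example and are not
kernel APIs):

#include <assert.h>
#include <stdbool.h>

typedef unsigned long pte_t;		/* stand-in, not the kernel type */
#define PTE_YOUNG	(1UL << 5)	/* illustrative accessed-bit position */

static pte_t mk_pte(void)		{ return PTE_YOUNG; } /* new ptes start young */
static pte_t pte_mkold(pte_t pte)	{ return pte & ~PTE_YOUNG; }

/*
 * Before: pte = pte_mkold(mk_pte(...)) unconditionally.
 * After:  age the pte only when the migration entry did not record
 *         the young bit.
 */
static pte_t restore_pte(bool entry_young)
{
	pte_t pte = mk_pte();

	if (!entry_young)
		pte = pte_mkold(pte);
	return pte;
}

int main(void)
{
	assert(restore_pte(true) & PTE_YOUNG);
	assert(!(restore_pte(false) & PTE_YOUNG));
	return 0;
}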