From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Peter Xu,
 Muhammad Usama Anjum, stable@vger.kernel.org
Subject: [PATCH v1 1/2] mm/userfaultfd: fix uffd-wp handling for THP migration entries
Date: Wed, 5 Apr 2023 16:25:34 +0200
Message-Id: <20230405142535.493854-2-david@redhat.com>
In-Reply-To: <20230405142535.493854-1-david@redhat.com>
References: <20230405142535.493854-1-david@redhat.com>

Looks like what we fixed for hugetlb in commit 44f86392bdd1 ("mm/hugetlb:
fix uffd-wp handling for migration entries in hugetlb_change_protection()")
similarly applies to THP.

Setting/clearing uffd-wp on THP migration entries is not implemented
properly. Further, while removing migration PMDs considers the uffd-wp
bit, inserting migration PMDs does not consider the uffd-wp bit.

We have to set/clear the uffd-wp bit independently of the migration entry
type in change_huge_pmd() and properly copy the uffd-wp bit in
set_pmd_migration_entry().

Using a simple reproducer that triggers migration of a THP, verified that
set_pmd_migration_entry() no longer loses the uffd-wp bit.
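The reproducer itself is not included in this patch. For illustration
only, a minimal userspace sketch of such a reproducer could look roughly
like the following. It assumes CONFIG_ARCH_ENABLE_THP_MIGRATION,
userfaultfd-wp support, THP enabled, and a second NUMA node so
move_pages() can migrate the THP; names such as THP_SIZE, area and
writer() are made up, and error handling is omitted:

/* thp-uffd-wp-migrate.c -- hypothetical sketch, not the original
 * reproducer.  Build with: gcc -O2 -pthread thp-uffd-wp-migrate.c */
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/mempolicy.h>
#include <linux/userfaultfd.h>
#include <poll.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#define THP_SIZE (2UL * 1024 * 1024)	/* one PMD-sized THP */

static char *area;

static void *writer(void *arg)
{
	/* On a fixed kernel this write blocks until the uffd-wp fault is
	 * resolved; on a broken kernel the uffd-wp bit was lost during
	 * migration and the write completes without raising a fault. */
	area[0] = 1;
	return NULL;
}

int main(void)
{
	/* Back the range by a THP and populate it. */
	area = mmap(NULL, THP_SIZE, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	madvise(area, THP_SIZE, MADV_HUGEPAGE);
	memset(area, 0xff, THP_SIZE);

	/* Register the range for userfaultfd write-protect mode ... */
	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC);
	struct uffdio_api api = {
		.api = UFFD_API,
		.features = UFFD_FEATURE_PAGEFAULT_FLAG_WP,
	};
	ioctl(uffd, UFFDIO_API, &api);
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)area, .len = THP_SIZE },
		.mode = UFFDIO_REGISTER_MODE_WP,
	};
	ioctl(uffd, UFFDIO_REGISTER, &reg);

	/* ... and write-protect the THP via uffd-wp. */
	struct uffdio_writeprotect wp = {
		.range = { .start = (unsigned long)area, .len = THP_SIZE },
		.mode = UFFDIO_WRITEPROTECT_MODE_WP,
	};
	ioctl(uffd, UFFDIO_WRITEPROTECT, &wp);

	/* Migrate the THP to node 1, exercising set_pmd_migration_entry()
	 * and remove_migration_pmd(). */
	void *pages[1] = { area };
	int nodes[1] = { 1 }, status[1] = { -1 };
	syscall(__NR_move_pages, 0, 1UL, pages, nodes, status, MPOL_MF_MOVE);

	/* Check whether a write still raises a uffd-wp fault. */
	pthread_t t;
	pthread_create(&t, NULL, writer, NULL);

	struct pollfd pfd = { .fd = uffd, .events = POLLIN };
	if (poll(&pfd, 1, 2000) > 0) {
		struct uffd_msg msg;
		read(uffd, &msg, sizeof(msg));
		if (msg.event == UFFD_EVENT_PAGEFAULT &&
		    (msg.arg.pagefault.flags & UFFD_PAGEFAULT_FLAG_WP))
			printf("OK: uffd-wp preserved across THP migration\n");
	} else {
		printf("BUG: uffd-wp bit lost during THP migration\n");
	}
	return 0;
}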
Fixes: f45ec5ff16a7 ("userfaultfd: wp: support swap and page migration")
Cc: stable@vger.kernel.org
Signed-off-by: David Hildenbrand
Reviewed-by: Peter Xu
---
 mm/huge_memory.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 032fb0ef9cd1..bdda4f426d58 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1838,10 +1838,10 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	if (is_swap_pmd(*pmd)) {
 		swp_entry_t entry = pmd_to_swp_entry(*pmd);
 		struct page *page = pfn_swap_entry_to_page(entry);
+		pmd_t newpmd;
 
 		VM_BUG_ON(!is_pmd_migration_entry(*pmd));
 		if (is_writable_migration_entry(entry)) {
-			pmd_t newpmd;
 			/*
 			 * A protection check is difficult so
 			 * just be safe and disable write
@@ -1855,8 +1855,16 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 				newpmd = pmd_swp_mksoft_dirty(newpmd);
 			if (pmd_swp_uffd_wp(*pmd))
 				newpmd = pmd_swp_mkuffd_wp(newpmd);
-			set_pmd_at(mm, addr, pmd, newpmd);
+		} else {
+			newpmd = *pmd;
 		}
+
+		if (uffd_wp)
+			newpmd = pmd_swp_mkuffd_wp(newpmd);
+		else if (uffd_wp_resolve)
+			newpmd = pmd_swp_clear_uffd_wp(newpmd);
+		if (!pmd_same(*pmd, newpmd))
+			set_pmd_at(mm, addr, pmd, newpmd);
 		goto unlock;
 	}
 #endif
@@ -3251,6 +3259,8 @@ int set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
 	pmdswp = swp_entry_to_pmd(entry);
 	if (pmd_soft_dirty(pmdval))
 		pmdswp = pmd_swp_mksoft_dirty(pmdswp);
+	if (pmd_swp_uffd_wp(*pvmw->pmd))
+		pmdswp = pmd_swp_mkuffd_wp(pmdswp);
 	set_pmd_at(mm, address, pvmw->pmd, pmdswp);
 	page_remove_rmap(page, vma, true);
 	put_page(page);
-- 
2.39.2