From: "David Hildenbrand (Red Hat)"
To: linux-kernel@vger.kernel.org
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org,
	"David Hildenbrand (Red Hat)", Will Deacon, "Aneesh Kumar K.V",
	Andrew Morton, Nick Piggin, Peter Zijlstra, Arnd Bergmann,
	Muchun Song, Oscar Salvador, "Liam R. Howlett", Lorenzo Stoakes,
	Vlastimil Babka, Jann Horn, Pedro Falcato, Rik van Riel, Harry Yoo,
	Laurence Oberman, Prakash Sangappa, Nadav Amit, Liu Shixin
Subject: [PATCH RESEND v3 3/4] mm/rmap: fix two comments related to huge_pmd_unshare()
Date: Tue, 23 Dec 2025 22:40:36 +0100
Message-ID: <20251223214037.580860-4-david@kernel.org>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20251223214037.580860-1-david@kernel.org>
References: <20251223214037.580860-1-david@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

PMD page table unsharing no longer touches the refcount of the PMD page
table. Also, it is not about dropping the refcount of a "PMD page" but
of the "PMD page table".

Let's just simplify by saying that the PMD page table was unmapped,
consequently also unmapping the folio that was mapped into this page
table.

This code should be deduplicated in the future.
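For context, a simplified sketch (an approximation, not a quote of the
kernel source; mm, address, range and pvmw stand for the caller's
locals in try_to_unmap_one()) of the pattern the new comment documents:

	if (huge_pmd_unshare(mm, vma, address, pvmw.pte)) {
		flush_tlb_range(vma, range.start, range.end);
		/*
		 * The PMD table was unmapped,
		 * consequently unmapping the folio.
		 */
		goto walk_done;
	}
	/* Not shared: keep walking and unmap the folio's PTEs normally. */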
Fixes: 59d9094df3d7 ("mm: hugetlb: independent PMD page table shared count")
Reviewed-by: Rik van Riel
Tested-by: Laurence Oberman
Reviewed-by: Lorenzo Stoakes
Acked-by: Oscar Salvador
Cc: Liu Shixin
Signed-off-by: David Hildenbrand (Red Hat)
---
 mm/rmap.c | 20 ++++----------------
 1 file changed, 4 insertions(+), 16 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index f955f02d570ed..748f48727a162 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2016,14 +2016,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				flush_tlb_range(vma, range.start, range.end);
 
 				/*
-				 * The ref count of the PMD page was
-				 * dropped which is part of the way map
-				 * counting is done for shared PMDs.
-				 * Return 'true' here. When there is
-				 * no other sharing, huge_pmd_unshare
-				 * returns false and we will unmap the
-				 * actual page and drop map count
-				 * to zero.
+				 * The PMD table was unmapped,
+				 * consequently unmapping the folio.
 				 */
 				goto walk_done;
 			}
@@ -2416,14 +2410,8 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 					range.start, range.end);
 
 					/*
-					 * The ref count of the PMD page was
-					 * dropped which is part of the way map
-					 * counting is done for shared PMDs.
-					 * Return 'true' here. When there is
-					 * no other sharing, huge_pmd_unshare
-					 * returns false and we will unmap the
-					 * actual page and drop map count
-					 * to zero.
+					 * The PMD table was unmapped,
+					 * consequently unmapping the folio.
 					 */
 					page_vma_mapped_walk_done(&pvmw);
 					break;
-- 
2.52.0