From nobody Sat Nov 23 05:39:08 2024
From: Qi Zheng
To: david@redhat.com, jannh@google.com, hughd@google.com,
    willy@infradead.org, muchun.song@linux.dev, vbabka@kernel.org,
    akpm@linux-foundation.org, peterx@redhat.com
Cc: mgorman@suse.de, catalin.marinas@arm.com, will@kernel.org,
    dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
    x86@kernel.org, lorenzo.stoakes@oracle.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, zokeefe@google.com, rientjes@google.com,
    Qi Zheng
Subject: [PATCH v3 1/9] mm: khugepaged: recheck pmd state in retract_page_tables()
Date: Thu, 14 Nov 2024 14:59:52 +0800

In retract_page_tables(), the lock of new_folio is still held, so we
will be blocked in the page fault path, which prevents the pte entries
from being set again. Thus, even though the old empty PTE page may be
concurrently freed and a new PTE page is filled into the pmd entry, the
new page is still empty and can be removed.

So just refactor retract_page_tables() a little bit and recheck the pmd
state after holding the pmd lock.
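The "cheap lockless check, then an authoritative recheck once the lock is held" pattern described above can be sketched outside the kernel. This is a hypothetical userspace analogue (invented names `check_state()`/`retract()`, a pthread mutex standing in for `pmd_lock()`, an `int` standing in for the pmd entry), not the kernel implementation:

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical stand-ins for the pmd entry and its scan result. */
enum scan_result { SCAN_SUCCEED, SCAN_PMD_NONE };

static pthread_mutex_t pmd_lock = PTHREAD_MUTEX_INITIALIZER;
static int pmd_entry = 1;       /* non-zero: a PTE table is installed */

static enum scan_result check_state(void)
{
        return pmd_entry ? SCAN_SUCCEED : SCAN_PMD_NONE;
}

/*
 * A lockless check only reports the state at some point in the past,
 * so the decision to act must be re-validated once the lock is held,
 * mirroring the check_pmd_state() recheck done under pmd_lock() above.
 */
static int retract(void)
{
        int success = 0;

        if (check_state() != SCAN_SUCCEED)      /* cheap lockless pre-check */
                return 0;

        pthread_mutex_lock(&pmd_lock);
        if (check_state() == SCAN_SUCCEED) {    /* authoritative recheck */
                pmd_entry = 0;                  /* detach the page table */
                success = 1;
        }
        pthread_mutex_unlock(&pmd_lock);
        return success;
}
```

Only one caller can observe a successful retraction; any racing caller sees the recheck fail under the lock and bails out, which is the property the patch relies on.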
Suggested-by: Jann Horn
Signed-off-by: Qi Zheng
---
 mm/khugepaged.c | 45 +++++++++++++++++++++++++++++++--------------
 1 file changed, 31 insertions(+), 14 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 6f8d46d107b4b..99dc995aac110 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -947,17 +947,10 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
 	return SCAN_SUCCEED;
 }
 
-static int find_pmd_or_thp_or_none(struct mm_struct *mm,
-				   unsigned long address,
-				   pmd_t **pmd)
+static inline int check_pmd_state(pmd_t *pmd)
 {
-	pmd_t pmde;
+	pmd_t pmde = pmdp_get_lockless(pmd);
 
-	*pmd = mm_find_pmd(mm, address);
-	if (!*pmd)
-		return SCAN_PMD_NULL;
-
-	pmde = pmdp_get_lockless(*pmd);
 	if (pmd_none(pmde))
 		return SCAN_PMD_NONE;
 	if (!pmd_present(pmde))
@@ -971,6 +964,17 @@ static int find_pmd_or_thp_or_none(struct mm_struct *mm,
 	return SCAN_SUCCEED;
 }
 
+static int find_pmd_or_thp_or_none(struct mm_struct *mm,
+				   unsigned long address,
+				   pmd_t **pmd)
+{
+	*pmd = mm_find_pmd(mm, address);
+	if (!*pmd)
+		return SCAN_PMD_NULL;
+
+	return check_pmd_state(*pmd);
+}
+
 static int check_pmd_still_valid(struct mm_struct *mm,
 				 unsigned long address,
 				 pmd_t *pmd)
@@ -1720,7 +1724,7 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 		pmd_t *pmd, pgt_pmd;
 		spinlock_t *pml;
 		spinlock_t *ptl;
-		bool skipped_uffd = false;
+		bool success = false;
 
 		/*
 		 * Check vma->anon_vma to exclude MAP_PRIVATE mappings that
@@ -1757,6 +1761,19 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 		mmu_notifier_invalidate_range_start(&range);
 
 		pml = pmd_lock(mm, pmd);
+		/*
+		 * The lock of new_folio is still held, we will be blocked in
+		 * the page fault path, which prevents the pte entries from
+		 * being set again. So even though the old empty PTE page may be
+		 * concurrently freed and a new PTE page is filled into the pmd
+		 * entry, it is still empty and can be removed.
+		 *
+		 * So here we only need to recheck if the state of pmd entry
+		 * still meets our requirements, rather than checking pmd_same()
+		 * like elsewhere.
+		 */
+		if (check_pmd_state(pmd) != SCAN_SUCCEED)
+			goto drop_pml;
 		ptl = pte_lockptr(mm, pmd);
 		if (ptl != pml)
 			spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
@@ -1770,20 +1787,20 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 		 * repeating the anon_vma check protects from one category,
 		 * and repeating the userfaultfd_wp() check from another.
 		 */
-		if (unlikely(vma->anon_vma || userfaultfd_wp(vma))) {
-			skipped_uffd = true;
-		} else {
+		if (likely(!vma->anon_vma && !userfaultfd_wp(vma))) {
 			pgt_pmd = pmdp_collapse_flush(vma, addr, pmd);
 			pmdp_get_lockless_sync();
+			success = true;
 		}
 
 		if (ptl != pml)
 			spin_unlock(ptl);
+drop_pml:
 		spin_unlock(pml);
 
 		mmu_notifier_invalidate_range_end(&range);
 
-		if (!skipped_uffd) {
+		if (success) {
 			mm_dec_nr_ptes(mm);
 			page_table_check_pte_clear_range(mm, addr, pgt_pmd);
 			pte_free_defer(mm, pmd_pgtable(pgt_pmd));
-- 
2.20.1

From nobody Sat Nov 23 05:39:08 2024
From: Qi Zheng
Subject: [PATCH v3 2/9] mm: userfaultfd: recheck dst_pmd entry in move_pages_pte()
Date: Thu, 14 Nov 2024 14:59:53 +0800
In move_pages_pte(), since dst_pte needs to be none, the subsequent
pte_same() check cannot prevent the dst_pte page from being freed
concurrently, so we also need to obtain dst_pmdval and recheck
pmd_same(). Otherwise, once we support empty PTE page reclamation for
anonymous pages, it may result in moving the src_pte page into the
dst_pte page that is about to be freed by RCU.

Signed-off-by: Qi Zheng
---
 mm/userfaultfd.c | 51 +++++++++++++++++++++++++++++++-----------------
 1 file changed, 33 insertions(+), 18 deletions(-)

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 60a0be33766ff..8e16dc290ddf1 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -1020,6 +1020,14 @@ void double_pt_unlock(spinlock_t *ptl1,
 		__release(ptl2);
 }
 
+static inline bool is_pte_pages_stable(pte_t *dst_pte, pte_t *src_pte,
+				       pte_t orig_dst_pte, pte_t orig_src_pte,
+				       pmd_t *dst_pmd, pmd_t dst_pmdval)
+{
+	return pte_same(ptep_get(src_pte), orig_src_pte) &&
+	       pte_same(ptep_get(dst_pte), orig_dst_pte) &&
+	       pmd_same(dst_pmdval, pmdp_get_lockless(dst_pmd));
+}
 
 static int move_present_pte(struct mm_struct *mm,
 			    struct vm_area_struct *dst_vma,
@@ -1027,6 +1035,7 @@ static int move_present_pte(struct mm_struct *mm,
 			    unsigned long dst_addr, unsigned long src_addr,
 			    pte_t *dst_pte, pte_t *src_pte,
 			    pte_t orig_dst_pte, pte_t orig_src_pte,
+			    pmd_t *dst_pmd, pmd_t dst_pmdval,
 			    spinlock_t *dst_ptl, spinlock_t *src_ptl,
 			    struct folio *src_folio)
 {
@@ -1034,8 +1043,8 @@ static int move_present_pte(struct mm_struct *mm,
 
 	double_pt_lock(dst_ptl, src_ptl);
 
-	if (!pte_same(ptep_get(src_pte), orig_src_pte) ||
-	    !pte_same(ptep_get(dst_pte), orig_dst_pte)) {
+	if (!is_pte_pages_stable(dst_pte, src_pte, orig_dst_pte, orig_src_pte,
+				 dst_pmd, dst_pmdval)) {
 		err = -EAGAIN;
 		goto out;
 	}
@@ -1071,6 +1080,7 @@ static int move_swap_pte(struct mm_struct *mm,
 			 unsigned long dst_addr, unsigned long src_addr,
 			 pte_t *dst_pte, pte_t *src_pte,
 			 pte_t orig_dst_pte, pte_t orig_src_pte,
+			 pmd_t *dst_pmd, pmd_t dst_pmdval,
 			 spinlock_t *dst_ptl, spinlock_t *src_ptl)
 {
 	if (!pte_swp_exclusive(orig_src_pte))
@@ -1078,8 +1088,8 @@ static int move_swap_pte(struct mm_struct *mm,
 
 	double_pt_lock(dst_ptl, src_ptl);
 
-	if (!pte_same(ptep_get(src_pte), orig_src_pte) ||
-	    !pte_same(ptep_get(dst_pte), orig_dst_pte)) {
+	if (!is_pte_pages_stable(dst_pte, src_pte, orig_dst_pte, orig_src_pte,
+				 dst_pmd, dst_pmdval)) {
 		double_pt_unlock(dst_ptl, src_ptl);
 		return -EAGAIN;
 	}
@@ -1097,13 +1107,14 @@ static int move_zeropage_pte(struct mm_struct *mm,
 			     unsigned long dst_addr, unsigned long src_addr,
 			     pte_t *dst_pte, pte_t *src_pte,
 			     pte_t orig_dst_pte, pte_t orig_src_pte,
+			     pmd_t *dst_pmd, pmd_t dst_pmdval,
 			     spinlock_t *dst_ptl, spinlock_t *src_ptl)
 {
 	pte_t zero_pte;
 
 	double_pt_lock(dst_ptl, src_ptl);
-	if (!pte_same(ptep_get(src_pte), orig_src_pte) ||
-	    !pte_same(ptep_get(dst_pte), orig_dst_pte)) {
+	if (!is_pte_pages_stable(dst_pte, src_pte, orig_dst_pte, orig_src_pte,
+				 dst_pmd, dst_pmdval)) {
 		double_pt_unlock(dst_ptl, src_ptl);
 		return -EAGAIN;
 	}
@@ -1136,6 +1147,7 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
 	pte_t *src_pte = NULL;
 	pte_t *dst_pte = NULL;
 	pmd_t dummy_pmdval;
+	pmd_t dst_pmdval;
 	struct folio *src_folio = NULL;
 	struct anon_vma *src_anon_vma = NULL;
 	struct mmu_notifier_range range;
@@ -1148,11 +1160,11 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
 retry:
 	/*
 	 * Use the maywrite version to indicate that dst_pte will be modified,
-	 * but since we will use pte_same() to detect the change of the pte
-	 * entry, there is no need to get pmdval, so just pass a dummy variable
-	 * to it.
+	 * since dst_pte needs to be none, the subsequent pte_same() check
+	 * cannot prevent the dst_pte page from being freed concurrently, so we
+	 * also need to obtain dst_pmdval and recheck pmd_same() later.
 	 */
-	dst_pte = pte_offset_map_rw_nolock(mm, dst_pmd, dst_addr, &dummy_pmdval,
+	dst_pte = pte_offset_map_rw_nolock(mm, dst_pmd, dst_addr, &dst_pmdval,
 					   &dst_ptl);
 
 	/* Retry if a huge pmd materialized from under us */
@@ -1161,7 +1173,11 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
 		goto out;
 	}
 
-	/* same as dst_pte */
+	/*
+	 * Unlike dst_pte, the subsequent pte_same() check can ensure the
+	 * stability of the src_pte page, so there is no need to get pmdval,
+	 * just pass a dummy variable to it.
+	 */
 	src_pte = pte_offset_map_rw_nolock(mm, src_pmd, src_addr, &dummy_pmdval,
 					   &src_ptl);
 
@@ -1213,7 +1229,7 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
 		err = move_zeropage_pte(mm, dst_vma, src_vma,
 					dst_addr, src_addr, dst_pte, src_pte,
 					orig_dst_pte, orig_src_pte,
-					dst_ptl, src_ptl);
+					dst_pmd, dst_pmdval, dst_ptl, src_ptl);
 		goto out;
 	}
 
@@ -1303,8 +1319,8 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
 
 		err = move_present_pte(mm, dst_vma, src_vma,
 				       dst_addr, src_addr, dst_pte, src_pte,
-				       orig_dst_pte, orig_src_pte,
-				       dst_ptl, src_ptl, src_folio);
+				       orig_dst_pte, orig_src_pte, dst_pmd,
+				       dst_pmdval, dst_ptl, src_ptl, src_folio);
 	} else {
 		entry = pte_to_swp_entry(orig_src_pte);
 		if (non_swap_entry(entry)) {
@@ -1319,10 +1335,9 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
 			goto out;
 		}
 
-		err = move_swap_pte(mm, dst_addr, src_addr,
-				    dst_pte, src_pte,
-				    orig_dst_pte, orig_src_pte,
-				    dst_ptl, src_ptl);
+		err = move_swap_pte(mm, dst_addr, src_addr, dst_pte, src_pte,
+				    orig_dst_pte, orig_src_pte, dst_pmd,
+				    dst_pmdval, dst_ptl, src_ptl);
 	}
 
 out:
-- 
2.20.1

From nobody Sat Nov 23 05:39:08 2024
From: Qi Zheng
Subject: [PATCH v3 3/9] mm: introduce zap_nonpresent_ptes()
Date: Thu, 14 Nov 2024 14:59:54 +0800
Message-Id: <25e70f171e17370ec65159a301ff4f852991e14c.1731566457.git.zhengqi.arch@bytedance.com>

Similar to zap_present_ptes(), let's introduce zap_nonpresent_ptes() to
handle non-present ptes, which can improve code readability.

No functional change.

Signed-off-by: Qi Zheng
Reviewed-by: Jann Horn
Acked-by: David Hildenbrand
---
 mm/memory.c | 136 ++++++++++++++++++++++++++++------------------------
 1 file changed, 73 insertions(+), 63 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 209885a4134f7..bd9ebe0f4471f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1587,6 +1587,76 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
 	return 1;
 }
 
+static inline int zap_nonpresent_ptes(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, pte_t *pte, pte_t ptent,
+		unsigned int max_nr, unsigned long addr,
+		struct zap_details *details, int *rss)
+{
+	swp_entry_t entry;
+	int nr = 1;
+
+	entry = pte_to_swp_entry(ptent);
+	if (is_device_private_entry(entry) ||
+	    is_device_exclusive_entry(entry)) {
+		struct page *page = pfn_swap_entry_to_page(entry);
+		struct folio *folio = page_folio(page);
+
+		if (unlikely(!should_zap_folio(details, folio)))
+			return 1;
+		/*
+		 * Both device private/exclusive mappings should only
+		 * work with anonymous page so far, so we don't need to
+		 * consider uffd-wp bit when zap. For more information,
+		 * see zap_install_uffd_wp_if_needed().
+		 */
+		WARN_ON_ONCE(!vma_is_anonymous(vma));
+		rss[mm_counter(folio)]--;
+		if (is_device_private_entry(entry))
+			folio_remove_rmap_pte(folio, page, vma);
+		folio_put(folio);
+	} else if (!non_swap_entry(entry)) {
+		/* Genuine swap entries, hence a private anon pages */
+		if (!should_zap_cows(details))
+			return 1;
+
+		nr = swap_pte_batch(pte, max_nr, ptent);
+		rss[MM_SWAPENTS] -= nr;
+		free_swap_and_cache_nr(entry, nr);
+	} else if (is_migration_entry(entry)) {
+		struct folio *folio = pfn_swap_entry_folio(entry);
+
+		if (!should_zap_folio(details, folio))
+			return 1;
+		rss[mm_counter(folio)]--;
+	} else if (pte_marker_entry_uffd_wp(entry)) {
+		/*
+		 * For anon: always drop the marker; for file: only
+		 * drop the marker if explicitly requested.
+		 */
+		if (!vma_is_anonymous(vma) && !zap_drop_markers(details))
+			return 1;
+	} else if (is_guard_swp_entry(entry)) {
+		/*
+		 * Ordinary zapping should not remove guard PTE
+		 * markers. Only do so if we should remove PTE markers
+		 * in general.
+		 */
+		if (!zap_drop_markers(details))
+			return 1;
+	} else if (is_hwpoison_entry(entry) || is_poisoned_swp_entry(entry)) {
+		if (!should_zap_cows(details))
+			return 1;
+	} else {
+		/* We should have covered all the swap entry types */
+		pr_alert("unrecognized swap entry 0x%lx\n", entry.val);
+		WARN_ON_ONCE(1);
+	}
+	clear_not_present_full_ptes(vma->vm_mm, addr, pte, nr, tlb->fullmm);
+	zap_install_uffd_wp_if_needed(vma, addr, pte, nr, details, ptent);
+
+	return nr;
+}
+
 static unsigned long zap_pte_range(struct mmu_gather *tlb,
 				struct vm_area_struct *vma, pmd_t *pmd,
 				unsigned long addr, unsigned long end,
@@ -1598,7 +1668,6 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	spinlock_t *ptl;
 	pte_t *start_pte;
 	pte_t *pte;
-	swp_entry_t entry;
 	int nr;
 
 	tlb_change_page_size(tlb, PAGE_SIZE);
@@ -1611,8 +1680,6 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	arch_enter_lazy_mmu_mode();
 	do {
 		pte_t ptent = ptep_get(pte);
-		struct folio *folio;
-		struct page *page;
 		int max_nr;
 
 		nr = 1;
@@ -1622,8 +1689,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 		if (need_resched())
 			break;
 
+		max_nr = (end - addr) / PAGE_SIZE;
 		if (pte_present(ptent)) {
-			max_nr = (end - addr) / PAGE_SIZE;
 			nr = zap_present_ptes(tlb, vma, pte, ptent, max_nr,
 					      addr, details, rss,
 					      &force_flush, &force_break);
@@ -1631,67 +1698,10 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 				addr += nr * PAGE_SIZE;
 				break;
 			}
-			continue;
-		}
-
-		entry = pte_to_swp_entry(ptent);
-		if (is_device_private_entry(entry) ||
-		    is_device_exclusive_entry(entry)) {
-			page = pfn_swap_entry_to_page(entry);
-			folio = page_folio(page);
-			if (unlikely(!should_zap_folio(details, folio)))
-				continue;
-			/*
-			 * Both device private/exclusive mappings should only
-			 * work with anonymous page so far, so we don't need to
-			 * consider uffd-wp bit when zap. For more information,
-			 * see zap_install_uffd_wp_if_needed().
-			 */
-			WARN_ON_ONCE(!vma_is_anonymous(vma));
-			rss[mm_counter(folio)]--;
-			if (is_device_private_entry(entry))
-				folio_remove_rmap_pte(folio, page, vma);
-			folio_put(folio);
-		} else if (!non_swap_entry(entry)) {
-			max_nr = (end - addr) / PAGE_SIZE;
-			nr = swap_pte_batch(pte, max_nr, ptent);
-			/* Genuine swap entries, hence a private anon pages */
-			if (!should_zap_cows(details))
-				continue;
-			rss[MM_SWAPENTS] -= nr;
-			free_swap_and_cache_nr(entry, nr);
-		} else if (is_migration_entry(entry)) {
-			folio = pfn_swap_entry_folio(entry);
-			if (!should_zap_folio(details, folio))
-				continue;
-			rss[mm_counter(folio)]--;
-		} else if (pte_marker_entry_uffd_wp(entry)) {
-			/*
-			 * For anon: always drop the marker; for file: only
-			 * drop the marker if explicitly requested.
-			 */
-			if (!vma_is_anonymous(vma) &&
-			    !zap_drop_markers(details))
-				continue;
-		} else if (is_guard_swp_entry(entry)) {
-			/*
-			 * Ordinary zapping should not remove guard PTE
-			 * markers. Only do so if we should remove PTE markers
-			 * in general.
-			 */
-			if (!zap_drop_markers(details))
-				continue;
-		} else if (is_hwpoison_entry(entry) ||
-			   is_poisoned_swp_entry(entry)) {
-			if (!should_zap_cows(details))
-				continue;
 		} else {
-			/* We should have covered all the swap entry types */
-			pr_alert("unrecognized swap entry 0x%lx\n", entry.val);
-			WARN_ON_ONCE(1);
+			nr = zap_nonpresent_ptes(tlb, vma, pte, ptent, max_nr,
+						 addr, details, rss);
 		}
-		clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
-		zap_install_uffd_wp_if_needed(vma, addr, pte, nr, details, ptent);
 	} while (pte += nr, addr += PAGE_SIZE * nr, addr != end);
 
 	add_mm_rss_vec(mm, rss);
-- 
2.20.1

From nobody Sat Nov 23 05:39:08 2024
smtp.client-ip=209.85.216.43 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=bytedance.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=bytedance.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=bytedance.com header.i=@bytedance.com header.b="AqMGLjUR" Received: by mail-pj1-f43.google.com with SMTP id 98e67ed59e1d1-2e2a97c2681so252461a91.2 for ; Wed, 13 Nov 2024 23:00:58 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance.com; s=google; t=1731567658; x=1732172458; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=FQVB91K7fAT15psCxftJJTVCdE7dh3d9kskucoAkCus=; b=AqMGLjURuTx+he+Ux9DARb+1NwSwKiBh+TbbiCK7NndGGyrKUHWn9zm0wQtZ/s2gdF uqOtwYQ+vxDggDlVkPFIIf11cHwVw9mRJSyI0Ghj9F340Ympt/KBNuOXhFGOhdrWyhTr k47sKlL/4Y0O398MsI0w3utZUFPYBb88iIPzl6VB9GgPP2iQD/RutOVelaMjd22ZEHPL kiUqWqG1SDvaynP7zvKC5PTzgb2p6VlfhoiykN8GQh/+G0n5k6npupqszO/h9aXUcGlA Hff1ytNTmE/sv/a2N/YCIPlwNX8p3yaGGduka9rUGJ9ZvJe1e0M49bwxkwDjwDEQsMcx 2EvQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1731567658; x=1732172458; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=FQVB91K7fAT15psCxftJJTVCdE7dh3d9kskucoAkCus=; b=d+CC0BlEw41ZuHdjWBt8g9Khh44upEkSEi6szxEumwOvs3olVGx2OtNFf6c4pxCNal J/ipKuzi9bDtzCf0mj9+3wcMPsarzI9GRhooPNtbiERpkAIYgd1bLU2MfTK/6XdcJ90G yzKbHpP6HpNT/5I4jBGyJwywLcGOOdllXz2yu5UCnIt9ZCJRQfcwHzFPhVho2PlrfJKy McCauyNxnb1f7lCOMNY8ZviHb0YUYrL39FfYDij7s48dEPCAjiVXaLspFChWTFi53sBa Zam+mkW2xFCOWCNTzZqICE0Mo9DgnIercl2yR3yigI+bbdkI5bt6g7Sw6hHbvo0nxrFB R+7A== X-Forwarded-Encrypted: i=1; AJvYcCWPNJ569OiY31qFAC6btxvZ5kywa7SC7CVeszkROAWBVUXAUWQbei+2yWnJ14VY5VnVMs52XydcKZP4NzE=@vger.kernel.org 
X-Gm-Message-State: AOJu0YweEN/ZBtOXFbtz87wemzXQF3EDGLpvUqpuG2RE328akajlDAB7 mbl7/ZkCQr4b+lgnMw1jnlApSd4RzDVBcTu7AvRkp3CRH8dFKO09z/ZRfgnLFeI= X-Google-Smtp-Source: AGHT+IEEOci5upVGjhq10Ml46AvoKHnyJ01pyJF7RhtSq5zmKJYKrXkqxyrBi8tCjSUuj7BETevvWg== X-Received: by 2002:a17:90b:43:b0:2d8:e524:797b with SMTP id 98e67ed59e1d1-2e9e4bed427mr12110579a91.18.1731567658258; Wed, 13 Nov 2024 23:00:58 -0800 (PST) Received: from C02DW0BEMD6R.bytedance.net ([63.216.146.178]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-211c7d389c2sm4119065ad.268.2024.11.13.23.00.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 13 Nov 2024 23:00:57 -0800 (PST) From: Qi Zheng To: david@redhat.com, jannh@google.com, hughd@google.com, willy@infradead.org, muchun.song@linux.dev, vbabka@kernel.org, akpm@linux-foundation.org, peterx@redhat.com Cc: mgorman@suse.de, catalin.marinas@arm.com, will@kernel.org, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, x86@kernel.org, lorenzo.stoakes@oracle.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, zokeefe@google.com, rientjes@google.com, Qi Zheng Subject: [PATCH v3 4/9] mm: introduce skip_none_ptes() Date: Thu, 14 Nov 2024 14:59:55 +0800 Message-Id: <574bc9b646c87d878a5048edb63698a1f8483e10.1731566457.git.zhengqi.arch@bytedance.com> X-Mailer: git-send-email 2.24.3 (Apple Git-128) In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" This commit introduces skip_none_ptes() to skip over all consecutive none ptes in zap_pte_range(), which helps optimize away need_resched() + force_break + incremental pte/addr increments etc. 
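To illustrate the batching idea outside the kernel (an illustration only, not part of the patch): the helper below scans an array in which a zero entry stands in for a none pte, and returns the number of consecutive "none" entries so the caller can advance its cursor in one step. `skip_none_entries` is an invented name playing the role skip_none_ptes() plays in zap_pte_range().

```c
#include <stddef.h>

/*
 * User-space sketch of the skip_none_ptes() idea: given an array of
 * entries (0 == "none"), return how many consecutive none entries
 * start at index 0, capped at max_nr. The caller then advances its
 * cursor by the returned count in one step instead of doing the
 * per-entry bookkeeping once per loop iteration.
 */
static size_t skip_none_entries(const unsigned long *entries, size_t max_nr)
{
	size_t nr;

	if (max_nr == 0 || entries[0] != 0)
		return 0;

	for (nr = 1; nr < max_nr; nr++) {
		if (entries[nr] != 0)
			break;
	}
	return nr;
}
```

The caller pairs this with a single `i += nr` advance, which is where the saved per-entry work comes from.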
Suggested-by: David Hildenbrand
Signed-off-by: Qi Zheng
---
 mm/memory.c | 34 ++++++++++++++++++++++++++++++----
 1 file changed, 30 insertions(+), 4 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index bd9ebe0f4471f..24633d0e1445a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1657,6 +1657,28 @@ static inline int zap_nonpresent_ptes(struct mmu_gather *tlb,
 	return nr;
 }
 
+static inline int skip_none_ptes(pte_t *pte, unsigned long addr,
+				 unsigned long end)
+{
+	pte_t ptent = ptep_get(pte);
+	int max_nr;
+	int nr;
+
+	if (!pte_none(ptent))
+		return 0;
+
+	max_nr = (end - addr) / PAGE_SIZE;
+	nr = 1;
+
+	for (; nr < max_nr; nr++) {
+		ptent = ptep_get(pte + nr);
+		if (!pte_none(ptent))
+			break;
+	}
+
+	return nr;
+}
+
 static unsigned long zap_pte_range(struct mmu_gather *tlb,
 				struct vm_area_struct *vma, pmd_t *pmd,
 				unsigned long addr, unsigned long end,
@@ -1682,13 +1704,17 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 		pte_t ptent = ptep_get(pte);
 		int max_nr;
 
-		nr = 1;
-		if (pte_none(ptent))
-			continue;
-
 		if (need_resched())
 			break;
 
+		nr = skip_none_ptes(pte, addr, end);
+		if (nr) {
+			addr += PAGE_SIZE * nr;
+			if (addr == end)
+				break;
+			pte += nr;
+		}
+
 		max_nr = (end - addr) / PAGE_SIZE;
 		if (pte_present(ptent)) {
 			nr = zap_present_ptes(tlb, vma, pte, ptent, max_nr,
-- 
2.20.1
From nobody Sat Nov 23 05:39:08 2024
From: Qi Zheng
Subject: [PATCH v3 5/9] mm: introduce do_zap_pte_range()
Date: Thu, 14 Nov 2024 14:59:56 +0800
Message-Id: <1d8467c1428573cc666ca3150ba66877f7b316cf.1731566457.git.zhengqi.arch@bytedance.com>

This commit introduces do_zap_pte_range() to actually zap the PTEs, which
will help improve code readability and facilitate secondary checking of
the processed PTEs in the future.

No functional change.

Signed-off-by: Qi Zheng
Reviewed-by: Jann Horn
Acked-by: David Hildenbrand
---
 mm/memory.c | 39 ++++++++++++++++++++++++---------------
 1 file changed, 24 insertions(+), 15 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 24633d0e1445a..bf5ac8e0b4656 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1679,6 +1679,25 @@ static inline int skip_none_ptes(pte_t *pte, unsigned long addr,
 	return nr;
 }
 
+/* If PTE_MARKER_UFFD_WP is enabled, the uffd-wp PTEs may be re-installed. */
+static inline int do_zap_pte_range(struct mmu_gather *tlb,
+				   struct vm_area_struct *vma, pte_t *pte,
+				   unsigned long addr, unsigned long end,
+				   struct zap_details *details, int *rss,
+				   bool *force_flush, bool *force_break)
+{
+	pte_t ptent = ptep_get(pte);
+	int max_nr = (end - addr) / PAGE_SIZE;
+
+	if (pte_present(ptent))
+		return zap_present_ptes(tlb, vma, pte, ptent, max_nr,
+					addr, details, rss, force_flush,
+					force_break);
+
+	return zap_nonpresent_ptes(tlb, vma, pte, ptent, max_nr, addr,
+				   details, rss);
+}
+
 static unsigned long zap_pte_range(struct mmu_gather *tlb,
 				struct vm_area_struct *vma, pmd_t *pmd,
 				unsigned long addr, unsigned long end,
@@ -1701,9 +1720,6 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	flush_tlb_batched_pending(mm);
 	arch_enter_lazy_mmu_mode();
 	do {
-		pte_t ptent = ptep_get(pte);
-		int max_nr;
-
 		if (need_resched())
 			break;
 
@@ -1715,18 +1731,11 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			pte += nr;
 		}
 
-		max_nr = (end - addr) / PAGE_SIZE;
-		if (pte_present(ptent)) {
-			nr = zap_present_ptes(tlb, vma, pte, ptent, max_nr,
-					      addr, details, rss, &force_flush,
-					      &force_break);
-			if (unlikely(force_break)) {
-				addr += nr * PAGE_SIZE;
-				break;
-			}
-		} else {
-			nr = zap_nonpresent_ptes(tlb, vma, pte, ptent, max_nr,
-						 addr, details, rss);
+		nr = do_zap_pte_range(tlb, vma, pte, addr, end, details,
+				      rss, &force_flush, &force_break);
+		if (unlikely(force_break)) {
+			addr += nr * PAGE_SIZE;
+			break;
 		}
 	} while (pte += nr, addr += PAGE_SIZE * nr, addr != end);
 
-- 
2.20.1
From nobody Sat Nov 23 05:39:08 2024
From: Qi Zheng
Subject: [PATCH v3 6/9] mm: make zap_pte_range() handle full within-PMD range
Date: Thu, 14 Nov 2024 14:59:57 +0800
Message-Id: <3aaf6c2338372866b85cea78140f5ea497ccc33d.1731566457.git.zhengqi.arch@bytedance.com>

In preparation for reclaiming empty PTE pages, this commit first makes
zap_pte_range() handle the full within-PMD range, so that we can more
easily detect and free PTE pages in this function in subsequent commits.
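The resulting control flow — bail out of the locked loop early, then reschedule and retry until the whole range has been processed — can be sketched in user-space C (an illustration only, not part of the patch; `process_some()` is an invented stand-in for one locked pass over the page table, not a kernel API):

```c
/*
 * Sketch of the retry pattern applied to zap_pte_range(): the inner
 * pass may stop before reaching `end` (in the kernel, because of
 * need_resched() or force_break). Instead of returning the partial
 * position to the caller, the function retries until addr == end.
 */
static unsigned long process_some(unsigned long addr, unsigned long end,
				  unsigned long chunk)
{
	/* Pretend one locked pass only makes `chunk` bytes of progress. */
	unsigned long next = addr + chunk;

	return next < end ? next : end;
}

static unsigned long process_range(unsigned long addr, unsigned long end,
				   unsigned long chunk, int *passes)
{
retry:
	addr = process_some(addr, end, chunk);
	(*passes)++;
	if (addr != end) {
		/* cond_resched() and state reset would go here */
		goto retry;
	}
	return addr;
}
```

With the retry inside the function, callers always see the full range handled, which is what lets later commits check "did we empty the whole PTE page?" in one place.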
Signed-off-by: Qi Zheng
Reviewed-by: Jann Horn
---
 mm/memory.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index bf5ac8e0b4656..8b3348ff374ff 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1711,6 +1711,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	pte_t *pte;
 	int nr;
 
+retry:
 	tlb_change_page_size(tlb, PAGE_SIZE);
 	init_rss_vec(rss);
 	start_pte = pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
@@ -1758,6 +1759,13 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	if (force_flush)
 		tlb_flush_mmu(tlb);
 
+	if (addr != end) {
+		cond_resched();
+		force_flush = false;
+		force_break = false;
+		goto retry;
+	}
+
 	return addr;
 }
 
-- 
2.20.1
From nobody Sat Nov 23 05:39:08 2024
From: Qi Zheng
Subject: [PATCH v3 7/9] mm: pgtable: try to reclaim empty PTE page in
 madvise(MADV_DONTNEED)
Date: Thu, 14 Nov 2024 14:59:58 +0800
Message-Id: <9e6f0cff7ae29cd8bd1812d3a0e3513de3f42f42.1731566457.git.zhengqi.arch@bytedance.com>

In pursuit of high performance, applications mostly use high-performance
user-mode memory allocators, such as jemalloc or tcmalloc.
These memory allocators use madvise(MADV_DONTNEED or MADV_FREE) to release
physical memory, but neither MADV_DONTNEED nor MADV_FREE releases page
table memory, which can lead to huge page table memory usage. The
following is a memory usage snapshot of one process that actually occurred
on our server:

        VIRT:  55t
        RES:  590g
        VmPTE: 110g

In this case, most of the page table entries are empty. For such a PTE
page where all entries are empty, we can actually free it back to the
system for others to use.

As a first step, this commit aims to synchronously free empty PTE pages in
the madvise(MADV_DONTNEED) case. We detect and free empty PTE pages in
zap_pte_range(), and add zap_details.reclaim_pt to exclude cases other
than madvise(MADV_DONTNEED).

Once an empty PTE page is detected, we first try to take the pmd lock
while still holding the pte lock. If successful, we clear the pmd entry
directly (fast path). Otherwise, we wait until the pte lock is released,
then re-take the pmd and pte locks and loop PTRS_PER_PTE times, checking
pte_none() to re-verify that the PTE page is empty before freeing it
(slow path).

For other cases such as madvise(MADV_FREE), consider scanning and freeing
empty PTE pages asynchronously in the future.
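The fast-path/slow-path split above can be sketched with a pthreads mutex standing in for the pmd spinlock (an illustration only, not part of the patch; `struct table`, `try_reclaim_fast()` and `reclaim_slow()` are invented names, not kernel APIs):

```c
#include <pthread.h>

/* A toy "page table": an outer lock guarding 8 entries (0 == empty). */
struct table {
	pthread_mutex_t outer;	/* stands in for the pmd lock */
	unsigned long entries[8];
	int freed;		/* set once the table has been reclaimed */
};

static int all_empty(const struct table *t)
{
	for (int i = 0; i < 8; i++)
		if (t->entries[i] != 0)
			return 0;
	return 1;
}

/*
 * Fast path: the caller has just verified emptiness under the inner
 * (pte-level) lock, so it may only *try* to take the outer lock; a
 * blocking acquire here could deadlock or stall.
 */
static int try_reclaim_fast(struct table *t)
{
	if (pthread_mutex_trylock(&t->outer) != 0)
		return 0;	/* contended: caller falls back to slow path */
	t->freed = 1;
	pthread_mutex_unlock(&t->outer);
	return 1;
}

/*
 * Slow path: with the inner lock dropped, take the outer lock
 * unconditionally and re-verify emptiness, since entries may have
 * been repopulated in the meantime.
 */
static void reclaim_slow(struct table *t)
{
	pthread_mutex_lock(&t->outer);
	if (all_empty(t))
		t->freed = 1;
	pthread_mutex_unlock(&t->outer);
}
```

The re-check in the slow path mirrors the PTRS_PER_PTE pte_none() loop in the commit message: once the locks were dropped, the earlier "empty" observation can no longer be trusted.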
The following code snippet can show the effect of the optimization:

        mmap 50G
        while (1) {
                for (; i < 1024 * 25; i++) {
                        touch 2M memory
                        madvise MADV_DONTNEED 2M
                }
        }

As we can see, the memory usage of VmPTE is reduced:

                        before          after
        VIRT          50.0 GB         50.0 GB
        RES            3.1 MB          3.1 MB
        VmPTE       102640 KB          240 KB

Signed-off-by: Qi Zheng
---
 include/linux/mm.h |  1 +
 mm/Kconfig         | 15 ++++++++++
 mm/Makefile        |  1 +
 mm/internal.h      | 19 +++++++++++++
 mm/madvise.c       |  7 ++++-
 mm/memory.c        | 45 ++++++++++++++++++++++++++++-
 mm/pt_reclaim.c    | 71 ++++++++++++++++++++++++++++++++++++++++++++++
 7 files changed, 157 insertions(+), 2 deletions(-)
 create mode 100644 mm/pt_reclaim.c

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ca59d165f1f2e..1fcd4172d2c03 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2319,6 +2319,7 @@ extern void pagefault_out_of_memory(void);
 struct zap_details {
 	struct folio *single_folio;	/* Locked folio to be unmapped */
 	bool even_cows;			/* Zap COWed private pages too? */
+	bool reclaim_pt;		/* Need reclaim page tables? */
 	zap_flags_t zap_flags;		/* Extra flags for zapping */
 };
 
diff --git a/mm/Kconfig b/mm/Kconfig
index 84000b0168086..7949ab121070f 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1301,6 +1301,21 @@ config ARCH_HAS_USER_SHADOW_STACK
 	  The architecture has hardware support for userspace shadow call
 	  stacks (eg, x86 CET, arm64 GCS or RISC-V Zicfiss).
 
+config ARCH_SUPPORTS_PT_RECLAIM
+	def_bool n
+
+config PT_RECLAIM
+	bool "reclaim empty user page table pages"
+	default y
+	depends on ARCH_SUPPORTS_PT_RECLAIM && MMU && SMP
+	select MMU_GATHER_RCU_TABLE_FREE
+	help
+	  Try to reclaim empty user page table pages in paths other than munmap
+	  and exit_mmap path.
+
+	  Note: now only empty user PTE page table pages will be reclaimed.
+
+
 source "mm/damon/Kconfig"
 
 endmenu
diff --git a/mm/Makefile b/mm/Makefile
index dba52bb0da8ab..850386a67b3e0 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -146,3 +146,4 @@ obj-$(CONFIG_GENERIC_IOREMAP) += ioremap.o
 obj-$(CONFIG_SHRINKER_DEBUG) += shrinker_debug.o
 obj-$(CONFIG_EXECMEM) += execmem.o
 obj-$(CONFIG_TMPFS_QUOTA) += shmem_quota.o
+obj-$(CONFIG_PT_RECLAIM) += pt_reclaim.o
diff --git a/mm/internal.h b/mm/internal.h
index 5a7302baeed7c..5b2aef61073f1 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1530,4 +1530,23 @@ int walk_page_range_mm(struct mm_struct *mm, unsigned long start,
 		unsigned long end, const struct mm_walk_ops *ops,
 		void *private);
 
+/* pt_reclaim.c */
+bool try_get_and_clear_pmd(struct mm_struct *mm, pmd_t *pmd, pmd_t *pmdval);
+void free_pte(struct mm_struct *mm, unsigned long addr, struct mmu_gather *tlb,
+	      pmd_t pmdval);
+void try_to_free_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr,
+		     struct mmu_gather *tlb);
+
+#ifdef CONFIG_PT_RECLAIM
+bool reclaim_pt_is_enabled(unsigned long start, unsigned long end,
+			   struct zap_details *details);
+#else
+static inline bool reclaim_pt_is_enabled(unsigned long start, unsigned long end,
+					 struct zap_details *details)
+{
+	return false;
+}
+#endif /* CONFIG_PT_RECLAIM */
+
+
 #endif /* __MM_INTERNAL_H */
diff --git a/mm/madvise.c b/mm/madvise.c
index 0ceae57da7dad..49f3a75046f63 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -851,7 +851,12 @@ static int madvise_free_single_vma(struct vm_area_struct *vma,
 static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
 					unsigned long start, unsigned long end)
 {
-	zap_page_range_single(vma, start, end - start, NULL);
+	struct zap_details details = {
+		.reclaim_pt = true,
+		.even_cows = true,
+	};
+
+	zap_page_range_single(vma, start, end - start, &details);
 	return 0;
 }
 
diff --git a/mm/memory.c b/mm/memory.c
index 8b3348ff374ff..fe93b0648c430 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1436,7 +1436,7 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
 static inline bool should_zap_cows(struct zap_details *details)
 {
 	/* By default, zap all pages */
-	if (!details)
+	if (!details || details->reclaim_pt)
 		return true;
 
 	/* Or, we zap COWed pages only if the caller wants to */
@@ -1698,6 +1698,30 @@ static inline int do_zap_pte_range(struct mmu_gather *tlb,
 				   details, rss);
 }
 
+static inline int count_pte_none(pte_t *pte, int nr)
+{
+	int none_nr = 0;
+
+	/*
+	 * If PTE_MARKER_UFFD_WP is enabled, the uffd-wp PTEs may be
+	 * re-installed, so we need to check pte_none() one by one.
+	 * Otherwise, checking a single PTE in a batch is sufficient.
+	 */
+#ifdef CONFIG_PTE_MARKER_UFFD_WP
+	for (;;) {
+		if (pte_none(ptep_get(pte)))
+			none_nr++;
+		if (--nr == 0)
+			break;
+		pte++;
+	}
+#else
+	if (pte_none(ptep_get(pte)))
+		none_nr = nr;
+#endif
+	return none_nr;
+}
+
 static unsigned long zap_pte_range(struct mmu_gather *tlb,
 				struct vm_area_struct *vma, pmd_t *pmd,
 				unsigned long addr, unsigned long end,
@@ -1709,6 +1733,11 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	spinlock_t *ptl;
 	pte_t *start_pte;
 	pte_t *pte;
+	pmd_t pmdval;
+	unsigned long start = addr;
+	bool can_reclaim_pt = reclaim_pt_is_enabled(start, end, details);
+	bool direct_reclaim = false;
+	int none_nr = 0;
 	int nr;
 
 retry:
@@ -1726,6 +1755,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 
 		nr = skip_none_ptes(pte, addr, end);
 		if (nr) {
+			if (can_reclaim_pt)
+				none_nr += nr;
 			addr += PAGE_SIZE * nr;
 			if (addr == end)
 				break;
@@ -1734,12 +1765,17 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 
 		nr = do_zap_pte_range(tlb, vma, pte, addr, end, details,
 				      rss, &force_flush, &force_break);
+		if (can_reclaim_pt)
+			none_nr += count_pte_none(pte, nr);
 		if (unlikely(force_break)) {
 			addr += nr * PAGE_SIZE;
 			break;
 		}
 	} while (pte += nr, addr += PAGE_SIZE * nr, addr != end);
 
+	if (can_reclaim_pt && addr == end && (none_nr == PTRS_PER_PTE))
+		direct_reclaim = try_get_and_clear_pmd(mm, pmd, &pmdval);
+
 	add_mm_rss_vec(mm, rss);
 	arch_leave_lazy_mmu_mode();
 
@@ -1766,6 +1802,13 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 		goto retry;
 	}
 
+	if (can_reclaim_pt) {
+		if (direct_reclaim)
+			free_pte(mm, start, tlb, pmdval);
+		else
+			try_to_free_pte(mm, pmd, start, tlb);
+	}
+
 	return addr;
 }
 
diff --git a/mm/pt_reclaim.c b/mm/pt_reclaim.c
new file mode 100644
index 0000000000000..6540a3115dde8
--- /dev/null
+++ b/mm/pt_reclaim.c
@@ -0,0 +1,71 @@
+// SPDX-License-Identifier: GPL-2.0
+#include
+#include
+#include
+
+#include "internal.h"
+
+bool reclaim_pt_is_enabled(unsigned long start, unsigned long end,
+			   struct zap_details *details)
+{
+	return details && details->reclaim_pt && (end - start >= PMD_SIZE);
+}
+
+bool try_get_and_clear_pmd(struct mm_struct *mm, pmd_t *pmd, pmd_t *pmdval)
+{
+	spinlock_t *pml = pmd_lockptr(mm, pmd);
+
+	if (!spin_trylock(pml))
+		return false;
+
+	*pmdval = pmdp_get_lockless(pmd);
+	pmd_clear(pmd);
+	spin_unlock(pml);
+
+	return true;
+}
+
+void free_pte(struct mm_struct *mm, unsigned long addr, struct mmu_gather *tlb,
+	      pmd_t pmdval)
+{
+	pte_free_tlb(tlb, pmd_pgtable(pmdval), addr);
+	mm_dec_nr_ptes(mm);
+}
+
+void try_to_free_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr,
+		     struct mmu_gather *tlb)
+{
+	pmd_t pmdval;
+	spinlock_t *pml, *ptl;
+	pte_t *start_pte, *pte;
+	int i;
+
+	pml = pmd_lock(mm, pmd);
+	start_pte = pte_offset_map_rw_nolock(mm, pmd, addr, &pmdval, &ptl);
+	if (!start_pte)
+		goto out_ptl;
+	if (ptl != pml)
+		spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
+
+	/* Check if it is empty PTE page */
+	for (i = 0, pte = start_pte; i < PTRS_PER_PTE; i++, pte++) {
+		if (!pte_none(ptep_get(pte)))
+			goto out_ptl;
+	}
+	pte_unmap(start_pte);
+
+	pmd_clear(pmd);
+
+	if (ptl != pml)
+		spin_unlock(ptl);
+	spin_unlock(pml);
+
+	free_pte(mm, addr, tlb, pmdval);
+
+	return;
+out_ptl:
+	if (start_pte)
+		pte_unmap_unlock(start_pte, ptl);
+	if (ptl != pml)
+		spin_unlock(pml);
+}
-- 
2.20.1
From: Qi Zheng
To: david@redhat.com, jannh@google.com, hughd@google.com,
    willy@infradead.org, muchun.song@linux.dev, vbabka@kernel.org,
    akpm@linux-foundation.org, peterx@redhat.com
Cc: mgorman@suse.de, catalin.marinas@arm.com, will@kernel.org,
    dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
    x86@kernel.org, lorenzo.stoakes@oracle.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, zokeefe@google.com, rientjes@google.com,
    Qi Zheng
Subject: [PATCH v3 8/9] x86: mm: free page table pages by RCU instead of semi RCU
Date: Thu, 14 Nov 2024 14:59:59 +0800

Now, if CONFIG_MMU_GATHER_RCU_TABLE_FREE is selected, page table pages
are freed by semi RCU, that is:

 - batch table freeing: asynchronous free by RCU
 - single table freeing: IPI + synchronous free

In this way, the page table can be traversed locklessly by disabling
IRQs in paths such as fast GUP. But this is not enough to free the empty
PTE page table pages in paths other than the munmap and exit_mmap paths,
because an IPI cannot be synchronized with rcu_read_lock() in
pte_offset_map{_lock}().

In preparation for supporting empty PTE page table page reclamation,
let single-table freeing also go through RCU like batch table freeing.
Then we can also use pte_offset_map() etc. to prevent a PTE page from
being freed.

Like pte_free_defer(), we can also safely use ptdesc->pt_rcu_head to
free the page table pages:

 - The pt_rcu_head is unioned with pt_list and pmd_huge_pte.

 - For pt_list, it is used to manage the PGD page in x86.
   Fortunately, tlb_remove_table() will not be used to free PGD pages,
   so it is safe to use pt_rcu_head.

 - For pmd_huge_pte, it is used for THPs, so it is safe.

After applying this patch, if CONFIG_PT_RECLAIM is enabled, the call
chain of free_pte() is as follows:

free_pte
  pte_free_tlb
    __pte_free_tlb
      ___pte_free_tlb
        paravirt_tlb_remove_table
          tlb_remove_table [!CONFIG_PARAVIRT, Xen PV, Hyper-V, KVM]
            [no-free-memory slowpath:]
              tlb_table_invalidate
              tlb_remove_table_one
                __tlb_remove_table_one [frees via RCU]
            [fastpath:]
              tlb_table_flush
                tlb_remove_table_free [frees via RCU]
          native_tlb_remove_table [CONFIG_PARAVIRT on native]
            tlb_remove_table [see above]

Signed-off-by: Qi Zheng
Cc: x86@kernel.org
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Peter Zijlstra
---
 arch/x86/include/asm/tlb.h | 19 +++++++++++++++++++
 arch/x86/kernel/paravirt.c |  7 +++++++
 arch/x86/mm/pgtable.c      | 10 +++++++++-
 include/linux/mm_types.h   |  4 +++-
 mm/mmu_gather.c            |  9 ++++++++-
 5 files changed, 46 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
index 580636cdc257b..d134ecf1ada06 100644
--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -34,4 +34,23 @@ static inline void __tlb_remove_table(void *table)
 	free_page_and_swap_cache(table);
 }
 
+#ifdef CONFIG_PT_RECLAIM
+static inline void __tlb_remove_table_one_rcu(struct rcu_head *head)
+{
+	struct page *page;
+
+	page = container_of(head, struct page, rcu_head);
+	put_page(page);
+}
+
+static inline void __tlb_remove_table_one(void *table)
+{
+	struct page *page;
+
+	page = table;
+	call_rcu(&page->rcu_head, __tlb_remove_table_one_rcu);
+}
+#define __tlb_remove_table_one __tlb_remove_table_one
+#endif /* CONFIG_PT_RECLAIM */
+
 #endif /* _ASM_X86_TLB_H */
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index fec3815335558..89688921ea62e 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -59,10 +59,17 @@ void __init native_pv_lock_init(void)
 		static_branch_enable(&virt_spin_lock_key);
 }
 
+#ifndef CONFIG_PT_RECLAIM
 static void native_tlb_remove_table(struct mmu_gather *tlb, void *table)
 {
 	tlb_remove_page(tlb, table);
 }
+#else
+static void native_tlb_remove_table(struct mmu_gather *tlb, void *table)
+{
+	tlb_remove_table(tlb, table);
+}
+#endif
 
 struct static_key paravirt_steal_enabled;
 struct static_key paravirt_steal_rq_enabled;
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 5745a354a241c..69a357b15974a 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -19,12 +19,20 @@ EXPORT_SYMBOL(physical_mask);
 #endif
 
 #ifndef CONFIG_PARAVIRT
+#ifndef CONFIG_PT_RECLAIM
 static inline
 void paravirt_tlb_remove_table(struct mmu_gather *tlb, void *table)
 {
 	tlb_remove_page(tlb, table);
 }
-#endif
+#else
+static inline
+void paravirt_tlb_remove_table(struct mmu_gather *tlb, void *table)
+{
+	tlb_remove_table(tlb, table);
+}
+#endif /* !CONFIG_PT_RECLAIM */
+#endif /* !CONFIG_PARAVIRT */
 
 gfp_t __userpte_alloc_gfp = GFP_PGTABLE_USER | PGTABLE_HIGHMEM;
 
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 97e2f4fe1d6c4..266f53b2bb497 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -438,7 +438,9 @@ FOLIO_MATCH(compound_head, _head_2a);
  * struct ptdesc - Memory descriptor for page tables.
  * @__page_flags: Same as page flags. Powerpc only.
  * @pt_rcu_head: For freeing page table pages.
- * @pt_list: List of used page tables. Used for s390 and x86.
+ * @pt_list: List of used page tables. Used for s390 gmap shadow pages
+ *           (which are not linked into the user page tables) and x86
+ *           pgds.
  * @_pt_pad_1: Padding that aliases with page's compound head.
  * @pmd_huge_pte: Protected by ptdesc->ptl, used for THPs.
  * @__page_mapping: Aliases with page->mapping. Unused for page tables.
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 99b3e9408aa0f..1e21022bcf339 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -311,11 +311,18 @@ static inline void tlb_table_invalidate(struct mmu_gather *tlb)
 	}
 }
 
-static void tlb_remove_table_one(void *table)
+#ifndef __tlb_remove_table_one
+static inline void __tlb_remove_table_one(void *table)
 {
 	tlb_remove_table_sync_one();
 	__tlb_remove_table(table);
 }
+#endif
+
+static void tlb_remove_table_one(void *table)
+{
+	__tlb_remove_table_one(table);
+}
 
 static void tlb_table_flush(struct mmu_gather *tlb)
 {
-- 
2.20.1

From nobody Sat Nov 23 05:39:08 2024
From: Qi Zheng
To: david@redhat.com, jannh@google.com, hughd@google.com,
    willy@infradead.org, muchun.song@linux.dev, vbabka@kernel.org,
    akpm@linux-foundation.org, peterx@redhat.com
Cc: mgorman@suse.de, catalin.marinas@arm.com, will@kernel.org,
    dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
    x86@kernel.org, lorenzo.stoakes@oracle.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, zokeefe@google.com, rientjes@google.com,
    Qi Zheng
Subject: [PATCH v3 9/9] x86: select ARCH_SUPPORTS_PT_RECLAIM if X86_64
Date: Thu, 14 Nov 2024 15:00:00 +0800
Message-Id: <69c25b17661499afe4b35f1d30b26dd346a649ec.1731566457.git.zhengqi.arch@bytedance.com>

Now x86 fully supports the CONFIG_PT_RECLAIM feature. Since reclaiming
PTE pages is profitable only on 64-bit systems, select
ARCH_SUPPORTS_PT_RECLAIM if X86_64.
Signed-off-by: Qi Zheng
Cc: x86@kernel.org
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Peter Zijlstra
---
 arch/x86/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index e74a611bff4a6..54526ce2b1d90 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -322,6 +322,7 @@ config X86
 	select FUNCTION_ALIGNMENT_4B
 	imply IMA_SECURE_AND_OR_TRUSTED_BOOT	if EFI
 	select HAVE_DYNAMIC_FTRACE_NO_PATCHABLE
+	select ARCH_SUPPORTS_PT_RECLAIM		if X86_64
 
 config INSTRUCTION_DECODER
 	def_bool y
-- 
2.20.1